\section{Introduction} \label{sec:1} Finding global models for rotating objects in general relativity has proven to be extremely difficult, even for axially symmetric configurations in equilibrium. So far there are no known explicit global models except for spherical stars (hence non-rotating) and the Neugebauer and Meinel disc of dust \cite{NeugebauerMeinel92}, where the matter source is encoded as suitable jumps in the first derivatives of the metric, which is otherwise vacuum everywhere. Since a proper understanding of rotating objects in equilibrium within the context of general relativity is obviously fundamental for many astrophysical situations, alternative methods have been developed over the years. The two most important ones are, without doubt, the use of numerical methods and perturbation theory. The former can handle fully relativistic situations with intense gravitational fields and high velocities and has produced a large variety of very interesting results. However, the fact that closed expressions are never found in this setting leaves plenty of room for other approaches, such as perturbation theory. Its field of applicability is restricted to slowly rotating stars (so that the perturbation parameter can be taken to be the maximum angular velocity of the star, for instance), and here too a wealth of results has been obtained (see e.g. \cite{LR} and references therein). This short note is a summary of a longer paper \cite{MMV} by the same authors, where many more results and their proofs can be found. The object of this contribution is to explain in a few words the main motivation of this work, present the main result and give some indication of how it can be proven by analyzing the simplest possible situation. The aim of this work is to study perturbation theory of rotating stars from a different perspective than is normally taken. We want to consider slowly rotating bodies with an arbitrary matter content, and we wish to concentrate on the effect that the rotation has on the vacuum exterior region. Thus, our background spacetime is composed of a static object with some non-vanishing energy-momentum tensor (typically a perfect fluid, but not necessarily). Although we have in mind the case when the static body is spherically symmetric, which is the physically most relevant one, all our results hold also for axially symmetric backgrounds. We assume, as is usual in this context, that the object has a sharp boundary and that there is no matter outside it, so that the metric exterior to the body is vacuum. The metrics inside and outside the body must satisfy certain junction conditions (see \cite{MarsSenovilla93} for a detailed account) in order to produce a well-defined spacetime. Given this background, we want to perturb it {\it arbitrarily} in the interior, except for the restriction that the object remains in equilibrium and axially symmetric (i.e. the perturbed metric is stationary and axially symmetric). Furthermore, we want to work up to second order in perturbation theory.
The necessity of going to second order comes from the fundamental results obtained in the seminal paper by Hartle \cite{Hartle67}, where rigidly rotating perturbations of spherically symmetric and static perfect fluids were analyzed and where it was found that to first order rotation only affects the $t\phi$ crossed term in the metric (more technically, it only produces so-called axial or odd perturbations) without modifying, for instance, the spherical shape of the object, while second order perturbations already affect the shape of the body and the rest of the metric components. In Hartle's paper some heuristic arguments were used at some places, especially regarding the matching procedure of the perturbed metrics. One of the aims of our paper is to set up a proper theoretical framework so that, in a future work, we can make rigorous all the arguments used by Hartle. This is important insofar as Hartle's paper has served as the basis of many developments in perturbation theory of rotating objects over the years. \section{Brief summary of stationary and axially symmetric rotating bodies} A spacetime describing a stationary and axially symmetric rotating body with a boundary surface is composed of two regions: one inside the body, solving the Einstein field equations with matter, and another outside the body, solving the vacuum field equations and being asymptotically flat (because we are dealing with an isolated body). Furthermore, the two metrics must fulfill the so-called matching conditions on the boundary of the body, i.e. on a timelike, stationary and axially symmetric hypersurface $\Sigma$. The vacuum field equations outside the body can be written in terms of a coupled system of non-linear elliptic PDEs for two scalars $U$ and $\Omega$, called the ``norm'' and the ``twist'' potentials respectively, which are defined in terms of the unique stationary Killing vector $\vec{\xi}$ which is unit at infinity. The twist potential is defined only up to an arbitrary additive constant, which is fixed by demanding $\Omega \rightarrow 0$ at infinity. With this choice, $\Omega$ vanishes if and only if the spacetime is static, hence the twist potential determines whether the body is rotating or not. In the vacuum region, there exist local coordinates $\{t, \phi, \rho, z \}$ called Weyl coordinates which are adapted to the stationary and axial Killing vectors and which locate the axis of symmetry at $\rho=0$. This coordinate system is defined uniquely except for an additive constant in $z$, which can be fixed whenever the spacetime has an equatorial plane by choosing $z=0$ on the equator. We will assume that the Weyl coordinates exist globally in the vacuum exterior region (see \cite{Raul05} for global existence results of the Weyl coordinate system). In this setting $U$ and $\Omega$ are functions of $\rho$ and $z$ alone and satisfy the so-called Ernst equations, which read \begin{equation} \label{eq:ernst} \begin{array}[c]{l} \displaystyle{\triangle_{\gamma} U+\frac{1}{2}e^{-4U} \left(\mathrm{d} \Omega,\mathrm{d} \Omega\right)_\gamma=0},\\ \displaystyle{\triangle_{\gamma}\Omega-4\left(\mathrm{d} \Omega,\mathrm{d} U\right)_\gamma =0}, \end{array} \end{equation} where $\triangle_{\gamma}$ is the Laplacian of the flat metric $\gamma = d\rho^2 + dz^2 + \rho^2 d\phi^2$ and $( \, , \,)_{\gamma}$ denotes the scalar product with respect to it. The asymptotic flatness condition demands $U = - M /r + O (r^{-2})$, $\Omega = -2 z J /r^3 + O(r^{-3})$, for some constants $M,J$ and where $r^2 = \rho^2 + z^2$.
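Note, for orientation, that in the static limit $\Omega \equiv 0$ the system (\ref{eq:ernst}) reduces to the single flat Laplace equation $\triangle_{\gamma} U = 0$, recovering the classical Weyl family of static axially symmetric vacuum metrics. The simplest decaying solution, the monopole potential $U=-M/r$ with $\Omega=0$ (the Curzon--Chazy solution), already displays the asymptotic behaviour above with $J=0$.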
The boundary of the rotating body as seen from the exterior region will be denoted by $\Sigma^E$ and will be defined by two functions $\rho = \rho (\mu)$, $z = z (\mu)$. If the metric inside the body is assumed to be known, then the matching conditions can be seen \cite{MarsSenovilla98} (see \cite{conv} for the complete generalisation) to be equivalent to the following data: (i) the explicit form of the matching hypersurface in Weyl coordinates, i.e. the functions $\rho(\mu), z(\mu)$, and (ii) the values of $U$ and $\Omega$ (the latter except for an additive constant) {\it together with their normal derivatives} on $\Sigma^E$. Notice that the Ernst equations are elliptic, which means that appropriate boundary data consist of fixing the value of the functions {\it or} their normal derivatives on the boundary (or perhaps a combination of both), but never the full values of the functions {\it and} their normal derivatives. In more technical terms, the boundary conditions are of Cauchy type, which is unsuitable for elliptic problems. This property reflects the fact that, given an arbitrary metric describing a rotating body in equilibrium, in general there will {\it not} exist a vacuum solution matching the given metric and extending to infinity in an asymptotically flat manner. The problem of finding a global model of a rotating object is truly global in nature and cannot be broken into an interior and an exterior problem without paying some price. In our case this translates into the necessity of dealing with an overdetermined boundary value problem for an elliptic system. Thus, one has to worry about existence of the solutions, i.e. about what restrictions the boundary data (and hence the interior metric) must satisfy so that they truly represent an isolated rotating body. Let us stress here that uniqueness is a much simpler problem, which can be solved in full generality \cite{MarsSenovilla98}. With regard to existence, there are results involving an infinite set of compatibility conditions on the boundary data which are necessary for existence to hold \cite{MarsERE}. Whether they are also sufficient is still an open problem. \section{First and second order perturbations of the exterior region} After this brief reminder of the non-linear case, let us move on to perturbation theory. Calling $\epsilon$ the perturbation parameter, we consider a one-parameter family of spacetimes depending on $\epsilon$. Since we take every element in the family to be a stationary and axially symmetric spacetime, vacuum outside some spatially compact region (defining the rotating body) and asymptotically flat, we have two families of functions $U_\epsilon (\rho,z)$, $\Omega_\epsilon (\rho,z)$, which for all $\epsilon$ satisfy the Ernst equations (\ref{eq:ernst}). Writing $U_0 \equiv U_\epsilon |_{\epsilon=0}$ for the background potential, defining $U'_0 \equiv \partial_{\epsilon} U_\epsilon |_{\epsilon=0}$, $U''_0 \equiv \partial_{\epsilon} \partial_{\epsilon} U_\epsilon |_{\epsilon=0}$ and similarly for $\Omega'_0$ and $\Omega''_0$, and using that the background spacetime is static, i.e. that $\Omega_\epsilon |_{\epsilon=0} =0$, we find the first and second order perturbed field equations \begin{equation} \left . \begin{array}[c]{l} \displaystyle{\triangle_{\gamma}U'_0 =0}\\ \displaystyle{\triangle_{\gamma}\Omega'_0- 4\left(\mathrm{d} \Omega'_0,\mathrm{d} U_0\right)_\gamma=0} \end{array} \right \}, \quad \left .
\begin{array}[c]{l} \displaystyle{\triangle_{\gamma}U''_0+e^{-4U_0} \left(\mathrm{d} \Omega'_0,\mathrm{d} \Omega'_0\right)_\gamma=0}\\ \displaystyle{\triangle_{\gamma}\Omega''_0-8 \left(\mathrm{d} \Omega'_0,\mathrm{d} U'_0\right)_\gamma -4\left(\mathrm{d} \Omega''_0,\mathrm{d} U_0\right)_\gamma=0} \end{array} \right \}, \label{eq:ernstper} \end{equation} simply by taking first and second partial derivatives of (\ref{eq:ernst}) with respect to $\epsilon$ and evaluating the result at $\epsilon=0$. Notice that we perform the perturbation (i.e. the derivative with respect to $\epsilon$) by considering $\rho,z$ as independent of $\epsilon$. This entails a suitable identification between the spacetimes corresponding to different values of $\epsilon$. Such an identification must always be made in order to define metric perturbations. However, the identification is not unique, as we could perform an arbitrary diffeomorphism on each element of the family $V_\epsilon$ of spacetimes, and the diffeomorphisms can obviously depend on $\epsilon$. This freedom in the identification is precisely the gauge freedom inherent to perturbation theory in general relativity (and indeed in any geometric theory). Thus, when we take perturbations by performing derivatives with respect to $\epsilon$ at fixed $\{\rho,z\}$ we are effectively fixing the gauge. We should now determine the domain where the equations (\ref{eq:ernstper}) hold and what kind of boundary data need to be fulfilled. Given the interior family of metrics, the matching conditions fix (for every $\epsilon$) two functions $\rho_{\epsilon}(\mu)$ and $z_{\epsilon}(\mu)$ defining the matching hypersurface (i.e. the boundary of the body) for every $\epsilon$. These can also be seen as two-surfaces in Euclidean 3-space $\mathbb{E}^3$ with the flat metric $\gamma$ written in cylindrical coordinates. If $\epsilon$ is close enough to $0$ we have a family of axially symmetric surfaces $\Sigma_\epsilon$ in $\mathbb{E}^3$, all of them diffeomorphic to the surface corresponding to the static background, which will be denoted by $\Sigma_0$. We will furthermore assume that the background surface is diffeomorphic to a sphere, which is physically reasonable (and certainly true whenever the static background is spherically symmetric). Let us also choose the range of variation of $\mu$ so that there exist two fixed values $\mu_S$ and $\mu_N$ such that $\rho_{\epsilon} (\mu_S)= \rho_{\epsilon} (\mu_N)=0$ for all $\epsilon$, i.e. so that the intersection points of the surfaces with the axis of symmetry occur at the same values of $\mu$. As explained before, the matching conditions together with the interior metrics provide us with four functions, which we denote by $G_{\epsilon}$, $L_{\epsilon}$, $Y_{\epsilon}$ and $W_{\epsilon}$, all of them defined on $\Sigma_\epsilon$ and such that the boundary values for the exterior problems are $U_\epsilon |_{\Sigma_\epsilon} = G_{\epsilon}$, $\vec{n}_{\epsilon} (U_\epsilon) |_{\Sigma_\epsilon} = L_{\epsilon}$, $\Omega_\epsilon |_{\Sigma_\epsilon} = Y_{\epsilon}$, $\vec{n}_{\epsilon} (\Omega_\epsilon) |_{\Sigma_\epsilon} = W_{\epsilon}$, where $\vec{n}_{\epsilon} \equiv -\dot z_\epsilon\partial_\rho+\dot\rho_\epsilon \partial_z|_{\Sigma_\epsilon}$ is a vector field orthogonal to $\Sigma_\epsilon$ (a dot denotes a derivative with respect to $\mu$).
Notice that in these expressions the right-hand sides are functions of $\mu$ and $\epsilon$ alone, while the left-hand sides are functions on $\mathbb{E}^3$ evaluated on the (moving) surface $\Sigma_\epsilon$. We can now take derivatives of these expressions with respect to $\epsilon$ (at constant $\mu$) in order to obtain the boundary values for the perturbed functions $U'_0$, $\Omega'_0$, $U''_0$ and $\Omega''_0$ and their normal derivatives on the unperturbed surface $\Sigma_0$. In particular, it follows that the domain where the perturbed Ernst equations hold coincides exactly with the domain corresponding to the static background. Moreover, the boundary data of $U'_0$, $U''_0$, $\Omega'_0$ and $\Omega''_0$ on the unperturbed hypersurface $\Sigma_0$ can be explicitly calculated in terms of the interior background metric and the interior first and second order perturbation metrics alone. The resulting expressions are long and will not be given here (see \cite{MMV} for a detailed discussion and explicit expressions). Having discussed briefly the perturbed exterior problems and their boundary conditions, we can now address the issue of existence of the exterior solution fulfilling these boundary data. \section{Compatibility conditions} \label{Sectcompatibility} In the previous section we have seen that, as in the non-linear case, the perturbed exterior problem involves an elliptic system of equations with Cauchy boundary data. Again this is an overdetermined system, and we should not expect asymptotically flat solutions to exist for arbitrary Cauchy data (i.e. for arbitrary interior perturbations). Thus we need to determine the set of necessary and sufficient conditions that the boundary data must satisfy for solutions to exist. Asymptotic flatness demands $\lim_{r \rightarrow \infty} U'_0 = \lim_{r\rightarrow \infty} \Omega'_0 = \lim_{r\rightarrow \infty} U''_0 = \lim_{r \rightarrow \infty} \Omega''_0 = 0$. The perturbed Ernst equations can be collectively written as \begin{equation} \triangle_{\hat{\gamma}} u = j, \label{master} \end{equation} where $u=u(\rho,z)$ stands for $U_0$, $U_0'$, etc., and $j=j(\rho,z)$ represents the inhomogeneous terms in the second order perturbation equations. The metric $\hat{\gamma}$ corresponds to either $\gamma$, for the $U$-equations, or $\tilde{\gamma} \equiv e^{-8 U_0} \gamma$, for the $\Omega$-equations; indeed, since for a conformal rescaling $\tilde{\gamma}=e^{2\chi}\gamma$ in three dimensions one has $\triangle_{\tilde{\gamma}} u = e^{-2\chi}\left(\triangle_{\gamma} u + (\mathrm{d}\chi,\mathrm{d}u)_\gamma\right)$, the choice $\chi=-4U_0$ absorbs the first order terms of the $\Omega$-equations into the Laplacian. The domain $(D_0,\hat{\gamma})$ is clearly unbounded because $\hat{\gamma}$ is an asymptotically flat metric. Thus, the compatibility conditions for the boundary values of $U'_0$, $U''_0$, $\Omega'_0$, $\Omega''_0$ can be studied as particular cases of the compatibility conditions of the Cauchy problem for the general inhomogeneous Poisson equation (\ref{master}) defined on an unbounded asymptotically flat region $(D_0,\hat{\gamma})$ with boundary $\Sigma_0= \{ \rho = \rho_0 (\mu), z = z_0 (\mu), \phi = \varphi \}$, corresponding to the boundary of the static background metric. Furthermore, we will assume that $j$ tends to zero at infinity at least like $1/r^4$ (which follows in our case from asymptotic flatness). In order to give a flavor of why Theorem \ref{casebycase} holds, let us concentrate on the simplest possible case, i.e. when $\hat{\gamma} = \gamma$ and the source term $j$ vanishes.
A simple consequence of Gauss' theorem is the so-called Green identity, which reads: for any compact domain $K \subset D_0$ and any function $\psi$ (both suitably differentiable) \begin{equation} \int_{K}\left(\psi \triangle_{\gamma} u- u\triangle_{\gamma} \psi\right) \eta_{\gamma}= \int_{\partial K}\left [ \frac{}{} \psi \vec{n}_{\gamma} ( u ) - u \vec{n}_{\gamma} ( \psi ) \right] \mathrm{d} S_{\gamma}, \label{Green} \end{equation} where $\vec{n}_{\gamma}$ is a unit (with respect to $\gamma$) normal vector pointing outside $K$, $\eta_{\gamma}$ is the volume form of $(D_0,\gamma)$ and $\mathrm{d} S_{\gamma}$ is the induced surface element of $\partial K$. We intend to apply this identity to a function $\psi$ that (i) solves the Laplace equation $\triangle_{\gamma} \psi = 0$ on $D_0$, (ii) admits a $C^1$ extension to $\partial D_0 \equiv \Sigma_0$ and (iii) decays at infinity in such a way that $r \psi $ is a bounded function on $D_0$. A function $\psi$ satisfying these three properties is called a {\it regular $\gamma$-harmonic function} on $D_0$ (if a function satisfies just (ii) and (iii) and is $C^2$ on $D_0$ we will call it {\it regular}). For such a function we may take $K = D_0$ in (\ref{Green}) because the integral at the boundary ``at infinity'' can easily be shown to vanish. Thus, denoting the boundary data of $u$ on $\Sigma_0$ by $u |_{\Sigma_0} \equiv f_0$ and $\vec{n}_{\gamma} (u) |_{\Sigma_0} \equiv f_1$, the expression above becomes \begin{equation} \int_{\Sigma_0}\left [ \frac{}{} \psi f_1 - f_0 \vec{n}_{\gamma} ( \psi ) \right] \mathrm{d} S_{\gamma} = 0. \label{GreenBoun} \end{equation} These are obviously necessary conditions that the overdetermined boundary data must satisfy in order for a regular solution $u$ of the Laplace equation to exist. For instance, if $\Sigma_0$ is the unit sphere centred at the origin and $\psi = 1/r$, then $\psi|_{\Sigma_0} = \vec{n}_{\gamma}(\psi)|_{\Sigma_0} = 1$ (recall that $\vec{n}_{\gamma}$ points out of $D_0$, i.e. towards the origin), so (\ref{GreenBoun}) reduces to $\int_{\Sigma_0} f_1 \,\mathrm{d} S_{\gamma} = \int_{\Sigma_0} f_0 \,\mathrm{d} S_{\gamma}$. It is natural to ask whether such conditions are also sufficient. More precisely, take two arbitrary continuous functions $f_0$ and $f_1$ on $\Sigma_0$ which satisfy (\ref{GreenBoun}) for {\it any} choice of regular $\gamma$-harmonic function $\psi$. We want to check whether there always exists a function $u$ satisfying the Laplace equation $\triangle_{\gamma} u = 0$, with $r u$ bounded at infinity and such that the boundary conditions $u |_{\Sigma_0} = f_0$, $\vec{n}_{\gamma} (u) |_{\Sigma_0} = f_1$ are satisfied. The answer is yes, as we show next. Consider the Dirichlet problem $\triangle_{\gamma} u = 0 $ with $u |_{\Sigma_0} = f_0$. Standard elliptic theory tells us that this problem always admits a unique solution $u$ which tends to zero at infinity. Let us define $\tilde{f}_1$ on $\Sigma_0$ as $\tilde{f}_1 \equiv \vec{n}_{\gamma} (u) |_{\Sigma_0}$. Since $u$ solves the Laplace equation, expression (\ref{GreenBoun}) must hold with $f_1$ replaced by $\tilde{f}_1$. Furthermore, our assumption is that $f_0$ and $f_1$ satisfy (\ref{GreenBoun}) for any regular $\gamma$-harmonic function $\psi$. Subtracting both expressions we get $\int_{\Sigma_0} \psi ( f_1 - \tilde{f}_1 ) \mathrm{d} S_{\gamma} =0$. However, since the regular $\gamma$-harmonic function $\psi$ is arbitrary and the Dirichlet problem for the Laplace equation always admits a solution, $\psi |_{\Sigma_0}$ is an arbitrary continuous function. This readily implies $f_1 = \tilde{f}_1$ and hence the compatibility of the overdetermined boundary data. However, condition (\ref{GreenBoun}) has a serious practical disadvantage, namely that it must be checked for an {\it arbitrary} decaying solution $\psi$ of the Laplace equation.
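Note that the family of test functions is genuinely infinite dimensional: already the axially symmetric exterior harmonics $\psi_l = r^{-(l+1)} P_l(\cos\theta)$, $l=0,1,2,\dots$, where $P_l$ are the Legendre polynomials and $\cos\theta = z/r$, give an infinite set of independent conditions. In fact, for $|y|<r$ the function $\psi_y$ introduced in (\ref{psiy}) below is precisely the generating function of this family, $\psi_y = \sum_{l\ge 0} y^l \, r^{-(l+1)} P_l(\cos\theta)$, which is one way of understanding why a one-parameter family of test functions can encode all the compatibility conditions in the axially symmetric case.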
This makes it of little practical use. Our aim is to reduce the set of solutions $\psi$ that must be checked in (\ref{GreenBoun}) while still implying compatibility of $f_0$ and $f_1$. Here is where axial symmetry plays an essential role: it allows us to reduce the conditions to be checked to just a one-parameter family. Without going into the details, let us just state that, in the axially symmetric case, conditions (\ref{GreenBoun}) restricted to the set of functions \begin{equation} \psi_y(\rho,z) = 1/\sqrt{\rho^2 + (z -y )^2}, \label{psiy} \end{equation} where $y$ is a constant such that the point $\{\rho=0, z=y, \phi \}$ lies {\it outside} the domain $D_0$, are already necessary and sufficient for the existence of the solution $u$. Thus the number of conditions reduces dramatically and becomes manageable. The main result we present in this contribution is the generalization of this result to the four cases corresponding to the perturbed Ernst equations. The integrals to be performed are, of course, more complicated in general, but the idea is still the same. In order to write down our main theorem we need to introduce some notation. First of all, define $z_S < z_N$ as the values of $z$ at the intersection of $\Sigma_0$ with the axis of symmetry (i.e. the south and north poles respectively) and restrict $y$ to the interval $(z_S,z_N)$. We define an angle $\Upsilon_y\in [0, \pi ]$ by $\cos \Upsilon_y(\rho,z) \equiv (z-y)/\sqrt{\rho^2+(z-y)^2}$, $\sin \Upsilon_y(\rho,z) \equiv \rho/\sqrt{\rho^2+(z-y)^2}$, and three functions $W_{y}$, $Q_{+}$ and $Q_{-}$ as the unique solutions, vanishing at infinity, of the following compatible PDEs \begin{eqnarray*} \mathrm{d} W_{y} & = & \cos \Upsilon_y\,\mathrm{d} U_0 + \sin \Upsilon_y\,\star \mathrm{d} U_0, \nonumber \\ \mathrm{d} Q_{\pm } & = & e^{-2 U_0 \pm 2 W_{y} } \left [ \left ( \mp 1 - \cos \Upsilon_y\right ) \mathrm{d} \Omega'_0 - \sin \Upsilon_y\star \mathrm{d} \Omega'_0 \right ]. \end{eqnarray*} Here $\star$ denotes the Hodge dual with respect to the metric $dz^2+d\rho^2$. Then, we have \begin{theorem} \label{casebycase} Let $f_0$, $f_1$ be continuous axially symmetric functions on a $C^1$, simply connected, axially symmetric surface $\Sigma_0$ of $\mathbb{E}^3$. Let this surface be defined in cylindrical coordinates by $\{ \rho = \rho_0 (\mu), z = z_0 (\mu), \phi = \varphi \}$, where $\mu\in [\mu_S,\mu_N ]$ and $\mu_S < \mu_N$ are the only solutions of $\rho_0(\mu)=0$. Call $z_S \equiv z_0 (\mu_S)$ and $z_N \equiv z_0(\mu_N)$ and assume $z_S < z_N$ (i.e. that these values correspond to the ``south'' and ``north'' poles of the surface, respectively). Denote by $D_0$ the exterior region of this surface and by $\vec{n} = - \dot{z}_0 \partial_{\rho} + \dot{\rho}_0 \partial_z$ a normal vector to it.
Then, \noindent (i) the Cauchy boundary value problem $ \triangle_{\gamma} U'_0 =0, U'_0 |_{\Sigma_0} = f_0, \vec{n} \left (U'_0 \right ) |_{\Sigma_0} =f_1$ admits a regular solution on $D_0$ if and only if $$ \int_{\mu_S}^{\mu_N}\left[\psi_{y} \, f_1 - f_0 \,\vec n(\psi_{y}) \right]\rho_0 \mathrm{d} \mu = 0, \hspace{2cm} \forall y \in (z_S,z_N), $$ (ii) the Cauchy boundary value problem $\triangle_{\gamma}\Omega'_0- 4\left(\mathrm{d} \Omega'_0,\mathrm{d} U_0\right)_\gamma=0, \Omega'_0|_{\Sigma_0} = f_0, \vec{n} \left ( \Omega'_0 \right ) |_{\Sigma_0} = f_1$ admits a regular solution on $D_0$ if and only if $$ \int_{\mu_S}^{\mu_N}\left[\Psi_{y} \, f_1 - f_0 \,\vec n(\Psi_{y}) \right]\rho_0 e^{-4 U_0 |_{\Sigma_0}} \mathrm{d} \mu = 0, \hspace{2cm} \forall y \in (z_S,z_N), $$ (iii) the Cauchy boundary value problem $\triangle_{\gamma} U''_0+e^{-4U_0} \left(\mathrm{d} \Omega'_0,\mathrm{d} \Omega'_0\right)_\gamma=0, U''_0|_{\Sigma_0} = f_0, $ $\vec{n} \left ( U''_0 \right ) |_{\Sigma_0} = f_1$ admits a regular solution on $D_0$ if and only if $$ \int_{\mu_S}^{\mu_N}\left[\psi_{y} \, f_1 - f_0 \,\vec n(\psi_{y}) - \mathsf{T}_1 \left ( \vec{n} \right ) \right]\rho_0 \mathrm{d} \mu = 0, \hspace{2cm} \forall y \in (z_S,z_N), $$ and (iv) the Cauchy boundary value problem $\triangle_{\gamma}\Omega''_0-8 \left(\mathrm{d} \Omega'_0,\mathrm{d} U'_0\right)_\gamma -4\left(\mathrm{d} \Omega''_0,\mathrm{d} U_0\right)_\gamma=0, \Omega''_0|_{\Sigma_0} = f_0, \vec{n} \left ( \Omega''_0 \right ) |_{\Sigma_0} = f_1$ admits a regular solution on $D_0$ if and only if $$ \int_{\mu_S}^{\mu_N}\left[ \left ( \Psi_{y} \, f_1 - f_0 \,\vec n(\Psi_{y}) \right ) e^{-4 U_0 |_{\Sigma_0}} - \mathsf{T}_2 \left (\vec{n}\right ) \right]\rho_0 \mathrm{d} \mu = 0, \hspace{2cm} \forall y \in (z_S,z_N), $$ where $\psi_y$ is given in (\ref{psiy}), $\Psi_y \equiv \frac{e^{ 2 U_0 - 2 W_{y}}}{\sqrt{ \rho^2 + (z - y)^2}}$, $\mathsf{T}_1 \equiv \frac{1}{2 \rho} Q_{+} \star \mathrm{d} Q_{-}$, $\mathsf{T}_2 \equiv \frac{8}{\rho} Q_{-} \star \mathrm{d} \left ( W'_{y}+ U'_0 \right)$, and $W'_y$ denotes the first order perturbation of $W_y$. \end{theorem} \section*{Acknowledgements} The authors thank EPSRC for funding project GR/R53685/01. RV thanks the IRCSET postdoctoral fellowship Ref. PD/2002/108.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Dehn function is a geometric invariant of a space (typically, a riemannian manifold or a simplicial complex) which measures the difficulty of filling closed curves with discs. This can be made into a group invariant by defining the Dehn function of a group to be the Dehn function of a space on which the group acts cocompactly and properly discontinuously. The choice of space affects the Dehn function, but its rate of growth depends solely on the group. Indeed, it depends solely on the quasi-isometry class of the group. A small Dehn function can have various geometric implications. In particular, a group with a Dehn function growing slower than $n^2$ must be hyperbolic; indeed, a group is hyperbolic if and only if its Dehn function grows linearly \cite{GroHG}. The connectedness of the asymptotic cone is also related to the growth of the Dehn function; any asymptotic cone of a group with quadratic Dehn function is simply connected, and conversely, if all asymptotic cones of a group are simply connected, its Dehn function is bounded by a polynomial \cite[5.F$''_1$]{GroAII}\cite{DrutuRemp}. One widely-studied family of groups is the set of lattices $\Gamma$ in semisimple Lie groups $G$. The Dehn function of a cocompact lattice is easy to find; such a lattice acts on a non-positively curved symmetric space $X$, and this non-positive curvature gives rise to a linear or quadratic Dehn function, depending on the rank of the space. Non-cocompact lattices have more complicated behavior. The key difference is that if the lattice is not cocompact, it acts cocompactly on a subset of $X$ rather than the whole thing, and this subset has different geometry. In the case that $\Gamma$ has $\ensuremath{\mathbb{Q}}$-rank 1, the Dehn function is almost completely understood, and depends primarily on the $\ensuremath{\mathbb{R}}$-rank of $G$. In this case, $\Gamma$ acts cocompactly on a space consisting of $X$ with infinitely many disjoint horoballs removed. When $G$ has $\ensuremath{\mathbb{R}}$-rank 1, the boundaries of these horoballs correspond to nilpotent groups, and the lattice is hyperbolic relative to these nilpotent groups. The Dehn function of the lattice is thus equal to that of the nilpotent groups, and Gromov showed that unless $X$ is the complex, quaternionic, or Cayley hyperbolic plane, the Dehn function is at most quadratic \cite[5.A$_6$]{GroAII}. If $X$ is the complex or quaternionic hyperbolic plane, the Dehn function is cubic \cite{GroAII,PittetCLB}; if $X$ is the Cayley hyperbolic plane, the precise growth rate is unknown, but is at most cubic. When $G$ has $\ensuremath{\mathbb{R}}$-rank 2 and $\Gamma$ has $\ensuremath{\mathbb{Q}}$-rank 1 or 2, Leuzinger and Pittet \cite{LeuzPitRk2} proved that the Dehn function grows exponentially. As in the $\ensuremath{\mathbb{R}}$-rank 1 case, the proof relies on understanding the subgroups corresponding to the removed horoballs, but in this case the subgroups are solvable and have exponential Dehn function. Finally, when $G$ has $\ensuremath{\mathbb{R}}$-rank 3 or greater and $\Gamma$ has $\ensuremath{\mathbb{Q}}$-rank 1, Drutu \cite{DrutuFilling} has shown that the boundary of a horoball satisfies a quadratic filling inequality and that $\Gamma$ enjoys an ``asymptotically quadratic'' Dehn function, i.e., its Dehn function is bounded by $n^{2+\epsilon}$ for any $\epsilon>0$. When $\Gamma$ has $\ensuremath{\mathbb{Q}}$-rank larger than $1$, the geometry of the space becomes more complicated. 
The main difference is that the removed horoballs are no longer disjoint, so many of the previous arguments fail. In many cases, the best known result is due to Gromov, who sketched a proof that the Dehn function of $\Gamma$ is bounded above by an exponential function \cite[5.A$_7$]{GroAII}. A full proof of this fact was given by Leuzinger \cite{LeuzingerPolyRet}. In this paper, we consider $SL(n;\ensuremath{\mathbb{Z}})$. This is a lattice with $\ensuremath{\mathbb{Q}}$-rank $n-1$ in a group with $\ensuremath{\mathbb{R}}$-rank $n-1$, so when $n$ is small, the methods above apply. When $n=2$, the group $SL(2;\ensuremath{\mathbb{Z}})$ is virtually free, and thus hyperbolic. As a consequence, its Dehn function is linear. When $n=3$, the result of Leuzinger and Pittet mentioned above implies that the Dehn function of $SL(3;\ensuremath{\mathbb{Z}})$ grows exponentially; this was first proved by Epstein and Thurston \cite{ECHLPT}. This exponential growth has applications to finiteness properties of arithmetic groups as well; Bux and Wortman \cite{BuxWortman} describe a way that the constructions in \cite{ECHLPT} lead to a proof that $SL(3;\ensuremath{\mathbb{F}}_q[t])$ is not finitely presented, then generalize to a large class of lattices in reductive groups over function fields. Much less is known about the Dehn function of lattices in $SL(n;\ensuremath{\mathbb{R}})$ when $n\ge 4$. By the results of Gromov and Leuzinger above, the Dehn function of any such lattice is bounded by an exponential function, but Thurston (see \cite{GerstenSurv}) conjectured that \begin{conj} When $n\ge 4$, $SL(n;\ensuremath{\mathbb{Z}})$ has a quadratic Dehn function. \end{conj} This is a special case of a conjecture of Gromov that lattices in symmetric spaces with $\ensuremath{\mathbb{R}}$-rank at least 3 have polynomial Dehn functions. In this paper, we will prove Thurston's conjecture when $n\ge 5$: \begin{thm}\label{thm:mainThm} When $n\ge 5$, $SL(n;\ensuremath{\mathbb{Z}})$ has a quadratic Dehn function. \end{thm} The basic idea of the proof of Theorem~\ref{thm:mainThm} is to use fillings of curves in the symmetric space $SL(p;\ensuremath{\mathbb{R}})/SO(p)$ as templates for fillings of words in $SL(p;\ensuremath{\mathbb{Z}})$. Fillings which lie in the thick part of $SL(p;\ensuremath{\mathbb{R}})/SO(p)$ correspond directly to fillings in $SL(p;\ensuremath{\mathbb{Z}})$, but in general, an optimal filling of a curve in the thick part may have to go deep into the cusp of $SL(p;\ensuremath{\mathbb{Z}})\backslash SL(p;\ensuremath{\mathbb{R}})/SO(p)$. Regions of this cusp correspond to parabolic subgroups of $SL(p;\ensuremath{\mathbb{Z}})$, so our first step is to develop geometric techniques to cut the filling into pieces which each lie in one such region. This reduces the problem of filling the original word to the problem of filling words in parabolic subgroups of $SL(p;\ensuremath{\mathbb{Z}})$. We fill these words using combinatorial techniques, especially the fact that $SL(p;\ensuremath{\mathbb{Z}})$ contains many overlapping solvable subgroups. Some of the ideas in this work were inspired by discussions at the American Institute of Mathematics workshop, ``The Isoperimetric Inequality for $SL(n;\ensuremath{\mathbb{Z}})$,'' and the author would like to thank the organizers, Nathan Broaddus, Tim Riley, and Kevin Wortman; and participants, especially Mladen Bestvina, Alex Eskin, Martin Kassabov, and Christophe Pittet.
The author would also like to thank Tim Riley, Yves de Cornulier, Kevin Wortman, and Mladen Bestvina for many helpful conversations while the author was visiting Bristol University, Universit\'e de Rennes, and University of Utah. Finally, the author would like to thank New York University for its hospitality during part of the preparation of this paper. \section{Preliminaries}\label{sec:prelims} In this section, we will give some preliminaries on the geometry of $SL(n;\ensuremath{\mathbb{Z}})$. We use a variant of big-O notation throughout this paper; the notation $$f(x,y,\dots)=O(g(x,y,\dots))$$ means that there is a $c>0$ such that $|f(x,y,\dots)|\le c g(x,y,\dots)+c$ for all values of the parameters. If $f:X\to Y$ is Lipschitz, we say that $f$ is $c$-Lipschitz if $d(f(x),f(y))\le cd(x,y)$ for all $x,y\in X$, and we let $\Lip(f)$ be the infimal $c$ such that $f$ is $c$-Lipschitz. \subsection{Words and curves}\label{sec:wordscurves} If $S$ is a set, we call a formal product of elements of $S$ and their formal inverses a {\em word} in $S$. For our purposes, $S$ will usually be a set of generators $\{g_1,\dots\}$ of a group $G$; we call a word {\em reduced} if no generator appears next to its inverse. If $$u=g_{i(1)}^{\pm1}g_{i(2)}^{\pm1}\dots g_{i(n)}^{\pm1}$$ is a word, we call $\ell_{\text{w}}(u):=n$ its length. We denote the set of words in $S$ by $S^*$, and if $g,h\in S^*$, we let $gh$ be their concatenation. We denote the empty word by $\varepsilon$. When $S$ is a generating set for $G$, there is a natural evaluation map from $S^*$ to $G$, and we say that a word {\em represents} the corresponding group element. A word $w$ in $S$ corresponds to a path in the Cayley graph of $G$ with respect to $S$. This path connects $e$ (the identity element) to the element that $w$ represents, and if $w$ is reduced, the path does not double back on itself. Simplicial paths in the Cayley graph which start at $e$ are in bijective correspondence with words in $S$. There is also an approximate version of this correspondence for some group actions on complexes or manifolds. Let $X$ be a simplicial complex or riemannian manifold and let $G$ act on $X$ (by maps of simplicial complexes or by isometries, respectively). We will usually consider either the case that $G$ is a discrete group acting on a complex or the case that $G=X$ is a Lie group or symmetric space. Let $x_0\in X$. Let $S\subset G$ and for all $g\in S$, let $\gamma_g:[0,1]\to X$ be a curve connecting $x_0$ to $gx_0$. If $\gamma_1$ and $\gamma_2$ are two curves connecting $x_0$ to $g_1x_0$ and $g_2x_0$ respectively, let $\gamma_1\gamma_2$ be the concatenation of $\gamma_1$ with the translation $g_1\gamma_2$ of $\gamma_2$. This curve connects $x_0$ to $g_1g_2x_0$. Similarly, let $\gamma_1^{-1}$ be the curve connecting $x_0$ to $g_1^{-1} x_0$ which traces the translation $g_1^{-1}\gamma_1$ in reverse. In this fashion, a word $g_1^{\pm1}\dots g_n^{\pm1}$ corresponds to a curve $\gamma_{g_1}^{\pm1}\dots \gamma_{g_n}^{\pm1}$ which connects $x_0$ to $g_1^{\pm1}\dots g_n^{\pm1}x_0$. By abuse of notation, we will often use words to denote their corresponding curves. In the next section, we will describe the reverse direction and use the Filling Theorem to approximate curves in $X$ by words in $S$. \subsection{Dehn functions and the Filling Theorem} The Dehn function is a group invariant providing one way to describe the difficulty of the word problem for the group.
The word problem is combinatorial, but the correspondence between words and curves allows us to work with the Dehn function from either a combinatorial or a geometric perspective. If $$H=\langle h_1,\dots,h_d \mid r_1,\dots,r_s\rangle$$ is a finitely presented group, we can let $\Sigma=\{h_1,\dots,h_d\}$ and consider words in $\Sigma$. If a word $w$ represents the identity, then there is a way to prove this using the relations. That is, there is a sequence of steps which reduces $w$ to the empty word, where each step is a free expansion (insertion of a subword $h_i^{\pm 1}h_i^{\mp1}$), free reduction (deletion of a subword $h_i^{\pm 1}h_i^{\mp1}$), or the application of a relator (insertion or deletion of one of the $r_i$). We call the number of applications of relators in a sequence its {\em cost}, and we call the minimum cost of a sequence which starts at $w$ and ends at $\varepsilon$ the {\em filling area} of $w$, denoted by $\delta_H(w)$. We then define the {\em Dehn function} of $H$ to be $$\delta_H(n)=\max_{\ell_{\text{w}}(w)\le n} \delta_H(w),$$ where $\ell_{\text{w}}(w)$ represents the length of $w$ as a word and the maximum is taken over words representing the identity. This depends {\em a priori} on the chosen presentation of $H$; we will see that the growth rate of $\delta_H$ is independent of this choice. For example, in $\ensuremath{\mathbb{Z}}^2=\langle a,b\mid [a,b]\rangle$, the word $a^nb^na^{-n}b^{-n}$ represents the identity and can be filled with $n^2$ applications of the relator (one for each unit square of an $n\times n$ grid); this is optimal up to a constant, so the Dehn function of $\ensuremath{\mathbb{Z}}^2$ grows quadratically. For convenience, if $v,w$ are two words representing the same element of $H$, we define $\delta_H(v,w)=\delta_H(vw^{-1})$; this is the minimum cost to transform $v$ into $w$. This satisfies a triangle inequality in the sense that if $w_1,w_2,w_3$ are words representing the same element of $H$, then $$\delta_H(w_1,w_3)\le \delta_H(w_1,w_2)+\delta_H(w_2,w_3).$$ This can also be interpreted geometrically. If $K_H$ is the {\em presentation complex} of $H$ (a simply-connected 2-complex whose 1-skeleton is the Cayley graph of $H$ and whose $2$-cells correspond to translates of the relators), then $w$ corresponds to a closed curve in the $1$-skeleton of $K_H$ and the sequence of steps reducing $w$ to the identity corresponds to a homotopy contracting this closed curve to a point. More generally, if $X$ is a riemannian manifold or simplicial complex, we can define the filling area $\delta_X(\gamma)$ of a Lipschitz curve $\gamma:S^1\to X$ to be the infimal area of a Lipschitz map $D^2\to X$ which extends $\gamma$. Then we can define the Dehn function of $X$ to be $$\delta_X(n)=\sup_{\ell_{\text{c}}(\gamma)\le n} \delta_X(\gamma),$$ where $\ell_{\text{c}}(\gamma)$ is the length of $\gamma$ as a curve and the supremum is taken over null-homotopic closed curves. As in the combinatorial case, if $\beta$ and $\gamma$ are two curves connecting the same points which are homotopic with their endpoints fixed, we define $\delta_X(\beta,\gamma)$ to be the infimal area of a homotopy between $\beta$ and $\gamma$ which fixes their endpoints. Gromov stated a theorem connecting these two definitions, proofs of which can be found in \cite{Bridson} and \cite{BurTab}: \begin{thm}[Gromov's Filling Theorem]\label{thm:GroFill} If $X$ is a simply connected riemannian manifold or simplicial complex and $H$ is a finitely presented group acting properly discontinuously, cocompactly, and by isometries on $X$, then $\delta_H\sim \delta_X$. \end{thm} Here, $f \sim g$ if $f$ and $g$ grow at the same rate.
Specifically, if $f,g:\ensuremath{\mathbb{N}}\to \ensuremath{\mathbb{N}}$, let $f\lesssim g$ if and only if there is a $c$ such that $$f(n)\le c g(cn+c)+c\text{ for all }n$$ and $f\sim g$ if and only if $f\lesssim g$ and $g\lesssim f$. One consequence of Theorem~\ref{thm:GroFill} is that the Dehn functions corresponding to different presentations of a group are equivalent under this relation. The idea behind the proof of the Filling Theorem is that under the given conditions, a closed curve in $X$ can be approximated by a word, and a homotopy filling the curve can be approximated by a sequence of applications of relators. Let $\langle S\mid R\rangle$ be a finite presentation for $H$. We can choose a basepoint and curves in $X$ corresponding to each generator as in the previous section; this corresponds to a choice of an equivariant map from the Cayley graph of $H$ to $X$. Since $X$ is simply-connected, we can extend this map to a map on $K_H$. The following lemma, which follows from the Federer-Fleming Deformation Lemma \cite{FedFlem} or from the Cellulation Lemma \cite[5.2.3]{Bridson}, allows us to approximate curves and discs in $X$ by curves (words) and discs in $K_H$: \begin{lemma} \label{lem:approx} Let $H$ and $X$ be as in the Filling Theorem, and let $f:K_H\to X$ be an $H$-equivariant map of a presentation complex for $H$ to $X$. Then there is a constant $c$, depending only on $H$, $X$, and $f$, such that: \begin{enumerate} \item Let $s:[0,1]\to X$ connect $f(e)$ and $f(h)$, where $e$ is the identity in $H$ and $h\in H$. There is a word $w$ which represents $h$ and which has length $\ell_{\text{w}}(w)\le c\ell_{\text{c}}(s)+c$. If $X$ is simply connected, then $w$ approximates $s$ in the sense that if $\gamma_w:[0,1]\to K_H$ is the curve corresponding to $w$, then $$\delta_X(s,f\circ \gamma_w)=O(\ell_{\text{c}}(s)).$$ \item If $w$ is a word representing the identity in $H$ and $\gamma:S^1\to K_H$ is the corresponding closed curve in $K_H$, then $$\delta_H(w)\le c(\ell_{\text{w}}(w)+\delta_X(f\circ \gamma)).$$ \end{enumerate} \end{lemma} \subsection{The geometry of $SL(p;\ensuremath{\mathbb{R}})$ and $SL(p;\ensuremath{\mathbb{Z}})$}\label{sec:geomSym} Let $\Gamma=SL(p;\ensuremath{\mathbb{Z}})$ and let $G=SL(p)=SL(p;\ensuremath{\mathbb{R}})$. The group $\Gamma$ is a lattice in $G$, and the geometry of $G$, $\Gamma$, and the quotient will be important in our proof. In this section, we will focus on the geometry of $G$ and $\Gamma$; in the next, we will describe the geometry of the quotient. For our purposes, the main geometric feature of $G$ is that it acts on a non-positively curved symmetric space. Let $\ensuremath{\mathcal{E}}=SL(p;\ensuremath{\mathbb{R}})/SO(p)$. The tangent space of $\ensuremath{\mathcal{E}}$ at the identity, $T_{I}\ensuremath{\mathcal{E}}$, is isomorphic to the space of symmetric matrices with trace 0. If $u^{tr}$ represents the transpose of $u$, then we can define an inner product $\langle u,v\rangle=\trace(u^{tr}v)$ on $T_I \ensuremath{\mathcal{E}}$. Since this is $SO(p)$-invariant, it gives rise to a $G$-invariant riemannian metric on $\ensuremath{\mathcal{E}}$. Under this metric, $\ensuremath{\mathcal{E}}$ is a non-positively curved symmetric space. The lattice $\Gamma$ acts on $\ensuremath{\mathcal{E}}$ with finite covolume, but the action is not cocompact. Let $\ensuremath{\mathcal{M}}:=\Gamma\backslash \ensuremath{\mathcal{E}}$.
If $x\in G$, we write the equivalence class of $x$ in $\ensuremath{\mathcal{E}}$ as $[x]_\ensuremath{\mathcal{E}}$; similarly, if $x\in G$ or $x\in \ensuremath{\mathcal{E}}$, we write the equivalence class of $x$ in $\ensuremath{\mathcal{M}}$ as $[x]_\ensuremath{\mathcal{M}}$. If $g\in G$ is a matrix with coefficients $\{g_{ij}\}$, we define $$\|g\|_2=\sqrt{\sum_{i,j}g_{ij}^2},$$ $$\|g\|_\infty=\max_{i,j}|g_{ij}|.$$ Note that for all $g,h\in G$, we have $$\|gh\|_2\le \|g\|_2\|h\|_2$$ $$\|g^{-1}\|_2\ge \|g\|^{1/p}_2$$ and that $$\log \|g\|_2 = O(d_G(I,g)).$$ (The first inequality is submultiplicativity of the Frobenius norm; the second follows from the singular values $\sigma_1\ge\dots\ge\sigma_p>0$ of $g$: since $\prod_i\sigma_i=1$, AM--GM gives $\|g^{-1}\|_2\ge\sqrt{p}$, while $1=\prod_i\sigma_i\ge \sigma_1\sigma_p^{p-1}$ gives $\|g^{-1}\|_2\ge\sigma_p^{-1}\ge\sigma_1^{1/(p-1)}$; since $\|g\|_2\le\sqrt{p}\,\sigma_1$, the two bounds together imply $\|g^{-1}\|_2\ge\|g\|_2^{1/p}$.) The rows of a matrix in $G$ form a unimodular basis of $\ensuremath{\mathbb{R}}^p$, so we can interpret $G$ as the space of unimodular bases in $\ensuremath{\mathbb{R}}^p$. From this viewpoint, $SO(p)$ acts on a basis by rotating the basis vectors, so $\ensuremath{\mathcal{E}}$ consists of the set of bases up to rotation. An element of $\Gamma$ acts by replacing the basis elements by integer combinations of basis elements. This preserves the lattice that they generate, so we can think of $\Gamma\backslash G$ as the set of unit-covolume lattices in $\ensuremath{\mathbb{R}}^p$. The quotient $\ensuremath{\mathcal{M}}=\Gamma\backslash \ensuremath{\mathcal{E}}$ is then the set of unit-covolume lattices up to rotation. Nearby points in $\ensuremath{\mathcal{M}}$ or $\ensuremath{\mathcal{E}}$ correspond to bases or lattices which can be taken into each other by small linear deformations of $\ensuremath{\mathbb{R}}^p$. Note that this set is not compact -- for instance, the injectivity radius of a lattice is a positive continuous function on $\ensuremath{\mathcal{M}}$, and there are lattices with arbitrarily small injectivity radii. We can use this function to define a subset of $\ensuremath{\mathcal{E}}$ on which $\Gamma$ acts cocompactly. Let $\ensuremath{\mathcal{E}}(\epsilon)$ be the set of points which correspond to lattices with injectivity radius at least $\epsilon$. This is invariant under $\Gamma$, and when $0<\epsilon \le 1/2$, it is contractible and $\Gamma$ acts on it cocompactly \cite{ECHLPT}. We call it the {\em thick part} of $\ensuremath{\mathcal{E}}$, and its preimage $G(\epsilon)$ in $G$ the thick part of $G$. ``Thick'' here refers to the fact that the quotients $\Gamma\backslash \ensuremath{\mathcal{E}}(\epsilon)$ and $\Gamma\backslash G(\epsilon)$ have injectivity radius bounded below. Lemma~\ref{lem:approx} allows us to approximate curves and discs in the thick part of $G$ or $\ensuremath{\mathcal{E}}$ by words in a finite generating set of $\Gamma$ and discs in $K_\Gamma$, so proving a filling inequality for $\Gamma$ is equivalent to proving one for $\ensuremath{\mathcal{E}}(\epsilon)$. We will also define some subgroups of $G$. In the following, $\ensuremath{\mathbb{K}}$ represents either $\ensuremath{\mathbb{Z}}$ or $\ensuremath{\mathbb{R}}$. Let $z_1,\dots,z_p$ be the standard generators for $\ensuremath{\mathbb{Z}}^p$, and if $S\subset \{1,\dots,p\}$, let $\ensuremath{\mathbb{R}}^S=\langle z_s\rangle_{s\in S}$ be a subspace of $\ensuremath{\mathbb{R}}^p$. If $q\le p$, there are many ways to include $SL(q)$ in $SL(p)$. Let $SL(S)$ be the copy of $SL(\#S)$ in $SL(p)$ which acts on $\ensuremath{\mathbb{R}}^S$ and fixes $z_t$ for $t\not \in S$.
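For example, for $p=3$ and $S=\{1,3\}$, the subgroup $SL(S)$ consists of the determinant-one matrices of the form $$\begin{pmatrix} * & 0 & * \\ 0 & 1 & 0 \\ * & 0 & * \end{pmatrix},$$ which act on $\ensuremath{\mathbb{R}}^{\{1,3\}}=\langle z_1,z_3\rangle$ and fix $z_2$.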
If $S_1,\dots,S_n$ are disjoint subsets of $\{1,\dots, p\}$ such that $\bigcup S_i=\{1,\dots,p\}$, let $$U(S_1,\dots,S_n;\ensuremath{\mathbb{K}})\subset SL(p;\ensuremath{\mathbb{K}})$$ be the subgroup of matrices preserving the flag $$\ensuremath{\mathbb{R}}^{S_n}\subset \ensuremath{\mathbb{R}}^{S_n\cup S_{n-1}} \subset \dots\subset \ensuremath{\mathbb{R}}^p $$ when acting on the right. If the $S_i$ are sets of consecutive integers in increasing order, $U(S_1,\dots,S_n;\ensuremath{\mathbb{K}})$ is block upper-triangular. For example, $U(\{1\},\{2,3,4\};\ensuremath{\mathbb{K}})$ is the subgroup of $SL(4;\ensuremath{\mathbb{K}})$ consisting of matrices of the form: $$\begin{pmatrix} * & * & * & * \\ 0 & * & * & * \\ 0 & * & * & * \\ 0 & * & * & * \end{pmatrix}.$$ If $d_1,\dots, d_n>0$ and $\sum_i d_i=p$, let $U(d_1,\dots,d_n;\ensuremath{\mathbb{K}})$ be the group of upper block triangular matrices with blocks of the given sizes, so that the subgroup illustrated above is $U(1,3;\ensuremath{\mathbb{K}})$. Each group $U(d_1,\dots,d_n;\ensuremath{\mathbb{Z}})$ is a parabolic subgroup of $\Gamma$, and any parabolic subgroup of $\Gamma$ is conjugate to a unique such group. Let $\ensuremath{\mathcal{P}}$ be the set of groups $U(d_1,\dots,d_n;\ensuremath{\mathbb{Z}})$, including $U(p;\ensuremath{\mathbb{Z}})=\Gamma$. One feature of $SL(p;\ensuremath{\mathbb{Z}})$ is its particularly simple presentation. If $1\le i\ne j\le p$, let $e_{ij}(x)\in \Gamma$ be the matrix which consists of the identity matrix with the $(i,j)$-entry replaced by $x$; we call these {\em elementary matrices}. Let $e_{ij}:=e_{ij}(1)$. When $p\ge 3$, there is a finite presentation which has the matrices $e_{ij}$ as generators \cite{Milnor}: \label{sec:origSteinberg} \begin{align} \notag \Gamma=\langle e_{ij} \mid \; &[e_{ij},e_{kl}]=I & \text{if $i\ne l$ and $j\ne k$}\\ & [e_{ij},e_{jk}]=e_{ik} & \text{if $i\ne k$}\label{eq:steinberg}\\ \notag & (e_{ij} e_{ji}^{-1} e_{ij})^4=I \rangle, \end{align} where we adopt the convention that $[x,y]=xyx^{-1}y^{-1}$. We will use a slightly expanded set of generators. Let $$\Sigma:=\{e_{ij}\mid 1\le i\ne j\le p\}\cup D,$$ where $D\subset \Gamma$ is the set of diagonal matrices in $SL(p;\ensuremath{\mathbb{Z}})$; note that this set is finite. If $R$ is the set of relators given above together with additional relations expressing each element of $D$ as a product of elementary matrices, then $\langle \Sigma\mid R\rangle$ is a finite presentation of $\Gamma$. Furthermore, if $H=SL(q;\ensuremath{\mathbb{Z}})\subset SL(p;\ensuremath{\mathbb{Z}})$ or if $H$ is a subgroup of block-upper-triangular matrices, then $H$ is generated by $\Sigma \cap H$. For each $s\in \Sigma$, associate $s$ with a geodesic in $\ensuremath{\mathcal{E}}(1/2)$ connecting $[I]_\ensuremath{\mathcal{E}}$ to $[s]_\ensuremath{\mathcal{E}}$. Let $c_\Sigma$ be the maximum length of one of these curves. These give a correspondence between words in $\Sigma$ and curves in $\ensuremath{\mathcal{E}}$; by abuse of notation, we will often use words in $\Sigma$ to refer to their corresponding curves in $\ensuremath{\mathcal{E}}$. One key fact about the geometry of $SL(p;\ensuremath{\mathbb{Z}})$ is a theorem of Lubotzky, Mozes, and Raghunathan \cite{LMRComptes}: \begin{thm}\label{thm:LMR} The word metric on $SL(p;\ensuremath{\mathbb{Z}})$ for $p\ge 3$ is equivalent to the restriction of the riemannian metric of $SL(p;\ensuremath{\mathbb{R}})$ to $SL(p;\ensuremath{\mathbb{Z}})$.
That is, there is a $c$ such that for all $g\in SL(p;\ensuremath{\mathbb{Z}})$, we have $$c^{-1} d_G(I,g)\le d_{\Gamma}(I,g)\le c d_G(I,g).$$ \end{thm} Part of the proof of this theorem is a construction of short words which represent large elementary matrices. Lubotzky, Mozes, and Raghunathan construct these words by including the solvable group $\ensuremath{\mathbb{R}}\ltimes \ensuremath{\mathbb{R}}^2$ in the thick part of $G$. Since $\ensuremath{\mathbb{R}}^2\subset \ensuremath{\mathbb{R}}\ltimes \ensuremath{\mathbb{R}}^2$ is exponentially distorted, there are short curves in $\ensuremath{\mathbb{R}}\ltimes \ensuremath{\mathbb{R}}^2$ connecting $I$ to $e_{ij}(x)$ which can be approximated by short words in $\Gamma$. For our purposes, we will need a more general construction. \begin{lemma}\label{lem:shortcuts} Let $p\ge 3$, and let $S,T\subset \{1,\dots,p\}$ be disjoint. Let $s=\#S$ and $t=\#T$, and assume that $s\ge 2$. If $V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, define $$u(V)=u(V;S,T):=\begin{pmatrix} I_S & V \\ 0 & I_T \end{pmatrix}\in U(S,T).$$ There is a subgroup $$H_{S,T}\cong (\ensuremath{\mathbb{R}}^{s-1}\times\ensuremath{\mathbb{R}}^{t-1})\ltimes (\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T)\subset U(S,T)$$ which lies in $G(\epsilon)$ for some $\epsilon>0$ and a family of curves $\widehat{u}(V;S,T):[0,1]\to H_{S,T}$ connecting $I$ to $u(V;S,T)$ such that $\ell_{\text{c}}(\widehat{u}(V;S,T))=O(\mathop{\overline{\log}} \|V\|_2)$, where $\mathop{\overline{\log}} x = \max\{1,\log x\}$. \end{lemma} \begin{proof} First, we define $H_{S,T}\subset U(S,T)$. Let $A_1,\dots, A_{s}$ be a set of simultaneously diagonalizable positive-definite matrices in $SL(S;\ensuremath{\mathbb{Z}})$. The $A_i$'s have the same eigenvectors; call these shared eigenvectors $v_1,\dots, v_{s}\in \ensuremath{\mathbb{R}}^{S}$, and normalize them to have unit length. The $A_i$ are entirely determined by their eigenvalues, and we can define vectors $$q_i=(\log \|A_iv_1\|_2,\dots,\log \|A_iv_s\|_2)\in \ensuremath{\mathbb{R}}^s.$$ Since $A_i\in SL(S;\ensuremath{\mathbb{Z}})$, the product of its eigenvalues is $1$, and the sum of the coordinates of $q_i$ is 0. We require that the $A_i$ are independent in the sense that the $q_i$ span an $(s-1)$-dimensional subspace of $\ensuremath{\mathbb{R}}^s$; since they are all contained in an $(s-1)$-dimensional subspace, this is the maximum rank possible. If a set of matrices satisfies these conditions, we call them a set of {\em independent commuting matrices} for $S$. A construction of such matrices can be found in Section 10.4 of \cite{ECHLPT}. The $A_i$ generate a subgroup isomorphic to $\ensuremath{\mathbb{Z}}^{s-1}$, and by possibly choosing a different generating set for this subgroup, we can assume that $\lambda_i:=\|A_iv_i\|_2>1$ for all $i$. If $t\ge 2$, let $B_1^{tr},\dots, B_{t}^{tr}\in SL(T;\ensuremath{\mathbb{Z}})$ (where $^{tr}$ represents the transpose of a matrix) be a set of independent commuting matrices for $T$ and let $w_1,\dots, w_{t}\in \ensuremath{\mathbb{R}}^{T}$ be a basis of unit eigenvectors of the $B_i^{tr}$. If $t=1$, let $B_1=(1)$ and let $w_1$ be a generator of $\ensuremath{\mathbb{R}}^T=\ensuremath{\mathbb{R}}$.
Let \begin{align*} H_{S,T} :=&\left\{\begin{pmatrix}\prod_i A_i^{x_i} & V \\ 0 & \prod_i B_i^{y_i}\end{pmatrix} \middle|\; x_i,y_i\in \ensuremath{\mathbb{R}}, V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T\right\} \\ =&(\ensuremath{\mathbb{R}}^{s-1}\times\ensuremath{\mathbb{R}}^{t-1})\ltimes (\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T). \end{align*} When the $x_i$ and $y_i$ are integers and $V$ has integer coefficients, this matrix has integer coefficients. Furthermore, $H_{S,T}\cap \Gamma$ is a cocompact lattice in $H_{S,T}$, so $H_{S,T}$ is contained in $G(\epsilon)$ for some $\epsilon>0$. By abuse of notation, let $A_i$ and $B_i$ refer to the corresponding matrices in $H_{S,T}$. The group $H_{S,T}$ is generated by powers of the $A_i$, powers of the $B_i$, and elementary matrices in the sense that any element of $H_{S,T}$ can be written as $$\prod A_i^{x_i}\prod B_i^{y_i} \begin{pmatrix} I_S & V \\ 0 & I_T \end{pmatrix},$$ for some $x_i,y_i\in \ensuremath{\mathbb{R}}$ and $V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, where $I_S$ and $I_T$ represent the identity matrix in $SL(S;\ensuremath{\mathbb{Z}})$ and $SL(T;\ensuremath{\mathbb{Z}})$ respectively. Let $$W=\{A_i^{x}\}_{x\in \ensuremath{\mathbb{R}}}\cup \{B_i^{x}\}_{x\in \ensuremath{\mathbb{R}}}\cup \{u(V)\}_{V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T}.$$ We let $A_i^{x}$ correspond to the curve $$d\mapsto \begin{pmatrix} A_i^{xd} & 0 \\ 0 & I_T \end{pmatrix},$$ $B_i^{x}$ to the curve $$d\mapsto \begin{pmatrix} I_S & 0 \\ 0 & B_i^{xd} \end{pmatrix},$$ and $$u(V)=\begin{pmatrix} I_S & V \\ 0 & I_T \end{pmatrix}$$ to the curve $$d\mapsto \begin{pmatrix} I_S & dV \\ 0 & I_T \end{pmatrix},$$ where in all cases, $d$ ranges from $0$ to $1$. This gives a correspondence between elements of $W$ and curves. Let $c\ge \max_i\{\ell_{\text{c}}(A_i)\}$. The word $A_i^x u(v_i\otimes w) A_i^{-x}$ represents the matrix $u(\lambda_i^xv_i\otimes w)$ and corresponds to a curve of length at most $2cx+\|v_i\|_2\| w\|_2$ connecting $I$ and $u(\lambda_i^xv_i\otimes w)$. If $V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, then $$V=\sum_{i,j} x_{ij}v_i\otimes w_j$$ for some $x_{ij}\in \ensuremath{\mathbb{R}}$. Let $$\gamma_{ij}(x)=\begin{cases} A_i^{\log_{\lambda_i} |x|} u(\sign(x) v_i\otimes w_j) A_i^{-\log_{\lambda_i} |x|} & \text{ if $|x|>1$,} \\ u(x v_i\otimes w_j)&\text{ if $|x|\le 1$,} \end{cases}$$ where $\sign(x)=\pm1$ depending on whether $x$ is positive or negative. Let $$\widehat{u}(V):=\prod_{i,j} \gamma_{ij}(x_{ij}).$$ Then $\widehat{u}(V)$ represents $u(V)$ and $$\ell_{\text{c}}(\widehat{u}(V))=O(\mathop{\overline{\log}} \|V\|_2)$$ for all $V$, where $\mathop{\overline{\log}} x = \max\{1,\log x\}$. \end{proof} If $i\in S$ and $j\in T$, then $e_{ij}(x)$ and $u(x z_i\otimes z_j;S,T)$ represent the same matrix. If $x\in \ensuremath{\mathbb{Z}}$, then we define $\widehat{e}_{ij;S,T}(x)\in \Sigma^*$ to be a word approximating $\widehat{u}(x z_i\otimes z_j;S,T)$. Changing $S$ and $T$ changes $\widehat{e}_{ij;S,T}(x)$, but in Sec.~\ref{sec:steinberg}, we will prove that in many cases, $\widehat{e}_{ij;S,T}(x)$ and $\widehat{e}_{ij;S',T'}(x)$ are connected by a homotopy of area $O((\log |x|)^2)$. Because of this, the choice of $S$ and $T$ is largely irrelevant. Thus, for each $(i,j)$, we choose a $d\not\in \{i,j\}$ and let $$\widehat{e}_{ij}(x)=\widehat{e}_{ij;\{i,d\},\{j\}}(x).$$ These ``shortcuts'' are key to our proof. 
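To make the exponential-distortion mechanism behind these shortcuts concrete, here is a minimal numerical sketch of the conjugation trick for $p=3$, $S=\{1,2\}$, $T=\{3\}$ (the particular hyperbolic matrix $A$ below is a hypothetical choice made for illustration, not the independent commuting matrices of Lemma~\ref{lem:shortcuts}; Python with numpy is assumed): \begin{verbatim}
import numpy as np

# Conjugating u(V) by powers of a hyperbolic A in SL(S;Z) yields u(A^k V),
# whose entries have size ~ lambda^k, from a word of length O(k).
A = np.array([[2, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])      # A in SL({1,2};Z), top eigenvalue (3+sqrt(5))/2
Ainv = np.array([[1, -1, 0],
                 [-1, 2, 0],
                 [0, 0, 1]])   # its (integral) inverse

def u(v1, v2):
    # u(V; S, T): identity with the column V = (v1, v2)^T in the S x T block
    return np.array([[1, 0, v1],
                     [0, 1, v2],
                     [0, 0, 1]])

k = 10
Ak = np.linalg.matrix_power(A, k)
w = Ak @ u(1, 0) @ np.linalg.matrix_power(Ainv, k)   # the word A^k u A^{-k}
V = Ak @ np.array([1, 0, 0])                         # = A^k applied to z_1
assert (w == u(V[0], V[1])).all()
print(V[:2])   # Fibonacci-sized entries ~ lambda^k
\end{verbatim} The point is that the curve corresponding to $A^k\, u(z_1\otimes z_3)\, A^{-k}$ has length $O(k)$, while the matrix it represents has entries of size $\lambda^k$ with $\lambda=(3+\sqrt{5})/2$; this is the mechanism producing curves of length $O(\mathop{\overline{\log}} \|V\|_2)$ reaching $u(V)$.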
Just as Lubotzky, Mozes, and Raghunathan build paths in $\Gamma$ out of the $\widehat{e}_{ij}(x)$, we will build fillings in $\Gamma$ out of fillings of words involving the $\widehat{e}_{ij}(x)$. With this in mind, we introduce an infinite generating set for $\Gamma$. Let $$\widehat{\Sigma}:=\{\sgen{ab}{x}\mid x\in \ensuremath{\mathbb{Z}} \setminus \{0\}, a,b\in \{1,\dots,p\}, a\ne b\}\cup D.$$ This set contains $\Sigma$, so it generates $\Gamma$. Define a map $\lambda:\widehat{\Sigma}^*\to \Sigma^*$ which sends elements of $\Sigma$ to themselves and sends $\sgen{ab}{x}$ to $\widehat{e}_{ab}(x)$, and define a length function $\widehat{\ell}:\widehat{\Sigma}^*\to \ensuremath{\mathbb{Z}}$ by $$\widehat{\ell}(w)=\ell_{\text{w}}(\lambda(w)).$$ Words in $\widehat{\Sigma}$ are associated to curves in the thick part of $G$, and in Section~\ref{sec:steinberg}, we will construct discs filling several families of such curves. We will use these ``relations'' to manipulate words in $\widehat{\Sigma}$. \subsection{The geometry of $\ensuremath{\mathcal{M}}$} \label{sec:redThe} Since $\ensuremath{\mathcal{E}}$ is non-positively curved, $\delta_{\ensuremath{\mathcal{E}}}$ grows at most quadratically, and our goal is to show that the same is true for the subset $\ensuremath{\mathcal{E}}(\epsilon)$. In order to do so, we must study the {\em thin part} $\ensuremath{\mathcal{E}}\setminus \ensuremath{\mathcal{E}}(\epsilon)$ of $\ensuremath{\mathcal{E}}$, and since the cusp of $\ensuremath{\mathcal{M}}$ corresponds to this thin part, we will study $\ensuremath{\mathcal{M}}$. The constructions in this section generalize to many reductive and semisimple Lie groups with the use of precise reduction theory, but we will only state the results for $SL(p;\ensuremath{\mathbb{Z}})$, as stating the theorems in full generality requires substantial additional background. Let $\diagmat(t_1,\dots, t_p)$ be the diagonal matrix with entries $(t_1,\dots, t_p)$. Let $A$ be the set of diagonal matrices in $G$ and if $\epsilon>0$, let $$A^+_{\epsilon}=\{\diagmat(t_1,\dots, t_p)\mid \prod t_i=1, t_i > 0, t_i\ge \epsilon t_{i+1}\}.$$ One of the main features of $\ensuremath{\mathcal{M}}$ is that it is Hausdorff equivalent to $A^+_{\epsilon}$; our main goal in this section is to describe this Hausdorff equivalence and its ``fibers''. Let $N$ be the set of upper triangular matrices with 1's on the diagonal and let $N^+$ be the subset of $N$ with off-diagonal entries in the interval $[-1/2,1/2]$. Translates of the set $N^+A^+_\epsilon$ are known as Siegel sets. The following properties of Siegel sets are well known (see for instance \cite{BorHar-Cha}). \begin{lemma}\label{lem:redThe}\ \\ There is an $\epsilon_{\ensuremath{\mathcal{S}}}\in (0,1)$ such that if we let $$\ensuremath{\mathcal{S}}:=[N^+A^+_{\epsilon_{\ensuremath{\mathcal{S}}}}]_\ensuremath{\mathcal{E}}\subset \ensuremath{\mathcal{E}},$$ then \begin{itemize} \item $\Gamma\ensuremath{\mathcal{S}}=\ensuremath{\mathcal{E}}$. \label{lem:redThe:cover} \item There are only finitely many elements $\gamma\in \Gamma$ such that $\gamma \ensuremath{\mathcal{S}} \cap \ensuremath{\mathcal{S}} \ne \emptyset$. \label{lem:redThe:fundSet} \end{itemize} \end{lemma} In particular, the quotient map $\ensuremath{\mathcal{S}}\to \ensuremath{\mathcal{M}}$ is a surjection. We define $A^+:=A^+_{\epsilon_{\ensuremath{\mathcal{S}}}}$.
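For example, when $p=2$, the condition $t_1\ge \epsilon_{\ensuremath{\mathcal{S}}} t_2$ together with $t_1t_2=1$ gives $A^+=\{\diagmat(t,t^{-1})\mid t\ge \sqrt{\epsilon_{\ensuremath{\mathcal{S}}}}\}$, a ray in $A$, and under the identification of $\ensuremath{\mathcal{E}}$ with the hyperbolic plane the Siegel set $\ensuremath{\mathcal{S}}$ is a classical Siegel domain $\{x+iy \mid |x|\le 1/2,\ y\ge y_0\}$ for $SL(2;\ensuremath{\mathbb{Z}})$, for some $y_0>0$ depending on $\epsilon_{\ensuremath{\mathcal{S}}}$.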
The inclusion $A^+ \hookrightarrow \ensuremath{\mathcal{S}}$ is a Hausdorff equivalence: \begin{lemma}[{\cite{JiMacPherson}}] \label{lem:easyHausdorff} Give $A$ the riemannian metric inherited from its inclusion in $G$, so that $$d_{A}(\diagmat(d_1,\dots,d_p),\diagmat(d'_1,\dots,d'_p))=\sqrt{\sum_{i=1}^p \left|\log \frac{d'_i}{d_i}\right|^2}.$$ \begin{itemize} \item There is a $c$ such that if $n\in N^+$ and $a\in A^+$, then $d_\ensuremath{\mathcal{E}}([na]_\ensuremath{\mathcal{E}},[a]_\ensuremath{\mathcal{E}})\le c$. In particular, if $x\in \ensuremath{\mathcal{S}}$, then $d_\ensuremath{\mathcal{E}}(x,[A^+]_\ensuremath{\mathcal{E}})\le c$. \item If $x,y\in A^+$, then $d_{A}(x,y)=d_{\ensuremath{\mathcal{S}}}(x,y)$. \end{itemize} \end{lemma} \begin{proof} For the first claim, note that if $x=[na]_\ensuremath{\mathcal{E}}$, then $x=[a(a^{-1} n a)]_\ensuremath{\mathcal{E}}$, and $a^{-1}na\in N$. Furthermore, $$\|a^{-1}na\|_\infty\le \epsilon_\ensuremath{\mathcal{S}}^p,$$ so $$d_\ensuremath{\mathcal{E}}([x]_\ensuremath{\mathcal{E}},[a]_\ensuremath{\mathcal{E}})\le d_G(I,a^{-1}na)$$ is bounded independently of $x$. For the second claim, we clearly have $d_{A}(x,y)\ge d_{\ensuremath{\mathcal{S}}}(x,y)$. For the reverse inequality, it suffices to note that the map $\ensuremath{\mathcal{S}}\to A^+$ given by $na\mapsto a$ for all $n\in N^+$, $a\in A^+$ is distance-decreasing. \end{proof} Siegel conjectured that the quotient map from $\ensuremath{\mathcal{S}}$ to $\ensuremath{\mathcal{M}}$ is also a Hausdorff equivalence, that is: \begin{thm}\label{thm:SiegConj} There is a $c'$ such that if $x,y\in \ensuremath{\mathcal{S}}$, then $$d_{\ensuremath{\mathcal{E}}}(x,y)-c'\le d_{\ensuremath{\mathcal{M}}}([x]_\ensuremath{\mathcal{M}},[y]_\ensuremath{\mathcal{M}})\le d_{\ensuremath{\mathcal{E}}}(x,y)$$ \end{thm} Proofs of this conjecture can be found in \cite{Leuzinger,Ji,Ding}. As a consequence, the natural quotient map $A^+\to \ensuremath{\mathcal{M}}$ is a Hausdorff equivalence. Any point $x\in \ensuremath{\mathcal{E}}$ can be written (possibly non-uniquely) as $x=[\gamma na]_\ensuremath{\mathcal{E}}$ for some $\gamma\in \Gamma$, $n\in N^+$ and $a\in A^+$. The theorem implies that these different decompositions are a bounded distance apart: \begin{cor}[{see \cite{JiMacPherson}, Lemmas 5.13, 5.14}]\label{cor:Hausdorff} There is a constant $c''$ such that if $x,y\in \ensuremath{\mathcal{M}}$, $n,n' \in N^+$ and $a,a'\in A^+$ are such that $x=[na]_\ensuremath{\mathcal{M}}$ and $y=[n'a']_\ensuremath{\mathcal{M}}$, then $$|d_{\ensuremath{\mathcal{M}}}(x,y)- d_{A}(a,a')|\le c''.$$ In particular, if $[na]_\ensuremath{\mathcal{M}}=[n'a']_\ensuremath{\mathcal{M}}$, then $d_{A}(a,a')\le c''.$ \end{cor} \begin{proof} Note that by Lemma~\ref{lem:easyHausdorff}, $$d_{\ensuremath{\mathcal{M}}}(x,[a]_\ensuremath{\mathcal{M}})\le d_{\ensuremath{\mathcal{E}}}([na]_\ensuremath{\mathcal{E}},[a]_\ensuremath{\mathcal{E}})\le c$$ and likewise $d_{\ensuremath{\mathcal{M}}}(y,[a']_\ensuremath{\mathcal{M}})\le c$. Furthermore, by the theorem and the lemma, $$d_A(a,a')-c'=d_\ensuremath{\mathcal{S}}(a,a')-c' \le d_{\ensuremath{\mathcal{M}}}([a]_\ensuremath{\mathcal{M}},[a']_\ensuremath{\mathcal{M}}) \le d_A(a,a'),$$ so if we let $c''=c'+2c$, the corollary follows. \end{proof} Let $\rho:\ensuremath{\mathcal{E}}\to \Gamma$ be a map such that $\rho(\ensuremath{\mathcal{S}})=I$ and $x\in \rho(x)\ensuremath{\mathcal{S}}$ for all $x$. 
Any point $x\in \ensuremath{\mathcal{E}}$ can be uniquely written as $x=[\rho(x)na]_\ensuremath{\mathcal{E}}$ for some $n\in N^+$ and $a\in A^+$. Let $\phi:\ensuremath{\mathcal{E}}\to A^+$ be the map $[\rho(x)na]_\ensuremath{\mathcal{E}}\mapsto a$. If $\phi(x)=\diagmat(a_1,\dots,a_p)$, let $\phi_i(x)=\log a_i$. If $x,y\in \ensuremath{\mathcal{E}}$, then $|\phi_i(x)-\phi_i(y)|\le d_\ensuremath{\mathcal{E}}(x,y)+ c''$; let $c_\phi:=c''$.\label{sec:cphi} If $x\in \ensuremath{\mathcal{E}}$, then the values of $\rho$ near $x$ are largely determined by $\phi(x)$. For instance, if $\|\phi(x)\|_2$ is small, then $x$ is in the thick part of $\ensuremath{\mathcal{E}}$, and so if $y$ is close to $x$, then $\rho(x)$ is close to $\rho(y)$. If $\|\phi(x)\|_2$ is large, then $[x]_\ensuremath{\mathcal{M}}$ is deep in the cusp of $\ensuremath{\mathcal{M}}$ and so $\rho(y)$ may be exponentially far from $\rho(x)$. Parts of the cusp, however, generally have simpler topology than the entire quotient. Recall that if $\tilde{x}\in G$ is a representative of $x$, we can construct a lattice $\ensuremath{\mathbb{Z}}^p \tilde{x}\subset \ensuremath{\mathbb{R}}^p$. If $\|\phi(x)\|_2$ is large, then $\ensuremath{\mathbb{Z}}^p \tilde{x}$ has short vectors. In the following lemmas, we will use the subspaces generated by these short vectors to show that the fundamental group of a small ball in the cusp is contained in a parabolic subgroup. Recall that $z_1,\dots, z_p\in \ensuremath{\mathbb{Z}}^p$ are the standard generators of $\ensuremath{\mathbb{Z}}^p$. \begin{lemma}\label{lem:phiFlag} Let $x=[\gamma n a]_\ensuremath{\mathcal{E}}$ for some $\gamma\in \Gamma$, $n\in N^+$, and $a\in A^+$. Let $\tilde{x}\in G$ be such that $x=[\tilde{x}]_\ensuremath{\mathcal{E}}$ and let $$V(x,r)=\langle v\in \ensuremath{\mathbb{Z}}^p\mid \|v \tilde{x}\|_2 \le r \rangle.$$ Different choices of $\tilde{x}$ differ by an element of $SO(p)$, so $V(x,r)$ is independent of the choice of $\tilde{x}$. We claim there is a $c_V>0$ such that if $a=\diagmat(a_1,\dots,a_p)$ and $$e^{c_V}a_{k+1}<r<e^{-c_V}a_{k},$$ then $V(x,r)=Z_k\gamma^{-1}$, where $Z_k:=\langle z_{k+1},\dots, z_p \rangle$. \end{lemma} \begin{proof} Note that $V(\gamma' x',r)=V(x',r){\gamma'}^{-1}$, so we may assume that $\gamma=I$ without loss of generality. Let $n=\{n_{ij}\}\in N^+$ and let $\tilde{x}=na$. We have \begin{align*} z_j \tilde{x} &= z_j na \\ &= a_j z_j+\sum_{i=j+1}^p n_{ji} z_ia_{i}. \end{align*} Since $a_{i+1}\le a_i \epsilon_{\ensuremath{\mathcal{S}}}^{-1}$, we have $a_i\le a_{k+1}\epsilon_{\ensuremath{\mathcal{S}}}^{-p}$ for $i\ge k+1$ and $a_i\ge a_{k}\epsilon_{\ensuremath{\mathcal{S}}}^{p}$ for $i\le k$. Since $|n_{ji}|\le 1/2$ when $i>j$, we have $$\|z_j \tilde{x}\|_2 \le a_{k+1}\sqrt{p}\epsilon_{\ensuremath{\mathcal{S}}}^{-p}$$ when $j>k$. Thus $$V(x,a_{k+1}\sqrt{p}\epsilon_{\ensuremath{\mathcal{S}}}^{-p})\supset Z_k.$$ On the other hand, assume that $v\not \in Z_k$, and let $v=\sum_i v_i z_i$ for some $v_i\in \ensuremath{\mathbb{Z}}$. Let $j$ be the smallest integer such that $v_j\ne 0$; by assumption, $j\le k$. The $z_{j}$-coordinate of $v\tilde{x}$ is $v_{j} a_{j}$, so $$\|v \tilde{x} \|_2 \ge a_{j} > a_{k} \epsilon_{\ensuremath{\mathcal{S}}}^{p}$$ and thus if $t<a_{k} \epsilon_{\ensuremath{\mathcal{S}}}^{p}$, then $V(x,t)\subset Z_k$. Therefore, if $$a_{k+1}\sqrt{p}\epsilon_{\ensuremath{\mathcal{S}}}^{-p}\le t<a_{k} \epsilon_{\ensuremath{\mathcal{S}}}^{p},$$ then $V(\tilde{x},t)=Z_j$. We can choose $c_V=\log \sqrt{p}\epsilon_{\ensuremath{\mathcal{S}}}^{-p}$. 
\end{proof} \begin{lem} \label{lem:parabolicNbhds} There is a constant $c$ such that if $1\le j<p$ and $x,y\in \ensuremath{\mathcal{E}}$ are such that $$d_{\ensuremath{\mathcal{E}}}(x,y)<\frac{\phi_j(x)-\phi_{j+1}(x)}{2}-c$$ and $g,h \in \Gamma$ are such that $x\in g\ensuremath{\mathcal{S}}$ and $y\in h\ensuremath{\mathcal{S}}$, then $g^{-1}h\in U(j,p-j)$. \end{lem} \begin{proof} Let $$r=\exp\frac{\phi_j(x)+\phi_{j+1}(x)}{2},$$ and let $c=c_\phi+c_V$, so that we have $$\phi_{j+1}(x)+c+d_{\ensuremath{\mathcal{E}}}(x,y)<\log r <\phi_{j}(x)-c- d_{\ensuremath{\mathcal{E}}}(x,y).$$ By the previous lemma, $$V(x,r\exp(-d_\ensuremath{\mathcal{E}}(x,y)))=V(x,r\exp(d_\ensuremath{\mathcal{E}}(x,y)))=Z_jg^{-1}.$$ Furthermore, $|\phi_{i}(x)-\phi_{i}(y)|< d_{\ensuremath{\mathcal{E}}}(x,y)+c_\phi$, so $$\phi_{j+1}(y)-c_\phi+c<\log r <\phi_{j}(y)+c_\phi-c$$ and $V(y,r)=Z_jh^{-1}.$ Note that since $$\|v \tilde{y}\|_2\exp(-d_\ensuremath{\mathcal{E}}(x,y)) \le \|v \tilde{x}\|_2\le \|v \tilde{y}\|_2\exp(d_\ensuremath{\mathcal{E}}(x,y)),$$ we have $$V(x,r\exp(-d_\ensuremath{\mathcal{E}}(x,y)))\subset V(y,r)\subset V(x,r\exp(d_\ensuremath{\mathcal{E}}(x,y))),$$ so $Z_j g^{-1}h=Z_j$. Since $g^{-1}h$ stabilizes $Z_j$, we have $g^{-1}h\in U(j,p-j)$ as desired. \end{proof} The differences $\phi_j(x)-\phi_{j+1}(x)$ increase as the distance between $x$ and $\Gamma$ increases: \begin{cor} \label{cor:parabolicNbhds} There is a $c'>0$ such that if $$d_{\ensuremath{\mathcal{E}}}(x,y)<\frac{d_{\ensuremath{\mathcal{M}}}([x]_\ensuremath{\mathcal{M}},[I]_{\ensuremath{\mathcal{M}}})}{2p^2}-c',$$ and $g,h \in \Gamma$ are such that $x\in g\ensuremath{\mathcal{S}}$ and $y\in h\ensuremath{\mathcal{S}}$, then there is a $j$ depending only on $x$ such that $g^{-1}h\in U(j,p-j)$. \end{cor} \begin{proof} We will find a $c'$ such that if the hypothesis is satisfied, then $$d_{\ensuremath{\mathcal{E}}}(x,y)<\frac{\phi_j(x)-\phi_{j+1}(x)}{2}-c$$ for some $j$, where $c$ is as in Lemma~\ref{lem:parabolicNbhds}. Since $\sum_i \phi_i(x)=0$, we have $$|\phi_j(x)|\le p \max_i |\phi_i(x)-\phi_{i+1}(x)|,$$ and since $\phi_i(x)-\phi_{i+1}(x)$ is bounded below, there is a $c_0$ such that $$\frac{d_A(I,\phi(x))}{p^2}\le \max_i (\phi_i(x)-\phi_{i+1}(x)))+c_0.$$ Furthermore, we have $$d_{A}(I,\phi(x))\ge d_{\ensuremath{\mathcal{M}}}([x]_\ensuremath{\mathcal{M}},[I]_{\ensuremath{\mathcal{M}}}) - c_\phi$$ by the definition of $c_\phi$. Combining these two inequalities, we find that there is a $c'>0$ such that $$d_{\ensuremath{\mathcal{E}}}(x,y)<\frac{\phi_j(x)-\phi_{j+1}(x)}{2}-c$$ for some $j$, so $x$ and $y$ satisfy the conditions of Lemma~\ref{lem:parabolicNbhds}. \end{proof} \subsection{Sketch of proof} The basic idea behind the proof of Theorem~\ref{thm:mainThm} is to use a filling in $\ensuremath{\mathcal{E}}$ as a template for a filling in $\Gamma$. Since $\ensuremath{\mathcal{E}}$ is non-positively curved, it has a quadratic Dehn function, so a word in $\Gamma$ can be filled by a quadratic disc in $\ensuremath{\mathcal{E}}$. The problem with this filling is that it is likely to leave the thick part of $\ensuremath{\mathcal{E}}$ and go deep into the cusp of $\ensuremath{\mathcal{M}}$, so we need to replace the parts of the filling that lie in the cusp with pieces in the thick part of $\ensuremath{\mathcal{E}}$. To do this, we use Cor.~\ref{cor:parabolicNbhds}, which implies that piece of the filling which lie in the cusp actually lie in parabolic subgroup. This lets us fill them using an inductive argument. 
The argument breaks down into two main pieces. The first is primarily geometric: we partition $\ensuremath{\mathcal{M}}$ into pieces corresponding to parabolic subgroups and break a filling in $\ensuremath{\mathcal{E}}$ into triangles which each lie in one of the pieces. The second is more combinatorial: we use combinatorial methods to fill each of these triangles. \section{The geometric step: creating a template} As curves in a group grow longer, they can increase in both size and complexity. Many bounds on Dehn functions come from a combination of techniques: one to reduce a complex curve to simple curves, and one to fill those simple curves. This section is devoted to the former problem; we will use the geometry of $\ensuremath{\mathcal{M}}$ to reduce a curve in $\Gamma$ to a collection of simpler curves in parabolic subgroups. We will first describe a framework for reducing curves to triangles which appears frequently in proofs of filling inequalities. Let $H$ be a group generated by $S$ and let $\omega:H\to S^*$ be a map such that $\omega(h)$ represents $h$ for all $h\in H$ and $\ell_{\text{w}}(\omega(h))=O(d(I,h))$. We call this a normal form for $H$; note that it does not have to satisfy a fellow-traveler condition. Let $\tau$ be a directed planar graph whose boundary is a circle, whose internal faces have either two or three sides, and whose vertices are labeled by elements of $h$. We think of $\tau$ as a triangulation of $D^2$. Typically, to fill a word $w=w_1\dots w_n$, we use a $\tau$ whose boundary is an $n$-gon with labels $I$, $w_1$, $w_1w_2$, $\dots$, $w_1\dots w_{n-1}$; we call this a {\em template for $w$}. We can use $\tau$ and $\omega$ to construct a map $D^2\to K_H$. First, we send each vertex to its label. Second, if $e$ is an edge from a vertex labeled $h_1$ to a vertex labeled $h_2$, we send $e$ to $h_1\cdot \omega(h_1^{-1} h_2)$, which connects $h_1$ and $h_2$. This sends the boundary of $\tau$ to a word $\omega(h_1)^{\pm 1}\dots \omega(h_n)^{\pm 1}$ which we call the {\em boundary word} of $\tau$. If $\tau$ is a template for $w$ and $w_\tau$ is its boundary word, then $\delta(w,w_\tau)\le c \ell_{\text{w}}(w)$, for some $c$ depending only on $H$ and $\omega$. It sends the boundary of each triangle to a product $$\omega(h_1)^{\pm 1}\omega(h_2)^{\pm 1}\omega(h_3)^{\pm 1},$$ which we call an $\omega$-triangle, and likewise, each bigon to a product $$\omega(h_1)^{\pm 1}\omega(h_2)^{\pm 1}.$$ We think of a bigon as a degenerate triangle in which $h_3=I$. By filling the $\omega$-triangles and bigons, we get a filling of the boundary word of $\tau$ whose area depends on $\tau$ and on the difficulty of filling $\omega$-triangles and bigons. \begin{figure} \includegraphics[width=3in]{seashell} \caption{\label{fig:seashell}A seashell template} \end{figure} In many cases, a good choice of $\omega$ and of $\tau$ makes these $\omega$-triangles easy to fill. One example is the case of automatic groups; in an automatic group, $\omega$ is the automatic structure, and $\tau$ is the ``seashell'' template in Fig.~\ref{fig:seashell}, whose vertices are labeled $I$, $w_1$, $w_1w_2$, $\dots$, $w_1\dots w_{n-1}$. Each triangle then has two edges in normal form and one short edge, and the fellow-traveler property of $\omega$ lets us fill each such triangle with a disc of area $O(n)$. Since there are $n$ triangles, this gives a quadratic bound on the Dehn function. 
Similarly, some proofs (for example, \cite{GeHoRi}) define a normal form $\omega$ for elements of $H$ and describe homotopies between $\omega(h)s$ and $\omega(hs)$ for all $h\in H$ and $s\in S$; these can also be described in terms of a seashell template. Another application is the following proposition, which can be proved using templates like that in Fig.~\ref{fig:dyadic}. It implies that if $\omega$-triangles can be filled by discs of polynomial area, then so can arbitrary curves. \begin{figure} \includegraphics[width=3in]{fractal} \caption{\label{fig:dyadic}A dyadic template} \end{figure} \begin{prop}\label{prop:dyadic} If there is an $\alpha>1$ such that for all $h_i\in H$ such that $h_1h_2$ and $d(I,h_i)\le \ell$, we have $$\delta_H(\omega(h_1)\omega(h_2)\omega(h_1h_2)^{-1})=O(\ell^\alpha),$$ then $\delta_H(n)\lesssim n^\alpha.$ \end{prop} \begin{proof} Without loss of generality, we may assume that the identity $I$ is in the generating set of $H$. It suffices to consider the case that $\ell_{\text{w}}(w)=n=2^k$ for some $k\in \ensuremath{\mathbb{Z}}$; otherwise, we may pad $w$ with the letter $I$ until its length is a power of 2. Let $\tau$ be the template consisting of $2^k-2$ triangles and 1 bigon as in Fig.~\ref{fig:dyadic}. Let $w=w_1\dots w_n$ and let $w(i)=w_1\dots w_i$. Label the vertices of the template by $w(i)$. Each triangle has vertices labeled $$w(i2^j),w((i+\frac{1}{2})2^{j-1}),w((i+1)2^j)$$ for some $1\le j<k$ and $0\le i < 2^{k-j}$, which are separated by distances at most $2^{j}$. By the hypothesis, such a triangle has a filling of area $O(2^{\alpha j})$. Similarly, the bigon can be filled at cost $O(n^{\alpha})$. Finally, this is a template for $w$, so the cost to transform $w$ to the boundary word is $O(n)$. Summing all the contributions, we find that $\delta_H(w)=O(n^\alpha)$. \end{proof} Such a construction is used in \cite[5.A$''_3$]{GroAII}, \cite{GroCC}, and \cite{LeuzPitQuad}. A third application is a proof that $SL(n;\ensuremath{\mathbb{Z}})$ has an exponential Dehn function. It is straightforward to show that the injectivity radius of $z\in \ensuremath{\mathcal{M}}$ shrinks exponentially quickly as $z\to \infty$; that is, that there is a $c$ such that if $x,y\in \ensuremath{\mathcal{E}}$, $d_\ensuremath{\mathcal{E}}(I,x)\le r$, and $d_{\ensuremath{\mathcal{E}}}(x,y)\le e^{-c r}$, then $d_{\Gamma}(\rho(x),\rho(y))\le c$, where $\rho:\ensuremath{\mathcal{E}}\to \Gamma$ is the map from Sec.~\ref{sec:redThe}. If $\alpha$ is a curve of length $\ell=\ell_{\text{c}}(\alpha)$ in the thick part of $\ensuremath{\mathcal{E}}$, let $\ensuremath{D^2}(t)=[0,t]\times [0,t]$ and let $f:\ensuremath{D^2}(\ell)\to \ensuremath{\mathcal{E}}$ be a Lipschitz disc filling $\alpha$; this exists because $\ensuremath{\mathcal{E}}$ is non-positively curved. We can in fact ensure that $\Lip(f)\le 2$, so that $d(I,f(x))\le 2\ell$ for all $x\in D^2(\ell)$. Let $\tau$ be a triangulation of $\ensuremath{D^2}(\ell)$ with $O(e^{4c\ell})$ triangles with side lengths at most $e^{-2c\ell}$. Label each vertex $v\in \tau^{(0)}$ by $\rho(f(v))$. Each triangle in the template $\tau$ is labeled by three elements of $\Gamma$ which are no more than $c$ apart, so each $\omega$-triangle can be filled with a disc of area $\delta_\Gamma(3c)$. This gives a filling of $\alpha$ with area $O(e^{2c\ell})$, as desired. The proof is based on the fact that if two points in $\ensuremath{\mathcal{E}}$ are exponentially close, then the corresponding elements of $\Gamma$ are a bounded distance apart. 
The key to our proof that $SL(p;\ensuremath{\mathbb{Z}})$ has a quadratic Dehn function is that points in $\ensuremath{\mathcal{E}}$ which are farther apart also satisfy certain relationships, so we can find a more efficient filling by using a template with larger triangles. In a previous paper \cite{YoungQuart}, we proved a quartic filling inequality for $SL(n;\ensuremath{\mathbb{Z}})$ by using a template with $O(\ell^2)$ triangles of side length at most $1$. In this paper, we will improve this to a quadratic filling inequality by using a template with larger cells. \subsection{Adaptive triangulations}\label{sec:adaptive} We claim that if $w$ is a word in $\Sigma$ which represents the identity, then there is a template $\tau$ for $w$ such that all large triangles in $\tau$ have labels contained in a translate of $U(j,p-j)$ for some $j$. Furthermore, $\tau$ can be constructed with relatively few triangles. Specifically: \begin{prop}\label{prop:templateExist} Let $p\in \ensuremath{\mathbb{Z}}$ and $p\ge 2$, and let $\Sigma, \Gamma, \ensuremath{\mathcal{E}},$ etc.\ be as in Sec.~\ref{sec:prelims}. There is a $c$ such that if $w=w_1\dots w_\ell$ is a word in $\Sigma$ which represents the identity, then there is a template for $w$ such that \begin{enumerate} \item \label{prop:templateExist:para} If $g_1, g_2, g_3\in \Gamma$ are the labels of a triangle in the template, then either $d_\Gamma(g_i,g_j)\le 2c$ for all $i,j$ or there is a $k$ such that $g_i^{-1}g_{j}\in U(k,p-k)$ for all $i,j$. \item \label{prop:templateExist:area}$\tau$ has $O(\ell^2)$ triangles, and if the $i$th triangle of $\tau$ has vertices labeled $(g_{i1},g_{i2},g_{i3})$, then $$\sum_{i}(d_\Gamma(g_{i1},g_{i2})+d_\Gamma(g_{i1},g_{i3})+d_\Gamma(g_{i2},g_{i3}))^2=O(\ell^2).$$ Similarly, if the $i$th edge of $\tau$ has vertices labeled $h_{i1}, h_{i2}$, then $$\sum_{i}d_\Gamma(h_{i1},h_{i2})^2=O(\ell^2).$$ \end{enumerate} \end{prop} The basic technique is the same as the construction in the previous section; we start with a filling of $w$ by a Lipschitz disc $f:D^2\to \ensuremath{\mathcal{E}}$, then construct a template for $w$ by triangulating the disc and labelling its vertices using $\rho$. We ensure that properties \ref{prop:templateExist:para} and \ref{prop:templateExist:area} hold by carefully controlling the lengths of edges. If edges are too long, then property \ref{prop:templateExist:para} will not hold; on the other hand, if edges are too short, then there will be too many triangles, and \ref{prop:templateExist:area} will not hold. Corollary~\ref{cor:parabolicNbhds} says that the length necessary for property \ref{prop:templateExist:para} to hold varies based on where $f$ lies in $\ensuremath{\mathcal{M}}$, so we will construct a triangulation with varying side lengths. \begin{prop}\label{prop:adaptive} Let $t=2^k$, $k\ge 0$, let $D^2(t)=[0,t]\times[0,t]$, and let $h:\ensuremath{D^2}(t)\to \ensuremath{\mathbb{R}}$ be a $1$-Lipschitz function such that $h(x)\ge 1$ for all $x$. There is a triangulation $\tau_h$ of $\ensuremath{D^2}(t)$ such that \begin{enumerate} \item All vertices of $\tau_h$ are lattice points. \item If $x$ and $y$ are connected by an edge of $\tau_h$, then $$\frac{\min\{h(x)/4,t\}}{2}\le d(x,y) \le\sqrt{2}h(x).$$ \item No more than $32$ triangles meet at any vertex. \item $$\sum_{\Delta\in \tau_h} (\perim \Delta)^2 \le 1152 t^2$$ Furthermore, the number of triangles in $\tau_h$ is at most $32 t^2$. 
\end{enumerate} \end{prop} \begin{proof} As in the construction of the Whitney decomposition, we will construct a decomposition of $\ensuremath{D^2}(t)$ into {\em dyadic squares}, that is, squares of the form $$S_{i,j,s}:=[i 2^s,(i+1) 2^s]\times [j 2^s,(j+1) 2^s]$$ for some $i,j,s\in \ensuremath{\mathbb{Z}}$, $s\ge 0$. Let $\ensuremath{\mathcal{D}}_t$ be the set of dyadic squares contained in $\ensuremath{D^2}(t)$. If $S$ is a dyadic square, let $\sigma(S)$ be its side length and let $a(S)$ be the smallest dyadic square that strictly contains it, so that $$a(S_{i,j,s})=S_{\lfloor\frac{i}{2}\rfloor,\lfloor\frac{j}{2}\rfloor,s+1}.$$ If $S$ and $T$ are dyadic squares whose interiors intersect, then either $S\subset T$ and $T=a^k(S)$ for some $k$ or vice versa. Let $$U_0:=\{S \mid S\in \ensuremath{\mathcal{D}}_t \text{ and } h(x)\ge \sigma(S) \text{ for all } x\in S\}$$ and $$U:=\{S \mid S\in U_0 \text{ and } a(S)\not\in U_0\}.$$ We claim that $U$ is a cover of $\ensuremath{D^2}(t)$ by squares which overlap only along their edges. If $x\in \ensuremath{D^2}(t)$, then $x\in S$ for some $S\in \ensuremath{\mathcal{D}}_t$ with $\sigma(S)=1$. Since $h(z)\ge 1$ for all $z\in \ensuremath{D^2}(t)$, we know that $S\in U_0$; we claim that $a^n (S)\in U$ for some $n$. Indeed, if $n$ is the largest integer such that $a^n(S)\in U_0$, then $a^n(S)\in U$. This $c$ exists because when $2^n>t$, then $a^n(S)\not \in U_0$. Thus $U$ is a cover of $\ensuremath{D^2}(t)$. Furthermore, squares in $U$ intersect only along their edges. Let $S,T\in U$ be such that $\inter S\cap \inter T\ne \emptyset$ and $S\ne T$. In this case, we have $S\subsetneq T$ or $T\subsetneq S$; assume that $S\subsetneq T$. Then there is a $n$ such that $a^n(S)=T$. By the definition of $U$, we know that $a(S)\not \in U_0$, but this is impossible, since the fact that $T\in U_0$ implies that any dyadic square contained in $T$, including $a(S)$, is also in $U_0$. Two adjacent squares in $U$ need not intersect along an entire edge, so $U$ is generally not a polyhedron. To fix this, we subdivide the edges of each square so that two distinct polygons in $U$ intersect either in a vertex, in an edge, or not at all; call the resulting polyhedron $U'$. By replacing each $n$-gon in $U'$ with $n-2$ triangles, we obtain a triangulation, which we denote $\tau_h$. We claim that this $\tau_h$ satisfies the required properties. The first property is clear; the vertices of any dyadic square are lattice points by definition. For the second and third property, we will first show that if $x\in D^2(t)$ and $x\in S$ for some dyadic square $S\in U$, then $$\frac{\min\{h(x)/4,t\}}{2}\le \sigma(S) \le h(x).$$ The inequality $\sigma(S)\le h(x)$ follows from the definition of $U$. For the other inequality, let $s$ be such that $$2^s\le \min\{\frac{h(x)}{4},t\} < 2^{s+1},$$ and let $S_0$ be a dyadic square of side length $2^s$ such that $x\subset S_0$. If $y\in S_0$, then $d(x,y)\le 2^{s+1}$, so $h(y)\ge h(x)-2^{s+1}$. Since $h(x)\ge 4\cdot 2^s$, we have $h(y)\ge 2^{s+1}\ge \sigma(S_0)$, so $S_0\in U_0$. Consequently, any square $S$ in $U$ must also contain $S_0$, so $$\sigma(S)\ge 2^s \ge \frac{\min\{h(x)/4,t\}}{2}$$ as desired. This implies property 2, because if $e$ is an edge of $\tau_h$, there is an $S\in U$ such that $e\subset S$ and $\sigma(S)\le \ell_{\text{c}}(e)\le \sqrt{2}\sigma(S)$. Say that $S$ and $T$ are adjacent squares in $U$ with $\sigma(S)\ge \sigma(T)$. 
By the above, we know that if $x\in S\cap T$, then $\sigma(S)\le h(x)$ and $$\frac{\min\{h(x)/4,t\}}{2}\le \sigma(T),$$ so $$\sigma(S)\le 8 \sigma(T).$$ This implies that a polygon in $U'$ has at most $32$ sides and that each vertex in $\tau_H$ has degree at most $128$; this is property 3. Finally, since $U$ is a cover of $\ensuremath{D^2}(t)$ by squares with disjoint interiors, $$\sum_{S\in U} \sigma(S)^2 = t^2.$$ Each square $S$ in $U$ corresponds to at most $c$ triangles in $\tau_h$, each of which has perimeter at most $6\sigma(S)$, so $$\sum_{\Delta\in \tau_h} (\perim \Delta)^2 \le 36\cdot 32 t^2=1152 t^2$$ as desired. Furthermore, $U$ contains at most $t^2$ squares, so $\tau$ contains at most $32 t^2$ triangles. \end{proof} \begin{proof}[Proof of Prop.~\ref{prop:templateExist}] Let $w(i)=w_1\dots w_i$. Let $\alpha:[0,\ell]\to \ensuremath{\mathcal{E}}$ be the curve corresponding to $w,$ parameterized so that $\alpha(i)=[w(i)]_\ensuremath{\mathcal{E}}$. If $c_\Sigma$ is as in Sec.~\ref{sec:geomSym}, then $\alpha$ is $c_\Sigma$-Lipschitz. Let $t=2^k$ be the smallest power of 2 larger than $\ell$, and let $\alpha':[0,t]\to\ensuremath{\mathcal{E}}$ $$\alpha'(x)=\begin{cases}\alpha(x) & \text{if $x\le \ell$} \\ [I]_{\ensuremath{\mathcal{E}}} & \text{otherwise}. \end{cases}$$ Since $\ensuremath{\mathcal{E}}$ is non-positively curved, we can use geodesics to fill $\alpha'$. If $x,y\in \ensuremath{\mathcal{E}}$, let $\gamma_{x,y}:[0,1]\to \ensuremath{\mathcal{E}}$ be a geodesic parameterized so that $\gamma_{x,y}(0)=x$, $\gamma_{x,y}(1)=y$, and $\gamma_{x,y}$ has constant speed. We can define a homotopy $f:[0,t] \times [0,t]\to \ensuremath{\mathcal{E}}$ by $$f(x,y)=\gamma_{\alpha'(x),\alpha'(0)}(y/t);$$ this sends three sides of $\ensuremath{D^2}(t)$ to $[I]_\ensuremath{\mathcal{E}}$ and is a filling of $\alpha$. Since $\ensuremath{\mathcal{E}}$ is non-positively curved, this map is $2c_\Sigma$-Lipschitz and has area $O(\ell^2)$. Let $c'$ be the constant in Cor.~\ref{cor:parabolicNbhds}. Let $r_0:\ensuremath{\mathcal{E}}\to \ensuremath{\mathbb{R}}$ be $$r_0(x)=\frac{d_{\ensuremath{\mathcal{M}}}([x]_\ensuremath{\mathcal{M}},[I]_{\ensuremath{\mathcal{M}}})}{2 p^2}- c',$$ and let $r:D^2(t)\to \ensuremath{\mathbb{R}}$ be $$r(x)=\max\{1,\frac{r_0(x)}{4c_\Sigma}\},$$ This function is 1-Lipschitz. Let $\tau_r$ be the triangulation constructed in Prop.~\ref{prop:adaptive}. Let $\tau$ be $\tau_r$, with orientations on edges chosen arbitrarily. If $v$ is an interior vertex of $\tau$, label it $\rho(f(v))$. If $(i,0)$ is a boundary vertex on the side of $\ensuremath{D^2}(t)$ corresponding to $\alpha'$ and $i\le \ell$, label it by $w(i)$. Otherwise, label it $I$. If $v$ is a vertex, let $g_v\in \Gamma$ be its labeling and note that $f(v)\in g_v\ensuremath{\mathcal{S}}$. If $x$ is a lattice point on the boundary of $\ensuremath{D^2}(t)$, then $f(x)=[I]_\ensuremath{\mathcal{M}}$ and so $r(x)=1$. In particular, each lattice point on the boundary of $\ensuremath{D^2}(t)$ is a vertex of $\tau_0$, so the boundary of $\tau_0$ is a $4t$-gon with vertices labeled $I,w(1),\dots,w(n-1),I\dots,I$. We identify vertices labeled $I$ and remove self-edges to get a template $\tau$ for $w$. 
If $x_1$ and $x_2$ are two adjacent vertices labeled $g_1$ and $g_2$, then Prop.~\ref{prop:adaptive} implies that $d(x_1,x_2)\le 2 r(x_1)$ and $$d_{\ensuremath{\mathcal{E}}}(f(x_1),f(x_2))\le 4c_\Sigma r(x_1) = \max\{4 c_\Sigma, r_0(f(x_1))\}.$$ If \begin{equation}\label{eq:triCase1} 4 c_\Sigma \le r_0(f(x_1)), \end{equation} then $$d_{\ensuremath{\mathcal{E}}}(f(x_1),f(x_2))\le r_0(f(x_1)),$$ so by Cor.~\ref{cor:parabolicNbhds}, $g_1x^{-1}g_2\in U(j,p-j)$ for some $j$. Since $j$ depends only on $x_1$, if $x_1,x_2$, and $x_3$ form a triangle, $x_3$ is labeled by $g_3$, and \eqref{eq:triCase1} holds, then $g_1x^{-1}g_3\in U(j,p-j)$ as well, and thus $g_2x^{-1}g_3\in U(j,p-j)$. Otherwise, \begin{equation}\label{eq:triCase2} r_0(f(x_1)) \le 4 c_\Sigma, \end{equation} so $d_{\ensuremath{\mathcal{E}}}(f(x_1),f(x_2))\le 4 c_\Sigma$ and $$d_{\ensuremath{\mathcal{M}}}([f(x_1)]_\ensuremath{\mathcal{M}},[I]_{\ensuremath{\mathcal{M}}})\le 2p^2(c'+4 c_\Sigma).$$ In this case, $f(x_1)$ and $f(x_2)$ are both in the thick part of $\ensuremath{\mathcal{E}}$, and since $$d_{\ensuremath{\mathcal{E}}}(f(x_1),f(x_2))<1,$$ there is a $c''$ independent of $x_1$ and $x_2$ such that $d_{\Gamma}(g_1,g_2)\le c''$. As before, if $x_1$, $x_2$, and $x_3$ form a triangle and \eqref{eq:triCase2} holds, then $d_{\Gamma}(g_1,g_3)\le c''$, so $d_{\Gamma}(g_2,g_3)\le 2c''$. This proves part \ref{prop:templateExist:para} of \ref{prop:templateExist}. To prove part \ref{prop:templateExist:area}, we will show that the distance between pairs of labels is essentially the same as the distance between the corresponding vertices in $\ensuremath{D^2}(t)$. This follows from Thm.~\ref{thm:LMR}. If $(v_1,v_2)$ is an edge of $\tau$ and $v_i$ is labeled by $g_i$, we know that $$d_{\ensuremath{\mathcal{M}}}([f(v_i)]_\ensuremath{\mathcal{M}},[I]_{\ensuremath{\mathcal{M}}})=O(d_{D^2}(v_1,v_2)).$$ By Thm.~\ref{thm:SiegConj}, $$d_\ensuremath{\mathcal{E}}([g_i]_\ensuremath{\mathcal{E}},f(v_i))= d_\ensuremath{\mathcal{M}}([I]_\ensuremath{\mathcal{M}},[f(v_i)]_\ensuremath{\mathcal{M}})+O(1)=O(d_{D^2}(v_1,v_2)),$$ so \begin{align*} d_\ensuremath{\mathcal{E}}([g_1]_\ensuremath{\mathcal{E}},[g_2]_\ensuremath{\mathcal{E}})& \le d_\ensuremath{\mathcal{E}}([g_1]_\ensuremath{\mathcal{E}},f(v_1)) + d_\ensuremath{\mathcal{E}}(f(v_1),f(v_2))+ d_\ensuremath{\mathcal{E}}(f(v_2),[g_2]_\ensuremath{\mathcal{E}})\\ &= O(d_{D^2}(v_1,v_2)). \end{align*} By Thm.~\ref{thm:LMR}, $$d_{\Gamma}(g_1,g_2)=O(d_{D^2}(v_1,v_2))$$ as well. Part \ref{prop:templateExist:para} of Prop.~\ref{prop:templateExist} follows from this bound and the bounds in Prop.~\ref{prop:adaptive}. This proves Prop.~\ref{prop:templateExist}. \end{proof} Note that it is not necessary that $p\ge 5$ for this template to exist. In fact, a suitable generalization of the proposition should hold for any lattice in a semisimple Lie group. In the next section, we will fill this template. We will construct a normal form $\omega$, and show that triangles and bigons whose edges are words in this normal form can be filled efficiently. Indeed, we will show that these triangles can be filled by discs with quadratically large area; by Lemma~\ref{prop:templateExist:area}, this will give a quadratic filling of $w$. \section{The combinatorial step: filling the template} In the previous section, we constructed a template for a filling of a word $w$ which represents the identity in $SL(p;\ensuremath{\mathbb{Z}})$. In this section, we will use this to prove Theorem~\ref{thm:mainThm}. 
The first thing we need is a normal form $\omega$ for $SL(p;\ensuremath{\mathbb{Z}})$; we construct this in Section~\ref{sec:combing}. The template from the previous section then allows us to reduce the problem of filling $w$ with a disc of quadratic area to the problem of filling $\omega$-triangles with vertices in a parabolic subgroup by discs of quadratic area. We construct these discs inductively, by reducing the problem of filling such an $\omega$-triangle to the problem of finding relative fillings for words in $SL(q;\ensuremath{\mathbb{Z}})$, $q<p$ (that is, fillings of words in $SL(q;\ensuremath{\mathbb{Z}})$ by discs in $SL(p;\ensuremath{\mathbb{Z}})$). Let $\Sigma_q:=\Sigma\cap SL(q;\ensuremath{\mathbb{Z}})$ and $\Sigma_{S}:=\Sigma \cap SL(S;\ensuremath{\mathbb{Z}})$; likewise, let $\widehat{\Sigma}_q:=\widehat{\Sigma}\cap SL(q;\ensuremath{\mathbb{Z}})$ and $\widehat{\Sigma}_{S}:=\widehat{\Sigma} \cap SL(S;\ensuremath{\mathbb{Z}})$. In Sec.~\ref{sec:combing}, we construct the normal form $\omega$. In Sec.~\ref{sec:steinberg}, we construct fillings of analogues of the Steinberg relations (see Sec.~\ref{sec:origSteinberg}); these are our basic tools for manipulating words in $\widehat{\Sigma}$. In Sec.~\ref{sec:paraReduce}, we use these tools to reduce the problem of filling $\omega$-triangles with vertices in a parabolic subgroup of $U(s_1,\dots,s_k)\subset SL(q;\ensuremath{\mathbb{Z}})$, $q\le p$ to the problem of filling words in $\Sigma_{s_i}$ and words in $\widehat{\Sigma}_2$, except for the case that the $\omega$-triangle has vertices in $U(p-1,1)$. In Sec.~\ref{sec:paraException}, we consider the case of $U(p-1,1)$. Together, Secs.~\ref{sec:adaptive}, \ref{sec:paraReduce}, and \ref{sec:paraException} reduce the problem of filling words in $\Sigma_q$ when $q\le p$ to the problem of filling words in $\Sigma_{q_i}$ where $3\le q_1,\dots, q_k <q$ and words in $\widehat{\Sigma}_2$. In Sec.~\ref{sec:shortTemplates}, we find quadratic fillings of words in $\widehat{\Sigma}_{2}$. Finally, in Sec.~\ref{sec:fullProof}, we bring all of these tools together to prove Thm.~\ref{thm:mainThm}. Throughout this section, $p$ will be an integer which is at least $5$. \subsection{Constructing a normal form}\label{sec:combing} We first construct a normal form $\omega:\Gamma \to \Sigma^*$ for $\Gamma$. Let $g\in \Gamma$ and let $P=U(S_1,\dots,S_k)\in \ensuremath{\mathcal{P}}$ be the unique minimal $P\in \ensuremath{\mathcal{P}}$ containing $g$. Then $g$ is a block-upper-triangular matrix which can be written as a product \begin{equation}\label{eq:blockDecomp} g=\begin{pmatrix} m_1 & V_{12} & \dots & V_{1k} \\ 0 & m_2 & \dots & V_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & m_k \end{pmatrix} d, \end{equation} where the $i$th block of the matrix corresponds to $S_i$. Here, $V_{i,j}\in \ensuremath{\mathbb{Z}}^{S_i}\otimes \ensuremath{\mathbb{Z}}^{S_j}$, and $d\in D$ is a diagonal matrix chosen so that $\det m_i=1$. If $P=\Gamma$, then there is only one block, and we take $m_1=g$, and $d=I$. If $V\in \ensuremath{\mathbb{Z}}^{S_i}\otimes \ensuremath{\mathbb{Z}}^{S_j}$, let $u_{ij}(V)=u_{S_i,S_j}(V)$. We can write $g$ as a product: $$\gamma_i:=\biggl(\prod_{j=1}^{i-1} u_{ji}(V_{ji})\biggr) m_i$$ $$g=\gamma_k\dots \gamma_1 d.$$ We will construct $\omega(g)$ by replacing the terms in this product decomposition with words. 
When $\#S_k\ge 3$, we can use Thm.~\ref{thm:LMR} to replace the $m_k$ by words in $\Sigma\cap SL(S_k;\ensuremath{\mathbb{Z}})$, but the theorem does not apply when $\#S_k=2$ because $SL(2;\ensuremath{\mathbb{Z}})$ is exponentially distorted inside $SL(p;\ensuremath{\mathbb{Z}})$. We thus use a variant of the Lubotzky-Mozes-Raghunathan theorem to write $m_k$ as a word in $\widehat{\Sigma}\cap SL(S_k;\ensuremath{\mathbb{Z}})$. \begin{prop}[{\cite{LMRComptes}}]\label{prop:shortLMR} There is a constant $c$ such that for all $g\in SL(k;\ensuremath{\mathbb{Z}})$, there is a word $w\in \widehat{\Sigma}^*$ which represents $g$ and has length $$\widehat{\ell}(w)\le c\log \|g\|_2.$$ \end{prop} For $i=1,\dots,k$, let $\widehat{m_k}\in \widehat{\Sigma}^*$ be a word representing $m_k$ as in Prop.~\ref{prop:shortLMR}. For $1\le i< j\le k$ and $V\in \ensuremath{\mathbb{Z}}^{S_i}\otimes \ensuremath{\mathbb{Z}}^{S_j}$, let $$\widehat{v}(V;S_i,S_j):=\prod_{a\in S_i, b\in S_j}\sgen{ab}{x_{ab}},$$ where $x_{ab}$ is the $(a,b)$-coefficient of $V$. Let $$\widehat{\gamma}_i(g):=\biggl(\prod_{j=i+1}^{k} \widehat{v}(V_{ij};S_i,S_j)\biggr) \widehat{m_j}$$ $$\omega_0(g)=\widehat{\gamma}_k(g)\dots \widehat{\gamma}_1(g)d.$$ This is a word in $\widehat{\Sigma}$ which represents $g$, and we let $\omega(g)=\lambda(\omega_0(g))$, where $\lambda$ is the map defined in Sec.~\ref{sec:geomSym} which replaces each letter $e_{ab}(x)$ with the word $\widehat{e}_{ab}(x)$. It is straightforward to show that there is a constant $c_\omega$ independent of $g$ such that $\ell_{\text{w}}(\omega(g))\le c_\omega d_\Gamma(I,g)$. \subsection{The shortened Steinberg relations}\label{sec:steinberg} In this section, we will develop methods for filling simple words in $\widehat{\Sigma}$, based on the Steinberg relations. The key to the methods in this section is the group $$H_{S,T}=(\ensuremath{\mathbb{R}}^{s-1}\times\ensuremath{\mathbb{R}}^{t-1})\ltimes (\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T)$$ from Section~\ref{sec:geomSym}, which we used to construct $\widehat{e}_{ij}$. This group has the key properties that it is contained in the thick part of $G$, and when either $S$ or $T$ is large enough, then $H_{S,T}$ has quadratic Dehn function. The quadratic Dehn function is a special case of a theorem of de Cornulier and Tessera \cite{dCpersonal}: \begin{thm}\label{thm:HDehn} If $s=\#S\ge 3$ or $t=\#T\ge 3$, then $H_{S,T}$ has a quadratic Dehn function. \end{thm} \begin{proof} The groups $H_{S,T}$ and $H_{T,S}$ are isomorphic, so we may assume that $s\ge 3$. By a change of basis, we may assume that $\ensuremath{\mathbb{R}}^{s-1}\subset SL(S)$ and $\ensuremath{\mathbb{R}}^{t-1}\subset SL(T)$ are the subgroups of diagonal matrices with positive coefficients. To prove that $H_{S,T}$ has a quadratic Dehn function, we use its semidirect product structure. Each element of $H_{S,T}$ can be written uniquely as a product $d v$ of a $d\in \ensuremath{\mathbb{R}}^{s-1}\times \ensuremath{\mathbb{R}}^{t-1}$ and $v\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, so if $\omega_1$ and $\omega_2$ are normal forms for $\ensuremath{\mathbb{R}}^{s-1}\times \ensuremath{\mathbb{R}}^{t-1}$ and $\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, we get a normal form for $H$ by letting $\omega_H(dv)=\omega_1(d)\omega_2(v)$. By Prop.\ \ref{prop:dyadic}, it suffices to show that curves of the form $z=\omega_H(dv)\omega_H(d'v')\omega_H(dvd'v')^{-1}$ have quadratic fillings. 
These can be rewritten in the form $$\omega_1(d)\omega_2(v)\omega_1(d')\omega_2(v')\omega_2({d'}^{-1}vd'+v')^{-1}\omega_1(dd')^{-1},$$ where multiplication in $\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$ is written additively. This curve forms the boundary of the triangle in Fig.~\ref{fig:HDehnFill}, so we can fill it by filling the curves \begin{equation}\label{eq:HDehn:conjH} \omega_2(v)\omega_1(d')\omega_2({d'}^{-1}vd')^{-1}\omega_1(d')^{-1}, \end{equation} \begin{equation}\label{eq:HDehn:Rtris} \omega_1(d)\omega_1(d')\omega_1(dd')^{-1}, \end{equation} and \begin{equation}\label{eq:HDehn:shortTris} \omega_2({d'}^{-1}vd')\omega_2(v')\omega_2(({d'}^{-1}vd')+v')^{-1} \end{equation} which form the boundaries of the cells of Fig.~\ref{fig:HDehnFill}. \begin{figure} \includegraphics[width=3in]{HDehnFillReplaced} \caption{\label{fig:HDehnFill}A quadratic filling of $z.$} \end{figure} First, we define $\omega_1$ and $\omega_2$. We let $\omega_1(d)$ be the map $t\mapsto d^t$ for $t\in [0,1]$; by abuse of notation, we also call this curve $d$. We will construct $\omega_2(v)$ the same way that we constructed $\widehat{u}(v;S,T)$. Let $D_i(\lambda)\in SL(S)$ be the diagonal matrix such that $D_i(\lambda) z_i = \lambda z_i$ and $D_i(\lambda) z_j = \lambda^{-\frac{1}{s-1}}$. If $v\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, let $v_{ij}$ be the $(i,j)$-component of $v$, and if $x\in \ensuremath{\mathbb{R}}$, let $\overline{x}=\max \{1,|x|\}$. We define $$\omega_2(v):=\prod_{i,j} \gamma_{ij}(v_{ij}),$$ where $$\gamma_{ij}(x)=D_i(\overline{x}) e_{ij}(x/\overline{x}) D_i(\overline{x})^{-1}$$ is a curve representing $e_{ij}(x)$. Now we fill \eqref{eq:HDehn:conjH}. It suffices to consider the case that $v=x z_i\otimes z_j$; the general case is a combination of such cases. Let $\lambda$ be such that ${d'}^{-1}vd'=\lambda v$. We need to fill $$w_0=\gamma_{ij}(x) d' \gamma_{ij}(\lambda x)^{-1}d'^{-1}.$$ We can conjugate $w_0$ by $D_i(\overline{x})$ and collect the diagonal matrices to get $w_1=e_{ij}(x_1) D e_{ij}(x_2)D^{-1},$ where $x_1=x/\overline{x}$, $x_2=\lambda x/\overline{\lambda x}$, and $D=D_i(\overline{x})^{-1} d' D_i(\overline{\lambda x})$. Conjugating by $D_i(\overline{x})$ is just a change of basepoint, so it has no cost, and because $\ensuremath{\mathbb{R}}^{s-1}\times \ensuremath{\mathbb{R}}^{t-1}$ has a quadratic Dehn function, collecting diagonal matrices has cost $O(\ell_{\text{c}}(w_0)^2)$. This has a thin rectangle as a filling: the map $\beta:[0,\ell_{\text{c}}(D)] \times [0,1]\to H_{S,T}$ $$\beta(a,b)= e_{ij}(b x_1) D^a$$ is a distance-decreasing map with boundary $w_1$. This gives a filling of \eqref{eq:HDehn:conjH} with quadratic area, as desired. The curve \eqref{eq:HDehn:Rtris} can be filled quadratically because $\ensuremath{\mathbb{R}}^{s-1}\times \ensuremath{\mathbb{R}}^{t-1}$ has a quadratic Dehn function. The curve \eqref{eq:HDehn:shortTris} is more complicated. Because $\omega_2(v)$ is composed of curves $\gamma_{ij}(x)$, it suffices to find fillings of combinations of the $\gamma_{ij}(x)$. Namely, if $\mathop{\overline{\log}} x = \max\{1,\log x\}$, \begin{equation}\label{eq:HDehn:add} \delta(\gamma_{ij}(x)\gamma_{ij}(y), \gamma_{ij}(x+y))=O((\mathop{\overline{\log}} x+\mathop{\overline{\log}} y)^2) \end{equation} for all $x,y$ and \begin{equation}\label{eq:HDehn:commute} \delta([\gamma_{ij}(x),\gamma_{kl}(y)])=O((\mathop{\overline{\log}} x+\mathop{\overline{\log}} y)^2) \end{equation} when $i\ne k$ or $j\ne l$. 
Recall that the group $\Sol_{2s-1}=\ensuremath{\mathbb{R}}^{s-1}\ltimes \ensuremath{\mathbb{R}}^s$ has a quadratic Dehn function when $s\ge 3$ \cite[5.A.9]{GroAII}. $H_{S,T}$ contains several copies of $\Sol_{2s-1}$; if $j_1,\dots,j_t\in \{1,\dots, t\}$, then the subgroup of $H_{S,T}$ generated over $\ensuremath{\mathbb{R}}$ by $D_1,\dots, D_s$, $e_{1j_1},\dots, e_{sj_s}$ is isomorphic to $\Sol_{2s-1}$. In particular, curves of the form \eqref{eq:HDehn:add} are curves in a copy of $\Sol_{2s-1}$ and when $i\ne k$, curves of the form \eqref{eq:HDehn:commute} are curves in a copy of $\Sol_{2s-1}$. Such curves have quadratic fillings. It remains to consider the case that $i=k$ and show that $$\delta([\gamma_{ij}(x),\gamma_{ik}(y)])=O((\overline{\log |x|}+\overline{\log |y|})^2).$$ Assume without loss of generality that $x\le y$; if $y\le 1$, then the curve has bounded length, so we may take $y>1$. Since $x\le y$, we have $\lambda_{ij}(x/y)=e_{ij}(x/y)$, and the curve $$\delta(\gamma_{ij}(x), D_i(y) e_{ij}(x/y) D_i(y)^{-1})$$ is a curve of the form \eqref{eq:HDehn:conjH}. The techniques used to fill \eqref{eq:HDehn:conjH} show that it has a filling of area $O((\overline{\log |x|}+\overline{\log |y|})^2)$. thus since $$\gamma_{ik}(y)= D_i(y) e_{ik}(1) D_i(y)^{-1},$$ we have $$\delta(\gamma_{ij}(x)\gamma_{ik}(y), D_i(y) u(x/y z_i\otimes z_j + z_i\otimes z_k) D_i(y)^{-1})=O((\overline{\log |x|}+\overline{\log |y|})^2)$$ as well. Similarly, $$\delta(\gamma_{ij}(y)\gamma_{ik}(x), D_i(y) u( z_i\otimes z_k + x/y z_i\otimes z_j) D_i(y)^{-1})=O((\overline{\log |x|}+\overline{\log |y|})^2),$$ so $$\delta([\gamma_{ij}(x),\gamma_{ik}(y)])=O((\overline{\log |x|}+\overline{\log |y|})^2),$$ as desired. \end{proof} Recall that we defined the words $\widehat{e}_{ij}(x)$ as approximations of curves $\widehat{u}(v;S,T)$ in the $H_{S,T}$. We can use the fact that $H_{S,T}$ has quadratic Dehn function to manipulate these curves. In the next lemma, we find fillings for words representing conjugations of $\widehat{u}(V;S,T)$. Let $\Sigma_S:=\Sigma\cap SL(S;\ensuremath{\mathbb{Z}})$ and $\Sigma_T:=\Sigma\cap SL(T;\ensuremath{\mathbb{Z}})$. These are generating sets for $SL(S;\ensuremath{\mathbb{Z}})$ and $SL(T;\ensuremath{\mathbb{Z}})$. \begin{lem} \label{lem:xiConj} Let $0<\epsilon<1/2$ be sufficiently small that $H_{S,T}\subset G(\epsilon)$. If $\#S\ge 3$ and $\#T\ge 2$ or vice versa, $\gamma$ is a word in $(\Sigma_S\cup \Sigma_T)^*$ representing $(M,N)\in SL(S;\ensuremath{\mathbb{Z}}) \times SL(T;\ensuremath{\mathbb{Z}})$, and $V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T$, then $$\delta_{\ensuremath{\mathcal{E}}(\epsilon)}(\gamma \widehat{u}(V;S,T) \gamma^{-1},\widehat{u}(MVN^{-1};S,T))= O((\ell_{\text{w}}(\gamma)+\log (\|V\|_2+2))^2).$$ \end{lem} \begin{proof} In this proof, we will write $u(V;S,T)$ and $\widehat{u}(V;S,T)$ as $u(V)$ and $\widehat{u}(V)$, leaving $S$ and $T$ implicit. Let $$\omega:=\gamma \widehat{u}(V)\gamma^{-1} \widehat{u}(MVN^{-1})^{-1};$$ this is a closed curve in $G(\epsilon)$ of length $O((\ell_{\text{w}}(\gamma)+\log (\|V\|_2+2))^2).$ We first consider the case that $V=x v_i\otimes w_j$ and $\gamma\in \Sigma_T^*$. 
In this case, $M=I$; and $\gamma \widehat{u}(V)\gamma^{-1}$ and $\widehat{u}(VN^{-1})$ are both words in the alphabet $$\Sigma_F:=\{A_i^x\mid x\in \ensuremath{\mathbb{R}}\}\cup \{u(W)\mid W\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T\}\cup \Sigma_T.$$ Furthermore, $\Sigma_F$ generates the group \begin{align*} F &:=\left\{\begin{pmatrix}\prod_i A_i^{x_i} & W \\ 0 & D \end{pmatrix} \middle|\; x_i\in \ensuremath{\mathbb{R}}, D\in SL(T;\ensuremath{\mathbb{Z}}), W\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T\right\} \\ &=(\ensuremath{\mathbb{R}}^{s-1}\times SL(T;\ensuremath{\mathbb{Z}}) )\ltimes (\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T), \end{align*} and $F\subset G(\epsilon)$, so words in $\Sigma_F^*$ correspond to curves in $\ensuremath{\mathcal{E}}(\epsilon)$. Words in $\Sigma_F^*$ satisfy certain relations which correspond to discs in $\ensuremath{\mathcal{E}}(\epsilon)$. In particular, note that if $\sigma\in \Sigma_T$, $|x|\le 1$, and $\|W\|_2\le 1$, then \begin{equation}\label{eq:commute}[\sigma, A_k^x]\end{equation} and \begin{equation}\label{eq:conj}\sigma u(W)\sigma^{-1}u(W\sigma^{-1})^{-1} \end{equation} are both closed curves of bounded length and thus have bounded filling areas. We can think of them as ``relations'' in $F$. Let $\lambda_i>1$ be such that $\lambda_i v_i=A_i v_i$ for $i=1,\dots,s$. Let $C=\log_{\min_k\{\lambda_k\}} (p+1)$, and let $z=C\ell_{\text{w}}(\gamma)+\max\{1,\log_{\lambda_i} |x|\}$. This choice of $z$ ensures that $$\|V N\|_2\le \lambda_i^{z} .$$ Indeed, it ensures that if $d_{SL(T;\ensuremath{\mathbb{Z}})}(I,N')\le \ell_{\text{w}}(\gamma)$, then $$\|V N'\|_2\le \lambda_i^{z}.$$ Furthermore, $z=O(\ell_{\text{c}}(\omega))$. We will construct a homotopy which lies in $\ensuremath{\mathcal{E}}(\epsilon)$ and goes through the stages \begin{align*} \omega_1&=\gamma \widehat{u}(V)\gamma^{-1} \\ \omega_2&=\gamma A_i^{z} u(\lambda_i^{-z} V) A_i^{-z} \gamma^{-1} \\ \omega_3&= A_i^{z} \gamma u(\lambda_i^{-z} V) \gamma^{-1} A_i^{-z} \\ \omega_4&= A_i^{z} u(\lambda_i^{-z} V N^{-1}) A_i^{-z} \\ \omega_5&= \widehat{u}(VN^{-1}). \end{align*} Each stage is a word in $\Sigma_F^*$ and so corresponds to a curve in $\ensuremath{\mathcal{E}}(\epsilon)$. We can construct a homotopy between $\omega_1$ and $\omega_2$ and between $\omega_4$ and $\omega_5$ using Thm.~\ref{thm:HDehn}. We need to construct homotopies between $\omega_2$ and $\omega_3$ and between $\omega_3$ and $\omega_4$. We can transform $\omega_2$ to $\omega_3$ by applying \eqref{eq:commute} at most $O(\ell_{\text{c}}(\omega)^2)$ times. This corresponds to a homotopy with area $O(\ell_{\text{c}}(\omega)^2)$. Similarly, we can transform $\omega_3$ to $\omega_4$ by applying \eqref{eq:conj} at most $\ell_{\text{w}}(\gamma))$ times, corresponding to a homotopy of area $O(\ell_{\text{w}}(\gamma))$. Combining all of these homotopies, we find that $$\delta_{\ensuremath{\mathcal{E}}(\epsilon)}(\gamma \widehat{u}(V)\gamma^{-1},\widehat{u}(VN^{-1}))\le O(\ell_{\text{c}}(\omega)^2).$$ as desired. We can generalize to the case $V=\sum_{i,j}x_{ij} v_i\otimes w_j$ and $\gamma\in \Sigma_T^*$. By applying the case to each term of $\widehat{u}(V)$, we obtain a homotopy of area $O(\ell_{\text{c}}(\omega)^2)$ from $\gamma \widehat{u}(V)\gamma^{-1}$ to $$\prod_{i,j}\widehat{u}(x_{ij} v_i\otimes w_j N^{-1}).$$ This is a curve in $H_{S,T}$ of length $O(\ell_{\text{c}}(\omega))$ which connects $I$ and $u(VN^{-1})$. 
By Thm.~\ref{thm:HDehn}, there is a homotopy between this curve and $\widehat{u}(VN^{-1})$ of area $O(\ell_{\text{c}}(\omega)^2)$. When $\gamma\in \Sigma_S^*$, we instead let $F$ be the group \begin{align*} F&:=\left\{\begin{pmatrix}D & W \\ 0 & \prod_i B_i^{x_i} \end{pmatrix} \middle|\; x_i\in \ensuremath{\mathbb{R}}, D\in SL(S;\ensuremath{\mathbb{Z}}), W\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T\right\} \\ &=(SL(S;\ensuremath{\mathbb{Z}})\times \ensuremath{\mathbb{R}}^{t-1} )\ltimes (\ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T). \end{align*} Here, $\widehat{u}(V)$ is not a word in $F$, but since $\#T\ge 2$, we can replace the $A_i$ with the $B_i$ in the construction of $\widehat{u}(V)$. This results in shortcuts $\widehat{u}'(V)$ in the alphabet $$\{B_i^x\mid x\in \ensuremath{\mathbb{R}}\}\cup\{u(V)\mid V\in \ensuremath{\mathbb{R}}^S\otimes \ensuremath{\mathbb{R}}^T\}.$$ These are curves in $H_{S,T}$ which represent $u(V)$ and have length $O(\log\|V\|_2)$, so by Thm.~\ref{thm:HDehn}, there is a homotopy of area $O((\log\|V\|_2)^2)$ between $\widehat{u}'(V)$ and $\widehat{u}(V)$. Modifying the argument appropriately, we can show that when $\gamma\in \Sigma_S^*$, $$\delta_{\ensuremath{\mathcal{E}}(\epsilon)}(\gamma \widehat{u}'(V)\gamma^{-1},\widehat{u}'(MV))=O(\ell_{\text{c}}(\omega)^2).$$ Replacing $\widehat{u}'(V)$ with $\widehat{u}(V)$ and $\widehat{u}'(MV)$ with $\widehat{u}(MV)$ adds area $O(\ell_{\text{c}}(\omega)^2)$, so $$\delta_{\ensuremath{\mathcal{E}}(\epsilon)}(\gamma \widehat{u}(V)\gamma^{-1},\widehat{u}(MV))=O(\ell_{\text{c}}(\omega)^2).$$ If $\gamma\in (\Sigma_S\cup \Sigma_T)^*$, and $\gamma_S\in \Sigma_S^*$ and $\gamma_T\in \Sigma_T^*$ are the words obtained by deleting all the letters in $\Sigma_T$ and $\Sigma_S$ respectively, then $\delta_\Gamma(\gamma,\gamma_S\gamma_T)=O(\ell_{\text{c}}(\omega)^2)$. We can construct a homotopy from $\gamma \widehat{u}(V)\gamma^{-1}$ to $\widehat{u}(MVN^{-1}))$ which goes through the steps \begin{align*} \gamma \widehat{u}(V)\gamma^{-1} & \to \gamma_S \gamma_T \widehat{u}(V)\gamma_T^{-1} \gamma_S^{-1}\\ &\to \gamma_S \widehat{u}(VN^{-1}) \gamma_S^{-1}\\ &\to \widehat{u}(MVN^{-1}). \end{align*} This homotopy has area $O(\ell_{\text{c}}(\omega)^2)$. \end{proof} When we constructed $\widehat{e}_{ij}(x)$, we made a choice of $d$ for each pair $(i,j)$. The next lemma shows that this choice doesn't matter very much; different choices of $d$ lead to curves which can be connected by a quadratic homotopy. \begin{lem}\label{lem:shortEquiv} If $i\in S,S'$ and $j\in T,T'$, where $2\le \#S,\#S'\le p-2$, then $$\delta_{\Gamma}(\widehat{e}_{ij;S,T}(x), \widehat{e}_{ij;S',T'}(x))=O((\log |x|)^2).$$ In particular, $$\delta_{\Gamma}(\widehat{e}_{ij}(x), \widehat{e}_{ij;S,T}(x))=O((\log |x|)^2).$$ \end{lem} \begin{proof} Let $V=x z_i\otimes z_j$. We first consider two special cases: the case that $S=S'$, and the case that $S\subset S'$, $\#S'\ge 3$, $T\subset T'$, and $\#T'\ge 2$. Case 1: $S=S'$. In this case, $\widehat{u}(V;{S,T})$ and $\widehat{u}(V;{S',T'})$ are both curves in $H_{S,S^c}$ for $S^c$ the complement of $S$. Since $p\ge 5$, this has quadratic Dehn function, so the lemma follows. In particular, $$\delta_{\Gamma}(\widehat{e}_{ij;S,T}(x), \widehat{e}_{ij;S,\{j\}}(x))=O((\log |x|)^2).$$ Case 2: $S\subset S'$, $\#S'\ge 3$, $T\subset T'$, and $\#T'\ge 2$. 
Let $\{A_i\}$ be as in the definition of $H_{S,T}$, with eigenvectors $v_i$ and let $\{A'_i\}\in SL(S';\ensuremath{\mathbb{Z}})$ be the set of independent commuting matrices used in defining $H_{S',T'}$. Recall that $\widehat{u}(V;{S,T})$ is the concatenation of curves $\gamma_i$ of the form $$\gamma_i=A_i^{c_i} u(x_i v_i\otimes z_j) A_i^{-c_i}$$ where $c_i\in \ensuremath{\mathbb{R}}$ and $|x_i|\le 1$. Let $\alpha_i$ be a word in $\Sigma$ representing $A_i$. There is a homotopy between $\gamma_i$ and $$\gamma'_i = \alpha_i^{\floor{c_i}} u(\lambda_i^{c_i-\floor{c_i}} x_i v_i\otimes z_j) \alpha_i^{-\floor{c_i}}$$ of area $O(\log |x|)$. Since $\alpha_i$ is a word in $\Sigma_{S'}$, Lemma~\ref{lem:xiConj} implies that there is a homotopy of area $O((\log |x|)^2)$ between $\gamma'_i$ and $$\gamma''_i=\widehat{u}(\lambda_i^{c_i} x_i v_i\otimes z_j;{S',T'}).$$ Each of the $\gamma''_i$ lie in $H_{S',T'}$, and the product of the elements they represent is $e_{ij}(x)$. Since $\widehat{u}(V;{S',T'})$ also lies in $H_{S',T'}$ and $H_{S',T'}$ has quadratic Dehn function, this implies that $$\delta_{\Gamma}(\widehat{e}_{S,T}(V),\widehat{e}_{S',T'}(V))=O((\log |x|)^2),$$ as desired. Combining these two cases proves the lemma. First, we construct a homotopy between $\widehat{e}_{S,T}(V)$ and a word of the form $\widehat{e}_{\{i,d\},\{j\}}(V)$. Let $d\in S$ be such that $d\ne i$. We can construct a homotopy going through the stages $$\widehat{e}_{S,T}(V)\to \widehat{e}_{S,S^c}(V)\to \widehat{e}_{\{i,d\},\{j\}}(V).$$ The first step uses case 1; the second step uses case 2, since $j\in S^c$. We can use the same procedure to construct a homotopy between $\widehat{e}_{S',T'}(V)$ and a word of the form $\widehat{e}_{\{i,d'\},\{j\}}(V)$. If $d=d'$, we're done. Otherwise, we can use case 2 to construct homotopies between each word and $\widehat{e}_{\{i,d,d'\},\{i,d,d'\}^c}(V)$. \end{proof} One important use of this lemma is that if $\#S\ge 3$ and $i,j,d\in S$ are distinct, then the lemma lets us replace $\widehat{e}_{ij}(x)$ by $\widehat{e}_{ij;\{i,d\},\{j\}}(x)$ for a cost of $O((\log |x|)^2)$. This is a word in $\Sigma_S$. More generally, \begin{cor}\label{cor:shortUEquiv} If $i\in S,S'$ and $j\in T,T'$, where $2\le \#S,\#S'\le p-2$, and $V\in \ensuremath{\mathbb{R}}^{S\cap S'} \otimes \ensuremath{\mathbb{R}}^{T\cap T'}$, then $$\delta_{\ensuremath{\mathcal{E}}(1/2)}(\widehat{u}(V;{S,T}), \widehat{u}(V;{S',T'}))=O((\log \|V\|)^2).$$ \end{cor} \begin{proof} Note that $\widehat{u}(V;{S,T})=\widehat{u}(V;{S,S^c})$, so we may assume that $T=S^c$ and $T'=S'^c$, so $H_{S,T}$ and $H_{S',T'}$ have quadratic Dehn functions. Let $V=\sum_{i,j} x_{ij} z_i\otimes z_j$. Note that $\prod_{i,j} \widehat{u}(x_{ij} z_i\otimes z_j; S,T)$ is a curve in $H_{S,T}$ which represents $u(V;S,T)$. There is a homotopy in $\ensuremath{\mathcal{E}}$ going through the stages: \begin{align*} \omega_1 =& \widehat{u}(V;{S,T}) \\ \omega_2 =& \prod_{i,j} \widehat{u}(x_{ij} z_i\otimes z_j; S,T)\\ \omega_3 =& \prod_{i,j} \widehat{u}(x_{ij} z_i\otimes z_j; S',T')\\ \omega_4 =& \widehat{u}(V;{S',T'}). \end{align*} Here, $\omega_1$ and $\omega_2$ are curves in $H_{S,T}$, so there is a quadratic-area homotopy from one to the other. Likewise, $\omega_3$ and $\omega_4$ are both curves in $H_{S',T'}$. We can use Lemma~\ref{lem:shortEquiv} to construct the homotopy from $\omega_2$ to $\omega_3$. These homotopies lie in the thick part of $\ensuremath{\mathcal{E}}$ and have total area $O((\log |V|)^2)$. 
\end{proof} Using these lemmas, we can give fillings for a wide variety of curves, including shortened versions of the Steinberg relations. \begin{lem}\label{lem:infPres} If $x,y\in \ensuremath{\mathbb{Z}}\setminus \{0\}$, then \begin{enumerate} \item \label{lem:infPres:add} If $1\le i,j\le p$ and $i\ne j$, then $$\delta_{\Gamma}(\widehat{e}_{ij}(x)\widehat{e}_{ij}(y),\widehat{e}_{ij}(x+y))=O((\log |x|+\log |y|)^2).$$ In particular, $$\delta_{\Gamma}(\widehat{e}_{ij}(x)\widehat{e}_{ij}(-x))=\delta_{\Gamma}(\widehat{e}_{ij}(x)^{-1},\widehat{e}_{ij}(-x))=O((\log |x|)^2).$$ \item \label{lem:infPres:multiply} If $1\le i,j,k\le p$ and $i\ne j\ne k$, then $$\delta_{\Gamma}([\widehat{e}_{ij}(x),\widehat{e}_{jk}(y)],\widehat{e}_{ik}(xy))= O((\log |x|+\log |y|)^2).$$ \item \label{lem:infPres:commute} If $1\le i,j,k,l\le p$, $i\ne l$, and $j\ne k$ $$\delta_{\Gamma}([\widehat{e}_{ij}(x),\widehat{e}_{kl}(y)])=O((\log |x|+\log |y|)^2).$$ \item \label{lem:infPres:swap} Let $1\le i,j,k,l\le p$, $i\ne j$, and $k\ne l$, and $$s_{ij}=e_{ji}^{-1}e_{ij}e_{ji}^{-1},$$ so that $s_{ij}$ represents $$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\inSL(\{i,j\};\ensuremath{\mathbb{Z}}).$$ Then $$\delta_{\Gamma}(s_{ij} \widehat{e}_{kl}(x) s^{-1}_{ij},\widehat{e}_{\sigma(k)\sigma(l)}(\tau(k,l)x))=O( (\log |x|+\log |y|)^2),$$ where $\sigma$ is the permutation switching $i$ and $j$, and $\tau(k,l)=-1$ if $k=i$ or $l=i$ and $1$ otherwise. \item \label{lem:infPres:diag} If $b=\diagmat(b_1,\dots,b_p)$, then $$\delta_{\Gamma}(b \widehat{e}_{ij}(x) b^{-1},\widehat{e}_{ij}(b_i b_j x))=O( \log |x|^2).$$ \end{enumerate} \end{lem} \begin{proof} For part \ref{lem:infPres:add}, let $S=\{i,d\}$ and $T=S^c$. We can use Lemma~\ref{lem:shortEquiv} to replace $$\widehat{e}_{ij}(x)\widehat{e}_{ij}(y)\widehat{e}_{ij}(x+y)^{-1}$$ by $$\widehat{e}_{ij;S,T}(x)\widehat{e}_{ij;S,T}(y)\widehat{e}_{ij;S,T}(x+y)^{-1}$$ This is an approximation of a closed curve in $H_{S,S^c}$ of length $O(\log|x|+\log|y|)$, which can be filled using Thm.~\ref{thm:HDehn}. For part \ref{lem:infPres:multiply}, let $d\not \in \{i,j,k\}$ and let $S=\{i,j,d\}$, so that $\widehat{e}_{ij;\{i,d\},\{j\}}(x)$ is a word in $SL(S;\ensuremath{\mathbb{Z}})$. We construct a homotopy going through the stages \begin{align*} \omega_0&= [\widehat{e}_{ij}(x),\widehat{e}_{jk}(y)]\widehat{e}_{ik}(xy)^{-1} \\ \omega_1&= [\widehat{e}_{ij;\{i,d\},\{j\}}(x),\widehat{u}(y z_{j}\otimes z_{k};S,S^c)]\widehat{e}_{ik;S,S^c}(xy)^{-1}\\ \omega_2&= \widehat{u}((xy z_i+y z_{j})\otimes z_{k};S,S^c)\widehat{u}(y z_{j}\otimes z_{k};S,S^c)^{-1}\widehat{u}(xy z_{i}\otimes z_k;S,S^c)^{-1}. \end{align*} Here, we use Lemma~\ref{lem:shortEquiv} to construct a homotopy between $\omega_0$ and $\omega_1$. The homotopy between $\omega_1$ and $\omega_2$ is an application of Lemma~\ref{lem:xiConj} with $\gamma=\widehat{e}_{ij;\{i,d\},\{j\}}(x)$ and $V=y z_{j}\otimes z_{k}$. Finally, $\omega_2$ is a curve in $H_{S,S^c}$ with length $O(\log |x|+\log |y|)$, and thus has filling area $O((\log |x|+\log |y|)^2)$. The total area used is $O((\log |x|+\log |y|)^2)$. For part \ref{lem:infPres:commute}, we let $S=\{i,k,d\}$, $T=\{j,l\}$, and use the same techniques to construct a homotopy going through the stages \begin{align*} & [\widehat{e}_{ij}(x),\widehat{e}_{kl}(y)])\\ & [\widehat{e}_{ij;S,T}(x),\widehat{e}_{kl;S,T}(y)] & \text{by Lem.~\ref{lem:shortEquiv}}\\ & \varepsilon& \text{by Thm.~\ref{thm:HDehn}}, \end{align*} where $\varepsilon$ represents the empty word. This homotopy has area $O((\log |x|+\log |y|)^2)$. 
Part \ref{lem:infPres:swap} breaks into several cases depending on $k$ and $l$. When $i,j,k,$ and $l$ are distinct, the result follows from part \ref{lem:infPres:commute}, since $s_{ij}=e_{ji}^{-1}e_{ij}e_{ji}^{-1}$, and we can use part \ref{lem:infPres:commute} to commute each letter past $\widehat{e}_{kl}(x)$.

If $k=i$ and $l\ne j$, let $d,d'\not\in \{i,j,l\}$, $d\ne d'$, and let $S=\{i,j,d\}$ and $T=\{l,d'\}$. There is a homotopy from
$$s_{ij} \widehat{e}_{il}(x) s^{-1}_{ij}\widehat{e}_{jl}(-x)^{-1}$$
to
$$s_{ij} \widehat{u}(x z_i\otimes z_l;{S,T}) s^{-1}_{ij}\,\widehat{u}(x z_j\otimes z_l;{S,T})$$
of area $O((\log |x|)^2)$, and since $s_{ij}\in \Sigma_S^*$, the claim follows by an application of Lemma~\ref{lem:xiConj}. A similar argument applies to the cases $k=j$ and $l\ne i$; $k\ne i$ and $l= j$; and $k\ne j$ and $l= i$.

If $i=k$ and $j=l$, let $d\not \in \{i,j\}$. There is a homotopy going through the stages
\begin{align*}
& s_{ij} \widehat{e}_{ij}(x) s^{-1}_{ij} & \\
& s_{ij} [e_{id},\widehat{e}_{dj}(x)] s^{-1}_{ij}& \text{ by part (\ref{lem:infPres:multiply})}\\
& [s_{ij}e_{id}s^{-1}_{ij},s_{ij}\widehat{e}_{dj}(x)s^{-1}_{ij}]& \text{ by free insertion}\\
& [e_{jd}^{-1},\widehat{e}_{di}(x)] & \text{ by the previous cases}\\
& \widehat{e}_{ji}(-x) & \text{ by part (\ref{lem:infPres:multiply})}
\end{align*}
and this homotopy has area $O((\log |x|)^2)$. One can treat the case that $i=l$ and $j=k$ the same way.

Since any diagonal matrix in $\Gamma$ is the product of at most $p$ elements $s_{ij}$, part \ref{lem:infPres:diag} follows from part \ref{lem:infPres:swap}.
\end{proof}

\subsection{Reducing to smaller groups}\label{sec:paraReduce}
In this section, we will apply the constructions in Lemma~\ref{lem:infPres} to reduce the problem of filling an $\omega$-triangle with vertices in $P=U(S_1,\dots, S_k)$ to the problem of filling loops in $SL(S_i;\ensuremath{\mathbb{Z}})$. The main caveat is that the methods of this section fail when $P=U(p-1,1)$; this case will be left to Section~\ref{sec:paraException}.

Let $P=U(S_1,\dots, S_k;\ensuremath{\mathbb{Z}})$. Because the $\omega$-triangles were constructed as products of the $\omega_0(g)$, which are words in $\widehat{\Sigma}$, we will work primarily with words in $\widehat{\Sigma}$. If $w$ is a word in $\widehat{\Sigma}$, define $\widehat{\delta}(w):=\delta_\Gamma(\lambda(w))$, where $\lambda$ is the map defined in Sec.~\ref{sec:geomSym} which replaces each letter $e_{ab}(x)$ with the word $\widehat{e}_{ab}(x)$.

Let $P^+\subset P$ be the finite-index subgroup consisting of matrices in $P$ whose diagonal blocks all have determinant 1, and let $\widehat{\Sigma}_{P^+}:= \widehat{\Sigma}\cap P^+$. If $w$ is a word in $\widehat{\Sigma}\cap P$ which represents the identity, we can modify it to get a word in $\widehat{\Sigma}_{P^+}$. First, we can use Lemma~\ref{lem:infPres} to gather together all of the diagonal matrices at cost $O(\widehat{\ell}(w)^2)$; this results in a word $d w'$, where $d$ is the product of all the diagonal matrices in $w$ and $w'$ contains no diagonal matrices. Since $w'$ represents an element of $P^+$ and $dw'$ represents the identity, $d\in P^+$ as well. We decompose $d$ as a product $d_1\dots d_k$ such that $d_i\in SL(S_i;\ensuremath{\mathbb{Z}})$. Let $f(w)=d_1\dots d_k w'$; this is a word in $\widehat{\Sigma}_{P^+}$, and $\widehat{\ell}(f(w))\le \widehat{\ell}(w)+p$.
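The gathering step used to define $f(w)$ deserves a word of explanation; it is a repeated application of part \ref{lem:infPres:diag} of Lemma~\ref{lem:infPres}. In sketch form: to move a diagonal letter $d$ one step to the left past a letter $\widehat{e}_{ab}(x)$, one uses
$$\widehat{e}_{ab}(x)\,d \;\simeq\; d\,\widehat{e}_{ab}(d_a d_b x),$$
up to a homotopy of area $O((\log |x|)^2)$, where $d_a,d_b=\pm1$ are the relevant diagonal entries of $d$. Since $|d_a d_b x|=|x|$, the lengths of the letters do not grow, and the total cost of gathering all diagonal letters is $O(\widehat{\ell}(w)^2)$.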
Let $p_i:P^+\to SL(S_i;\ensuremath{\mathbb{Z}})$ be the map projecting a block-upper-triangular matrix to its diagonal block corresponding to $S_i$; this map extends to a map on $\widehat{\Sigma}_{P^+}^*$ which we also denote $p_i$. This map sends each letter of $f(w)$ either to itself or to the identity, so $p_i(f(w))$, which we abbreviate $p_i(w)$, is the word in $\widehat{\Sigma}_{S_i}:=\widehat{\Sigma}\cap SL(S_i;\ensuremath{\mathbb{Z}})$ obtained by deleting all the letters of $f(w)$ except those in $\widehat{\Sigma}_{S_i}$; since $w$ represented the identity, $p_i(w)$ also represents the identity. The main goal of this section is to prove:
\begin{prop}\label{prop:paraReduce}
Let $P=U(S_1,\dots,S_k)$, where $\#S_i\le p-2$ for all $i$. If $g_1,g_2,g_3\in P$, where $g_1g_2g_3=1$, and
$$w=\omega_0(g_1^{e_1})^{e_1}\omega_0(g_2^{e_2})^{e_2}\omega_0(g_3^{e_3})^{e_3},$$
where $e_i=\pm 1$, then
$$\widehat{\delta}(w,p_1(w)\dots p_k(w))=O(\widehat{\ell}(w)^2).$$
\end{prop}
In particular, this proposition implies that
$$\widehat{\delta}(w)\le \sum_{i}\widehat{\delta}(p_i(w))+O(\widehat{\ell}(w)^2).$$

Recall that if $g\in \Gamma$ and $Q=U(T_1,\dots,T_r)$ is the minimal element of $\ensuremath{\mathcal{P}}$ containing $g$, then
$$\omega_0(g)=\widehat{\gamma}_r(g)\dots \widehat{\gamma}_1(g)d,$$
where $d\in D$,
$$\widehat{v}(V;T_i,T_j):=\prod_{a\in T_i, b\in T_j}\sgen{ab}{x_{ab}},$$
where $x_{ab}$ denotes the $(a,b)$ coefficient of $V$,
$$\widehat{\gamma}_i(g):=\biggl(\prod_{j=i+1}^{r} \widehat{v}(V_{ij};T_i,T_j)\biggr) \widehat{m_i},$$
and $\widehat{m_i}$ is a word in $\widehat{\Sigma}_{T_i}^*$. If $g\in P$, then $Q\subset P$, and for each $1\le i\le r$, there is an $i'$ such that $T_i\subset S_{i'}$. As a consequence, $\widehat{m_i}$ is a word in $\widehat{\Sigma}_{S_{i'}}$, and $\widehat{v}(V_{ij};T_i,T_j)$ is either a word in $\widehat{\Sigma}_{S_{i'}}$ or can be written as $\widehat{v}(V_{ij};S_{i'},S_{j'})$. We can thus write $f(w)$ as a product of at most $3p^2+9p$ subwords so that each subword is either a word in $\widehat{\Sigma}_{S_{i}}$ for some $i$ or a word of the form $\widehat{v}(V;S_{i},S_{j})$ for some $i,j$ and $V\in \ensuremath{\mathbb{R}}^{S_{i}}\otimes \ensuremath{\mathbb{R}}^{S_{j}}$; henceforth, we will write $\widehat{v}_{ij}(V):=\widehat{v}(V;S_{i},S_{j})$.

We will prove Prop.~\ref{prop:paraReduce} by giving methods for collecting the subwords of the first type. This involves three main techniques: conjugating $\widehat{v}_{ij}(V)$ by a word in $\widehat{\Sigma}_{S_{i}}$ (Lemma~\ref{lem:shortXiConj1}), commuting words in $\widehat{\Sigma}_{S_{i}}$ with words in $\widehat{\Sigma}_{S_{j}}$ (Lemma~\ref{lem:shortCommute}), and reducing products of the $\widehat{v}_{ij}(V)$ (Lemma~\ref{lem:shortNP}). First, we prove a lemma relating $\widehat{v}_{ij}(V)$ and $\widehat{u}(V)$.
\begin{lem}\label{lem:shortVequiv}
Let $S_1,\dots,S_k$ be as in Prop.~\ref{prop:paraReduce}, let $1\le i<j\le k$, and let $V\in \ensuremath{\mathbb{Z}}^{S_i}\otimes \ensuremath{\mathbb{Z}}^{S_j}$. Let $S\supset S_i$ and $T\supset S_j$ be disjoint sets such that $\#S\ge 2$ and $\#T\ge 3$ or vice versa. Then
$$\widehat{\delta}(\widehat{v}_{ij}(V), \widehat{u}(V;S,T))=O((\log \|V\|_2)^2).$$
\end{lem}
\begin{proof}
Let $x_{ab}$ be the $(a,b)$ coefficient of $V$.
There is a homotopy from $\widehat{v}_{ij}(V)$ to $\widehat{u}(V;S,T)$ going through the stages
\begin{align*}
& \prod_{a\in S_i, b\in S_j}\widehat{e}_{ab}(x_{ab}) & \\
& \prod_{a\in S_i, b\in S_j}\widehat{e}_{ab;S,T}(x_{ab}) & \text{by Lemma~\ref{lem:shortEquiv}} \\
& \prod_{a\in S_i, b\in S_j}\widehat{u}(x_{ab} z_a \otimes z_b;S,T) & \text{by the definition of $\widehat{e}$}\\
& \widehat{u}(V;S,T) & \text{by Thm.~\ref{thm:HDehn}}
\end{align*}
This homotopy has area $O((\log \|V\|_2)^2)$.
\end{proof}

Using this, we can prove:
\begin{lem}\label{lem:shortXiConj1}
Let $S_1,\dots,S_k$ be as in Prop.~\ref{prop:paraReduce}, let $1\le i<j\le k$, and let $V\in \ensuremath{\mathbb{Z}}^{S_i}\otimes \ensuremath{\mathbb{Z}}^{S_j}$. Let $w\in \widehat{\Sigma}_{S_i}^*$ be a word representing $M$. Then
$$\widehat{\delta}(w \widehat{v}_{ij}(V) w^{-1}, \widehat{v}_{ij}(M V))\le O((\widehat{\ell}(w)+\log \|V\|_2)^2).$$
Similarly, if instead $w \in \widehat{\Sigma}_{S_j}^*$ is a word representing $N$, then
$$\widehat{\delta}(w \widehat{v}_{ij}(V) w^{-1}, \widehat{v}_{ij}(VN^{-1}))\le O((\widehat{\ell}(w)+\log \|V\|_2)^2).$$
\end{lem}
\begin{proof}
We consider the case that $w\in \widehat{\Sigma}_{S_i}^*$; the other case follows by similar methods. It suffices to prove that
$$\widehat{\delta}(w \sgen{ab}{t}w^{-1}, \widehat{v}_{ij}(t M z_a\otimes z_b))=O((\widehat{\ell}(w)+\log |t|)^2)$$
and apply this inequality to each term of $\widehat{v}_{ij}(V)$ individually. Let $d\not\in S_i\cup\{b\}$, let $T'=\{b,d\}$, and let $S'=(T')^c$; note that $S_i\subset S'$ and $\#S'\ge 3$. We can use Lemma~\ref{lem:shortEquiv} to replace $\lambda(w)$ by a word $w'$ in $\Sigma_{S'}$ of length $O(\widehat{\ell}(w))$. We construct a homotopy from $w' \widehat{e}_{ab}(t)(w')^{-1}$ to $\widehat{v}_{ij}(t M z_a\otimes z_b)$ as follows:
\begin{align*}
&w' \widehat{e}_{ab}(t)(w')^{-1}\\
& w' \widehat{u}(t z_a\otimes z_b; S', T')(w')^{-1} & \text{by Lemma~\ref{lem:shortEquiv}} \\
& \widehat{u}(t M z_a\otimes z_b; S', T') & \text{by Lemma~\ref{lem:xiConj}} \\
& \widehat{u}(t M z_a\otimes z_b; S_i, S_j) & \text{by Cor.~\ref{cor:shortUEquiv}} \\
& \widehat{v}_{ij}(t M z_a\otimes z_b) & \text{by Lemma~\ref{lem:shortVequiv}}
\end{align*}
Applying this homotopy to each term in $\widehat{v}_{ij}(V)$ results in a product of terms $\widehat{v}_{ij}(V_a)$ such that $\sum V_a=MV$. We can use Lemma~\ref{lem:infPres} to reduce this to $\widehat{v}_{ij}(MV)$.
\end{proof}

Next, we fill commutators. These could be filled using Lemma~\ref{lem:infPres}.(\ref{lem:infPres:commute}), but a naive application of that lemma would give a cubic filling rather than a quadratic one.
\begin{lemma}\label{lem:shortCommute}
Let $S, T\subset \{1,\dots, p\}$ be disjoint subsets such that $\#S,\#T\le p-2$. Let $w_S$ be a word in $\widehat{\Sigma}_{S}$ and let $w_T$ be a word in $\widehat{\Sigma}_T$. Then
$$\widehat{\delta}([w_S, w_T])=O((\widehat{\ell}(w_S)+\widehat{\ell}(w_T))^2).$$
\end{lemma}
\begin{proof}
If $\#S\ge 3$ and $\#T\ge 3$, we can use Lemma~\ref{lem:shortEquiv} to replace $w_S$ and $w_T$ by words in $\Sigma_S$ and $\Sigma_T$, then commute the resulting words letter by letter. If $\#S=1$ or $\#T=1$, then $w_S$ or $w_T$ is trivial. It remains to study the case that $\#S$ or $\#T$ is 2; we can assume that $T=S^c$. Assume without loss of generality that $S=\{2,\dots, p-1\}$ and $T=\{1,p\}$.
Since $\#S\ge 3$, we can use Lemma~\ref{lem:shortEquiv} to replace $\lambda(w_S)$ by a word $w_S'$ in $\Sigma_S$ at cost $O(\widehat{\ell}(w_S)^2)$. We will first show that for any word $w$ in $\Sigma_S$,
$$\delta_\Gamma([w, \widehat{e}_{1p}(t)])=O((\ell_{\text{w}}(w)+\log |t|)^2).$$
Let $M\in SL(S;\ensuremath{\mathbb{Z}})$ be the matrix represented by $w$. We will construct a homotopy from $w\widehat{e}_{1p}(t)w^{-1}$ to $\widehat{e}_{1p}(t)$ through the curves
\begin{align*}
& w[e_{12}(1),\widehat{e}_{2p}(t)]{w}^{-1} & \text{by Lemmas~\ref{lem:shortEquiv} and \ref{lem:infPres}} \\
& [we_{12}(1){w}^{-1},w \widehat{e}_{2p}(t){w}^{-1}] & \text{by free insertion}\\
& [\mu_{\{1\},S}(z_1\otimes z_2 M^{-1}),\mu_{S,\{p\}}(t M z_2\otimes z_p)] & \text{by Lemma~\ref{lem:xiConj}}\\
& \widehat{e}_{1p}(t) & \text{by Lemma~\ref{lem:infPres}}
\end{align*}
The total cost of these steps is at most $O((\widehat{\ell}(w)+\log |t|)^2)$. The last step needs some explanation. Let $z_2 M^{-1}=(a_2,\dots,a_{p-1})$ and $t M z_2=(b_2,\dots,b_{p-1})$, so that we are transforming $[u,v]$ to $\widehat{e}_{1p}(t)$, where
$$u=\mu_{\{1\},S}(z_1\otimes z_2 M^{-1})=\prod_{i=2}^{p-1} \widehat{e}_{1i}(a_i)$$
and
$$v=\mu_{S,\{p\}}(t M z_2\otimes z_p)=\prod_{i=2}^{p-1} \widehat{e}_{ip}(b_i).$$
We will move terms of $u$ past $v$ one by one. Each term $\widehat{e}_{1i}(a_i)$ of $u$ commutes with every term of $u$ and $v$ except for $\widehat{e}_{ip}(b_i)$ and its inverse, so moving it past $v$ only generates a single $\widehat{e}_{1p}(a_ib_i)$. This commutes with every term of $u$ and $v$, so we can move it to the left side of the word. We can now cancel $\widehat{e}_{1i}(a_i)$ with its inverse. Repeating this process deletes all of the original terms of $u$ and $u^{-1}$, allowing us to cancel $v$ and $v^{-1}$. This leaves a product
$$\prod_{i=2}^{p-1} \widehat{e}_{1p}(a_i b_i),$$
but since $z_2 M^{-1}\cdot t M z_2= z_2 \cdot t z_2=t$, where $\cdot$ represents the dot product, we can use Lemma~\ref{lem:infPres} to convert this to $\widehat{e}_{1p}(t)$. All of the coefficients in this process are bounded by $\|M\|_2^2 |t|$, and $\|M\|_2$ is exponential in $\ell_{\text{w}}(w)$, so this process has cost $O((\ell_{\text{w}}(w)+\log |t|)^2)$.

When $\ell_{\text{w}}(w)$ is large, we can get a stronger bound by breaking $w$ into segments. Let
$$n=\ceil{\frac{\ell_{\text{w}}(w)}{\log|t|+1}}$$
and let $w=w_1\dots w_n$, where the $w_i$ are words in $\Sigma_S$ of length at most $\log|t|+1$. Then
\begin{align*}
\delta_\Gamma([w, \widehat{e}_{1p}(t)]) & \le \sum_{i=1}^n \delta_\Gamma([w_i, \widehat{e}_{1p}(t)]) \\
& \le n\, O((\log |t| +1)^2) \\
& \le O((\log |t| +1)^2+ \ell_{\text{w}}(w)(\log |t|+1)).
\end{align*}
The same methods show that
$$\delta_\Gamma([w, \widehat{e}_{p1}(t)])\le O((\log |t| +1)^2+ \ell_{\text{w}}(w)(\log |t|+1)).$$
Applying this to each term of $w_T=g_1\dots g_k$, where $g_i\in \widehat{\Sigma}_T$, we find
\begin{align*}
\widehat{\delta}([w_S, w_T])&\le O(\widehat{\ell}(w_S)^2) + \widehat{\delta}([w_S', w_T])\\
&\le O(\widehat{\ell}(w_S)^2)+ \sum_{i=1}^k \widehat{\delta}([w_S', g_i])\\
&\le O(\widehat{\ell}(w_S)^2)+ \sum_{i=1}^k O(\widehat{\ell}(g_i)^2+ \ell_{\text{w}}(w_S')\widehat{\ell}(g_i))\\
&\le O(\widehat{\ell}(w_S)^2)+O(k)+ O(\widehat{\ell}(w_T)^2)+ O(\ell_{\text{w}}(w_{S}')\widehat{\ell}(w_T))\\
&= O((\widehat{\ell}(w_S)+\widehat{\ell}(w_T))^2).
\end{align*}
\end{proof}
Finally, we construct fillings of products of upper-triangular elements of $\widehat{\Sigma}$.
\begin{lemma} \label{lem:shortNP}
Let $w=w_1\dots w_n$ be a word in $\widehat{\Sigma}$ representing $I$, where $w_i=\sgen{a_i,b_i}{t_i}$ for some $1\le a_i<b_i\le p$, and let $h=\max_i\{\log |t_i|,1\}$. Then $\widehat{\delta}(w)=O(n^3h^2)$.
\end{lemma}
\begin{proof}
Let $N=N(\ensuremath{\mathbb{Z}})$ be the subgroup of upper-triangular integer matrices with $1$'s on the diagonal. Our proof is based on the seashell template in Figure~\ref{fig:seashell}; we describe a normal form for points in $N$ and then describe how to fill triangles with two sides in normal form and one short side. If
$$m=\begin{pmatrix} 1 & m_{1,2} & \dots & m_{1,p} \\ 0 & 1 & \dots & m_{2,p} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix},$$
where $m_{ij}\in \ensuremath{\mathbb{Z}}$, then
$$\omega_0(m)=x_{p-1}(m)\dots x_1(m),$$
where
$$x_k(m)=\sgen{k,k+1}{m_{k,k+1}}\dots \sgen{k,p}{m_{k,p}}.$$
This is a normal form for elements of $N$; note that it contains at most $p^2$ letters.

Let $n_i=w_1\dots w_i$. We will construct a filling of $w$ by filling the wedges
$$\lambda(\omega_0(n_{i-1}) w_{i} \omega_0(n_{i})^{-1});$$
we can consider these fillings as homotopies from $\lambda(\omega_0(n_{i-1})w_{i})$ to $\lambda(\omega_0(n_{i}))$. We fill $\lambda(\omega_0(n_{i-1}) w_{i} \omega_0(n_{i})^{-1})$ by moving $w_{i}$ leftward past
$$\lambda(x_{a_i-1}(n_{i-1})\dots x_{1}(n_{i-1})).$$
Each letter in $x_{a_i-1}(n_{i-1})\dots x_{1}(n_{i-1})$ is of the form $\widehat{e}_{ab}(x)$ with $a<a_i$, so we can move $w_i$ leftward by repeatedly replacing subwords of the form $\widehat{e}_{ab}(x)w_i$ with $w_i\widehat{e}_{ab}(x)$ if $b\ne a_i$ (using Lemma~\ref{lem:infPres}) and with $w_i\widehat{e}_{ab_i}(x t_i)\widehat{e}_{ab}(x)$ if $b= a_i$. Each of these steps has cost $O((\log |x|+\log |t_i|)^2)$. Since
$$\log|x|\le \log \|n_{i-1}\|_2=O(hn),$$
this is $O(h^2n^2)$. We repeat this process until we have moved $w_i$ past
$$\lambda(x_{a_i-1}(n_{i-1})\dots x_{1}(n_{i-1})),$$
which takes at most $p^2$ steps and has total cost $O(h^2n^2)$. Call the resulting word $w'$.

During this process, we have inserted at most $p^2$ additional letters, and we can partition $w'$ into a word of the form $w'=w'_{p-1}\dots w'_1$, where $w'_k$ is a word in the alphabet $\{\sgen{k i}{t}\}_{t\in \ensuremath{\mathbb{Z}}, i>k}$ representing the same element of $\Gamma$ as $x_k(n_i)$. The coefficients in these words are bounded by $|t_i|\,\|n_{i-1}\|_2$. The letters in each subword commute, so we can apply Lemma~\ref{lem:infPres} $(p+p^2)^2$ times to put the letters in each subword in order, then $p^2$ times to collect like terms. This reduces $w'$ to $\lambda(\omega_0(n_i))$. Each of these steps has cost $O(h^2 n^2)$, so
$$\widehat{\delta}(\omega_0(n_{i-1}) w_{i} \omega_0(n_{i})^{-1})=O(h^2 n^2).$$
To fill $w$, we need to fill $n$ such wedges, so $\widehat{\delta}(w)=O(h^2n^3)$.
\end{proof}

We can use these lemmas to prove Prop.~\ref{prop:paraReduce}.
\begin{proof}[{Proof of Prop.~\ref{prop:paraReduce}}]
Recall that $w$ is a product of at most $3p^2+9p$ subwords of two types: words in $\widehat{\Sigma}_{S_{i}}$ or words of the form $\widehat{v}_{ij}(V)$. Since $\#S_i\le p-2$ for all $i$, we can use Lemmas~\ref{lem:shortXiConj1} and \ref{lem:shortCommute} to gather all of the terms in $\widehat{\Sigma}_{S_1}^*$ on the left side of the word. This takes no more than $(3p^2+9p)^2$ applications of the lemmas. We can do the same for each of the remaining $\widehat{\Sigma}_{S_i}^*$.
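A single gathering step has the following schematic form (this is only a sketch of the bookkeeping; which lemma applies depends on the type of the subword being passed). If $u$ is a subword in $\widehat{\Sigma}_{S_1}^*$ representing $M$ and $x$ is a subword lying to its left, we replace $x\,u$ by $u\,(u^{-1}xu)$, where
$$u^{-1}\,\widehat{v}_{1j}(V)\,u\simeq \widehat{v}_{1j}(M^{-1}V)$$
by Lemma~\ref{lem:shortXiConj1} (and similarly for $\widehat{v}_{i1}(V)$, with the action on the other side), while subwords not involving the indices in $S_1$ commute with $u$ at quadratic cost by Lemma~\ref{lem:shortCommute}. Each step preserves the types of the subwords, so the process terminates after boundedly many rounds.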
This process has cost $O(\widehat{\ell}(w)^2)$ and results in a word $p_1(w)\dots p_k(w)w'$ of length $O(\widehat{\ell}(w))$, where $w'$ is a word of length at most $3p^2$ in the alphabet $\{\sgen{ab}{t}\}_{a<b,t\in \ensuremath{\mathbb{Z}}}$. Since each of the $p_i(w)$ represents the identity, so does $w'$. There is a $c$ such that the coefficients in the letters of $w'$ are bounded by $|t|\le e^{c \widehat{\ell}(w)}$, so Lemma~\ref{lem:shortNP} provides a filling of $w'$ with area $O(\widehat{\ell}(w)^2)$; this proves the proposition.
\end{proof}

\subsection{Words in $SL(p-1;\ensuremath{\mathbb{Z}})\ltimes \ensuremath{\mathbb{Z}}^{p-1}$}\label{sec:paraException}
The techniques of the previous section fail when $P=U(p-1,1;\ensuremath{\mathbb{Z}})$, but the approach described in Section~\ref{sec:adaptive} allows us to decompose words in $\Sigma_P$ into $\omega$-triangles with vertices in smaller parabolic subgroups. In this section, we will construct a space on which $P$ acts non-cocompactly and which has a quadratic Dehn function, then use fillings of words in this space to construct templates for those words.

Since $P$ is a finite-index extension of $SL(p-1;\ensuremath{\mathbb{Z}})\ltimes \ensuremath{\mathbb{Z}}^{p-1}$, any word in $\Sigma_P$ which represents the identity can be reduced to a word in
$$\Sigma_P':=\bigl(SL(p-1;\ensuremath{\mathbb{Z}})\ltimes \ensuremath{\mathbb{Z}}^{p-1}\bigr)\cap \Sigma_P$$
at cost linear in the length of the word. Dru{\c{t}}u showed that if $p\ge 4$, then the group $H=SL(p-1;\ensuremath{\mathbb{R}})\ltimes \ensuremath{\mathbb{R}}^{p-1}$ has a quadratic Dehn function \cite{DrutuFilling}. Let $\ensuremath{\mathcal{E}}_H=H/SO(p-1)\subset \ensuremath{\mathcal{E}}$; this fibers over $\ensuremath{\mathcal{E}}_{p-1}:=SL(p-1)/SO(p-1)$ with fiber $\ensuremath{\mathbb{R}}^{p-1}$. If $x\in H$, let $[x]_{{\ensuremath{{\mathcal{E}}_H}}}$ be the corresponding point of ${\ensuremath{{\mathcal{E}}_H}}$.

Our first step is to strengthen Dru{\c{t}}u's result by showing that a curve of length $\ell$ in $\ensuremath{\mathcal{E}}_H$ can be filled by a Lipschitz map $f:\ensuremath{D^2}(\ell)\to \ensuremath{\mathcal{E}}_H$ with Lipschitz constant bounded independently of $\ell$. We prove this by using a Lipschitz analogue of Prop.~\ref{prop:dyadic}; we construct a normal form for $\ensuremath{\mathcal{E}}_H$ and then fill triangles in this normal form. First, we construct a family of curves $\widehat{\xi}(v)$, $v\in \ensuremath{\mathbb{R}}^{p-1}$, roughly analogous to the curves $\widehat{u}(V)$ defined in Section~\ref{sec:geomSym}. Any element of $H$ can be written as the product of an element of $SL(p-1;\ensuremath{\mathbb{R}})$ and an element of $\ensuremath{\mathbb{R}}^{p-1}$, so we construct a normal form for $H$ in which each curve is the concatenation of a curve in $SL(p-1;\ensuremath{\mathbb{R}})$ and a curve $\widehat{\xi}(v)$. To fill triangles in this normal form, we need three constructions: conjugating the $\widehat{\xi}(v)$ by curves in $SL(p-1;\ensuremath{\mathbb{R}})$ (Lemma~\ref{lem:lipXiConj}), filling triangles whose sides are curves $\widehat{\xi}(v)$ (Lemma~\ref{lem:LipShortenedTriFill}), and filling curves in $SL(p-1;\ensuremath{\mathbb{R}})$. The following remarks will help us to glue and combine Lipschitz discs.
\begin{rem} \label{rem:polys}
If $S$ is a convex polygon with non-empty interior and diameter at most $1$, there is a map $S\to D^2$ whose Lipschitz constant varies continuously with the vertices of $S$.
\end{rem}

\begin{rem} \label{rem:reparam}
If $\gamma$ is a curve of length $\ell$ which can be filled with a Lipschitz disc, then a reparameterization of $\gamma$ can be filled with a Lipschitz disc with only a small increase in the Lipschitz constant. Let $D^2(t):=[0,t]\times [0,t]$ as before, and let $S^1(t)$ be the boundary of $D^2(t)$, a circle of length $4t$. Let $\gamma:S^1(t)\to X$ be a Lipschitz map of length $\ell$, and let $\gamma':S^1(t)\to X$ be the constant-speed parameterization of $\gamma$. It is straightforward to construct a homotopy $h:S^1(t)\times [0,t]\to X$ between $\gamma$ and $\gamma'$ with Lipschitz constant $O(\Lip \gamma)$, such that $h(\theta,0)=\gamma(\theta)$ for all $\theta\in S^1(t)$.

A filling of $\gamma$ can then be converted into a filling of $\gamma'$ and vice versa. Let $\beta:D^2(t)\to X$ be a filling of $\gamma$, and let $D'=(D^2(t) \cup S^1(t)\times [0,t])/\sim$, where $\sim$ identifies the boundary of $D^2(t)$ with $S^1(t)\times \{0\}$. This space is bilipschitz equivalent to $D^2(t)$. Define a map $\bar{\beta}:D'\to X$ so that $\bar{\beta}(x)=\beta(x)$ for all $x\in D^2(t)$ and $\bar{\beta}(\theta,r)=h(\theta,r)$ for all $\theta\in S^1(t)$, $r\in [0,t]$. This map is well-defined, its boundary is $\gamma'$, and its Lipschitz constant is $O(\Lip(\beta)+\Lip(\gamma'))$. Likewise, if $\beta$ is a filling of $\gamma'$, we can use a similar construction to produce a $\beta'$ such that $\beta'$ fills $\gamma$ and has Lipschitz constant $O(\Lip(\beta)+\Lip(\gamma))$.

One application of this converts homotopies to discs: if $f_1,f_2:[0,\ell]\to X$ are two maps with the same endpoints, and $h:[0,\ell]\times [0,\ell]\to X$ is a Lipschitz homotopy between $f_1$ and $f_2$ with endpoints fixed, then there is a disc filling $f_1f_2^{-1}$ with Lipschitz constant $O(\Lip(h))$.
\end{rem}

\begin{rem} \label{rem:boundedFill}
For every $\ell$, there is a $c(\ell)$ such that any closed curve in $\ensuremath{\mathcal{E}}_H$ of length $\ell$ can be filled by a $c(\ell)$-Lipschitz map $D^2(1)\to \ensuremath{\mathcal{E}}_H$. This follows from compactness and the homogeneity of $\ensuremath{\mathcal{E}}_H$.
\end{rem}

We define a family of curves $\widehat{\xi}(v)$ which connect $I$ to $u(v)$ for $v\in \ensuremath{\mathbb{R}}^{p-1}\subset H$. If $v=0$, let $\widehat{\xi}(v)$ be the constant curve. If $v\in \ensuremath{\mathbb{R}}^{p-1}$, $v\ne 0$, let $\kappa=\max\{ \|v\|_2,1\}$ and let
$$n(v):=\frac{v}{\kappa},$$
so that $\|n(v)\|_2\le 1$. Let $v_1=v/\|v\|_2, v_2, \dots,v_{p-1}\in \ensuremath{\mathbb{R}}^{p-1}$ be an orthonormal basis of $\ensuremath{\mathbb{R}}^{p-1}$. Let $D(v)$ be the matrix taking
\begin{align*}
v_1&\mapsto \kappa v_1\\
v_i&\mapsto \kappa^{-1/(p-2)} v_i \quad (i\ge 2),
\end{align*}
and let $\ensuremath{\mathcal{D}}(v)$ be the curve $t\mapsto D(v)^t$ for $0\le t\le 1$. Then $D(v)u(n(v))D(v)^{-1}=u(v)$, and we can let $\widehat{\xi}(v)$ be $\ensuremath{\mathcal{D}}(v)u(n(v))\ensuremath{\mathcal{D}}(v)^{-1}$. This has length $O(\mathop{\overline{\log}} \|v\|_2)$, where $\mathop{\overline{\log}} x = \max\{1,\log x\}$.

We can prove the following analogue of Lemma~\ref{lem:xiConj} for the $\widehat{\xi}(v)$'s:
\begin{lemma}\label{lem:lipXiConj}
Let $\gamma:[0,1]\to SL(p-1)$ be a curve connecting $I$ and $M$ and let $v\in \ensuremath{\mathbb{R}}^{p-1}$.
Let
$$w=\gamma \widehat{\xi}(v)\gamma^{-1} \widehat{\xi}(Mv)^{-1}.$$
There is a map $f:D^2(\ell_{\text{c}}(w))\to {\ensuremath{{\mathcal{E}}_H}}$ such that $f|_{\partial D^2}$ is $[w]_{\ensuremath{{\mathcal{E}}_H}}$ and $f$ has Lipschitz constant bounded independently of $w$.
\end{lemma}
\begin{proof}
If $v=0$, then $w=\gamma\gamma^{-1}$, so we may assume that $v\ne 0$. If $\ell_{\text{c}}(w)\le 1$, we can use Remark~\ref{rem:boundedFill} to fill $w$, so we also assume that $\ell_{\text{c}}(w)\ge 1$. By the definition of $\widehat{\xi}$,
$$w=\gamma \ensuremath{\mathcal{D}}(v)u(n(v)) \ensuremath{\mathcal{D}}(v)^{-1}\gamma^{-1} \ensuremath{\mathcal{D}}(Mv) u(-n(Mv))\ensuremath{\mathcal{D}}(Mv)^{-1}.$$
Let
$$\gamma'=\ensuremath{\mathcal{D}}(Mv)^{-1} \gamma \ensuremath{\mathcal{D}}(v).$$
Changing the basepoint of $w$, we obtain the curve
$$w_1=\gamma' u(n(v))(\gamma')^{-1}u(-n(Mv)).$$
This can be filled as shown in Figure~\ref{fig:lipXiConj:Rect}, but the horizontal lines in the figure may be exponentially long. The color-coding in the diagrams corresponds to direction; blue lines represent curves in $SL(p-1)$ and red lines represent curves $\widehat{\xi}(v)$. We will use a homotopy in $SL(p-1)$ to replace $w_1$ with a ``thin rectangle'' in which the horizontal lines are short.
\begin{figure}
\psfrag{frag0}[b][B]{$u(\gamma'(t)^{-1} n(Mv))$}
\psfrag{frag1}[r][r]{$\gamma'$}
\psfrag{frag2}[l][l]{$\gamma'$}
\psfrag{frag3}[b][B]{$u(n(v))$}
\psfrag{frag4}[t][t]{$u(n(Mv))$}
\includegraphics[width=3in]{lipXiConjRectReplaced}
\caption{\label{fig:lipXiConj:Rect}An exponential filling of $\gamma' u(n(v))(\gamma')^{-1}u(-n(Mv))$}
\end{figure}

We will construct a map $[0,2\ell_{\text{c}}(w)+1]\times [0,\ell_{\text{c}}(w)]\to {\ensuremath{{\mathcal{E}}_H}}$ whose boundary is a parameterization of $w$. The domain of this map is divided into two $\ell_{\text{c}}(w)\times \ell_{\text{c}}(w)$ squares and an $\ell_{\text{c}}(w)\times 1$ rectangle (Fig.~\ref{fig:lipXiConj}); the squares will correspond to the homotopy in $SL(p-1)$ mentioned above, and bound a central thin rectangle. We will map the boundaries of each of these shapes into $\ensuremath{\mathcal{E}}_H$ by Lipschitz maps and construct Lipschitz discs in $\ensuremath{\mathcal{E}}_H$ with those boundaries. Indeed, each edge of the figure is marked by a curve, and we can map each edge into $\ensuremath{\mathcal{E}}_H$ as the constant-speed parameterization of the corresponding curve. Our bounds on the lengths of these curves will ensure that these maps are Lipschitz.

Let
$$S:=\{m\mid m\in SL(p-1), \|m^{-1} n(Mv)\|_2\le 1\},$$
and let
$$M':=D(Mv)^{-1}MD(v)$$
be the endpoint of $\gamma'$. Since $n(v)=(M')^{-1}n(Mv)$, the endpoints of $\gamma'$ both lie in $S$. We will construct a curve $\sigma$ in $S$ which has the same endpoints as $\gamma'$ and has length $O(\ell_{\text{c}}(w))$.

First, let $K\in SO(p-1)$ be a matrix such that $K^{-1}Mv/\|Mv\|_2=v/\|v\|_2$. There is a curve $\sigma_K$ in $SO(p-1)\subset S$ connecting $I$ to $K$. The vectors $Kn(v)$ and $n(Mv)$ are both multiples of $Mv$, so we can let $\lambda>0$ be such that $Kn(v)=\lambda n(Mv)$. Note that $|\log \lambda|=O(\ell_{\text{c}}(w))$. Let $D\in SL(p-1)$ be the matrix such that $D^{-1}v=\lambda v$ and $D^{-1}x=\lambda^{-1/(p-2)}x$ for all vectors $x$ perpendicular to $v$, so that
$$D^{-1} K^{-1}n(Mv)=n(v).$$
Note that $\log \|D\|_2=O(|\log \lambda|)=O(\ell_{\text{c}}(w))$.
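The displayed identity is a one-line computation, recorded here for convenience: from $Kn(v)=\lambda n(Mv)$ we get $K^{-1}n(Mv)=\lambda^{-1}n(v)$, and since $n(v)$ is a multiple of $v$ and $D^{-1}v=\lambda v$,
$$D^{-1}K^{-1}n(Mv)=\lambda^{-1}\,D^{-1}n(v)=\lambda^{-1}\cdot \lambda\, n(v)=n(v).$$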
Let $\sigma_D$ be the path $t\mapsto D^t$, $t\in [0,1]$. This connects $I$ and $D$, and the concatenation $\sigma_K\sigma_D$ connects $I$ to $KD$ in $S$. We next connect $KD$ and $M'$ inside $S$. Note that if $SL(p-1)_{v}$ is the stabilizer of $v$, then $KD\cdot SL(p-1)_{v}\subset S$, and $(KD)^{-1}M'n(v)=n(v)$, so $(KD)^{-1}M'\in SL(p-1)_{v}$. The group $SL(p-1)_{v}$ is isomorphic to $SL(p-2)\ltimes \ensuremath{\mathbb{R}}^{p-2}$, and since $p\ge 4$, it is quasi-isometrically embedded in $H$, and there is a path $\sigma_0$ in $SL(p-1)_{v}$ connecting $I$ to $(KD)^{-1}M'$ with length $O(\log \|(KD)^{-1}M'\|_2)=O(\ell_{\text{c}}(w))$. The path
$$\sigma=\sigma_K\sigma_D\sigma_0$$
is then contained in $S$ and connects $I$ to $M'$. Adding the lengths of the components, we find $\ell_{\text{c}}(\sigma)=O(\ell_{\text{c}}(w))$.

Since $\ell_{\text{c}}(w)\ge 1$ and the edges marked $\sigma$ in Fig.~\ref{fig:lipXiConj} have length $\ell_{\text{c}}(w)$, the map on those edges is Lipschitz for some constant independent of $w$. The boundaries of the shapes in the figure are thus $[\sigma(\gamma')^{-1}]_{{\ensuremath{{\mathcal{E}}_H}}}$ and
$$w_2=[\sigma u(n(v)) \sigma^{-1} u(-n(Mv))]_{{\ensuremath{{\mathcal{E}}_H}}}.$$
The first curve, $\sigma(\gamma')^{-1}$, is a closed curve in $SL(p-1)$ of length $O(\ell_{\text{c}}(w))$. Since $SL(p-1)/SO(p-1)$ is non-positively curved, this curve has a filling in $\ensuremath{\mathcal{E}}_H$ with area $O(\ell_{\text{c}}(w)^2)$. This can be taken to be a $c$-Lipschitz map from $D^2(\ell_{\text{c}}(w))$, where $c$ depends only on $p$. The second curve is the boundary of a ``thin rectangle''. That is, there is a Lipschitz map
$$\rho:[0,\ell_{\text{c}}(w)] \times [0,1]\to H,$$
$$\rho(x,t)=\sigma(x)u(t \sigma(x)^{-1} n(Mv))=u(t n(Mv))\sigma(x),$$
which sends the four sides of the rectangle to $\sigma$, $u(n(v))$, $\sigma^{-1}$, and $u(-n(Mv))$. Projecting this disc to ${\ensuremath{{\mathcal{E}}_H}}$ gives a Lipschitz filling of $w_2$.

We glue these discs together to get a Lipschitz map from the rectangle to ${\ensuremath{{\mathcal{E}}_H}}$. The boundary of the rectangle is a Lipschitz reparameterization of $[w]_{\ensuremath{{\mathcal{E}}_H}}$, so we can use Rem.~\ref{rem:reparam} to get a filling of $[w]_{\ensuremath{{\mathcal{E}}_H}}$.
\begin{figure}
\psfrag{frag0}[r][r]{$\gamma'$}
\psfrag{frag1}[l][l]{$\gamma'$}
\psfrag{frag2}[b][B]{$u(n(v))$}
\psfrag{frag3}[r][r]{$\sigma$}
\psfrag{frag4}[l][l]{$\sigma$}
\psfrag{frag5}[t][t]{$u(n(Mv))$}
\psfrag{frag6}[b][B]{$\rho$}
\includegraphics[width=4in]{lipXiConjReplaced}
\caption{\label{fig:lipXiConj}A quadratic filling of $\gamma' u(n(v))(\gamma')^{-1}u(-n(Mv))$}
\end{figure}
\end{proof}

This lets us fill many curves in $H$.
\begin{lemma}\label{lem:LipShortenedTriFill}
Let $v_1,v_2\in \ensuremath{\mathbb{R}}^{p-1}$. If $w=\widehat{\xi}(v_1)\widehat{\xi}(v_2)\widehat{\xi}(v_1+v_2)^{-1}$, there is a map $f:D^2(\ell_{\text{c}}(w))\to {\ensuremath{{\mathcal{E}}_H}}$ such that $f|_{\partial D^2}=[w]_{\ensuremath{{\mathcal{E}}_H}}$ and $\Lip(f)$ is bounded independently of $w$.
\end{lemma}
\begin{proof}
As before, we may assume that $\ell_{\text{c}}(w)\ge 3$. Let $S=\langle v_1, v_2\rangle\subset \ensuremath{\mathbb{R}}^{p-1}$ be the subspace spanned by the $v_i$ and let $\lambda=\max\{\|v_1\|_2, \|v_2\|_2,\|v_1+v_2\|_2\}$. Let $D\in SL(p-1)$ be the matrix such that $Ds=\lambda s$ for $s\in S$ and $Dx=\lambda^{-\dim(S)/(p-1-\dim(S))} x$ for vectors $x$ perpendicular to $S$; this is possible because $\dim(S)\le 2$ and $p\ge 4$.
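With this choice of exponent, $D$ does lie in $SL(p-1)$: the $\dim(S)$ directions in $S$ are each scaled by $\lambda$ and the $p-1-\dim(S)$ perpendicular directions by $\lambda^{-\dim(S)/(p-1-\dim(S))}$, so
$$\det D=\lambda^{\dim(S)}\cdot\bigl(\lambda^{-\dim(S)/(p-1-\dim(S))}\bigr)^{p-1-\dim(S)}=\lambda^{\dim(S)}\cdot\lambda^{-\dim(S)}=1.$$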
Let $\gamma_D$ be the curve $t\mapsto D^t$ for $0\le t\le 1$; this has length $O(\log \lambda)=O(\ell_{\text{c}}(w))$. We construct a filling of $[w]_{\ensuremath{{\mathcal{E}}_H}}$ based on a triangle with side length $\ell_{\text{c}}(w)$ as in Figure~\ref{fig:shortTri}. The central triangle in the figure has side length $1$; since $\ell_{\text{c}}(w)\ge 3$, the trapezoids around the outside are bilipschitz equivalent to discs $D^2(\ell_{\text{c}}(w))$ with Lipschitz constant bounded independently of $w$. Let $f$ take each edge to $H$ as labeled, and give each edge a constant-speed parameterization; $f$ is Lipschitz on each edge, with a Lipschitz constant independent of the $v_i$. Let $\bar{f}$ be the projection of $f$ to ${\ensuremath{{\mathcal{E}}_H}}$.

We've defined $\bar{f}$ on the edges in the figure; it remains to extend it to the interior of each cell. The map $\bar{f}$ sends the boundary of the center triangle to a curve of length at most 3, so we can use Rem.~\ref{rem:boundedFill} to extend $\bar{f}$ to its interior. The map $\bar{f}$ sends the boundary of each trapezoid to a curve of the form
\begin{equation}
[\widehat{\xi}(v_i)^{-1}\gamma_D u(\lambda^{-1} v_i)\gamma_D^{-1}]_{\ensuremath{{\mathcal{E}}_H}}.
\end{equation}
Lemma~\ref{lem:lipXiConj} gives Lipschitz discs filling such curves. Each of these fillings has Lipschitz constant bounded independently of $w$, so the resulting map on the triangle also has Lipschitz constant bounded independently of $w$.
\begin{figure}
\psfrag{frag0}[l][l]{$\gamma_D$}
\psfrag{frag1}[l][l]{$\gamma_D$}
\psfrag{frag2}[r][r]{$\gamma_D$}
\psfrag{frag3}[t][t]{$\widehat{\xi}(v_1+v_2)$}
\psfrag{frag4}[r][r]{$\widehat{\xi}(v_1)$}
\psfrag{frag5}[l][l]{$\widehat{\xi}(v_2)$}
\psfrag{frag6}[l][l]{$u(\lambda^{-1}v_2)$}
\psfrag{frag7}[r][r]{$u(\lambda^{-1}v_1)$}
\psfrag{frag8}[t][t]{\small $u(\lambda^{-1}(v_1+v_2))$}
\includegraphics[width=3in]{shortTriReplaced}
\caption{\label{fig:shortTri}A quadratic filling of $\widehat{\xi}(v_1)\widehat{\xi}(v_2)\widehat{\xi}(v_1+v_2)^{-1}$}
\end{figure}
\end{proof}

The group $H$ has a normal form based on the semidirect product structure. Let $g\in H$. Then $g=Mu(v)$ for some $M\in SL(p-1)$ and $v\in \ensuremath{\mathbb{R}}^{p-1}$. Let $\gamma_M$ be a geodesic connecting $I$ to $M$ and define $\omega_H(g)=\gamma_M\widehat{\xi}(v)$. Then $\ell_{\text{c}}(\omega_H(g))=O(\mathop{\overline{\log}}\|g\|_2)$. We can use the previous lemmas to fill $\omega_H$-triangles.
\begin{lemma}\label{lem:LipCombFill}
If $g_1, g_2\in H$ and
$$w=\omega_H(g_1)\omega_H(g_2)\omega_H(g_1g_2)^{-1},$$
then there is a map $f:D^2(\ell_{\text{c}}(w))\to \ensuremath{\mathcal{E}}_H$ such that $f|_{\partial D^2}=[w]_{\ensuremath{\mathcal{E}}_H}$ and $\Lip(f)$ is bounded independently of $g_1$ and $g_2$.
\end{lemma}
\begin{proof}
As before, assume that $\ell_{\text{c}}(w)\ge 1$. Let $g_3=g_1g_2$ and let $M_i\in SL(p-1)$ and $v_i\in \ensuremath{\mathbb{R}}^{p-1}$ be such that $g_i=M_i u(v_i)$ for $i=1,2,3$. We construct a filling of $w$ as in Figure~\ref{fig:LipCombFill}, which depicts an isosceles right triangle with legs of length $2\ell_{\text{c}}(w)$, divided into a square and two congruent triangles; this is bilipschitz equivalent to $D^2(\ell_{\text{c}}(w))$, and we call it $\Delta$. Let $f:\Delta^{(1)}\to H$ be the map from the 1-skeleton of the figure to $H$ defined by sending each edge to its corresponding curve, parameterized with constant speed, and let $\bar{f}=[f]_{\ensuremath{{\mathcal{E}}_H}}$. This is a Lipschitz map on the 1-skeleton of the triangle, with Lipschitz constant independent of $w$.
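The edge labels in Figure~\ref{fig:LipCombFill} come from the semidirect product structure; as a quick check, using $u(v)M=M\,u(M^{-1}v)$ for $M\in SL(p-1)$ and $v\in \ensuremath{\mathbb{R}}^{p-1}$,
$$g_1g_2=M_1u(v_1)\,M_2u(v_2)=M_1M_2\,u(M_2^{-1}v_1+v_2),$$
so that $M_3=M_1M_2$ and $v_3=M_2^{-1}v_1+v_2$; this is why an interior edge of the square is labeled $\widehat{\xi}(M_2^{-1}v_1)$.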
The map $\bar{f}$ sends the boundary of the square in the figure to a curve which can be filled using Lemma~\ref{lem:lipXiConj}, sends the boundary of the blue (upper left) triangle to a curve in $\ensuremath{\mathcal{E}}_{p-1}$ which can be filled using non-positive curvature, and sends the boundary of the red (lower right) triangle to a curve which can be filled using Lemma~\ref{lem:LipShortenedTriFill}. Combining these fillings proves the lemma.
\begin{figure}
\psfrag{frag0}[r][r]{$\gamma_{M_1}$}
\psfrag{frag1}[b][B]{$\gamma_{M_2}$}
\psfrag{frag2}[b][B]{$\gamma_{M_3}$}
\psfrag{frag3}[r][r]{$\widehat{\xi}(v_1)$}
\psfrag{frag4}[b][B]{$\widehat{\xi}(v_3)$}
\psfrag{frag5}[r][r]{$\widehat{\xi}(M_2^{-1}v_1)$}
\psfrag{frag6}[b][B]{$\gamma_{M_2}$}
\psfrag{frag7}[b][B]{$\widehat{\xi}(v_2)$}
\includegraphics[width=3in]{LipCombFillReplaced}
\caption{\label{fig:LipCombFill}A quadratic filling of $\omega_H(g_1)\omega_H(g_2)\omega_H(g_3)^{-1}$}
\end{figure}
\end{proof}

We can construct Lipschitz fillings of arbitrary curves from these triangles (cf.\ \cite{GroCC} and Prop.~\ref{prop:dyadic}).
\begin{prop} \label{prop:LipPara}
If $w$ is a closed curve in ${\ensuremath{{\mathcal{E}}_H}}$, there is a filling $f:D^2(\ell_{\text{c}}(w))\to {\ensuremath{{\mathcal{E}}_H}}$ of $w$ such that $\Lip(f)$ is bounded independently of $w$.
\end{prop}
\begin{proof}
\begin{figure}
\psfrag{frag0}[t][t]{$w$}
\psfrag{frag1}[r][r]{$w(0)$}
\psfrag{frag2}[l][l]{$w(t)$}
\psfrag{frag3}[l][l]{$w(t/2)$}
\psfrag{frag4}[l][l]{$w(3t/2)$}
\psfrag{frag5}[l][l]{$w(2t)=w(0)$}
\psfrag{frag6}[b][B]{$[\omega_H(\tilde{w}(0)^{-1}\tilde{w}(t))]_{\ensuremath{{\mathcal{E}}_H}}$}
\psfrag{frag7}[b][B]{$[\omega_H(\tilde{w}(t)^{-1}\tilde{w}(2t))]_{\ensuremath{{\mathcal{E}}_H}}$}
\psfrag{frag8}[b][B]{$[\omega_H(\tilde{w}(0)^{-1}\tilde{w}(2t))]_{\ensuremath{{\mathcal{E}}_H}}=w(0)$}
\includegraphics[width=4in]{paraLipDehnReplaced}
\caption{\label{fig:paraLipDehn}A filling of $w$ in ${\ensuremath{{\mathcal{E}}_H}}$}
\end{figure}
The proof is based on Gromov's construction of Lipschitz extensions in \cite{GroCC}. Assume that $\ell_{\text{c}}(w)>1$; otherwise, the proposition follows from Rem.~\ref{rem:boundedFill}. Let $k=\lceil \log_2 \ell_{\text{c}}(w) \rceil$, let $t=2^k$, and parameterize $w$ as a map $w:[0,2t]\to {\ensuremath{{\mathcal{E}}_H}}$ with constant speed. Let $\tilde{w}:[0,2t]\to H$ be a map (not necessarily continuous) such that $[\tilde{w}(x)]_{\ensuremath{{\mathcal{E}}_H}}=w(x)$ for all $x$.

We construct $f:\ensuremath{D^2}(2t)\to {\ensuremath{{\mathcal{E}}_H}}$ as in Figure~\ref{fig:paraLipDehn}. The figure depicts $k+1$ rows of rectangles; the $i$th row from the top ($1\le i\le k$) consists of $2^{i-1}$ rectangles of height $2^{k-i+1}$ and width $2^{k-i+2}$, so the top row consists of a single $2^k\times 2^{k+1}$ rectangle. The bottom row is an exception, consisting of $2t$ squares of side length 1. Note that a cell in the $i$th row is bilipschitz equivalent to $D^2(2^{k-i+1})$, with constants independent of $i$ and $k$. Call the resulting complex $X$.

We label all the edges of $X$ by curves in ${\ensuremath{{\mathcal{E}}_H}}$. First, we label the vertical edges by constant curves; the vertical edges with $x$-coordinate $a$ are labeled by $w(a)$. We label horizontal edges using the normal form; the edge from $(x_1,y)$ to $(x_2,y)$ is labeled $[\omega_H(\tilde{w}(x_1)^{-1}\tilde{w}(x_2))]_{\ensuremath{{\mathcal{E}}_H}}$, except for the bottom edge of $X$, which is labeled $w$.
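These labels are compatible with the sizes of the cells that carry them: since $w$ has constant speed and $\tilde{w}(x)$ lies over $w(x)$, one expects (and the construction of $\omega_H$ gives)
$$\ell_{\text{c}}\bigl(\omega_H(\tilde{w}(x_1)^{-1}\tilde{w}(x_2))\bigr)=O\bigl(\mathop{\overline{\log}}\|\tilde{w}(x_1)^{-1}\tilde{w}(x_2)\|_2\bigr)=O(|x_1-x_2|+1),$$
which is what makes the edge maps constructed in the next paragraph uniformly Lipschitz.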
We construct a map $f:X^{(1)}\to {\ensuremath{{\mathcal{E}}_H}}$ by sending each edge to the constant-speed parameterization of its label; this map is Lipschitz, with Lipschitz constant independent of $w$, and it sends the boundary of $X$ to a Lipschitz reparameterization of $w$. If $\Delta$ is a 2-cell from the $i$th row of the figure, $i\le k$, then $f$ sends its boundary to
$$[\omega_H(g_1)\omega_H(g_2)\omega_H(g_1g_2)^{-1}]_{\ensuremath{{\mathcal{E}}_H}},$$
where
$$g_1=\tilde{w}(n 2^{k-i+1})^{-1}\tilde{w}((n+1) 2^{k-i+1}),$$
$$g_2=\tilde{w}((n+1) 2^{k-i+1})^{-1}\tilde{w}((n+2) 2^{k-i+1})$$
for some integer $n$. These are the triangles filled in Lemma~\ref{lem:LipCombFill}, so we can extend the map on the boundary of the cell to a map on the whole cell. Furthermore, the Lipschitz constants of these maps are uniformly bounded. This defines a map on all of the rows except the bottom row. Cells in the bottom row have boundaries of the form
$$w|_{[n,n+1]}\,[\omega_H(\tilde{w}(n)^{-1}\tilde{w}(n+1))^{-1}]_{\ensuremath{{\mathcal{E}}_H}}$$
for some $n$; this has bounded length, so we can fill each of the squares in the bottom row by maps with uniformly bounded Lipschitz constants.
\end{proof}

At this point, we are in a similar situation to the one in Section~\ref{sec:adaptive}; if $w\in \Sigma_H^*$, the proposition gives a filling of the curve corresponding to $w$, but this filling may travel far from $H(\ensuremath{\mathbb{Z}})=U(p-1,1;\ensuremath{\mathbb{Z}})$. The next step is to replace this filling by a filling in $H(\ensuremath{\mathbb{Z}})$.

First, we will need notation like that in Sec.~\ref{sec:redThe} for sets and maps associated to $SL(p-1;\ensuremath{\mathbb{R}})$. Let $\ensuremath{\mathcal{E}}_{p-1}$ be the symmetric space $SL(p-1;\ensuremath{\mathbb{R}})/SO(p-1)$, let $\ensuremath{\mathcal{S}}_{p-1}\subset \ensuremath{\mathcal{E}}_{p-1}$ be a Siegel set, let $N_{p-1}^+$ and $A_{p-1}^+$ be the sets of unipotent and diagonal matrices used to define $\ensuremath{\mathcal{S}}_{p-1}$, let $\rho_{p-1}$ be the map $\ensuremath{\mathcal{E}}_{p-1}\to SL(p-1;\ensuremath{\mathbb{Z}})$ defined using $\ensuremath{\mathcal{S}}_{p-1}$, and let $\phi_{p-1}:\ensuremath{\mathcal{E}}_{p-1}\to A_{p-1}^+$ be the analogue of the original $\phi$. Let
$$N_H^+:=\{u(v)\mid v\in [-1/2,1/2]^{p-1}\subset \ensuremath{\mathbb{R}}^{p-1}\}\subset H.$$
Recall that
$$\ensuremath{\mathcal{S}}_{p-1}=[N_{p-1}^+A_{p-1}^+]_{\ensuremath{\mathcal{E}}_{p-1}};$$
we define a fundamental set for the action of $H(\ensuremath{\mathbb{Z}})$ on ${\ensuremath{{\mathcal{E}}_H}}$ by
$$\ensuremath{\mathcal{S}}_{H}=[N_H^+N_{p-1}^+A_{p-1}^+]_{{\ensuremath{{\mathcal{E}}_H}}}.$$
There is a projection $\mu:H\to SL(p-1)$ which descends to a map $\mu_{\ensuremath{\mathcal{E}}}:{\ensuremath{{\mathcal{E}}_H}}\to \ensuremath{\mathcal{E}}_{p-1}$; note that $\mu_{\ensuremath{\mathcal{E}}}(\ensuremath{\mathcal{S}}_H)=\ensuremath{\mathcal{S}}_{p-1}$. This lets us define a map $\rho_H:{\ensuremath{{\mathcal{E}}_H}}\to H(\ensuremath{\mathbb{Z}})$ which is compatible with $\rho_{p-1}$, so that $x\in \rho_H(x)\ensuremath{\mathcal{S}}_H$ and $\mu(\rho_H(x))=\rho_{p-1}(\mu_{\ensuremath{\mathcal{E}}}(x))$ for all $x$. All of the results in Sec.~\ref{sec:redThe} apply in this context, possibly with different constants.
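For orientation, it may help to keep in mind the usual block realization of $H=SL(p-1;\ensuremath{\mathbb{R}})\ltimes \ensuremath{\mathbb{R}}^{p-1}$ inside $SL(p;\ensuremath{\mathbb{R}})$ (this is only a reminder of conventions):
$$Mu(v)=\begin{pmatrix} M & Mv\\ 0 & 1\end{pmatrix},\qquad \mu\bigl(Mu(v)\bigr)=M,$$
with $M\in SL(p-1;\ensuremath{\mathbb{R}})$ and $v\in \ensuremath{\mathbb{R}}^{p-1}$; the projection $\mu$ simply forgets the translation part, and $H(\ensuremath{\mathbb{Z}})$ consists of the elements of this form with integer entries.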
We will prove the following (cf.\ Prop.~\ref{prop:templateExist}):
\begin{lemma} \label{lemma:templateH}
There is a $c'$ such that if $w=w_1\dots w_\ell$ is a word in $\Sigma_H$, then there is a template $\tau$ for $w$ such that
\begin{itemize}
\item If $g_1, g_2, g_3\in \Gamma$ are the labels of a triangle in the template, then either $d_\Gamma(g_i,g_j)\le 2c'$ for all $i,j$ or there is a $k$ such that $g_i^{-1}g_{j}\in U(k,p-1-k,1)$ for all $i,j$.
\item $\tau$ has $O(\ell^2)$ triangles, and if the $i$th triangle of $\tau$ has vertices labeled $(g_{i,1},g_{i,2},g_{i,3})$, then
$$\sum_{i}(d_\Gamma(g_{i,1},g_{i,2})+d_\Gamma(g_{i,1},g_{i,3})+d_\Gamma(g_{i,2},g_{i,3}))^2=O(\ell^2).$$
Similarly, if the $i$th edge of $\tau$ has vertices labeled $(h_{i,1}, h_{i,2})$, then
$$\sum_{i}d_\Gamma(h_{i,1},h_{i,2})^2=O(\ell^2).$$
\end{itemize}
\end{lemma}
\begin{proof}
As in Sec.~\ref{sec:wordscurves}, we can choose curves in $\ensuremath{\mathcal{E}}_H$ which correspond to generators of $H(\ensuremath{\mathbb{Z}})$ and concatenate them into a curve $\bar{w}:[0,\ell]\to \ensuremath{\mathcal{E}}_H$ such that $\Lip(\bar{w})$ is bounded independently of $w$ and $\bar{w}(i)=[w_1\dots w_i]_{\ensuremath{\mathcal{E}}_H}$ for $i=0, 1,\dots, \ell$. By Prop.~\ref{prop:LipPara}, we can construct a filling $f:D^2(\ell)\to \ensuremath{\mathcal{E}}_H$ such that $f|_{\partial D^2}=\bar{w}$ and $\Lip(f)$ is bounded independently of $w$. Let $t=2^{\lceil \log_2 \ell \rceil}$. By Remarks \ref{rem:polys} and \ref{rem:reparam}, we can reparameterize $f$ to get a map $f':\ensuremath{D^2}(t)\to \ensuremath{\mathcal{E}}_H$ so that $f'(x,0)=\bar{w}(x)$ for $x\le \ell$ and $f'=[I]_{\ensuremath{\mathcal{E}}_H}$ on the rest of the boundary of $\ensuremath{D^2}(t)$. Note that $\Lip(f')$ is also bounded independently of $w$, say by $c_f$.

As in Sec.~\ref{sec:adaptive}, we will use this disc to build a template. Let $r_0:\ensuremath{\mathcal{E}}_{p-1}\to \ensuremath{\mathbb{R}}$ be given by
$$r_0(x)=\frac{d_{\ensuremath{\mathcal{M}}_{p-1}}([x]_{\ensuremath{\mathcal{M}}_{p-1}},[I]_{\ensuremath{\mathcal{M}}_{p-1}})}{2 (p-1)^2} - c,$$
where $c$ is as in Cor.~\ref{cor:parabolicNbhds}, i.e., if $x,y\in \ensuremath{\mathcal{E}}_{p-1}$ satisfy
$$d_{\ensuremath{\mathcal{E}}_{p-1}}(x,y)<r_0(x)$$
and $g,h \in SL(p-1;\ensuremath{\mathbb{Z}})$ satisfy $x\in g\ensuremath{\mathcal{S}}_{p-1}$ and $y\in h\ensuremath{\mathcal{S}}_{p-1}$, then there is a $1\le j<p-1$ depending only on $x$ such that $g^{-1}h\in U(j,p-1-j)$.

Note also that if $g\in H(\ensuremath{\mathbb{Z}})$ and $x\in g\ensuremath{\mathcal{S}}_{H}$, then $x=[gnn'a]_{\ensuremath{\mathcal{E}}}$ for some $n\in N^+_H$, $n'\in N^+_{p-1}$, and $a\in A^+_{p-1}$. Then since $N^+_H$ and $N^+_{p-1}$ are bounded,
$$d_{\ensuremath{\mathcal{E}}}(x,[g]_\ensuremath{\mathcal{E}})=d_{A_{p-1}}(I,a)+O(1),$$
and, by Cor.~\ref{cor:Hausdorff},
$$d_{A_{p-1}}(I,a) \le d_{\ensuremath{\mathcal{M}}_{p-1}}([I]_{\ensuremath{\mathcal{M}}_{p-1}},[\mu_\ensuremath{\mathcal{E}}(x)]_{\ensuremath{\mathcal{M}}_{p-1}})+c''$$
for some $c''>0$, independent of $x$. We thus have
\begin{equation}\label{eq:r0Dist}
d_{\ensuremath{\mathcal{E}}}(x,[g]_\ensuremath{\mathcal{E}})=O(r_0(\mu_{\ensuremath{\mathcal{E}}}(x))+1).
\end{equation}
Let $r:D^2(t)\to \ensuremath{\mathbb{R}}^+$ be
$$r(v)=\max\Bigl\{1, \frac{r_0(\mu_{\ensuremath{\mathcal{E}}}(f'(v)))}{2c_f}\Bigr\}.$$
This is $1$-Lipschitz. Let $\tau$ be the adaptive triangulation $\tau_r$ constructed in Prop.~\ref{prop:adaptive}. If $v$ is an interior vertex of $\tau$, label it $\rho_H(f'(v))$, and label each boundary vertex by the corresponding $w(i)=w_1\dots w_i$ or by $I$.
In either case, if $v$ is labeled by $g\in H(\ensuremath{\mathbb{Z}})$, then $f'(v)\in g\ensuremath{\mathcal{S}}_H$ and $\mu_{\ensuremath{\mathcal{E}}}(f'(v))\in \mu(g)\ensuremath{\mathcal{S}}_{p-1}$. As in the proof of Prop.~\ref{prop:templateExist}, each lattice point on the boundary of $D^2(t)$ is a vertex of $\tau$, so $\tau$ is a template for $w$.

Let $(a_1,a_2)$ be an edge of $\tau$, and say that $a_i$ is labeled by $g_i\in H(\ensuremath{\mathbb{Z}})$. Decompose each $g_i$ as $h_i u(v_i)$ for some $h_i\in SL(p-1;\ensuremath{\mathbb{Z}})$, $v_i\in \ensuremath{\mathbb{Z}}^{p-1}$. Prop.~\ref{prop:adaptive}.(3) and the fact that $f'$ is $c_f$-Lipschitz ensure that
$$d_{{\ensuremath{{\mathcal{E}}_H}}}(f'(a_1),f'(a_2))\le 2 c_f r(a_1)\le \max\{2c_f,r_0(\mu_{\ensuremath{\mathcal{E}}}(f'(a_1)))\}.$$
If $2c_f\le r_0(\mu_{\ensuremath{\mathcal{E}}}(f'(a_1)))$, then
$$d_{\ensuremath{\mathcal{E}}_{p-1}}(\mu_{\ensuremath{\mathcal{E}}}(f'(a_1)),\mu_{\ensuremath{\mathcal{E}}}(f'(a_2)))\le r_0(\mu_{\ensuremath{\mathcal{E}}}(f'(a_1))),$$
and by the choice of $c$, there is a $j$ such that $h_1^{-1}h_2\in U(j,p-1-j)\subset SL(p-1;\ensuremath{\mathbb{Z}})$ and thus $g_1^{-1}g_2\in U(j,p-1-j,1)$. On the other hand, if
$$r_0(\mu_{\ensuremath{\mathcal{E}}}(f'(a_1)))\le 2 c_f,$$
then $d_{\ensuremath{\mathcal{E}}_{p-1}}(\mu_\ensuremath{\mathcal{E}}(f'(a_1)),\mu_\ensuremath{\mathcal{E}}(f'(a_2)))\le 2c_f$. Furthermore, $f'(a_1)$ and $f'(a_2)$ are a bounded distance from $H(\ensuremath{\mathbb{Z}})$, so $g_1$ and $g_2$ are a bounded distance apart in $\Gamma$. This proves the first part of the lemma.

To prove the second part, we use Thm.~\ref{thm:LMR}. If $(a_1,a_2)$ is an edge of $\tau$ and $a_i$ is labeled by $g_i$, we know that $r(a_i)=O(d_{D^2}(a_1,a_2))$. By \eqref{eq:r0Dist},
$$d_\ensuremath{\mathcal{E}}([g_i]_\ensuremath{\mathcal{E}},f'(a_i))= O(d_{D^2}(a_1,a_2)),$$
so
\begin{align*}
d_\ensuremath{\mathcal{E}}([g_1]_\ensuremath{\mathcal{E}},[g_2]_\ensuremath{\mathcal{E}})& \le d_\ensuremath{\mathcal{E}}([g_1]_\ensuremath{\mathcal{E}},f'(a_1)) + d_\ensuremath{\mathcal{E}}(f'(a_1),f'(a_2))+ d_\ensuremath{\mathcal{E}}(f'(a_2),[g_2]_\ensuremath{\mathcal{E}})\\
&= O(d_{D^2}(a_1,a_2)).
\end{align*}
By Thm.~\ref{thm:LMR}, this implies
$$d_{\Gamma}(g_1,g_2)=O(d_{D^2}(a_1,a_2))$$
as well. The second part of the lemma then follows from the bounds in Prop.~\ref{prop:adaptive}.
\end{proof}

This reduces the problem of filling words in $H$ to the problem of filling $\omega$-triangles with vertices in subgroups of the form $U(k,p-1-k,1)$. These subgroups can be handled by the methods of the previous subsection, reducing such an $\omega$-triangle to words in $\widehat{\Sigma}_k$ and $\widehat{\Sigma}_{p-1-k}$. These will be filled in the next section.

\subsection{Filling words in $\widehat{\Sigma}_{S_i}$}\label{sec:shortTemplates}
In the previous sections, we reduced the problem of filling an $\omega$-triangle with vertices in a parabolic subgroup to the problem of filling a word $w$ in $\widehat{\Sigma}_{q}$ for $q\le p-2$. When $q\ge 3$, this is straightforward; we can use Lemma~\ref{lem:shortEquiv} to replace $w$ by a word $w'$ in $\Sigma_{q}$ and use Prop.~\ref{prop:templateExist} to reduce the problem of filling $w'$ to the problem of filling $\omega$-triangles in a smaller parabolic subgroup. When $q=2$, this method is unavailable, since Lemma~\ref{lem:shortEquiv} needs a third index; indeed, $SL(2;\ensuremath{\mathbb{Z}})$ is virtually free, so $e_{12}(x)$ has word length comparable to $|x|$ in $\Sigma_2$ and $\widehat{e}_{12}(x)$ admits no short replacement in $\Sigma_2$. In this section, we will describe how to fill a word $w$ in $\widehat{\Sigma}_2$.
We will use the notation of Sec.~\ref{sec:redThe}; let $\ensuremath{\mathcal{E}}_2:=SL(2)/SO(2)$ and $\ensuremath{\mathcal{M}}_2:=SL(2;\ensuremath{\mathbb{Z}})\backslash \ensuremath{\mathcal{E}}_2$, and let $\ensuremath{\mathcal{S}}_2$ be a Siegel set. Let $\phi:\ensuremath{\mathcal{E}}_2\to A^+_2$ and $\phi_i:\ensuremath{\mathcal{E}}_2\to \ensuremath{\mathbb{R}}$, $i=1,2$, be as in Sec.~\ref{sec:redThe}.
\begin{lemma} \label{lem:shortSigma2}
If $w$ is a word in $\widehat{\Sigma}_2$, then
$$\widehat{\delta}(w)\le O(\widehat{\ell}(w)^2).$$
\end{lemma}
\begin{proof}
As in the proof for the whole group, we proceed by constructing a template for $w$ using Prop.~\ref{prop:adaptive} and then filling the template. The largest change from Sec.~\ref{sec:adaptive} is that the curve $\alpha$ will not be in the thick part of $\ensuremath{\mathcal{E}}_2$.

Let $w=w_1\dots w_n$, where $w_i\in \widehat{\Sigma}_2$. The first thing to do is to construct a curve $\alpha$ in $\ensuremath{\mathcal{E}}_2$ which corresponds to $w$. First, note that we can use Lemma~\ref{lem:infPres} to replace all occurrences of $\sgen{21}{x}$ in $w$ by $g \sgen{12}{-x}g^{-1}$, where $g$ is a word representing a Weyl group element. This has cost $O(\widehat{\ell}(w)^2)$, so we may assume that no letter $\sgen{21}{x}$ with $|x|\ge 1$ occurs in $w$.

Let $w(i)=w_1\dots w_i\in \Gamma_2$, and let $\ell_i=\widehat{\ell}(w_1\dots w_i)$ for $i=0,\dots, n$. Note that $\ell_i$ is always an integer. We will construct a curve $\alpha:[0,\widehat{\ell}(w)]\to \ensuremath{\mathcal{E}}_2$ which has speed bounded independently of $w$ and has the property that if $t\in \ensuremath{\mathbb{Z}}$, then $\alpha(t)\in w(\eta(t))\ensuremath{\mathcal{S}}_2$, where $\eta$ is a non-decreasing function such that $\eta(\ell_i)=i$. The curve will be a concatenation of curves $\alpha_i:[0,\widehat{\ell}(w_i)]\to \ensuremath{\mathcal{E}}_2$ corresponding to the $w_i$.

If $\widehat{\ell}(w_i)<3$, let $\alpha_i$ be the geodesic connecting $[w(i-1)]_{\ensuremath{\mathcal{E}}_2}$ and $[w(i)]_{\ensuremath{\mathcal{E}}_2}$ on the interval $[0,1]$, and let it take the constant value $[w(i)]_{\ensuremath{\mathcal{E}}_2}$ on $[1,\widehat{\ell}(w_i)]$. Then $\alpha_i(t)\in w(i-1)\ensuremath{\mathcal{S}}_2$ if $t=0$ and $\alpha_i(t)\in w(i)\ensuremath{\mathcal{S}}_2$ for $t\ge 1$.

If $\widehat{\ell}(w_i)\ge 3$, then let $x$ be such that $w_i=\sgen{12}{x}$. Let
$$D=\diagmat(e, e^{-1})$$
and note that $D^s\in \ensuremath{\mathcal{S}}_2$ for all $s\ge 0$. Let $t_1\in \ensuremath{\mathbb{Z}}$ be such that
$$\frac{\widehat{\ell}(w_i)}{3}\le t_1<t_1+1\le\frac{2\widehat{\ell}(w_i)}{3}.$$
Let $g:[0,\widehat{\ell}(w_i)]\to SL(2;\ensuremath{\mathbb{R}})$ be the concatenation of geodesic segments connecting
\begin{align*}
p_1&=I\\
p_2&=D^{\log(|x|)/2}\\
p_3&=D^{\log(|x|)/2}e_{12}(\pm 1)\\
p_4&=D^{\log(|x|)/2}e_{12}(\pm 1)D^{-\log(|x|)/2}=e_{12}(x)=w_i.
\end{align*}
Here the sign of $\pm1$ is the same as the sign of $x$. Parameterize this curve so that $g|_{[0,t_1]}$ connects $p_1$ and $p_2$, $g|_{[t_1,t_1+1]}$ connects $p_2$ and $p_3$, and $g|_{[t_1+1,\widehat{\ell}(w_i)]}$ connects $p_3$ and $p_4$. This curve has velocity bounded independently of $x$. Furthermore, if $t\in \ensuremath{\mathbb{Z}}$, then $g(t)\in \ensuremath{\mathcal{S}}_2$ if $t\le t_1$ and $g(t)\in w_i \ensuremath{\mathcal{S}}_2$ if $t\ge t_1+1$. Let $\alpha_i(t)=[w(i-1)\, g(t)]_{\ensuremath{\mathcal{E}}_2}$.

Let $\alpha:[0,\widehat{\ell}(w)]\to \ensuremath{\mathcal{E}}_2$ be the concatenation of the $\alpha_i$.
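The endpoint computation for $p_4$ uses the standard conjugation identity in $SL(2;\ensuremath{\mathbb{R}})$, recorded here for convenience:
$$D^s\, e_{12}(y)\, D^{-s} = e_{12}(e^{2s}y),\qquad\text{so}\qquad D^{\log(|x|)/2}\,e_{12}(\pm 1)\,D^{-\log(|x|)/2}=e_{12}(\pm|x|)=e_{12}(x).$$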
From here, we largely follow the proof of Prop.~\ref{prop:templateExist}; we construct a filling $f$ of $\alpha$, an adaptive triangulation $\tau$, and a template based on $\tau$ so that a vertex $x$ of $\tau$ is labeled by an element $\gamma$ such that $f(x)\in \gamma \ensuremath{\mathcal{S}}_2$.

Let $d$ be the smallest power of $2$ larger than $\widehat{\ell}(w)$ and let $\alpha':[0,d]\to \ensuremath{\mathcal{E}}_2$ be the extension of $\alpha$ to $[0,d]$, where $\alpha'(t)=[I]_{\ensuremath{\mathcal{E}}_2}$ when $t\ge \widehat{\ell}(w)$. As in Sec.~\ref{sec:adaptive}, let $\gamma_{x,y}:[0,1]\to \ensuremath{\mathcal{E}}_2$ be the geodesic from $x$ to $y$ and define a homotopy $f:[0,d] \times [0,d]\to \ensuremath{\mathcal{E}}_2$ by
$$f(x,y)=\gamma_{\alpha'(x),\alpha'(0)}(y/d).$$
This is Lipschitz with a constant $c$ independent of $w$ and has area $O(\widehat{\ell}(w)^2)$. As in the proof of Prop.~\ref{prop:templateExist}, let $r_0:\ensuremath{\mathcal{E}}_2\to \ensuremath{\mathbb{R}}$ be
$$r_0(x)=\frac{d_{\ensuremath{\mathcal{M}}_{2}}([x]_{\ensuremath{\mathcal{M}}_{2}},[I]_{\ensuremath{\mathcal{M}}_{2}})}{2} - c'$$
and let $r:D^2(d)\to \ensuremath{\mathbb{R}}$ be
$$r(v)=\max\Bigl\{1, \frac{r_0(f(v))}{4 c}\Bigr\}.$$
As before, we use this function to construct a triangulation $\tau_r$ of $D^2(d)$. One major difference is that $r$ is not necessarily small on the boundary of $D^2(d)$; on the other hand, $\alpha(\ell_i)=[w(i)]_{\ensuremath{\mathcal{E}}_2}$, so each point $(\ell_i,0)$ is a vertex of $\tau_r$. Label the boundary vertices of $\tau_r$ so that $(t,0)$ is labeled by $w(\eta(t))$ and all the others are labeled $I$. Label the interior vertices so that $v$ is labeled by $\rho(f(v))$.

If $v$ is a vertex, let $g_v\in \Gamma$ be its label. As in the earlier construction, $f(v)\in g_v\ensuremath{\mathcal{S}}_2$ for all $v$. Furthermore, the set of boundary labels is exactly $\{w(0),\dots, w(n)\}$ and, since $\omega(g,g)$ is the empty word for all $g$, the boundary word of $\tau$ is
$$w_\tau=\omega(w_1)\dots \omega(w_n);$$
since $\omega(e_{12}(t))=\widehat{e}_{12}(t)$ and all the other letters in $w$ have bounded length, we have
$$\delta_\Gamma(\lambda(w),w_\tau)=O(\widehat{\ell}(w)).$$
A filling of the triangles in $\tau$ thus gives a filling of $w$.

As in the earlier construction, each triangle of $\tau$ either has short edges, and thus bounded filling area, or has vertices whose labels lie in a translate of a parabolic subgroup. In this case, that parabolic subgroup must be $U(1,1;\ensuremath{\mathbb{Z}})$, and Lemma~\ref{lem:infPres} allows us to fill any such triangle with quadratic area. Prop.~\ref{prop:adaptive}.(4) thus implies that $\widehat{\delta}(w)=O(\widehat{\ell}(w)^2)$, as desired.
\end{proof}

\subsection{Proof of Thm.~\ref{thm:mainThm}}\label{sec:fullProof}
Let $q\le p$. We claim that if $w$ is a word in $\Sigma_q$ which represents the identity, then
\begin{equation}\label{eq:induct}
\delta_{\Gamma}(w)=O(\ell(w)^2),
\end{equation}
and that if $w$ is a word in $\widehat{\Sigma}_q$ which represents the identity, then
\begin{equation} \label{eq:inductShort}
\widehat{\delta}(w)=O(\widehat{\ell}(w)^2).
\end{equation}
We proceed by induction on $q$. If $q=2$, then \eqref{eq:induct} is a consequence of the fact that $SL(2;\ensuremath{\mathbb{Z}})$ is virtually free, and \eqref{eq:inductShort} follows from Lemma~\ref{lem:shortSigma2}. If $3\le q\le p-1$, then we can prove \eqref{eq:induct} using Prop.~\ref{prop:templateExist} for $SL(q;\ensuremath{\mathbb{Z}})$.
The proposition implies that there is a template for $w$ for which every triangle is either small or has vertices in some translate of $U(j,q-j)$ for some $1\le j \le q-1$. Call the $\omega$-triangle corresponding to the boundary of the $i$th triangle $w_i$; then we have $\sum_i \ell(w_i)^2=O(\ell(w)^2)$, $\delta_{\Gamma}(w) \le \sum_i \delta_\Gamma(w_i)$, and each $w_i$ either has length $\le c$ or has vertices in $U(j_i,q-j_i;\ensuremath{\mathbb{Z}})\subset SL(q;\ensuremath{\mathbb{Z}})\subset SL(p;\ensuremath{\mathbb{Z}})$. In the former case, $\delta_\Gamma(w_i)$ is bounded by a constant depending only on $c$. In the latter case, we use the inductive hypotheses. Since $j_i\le p-2$ and $q-j_i\le p-2$, we can apply Prop.~\ref{prop:paraReduce} to show that there are words $p_j(w_i)$, $j=1,2$, in $\widehat{\Sigma}_{j_i}$ and $\widehat{\Sigma}_{q-j_i}$ such that
$$\delta_\Gamma(w_i)=O(\ell(w_i)^2)+\widehat{\delta}(p_1(w_i))+\widehat{\delta}(p_2(w_i)).$$
By induction, the latter two terms are both $O(\ell(w_i)^2)$. Thus
$$\delta_{\Gamma}(w) = \sum_i O(\ell(w_i)^2)=O(\ell(w)^2).$$
The second condition, \eqref{eq:inductShort}, follows from Lemma~\ref{lem:shortEquiv}; if $w$ is a word in $\widehat{\Sigma}_q$, then $\widehat{\delta}(w)=\delta_{\Gamma}(\lambda(w))$, and we can replace $\lambda(w)$ by a word of roughly the same length in $\Sigma_q$ at cost $O(\widehat{\ell}(w)^2)$.

If $q=p$, there is an additional step to prove \eqref{eq:induct}. As before, we can break $w$ into $w_i$, but if $j_i=1$ or $p-1$, we need to use Lemma~\ref{lemma:templateH} to fill $w_i$. Let $w_i$ be an $\omega$-triangle with vertices in $U(p-1,1)$. We can use Lemma~\ref{lem:shortEquiv} to replace $w_i$ by a word of comparable length in $\Sigma_H$ at cost $O(\ell(w_i)^2)$. By Lemma~\ref{lemma:templateH}, there are $\omega$-triangles $w'_j$ which are either short or have vertices in $U(k_j,p-1-k_j,1)$ for some $1\le k_j\le p-2$, and such that $\sum_j\ell(w'_j)^2=O(\ell(w_i)^2)$. By Prop.~\ref{prop:paraReduce} and the inductive hypothesis, $\delta_\Gamma(w_i)=O(\ell(w_i)^2)$, and so $\delta_\Gamma(w)=O(\ell(w)^2)$.

Finally, if $w$ is a word in $\widehat{\Sigma}_p$, then $\widehat{\delta}(w)=\delta_\Gamma(\lambda(w))$ and $\widehat{\ell}(w)=\ell(\lambda(w))$, so \eqref{eq:induct} gives $\widehat{\delta}(w)=O(\widehat{\ell}(w)^2)$, as desired.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Quantum Monte Carlo (QMC) methods \cite{assaad07,sandvik10} have become indispensable tools for ground-state and finite-temperature studies of many classes of interacting quantum systems, in particular those for which the infamous ``sign problem'' can be circumvented.\cite{kaul12} In ground-state projector methods, an operator $P(\beta)$ is applied to a ``trial state'' $|\Psi_0\rangle$, such that $|\Psi_\beta\rangle = P(\beta)|\Psi_0\rangle$ approaches the ground state of the Hamiltonian $\mathcal{H}$ when $\beta \to \infty$ and an expectation value $\langle A\rangle = \langle \Psi_\beta|A|\Psi_\beta\rangle/Z$, with the norm $Z = \langle \Psi_\beta|\Psi_\beta\rangle$, approaches its true ground-state value, $\langle A\rangle \to \langle 0| A|0\rangle$. For the projector, one can use $P(\beta)=\exp{(-\beta \mathcal{H})}$ or a high power of the Hamiltonian\cite{betanote}, $P(M)=(-\mathcal{H})^M$. Here we will discuss a modification of the latter projector for studies of dynamical properties of systems out of equilibrium. Real-time dynamics of interacting quantum systems is difficult to deal with computationally. If the Schr\"odinger equation is solved directly, computations are restricted to very small system sizes by the limits of exact diagonalization. Despite progress with the Density-Matrix Renormalization Group (DMRG) \cite{PhysRevLett.69.2863, RevModPhys.77.259} and related methods based on matrix-product states, this approach is in practice limited to one-dimensional systems and relatively short times. Efficiently studying long-time dynamics of generic interacting quantum systems in higher dimensions is still an elusive goal. However, recently, in Ref.~[\onlinecite{degrandi11}], it was demonstrated that real-time and imaginary-time dynamics bear considerable similarities, and in the latter case, powerful and high-precision QMC calculations can be carried out on large system sizes for the class of systems where sign problems can be avoided. Our work reported here is a further development of the method introduced in Ref.~[\onlinecite{degrandi11}], where it was realized that a modification of the ground-state projector Monte Carlo approach with $P(\beta)=\exp{(-\beta \mathcal{H})}$ can be used to study non-equilibrium set-ups in quantum quenches (or ramps), where a parameter of the Hamiltonian depends on time according to an arbitrary protocol. After a standard Wick rotation of the time axis, the wave function is governed by the Schr\"odinger equation in imaginary time $t=-i\tau$ ($\tau$ being real), \begin{equation} \partial_\tau |\psi (\tau)\rangle = -\mathcal{H}[\lambda(\tau)]|\psi (\tau)\rangle. \label{schrod} \end{equation} Here the Hamiltonian depends on the parameter $\lambda$ through time, e.g., \begin{equation} \label{eq:h} \mathcal{H}=\mathcal{H}_0 + \lambda(\tau)V, \end{equation} where $V$ and $\mathcal{H}_0$ typically do not commute. The method is not limited to this form, however, and any evolution of $\mathcal{H}$ can be considered. The Schr\"odinger equation has the formal solution \begin{equation} |\psi(\tau)\rangle=U(\tau)|\psi(\tau_0)\rangle, \label{psitauformal} \end{equation} where the imaginary-time evolution operator is given by \begin{equation} U(\tau)=T_\tau {\rm exp}\left [ - \int_{\tau_0}^\tau d \tau' \mathcal{H}[\lambda(\tau')] \right ], \label{utau} \end{equation} in which $T_\tau$ indicates time ordering.
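To make the imaginary-time evolution concrete, the following minimal sketch integrates Eq.~(\ref{schrod}) for a hypothetical two-level Hamiltonian (a toy model chosen purely for illustration, not one of the systems studied in this paper); the ordered product of small Euler steps discretizes the time-ordered exponential Eq.~(\ref{utau}).
\begin{verbatim}
import numpy as np

# Toy two-level Hamiltonian H(lam) = -lam*sz - (1-lam)*sx
# (an illustrative assumption, not a model from this paper).
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
def H(lam):
    return -lam * sz - (1.0 - lam) * sx

steps, T = 20000, 20.0    # total imaginary-time interval of length T
dtau = T / steps

psi = np.array([1.0, 1.0]) / np.sqrt(2.0)  # ground state of H(0) = -sx
for i in range(steps):
    lam = i / steps                        # linear ramp from 0 to 1
    # Euler step of d|psi>/dtau = -H(lam(tau))|psi>; the ordered
    # product of these steps discretizes the time-ordered exponential.
    psi = psi - dtau * H(lam) @ psi
    psi /= np.linalg.norm(psi)             # the evolution is not unitary

print(psi @ H(1.0) @ psi)  # close to the exact ground energy -1
\end{verbatim}
Because imaginary-time evolution continuously damps excited-state components, the final state here is close to the ground state of $H(1)$ even at moderate velocities; the systematic deviations from it are precisely the non-adiabatic corrections analyzed below.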
A time-evolved state $U(\tau)|\Psi(\tau_0)\rangle$ and associated expectation values can be sampled using a generalized projector QMC algorithm. In Ref.~[\onlinecite{degrandi11}] it was demonstrated that this non-equilibrium QMC (NEQMC) approach can be applied to study dynamic scaling at quantum phase transitions, and there are many other potential applications as well, e.g., when going beyond studies of finite-size gaps in ``glassy'' quantum dynamics and the quantum-adiabatic paradigm for quantum computing. Here we introduce a different approach to QMC studies of quantum quenches, which gives results for a whole range of parameters $\lambda \in [\lambda(\tau_0),\lambda(\tau)]$ in a single run (instead of just at the final time), at a computational effort comparable to the previous approach. Instead of using the conventional time-evolution operator Eq.~(\ref{utau}), we consider a generalization of the equilibrium QMC scheme based on projection with $(-\mathcal{H})^M$, acting on the initial ground state of $\mathcal{H}[\lambda(\tau_0)]$ with a product of evolving Hamiltonians: \begin{equation} P_{M,1}=[-\mathcal{H}(\lambda_M)]\cdots[-\mathcal{H}(\lambda_2)][-\mathcal{H}(\lambda_1)], \label{p1m} \end{equation} where \begin{equation} \lambda_t= \lambda_0+t\Delta_\lambda, \label{lambdat} \end{equation} and $\Delta_\lambda=[\lambda_{M}-\lambda_0]/M$ is the single-step change in the tuning parameter.\cite{gridnote} Here we will consider a case where the ground state $|\Psi(\lambda_0)\rangle$ of $\mathcal{H}(\lambda_0)$ is known and easy to generate (stochastically or otherwise) and the ground states for other $\lambda$-values of interest are non-trivial. The stochastic sampling used to compute the evolution then takes place in a space representing path-integral-like terms contributing to the matrix element (the norm) $\langle \Psi(\lambda_0)| P_{1,M} P_{M,1}|\Psi(\lambda_0)\rangle$. We will also later consider a modification of the method in which the ground state at the final point $\lambda_M$ is known as well, in which case contributions to $\langle \Psi(\lambda_M)| P_{M,1}|\Psi(\lambda_0)\rangle$ are sampled. Staying with the doubly-evolved situation for now, we evaluate generalized expectation values after $t$ out of the $M$ operators in the product (\ref{p1m}) have acted: \begin{equation} \langle A\rangle_t = \frac{\langle \Psi(\lambda_0)| P_{1,M} P_{M,t+1}AP_{t,1}|\Psi(\lambda_0)\rangle}{\langle \Psi(\lambda_0)| P_{1,M} P_{M,1}|\Psi(\lambda_0)\rangle}. \label{atdef} \end{equation} We will refer to this matrix element as an {\it asymmetric expectation value}, with the special case $t=M$ corresponding to a true quantum-mechanical expectation value taken with respect to an evolved wave function, \begin{equation} \label{evolved_wf} |\psi_M\rangle =\frac{P_{M,1}|\Psi(\lambda_0)\rangle}{\sqrt{\langle \Psi(\lambda_0)| P_{1,M} P_{M,1}|\Psi(\lambda_0)\rangle} }, \end{equation} which approaches the ground state $|\Psi[\lambda(\tau_M)]\rangle$ of the Hamiltonian $\mathcal{H}[\lambda(\tau_M)]$ for $M \to \infty$. Away from the adiabatic limit, the evolved wave function Eq.~(\ref{evolved_wf}) is, generally speaking, not the ground state of the equilibrium system.
Nevertheless, as we demonstrate in detail in Sec.~\ref{sec:apt}, a quench velocity $v \propto \Delta_\lambda N$ can be defined such that the symmetric expectation value $\langle A\rangle_{t=M}$ in Eq.~(\ref{atdef}) approaches the expectation value $\langle A(\tau=t)\rangle$ after a conventional linear imaginary-time quantum quench with Eq.~(\ref{utau}) done with the same velocity $v$, if $v$ is low enough. In fact, the two quantities are the same to leading (linear) order in $v$, not only in the strict adiabatic limit $v \to 0$. We therefore name this scheme the {\it quasi-adiabatic} QMC (QAQMC) algorithm. Importantly, the leading corrections to the adiabatic evolution of the asymmetric expectation values for any $t$ contain information about non-equal-time correlation functions, very similar to the imaginary-time evolution. The principal advantage of QAQMC over the NEQMC approach is that expectation values of diagonal operators in the basis used can be obtained simultaneously for the whole evolution path $\lambda_0\ldots \lambda_M$, by measuring $\langle A\rangle_t$ in Eq.~(\ref{atdef}) at arbitrary $t$ points \cite{averagenote} (and one can also extend this to general off-diagonal operators, along the lines of Ref.~[\onlinecite{dorneich01}], but we here limit studies to diagonal operators). The QAQMC scheme is also easier to implement in practice than the NEQMC method because there are no time integrals to sample. As mentioned above, we will here have in mind a situation where the initial state $|\Psi(\lambda_0)\rangle$ is in some practical sense ``simple," but this is not necessary for the method to work---any state that can be simulated with standard equilibrium QMC methods can be used as the initial state for the dynamical evolution. The final evolved state $|\psi_M\rangle$ can be very complex, e.g., for a system in the vicinity of a quantum-critical point or in a ``quantum glass'' (loosely speaking, a system with slow intrinsic dynamics due to spatial disorder and frustration effects). Here, as a demonstration of the correctness and utility of the QAQMC approach, we study generalized dynamic scaling in the neighborhood of the quantum phase transitions in the standard one-dimensional (1D) and 2D transverse-field Ising models (TFIMs). As noted first in Ref.~[\onlinecite{degrandi11}], the NEQMC method can be used to extract the components of the quantum metric tensor,\cite{provost_80} the diagonal elements of which are the more familiar fidelity susceptibilities. Thanks to its ability to capture the leading non-adiabatic corrections to physical observables, the QAQMC approach can also be used for this purpose, and, as we will discuss briefly here and in more detail in Ref.~[\onlinecite{adi_long}], one can also extract the Berry curvature through the imaginary antisymmetric components of the geometric tensor. The rest of the paper is organized in the following way. In Sec.~\ref{sec:apt}, we use adiabatic perturbation theory (APT) to demonstrate the ability of the QAQMC scheme to correctly capture the standard Schr\"odinger evolution in imaginary time, not only in the adiabatic limit but also including the leading corrections in the quench velocity. We show how these leading corrections correspond to the geometric tensor. In Sec.~\ref{sec:results}, we discuss tests of the QAQMC scheme on 1D and 2D TFIMs, and also present a high-precision result for the critical field in the 2D model.
In Sec.~\ref{sec:conclusions}, we summarize our main conclusions and discuss future potential applications of the algorithm. \section{Adiabatic perturbation theory} \label{sec:apt} The key question we address in this section is whether the matrix element $\langle A\rangle_t$ in Eq.~(\ref{atdef}) can give useful dynamical information for arbitrary ``time'' points $t$ in the sequence of $2M$ operators. The expression only reduces to a conventional expectation value at the symmetric point $t=M$, and even there it is not clear from the outset how $\langle A\rangle_{t=M}$ computed for different $M$ relates to the velocity dependence of the expectation value $\langle \Psi (0)|U^{*}(\tau) \hspace{1pt} A \hspace{1pt} U(\tau)|\Psi(0)\rangle$ based on the Schr\"odinger time-evolution operator in Eq.~(\ref{utau}). Going away from the symmetric point brings in further issues to be addressed. For instance, there is no variational property of the asymmetric expectation value $\langle \mathcal{H} \rangle_t$ of the Hamiltonian for $t\not=M$. Nevertheless, the approach to the adiabatic limit is well behaved and we can associate the leading deviations from adiabaticity with well-defined dynamical correlation functions that appear as physical responses in real-time protocols. We show here, for the linear evolution Eq.~(\ref{lambdat}), that one can identify a velocity $v \propto N/M$ such that a linear imaginary-time quench $\lambda(\tau)=v\tau$ evolved with Eq.~(\ref{utau}) gives the same results as the product evolution when $t=M$, including the leading (linear) corrections in $v$. For $t\not=M$, the relevant susceptibilities in QAQMC defining the non-adiabatic response are different from those at $t=M$ but still well defined, contain useful information, and obey generic scaling properties. In order to facilitate the discussion of the QAQMC method, we here first review the previous APT approach for Schr\"odinger imaginary-time dynamics \cite{degrandi11, adi_long} and then derive analogous expressions for the product-evolution. After this, we discuss some properties of the symmetric and asymmetric expectation values. \subsection{Imaginary-time Schr\"odinger dynamics} The NEQMC method \cite{degrandi11} uses a path-integral-like Monte Carlo sampling to solve the imaginary-time Schr\"odinger equation Eq.~(\ref{schrod}) for a Hamiltonian $\mathcal{H}[\lambda(\tau)]$ with a time-dependent coupling. The formal solution at time $\tau$ is given by the evolution operator Eq.~(\ref{utau}). In the strict adiabatic limit, the system will follow the instantaneous ground state, while at small but nonzero velocity one can anticipate deviations from adiabaticity, which will become more severe in gapless systems and, in particular, near phase transitions. Let us discuss the leading non-adiabatic correction to this imaginary-time evolution. The natural way to address this question is to use APT, similar to that developed in Refs.~[\onlinecite{ortiz_2008}] and [\onlinecite{PhysRevB.81.224301}] in real time. We here follow closely the discussion of the generalization to imaginary time in Ref.~[\onlinecite{degrandi11}]. We first write the wave function in the instantaneous eigenbasis $\{ |n(\lambda)\rangle \}$ of the time-dependent Hamiltonian $\mathcal{H}[\lambda(\tau)]$: \begin{equation} | \psi(\tau) \rangle=\sum_n a_n(\tau) |n(\lambda(\tau))\rangle.
\label{eq 1} \end{equation} We then substitute this expansion into Eq.~(\ref{schrod}), \begin{equation} {d a_n\over d\tau}+\sum_m a_m(\tau) \langle n|\partial_\tau |m\rangle = -\mathcal E_n (\lambda) \, a_n(\tau), \label{eq 2} \end{equation} where $\mathcal E_n(\lambda)$ are the eigenenergies of the Hamiltonian $\mathcal{H}(\lambda)$ corresponding to the states $|n\rangle$ for this value of $\lambda$. Making the transformation \begin{equation} a_n(\tau)=\alpha_n(\tau)\exp\left[\int_\tau^0 \mathcal E_n(\tau')d\tau'\right], \end{equation} we can rewrite Eq.~(\ref{schrod}) as an integral equation: \begin{eqnarray} &&\alpha_n(\tau)=\alpha_n(0)+\sum_m \int^0_{\tau} d \tau'\, \langle n|\partial_{\tau'}|m\rangle \alpha_m(\tau')\nonumber\\ &&~~~~~~~~~~~\times\exp\left[-\int^0_{\tau'} d\tau''\, \big(\mathcal E_n(\tau'')-\mathcal E_m(\tau'') \big) \right], \label{int_eq} \end{eqnarray} where it should be noted that $\alpha_n(0)=a_n(0)$. In principle we should supply this equation with initial conditions at $\tau=\tau_0$, but this is not necessary if $|\tau_0|$ is sufficiently large, since the sensitivity to the initial condition will then be exponentially suppressed. Instead, we can impose the asymptotic condition $\alpha_n(\tau\to-\infty)\to \delta_{n0}$, which implies that in the distant past the system was in its ground state. Eq.~(\ref{int_eq}) is ideally suited for an analysis with APT. In particular, if the rate of change is very small, $\dot\lambda(\tau)\to 0$, then to leading order in $\dot\lambda$ the system remains in its ground state, $\alpha_m(\tau)\approx \delta_{m0}$ (except during the initial transient, which is not important because we are interested in large $|\tau_0|$). In the next higher order, the transition amplitudes to the states $n\neq 0$ are given by \begin{equation} \alpha_n(0)\approx -\int\limits^0_{-\infty} d\tau \, \langle n|\partial_{\tau}|0\rangle \exp\left[-\int^0_{\tau} d\tau'\, \Delta_{n0}(\tau')\right], \label{eq_central1} \end{equation} where $\Delta_{n0}(\tau)=\mathcal E_n(\tau)-\mathcal E_0(\tau)$. The matrix element above for non-degenerate states can also be written as \begin{equation} \langle n|\partial_\tau|0\rangle =-\langle n|\partial_\tau \mathcal H(\tau)|0\rangle/ \Delta_{n0}(\tau). \end{equation} In what follows we will assume that we are dealing with a non-degenerate ground state. To make further progress in analyzing the transition amplitudes Eq.~(\ref{eq_central1}), we consider the very slow asymptotic limit $\dot\lambda\to 0$. To be specific, we assume that near $\tau=0$ the tuning parameter has the form (see also Ref.~[\onlinecite{PhysRevB.81.224301}]) \begin{equation} \lambda(\tau)\approx \lambda(0)+\frac{v_\lambda |\tau|^r}{r!} \Theta(-\tau). \label{vrdef} \end{equation} The parameter $v_\lambda$, which controls the adiabaticity, plays the role of the quench amplitude if $r=0$, the velocity for $r=1$, the acceleration for $r=2$, etc. It is easy to check that in the asymptotic limit $v_\lambda\to 0$, Eq.~(\ref{eq_central1}) gives \begin{equation} \alpha_n\approx v_\lambda {\langle n|\partial_\lambda|0\rangle \over (\mathcal E_n-\mathcal E_0)^r}=- v_\lambda {\langle n|\partial_\lambda \mathcal H|0\rangle \over (\mathcal E_n-\mathcal E_0)^{r+1}}, \label{an1} \end{equation} where all matrix elements and energies are evaluated at $\tau=0$. From this perturbative result we can in principle evaluate the leading non-adiabatic response of various observables and define the corresponding susceptibilities.
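As a concrete check of Eq.~(\ref{an1}), the sketch below (using the same hypothetical two-level model introduced above) integrates the imaginary-time Schr\"odinger equation along a slow linear ramp and compares the numerically obtained transition amplitude with the $r=1$ perturbative prediction; since the per-step renormalization cancels in the ratio $a_1/a_0$, and $a_0\approx 1$, this ratio directly estimates $|\alpha_1|$.
\begin{verbatim}
import numpy as np

# Check of the r=1 APT amplitude: |alpha_1| ~ v |<1|dH/dlam|0>|/(E1-E0)^2,
# for the illustrative toy model H(lam) = -lam*sz - (1-lam)*sx.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
H  = lambda lam: -lam * sz - (1.0 - lam) * sx
dH = -sz + sx                     # dH/dlam for this toy model

v, lam0 = 1e-3, 0.5               # lam(tau) = lam0 + v*tau, tau in [tau0, 0]
tau0 = -lam0 / v                  # so that lam(tau0) = 0
steps = 200000
dtau = -tau0 / steps

psi = np.linalg.eigh(H(0.0))[1][:, 0]        # ground state at lam = 0
for i in range(steps):
    lam = lam0 + v * (tau0 + i * dtau)
    psi = psi - dtau * H(lam) @ psi          # Euler step in imaginary time
    psi /= np.linalg.norm(psi)               # rescaling cancels in a1/a0

E, V = np.linalg.eigh(H(lam0))               # instantaneous basis at tau = 0
a0, a1 = V[:, 0] @ psi, V[:, 1] @ psi
print(abs(a1 / a0))                                          # numerical
print(abs(v * (V[:, 1] @ dH @ V[:, 0])) / (E[1] - E[0])**2)  # Eq. (an1)
\end{verbatim}
The two printed numbers should agree up to corrections of $O(v^2)$ and a small Euler discretization error.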
For the purposes of comparing with the QAQMC approach, Eq.~(\ref{an1}) suffices. \subsection{Operator-product evolution} \label{sub:qaqmc} The quasi-adiabatic QMC method may appear very different from NEQMC but has a similar underlying idea. Instead of imaginary-time propagation with Eq.~(\ref{utau}), we apply a simple operator product to evolve the initial state. We first examine the state propagated with the first $t$ operators in the sequence $P_{t,1}$ in Eq.~(\ref{p1m}), \begin{equation} |\psi_t \rangle=[-\mathcal{H}(\lambda_t)] \dots [-\mathcal{H}(\lambda_2)] [-\mathcal{H}(\lambda_1)] |\psi_0\rangle, \label{psitm} \end{equation} and after that we will consider symmetric expectation values of the standard form $\langle \psi_M |A|\psi_M\rangle$ as well as the asymmetric expectation values in Eq.~(\ref{atdef}). We assume that the spectrum of $-\mathcal H$ is strictly positive, which is accomplished with a suitable constant offset to $\mathcal{H}$ if needed. \subsubsection{Linear protocols} The coupling $\lambda$ can depend on the index $t$ in an arbitrary way. It is convenient to define \begin{equation} \tau_i={i\over T}, \end{equation} where $T$ is the overall time scale, which can be set to unity. The leading non-adiabatic corrections will be determined by the system properties and by the behavior of $\lambda(\tau_i)$ near the point of measurement $t$. The most generic case is a linear dependence, $\lambda(\tau_i)\approx \lambda(t)+\tilde v_\lambda (t-\tau_i)$, where $\tilde v_\lambda$ is related to the quench velocity (see below). At the end of this section we will also briefly consider more general nonlinear quench protocols. Our strategy for analyzing Eq.~(\ref{psitm}) in the adiabatic limit will be the same as in the preceding subsection. We first go to the instantaneous basis and rewrite \begin{equation} |\psi(\tau_i)\rangle\equiv |\psi^i\rangle=\sum_n a_n (\tau_{i}) | n(\lambda_{i})\rangle\equiv \sum_n a_n^i |n^i\rangle . \end{equation} In the instantaneous basis, the discrete Schr\"odinger-like equation $|\psi^{i+1}\rangle=-\mathcal H(\tau_{i+1}) |\psi^i\rangle$ reads \begin{equation} a_n^{i+1}=-\sum_m a_m^i \mathcal E_n^{i+1} \langle n^{i+1} | m^{i}\rangle, \end{equation} and it is instructive to compare this with Eq.~(\ref{eq 2}). It is convenient to first make a transformation \begin{equation} a_n^i=\prod_{j=i+1}^{t} {1\over (-\mathcal E_n^j)} \alpha_n^i. \end{equation} This transformation does not affect the transition amplitude at the time of measurement $t$: $a_n^t=\alpha_n^t$. Then the equation above becomes \begin{equation} \alpha_n^{i+1}=\sum_m \alpha_m^i \left[\prod_{j=i+1}^t {\mathcal E_n^j\over \mathcal E_m^j}\right] \langle n^{i+1} | m^{i}\rangle. \end{equation} Let us introduce a discrete derivative \begin{equation} \langle n^i|\overleftarrow{\Delta} \equiv \langle n^{i+1}|-\langle n^i|, \end{equation} and write the Schr\"odinger-like equation as \begin{equation} \alpha_n^{i+1}=\alpha_n^i+\sum_m \alpha_m^i \left[\prod_{j=i+1}^t {\mathcal E_n^j\over \mathcal E_m^j}\right] \langle n^{i} |\overleftarrow{\Delta}| m^{i}\rangle. \end{equation} In the adiabatic limit, the solution of this equation is $\alpha_n^i=\delta_{n0}$, i.e., the instantaneous ground state. To leading order in the deviations from adiabaticity we find \begin{equation} \alpha_n^{i+1}=C_n+\sum_{k=0}^i\left[\prod_{j=k+1}^t{\mathcal E_n^j\over \mathcal E_0^j}\right]\langle n^{k}|\overleftarrow{\Delta} | 0^{k}\rangle, \end{equation} where $C_n$ can be determined from the initial condition.
In the limit of sufficiently large $t$ the initial state is not important; we should have $\alpha_n^{t-i}\to 0$ for $i\gg 1$, so that $C_n=0$. Therefore we find that the amplitude of the transition to the excited state is approximately \begin{equation} \alpha_n^{t}\approx \sum_{k=0}^{t-1} \left[\prod_{j=k+1}^t{\mathcal E_n^j\over \mathcal E_0^j}\right]\langle n^{k}|\overleftarrow{\Delta} | 0^{k}\rangle . \end{equation} Changing the summation index $k$ to $p=t-k$ we have \begin{equation} \alpha_n^{t}\approx \sum_{p=1}^{t} \left[\prod_{j=t+1-p}^t{\mathcal E_n^j\over \mathcal E_0^j}\right]\langle n^{t-p}|\overleftarrow{\Delta} | 0^{t-p}\rangle. \label{alpha_n} \end{equation} It is clear that for large $t$ only terms with $p\ll t$ contribute to the sum. In the extreme adiabatic limit one can thus move the matrix element outside of the summation and use the spectrum of the final Hamiltonian. In this case we find \begin{eqnarray} \alpha_n^t &\approx& {\mathcal E_n\over \mathcal E_0} {\langle n|\overleftarrow{\Delta} | 0\rangle\over 1-{\mathcal E_n/\mathcal E_0}} \nonumber \\ &=& {-\mathcal E_n\Delta_\lambda} {\langle n|\overleftarrow{\partial_\lambda} | 0\rangle\over \mathcal E_n-\mathcal E_0} = {\mathcal E_n\Delta_\lambda} {\langle n|\partial_\lambda | 0\rangle\over \mathcal E_n-\mathcal E_0}, \label{an2} \end{eqnarray} where $\Delta_\lambda=\lambda(t)-\lambda(t-1)$. By comparing Eqs.~(\ref{an1}) and (\ref{an2}) we see that near the adiabatic limit QAQMC and NEQMC are very similar if ${\mathcal E_n/\mathcal E_0}\approx {\rm const}$. This can in principle always be ensured by having a sufficiently large energy offset, but even with a small offset we expect the ratio to be essentially constant for the range of $n$ contributing significantly when the spectrum becomes gapless close to a quantum-critical point. If this condition is indeed satisfied, then by comparing Eqs.~(\ref{an1}) and (\ref{an2}) we identify the quench velocity as \begin{equation} v_\lambda=\mathcal E_0\Delta_\lambda. \end{equation} This is the main result of this section. We will confirm its validity explicitly in numerical studies with the QAQMC method in Sec.~\ref{sec:results}. Since $\mathcal E_0 \propto N$, where $N$ is the system size, we can also see that $v_\lambda \propto N\Delta_\lambda \propto N/M$ for a given total change in $\lambda$ over the $M$ operators in the product. Let us point out that Eq.~(\ref{an2}) can also be rewritten as \begin{equation} \alpha_n^t \approx -\mathcal E_0\Delta_\lambda {\langle n|\overleftarrow{\partial_\lambda} | 0\rangle\over \mathcal E_n-\mathcal E_0} -\Delta_\lambda \langle n|\overleftarrow{\partial_\lambda} | 0\rangle. \label{an3} \end{equation} The first contribution here exactly matches that of Eq.~(\ref{an1}) while the second term is an additional contribution corresponding to a sudden quench. \subsubsection{Nonlinear protocols} We can extend the above result, Eq.~(\ref{an3}), to arbitrary quench protocols. In particular, consider \begin{equation} \lambda_{t-p}=\lambda_t+{v_\lambda\over (-\mathcal E_0)^r} {p^{r-1}\over (r-1)!}, \end{equation} where $r > 0$ (not necessarily an integer). For $r=1$, we recover the linear protocol analyzed above.
Then we can still rely on Eq.~(\ref{alpha_n}) but need to take into account that \begin{eqnarray} \langle n^{t-p}|\overleftarrow{\Delta} | 0^{t-p}\rangle &\approx& \Delta\lambda_{t-p}\langle n^t|\overleftarrow{\partial_\lambda}|0^t\rangle \nonumber \\ &=&{v_\lambda\over (-\mathcal E_0)^r} {p^{r-1}\over (r-1)!} \langle n^t|\overleftarrow{\partial_\lambda}|0^t\rangle. \end{eqnarray} Thus, we find that \begin{equation} \alpha_n^t\approx {v_\lambda\over (-\mathcal E_0)^r (r-1)!}{\rm Li_{1-r}(\mathcal E_n/\mathcal E_0)}\langle n^t|\overleftarrow{\partial_\lambda}|0^t\rangle, \end{equation} where ${\rm Li}_{1-r}(x)$ is the polylogarithm function. In particular, \begin{mathletters} \begin{eqnarray} && {\rm Li}_0(x)={x\over 1-x}, \\ && {\rm Li}_{-1}(x)={x\over (1-x)^2},\\ && {\rm Li}_{-2}(x)={x(x+1)\over (1-x)^3}. \end{eqnarray} \end{mathletters} Under the conditions discussed above (large offset or small energy gap) we again have $x=\mathcal E_n/\mathcal E_0\to 1$ and then we recover the continuum result using the fact that \[ {\rm Li}_{1-r}(1-\epsilon)\approx {(r-1)!\over \epsilon^r}. \] Then, indeed, \begin{eqnarray} \alpha_n^t =a_n^t &\approx& {v_\lambda\over (- \mathcal E_0)^r (r-1)!} {(r-1)!\over (1-\mathcal E_n/\mathcal E_0)^r}\langle n^t|\overleftarrow{\partial_\lambda}|0^t\rangle \nonumber \\ &=& v_\lambda {\langle n^t|\overleftarrow{\partial_\lambda}|0^t\rangle\over (\mathcal E_n-\mathcal E_0)^r}, \end{eqnarray} which exactly matches Eq.~(\ref{an1}). \subsection{Expectation values} \label{sec:expectation_values} While Eq.~(\ref{atdef}) reduces to the ground-state expectation value of the observable $A$ in the adiabatic limit for all values of $t$, the approach to this limit as $M\to\infty$ is qualitatively different depending on whether $t$ is equal to $M$ or not. More precisely, if $t=\eta M$ where $\eta\in (0,2)$ as $M\to\infty$, we encounter two different asymptotic regimes for $\eta\neq 1$ and $\eta=1$. \subsubsection{Symmetric expectation values; $t=M$} In this case the expectation value of the observable $A$, to leading order in adiabatic perturbation theory, reduces to \begin{equation} \langle A\rangle_{t=M}\approx \langle \psi(v_\lambda) | A|\psi(v_\lambda)\rangle, \end{equation} where $v_\lambda\approx \mathcal E_0\Delta_\lambda$ is the imaginary-time velocity identified earlier. For generic observables not commuting with the Hamiltonian, we find \begin{equation} \langle A \rangle_{t=M} \approx \langle A\rangle_{0}+v_\lambda \chi'_{A\lambda}, \end{equation} where \begin{equation} \chi'_{A\lambda}=\sum_{n\neq 0} \langle 0|A|n\rangle {\langle n| \partial_\lambda|0\rangle\over \mathcal E_n-\mathcal E_0} + c.c. \label{asusc} \end{equation} is the susceptibility. All energies and matrix elements are evaluated at ``time'' $t=M$. For diagonal observables $A$, like the energy or energy fluctuations, we have \begin{equation} \langle A\rangle_{t=M}\approx \langle A\rangle_{0}+v_\lambda^2 \sum_{n\neq 0} {|\langle n| \partial_\lambda|0\rangle|^2\over (\mathcal E_n-\mathcal E_0)^2}\langle n|A|n\rangle. \label{diag_sym} \end{equation} In particular, the correction to the energy is always positive, as it should be for any choice of wave function deviating from the ground state. Let us emphasize that for diagonal observables the leading non-adiabatic response at the symmetric point in imaginary time coincides with that in real time, and thus QAQMC or NEQMC can be used to analyze real-time deviations from adiabaticity, as was pointed out in the case of NEQMC in Ref.~[\onlinecite{degrandi11}].
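The identification $v_\lambda=\mathcal E_0\Delta_\lambda$ and the approach of the symmetric expectation value to the ground-state value can be illustrated with a dense-matrix version of the product evolution Eq.~(\ref{p1m}). The sketch below again uses the hypothetical two-level model, with a constant offset making the spectrum of $-\mathcal H$ strictly positive; the actual QAQMC algorithm, of course, samples this product stochastically rather than multiplying matrices.
\begin{verbatim}
import numpy as np

# Dense-matrix version of the product evolution P_{M,1} for the toy
# model, with an offset so that -H has a strictly positive spectrum.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
offset = 3.0
H = lambda lam: -lam * sz - (1.0 - lam) * sx - offset * np.eye(2)

lam_f = 0.5
for M in [10, 100, 1000, 10000]:
    psi = np.linalg.eigh(H(0.0))[1][:, 0]   # ground state at lam = 0
    for t in range(1, M + 1):
        psi = -H(lam_f * t / M) @ psi       # one factor of P_{M,1}
        psi /= np.linalg.norm(psi)
    # symmetric expectation value <H>_{t=M}, with the offset added back
    print(M, psi @ H(lam_f) @ psi + offset)
print("exact:", np.linalg.eigvalsh(H(lam_f))[0] + offset)
\end{verbatim}
The deviation from the exact ground energy decreases as $v_\lambda^2\propto 1/M^2$, consistent with Eq.~(\ref{diag_sym}) for the diagonal observable $A=\mathcal H$.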
\subsubsection{Asymmetric expectation value, $t \neq M$} \label{subsub:asymmetric} It turns out that the asymptotic approach to the adiabatic limit is quite different for non-symmetric points $t=\eta M$ with $\eta \neq 1$. Without loss of generality we can focus on $0<\eta<1$ (since all expectation values are symmetric with respect to $\eta \to 2-\eta$ for the symmetric protocol we consider \cite{averagenote}). Then the expectation value of $A$ is evaluated with respect to two different states, \begin{equation} \langle A\rangle_t={\langle \psi_L| A|\psi_R\rangle\over \langle \psi_L|\psi_R\rangle}, \end{equation} where \begin{eqnarray} |\psi_R\rangle&=& \mathcal H(\lambda_t) \cdots \mathcal H(\lambda_2)\mathcal H(\lambda_1) |\psi_0\rangle,\nonumber \\ |\psi_L\rangle&=&\mathcal H(\lambda_{t+1}) \cdots \mathcal H(\lambda_{M-1})\mathcal H(\lambda_M) P_ {M,1} | \psi_0\rangle. \label{psi_LR} \end{eqnarray} Note that the overlap $\langle \psi_L|\psi_R\rangle$ is independent of $t$ by construction and is real. It is easy to see that for diagonal observables we obtain the same leading asymptotic behavior as in Eq.~(\ref{diag_sym}) but with the opposite sign of the second term, \begin{equation} \langle A\rangle_{t \neq M}\approx \langle A\rangle_{0}-v_\lambda^2 \sum_{n\neq 0} {|\langle n| \partial_\lambda|0\rangle|^2\over (\mathcal E_n-\mathcal E_0)^2}\langle n|A|n\rangle. \label{eq:asy} \end{equation} In particular, the leading correction to the ground-state energy is negative when $t$ deviates sufficiently from the symmetric point, i.e., $ | \lambda_t - \lambda_1| / v_\lambda \ll M $. There is no contradiction here since the left and right states are different (i.e., we are not evaluating a true expectation value and there is no variational principle). Both Eqs.~(\ref{diag_sym}) and (\ref{eq:asy}) recover the exact result in the adiabatic limit. Since the correction, up to its sign, exactly matches the real-time result, we can still use the non-symmetric expectation value for diagonal observables to extract the real-time non-adiabatic response. For $t \to M$, the sign of the correction should change, to connect smoothly to the variational $t=M$ expectation value. The crossover between positive and negative corrections to the energy takes place around a point that asymptotically converges to $t=M$ in the adiabatic limit (where the deviation from the ground-state energy at $t=M$ vanishes). We will illustrate this with numerical results in Sec.~\ref{ecrossover} (see Fig.~\ref{fig1}). As in the symmetric case, using the APT discussed in the previous section, the results derived here easily extend to other values of the exponent $r$. \subsubsection{The metric tensor and Berry curvature} If $A=-\partial_\mu \mathcal H$, then the susceptibility Eq.~(\ref{asusc}) reduces to the $\mu\lambda$ component of the metric tensor,\cite{degrandi11,adi_long} which, thus, can be readily extracted using the QAQMC algorithm. In particular, the diagonal components of the metric tensor define the more familiar fidelity susceptibility.
Next, we observe that for $t$ sufficiently different from $M$, the approach to the ground state in the left state $|\psi_L\rangle$ in Eq.~(\ref{psi_LR}) effectively corresponds to a change in sign of the velocity, and, thus, we find \begin{equation} \langle A \rangle_t\approx {\langle \psi(-v)|A|\psi(v)\rangle \over \langle \psi(-v)|\psi(v)\rangle}, \label{eq:vmv} \end{equation} \noindent where the wave functions $|\psi(v)\rangle$ and $|\psi(-v)\rangle$ are evaluated at the same value of the coupling, determined by the value of $\eta$. We can use the results of the previous section to find that for off-diagonal observables \begin{equation} \langle A\rangle_t\approx \langle A\rangle_0-i v_\lambda \chi''_{A\lambda}, \label{eq:off} \end{equation} \begin{equation} \chi''_{A\lambda}=i\sum_{n\neq 0} \langle 0|A|n\rangle {\langle n| \partial_\lambda|0\rangle\over \mathcal E_n-\mathcal E_0} - c.c. \end{equation} Based on this result we conclude that the leading non-adiabatic correction is imaginary and coincides, up to a factor of $i$, with the real-time non-adiabatic correction.\cite{adi_long} In particular, for $A=-\partial_\mu \mathcal H$ the susceptibility $\chi''_{A\lambda}=\chi''_{\mu\lambda}$ is proportional to the Berry curvature. The fact that we are getting the opposite sign (compared to the real-time protocol) in the susceptibility for diagonal observables and the Berry curvature for off-diagonal observables away from the symmetric points in Eqs.~(\ref{eq:asy}) and (\ref{eq:off}) is a consequence of general analytic properties of the asymmetric expectation values. As we discuss in Ref.~[\onlinecite{adi_long}], the expectation value Eq.~(\ref{eq:vmv}) is the analytic continuation of the real-time expectation value to imaginary velocity, $v\to iv$. This continuation is valid to all orders in the expansion of the expectation value of $A$ in $v$. \section{Results} \label{sec:results} As a demonstration of the utility of QAQMC and the behaviors derived in the previous section, we here study the TFIM, defined by the Hamiltonian \begin{equation} \label{eq:hamiltonian} \mathcal{H} = - s \sum_{\langle i,j \rangle} \hspace{2pt} \sigma_{i}^z \hspace{1pt} \sigma_{j}^z - (1-s) \sum_{i} \sigma_i^x, \end{equation} \noindent where $\langle i,j \rangle$ denotes nearest-neighbor sites, and $\sigma^z$ and $\sigma^x$ are Pauli matrices. Here, $s$ plays the role of the tuning parameter, which in the simulations reported below will vary from $0$ (where the ground state is trivial) to a value exceeding the quantum-critical point; $s_c=1/2$ in a 1D chain and $s_c \approx 0.247$ on the 2D square lattice.\cite{J.Phys.A.33.6683} We work in the standard basis of eigenstates of all $\sigma^z_i$. The simulation algorithm samples strings of $2M$ diagonal and off-diagonal terms in Eq.~(\ref{eq:hamiltonian}), in a way very similar to the $T>0$ stochastic series expansion (SSE) method, which has been discussed in detail in the case of the TFIM in Ref.~[\onlinecite{PhysRevE.68.056701}]. The modifications for the QAQMC primarily concern the sampling of the initial state, here $|\Psi(0)\rangle = \prod_{i}|\uparrow_i + \downarrow_i\rangle$, which essentially amounts to a particular boundary condition replacing the periodic boundaries in finite-temperature simulations. An SSE-like scheme with such modified boundaries was also implemented for the NEQMC method in Ref.~[\onlinecite{degrandi11}], and recently also in a study of combinatorial optimization problems in Ref.~[\onlinecite{farhi12}].
We here follow the same scheme, using cluster updates in which clusters can be terminated at the boundaries. The implementation for the product with varying coupling $s$ is even simpler than SSE or NEQMC, with the fixed-length product replacing the series expansion of Eq.~(\ref{utau}). The changes relative to Refs.~[\onlinecite{degrandi11}] and [\onlinecite{PhysRevE.68.056701}] are straightforward and we therefore do not discuss the sampling scheme further here. \subsection{Cross-over of the energy correction} \label{ecrossover} As we discussed in Sec.~\ref{sec:apt}, the asymmetric expectation value (\ref{atdef}) of the Hamiltonian has a negative correction to the ground-state energy when $t$ is sufficiently far away from the symmetric point $t=M$. In Fig.~\ref{fig1} we illustrate this property, and the convergence to the ground-state energy for all $t$ with increasing $M$, using simulation data for a small 1D TFIM system. We here plot the results versus the rescaled propagation power $\eta=t/M$. The region of negative deviations moves toward the symmetric point with increasing $M$. Note that the deviations here are not strongly influenced by the critical point (which is within the simulated range of $s$ but away from the symmetric point), although the rate of convergence should also be slow due to criticality. The rate of convergence to the ground state can be expected to be (and is here seen to be) most rapid for $\eta < \eta_{c1}$ and $\eta > \eta_{c2}$. \begin{figure} \includegraphics[width=7cm, clip=true]{fig1.eps} \caption{(Color online) Symmetric and asymmetric expectation values of the Hamiltonian in QAQMC calculations for the 1D TFIM Eq.~(\ref{eq:hamiltonian}) with $N=24$. Here, the evolution was from $s=0$ to $0.6$ and, thus, $s=0.6$ is the symmetric point, here labeled by $\eta=t/M=1$. For $\eta\le 1$, $s=0.6\eta$ and for $\eta\ge 1$, $s=1.2-0.6\eta$, and the critical point $s=1/2$ hence corresponds to $\eta_{c1} \approx 0.833$ and $\eta_{c2} \approx 1.167$. (Bottom) Expectation value and (top) deviation from the true ground-state energy (obtained using Lanczos exact diagonalization).} \label{fig1} \end{figure} \subsection{Quantum-critical dynamic scaling} The idea of dynamic scaling at a critical point dates back to Kibble and Zurek for quenches (also called ramps, since the parameter does not have to change suddenly but can, e.g., change linearly in time with arbitrary velocity) of systems through classical phase transitions. \cite{J.Phys.A.9.1387,Nature.317.505} Here, the focus was on the density of defects. The ideas were later generalized also to quantities more easily accessible in experiments, such as order parameters, and the scaling arguments were also extended to quantum systems.\cite{PhysRevB.72.161201,RevModPhys.83.863,PhysRevLett.95.105701,PhysRevLett.95.245701} The basic notion is that the system has a relaxation time $t_{\rm rel}$, and if some parameter (here a parameter of the Hamiltonian) is changed such that a critical point is approached, the system can stay adiabatic (or in equilibrium) only if the remaining time $t$ to reach the critical point is much larger than the relaxation time, $t \gg t_{\rm rel}$. In general, one expects $t_{\rm rel} \sim \xi^z \sim \epsilon^{-z\nu}$, where $\xi$ is the correlation length, $\nu$ the exponent governing its divergence with the distance $\epsilon$ to the critical point, and $z$ the dynamic exponent.
For a system of finite size (length) $L$, $\xi$ is maximally of order $L$ and, thus, for a linear quench the critical velocity $v_{\rm crit}$ separating slow and fast quenches should heuristically be given by $v_{\rm crit} \sim L^{-(z + 1/\nu)}$, and for a power-law quench with exponent $r$ according to Eq.~(\ref{vrdef}) this generalizes to \cite{PhysRevB.81.224301} \begin{equation} \label{eq:critical_v} v_{\rm crit} \sim L^{-(zr + 1/\nu)}. \end{equation} One then also expects a generalized finite-size scaling form for singular quantities $A$, \begin{equation} A(L,\epsilon) = L^\kappa f(\epsilon L^{1/\nu}, v L^{zr+ 1/\nu}), \label{eq:universal_f} \end{equation} where $\kappa$ characterizes the leading size-dependence at the critical point of the quantity considered. For $v \to 0$, Eq.~(\ref{eq:universal_f}) reduces to the standard equilibrium finite-size scaling hypothesis. This scaling was recently suggested and tested in different systems, both quantum~\cite{deng_08,PhysRevB.81.224301, kolodrubetz_12} and classical~\cite{chandran_12}. The above expression Eq.~(\ref{eq:universal_f}) combined with the product-evolution Eq.~(\ref{p1m}) allows us to study a phase transition based on different combinations of scaling in the system size and the velocity in non-equilibrium setups. For example, if one wants to find the critical point for the phase transition and the exponent $\nu$ is known, one can carry out the evolution under the critical-velocity condition: \begin{equation} \label{eq:const} v L^{z+1/\nu} = c, \end{equation} where $c$ is a constant. In this paper, we focus on linear quench protocols and set $r=1$ henceforth. As we discussed in Sec.~\ref{sub:qaqmc}, the QAQMC method applied to a system of size (volume) $N$, based on evolution with $M$ operators in the sequence and change $\Delta_\lambda$ between each successive operator, corresponds to a velocity $v \propto N \Delta_\lambda \propto N/M$, with the prefactor depending on the ground-state energy (at the critical point). The exact prefactor will not be important for the calculations reported below, and for convenience in this section, we define \begin{equation} v = s_{\rm f} \frac{N}{M}, \label{vnmdef} \end{equation} where $s_{\rm f}$ is the final value of the parameter $s$ in Eq.~(\ref{eq:hamiltonian}) over the evolution (which is also the total change in $s$, since we start with the eigenstate at $s=0$). The critical product-length $M$ is, thus, given by \begin{equation} M = \frac{1}{c}NL^{z+1/\nu} = \frac{1}{c}L^{d+z+1/\nu}, \label{mscaled} \end{equation} where we have also for simplicity absorbed $s_{\rm f}$ into $c$. Using an arbitrary $c$ of order $1$ in Eq.~(\ref{eq:const}), the critical point $s_c$ can be obtained based on a scaling function with the single argument $\epsilon L^{1/\nu}$ in Eq.~(\ref{eq:universal_f}). We will test this approach here, in Secs.~\ref{sub1d} and \ref{sub2d}, and later, in Sec.~\ref{subfurther}, we will show that exact knowledge of the exponents in Eq.~(\ref{eq:const}) is actually not needed. First, we discuss the quantities we consider in these studies.
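Before doing so, we note that the velocity condition amounts to simple arithmetic; the sketch below evaluates Eqs.~(\ref{vnmdef}) and (\ref{mscaled}) for a sequence of 1D system sizes ($d=z=\nu=1$), with the constant $c=4^3/240$ quoted for the 1D simulations below, and the last printed column confirms that $vL^{z+1/\nu}$ is held fixed.
\begin{verbatim}
# Worked example of Eqs. (vnmdef) and (mscaled) for the 1D TFIM
# (d = 1, z = 1, 1/nu = 1), with the constant c = 4^3/240 used in
# the 1D simulations.
d, z, inv_nu = 1, 1.0, 1.0
s_f, c = 0.6, 4**3 / 240.0

for L in [4, 8, 16, 32, 64]:
    N = L**d
    M = round(L**(d + z + inv_nu) / c)   # sequence length, Eq. (mscaled)
    v = s_f * N / M                      # velocity, Eq. (vnmdef)
    print(L, M, v, v * L**(z + inv_nu))  # last column stays constant
\end{verbatim}
Note that $M$ grows as $L^{d+z+1/\nu}=L^3$ here, reaching $\approx 10^6$ for the largest sizes, consistent with the run lengths quoted below.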
\subsection{Quantities studied} We will focus our studies here on the squared $z$-component magnetization (order parameter), \begin{equation} \label{eq:m2} m_z^2 = \Big\langle \frac{1}{N^2} \bigg( \sum_i^N \sigma_i^z \bigg)^2 \Big\rangle . \end{equation} We can also define a susceptibility-like quantity (which we will henceforth refer to as the susceptibility) measuring the magnetization fluctuations: \begin{equation} \label{eq:susceptibilty} \chi = N ( \left\langle m_z^2 \right\rangle - \left\langle |m_z| \right\rangle^2). \end{equation} \noindent Here both terms have the same critical size-scaling as the equal-time correlation function: \begin{equation} \label{eq:msquare} \langle m_z^2\rangle \sim \left\langle |m_z| \right\rangle^2 \sim L^{-(d+z-2+\eta)}, \end{equation} where $d$ is the spatial dimensionality. The prefactors of the two quantities are different, however, and a divergent peak therefore remains in Eq.~(\ref{eq:susceptibilty}) at the transition. Away from the critical point $\chi \to 0$ with increasing system size. To clarify our use of $\chi$, we point out that we could also just study the scaling of $\left\langle m_z^2 \right\rangle$, but the peak produced when subtracting off the second term in Eq.~(\ref{eq:susceptibilty}) is helpful in the scaling analysis. According to Eq.~(\ref{eq:universal_f}) and using $z=1$ in Eq.~(\ref{eq:msquare}), the full scaling behavior of the fluctuations around the critical point should follow the form \begin{equation} \label{eq:scaled_susceptibilty} \chi \sim L^{1-\eta} \hspace{2pt} f \big( (s-s_c) L^{1/\nu}, v L^{1 + 1/\nu } \big), \end{equation} for any dimensionality $d$. We should point out here that the true thermodynamic susceptibility based on the Kubo formula \cite{PhysRevB.43.5950} (imaginary-time integral) yields a stronger divergence, $\sim L^{2-\eta}$. This quantity is, however, more difficult to study with the QAQMC algorithm, because, unlike in standard finite-$T$ QMC methods, the time integration cannot simply be carried out within the space of time-evolving Hamiltonians in Eq.~(\ref{p1m}) and Eq.~(\ref{atdef}). The standard Feynman-Suzuki correspondence between the $d$-dimensional quantum and $(d+1)$-dimensional classical systems is not realized in our scheme. The configuration space of time-evolving Hamiltonians builds in the relaxation time, $t_{\rm rel}$, in a different way, not just in terms of equilibrium fluctuations in the time direction, but in terms of evolution as a function of a time-dependent parameter. A useful quantity to consider for extracting the critical point is the Binder cumulant,\cite{PhysRevLett.47.693} \begin{equation} \label{eq:binder} U = \frac{3}{2} \bigg( 1 - \frac{1}{3} \dfrac{ \left\langle m_z^4 \right\rangle }{ \left\langle m_z^2 \right\rangle^2 } \bigg). \end{equation} For a continuous phase transition, $U$ converges to a step function as $L \to \infty$. The standard way to analyze this quantity for finite $L$ is to graph it versus the argument $s$ for different $L$ and extract crossing points, which approach the critical point with increasing $L$. Normally, this is done in equilibrium, either by taking the limit of the temperature $T \to 0$ for each $L$ first, or by fixing $\beta=1/T \propto L^z$ if $z$ is known. Here, the latter condition is replaced by Eq.~(\ref{eq:const}), but, as we will discuss further below, the condition can be relaxed and the exponents do not have to be known accurately \textit{a priori}.
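For reference, the estimators just defined take only a few lines to evaluate from sampled magnetizations; in the sketch below, Gaussian fake data merely stand in for actual QMC measurements of $\sum_i\sigma^z_i$ and, being characteristic of a disordered phase, should give $U\approx 0$.
\begin{verbatim}
import numpy as np

# Estimators of m_z^2, chi, and the Binder cumulant U from sampled
# total magnetizations; Gaussian fake data stand in for QMC output.
rng = np.random.default_rng(1)
N = 64                                         # number of spins (hypothetical)
Mz = rng.normal(0.0, np.sqrt(N), size=100000)  # samples of sum_i sigma^z_i

m   = Mz / N
m2  = np.mean(m**2)                            # <m_z^2>
chi = N * (m2 - np.mean(np.abs(m))**2)         # susceptibility-like quantity
U   = 1.5 * (1.0 - np.mean(m**4) / (3.0 * m2**2))  # Binder cumulant
print(m2, chi, U)   # U ~ 0 for Gaussian (disordered-phase) fluctuations
\end{verbatim}
In the ordered phase, where the magnetization distribution is sharply double-peaked, the same estimator instead gives $U\to 1$, which is what produces the crossing points used below.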
Our approach can also be used to determine the exponents, either in a combined procedure of simultaneously determining the critical point and the exponents, or with a simpler analysis after first determining the critical point. We have up until now only considered calculations of equal-time observables, but, in principle, it is also possible and interesting to study correlations in the evolution direction, which can also be used to define susceptibilities. In the following we will illustrate various scaling procedures using results for the 1D and 2D TFIMs. The dynamic exponent $z=1$ is known for both cases, and in the 1D case all the exponents are rigorously known since they coincide with those of the classical 2D Ising model. For the 2D TFIM, the exponents are known rather accurately based on numerics for the 3D classical model. \subsection{1D transverse-field Ising model} \label{sub1d} \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig2.eps} \caption{(Color online) Results of typical QAQMC runs for the 1D TFIM, Eq.~(\ref{eq:hamiltonian}). The Binder cumulant Eq.~(\ref{eq:binder}) (bottom) and the susceptibility $\chi$ Eq.~(\ref{eq:susceptibilty}) (top) are graphed versus $s$ for several system sizes $L$. In these simulations, which spanned the range $s \in [0,0.6]$, the length of the index sequence was of the form Eq.~(\ref{mscaled}), i.e., with the exponents applicable in this case, $M=L^3/c$, with the arbitrary constant chosen to be $c=4^3/240$.} \label{fig2} \end{figure} The 1D TFIM provides a rigorous testing ground for the new algorithm and scaling procedures since it can be solved exactly.\cite{sachdev_qpt} The critical point corresponds to the ratio between the transverse field and the spin-spin coupling equaling $1$, i.e., $s=1/2$ in the Hamiltonian Eq.~(\ref{eq:hamiltonian}). The critical exponents, known through the mapping to the 2D Ising model,\cite{Prog.Theor.Phys.56.1454} are $\nu=1$ and $\eta = 1/4$. The results presented here were obtained in simulations with the parameter $s$ spanning the range $[0,s_{\rm f}]$ with $s_{\rm f}=0.6$, i.e., going from the trivial ground state of the field term to well above the critical point. Fig.~\ref{fig2} shows examples of results for the susceptibility and the Binder cumulant. The operator-sequence length $M$, Eq.~(\ref{p1m}), was scaled with the system size in order to stay at the critical velocity according to Eq.~(\ref{mscaled}). We emphasize again that a single run produces a full curve within the $s$-range used. In order to focus on the behavior close to criticality, we have left out the results for small $s$ in Fig.~\ref{fig2}. Since $M$ is very large (up to $\approx 10^6$ for the largest $L$ in the cases shown in the figure), we also do not compute expectation values for each $t$ in Eq.~(\ref{atdef}), but typically space measurements $\propto N$ operators apart. Extracting Binder curve-crossings using system-size pairs $L$ and $L+4$, with $L=4, 8, 12, \dots, 60$, and extrapolating the results to $L \rightarrow \infty$, we find $s_c = 0.49984(16)$, as illustrated in Fig.~\ref{fig3}. Thus, the procedure produces results in full agreement with the known critical point. \begin{figure} \includegraphics[width=8cm, clip=true]{fig3.eps} \caption{ (Color online) Results of a Binder-crossing scaling analysis of the 1D TFIM data in Fig.~\ref{fig2} (including also other system sizes not shown there). Crossing points were extracted based on system sizes $L$ and $L+4$, with $L=4, 8, \dots, 60$.
The curve is a fit to the form \cite{PhysRevLett.47.693} $s_c(L) = s_c + a/L^b$, with $s_c=0.49984(16)$ and $b=1.6(1)$. } \label{fig3} \end{figure} The dynamical scaling of the susceptibility is illustrated in Fig.~\ref{fig4}. Here, there are no adjustable parameters at all, since all exponents and the critical coupling are known (and we use the exact critical coupling $s_c=1/2$, although the numerical result extracted above is very close to this value and produces an almost identical scaling collapse). While some deviations from a common scaling function are seen for the smaller systems and far away from the critical point in the scaled variable $(s-s_c)L$, the results for larger sizes and close to the peak rapidly approach a common form. This behavior confirms in practice our discussion of the definition of the velocity and the ability of the QAQMC method to correctly take into account at least the first corrections to the adiabatic evolution. \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig4.eps} \caption{ (Color online) Scaled susceptibility of the 1D TFIM. The axes have been scaled according to the form Eq.~(\ref{eq:scaled_susceptibilty}) with the second argument constant and using the exact critical point $s_c=1/2$. The results are shown on two different scales to make visible deviations (due to subleading size and velocity corrections) from the common scaling function far away from criticality as well as the good data collapse close to the critical point.} \label{fig4} \end{figure} \subsection{2D transverse-field Ising model} \label{sub2d} The 2D transverse-field Ising model provides a more serious test for our algorithm since it cannot be solved exactly. Among many previous numerical studies,\cite{J.Phys.C.4.2370, PhysRevB.57.8494,J.Phys.A.33.6683} Ref.~[\onlinecite{J.Phys.A.33.6683}] arguably has the highest precision so far for the value of the critical coupling ratio. Exact diagonalization was there carried out for lattices of size up to $6 \times 6$. In terms of the critical field $h_c=1-s$ in units of the coupling $J=s$, the critical point was determined to be $h_c/J = 1/0.32841(2) = 3.04497(18)$, where the error bar reflects estimated uncertainties in finite-size extrapolations. Results based on QMC simulations \cite{J.Phys.C.4.2370, PhysRevB.57.8494} are in agreement with this value, but the statistical errors are larger than the above extrapolation uncertainty. One might worry that the system sizes $L\le 6$ are very small and the extrapolations may not reflect the true asymptotic $L\to \infty$ size behavior. However, the data points do follow the functional forms expected based on the corresponding low-energy field theory, and there is therefore no \textit{a priori} reason to question the results. It is still useful to try to reach similar or higher precision with other approaches, as we will do here with the QAQMC method combined with dynamic scaling. \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig5.eps} \caption{(Color online) Binder crossings for the 2D TFIM extracted using $L$ and $L+4$ systems with $L=4, 8, \dots, 56$. The crossing points have been fitted to the standard form \cite{PhysRevLett.47.693} $s_c(L) = s_c + a/L^b$, for which the optimal values are $s_c=0.247244(4)$ and $b=4.0(1)$.
The results are shown on two different scales to illustrate large deviations from the fitted form for the smaller systems, followed by a rapid convergence for larger sizes.} \label{fig5} \end{figure} In this case we simulate the linear quench in the window $s \in [0,0.3]$, which contains the previous estimates for the critical value, $s_c \approx 0.247$, as discussed above. Although we could also carry out an independent scaling analysis to extract the critical exponents, we here choose to just use their values based on previous work on the classical 3D Ising model: $1/\nu \approx 1.59$ and $\eta \approx 0.036$.\cite{PhysRevB.59.11471} Our goal here is to extract a high-precision estimate of the critical coupling, and, at the same time, to further test the ability of QAQMC to capture the correct critical scaling behavior. We again scale $M$ with $L$ according to Eq.~(\ref{mscaled}), with the constant $c = 4^{4.59}/32$. As in the 1D case, we extract Binder-cumulant crossing points based on linear system sizes $L$ and $L+4$ with $L = 4, 8, \dots, 56$. Fig.~\ref{fig5} shows the results versus $1/L$ along with a fit to a power-law correction \cite{PhysRevLett.47.693} for $s_c(L)$. Extrapolating to infinite size gives $s_c = 0.247244(4)$, which corresponds to a critical field (in units of $J$) $h_c/J = 3.04458(7)$. This is in reasonably good agreement with the value obtained in Ref.~[\onlinecite{J.Phys.A.33.6683}] and quoted above, with our (statistical) error bar being somewhat smaller. To our knowledge, this is the most precise value for the critical coupling of this model obtained to date. We emphasize that we here relied on the non-equilibrium scaling ansatz to extract the equilibrium critical point. Allowing for deviations from adiabaticity in a controlled way and utilizing the advantages of the QAQMC algorithm allowed us to extract observables in the whole range of couplings in a single run. This requires considerably less computational resources than standard equilibrium simulations, which must be repeated for several different couplings in order to carry out the crossing-point analysis. Fig.~\ref{fig6} shows the susceptibility scaled according to the behavior expected with Eq.~(\ref{eq:universal_f}) when the second argument is held constant. As in the 1D case, the data converge rapidly with increasing size toward a common scaling function in the neighborhood of the transition point, again confirming the correct quasi-adiabatic nature of the QAQMC method. \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig6.eps} \caption{(Color online) Scaled susceptibility of the 2D TFIM, based on Eq.~(\ref{eq:scaled_susceptibilty}) with a constant second argument. Here we have used $1/\nu = 1.59$ and $\eta = 0.036$ for the classical 3D Ising model \cite{PhysRevB.59.11471}.} \label{fig6} \end{figure} \subsection{Further tests} \label{subfurther} The results discussed in the preceding subsections were obtained with the KZ velocity condition Eq.~(\ref{eq:const}), applied in the form of Eq.~(\ref{mscaled}) tailored to the QAQMC approach, with specific values for the constant $c$. In principle, the constant is arbitrary, but the non-universal details of the scaling behavior depend on it.
This is analogous to the dependence on the shape, e.g., an aspect ratio, of a system in equilibrium simulations at finite temperature, or to the way the inverse temperature $\beta=1/T$ is scaled as $aL^z$ with arbitrary $a$ in studies of quantum phase transitions (as an alternative to taking the limit $\beta \to \infty$ for each lattice size). The critical point and the critical exponents should not depend on the choices of such shape factors or limiting procedures. To extract the critical coupling in the preceding subsections, we fixed the exponents $\nu$ and $z$ at their (approximately) known values, and one may at first sight assume that it is necessary to use their correct values. It is certainly sometimes convenient to do so, in order to set the second argument of the scaling function Eq.~(\ref{eq:universal_f}) to a constant and, thus, obtain a simpler scaling function depending on a single argument. However, one can study critical properties based on the scaling approach discussed above as long as the velocity approaches zero as the system size increases. This observation can be important in cases where the critical exponents are not known and one would like to obtain an accurate estimate of the critical coupling before carrying out a scaling analysis to study exponents. We will test this in practice here. As we will discuss further below, one should use a different power $\kappa$ in the scaling ansatz Eq.~(\ref{eq:universal_f}) if the velocity is brought to zero slower than the critical form. In cases where we use the ``wrong'' values of the exponents, we formally replace $z+1/\nu$ by a free parameter $\alpha$, \begin{equation} v \sim L^{-\alpha}/c, \label{valphascaled} \end{equation} and make the corresponding substitution in Eq.~(\ref{mscaled}). To understand the scaling of the observables for arbitrary $\alpha$, we return to the general scaling form given by Eq.~(\ref{eq:universal_f}). In the case of the Binder cumulant and for a linear quench protocol, this form reads \begin{equation} U = f \big((s-s_c) L^{1/\nu}, v L^{z+1/\nu}\big). \label{eq:universal_g} \end{equation} As we pointed out above, when the velocity scales exactly as $L^{-(z+1/\nu)}$, the second argument of the scaling function is constant and we can find the crossing point in a standard way, as we did in Figs.~\ref{fig3} and \ref{fig5}. Suppose that we do not know the exponents $\nu$ and $z$ \textit{a priori} and instead scale $v$ as in Eq.~(\ref{valphascaled}). Then there are three possible situations: (i) $\alpha=z+1/\nu$, (ii) $\alpha>z+1/\nu$, and (iii) $\alpha<z+1/\nu$, where we have already analyzed scenario (i). In scenario (ii), where the velocity scales to zero faster than the critical KZ velocity, the second argument of the scaling function, $\propto L^{z+1/\nu}/L^{\alpha}$, approaches zero as the system size increases and, thus, the scaling function effectively approaches the equilibrium velocity-independent form. We can then extract the crossing point as in the first scenario, and this gives the correct critical coupling in the limit of large system sizes. Finally, in case (iii) the velocity scales to zero slower than the critical KZ value and the second argument in Eq.~(\ref{eq:universal_g}) diverges, which implies that the system enters a strongly non-equilibrium regime. This scenario effectively corresponds to taking the thermodynamic limit first and the adiabatic limit second.
In this non-equilibrium regime, if the system is initially on the disordered side of the transition, the Binder cumulant vanishes in the thermodynamic limit. At finite but large system sizes its approach to zero should be given by the standard Gaussian theory: \begin{equation} U\approx \frac{C}{L^d}. \label{binder_therm} \end{equation} Combining this with the scaling ansatz Eq.~(\ref{eq:universal_g}) we find that for $\alpha<z+1/\nu$, the expected asymptotic form of the Binder cumulant is \begin{equation} U\approx L^{-d}\,v^{-d/(z+1/\nu)}\, \tilde f \big((s-s_c) L^{1/\nu} \big), \label{rescaled_binder} \end{equation} where $\tilde f$ is some other velocity-independent scaling function. Thus we can find the correct transition point by locating crossing points of $UL^d v^{d/(z+1/\nu)}$. Similar considerations apply to the ordered side of the transition, where the Binder cumulant approaches one as the inverse volume. \begin{figure} \includegraphics[width=8cm]{fig7.eps} \caption{ (Color online) Critical-point estimates based on curve crossings of appropriately scaled quantities for scenarios (i)-(iii) discussed in the text. The Binder cumulant (bottom) and the squared magnetization (top) give estimates $s_c(L)$ and $s'_c(L)$, respectively, based on system sizes $L$ and $L+4$. The red and blue curves correspond to runs in which the velocity was kept at the critical value, scenario (i), but with different constants of proportionality $c$ in Eq.~(\ref{mscaled}); $c_1 = 4^{4.59}/32$ and $c_2 = 4^{4.59}/48$. The yellow curves were obtained with the velocity decreasing faster than $v_{crit}$ with $L$, scenario (ii), with the proportionality constant $c_3= 4^5/32$. The green and pink curves correspond to cases where the velocity exponent is sub-critical (the velocity decays slower than the critical form), scenario (iii), with constants $c_4= 4^{4.2}/32$, $c_5 = 4^4/32$. In all cases, power-law corrections were fit in order to extrapolate to infinite size (with small sizes excluded until statistically sound fits were obtained).} \label{fig7} \end{figure} The three cases are illustrated in the lower panel of Fig.~\ref{fig7}, which shows Binder-cumulant crossings extracted from appropriately scaled data in cases (i), (ii), and (iii) above. Additionally, to illustrate the insensitivity to the choice of the constant $c$ in the scaled sequence length in Eq.~(\ref{mscaled}), results based on two different constants are shown for case (i). In all cases, the extrapolated critical couplings agree with each other to within statistical errors. Note that, on the one hand, if the exponent $\alpha$ gets very large, then the simulation time, which scales as $M$, increases rapidly with the system size and the algorithm becomes inefficient. On the other hand, if $\alpha$ is very small, our results indicate that the size dependence is stronger and it is more difficult to carry out the extrapolation to infinite size. The optimal value of $\alpha$ should be as close as possible to the critical KZ power, but to be on the safe side when scaling according to the standard KZ critical form, case (i), one may choose a somewhat larger value, since the faster-decaying velocity of case (ii) obeys the same scaling form. Next we illustrate how the same idea works in the case of the order parameter. Around the critical point ($s_c$, $v_{\text{crit}}$), the squared magnetization [see Eq.~(\ref{eq:m2})] can be written as \begin{eqnarray} m_z^2 & = & L^{-2\beta/\nu} f \big( (s- s_c) L^{1/\nu}, v L^{z+1/\nu} \big).
\label{mz2} \end{eqnarray} As in the previous discussion we scale $v\sim L^{-\alpha}$ and, depending on the exponent $\alpha$, there are two different asymptotic forms of the scaling function. For $\alpha\geq z+1/\nu$ the second argument vanishes or approaches a constant, so we effectively recover the equilibrium scaling \begin{eqnarray} m_z^2 & = & L^{-2\beta/\nu} f \big( (s- s_c) L^{1/\nu}\big). \end{eqnarray} If, conversely, $\alpha<z+1/\nu$, then on the disordered side of the transition $m_z^2$ scales as $L^{-d}$. This immediately determines the asymptotic form of the scaling function in Eq.~(\ref{mz2}): \begin{equation} \label{eq:rescaled_m2} m_z^2 = L^{-d}\, v^{\frac{2\beta/\nu-d}{z+1/\nu}}\, \tilde f\big( (s- s_c) L^{1/\nu}\big). \end{equation} Equation (\ref{eq:rescaled_m2}) can be used in the same way as the Binder cumulant to extrapolate the critical point, using the standard form \cite{PhysRevLett.47.693} $s'_c(L) = s'_c + a/L^b$ for the rescaled $m_z^2$. As shown in the top panel of Fig.~\ref{fig7}, after rescaling the order parameter and extrapolating the crossing points between the appropriately rescaled $m_z^2$ curves to the thermodynamic limit, all curves, obtained from below or above the adiabatic limit Eq.~(\ref{eq:critical_v}), converge to the same value $s'_c \approx 0.247$. This approach also suggests a way to determine the transition point experimentally: one can sweep through the critical point at different velocities, and the crossing point can then be extracted from measurements of the order parameter. It is also worth mentioning that, since one can extrapolate the critical point independently without knowing the actual exponent $\nu$ prior to the simulation, an optimization procedure can be carried out to determine the exponents after the simulation.\cite{classical_quench.unpublished} For completeness we also briefly discuss the role of the final point $s_{\rm f}$ of the evolution. Fig.~\ref{fig8} shows 2D results for the squared magnetization Eq.~(\ref{eq:m2}) and susceptibility Eq.~(\ref{eq:susceptibilty}) obtained for a range of final points above the critical value. Here the velocity was kept constant for all cases. The values of the computed quantities at some fixed $s$, e.g., at $s_c$, show a weak dependence on $s_{\rm f}$ for the lowest-$s_{\rm f}$ runs. The deviations are caused by contributions of order $v^2$ and higher, which are non-universal as discussed in Sec.~\ref{sub:qaqmc}. For very high velocities the dependence on $s_{\rm f}$ can be much more dramatic than in Fig.~\ref{fig8}, but this is not the regime in which the QAQMC should be applied to study universal physics. \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig8.eps} \caption{(Color online) Squared magnetization (bottom) and susceptibility (top) vs $s$ of the 2D TFIM with $L=12$. In these runs, different curves correspond to different end points $s_{\rm f}$ of the evolution, with the velocity $v \propto s_{\rm f}N/M$ kept constant. The $s_{\rm f}=0.3$ curve is from the simulation shown in Sec.~\ref{sub2d}.} \label{fig8} \end{figure} \section{Summary and Discussion} \label{sec:conclusions} We have presented a nonequilibrium QAQMC approach to study quantum dynamics, with a simple product of operators with an evolving coupling replacing the standard Schr\"odinger time evolution.
We showed that this approach captures the leading non-adiabatic corrections to the adiabatic limit, both by analytical calculations based on the APT and by explicit simulations of quantum-critical systems with the QAQMC algorithm. The simulation results obey the expected generalized dynamic scaling with known static and dynamic critical exponents. We also extended the scaling formalism beyond results obtained previously in Ref.~[\onlinecite{degrandi11}]. We analyzed the leading non-adiabatic corrections within this method and showed that they can be used to extract various non-equal-time correlation functions, in particular, the Berry curvature and the components of the metric tensor. A clear advantage of the QAQMC approach is that one can access the whole range of couplings in a single run. Being a simple modification of projector QMC, the QAQMC method is applicable to the same class of models as conventional projector QMC schemes---essentially models for which ``sign problems'' can be avoided. As an illustration of the utility of QAQMC, we applied the algorithm and the scaling procedures to the 1D and 2D TFIMs. The expected scaling behaviors are observed very clearly. In the 1D case we extracted a critical coupling in full agreement with the known value, and in 2D we obtained an estimate with unprecedented (to our knowledge) precision (small error bars): $(h/J)_c=3.04458(7)$. Based on repeating the fitting procedures with different subsets of the data, we believe that any systematic errors due to subleading corrections neglected in the extrapolations should be much smaller than the statistical errors, and, thus, we consider the above result unbiased. The QAQMC approach bears some similarities to previous implementations of {\it quantum annealing} within QMC algorithms.\cite{Santoro06,Bapst12} However, the previous works have mainly considered standard equilibrium QMC approaches in which some system parameter is changed as a function of the {\it simulation time}. This evolution is not directly related to true quantum dynamics (and, thus, is not really quantum annealing), but is dependent on the particular method used to update the configurations. In contrast, in our scheme, as in the NEQMC method introduced in Ref.~[\onlinecite{degrandi11}], the evolution takes place {\it within} the individual configurations, and there is a direct relationship to true Schr\"odinger evolution in imaginary time. In Green's function (GF) QMC simulations the gradual change of a system parameter with the simulation time is rather closely related to the QAQMC scheme (since there, too, one applies a series of Hamiltonian terms to a state), with the difference being that QAQMC uses true importance sampling of configurations, with no need for guiding wave functions and no potential problems related to mixed estimators. Our asymmetric expectation values could be considered as a kind of mixed estimator as well, but we have completely characterized them within the APT. In addition, the previous uses of GFQMC with time-evolving Hamiltonians have, to our knowledge, never addressed the exact meaning of the velocity of the parameter evolution. The correct definition of the velocity is of paramount importance when applying quantum-critical scaling methods, as we have discussed here. We have here computed the velocity within the APT for the QAQMC scheme.
The same relationship with Schr\"odinger dynamics may possibly hold for GFQMC as well, but we have not applied the APT to this case, and it is therefore not yet clear whether GFQMC can correctly capture the same universal non-equilibrium susceptibilities as the QAQMC and NEQMC methods. We expect QAQMC to be superior to time-evolving GFQMC, because of its better control over measured symmetric and asymmetric expectation values and its fully realized importance sampling. Some variants of GFQMC use true importance sampling, e.g., the Reptation QMC (RQMC) method,\cite{PhysRevLett.82.4745} which also avoids mixed estimators. The configuration space and sampling in the QAQMC method bear some similarities to RQMC, recent lattice versions of which also use SSE-inspired updating schemes.\cite{PhysRevE.82.046710} However, to our knowledge, imaginary-time evolving Hamiltonians have not been considered in RQMC and in other related variants of GFQMC, nor has the role played by the velocity when crossing the quantum critical point been stressed. This has so far been our focus in applications of the QAQMC and NEQMC methods. In principle one could also implement time evolution similar to that of QAQMC within the RQMC approach. We also stress that we have here not focused on optimization. Previous works on quantum annealing within QMC schemes have typically focused on their abilities to optimize difficult classical problems. While the QAQMC may potentially also offer some opportunities in this direction, our primary interest in the method is to use it to extract challenging dynamical information under various circumstances. A recent theoretical analysis of optimization within sign-problem free QMC approaches \cite{Hastings13} is not directly applicable to the QAQMC and NEQMC approaches, but generalizations should be possible. The QAQMC and NEQMC methods provide correct realizations of quantum annealing in imaginary time. Besides their ability to study dynamic scaling, with exponents identical to those in real-time Schr\"odinger dynamics,\cite{degrandi11} it will be interesting to explore what other aspects of real-time dynamics can be extracted with these methods. In particular, their applicability to quantum glasses, of interest in the context of quantum adiabatic computing \cite{farhi12} as well as in condensed matter physics, deserves further study. The ability of the QAQMC to produce results for a whole evolution path in a single run can in principle also be carried over to the conventional Schr\"odinger imaginary-time evolution with $U(\tau)$ in Eq.~(\ref{utau}). By ``slicing'' the time evolution into $K$ successive evolutions over a time segment $\Delta_\tau$, \begin{equation} U(\tau)=\prod_{n=1}^K T_{\tau} {\rm exp}\left [ - \int_{\tau_{n-1}}^{\tau_n} d \tau \mathcal{H}[\lambda(\tau)] \right ], \label{utauk} \end{equation} where $\tau_n=n\Delta_\tau$, one can evaluate matrix elements analogous to Eq.~(\ref{atdef}) by inserting the operator of interest at any point within the product of time-slice operators in $\langle \Psi(\lambda_0)|U^*(\tau)U(\tau)|\Psi(\lambda_0)\rangle$. In this case, the symmetric expectation value, evaluated at the mid-point, is identical to the NEQMC method,\cite{degrandi11} and the asymmetric expectation values will exhibit properties similar to those discussed in Sec.~\ref{subsub:asymmetric}.
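To make the structure of Eq.~(\ref{utauk}) concrete, the following toy sketch evolves a two-site TFIM in sliced imaginary time and measures an observable after each slice. It is a dense-matrix illustration, not our simulation code: each time-ordered exponential is approximated by a single exponential with a piecewise-constant coupling, and the Hamiltonian convention $\mathcal{H}(s) = -s\sum \sigma^z_i\sigma^z_j - (1-s)\sum \sigma^x_i$ is inferred from the quoted relation $h_c/J=(1-s_c)/s_c$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
ZZ = np.kron(sz, sz)
X = np.kron(sx, I2) + np.kron(I2, sx)
H = lambda s: -s * ZZ - (1 - s) * X

def sliced_evolution(psi, s_grid, dtau):
    # apply prod_n exp(-dtau H(s_n)), renormalizing after each
    # slice (imaginary-time evolution is not unitary)
    states = []
    for s in s_grid:
        psi = expm(-dtau * H(s)) @ psi
        psi = psi / np.linalg.norm(psi)
        states.append(psi)
    return states

psi0 = np.linalg.eigh(H(0.0))[1][:, 0]   # trivial s = 0 ground state
Mz = 0.5 * (np.kron(sz, I2) + np.kron(I2, sz))
for psi in sliced_evolution(psi0, np.linspace(0, 0.6, 120), 0.1)[::30]:
    print(psi @ (Mz @ Mz) @ psi)         # <m_z^2> along the sweep
\end{verbatim}
Inserting the operator at a point other than the end of the product would give analogs of the asymmetric estimators discussed above.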
We have not yet explored this approach, and it is not clear whether it would have any other advantage besides the exact reduction to Schr\"odinger dynamics of the symmetric expectation values. In practice the simulations will be more complex than the QAQMC approach because of the need to sample integrals, but not much more so than the NEQMC method. It should be relatively easy to adapt the RQMC method with an evolving Hamiltonian in this formulation of the time evolution. Finally, we point out that, in principle, one can also carry out a {\it one-way evolution} with the QAQMC algorithm. Instead of starting with the $\lambda=\lambda_0$ eigenstate at both $\langle \psi_L|$ and $|\psi_R\rangle$ and then projecting them to the $\lambda = \lambda_M$ eigenstate using two sequences of the form Eq.~(\ref{p1m}), one can make $\langle \psi_L|$ correspond to $\lambda_0$ and let it evolve to $|\psi_R\rangle$ corresponding to $\lambda_M$ with only a single operator sequence of length $M$. In the case of the TFIM Eq.~(\ref{eq:hamiltonian}), the obvious choice is then to evolve from $s=0$ to $s=1$ (the classical Ising model), so that both edge states are trivial. All our conclusions regarding the definition of the velocity and the applicability of the scaling forms remain valid in this one-way QAQMC. Results demonstrating this in the case of the 1D TFIM are shown in Fig.~\ref{fig9}. We anticipate that this approach may be better than the two-way evolution in some cases, but we have not yet compared the two approaches extensively. \begin{figure} \includegraphics[width=7.5cm, clip=true]{fig9.eps} \caption{(Color online) One-way evolution $s \in [0,1]$ with QAQMC for the 1D TFIM. (Bottom) The susceptibility Eq.~(\ref{eq:susceptibilty}). (Top) The rescaled susceptibility Eq.~(\ref{eq:scaled_susceptibilty}). Each full curve corresponding to a given chain length $L$ was obtained in a single run. The constant for the critical-velocity condition Eq.~(\ref{eq:const}) was held at $4^3/80$.} \label{fig9} \end{figure} \begin{acknowledgments} We acknowledge collaboration with Claudia De Grandi in a related work and would like to thank Mike Kolodrubetz for valuable comments. This work was supported by the NSF under Grant No.~PHY-1211284. \end{acknowledgments}
\section{Accelerating path integral molecular dynamics} Most of the difficulties associated with the execution of path integral simulations, as well as the ideas to accelerate them, can be understood in terms of the properties of the classical ring polymer that arises from the quantum-classical isomorphism~\cite{chan-woly81jcp}. Within the path integral formulation of quantum statistical mechanics (see BOX 1), the quantum mechanical partition function $Z=\operatorname{tr} [ e^{-\beta \hat{H}}]$ (where $\beta=1/k_{B}T$ is the inverse temperature and $\hat{H}$ is the Hamiltonian operator of the system) is mapped onto the classical partition function $Z_P \sim \int e^{-\frac{\beta}{P} H_P}$, corresponding to the Hamiltonian \begin{equation} H_P = \sum_{j=1}^P \frac{ \mathbf{p}_j^2 }{2m} + V(\mathbf{q}_j) + \frac{1}{2}m \omega_P^2 (\mathbf{q}_j-\mathbf{q}_{j+1})^2. \label{eq:trotterham} \end{equation} This so-called ring polymer Hamiltonian represents $P$ copies of the physical system, for each of which the potential $V(\mathbf{q})$ must be computed. Adjacent replicas are connected by harmonic springs of frequency $\omega_P=P/(\beta\hbar)$, with cyclic boundary conditions $j+P\equiv j$ (see Fig.~\ref{fig:intro}a). Since each replica of the system is just an independent classical realization of the chemical system and the only interaction between the replicas arises from the computationally cheap harmonic springs, the cost of evaluating the ring polymer potential energy grows linearly with $P$. Position-dependent observables can be obtained by simply evaluating them at the position $\mathbf{q}$ of any of the replicas. Momentum-dependent observables require the use of more complicated estimators that depend on replica-replica correlations, which has led to considerable effort being devoted to the development of more efficient forms~\cite{cao-bern89jcp,yama05jcp,ceri-mark13jcp,chen-ceri14jcp,kara-vani17jcp}. This issue arises because the momenta $\mathbf{p}$ appearing in Eq.~\ref{eq:trotterham} do not correspond to the physical momenta but are simply a sampling device, with no explicit physical meaning, although techniques such as CMD and RPMD attempt to use them to construct approximations to quantum time correlation functions to obtain dynamical properties. Sampling the classical ring polymer Hamiltonian (Eq.~\ref{eq:trotterham}) is considerably simpler than solving the original quantum mechanical problem for high-dimensional, many-particle problems. In addition, since the ring polymer Hamiltonian is just a classical Hamiltonian in an extended phase space, the plethora of techniques that have been developed to accelerate the sampling, integration and thermostatting of classical molecular dynamics simulations can usually be easily adapted for use in PIMD simulations. However, there are many evident and hidden challenges to be faced. Most obviously, evaluating $H_P$ is $P$ times more demanding than evaluating the classical ($P=1$) Hamiltonian. Convergence is determined by the highest-frequency physical vibrations, and the asymptotic discretization error decays slowly, as $\mathcal{O}(P^{-2})$. Furthermore, from the last term in Eq.~\ref{eq:trotterham} one can see that the ring polymer Hamiltonian contains frequencies of the order of $\omega_P$, which is, at convergence, many times higher than the maximum physical frequency.
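For concreteness, Eq.~\ref{eq:trotterham} can be transcribed directly into a few lines of Python (a minimal sketch for a single particle in one dimension, with an arbitrary illustrative potential and units chosen so that $\hbar=1$):
\begin{verbatim}
import numpy as np

def ring_polymer_energy(p, q, V, m, beta, hbar=1.0):
    # p, q: length-P arrays of replica momenta and positions;
    # V: callable physical potential, whose P evaluations dominate
    # the cost for an ab initio surface
    P = len(q)
    omega_P = P / (beta * hbar)
    # cyclic boundary conditions q_{P+1} = q_1 via np.roll
    springs = 0.5 * m * omega_P**2 * np.sum((q - np.roll(q, -1))**2)
    return np.sum(p**2) / (2.0 * m) + np.sum(V(q)) + springs

# example: 32 replicas of a quartic oscillator at beta = 8
rng = np.random.default_rng(0)
E = ring_polymer_energy(rng.normal(size=32), rng.normal(size=32),
                        V=lambda x: 0.25 * x**4, m=1.0, beta=8.0)
\end{verbatim}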
These fast, highly harmonic ring-polymer modes have generally been considered a hard sampling problem that calls for a reduced integration time step and aggressive thermostatting. In fact, the quasi-harmonic nature of the high-frequency vibrations -- both physical and stemming from the spring term in Eq.~\ref{eq:trotterham} -- and the fact that they are the limiting factor to the convergence of quantum observables underpin all of the most recent and successful approaches to reduce the overhead of path integral methods. \subsection{Efficient integrators and thermostats for PI} Efficiently integrating the path integral Hamiltonian in Eq.~\ref{eq:trotterham} presents two main challenges arising from the high frequencies of order $\omega_{P}$ that appear in it. First, these frequencies are typically much higher than the maximum physical frequency, limiting the time step that can be used. Second, the highest of these harmonic frequencies are spectrally well separated from those of the physical system, which makes energy exchange between them highly inefficient, leading to significant sampling and ergodicity problems. However, both of these issues can be addressed, so that path integral simulations can be integrated with time steps that are usually the same as those used in the corresponding classical simulation. To increase the integration time step, one can exploit the fact that the free ring polymer Hamiltonian, i.e. the one without $V(\mathbf{q}_j)$, can be transformed into the normal mode or staging representations~\cite{Sprik1985,tuck+93jcp,tuck+96jcp}. These transformations decouple the adjacent replicas from each other, and hence in both of these representations the free ring polymer Hamiltonian, which includes the high-frequency ring polymer modes, can be integrated analytically, allowing large time steps to be used \cite{ceri+10jcp}. In addition, since the momenta in the kinetic energy term in Eq.~\ref{eq:trotterham} are introduced purely for sampling, one can also mitigate the integration issues even further by modifying them to bring down the frequency of the highest ring polymer modes \cite{tuck+93jcp,tuck+96jcp}. To achieve efficient sampling and avoid ergodicity issues, one can recognize that in the normal mode or staging representations of the free ring polymer, the frequencies of each mode are known analytically and hence can be targeted with optimally coupled thermostats. This has led to deterministic schemes based on Nos\'e-Hoover chains~\cite{tuck+93jcp,tuck+96jcp} and stochastic schemes using either targeted white noise or colored noise to optimally sample these modes~\cite{ceri+10jcp}. Once these modes are optimally coupled, one is left with just the classical problem of how to efficiently thermostat the diffusive centroid degrees of freedom. Together, these approaches, combined with new integrators that give additional stability\cite{liu+16jcp,Mouhat2017}, have alleviated most of the issues that have previously limited time steps and efficiency in path integral simulations. \subsection{Ring polymer contraction} Ring polymer contraction (RPC) provides a framework for reducing the cost of evaluating the potential energy and forces on the ring polymer \cite{Markland2008}.
In contrast to the methods discussed in the following sections, RPC does not reduce the total number of replicas, $P$, but instead provides a contracted representation of the imaginary time path consisting of $P'$ replicas on which the computationally costly part of the potential energy can be evaluated. RPC exploits the fact that the strong springs between the replicas keep them spatially close, and hence that any smoothly varying interaction can be approximated with negligible error on a much smoother representation of the imaginary time path with fewer replicas (see Fig.~\ref{fig:intro}b and c)\cite{Markland2008,Markland2008b,Fanourgakis2009}. In particular, a contraction scheme is characterized by defining a transformation matrix that takes the full $P$-replica description and maps it to a $P'$-replica one~\cite{Markland2008}. The original, and by far most commonly employed, contraction scheme \cite{Markland2008,Markland2008b,Fanourgakis2009,Marsalek2016,Marsalek2017,kapi+16jcp,John2016} involves transforming to the normal mode representation of the free ring polymer, discarding the $P-P'$ highest normal modes, and then transforming back to the Cartesian representation. More recently other procedures such as averaging contraction \cite{Geng2015} and stride contraction\cite{Cheng2016b} have occasionally also been used, although the latter leads to unstable dynamics in some cases. Once the forces have been evaluated on the contracted $P'$-replica ring polymer, they are projected back onto the full $P$-replica ring polymer and combined with other force components before propagating the ring polymer dynamics. To benefit from RPC, a reference system is required that approximates the rapidly varying forces present in the system. The reference forces are subtracted from the full system's forces to leave a smoothly varying difference force, which can be evaluated on the contracted $P'$-replica ring polymer. If this reference system is chosen such that its cost is negligible compared to that of the smoothly varying forces, one can decrease the cost of the force evaluations by a factor of $P/P'$. However, it is vital to note that the reference force only has to leave a slowly varying remainder. In fact, the reference force can be something that would give a very poor result for the dynamics and structure of the system if used alone (without the difference force which corrects for its deficiencies). Early applications of RPC with empirical potentials employed reference systems based on splitting the inter- and intramolecular forces \cite{Markland2008}, on range separation of the Coulomb potential \cite{Markland2008b}, and, in polarizable force fields, on splitting the contributions to the polarization \cite{Fanourgakis2009}. More recently, RPC has been applied to systems with ab initio potential energy surfaces \cite{Geng2015,Marsalek2016,Marsalek2017,kapi+16jcp,John2016}, in a number of cases obtaining dramatic speed-ups \cite{Marsalek2016,Marsalek2017}. The use of a reference system has natural origins in the multiple time scale (MTS) molecular dynamics methodology, where the reversible reference system propagator algorithm (r-RESPA)\cite{Tuckerman1992} was formulated as a method to allow efficient propagation in molecular dynamics simulations with force components that vary on different time scales.
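Before developing this connection further, the normal-mode contraction and back-projection steps described above can be sketched in a few lines (a bare-bones illustration using a Fourier transform along the replica axis; normalization conventions and the handling of the Nyquist mode differ between production implementations):
\begin{verbatim}
import numpy as np

def contract(q, P_out):
    # map P bead positions (axis 0) to P_out contracted beads by
    # discarding the highest free-ring-polymer normal modes, i.e.
    # the highest Fourier modes along the imaginary-time axis
    P = q.shape[0]
    modes = np.fft.rfft(q, axis=0)[: P_out // 2 + 1]
    return np.fft.irfft(modes, n=P_out, axis=0) * (P_out / P)

def back_project(f, P_out):
    # interpolate forces from the contracted beads back onto P_out
    # beads; a constant force maps onto the same constant
    P_in = f.shape[0]
    modes = np.fft.rfft(f, axis=0)
    padded = np.zeros((P_out // 2 + 1,) + modes.shape[1:], complex)
    padded[: modes.shape[0]] = modes
    return np.fft.irfft(padded, n=P_out, axis=0) * (P_out / P_in)
\end{verbatim}
In an RPC force evaluation, the costly difference force would be computed on \texttt{contract(q, P')} and spread back with \texttt{back\_project}, before adding the cheap reference force evaluated on all $P$ beads.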
Whereas MTS schemes exploit the slowly varying nature of some forces in real time to extend the propagation time step, RPC takes advantage of the smooth variation of the forces along the imaginary time path. The considerations needed for a good reference force in the two approaches are thus similar, and so it is natural to utilize MTS and RPC simultaneously\cite{Marsalek2016,kapi+16jcp,Marsalek2017}. RPC is a highly appealing approach as it also allows one to calculate approximate dynamics within the CMD or RPMD frameworks and can further be combined with acceleration approaches that also reduce the overall $P$, such as those discussed below. \subsection{High-order PI} The slow convergence of PIMD with the number of replicas $P$ is a consequence of the fact that the kinetic ($\hat{T}$) and potential energy ($\hat{V}$) terms in the high-temperature Boltzmann operator $e^{-\beta \hat{H}/P}$ do not commute. Because of this, the commonly adopted Trotter factorization (see BOX 1) introduces an error that is second order in the path discretization $\beta\hbar/P$. Several more accurate factorizations have been proposed that include corrections that depend on the commutator $[\hat{V},[\hat{T},\hat{V}]]$. This factorized form of $\operatorname{Tr} e^{-\beta \hat{H}}$ can also be mapped onto a classical sampling problem for $P$ replicas, which converges to the quantum limit with a leading-order error of $\mathcal{O}(P^{-4})$. However, the corresponding higher-order Hamiltonian~\cite{suzu95pla,chin97pla,taka-imad84jpsj} contains a term proportional to $\left|V'\right|^2$. The associated forces, which are needed in a PIMD scheme, involve the second derivatives of the potential. This higher-derivative information is crucial to obtain a more effective path integration (see Fig.~\ref{fig:intro}d), but its calculation is impractical for all but the simplest potential energy models. To circumvent this problem, most high-order PIMD simulations have used reweighting schemes~\cite{jang-voth01jcp,yama05jcp,pere-tuck11jcp,mars+14jctc} that first generate a conventional path integral trajectory and then weight different snapshots with a factor that depends on the exponential of the difference between the Trotter and the high-order Hamiltonians. Unfortunately, due to the exponential form, the weights vary wildly and introduce statistical inefficiency that is exacerbated for large-scale systems~\cite{ceri+12prsa}. Recent solutions to this problem include using a truncated cumulant expansion of the weighted average~\cite{polt-tkat16cs}, which avoids the statistical problems and has been shown to introduce remarkably small systematic errors. Alternatively, one can borrow some ideas from path integral Monte Carlo calculations~\cite{buch-vani13cpl} and develop a finite-difference scheme to evaluate the troublesome second derivatives of the potential. It then becomes possible to sample high-order path integral Hamiltonians explicitly with molecular dynamics~\cite{kapi+16jcp2}, with only a marginal overhead relative to a conventional PIMD simulation with the same $P$. High-order schemes are particularly attractive for simulations below room temperature, or whenever an accuracy in quantum (free) energies of a few meV/atom is required. \subsection{Colored-noise methods} All of the approaches we discussed so far focus on efficiently performing Boltzmann sampling of the classical ring-polymer Hamiltonian.
A distinctly different approach instead builds on the intuition that quantum mechanical fluctuations can be effectively mimicked by breaking the classical fluctuation-dissipation theorem. In the 1980s, Ford, Kac and others developed a ``quantum Langevin equation''~\cite{ford-kac87jsp,ford+88pra} to model the coupling between a system and a quantum mechanical bath. More recently, several groups have proposed to use a Generalized Langevin Equation (GLE) to enforce the frequency-dependent effective temperature $T^\star(\omega)=(\hbar\omega/2k_B)\coth(\beta\hbar\omega/2)$, which mimics the effect of quantizing the nuclear degrees of freedom~\cite{buyu+08pre,ceri+09prl2,damm+09prl}. Some of these methods are virtually exact when studying the thermodynamic properties of perfectly harmonic systems. Unfortunately, in the presence of anharmonic couplings, energy flows between high-frequency (hot) and low-frequency (cold) modes, which results in significant deviations from the desired $T^\star(\omega)$~\cite{ceri+09prl2}. While this zero-point energy leakage can be controlled~\cite{ceri+10jctc} and some of the dynamical disturbances induced by the GLE corrected~\cite{ross+18jcp}, it would be desirable to extend the method in such a way that it can be systematically converged. Such convergence can be achieved using path integral + GLE (PI+GLE) techniques. The general idea of these methods is to design an effective $T^\star_P(\omega)$ such that a $P$-replica PIMD simulation would give converged results in the harmonic limit for any value of $P$~\cite{ceri+11jcp}. It should be stressed that $T^\star_P(\omega)$ is designed to reproduce the marginal distribution of individual replicas, which is enough to accelerate convergence for any structural observable, but not to converge some of the more exotic estimators that depend on the overall distribution of the path. While it is possible to include further constraints on the distribution, e.g. to accelerate convergence of an estimator of the kinetic energy (PIGLET method~\cite{ceri-mano12prl}), one should keep in mind that $T^\star_P(\omega)$ may not be sufficient to converge all estimators when computing more complex properties, such as equilibrium isotope fractionation ratios~\cite{ceri-mark13jcp}. GLE methods can be applied to any empirical or ab initio potential and combined with the other acceleration techniques. They have been shown to provide a dramatic acceleration in the convergence, up to 100-fold when applied at cryogenic temperatures~\cite{uhl+16jcp}, and can typically model aqueous systems at room temperature with as few as 6 replicas. In conclusion, a large tool-kit of methods to accelerate path integral simulations now exists, each of which possesses its own benefits and pitfalls. To provide further guidance, BOX 2 summarizes each method's strengths and weaknesses and provides some practical advice for choosing the best combination for a given modelling scenario. Finally, it is also worth stressing that many of these methods can be used simultaneously to accelerate path integral simulations even further. \section{Applications} The combination of increased computational power and algorithmic developments over the past decade, discussed in the previous section, has opened the door to applying imaginary time path integral simulations to a wide range of systems.
In particular, these advances have facilitated the use of path integrals with ab initio descriptions of the potential energy surface, allowing reactive processes to be studied in areas ranging from biology to materials science. Here we will primarily focus on these recent applications. However, the application of imaginary time path integrals to chemical systems has a long and rich history of pioneering developments and applications~\cite{Tuckerman1997,Marx1998,marx+99nat} spanning a period of over 40 years. For these we refer the reader to earlier reviews \cite{Berne1986,Rossky1991,cepe95rmp,Marx1999Rotors,MarxHutter,marx06cpc,Paesani2009,habe+13arpc,Ceriotti2016}. \subsection{Aqueous and Biological Systems} Hydrogen-bonded systems and those involving proton networks have been the target of a large number of path integral studies \cite{Ceriotti2016}. Because the proton can vary from being only slightly shared to being fully delocalized between the hydrogen bond donor and the acceptor, depending on the strength of the hydrogen bond, quantum effects can play a major role. In recent years it has become clear that NQEs have a somewhat ambivalent impact on the stability of the H bond: quantum fluctuations along the covalent bond permit increased proton sharing and hence strengthen binding, whereas fluctuations in the orthogonal directions facilitate hydrogen bond breaking.~\cite{Habershon2009,Li2011,McKenzie2014} The delicate balance between such ``competing quantum effects''\cite{Habershon2009} makes accurate PIMD simulations crucial for predicting the correct trends, and so the acceleration techniques we discussed in the previous sections have been instrumental to the understanding of these effects in many different systems. In particular, the ability to perform ab initio path integral simulations has allowed the role of competing quantum effects to be investigated in systems that involve reactive events such as proton transfer or delocalization. One problem where the analysis of competing quantum effects has been particularly useful is the study of fractionation between isotopes. Isotope fractionation has recently been investigated for hydrogen/deuterium between the liquid and vapor phases of water\cite{Markland2012,Wang2014}, in water and ion clusters \cite{Videla2014,Videla2015}, and at the liquid-vapor interface of water \cite{Liu2013}, as well as for lithium isotopes between aqueous solution and phyllosilicate minerals \cite{Dupuis2017}. These isotope fractionation ratios, which would be zero if NQEs were neglected, are exploited extensively in applications ranging from monitoring climatic temperature shifts \cite{Zachos2001,Worden2007} to assessing whether low barrier hydrogen bonds are present in biological systems \cite{Harris2000,McKenzie2015}. Indeed, it has recently been shown how one can use the relationship between the difference in the quantum kinetic energy of isotopes in two phases and the fractionation between those phases to provide an approximation of the total quantum kinetic energy for a nucleus in a given chemical phase \cite{chen+16jpcl}. This connection offers an alternative approach to obtaining the absolute quantum kinetic energies of particles, requiring only thermodynamic measurements, whose results can be contrasted with the values obtained from deep inelastic neutron scattering experiments \cite{Andreani2005,Pantalei2008}.
A recent ab initio path integral study probed the total free energy change arising from the inclusion of NQEs on the hydrogen bonding of DNA base pairs \cite{Fang2016}, building on earlier studies of the role of NQEs in these systems \cite{Perez2010}. Here competing quantum effects give rise to the initially counter-intuitive result, shown in Fig.~\ref{fig:dna}, that while at 300~K the hydrogen bonds are strengthened when NQEs are included, when the temperature is lowered to 100~K the strengthening effect decreases to almost zero for all the combinations of base pairs studied. This can be understood in a similar way to liquid-vapor isotope fractionation in water, where the differing temperature dependence of the two competing quantum effects tunes the extent to which they cancel at a given condition, leading to crossovers in the dominant effect \cite{Markland2012,Wang2014}. Many hydrogen bonds fall in the range of O--O donor-acceptor distances around 2.8~\AA~ where the competition between quantum effects is high at 300~K \cite{ross+15jpcl}. However, it is important to note that even in these cases the effect on different properties of the system varies dramatically. For example, in water recent advances have allowed the evaluation of NQEs using potential energy surfaces fitted to high-level ab initio calculations \cite{Reddy2016} and generated on-the-fly using density functional theory (DFT) \cite{Marsalek2017}. These studies, which give excellent agreement with the structural and dynamical properties of water when NQEs are included, suggest that the NQEs on the diffusion constant and O--O radial distribution function of liquid water are small. However, large changes can still occur in properties that depend sensitively on the proton position. Examples of this include the degree to which the proton is shared in the hydrogen bond \cite{ceri+13pnas,Wang2014}, which in turn determines the amount of charge transfer between water molecules and from water molecules to ions upon hydrogen bonding \cite{Schran2017}. Electronic properties, such as electronic absorption spectra and electron affinities, can also be highly sensitive to chemical bond lengths and proton positions and hence can exhibit marked NQEs \cite{ceri+13pnas, Hollas2016,Sappati2016}. For instance, GLE methods have been used to assess the quantum effects on the redox properties of small aqueous species in water, where large effects ($\sim$0.3~eV) on vertical electron attachment and detachment energies were observed \cite{Rybkin2017}. Hence, even at conditions where one might expect large cancellation due to competing quantum effects on heavy atom properties, the cancellation will not apply equally to all properties, particularly those that depend on the proton positions, where the NQEs can still be pronounced. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig2.pdf} \caption{(a) Structure of hydrogen-bonded base pair complexes of adenine-thymine (AT) and cytosine-guanine (CG). (b) The binding free energy change due to NQEs as a function of temperature in the AT and CG base pairs obtained from PIMD simulations. Negative values correspond to NQEs strengthening the hydrogen bonds between the base pairs and positive values to weakening. The dashed lines show the harmonic predictions. \emph{Adapted from Ref.~\cite{Fang2016}}\label{fig:dna}} \end{figure} The balance of the competing effects is tuned by the hydrogen bond strength, the temperature, and the particular chemical property being considered.
Biological systems are frequently able to position hydrogen bonds at much shorter distances than observed in liquids, leading them to be further away from the conditions under which NQEs cancel significantly. These low barrier hydrogen bonds, which typically occur when the donor-acceptor distance falls below $\sim$2.6~\AA, are in the regime where the NQE favoring proton delocalization dominates over the one favoring hydrogen bond disruption. This effect plays a major role in altering the acidity of the intermediate analog state of ketosteroid isomerase shown in Fig.~\ref{fig:ksi}. In this case, PIMD simulations using GLE acceleration have shown that NQEs cause a 10,000-fold change in the acidity constant of a key tyrosine active site residue by allowing quantum proton delocalization\cite{wang+14pnas,Wang2016}, and that this leads to the large active site electric fields~\cite{Wang2017} that have recently been shown to be correlated with its enzymatic efficiency \cite{Fried2014}. In addition, the high concentration of strong hydrogen bonds in protein fibrils leads the protons to be extensively delocalized when NQEs are included, and the combination of these two effects has been suggested to be critical in their fluorescence properties \cite{Pinotsi2016}. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig3.png} \caption{Classical and quantum proton sharing distributions in the ketosteroid isomerase enzyme from ab initio molecular dynamics simulations with classical and quantum nuclei. Upon deprotonation of the Tyr57 residue the protons from Tyr16 and Tyr32 can delocalize quantum mechanically to stabilize the residue, as shown in the top panels. The bottom panels show the probability distribution along the two proton sharing coordinates, $\nu_{16}$ and $\nu_{32}$, where nuclear quantum effects are shown to markedly increase the sharing. \emph{Adapted from Ref.~\cite{wang+14pnas}}\label{fig:ksi}} \end{figure} While PIMD is now being frequently employed in studies of static properties, the extension of RPC to ab initio molecular dynamics\cite{Marsalek2016,kapi+16jcp} has recently opened the door to the calculation of dynamical properties in condensed phase systems with the approximate CMD and RPMD approaches and their more recent PA-CMD\cite{hone+06jcp} and TRPMD\cite{ross+14jcp} variants (see BOX 3). Until recently the calculation of dynamical properties using these methods was limited to empirical potentials or gas phase molecules. This was due to the combination of the long time scales (usually $>$100~ps) required to converge these properties and the fact that, of the acceleration methods mentioned in this review, only RPC is able to provide dynamics that are compatible with the CMD and RPMD methods. Explicit evaluation of the full TRPMD Hamiltonian is possible, and has recently been used to investigate the role of proton transport in water wires \cite{ross+16jpcl}, but is very costly. However, when an appropriate reference potential is available, nanosecond-long path integral simulations on hybrid DFT surfaces for systems of more than 200 atoms are now well within reach \cite{Marsalek2017}. Recent work has used the latest RPC developments to investigate the role of NQEs on the diffusion, orientational dynamics and IR and Raman spectra of liquid water at 300 K using TRPMD and PA-CMD \cite{Marsalek2017}. This work has revealed that at room temperature NQEs cause a red shift of $\sim$200 cm$^{-1}$ of the OH stretching peak.
In addition, it has shown that the accuracy of some simple functionals when used in classical MD is due to a cancellation of errors arising from their incorrect treatment of the anharmonicity along the hydrogen bond. More expensive ``hybrid'' functionals are thus needed, and these can only be afforded when the acceleration techniques we discuss here are used to ameliorate their cost. \subsection{Materials science and matter in extreme conditions} Computational materials modelling has benefited tremendously from the development of accurate and relatively inexpensive electronic structure calculations that can describe the properties and stability of the most diverse families of compounds without resorting to \emph{ad hoc} empirical potentials~\cite{burk12jcp,klim-mich12jcp,Marzari2016}. Furthermore, interatomic potentials are more and more often built upon high-end electronic structure reference calculations~\cite{behl-parr07prl,bart+10prl} and as such provide a description of the bare Born-Oppenheimer potential energy surface (i.e., one without NQEs effectively parameterized in for a particular state point). As discussed above, however, there are many properties, from free energies to heat capacities and particle momentum distributions, for which the quantum nature of nuclei can be as important as the underlying description of electronic effects. Indeed, this is often the case for materials containing light elements, at room temperature and below. The availability of methods to treat NQEs more or less inexpensively has made it much easier to probe the qualitative impact they have on complex materials science problems. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig4.pdf} \caption{(a) Particle-momentum distribution $p^2 n(p)$ for a \ce{Li2NH} polycrystalline sample, as measured by deep inelastic neutron scattering (black line), and as computed using a quantum thermostat (full red line). A Maxwell-Boltzmann distribution at the experimental temperature $T=300$~K (blue line), and the curve computed for a Debye crystal based on the ab initio density of states (dashed red line) are also drawn for reference. (b) The three-dimensional PMD for three proposed crystal structures of \ce{Li2NH}: that of Ref.~\citenum{mice+11prb} (left), that of Ref.~\citenum{muel-cede06prb} (center), and that of Ref.~\citenum{magy+06prb} (right). The PMD would make it possible to discriminate between the three proposals, which differ mostly in the orientation of the \ce{NH} groups. \emph{Adapted from Ref.~\cite{ceri+10prb}}\label{fig:imide}} \end{figure} Methods that rely solely on GLEs to include the NQEs have been used particularly often. One of the first real-world applications involved the study of the particle-momentum distribution (PMD) in the hydrogen-storage material \ce{Li2NH}~\cite{ceri+10prb}. Despite the inherent limitations of pure GLE methods in terms of accuracy, this example demonstrated that it is possible to capture the deviation of the particle-velocity distribution from Maxwell-Boltzmann behavior -- an entirely quantum mechanical effect -- achieving semi-quantitative agreement with deep inelastic neutron scattering experiments. It is important to note that, despite the quantum thermostat approach\cite{ceri+09prl2,ceri+10jctc} being designed to work in the harmonic limit, it captures some anharmonic effects such as the softening of the high-$p$ tail of the distribution, which can be seen in Figure~\ref{fig:imide}a by contrasting the quantum thermostat results with those obtained for the purely harmonic Debye crystal.
GLEs are also appealing for this application as they make it possible to extract the PMD from direct inspection of the particle velocities (including directionally-resolved information, see Fig.~\ref{fig:imide}b), whereas to obtain this information from a path integral simulation one would have to use open paths, which adds an additional layer of complexity~\cite{morr-car08prl,lin+10prl}. Other early studies that used colored-noise thermostatting included the determination of the graphite-to-diamond coexistence line, which is bent at room temperature and below due to zero-point fluctuations~\cite{khal+10prb}, and the assessment of the role of quantum delocalization in controlling the balance between Eigen-like and Zundel-like configurations in crystalline \ce{HCl} hydrates, materials that can be seen as a simplified model of the hydrated proton~\cite{hass+12jacs}. Isotope effects on the lattice dynamics of \ce{LiH} and \ce{LiD} have also been elucidated, using ab initio descriptions of the forces~\cite{Dammak2012}. More recently, these approaches have been used to study the role of NQEs in problems as complex as the shock-wave compression of fused silica~\cite{Shen2016}, thermal vibrations of carbon nanotubes~\cite{Liu2015}, simulated transmission electron microscopy images~\cite{Lofgren2016}, and phase transitions under high pressure in pure~\cite{Bronstein2014} and salt-doped~\cite{Bronstein2016} water ices. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig5.pdf} \caption{Hydrogen-hydrogen radial distribution functions for different phases of solid hydrogen -- $C2c$ (left), $Cmca-12$ (center) and $Pbcn$ (right). All plots correspond to a generalized-gradient approximation density functional, with simulations performed at $T=200$~K. Solid red and dashed blue curves correspond to PI+GLE and classical simulations, respectively. Different curves correspond (top to bottom) to pressure values of 350, 300, 250 and 200 GPa. \emph{Adapted from Ref.~\citenum{mora+13prb}}\label{fig:hydrogen}} \end{figure} While GLE-based schemes may suffice to assess the qualitative impact of NQEs on the properties of materials, path-integral methods are needed to ensure quantitative accuracy. PI+GLE and PIGLET methods provide the ability to systematically increase the accuracy of the calculation. These methods have made it possible to assess the effect of NQEs in determining the subtle energy balance between different polymorphs of molecular crystals~\cite{ross+16prl}, a class of systems for which free-energy differences of a few tens of meV/molecule suffice to change the stability ranking. A combination of path integrals and the quantum thermal bath approach has been used to study ferroelectrics and fuel-cell materials~\cite{brie+16jctc}. NQEs are particularly important for high-pressure physics, a field that probes the behavior of matter in extreme conditions, such as those found in stars or in giant planets, which are difficult to replicate experimentally. At GPa pressures and above, the increased confinement of the nuclei means that their quantum mechanical behavior becomes important even well above room temperature. Examples of simulations probing NQEs under these conditions include the study of the transition between molecular and atomic liquid hydrogen at high pressure and temperature~\cite{mora+13prl}, and the dissociation of water in the GPa pressure regime~\cite{ceri+14cpc}. At temperatures around or below room temperature the case for using path integral sampling is particularly compelling.
Figure~\ref{fig:hydrogen} shows the dramatic change in the computed radial distribution functions of different phases of solid hydrogen, using DFT combined with the PI+GLE method~\cite{mora+13prb}. Quantum fluctuations substantially smooth out structural correlations, and it is clear that only a simulation that includes them can be in agreement with the experimental structural and thermodynamic properties. The use of colored-noise thermostats combined with PIMD has made it possible to reach an accurate description of these fluctuations with a reasonable computational effort, without having to compromise on the accuracy of the underlying potential energy surface. The possibility of directly using ab initio potentials, and even of coupling quantum Monte Carlo with PIMD~\cite{arXiv:1708.07344}, is particularly appealing in applications to exotic states of matter, for which the development of empirical potentials that can span multiple phases is particularly challenging. It is also worth mentioning a few recent studies that have used PIMD without utilizing acceleration schemes, as they underscore the urgency of making quantum simulations of nuclear degrees of freedom more affordable. For example, PI simulations have been crucial in determining the impact of NQEs on the delocalization of H atoms and their diffusivity in proton-conducting media such as triflic acid hydrates~\cite{haye+09jpcb} and concentrated phosphoric acid~\cite{vilc+12nchem,Heres2016}, as well as in perovskite oxides~\cite{Zhang2008} and metals such as iron~\cite{kimi+11prb}, and on Ni surfaces~\cite{Suleimanov2012}. Simulations of crystals in high-pressure conditions, such as the high-temperature superconducting \ce{SH3} phase of hydrogen sulfide~\cite{erre+15prl,erre+16nature}, constitute another field in which accessible and accurate modelling of NQEs will significantly increase the predictive power of atomistic simulations. \section{Outlook and Ongoing Challenges} The number and variety of recent publications investigating the role of NQEs that we have discussed (by no means an exhaustive selection) should leave no doubt about the importance of this class of physical phenomena. Developing better methods to model NQEs has become particularly urgent, since potential energy surfaces are increasingly obtained (either directly or by statistical learning) from electronic-structure calculations, which yield the bare Born-Oppenheimer potential, rather than from empirical force fields that can account for NQEs in an effective manner. To address this growing need, in the last few years several algorithms have been introduced that reduce the computational effort needed to accurately treat the quantum fluctuations of nuclei. Whereas previously a path integral calculation could easily have been tens to hundreds of times more demanding than a classical molecular dynamics simulation, state-of-the-art methods now curb this cost to barely more than that of a classical simulation. One of the remaining challenges limiting the widespread adoption of the toolbox of methods (see BOX 2) to accelerate modelling of NQEs lies in their technical nature and the fact that, until recently, they were not incorporated into many of the major atomistic simulation codes. Many of these methods have now been added to commonly used simulation packages~\cite{vand+05cpc,plim95jcp,Eastman2013,Tuckerman2000}.
Further, the wrapper code i-PI~\cite{ceri+14cpc}, which offers all the classes of acceleration methods discussed in this review and can easily be interfaced with any code that produces a potential energy surface~\cite{vand+05cpc,gian+09jpcm,plim95jcp,SIESTA,AIMS,Aradi2007}, is now available. However, while the simulation of NQEs for static equilibrium properties of distinguishable particles has now been made affordable for most systems, there are several further directions where methodological advances are needed. One is the combination of the methods reviewed here with path integral Monte Carlo schemes for indistinguishable particles~\cite{cepe95rmp,MartinBook,Walewski2014,Walewski2014b}, which have also benefited greatly from acceleration schemes that substantially cut the computational expense~\cite{Boninsegni2006}. A considerably more challenging open problem involves the simulation of NQEs on the dynamical properties of materials. As discussed in BOX 3, approximate methods inspired by imaginary-time path integrals, such as CMD and RPMD, provide a highly promising approach to compute dynamics incorporating the quantum nature of the nuclei for complex, condensed-phase systems. For this class of techniques, the options to reduce the computational overhead are much more limited, being restricted essentially to RPC (and sometimes rather inaccurate single-bead GLEs for high-frequency spectral properties). What is more, the formal justification of both CMD and RPMD does not involve a hierarchy of well-controlled approximations, making it hard to systematically address their many known artifacts~\cite{Habershon2008,witt+09jcp,ivan+10jcp,ross+14jcp2}. Some success has recently been shown using a (canonical) GLE to improve the quality of vibrational spectra\cite{ross+18jcp}, which leaves some hope that a more principled approach to determining the most suitable form of history-dependent noise may allow the injection of more physics into these approximate approaches. Quantum dynamics, both on a single Born-Oppenheimer surface and also with extensions to non-adiabatic dynamics, is receiving increasing attention \cite{Shushkov2012,Ananth2013,Richardson2013,Kretchmer2016,Shakib2017}, and it is not unreasonable to expect that there may be significant breakthroughs in the near future. However, as far as quantum statistics is concerned, the variety of acceleration techniques that are available and their accessible implementations suggest that the time is ripe for simulations incorporating NQEs to become an essential part of the toolkit of any theoretical chemist, materials scientist, or condensed matter physicist. \acknowledgments M.C. was supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 677013-HBMAP) and the Swiss National Science Foundation (Project No. 200021-159896). T.E.M. was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0014437 and the National Science Foundation under Grant No. CHE-1652960. T.E.M. also acknowledges support from a Cottrell Scholarship from the Research Corporation for Science Advancement and the Camille Dreyfus Teacher-Scholar Awards Program.
\section{Introduction} Given their large range of bandgaps, from 0.78 eV to 3.51 eV, InGaN materials have attracted attention for various applications such as LEDs, single-photon emitters, water splitting and solar cells \cite{Nguyen2011,Puchtler2016,Kibria2013,Sang2014,Cheriton2020}. In each case, device performance depends on having an electronic structure well tuned to the target application. Since the electronic structure of quantum dots can be drastically changed by varying their size and composition, quantum dots are attractive building blocks for such devices. The main problem in modeling complex structures such as quantum dots is including all the effects necessary for the model to be accurate while keeping the computational cost down. Tight binding and $\mathbf{k}\cdot\mathbf{p}$ theory are standard approaches for calculating single-particle electronic structures of bulk materials and nanostructures \cite{Saito2002}. The $\mathbf{k}\cdot\mathbf{p}$ method gives a good balance between accuracy and computational requirements, especially for large dots containing a large number of atoms, where the tight binding method becomes costly. $\mathbf{k}\cdot\mathbf{p}$ theory has been developed in both real space and Fourier space \cite{Winkelnkemper,Andreev2000}. Within the Fourier-space method, symmetry adapted basis approaches have been developed that block diagonalize the Hamiltonian, reducing the required size of the Hamiltonian and the computational cost of calculating the system's eigenstates \cite{Vukmirovic2005,Vukmirovic2006,Vukmirovc2008}. InGaN materials are strongly piezoelectric, having both spontaneous and strain-induced contributions to the piezoelectric polarization. Strain calculations have been performed using valence force field and Green's function based methods \cite{Stier1998,Andreev2000}; the latter has the advantage that it respects the symmetry of the crystal lattice. From the strain, the piezoelectric potential can be calculated from Maxwell's equations \cite{Andreev2000}. References \cite{Andreev1999,Andreev2000,Vukmirovic2006} used the Green's function method to calculate strain and obtained the piezoelectric potential from Maxwell's equations. These works assumed uniform elastic and dielectric constants, which was justified for their respective InAs/GaAs and GaN/AlN systems. In the case of InGaN, however, these constants vary more significantly between dot and host. Additionally, InGaN devices frequently do not have sharp interfaces between dot and barrier, with indium diffusing over several nanometers. This smooth alloy profile gives a spatial profile to every material parameter of the system, effectively changing the confining potential seen by the electrons. In this paper, we show the importance of including spatially varying elastic and dielectric constants in strain and piezoelectric potential calculations for InGaN systems. For strain calculations, we implement a formalism previously presented for including spatially varying elastic constants \cite{Andreev2000}. We present a new Fourier-space formalism for the calculation of piezoelectric potentials with spatially varying dielectric constants. We also present an approach to include smooth indium profiles in the strain, piezoelectric potential and electronic structure calculations, modeling the smooth alloy profiles found in experimental devices.
Considering smooth indium profiles both increases the accuracy of the simulations and decreases their computational cost by reducing the number of plane waves required for convergence. Strain plays an important role in the electronic structure of quantum dots. In quantum dot $\mathbf{k}\cdot\mathbf{p}$, a single real space unit cell is typically used when working in a Fourier-space approach. However, strain decays more slowly than bound state wavefunctions. When studying isolated dots, this difference in decay lengths makes it computationally expensive to fully capture both the strain and the electronic structure using a single unit cell. Reference \cite{Vukmirovc2008} presents an approach that implements two different unit cells: one for the electronic structure and one for strain. This method allows for the modeling of both the electronic structure and the strain, but introduces some complexity in calculating the Hamiltonian, which requires the calculation of multiple composed convolutions on different Fourier-space meshes. These convolutions can be computationally costly depending on the sizes of the meshes needed for convergence. By fixing the strain unit cell to be commensurate with the electronic unit cell, we present an approach that reduces the number of needed convolutions, significantly reducing the computational cost. We demonstrate our methodology by calculating the electronic structure of a 1D array of InGaN quantum dots, modeling devices grown for LED and water-splitting applications \cite{Nguyen2011,Kibria2013}. In this example, we show the importance of including spatially varying elastic and dielectric constants and smooth indium profiles for accurate electronic structures. We also show that the most important criterion for convergence of the lowest quantum dot electron and hole energies is the maximum wave vector included in the Fourier-space sampling, which can be increased at low computational cost by using a small unit cell. Section \ref{sec: Non-uniform elastic and piezoelectric constants corrections} presents strain and piezoelectric potential calculations using spatially varying elastic and dielectric parameters. Section \ref{sec:Symmetry-adapted-basis k.p} presents the $\mathbf{k}\cdot\mathbf{p}$ model used for electronic structure calculations and our novel approach to efficiently include strain through the choice of unit cells. Section \ref{sec:Smooth-indium-profile} introduces a method to use smooth indium profiles in all aspects of our calculations. Section \ref{sec:Energy-shifts-from-corrections} demonstrates our entire methodology for the case of a 1D quantum dot array, such as quantum dots grown inside nanowires \cite{Nguyen2011}. \section{Spatially varying elastic and piezoelectric constants corrections\label{sec: Non-uniform elastic and piezoelectric constants corrections}} We begin by considering quantum dot heterostructures with abrupt changes in alloy fraction. Alloying the host material changes the local lattice constants, leading to a lattice mismatch at the boundary between host and dot material. This lattice mismatch is a source of strain throughout the QD system, affecting its electronic states. For example, InN has a larger lattice constant than GaN, so alloying GaN with indium to form quantum dots induces a change in the lattice constant. Additionally, strain can generate strong piezoelectric potentials in materials such as III-nitrides.
The piezoelectric potential in III-nitrides is particularly important along the c-axis and can be strong enough to spatially separate electron and hole states through the quantum-confined Stark effect \cite{Renard2009}. In prior work, elastic and dielectric constants have largely been assumed to be spatially uniform in Fourier-based calculations of strain and the piezoelectric potential. In fact, these material properties differ between the dot and host materials, and neglecting this difference can cause significant errors in the determined electronic structures. Here, we calculate the strain and piezoelectric potential of a quantum dot superlattice with elastic and dielectric constants that vary with the alloy fraction, focusing on the changes brought on by the spatially varying parameters. In the case of the spatially varying elastic constants, we use a method outlined in Ref.\ \cite{Andreev2000}; in Section \ref{subsec:strain }, we present a version in which the typos in Eqs.\ A3, A7 and A8 of Ref.\ \cite{Andreev2000} are corrected. For the piezoelectric potential, we use a procedure similar to that of Ref.\ \cite{Andreev2000}, but we construct a theory that includes spatially varying dielectric constants. The strain field and piezoelectric potential are coupled into a $\mathbf{k}\cdot\mathbf{p}$ model, presented in Section \ref{sec:Symmetry-adapted-basis k.p}, for electronic structure calculations. \subsection{Quantum dot system\label{subsec:Quantum-dot-system}} \begin{figure}[tbh] \includegraphics[width=8.6cm]{QD_lattice} \caption{Quantum dot superlattice and its basis vectors. White regions are the host material and grey regions are the quantum dots. Dashed lines show the unit cell boundaries of the quantum dot superlattice. Due to the symmetry, we have $L_{12}\equiv\left|\mathbf{L}_{1}\right|=\left|\mathbf{L}_{2}\right|$.\label{fig:Quantum-dot-superlattice}} \end{figure} We consider a superlattice of cylindrical wurtzite quantum dots embedded in a bulk host material, as shown in Fig.\ \ref{fig:Quantum-dot-superlattice}. InGaN quantum dots such as those described in Ref.\ \cite{Nguyen2011} have a lens-like shape and do not have a sharply defined boundary. We approximate these quantum dots as cylindrical. This choice of dot geometry simplifies the calculations, as described in Sec.\ \ref{subsec:strain }, and preserves the $C_{6v}$ symmetry of the material, which we take advantage of in Section \ref{sec:Symmetry-adapted-basis k.p} for the electronic structure calculations. Hexagonal periodic boundary conditions are used, which also preserve the material's $C_{6v}$ symmetry. For single-dot calculations, the superlattice unit cell must be large enough that the choice of cell size does not affect the results. For actual quantum dot arrays, we consider only hexagonal superlattices in the plane. In this periodic system, the real space quantum dot superlattice is defined by the set of lattice vectors $\mathbf{L}_{i}$, as shown in Fig.\ \ref{fig:Quantum-dot-superlattice}. We denote the real space unit cell by $\Omega_{\mathsf{e}}$, its volume by $V_{\mathsf{e}}$, and the reciprocal-space unit cell by $\Omega_{\mathsf{e}}^{-1}$. The index ``e'' indicates that these quantities relate to the electronic cell, as opposed to the strain unit cell, which is introduced in Section \ref{subsec:Including-strain-and-piezo-in-k.p}.
Imposing periodic boundary conditions in real space implies a discrete reciprocal space with wave vectors \begin{equation} \mathbf{q}=i_{1}\mathbf{b}_{1}+i_{2}\mathbf{b}_{2}+i_{3}\mathbf{b}_{3}\qquad i_{1},i_{2},i_{3}\in\mathbb{Z}\label{eq: reciprocal wave vectors-1} \end{equation} with the reciprocal basis vectors \begin{equation} \begin{gathered}\mathbf{b}_{1}=\frac{2\pi}{L_{1}}[1,\,-\frac{1}{\sqrt{3}},\,0]\\ \mathbf{b}_{2}=\frac{2\pi}{L_{2}}[1,\,\frac{1}{\sqrt{3}},\,0]\\ \mathbf{b}_{3}=\frac{2\pi}{L_{3}}[0,\,0,\,1]. \end{gathered} \label{eq: reciprocal lattice vectors-1} \end{equation} Due to the symmetry of the system, we have $L_{1}=L_{2}$, which we define as $L_{12}$. In our reciprocal-space calculations, we sample on sets of the wave vectors $\mathbf{q}\in\Omega_{\mathsf{e}}^{-1}$. We define $m_{12}$ and $m_{3}$ such that $i_{1},i_{2}=\left\{ -m_{12},\cdots,0,\cdots,m_{12}\right\} $ and $i_{3}=\left\{ -m_{3},\cdots,0,\cdots,m_{3}\right\} $. This sampling produces a mesh of size $N=N_{1}N_{2}N_{3}$, where $N_{i}=2m_{i}+1$. To obtain a $C_{6}$-symmetric hexagonal mesh, we remove the points with $\left|q_{x}\right|>m_{12}\frac{2\pi}{L_{12}}$, leaving a mesh whose size we denote by $N_{\mathsf{e}}$. By choosing the unit cell dimensions $L_{i}$ large enough, it is possible to remove the electronic coupling between neighboring dots. This flexibility allows us to model 3D, 2D and 1D arrays of coupled dots. The isolated dot case can also be obtained by choosing both $L_{12}$ and $L_{3}$ sufficiently large. Section \ref{subsec:strain } presents a method, based on calculating the strain and the electronic structure using different unit cells, that also uncouples the dots in terms of strain. We illustrate the methods presented in this manuscript by modeling a quantum dot system inspired by Ref.\ \cite{Nguyen2011}, which consists of InGaN dots grown in GaN nanowires. We approximate this system as a 1D quantum dot array by choosing $L_{3}$ to match the measured dot-dot spacing and $L_{12}$ large enough to avoid dot-dot coupling. We fix the dot indium alloy fraction, radius and height based on the experimental device. System parameters are listed in Table \ref{tab:System-parameters} and material parameters are given in Appendix \ref{sec:Bulk-k.p-parameters}. \begin{table}[tbh] \caption{Quantum dot superlattice system parameters used for calculations unless specified otherwise. \label{tab:System-parameters}} \begin{tabular}{cc} \hline \hline Parameter & Value\tabularnewline \hline $X_{0}$ & 0.45\tabularnewline $h$ & 40 $\mathring{A}$\tabularnewline $R$ & 200 $\mathring{A}$\tabularnewline $L_{12}$ & 500 $\mathring{A}$\tabularnewline $L_{3}$ & 70 $\mathring{A}$\tabularnewline $m_{12}$ & 10\tabularnewline $m_{3}$ & 4\tabularnewline $n_{12}$ & 6\tabularnewline $n_{3}$ & 1\tabularnewline $\boldsymbol{\delta}$ & $[1.5,1.5,2.5]$ $\mathring{A}$\tabularnewline \hline \end{tabular} \end{table} \subsection{Strain\label{subsec:strain }} In this section, we present how we calculate strain, with elastic constants that depend on the alloy fraction, for 3D, 2D and 1D quantum dot superlattices and for isolated dots. Our method follows Refs.\ \cite{Andreev2000,Vukmirovc2008}. Materials such as InGaN have elastic constants that vary with the alloy fraction. Therefore, the spatial variation of the elastic constants throughout the superlattice unit cell must be included for accurate calculations of strain.
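As a concrete illustration of this alloy dependence, the following minimal sketch builds linearly interpolated wurtzite stiffness constants for In$_{x}$Ga$_{1-x}$N, in the spirit of the linear interpolation used for our other material parameters. The numerical values below are representative literature numbers quoted for illustration only, not the parameter set of Appendix \ref{sec:Bulk-k.p-parameters}.
\begin{verbatim}
# Representative wurtzite elastic constants in GPa; illustrative values only.
C_GaN = {"C11": 390.0, "C12": 145.0, "C13": 106.0, "C33": 398.0, "C44": 105.0}
C_InN = {"C11": 223.0, "C12": 115.0, "C13": 92.0, "C33": 224.0, "C44": 48.0}

def stiffness(x):
    """Stiffness constants of In_x Ga_(1-x) N by linear interpolation."""
    return {k: (1.0 - x) * C_GaN[k] + x * C_InN[k] for k in C_GaN}

lam_d = stiffness(0.45)  # dot alloy fraction X_0 of Table I
lam_h = stiffness(0.0)   # pure GaN host
# lambda(r) then switches between lam_d and lam_h via the dot
# characteristic function chi_d(r), as introduced below.
\end{verbatim}
The resulting dot and host tensors enter the spatially varying $\lambda_{ijmn}\left(\mathbf{r}\right)$ introduced below through the characteristic function of the dot.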
We present a method, originally derived in Ref.\ \cite{Andreev2000}, to include spatially varying elastic constants in strain calculations. We calculate the strain produced by a single isolated dot and construct the quantum dot superlattice strain by linear superposition. The calculated strain is then coupled into the electronic structure calculations. However, strain decays considerably more slowly than bound electronic wavefunctions. In the case of isolated dots, the unit cell must be large enough to accommodate the strain decay. Choosing a unit cell large enough to capture the strain decay reduces the maximum wave vector attainable when using a fixed number of plane waves. As we demonstrate in Section \ref{sec:Energy-shifts-from-corrections}, accurately describing the electronic states requires using sufficiently large wave vectors, and thus a large unit cell requires a large number of plane waves. Following Ref.\ \cite{Vukmirovc2008}, we consider that the electronic model and the strain model each have their own real space unit cell. This additional degree of freedom allows accurate and computationally efficient determination of both the electronic structure of rapidly decaying confined quantum dot states and the longer-range strain effects of isolated dots. In the case of a quantum dot superlattice, different real-space electronic and strain unit cells are not required. \subsubsection{Isolated quantum dot strain\label{subsec:Isolated-quantum-dot}} In prior work, lattice-mismatch-driven strain has been calculated for a single dot using a continuum theory with a Green's function approach while assuming spatially uniform elastic constants \cite{Andreev1999,Andreev2000,Nenashev2018}. Here, we present the method outlined in Appendix A of Ref.\ \cite{Andreev2000} to include spatially varying elastic constants. In this section, we show how the spatially varying elastic constants modify the strain; in Section \ref{subsec:Piezoelectric-potential}, we show how this modified strain changes the piezoelectric potential. We show that the elastic constant correction is necessary to obtain accurate strain and piezoelectric potentials. Consider a single InGaN QD in bulk GaN with spatially varying elastic constants $\lambda_{ijmn}\left(\mathbf{r}\right)$ that depend on the local alloy fraction, \begin{equation} \lambda_{ijmn}\left(\mathbf{r}\right)=\lambda_{ijmn}^{\mathsf{d}}\chi_{\mathsf{d}}\left(\mathbf{r}\right)+\lambda_{ijmn}^{\mathsf{h}}\left[1-\chi_{\mathsf{d}}\left(\mathbf{r}\right)\right],\label{eq: spatially varying elastic constants} \end{equation} where $\lambda_{ijmn}^{\mathsf{h}}$ and $\lambda_{ijmn}^{\mathsf{d}}$ are the host and dot elastic constants, respectively, and $\chi_{\mathsf{d}}\left(\mathbf{r}\right)$ is the characteristic function of the dot, defined below. With spatially varying elastic constants, the Green's tensor $G_{in}$ for the displacement field in an infinite anisotropic elastic medium must satisfy \begin{equation} \frac{\partial}{\partial x_{k}}\left[\lambda_{iklm}\left(\mathbf{r}\right)\frac{\partial}{\partial x_{m}}G_{ln}\left(\mathbf{r},\mathbf{r}^{\prime}\right)\right]=-\delta\left(\mathbf{r}-\mathbf{r}^{\prime}\right)\delta_{in}.\label{eq: Diff eq. for Green's tensor} \end{equation} Taking the Fourier transform of Eq.\ \ref{eq: Diff eq.
for Green's tensor}, we obtain \[ \begin{aligned}\lambda_{iklm}^{\mathsf{h}}q_{k}q_{m} & \tilde{G}_{ln}\left(\mathbf{q},\mathbf{r}^{\prime}\right)\\ & \phantom{\quad}+\Delta\lambda_{iklm}\sum_{\mathbf{q^{'}}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}-\mathbf{q}^{\prime}\right)q_{k}q_{m}^{\prime}\tilde{G}_{ln}\left(\mathbf{q}^{\prime},\mathbf{r}^{\prime}\right)\\ & =\frac{1}{\left(2\pi\right)^{3}}\mathsf{e}^{i\mathbf{q}\cdot\mathbf{r}^{\prime}}\delta_{in}, \end{aligned} \] with $\Delta\lambda_{iklm}=\lambda_{iklm}^{\mathsf{h}}-\lambda_{iklm}^{\mathsf{d}}$. The system strain is given by the superposition $\tilde{\epsilon}_{lm}\left(\mathbf{q}\right)=e_{lm}^{\mathsf{T}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}\right)+\tilde{\epsilon}_{lm}^{\mathsf{c}}\left(\mathbf{q}\right)$, where $e_{lm}^{\mathsf{T}}$ is the stress-free strain due to the initial lattice mismatch and $\tilde{\epsilon}_{lm}^{\mathsf{c}}$ is the interface-driven strain \cite{Andreev1999,Andreev2000}. Reference \cite{Andreev2000} showed that the Green's tensor can be related to the strain $\tilde{\epsilon}_{lm}^{\mathsf{c}}\left(\mathbf{q}\right)$ to obtain \begin{equation} \begin{aligned}\lambda_{iklm}^{\mathsf{h}}q_{k} & \tilde{\epsilon}_{lm}^{c}\left(\mathbf{q}\right)\\ & \phantom{\quad}+\Delta\lambda_{iklm}q_{k}\sum_{\mathbf{q^{'}}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}-\mathbf{q}^{'}\right)\tilde{\epsilon}_{lm}^{c}\left(\mathbf{q}^{'}\right)\\ & =-\lambda_{ikpr}^{\mathsf{d}}e_{pr}^{\mathsf{T}}q_{k}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}\right), \end{aligned} \label{eq:Strain system of equations} \end{equation} in which the typos of Ref.\ \cite{Andreev2000} mentioned above have been corrected. Here, $\chi_{\mathsf{d}}$ is the characteristic function of the dot, which is unity inside the dot and zero outside (see Appendix \ref{sec:Characteristic-functions} for its Fourier transform in our case of cylindrical dots), and $e_{pr}^{\mathsf{T}}$ is \[ e_{pr}^{\mathsf{T}}=\varepsilon_{\mathsf{a}}\delta_{pr}+\varepsilon_{\mathsf{ca}}\delta_{p3}\delta_{r3} \] with $\varepsilon_{\mathsf{a}}=\left(a^{\mathsf{h}}-a^{\mathsf{d}}\right)/a^{\mathsf{d}}$, $\varepsilon_{\mathsf{c}}=\left(c^{\mathsf{h}}-c^{\mathsf{d}}\right)/c^{\mathsf{d}}$ and $\varepsilon_{\mathsf{ca}}=\varepsilon_{\mathsf{c}}-\varepsilon_{\mathsf{a}}$. Here, $a^{\mathsf{h}}$ and $c^{\mathsf{h}}$ are the lattice constants of the host material, and $a^{\mathsf{d}}$ and $c^{\mathsf{d}}$ those of the dot material; $a$ denotes the in-plane (xy) lattice constant and $c$ the lattice constant along the z-axis. A solution for $\tilde{\epsilon}_{lm}^{\mathsf{c}}\left(\mathbf{q}\right)$ can be found by expanding it in a power series, \begin{equation} \tilde{\epsilon}_{lm}^{\mathsf{c}}\left(\mathbf{q}\right)=\tilde{\epsilon}_{lm}^{\left(0\right)}\left(\mathbf{q}\right)+\tilde{\epsilon}_{lm}^{\left(1\right)}\left(\mathbf{q}\right)+\tilde{\epsilon}_{lm}^{\left(2\right)}\left(\mathbf{q}\right)+\cdots,\label{eq:Strain power expansion} \end{equation} where $\tilde{\epsilon}_{lm}^{\left(N\right)}\left(\mathbf{q}\right)\propto\left(\frac{\Delta\lambda}{\lambda}\right)^{N}$, $\Delta\lambda_{ijmn}=\lambda_{ijmn}^{\mathsf{h}}-\lambda_{ijmn}^{\mathsf{d}}$ and the condition $\frac{\Delta\lambda}{\lambda}\ll1$ ensures convergence of the series. The leading term $\tilde{\epsilon}_{lm}^{\left(0\right)}$ corresponds to uniform elastic constants of the dot, with each subsequent term a correction that incorporates the spatial variation due to the alloy profile.
Using the Einstein summation convention, each term has the form \begin{equation} \begin{aligned}\tilde{\epsilon}_{lm}^{\left(N\right)}\left(\mathbf{q}\right)=\frac{\left(2\pi\right)^{3}}{2}\left[F_{p}^{\left(N\right)}\left(\mathbf{q}\right)\right. & q_{l}\tilde{G}_{mp}^{\mathsf{h}}\left(\mathbf{q}\right)\\ & \left.+F_{p}^{\left(N\right)}\left(\mathbf{q}\right)q_{m}\tilde{G}_{lp}^{\mathsf{h}}\left(\mathbf{q}\right)\right] \end{aligned} \label{eq: Strain eq.1} \end{equation} where \begin{equation} F_{i}^{\left(0\right)}\left(\mathbf{q}\right)=-\lambda_{ikpr}^{\mathsf{d}}e_{pr}^{\mathsf{T}}q_{k}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}\right)\label{eq: Strain eq.2} \end{equation} \begin{equation} F_{i}^{\left(N\right)}\left(\mathbf{q}\right)=-\Delta\lambda_{iklm}q_{k}\frac{\left(2\pi\right)^{3}}{V}\sum_{\mathbf{q^{'}}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}-\mathbf{q}^{'}\right)\tilde{\epsilon}_{lm}^{\left(N-1\right)}\left(\mathbf{q}^{'}\right)\label{eq: Strain eq.3} \end{equation} where Eqs.\ \ref{eq: Strain eq.1} and \ref{eq: Strain eq.3} are corrected from Ref.\ \cite{Andreev2000}. Here, $\tilde{G}_{in}^{\mathsf{h}}$ is the Green's tensor for the host material and is fully written out in Appendix \ref{sec:Displacement-field-Green's}. It has been shown that, when assuming uniform elastic constants, using the parameters of the host material gives more accurate results. We compare the strain corrected at various orders according to Eq.\ \ref{eq:Strain power expansion} to the usually considered case of uniform elastic constants of the host material. Figure \ref{fig:Convergence-of-strain} shows the convergence of the strain corrections for the 1D quantum dot array system described in Section \ref{subsec:Quantum-dot-system}. We quantify convergence with the following metric for the norm of the strain: \begin{equation} \left|\tilde{\epsilon}\right|=\sqrt{\sum_{m\geqslant l}\frac{V}{\left(2\pi\right)^{3}}\int\mathsf{d}^{3}\mathbf{q}\,\left|\tilde{\epsilon}_{lm}\left(\mathbf{q}\right)\right|^{2}}\label{eq:strain norm} \end{equation} where $m\geqslant l$ indicates the sum over the unique elements of the strain tensor ($\tilde{\epsilon}_{11}$, $\tilde{\epsilon}_{22}$, $\tilde{\epsilon}_{33}$, $\tilde{\epsilon}_{23}$, $\tilde{\epsilon}_{13}$, $\tilde{\epsilon}_{12}$). The green line in Fig.\ \ref{fig:Convergence-of-strain} compares the norm of the corrected strain $\tilde{\epsilon}$, calculated from Eq.\ \ref{eq: Strain eq.1}, to the norm of the strain $\tilde{\epsilon}^{\mathsf{GaN}}$, which is calculated assuming spatially uniform elastic constants of GaN. The blue line shows the self-convergence of the power series in Eq.\ \ref{eq:Strain power expansion}. From these results, we conclude that a 2nd-order correction is sufficient to converge the strain to within 1\% in self-convergence, and that this converged strain differs from the uniform case by about 6\%, indicating that the elastic constant corrections are important for accurate strain fields in InGaN systems. In Sec.\ \ref{subsec:Piezoelectric-potential}, we show that the calculated piezoelectric potential remains essentially unchanged from 3rd-order corrections and up. Given that including these corrections is not computationally costly, we include 3rd-order corrections in all of our calculations unless stated otherwise. Figure \ref{fig:Hydrostatic-strain-comparison.} shows the hydrostatic strain along a cut through the axis of the dot; the strain inside the dot relaxes with each additional correction.
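To make the recursion concrete, the minimal sketch below iterates Eqs.\ \ref{eq: Strain eq.1}-\ref{eq: Strain eq.3} on an explicit list of wave vectors. The host Green's tensor and the Fourier-transformed characteristic function are passed in as callables (their explicit forms are given in Appendices \ref{sec:Displacement-field-Green's} and \ref{sec:Characteristic-functions}), and the direct $O(n^{2})$ convolution loop stands in for the FFT-based evaluation used in practice.
\begin{verbatim}
import numpy as np

def strain_series(qs, chi, G_h, lam_d, dlam, eT, V, n_orders=3):
    """Sketch of the power-series solution for the interface-driven
    strain eps^c on a mesh of nonzero wave vectors qs (shape (n, 3)).

    chi   : callable q -> complex, Fourier transform of chi_d
    G_h   : callable q -> (3, 3) array, host Green's tensor
    lam_d : (3, 3, 3, 3) dot elastic tensor; dlam = lam_h - lam_d
    eT    : (3, 3) stress-free strain; V : normalization volume
    """
    n = len(qs)
    pref = (2.0 * np.pi) ** 3
    chi_q = np.array([chi(q) for q in qs])
    G_q = np.array([G_h(q) for q in qs])            # (n, 3, 3)
    eps_N = np.zeros((n, 3, 3), dtype=complex)
    eps_tot = np.zeros_like(eps_N)
    for N in range(n_orders + 1):
        if N == 0:
            # F^(0)_i = -lam_d_ikpr eT_pr q_k chi_d(q)
            F = -np.einsum('ikpr,pr,nk->ni', lam_d, eT, qs) * chi_q[:, None]
        else:
            # F^(N)_i = -dlam_iklm q_k (2pi)^3/V
            #           * sum_q' chi_d(q - q') eps^(N-1)_lm(q')
            conv = np.zeros((n, 3, 3), dtype=complex)
            for a in range(n):
                w = np.array([chi(qs[a] - qp) for qp in qs])
                conv[a] = np.einsum('b,blm->lm', w, eps_N)
            F = -(pref / V) * np.einsum('iklm,nk,nlm->ni', dlam, qs, conv)
        # eps^(N)_lm = (2pi)^3/2 [F_p q_l G_mp + F_p q_m G_lp]
        eps_N = 0.5 * pref * (np.einsum('np,nl,nmp->nlm', F, qs, G_q)
                              + np.einsum('np,nm,nlp->nlm', F, qs, G_q))
        eps_tot += eps_N
    return eps_tot   # truncated series for eps^c(q)
\end{verbatim}
Truncating at \texttt{n\_orders=3} corresponds to the 3rd-order corrections used in our calculations.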
\begin{figure}[tbh] \includegraphics[width=8.6cm]{Strain_convergence_y2019m11d19NP5} \caption{Convergence of the strain with spatially varying elastic constants. The green line shows the relative difference between the corrected strain and the case of uniform $\lambda$ of GaN for a 1D quantum dot array as described in Table \ref{tab:System-parameters}. The zeroth-order term is the case of uniform elastic constants of InGaN with the alloy fraction of the dot. The converged strain settles at a 6\% relative difference from the case of uniform elastic constants of GaN, indicating that the corrections are necessary for accurate strain fields. The blue line shows the self-convergence of the power series in Eq.\ \ref{eq:Strain power expansion} with respect to the zeroth-order term. Correction magnitudes fall below 1\% from 2nd order onward. We conclude that corrections up to and including 3rd order are sufficient for the calculation of accurate strain fields. \label{fig:Convergence-of-strain}} \end{figure} \begin{figure}[tbh] \includegraphics[width=8.6cm]{Strain_comparison_y2019m11d19IRedP7_Edited} \caption{Hydrostatic strain for increasing correction orders in the elastic constants. The black dashed line shows the strain assuming uniform elastic constants of the host material, $\lambda^{\mathsf{GaN}}$, while the solid lines are for spatially varying elastic constants with correction order $\lambda^{\left(N\right)}$. As the correction order $\lambda^{\left(N\right)}$ increases, the strain moves toward the uniform case $\lambda^{\mathsf{GaN}}$; note that the lines for $\lambda^{\left(2\right)}$ and $\lambda^{\left(3\right)}$ overlap. The uniform case $\lambda^{\mathsf{GaN}}$ nevertheless underestimates the strain in the dot. \label{fig:Hydrostatic-strain-comparison.}} \end{figure} \subsubsection{Quantum dot superlattice strain\label{subsec:Quantum-dot-superlattice}} Because the strain is linear in the stress, the strain produced by the QD superlattice can be obtained from linear superposition of the single-dot strain. However, we want the ability to study dots that are completely uncoupled, both electronically and elastically. Reference \cite{Vukmirovc2008} proposed a method that allows the simultaneous treatment of a large unit cell for the strain problem and a small unit cell for the electronic problem, which together allow isolated dots to be considered in a computationally tractable manner. In this case of two independent cells, the strain is calculated in its own real space unit cell $\Omega_{\mathsf{s}}$ with volume $V_{\mathsf{s}}$. We denote the strain reciprocal unit cell by $\Omega_{\mathsf{s}}^{-1}$, which contains the wave vectors $\mathbf{Q}$, defined analogously to Eq.\ \ref{eq: reciprocal wave vectors-1} for the electronic cell. Given that strain relaxes more slowly than bound electronic wavefunctions, we only consider $V_{\mathsf{s}}\geq V_{\mathsf{e}}$.
In this two-unit-cell approximation, the Fourier transform of the strain produced by the quantum dot array is \begin{align} \tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{q}\right) & =\frac{1}{V_{\mathsf{s}}}\sum_{\mathbf{Q}\in\Omega_{\mathsf{s}}^{-1}}\tilde{\epsilon}_{ij}\left(\mathbf{Q}\right)\tilde{\chi}_{\mathsf{e}}\left(\mathbf{q}-\mathbf{Q}\right)\nonumber \\ & =\frac{1}{V_{\mathsf{s}}}\left(\tilde{\epsilon}_{ij}\ast\tilde{\chi}_{\mathsf{e}}\right)_{\mathsf{s}}\left(\mathbf{q}\right),\label{eq:QD superlattice strain} \end{align} where $\chi_{\mathsf{e}}$ is the characteristic function of the electronic unit cell $\Omega_{\mathsf{e}}$ in $\Omega_{\mathsf{s}}$, which is given for our case in Appendix \ref{sec:Characteristic-functions}. The superscript ``$\mathsf{a}$'' indicates the array. We follow the notation that $\mathbf{q}\in\Omega_{\mathsf{e}}^{-1}$ and $\mathbf{Q}\in\Omega_{\mathsf{s}}^{-1}$. $\left(\tilde{\epsilon}_{ij}\ast\tilde{\chi}_{\mathsf{e}}\right)_{\mathsf{s}}\left(\mathbf{q}\right)$ denotes a convolution, where the subscript ``$\mathsf{s}$'' indicates that the convolution is over the wave vectors $\mathbf{Q}\in\Omega_{\mathsf{s}}^{-1}$; see Appendix \ref{sec:Conventions} for the Fourier transform and convolution definitions. We show in Sec.\ \ref{subsec:Including-strain-and-piezo-in-k.p} that choosing the linear dimensions of $\Omega_{\mathsf{s}}$ to be integer multiples of the linear dimensions of $\Omega_{\mathsf{e}}$ ensures that all vectors $\mathbf{q}\in\Omega_{\mathsf{e}}^{-1}$ are also in $\Omega_{\mathsf{s}}^{-1}$. This choice allows Eq.\ \ref{eq:QD superlattice strain} to be evaluated efficiently. \subsection{Piezoelectric potential\label{subsec:Piezoelectric-potential}} III-nitride materials are strongly piezoelectric, having both spontaneous and strain-driven polarizations \cite{Bernardini1997,Zoroddu2001}. Calculation of the polarization from an electric field requires knowledge of the static dielectric constant $\varepsilon$ of the material. In prior work, all Fourier-space based approaches assumed a uniform dielectric constant. We present a method to obtain the Fourier transform of the scalar potential $\tilde{\varphi}\left(\mathbf{q}\right)$ assuming that $\varepsilon\left(\mathbf{r}\right)$ changes with the local alloy fraction. We find that correcting for the spatial dependence of the dielectric function leads to important changes in the piezoelectric potential. We also show in Sec.\ \ref{sec:Energy-shifts-from-corrections} that this change in the piezoelectric potential significantly shifts the lowest quantum dot energy levels. We do not discuss metallic screening, which can be important in highly doped materials \cite{Chichibu1998,Ibbetson2000,Kim2004}. Generally, we can write the displacement field $\mathbf{D}\left(\mathbf{r}\right)$ as \[ \mathbf{D}\left(\mathbf{r}\right)=\varepsilon_{0}\mathbf{E}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{tot}}\left(\mathbf{r}\right), \] where $\mathbf{E}\left(\mathbf{r}\right)$ is the electric field, $\varepsilon_{0}$ is the vacuum permittivity, and $\mathbf{P}_{\mathsf{tot}}$ is the total polarization. In the strained material, there are three sources of polarization: bound charge, strain and spontaneous polarization, \[ \mathbf{P}_{\mathsf{tot}}\left(\mathbf{r}\right)=\mathbf{P}_{\mathsf{bnd}}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{st}}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{sp}}\left(\mathbf{r}\right). \] Here, we assume no free-charge screening, and hence an intrinsic material.
Assuming $\mathbf{P}_{\mathsf{bnd}}$ to be linear in the electric field and incorporated into $\varepsilon\left(\mathbf{r}\right)$ as usual, \begin{equation} \mathbf{D}\left(\mathbf{r}\right)=\varepsilon\left(\mathbf{r}\right)\mathbf{E}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{st}}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{sp}}\left(\mathbf{r}\right),\label{eq:Displacement field} \end{equation} where $\mathbf{P}_{\mathsf{st}}\left(\mathbf{r}\right)+\mathbf{P}_{\mathsf{sp}}\left(\mathbf{r}\right)=\mathbf{P}\left(\mathbf{r}\right)$ is the residual polarization after the electric-field-induced bound charge has been included in $\varepsilon\left(\mathbf{r}\right)$. We take $\varepsilon\left(\mathbf{r}\right)$ to be $\varepsilon^{\mathsf{h}}$ in the host material and $\varepsilon^{\mathsf{d}}$ in the dot material, so \begin{equation} \varepsilon\left(\mathbf{r}\right)=\varepsilon^{\mathsf{h}}+\left(\varepsilon^{\mathsf{d}}-\varepsilon^{\mathsf{h}}\right)\chi_{\mathsf{d}}\left(\mathbf{r}\right).\label{eq: dielectric constant} \end{equation} We obtain $\varepsilon^{\mathsf{d}}$ by linear interpolation of the binary compounds' bulk dielectric constants. Taking the divergence of Eq.\ \ref{eq:Displacement field}, using $\boldsymbol{\nabla}\cdot\mathbf{D}=0$, taking the Fourier transform and solving for the electric field gives \begin{equation} E_{m}\left(\mathbf{r}\right)=-\frac{1}{\varepsilon\left(\mathbf{r}\right)}\mathscr{F}^{-1}\left\{ \frac{q_{n}}{q_{m}}\tilde{P}_{n}\left(\mathbf{q}\right)\right\} \label{eq: Electric field} \end{equation} where $\mathscr{F}^{-1}$ represents the inverse Fourier transform. Using $E_{m}=-\partial_{m}\varphi$, where $\partial_{m}\equiv\frac{\partial}{\partial x_{m}}$ and $\varphi\left(\mathbf{r}\right)$ is the scalar potential, \begin{equation} \tilde{\varphi}\left(\mathbf{q}\right)=-\frac{i}{q_{m}}\mathscr{F}\left\{ \frac{1}{\varepsilon\left(\mathbf{r}\right)}\mathscr{F}^{-1}\left\{ \frac{q_{n}}{q_{m}}\tilde{P}_{n}\left(\mathbf{q}\right)\right\} \left(\mathbf{r}\right)\right\} \left(\mathbf{q}\right).\label{eq: Piezoelectric potential 1} \end{equation} For the case of sharp alloy interfaces, $\chi_{\mathsf{d}}\left(\mathbf{r}\right)$ is either 1 or 0, and Eq.\ \ref{eq: dielectric constant} gives \begin{equation} \frac{1}{\varepsilon\left(\mathbf{r}\right)}=\frac{1}{\varepsilon^{\mathsf{h}}}+\left(\frac{1}{\varepsilon^{\mathsf{d}}}-\frac{1}{\varepsilon^{\mathsf{h}}}\right)\chi_{\mathsf{d}}\left(\mathbf{r}\right).\label{eq: inverse dielectric constant} \end{equation} We treat the case of smoothly varying alloy fraction in Sec.\ \ref{sec:Smooth-indium-profile}.
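Before specializing to the sharp-interface split, it is useful to see the whole pipeline of Eqs.\ \ref{eq: Electric field}-\ref{eq: inverse dielectric constant} at once. The sketch below evaluates it on an FFT grid, reading the kernel acting on $\tilde{P}_{n}$ as the longitudinal projector $q_{m}q_{n}/q^{2}$ (an assumption of this sketch, chosen to be consistent with the uniform-$\varepsilon$ limit); the grid, permittivities and polarization are toy placeholders.
\begin{verbatim}
import numpy as np

n = 32
eps_h, eps_d = 10.4, 14.4                 # toy host/dot permittivities
q1 = 2.0 * np.pi * np.fft.fftfreq(n)      # unit grid spacing for simplicity
qx, qy, qz = np.meshgrid(q1, q1, q1, indexing='ij')
q2 = qx**2 + qy**2 + qz**2
q2[0, 0, 0] = 1.0                         # placeholder; q = 0 mode zeroed below
qv = (qx, qy, qz)

chi_d = np.zeros((n, n, n))
chi_d[12:20, 12:20, 14:18] = 1.0          # toy dot region
P = [np.zeros((n, n, n))] * 2 + [0.1 * chi_d]   # toy polarization along z

# v_m(r) = F^-1{ (q_m q_n / q^2) P_n(q) }: longitudinal part of P
qdotP = sum(qv[i] * np.fft.fftn(P[i]) for i in range(3))
v = [np.fft.ifftn(qv[m] * qdotP / q2) for m in range(3)]

inv_eps = 1.0 / eps_h + (1.0 / eps_d - 1.0 / eps_h) * chi_d
E = [-inv_eps * v[m] for m in range(3)]   # E_m(r) = -(1/eps(r)) v_m(r)

# phi from E = -grad(phi): phi(q) = i q.E(q) / q^2
phi_q = 1j * sum(qv[m] * np.fft.fftn(E[m]) for m in range(3)) / q2
phi_q[0, 0, 0] = 0.0
phi = np.fft.ifftn(phi_q).real            # piezoelectric potential on the grid
\end{verbatim}
Splitting $1/\varepsilon\left(\mathbf{r}\right)$ as in Eq.\ \ref{eq: inverse dielectric constant} separates this potential analytically into a uniform-host contribution and a dot correction, as follows.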
Substituting Eq.\ \ref{eq: inverse dielectric constant} into Eq.\ \ref{eq: Piezoelectric potential 1} gives \begin{align} \tilde{\varphi}\left(\mathbf{q}\right) & =\tilde{\varphi}_{\mathsf{uni}}^{\mathsf{h}}\left(\mathbf{q}\right)+\Delta\tilde{\varphi}\left(\mathbf{q}\right)\label{eq: piezoelectric potential 2} \end{align} with \begin{equation} \tilde{\varphi}_{\mathsf{uni}}^{\mathsf{h}}\left(\mathbf{q}\right)=-\frac{i}{q_{m}}\frac{1}{\varepsilon^{\mathsf{h}}}\frac{q_{n}}{q_{m}}\tilde{P}_{n}\left(\mathbf{q}\right)\label{eq:uniform piezo} \end{equation} \begin{equation} \Delta\tilde{\varphi}\left(\mathbf{q}\right)=-\frac{i}{q_{m}}\left(\frac{1}{\varepsilon^{\mathsf{d}}}-\frac{1}{\varepsilon^{\mathsf{h}}}\right)\mathscr{F}\left\{ \chi_{\mathsf{d}}\left(\mathbf{r}\right)\mathscr{F}^{-1}\left\{ \frac{q_{n}}{q_{m}}\tilde{P}_{n}\left(\mathbf{q}\right)\right\} \right\} \label{eq:delta piezo} \end{equation} Here, $\tilde{\varphi}_{\mathsf{uni}}^{\mathsf{h}}$ is the contribution to $\varphi$ with $\varepsilon\left(\mathbf{r}\right)=\varepsilon^{\mathsf{h}}$, and $\Delta\tilde{\varphi}$ is the change in $\tilde{\varphi}$ due to the dot material having a different dielectric constant. The polarization fields $\tilde{P}_{n}\left(\mathbf{q}\right)$ for the wurtzite crystal structure are given in terms of strain in Appendix \ref{sec:Polarization-fields}. We now show the piezoelectric potentials that result from this formulation for our model system described in Sec.\ \ref{subsec:Quantum-dot-system}. Figure \ref{fig:piezo and dielectric constant} shows $\varphi\left(z\right)$ along the central axis of the quantum dot, calculated with the uniform $\varepsilon$ of the dot, with the uniform $\varepsilon$ of the host, and with Eq.\ \ref{eq: piezoelectric potential 2}. The calculation with spatially varying $\varepsilon\left(\mathbf{r}\right)$ agrees with $\varphi_{\mathsf{uni}}^{\mathsf{d}}$ inside the dot and with $\varphi_{\mathsf{uni}}^{\mathsf{h}}$ outside the dot, with a transition near the boundary that is captured by neither of the uniform cases. We showed in Fig.\ \ref{fig:Hydrostatic-strain-comparison.} how spatially varying elastic constants change the strain profiles; Figure \ref{fig:piezo and elastic constants} shows how the elastic-constant corrections propagate into the piezoelectric potential. We find that the resulting changes in the piezoelectric potential, with a peak correction of 8 mV, are significant when seeking to converge the energy levels to within a few meV. \begin{figure}[tbh] \includegraphics[width=8.6cm]{Dielectric_correction_effects_y2019m11d19HRedP4} \caption{Piezoelectric potential $\varphi\left(z\right)$ along the central axis, beginning in the center of the dot, with parameters as in Table \ref{tab:System-parameters}. Blue and red dashed lines show $\varphi$ calculated with uniform $\varepsilon\left(\mathbf{r}\right)=\varepsilon^{\mathsf{h}}$ and $\varepsilon\left(\mathbf{r}\right)=\varepsilon^{\mathsf{d}}$, respectively. The black line shows the case with spatially varying $\varepsilon\left(\mathbf{r}\right)$. Note that $\varphi$ is antisymmetric in $z$. These results show that the case of uniform $\varepsilon$ cannot capture $\varphi$ throughout the system. Strain calculations include third order corrections for the nonuniform elastic constants.
\label{fig:piezo and dielectric constant}} \end{figure} \begin{figure}[tbh] \includegraphics[width=8.6cm]{Elastic_constants_on_piezo_difference_y2019m11d19KRedP7Rescaled} \caption{Piezoelectric potential difference along the central axis of the dot for various elastic constant corrections. The potential difference is taken with respect to $\varphi_{\mathsf{h}}\left(\mathbf{r}\right)$, which is calculated with a uniform $\lambda\left(\mathbf{r}\right)=\lambda^{\mathsf{GaN}}$. Calculations implement an alloy smoothing of $\boldsymbol{\delta}=[1.5,1.5,2.5]\,\mathring{A}$, which is described in Sec.\ \ref{sec:Smooth-indium-profile}, and the vertical dashed line indicates the nominal material interface without smoothing. These results are for the same system as Fig.\ \ref{fig:piezo and dielectric constant}, calculated using various orders of correction for the effects of nonuniform elastic constants, as described in Sec.\ \ref{subsec:Isolated-quantum-dot}. Spatially varying dielectric constants are included. The zeroth-order case (blue line) corresponds to $\lambda\left(\mathbf{r}\right)=\lambda^{\mathsf{d}}$. \label{fig:piezo and elastic constants}} \end{figure} \section{Symmetry adapted basis $\mathbf{k}\cdot\mathbf{p}$ for wurtzite quantum dots\label{sec:Symmetry-adapted-basis k.p}} Here, we present the quantum dot $\mathbf{k}\cdot\mathbf{p}$ model we use for electronic structure calculations. We first present the theory for bulk materials and then use it to construct a theory for quantum dots. The quantum dot Hamiltonian is written in a symmetry adapted basis, which reduces the computational cost of calculating and diagonalizing the Hamiltonian. In this symmetry adapted basis, we show how the strain produced by the quantum dots contributes to the Hamiltonian. We also introduce strain effects using a unit cell different from the electronic cell defined in Fig.\ \ref{fig:Quantum-dot-superlattice}. Our goal in this section is to show our method of efficiently including strain in the quantum dot $\mathbf{k}\cdot\mathbf{p}$ model, which we achieve by choosing the strain unit cell's dimensions to be integer multiples of those of the unit cell used for the electronic structure calculations. \subsection{Bulk $\mathbf{k}\cdot\mathbf{p}$ model\label{subsec:Bulk k.p}} To describe the electronic structure of bulk wurtzite systems, we use an 8-band $\mathbf{k}\cdot\mathbf{p}$ model, which includes spin-orbit coupling, crystal field splitting and strain. An 8-band $\mathbf{k}\cdot\mathbf{p}$ model for bulk wurtzite material has been presented in Ref.\ \cite{Winkelnkemper} in the basis of $\Gamma$-point Bloch functions. References \cite{Winkelnkemper,Chuang} presented a 6-band model using eigenfunctions of the angular momentum operator $\hat{J}_{z}$. Since choosing $\hat{J}_{z}$ eigenfunctions aids in the construction of a symmetry adapted basis, presented in Sec.\ \ref{subsec:Quantum dot k.p}, we have used these two references to construct an 8-band Hamiltonian in the $\hat{J}_{z}$ eigenfunction basis. More precisely, we have constructed the Hamiltonian using the $S$, $X$, $Y$ and $Z$ $\Gamma$-point Bloch functions as a basis and then performed a basis transformation to obtain the $\hat{J}_{z}$ eigenfunction basis. While $\mathbf{k}\cdot\mathbf{p}$ parameters are usually obtained in the $S$, $X$, $Y$ and $Z$ basis, recent work has obtained $\mathbf{k}\cdot\mathbf{p}$ parameters directly in the symmetry adapted basis using \textit{ab initio} calculations \cite{Jocic2020}.
We consider the time-independent Schr\"odinger equation for a single electron \begin{equation} \hat{H}\ket{\psi}=E\ket{\psi}\label{eq: General Eigenvalue problem} \end{equation} where \begin{equation} \hat{H}=\hat{K}+\hat{V}+\hat{H}_{\mathsf{so}}+\hat{H}_{\mathsf{cr}}+\hat{H}_{\mathsf{st}}.\label{eq: General Hamiltonian} \end{equation} Here, $\hat{K}$ is the kinetic term of the electrons, $\hat{V}$ is the potential from the electron-ion interaction, $\hat{H}_{\mathsf{so}}$ is the spin-orbit coupling, $\hat{H}_{\mathsf{cr}}$ is the crystal field splitting and $\hat{H}_{\mathsf{st}}$ is the strain coupling. We expand $\ket{\psi}$ in terms of the $\hat{J}_{z}$ eigenfunctions $\ket{u_{\alpha}}$, \begin{equation} \ket{\psi}=e^{i\mathbf{k}\cdot\mathbf{r}}\sum_{\alpha=1}^{8}C_{\alpha}\ket{u_{\alpha}}\label{eq: Bulk eigenfunction expansion} \end{equation} where \begin{align} & \ket{u_{1}}=\ket{iS,\uparrow} & & \ket{u_{5}}=\ket{-iS,\downarrow}\nonumber \\ & \ket{u_{2}}=\ket{-\frac{X+iY}{\sqrt{2}},\uparrow} & & \ket{u_{6}}=\ket{\frac{X-iY}{\sqrt{2}},\downarrow}\nonumber \\ & \ket{u_{3}}=\ket{\frac{X-iY}{\sqrt{2}},\uparrow} & & \ket{u_{7}}=\ket{-\frac{X+iY}{\sqrt{2}},\downarrow}\nonumber \\ & \ket{u_{4}}=\ket{Z,\uparrow} & & \ket{u_{8}}=\ket{Z,\downarrow}\label{eq: bulk basis} \end{align} Here, $S$, $X$, $Y$ and $Z$ are $\Gamma$-point Bloch functions, with arrows indicating spin. The corresponding $\hat{J}_{z}$ eigenvalues are \[ J_{z}=\left\{ \frac{1}{2},\quad\frac{3}{2},\quad-\frac{1}{2},\quad\frac{1}{2},\quad-\frac{1}{2},\quad-\frac{3}{2},\quad\frac{1}{2},\quad-\frac{1}{2}\right\} , \] respectively. Inserting Eq.\ \ref{eq: Bulk eigenfunction expansion} into Eq.\ \ref{eq: General Eigenvalue problem}, the eigenvalue problem can be written as \begin{equation} H_{\alpha^{\prime}\alpha}C_{\alpha}=EC_{\alpha^{\prime}}.\label{eq: Bulk eigenvalue problem} \end{equation} Keeping terms only up to order $k^{2}$, the $8\times8$ $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian is \begin{align} H & =\left[\begin{array}{cc} g\left(\mathbf{k}\right) & \gamma\\ -\gamma^{*} & g^{*}\!\left(\mathbf{k}\right) \end{array}\right]\label{eq: bulk Hamiltonian} \end{align} where \begin{align*} g\left(\mathbf{k}\right) & =g_{1}\left(\mathbf{k}\right)+g_{2}\left(\mathbf{k}\right)+g_{\mathsf{cr}}+g_{\mathsf{so}}+g_{\mathsf{st}} \end{align*} \begin{widetext} \[ g_{1}\left(\mathbf{k}\right)=\left[\begin{array}{cccc} E_{\mathsf{c}}^{\prime} & -\frac{P_{2}}{\sqrt{2}}k_{+} & \frac{P_{2}}{\sqrt{2}}k_{-} & P_{1}k_{z}\\ -\frac{P_{2}}{\sqrt{2}}k_{-} & E_{\mathsf{v}}^{\prime} & 0 & 0\\ \frac{P_{2}}{\sqrt{2}}k_{+} & 0 & E_{\mathsf{v}}^{\prime} & 0\\ P_{1}k_{z} & 0 & 0 & E_{\mathsf{v}}^{\prime} \end{array}\right] \] \[ g_{2}\left(\mathbf{k}\right)=\left[\begin{array}{cccc} A_{2}^{\prime}\left(k_{x}^{2}+k_{y}^{2}\right)+A_{1}^{\prime}k_{z}^{2} & 0 & 0 & 0\\ 0 & \left(\frac{L_{1}^{\prime}+M_{1}}{2}\right)\left(k_{x}^{2}+k_{y}^{2}\right)+M_{2}k_{z}^{2} & -\frac{1}{2}N_{1}^{\prime}k_{-}^{2} & -\frac{1}{\sqrt{2}}N_{2}^{\prime}k_{-}k_{z}\\ 0 & -\frac{1}{2}N_{1}^{\prime}k_{+}^{2} & \left(\frac{L_{1}^{\prime}+M_{1}}{2}\right)\left(k_{x}^{2}+k_{y}^{2}\right)+M_{2}k_{z}^{2} & \frac{1}{\sqrt{2}}N_{2}^{\prime}k_{+}k_{z}\\ 0 & -\frac{1}{\sqrt{2}}N_{2}^{\prime}k_{+}k_{z} & \frac{1}{\sqrt{2}}N_{2}^{\prime}k_{-}k_{z} & M_{3}\left(k_{x}^{2}+k_{y}^{2}\right)+L_{2}^{\prime}k_{z}^{2} \end{array}\right] \] \begin{equation} g_{\mathsf{st}}=\left[\begin{array}{cccc} a_{2}\left(\epsilon_{xx}+\epsilon_{yy}\right)+a_{1}\epsilon_{zz} & 0 & 0 & 0\\ 0 &
\frac{1}{2}\left(l_{1}+m_{1}\right)\left(\epsilon_{xx}+\epsilon_{yy}\right)+m_{2}\epsilon_{zz} & -\frac{1}{2}\left(l_{1}-m_{1}\right)\left(\epsilon_{xx}-\epsilon_{yy}\right)+in_{1}\epsilon_{xy} & -\frac{n_{2}\left(\epsilon_{xz}-i\epsilon_{yz}\right)}{\sqrt{2}}\\ 0 & -\frac{1}{2}\left(l_{1}-m_{1}\right)\left(\epsilon_{xx}-\epsilon_{yy}\right)-in_{1}\epsilon_{xy} & \frac{1}{2}\left(l_{1}+m_{1}\right)\left(\epsilon_{xx}+\epsilon_{yy}\right)+m_{2}\epsilon_{zz} & \frac{n_{2}\left(\epsilon_{xz}+i\epsilon_{yz}\right)}{\sqrt{2}}\\ 0 & -\frac{n_{2}\left(\epsilon_{xz}+i\epsilon_{yz}\right)}{\sqrt{2}} & \frac{n_{2}\left(\epsilon_{xz}-i\epsilon_{yz}\right)}{\sqrt{2}} & m_{3}\left(\epsilon_{xx}+\epsilon_{yy}\right)+l_{2}\epsilon_{zz} \end{array}\right]\label{eq: Bulk strain Hamiltonian} \end{equation} \end{widetext} \[ g_{\mathsf{cr}}=\Delta_{\mathsf{cr}}\left[\begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end{array}\right] \] \[ g_{\mathsf{so}}=\frac{\Delta_{\mathsf{so}}}{3}\left[\begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & 0 \end{array}\right] \] \begin{align*} \gamma & =\frac{\sqrt{2}\Delta_{\mathsf{so}}}{3}\left[\begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & -1 & 0 \end{array}\right] \end{align*} Here, $\Delta_{\mathsf{cr}}$ and $\Delta_{\mathsf{so}}$ are the crystal field splitting and spin-orbit coupling, respectively. The band edges are $E_{\mathsf{c}}^{\prime}=E_{\mathsf{v}}+E_{\mathsf{g}}+\Delta_{\mathsf{cr}}+\frac{\Delta_{\mathsf{so}}}{3}+\varphi$ and $E_{\mathsf{v}}^{\prime}=E_{\mathsf{v}}+\varphi$, where $\varphi$ is any additional scalar potential such as the piezoelectric potential. The $A_{i}^{\prime}$ parameters are related to the Kane parameters $P_{i}$, and $L_{i}^{\prime}$, $M_{i}$, $N_{i}^{\prime}$ to the Luttinger-like parameters $A_{i}$, all of which are shown in Appendix \ref{sec:Bulk-k.p-parameters}. $g_{\mathsf{st}}$ is the contribution to the Hamiltonian due to the strain $\epsilon_{ij}$. The parameters $a_{i}$, $l_{i}$, $m_{i}$ and $n_{i}$ for the strain contribution are given in Appendix \ref{sec:Bulk-k.p-parameters} in terms of deformation potentials. In example calculations, alloy parameters have been obtained by linearly interpolating between bulk GaN and InN parameters, which are given in Appendix \ref{sec:Bulk-k.p-parameters}, except for the band gap, for which bowing is included. \subsection{Quantum dot $\mathbf{k}\cdot\mathbf{p}$ \label{subsec:Quantum dot k.p}} For the quantum dot system, we construct the Hamiltonian from the bulk system described in Section \ref{subsec:Bulk k.p}. We use slowly varying envelope functions and apply a spatial dependence to the bulk Hamiltonian. The problem is expressed in a symmetry adapted basis to obtain a block diagonal Hamiltonian, from which we calculate the eigenstates of the quantum dot. We start from Eq.\ \ref{eq: General Eigenvalue problem}, but expand $\ket{\psi}$ in terms of envelope functions $F_{\alpha}\left(\mathbf{r}\right)$ that are slowly varying compared to the lattice constant \cite{Andreev2000,Tomic2006,Vukmirovic2005,Vukmirovc2008}, \[ \ket{\psi}=\sum_{\alpha=1}^{8}\ket{F,\alpha} \] \begin{equation} \braket{\mathbf{r}|F,\alpha}=F_{\alpha}\left(\mathbf{r}\right)u_{\alpha}\left(\mathbf{r}\right)\label{eq:envelope function definiton} \end{equation} where the $u_{\alpha}\left(\mathbf{r}\right)$ are defined by Eq.\ \ref{eq: bulk basis} and are periodic with the crystal lattice.
Analogous to Eq.\ \ref{eq: Bulk eigenvalue problem}, this envelope function expansion leads to \begin{equation} \sum_{\alpha=1}^{8}H_{\alpha^{\prime}\alpha}F_{\alpha}\left(\mathbf{r}\right)=EF_{\alpha^{\prime}}\left(\mathbf{r}\right)\label{eq: Quantum dot eigenvalue problem 1} \end{equation} where the $H_{\alpha^{\prime}\alpha}$ are the bulk Hamiltonian matrix elements from Eq.\ \ref{eq: bulk Hamiltonian}. Due to the broken translational symmetry in the quantum dot system, we apply the substitution \begin{equation} k_{j}\rightarrow-i\frac{\partial}{\partial x_{j}}\label{eq: k to der substitution} \end{equation} to the bulk Hamiltonian in Eq.\ \ref{eq: bulk Hamiltonian}. Each parameter in the bulk Hamiltonian also acquires a spatial dependence based on the alloy distribution, \begin{equation} f\left(\mathbf{r}\right)=f^{\mathsf{d}}\chi_{\mathsf{d}}\left(\mathbf{r}\right)+f^{\mathsf{h}}\left[1-\chi_{\mathsf{d}}\left(\mathbf{r}\right)\right].\label{eq: Spatial dependance of k.p parameters} \end{equation} Here, $f$ stands for any of the material-dependent parameters in the bulk Hamiltonian; $f^{\mathsf{h}}$ and $f^{\mathsf{d}}$ are the parameter values of the host and of the alloyed dot material, respectively. Applying the substitution in Eq.\ \ref{eq: k to der substitution} to Eq.\ \ref{eq: bulk Hamiltonian}, the Hamiltonian consists of terms of the form $f\left(\mathbf{r}\right)$, $f\left(\mathbf{r}\right)\frac{\partial}{\partial x_{j}}$ and $f\left(\mathbf{r}\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}$. To preserve Hermiticity, we symmetrize the derivatives \cite{Morrow1984,Vukmirovic2005,Tomic2006}: \begin{equation} f\left(\mathbf{r}\right)\frac{\partial}{\partial x_{j}}\rightarrow\frac{1}{2}\left(f\left(\mathbf{r}\right)\frac{\partial}{\partial x_{j}}+\frac{\partial}{\partial x_{j}}f\left(\mathbf{r}\right)\right)\label{eq:2nd type matrix element symmetrisation} \end{equation} \begin{equation} f\left(\mathbf{r}\right)\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\rightarrow\frac{1}{2}\left(\frac{\partial}{\partial x_{i}}f\left(\mathbf{r}\right)\frac{\partial}{\partial x_{j}}+\frac{\partial}{\partial x_{j}}f\left(\mathbf{r}\right)\frac{\partial}{\partial x_{i}}\right)\label{eq:3rd type of matrix element symmetrisation} \end{equation} The envelope functions $F_{\alpha}\left(\mathbf{r}\right)$ are periodic with the superlattice and can be expanded in the Fourier domain using the superlattice reciprocal wave vectors $\mathbf{q}$ defined in Eq.\ \ref{eq: reciprocal wave vectors-1}. Writing the envelope functions $F_{\alpha}\left(\mathbf{r}\right)$ directly in terms of plane waves leads to a non-sparse Hamiltonian \cite{Andreev2000,Tomic2006}. For computational efficiency, we use a symmetry adapted basis, which takes advantage of the $C_{6}$ symmetry of the wurtzite crystal structure by block diagonalizing the Hamiltonian. Symmetry adapted bases have been fully described for both zincblende and wurtzite systems \cite{Vukmirovic2005,Vukmirovic2006}. We use a symmetry-adapted basis with elements $\ket{m_{f},\alpha,\mathbf{q}}$, where $\mathbf{q}=\left(q_{x},q_{y},q_{z}\right)$ is chosen within a single sextant, so $0\leq q_{y}\leq\tan\left(\frac{2\pi}{6}\right)q_{x}$, and $m_{f}=\left\{ -5/2,-3/2,-1/2,1/2,3/2,5/2\right\} $ can be interpreted as a total quasi-angular momentum \cite{Vukmirovic2006,Vukmirovc2008}. This basis consists of the basis functions of the irreducible representations of the double group $\bar{C}_{6}$.
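It is instructive to note what the symmetrization rules of Eqs.\ \ref{eq:2nd type matrix element symmetrisation} and \ref{eq:3rd type of matrix element symmetrisation} produce between the plane waves underlying this basis: integrating by parts over the periodic cell gives, up to the Fourier-normalization factors fixed in Appendix \ref{sec:Conventions}, \[ \frac{1}{V_{\mathsf{e}}}\int\limits _{V_{\mathsf{e}}}\mathsf{d}^{3}\mathbf{r}\,e^{-i\mathbf{q}^{\prime}\cdot\mathbf{r}}\,\frac{1}{2}\left(\frac{\partial}{\partial x_{i}}f\frac{\partial}{\partial x_{j}}+\frac{\partial}{\partial x_{j}}f\frac{\partial}{\partial x_{i}}\right)e^{i\mathbf{q}\cdot\mathbf{r}}\propto-\frac{1}{2}\left(q_{i}^{\prime}q_{j}+q_{j}^{\prime}q_{i}\right)\tilde{f}\left(\mathbf{q}^{\prime}-\mathbf{q}\right), \] which is manifestly Hermitian under $\mathbf{q}\leftrightarrow\mathbf{q}^{\prime}$ combined with complex conjugation, since $\tilde{f}^{*}\left(\mathbf{q}^{\prime}-\mathbf{q}\right)=\tilde{f}\left(\mathbf{q}-\mathbf{q}^{\prime}\right)$ for real $f\left(\mathbf{r}\right)$. The symmetry adapted combinations defined below inherit this structure.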
Using this basis reduces the Fourier space sampling to a single sextant of the full space and block diagonalizes the Hamiltonian into 6 blocks, which are labeled by $m_{f}$. This block diagonalization greatly reduces the computational cost of diagonalizing the Hamiltonian. Figure \ref{fig:Example-of-SAB-sampling} shows an example of the Fourier space sampling used in the symmetry adapted basis. Written out, the basis states are \begin{equation} \ket{m_{f},\alpha,\mathbf{q}}=\Lambda\left(m_{f},\alpha,\mathbf{q},\mathbf{r}\right)\ket{u_{\alpha}}\label{eq:Symmetry adapted basis} \end{equation} \begin{widetext} \begin{equation} \Lambda\left(m_{f},\alpha,\mathbf{q},\mathbf{r}\right)=\begin{cases} \frac{1}{\sqrt{6}}\sum_{l=0}^{5}e^{i\left(\overleftrightarrow{\mathbf{R}}_{\!l}\mathbf{q}\right)\cdot\mathbf{r}}e^{il\frac{2\pi}{6}\left[m_{f}-J_{z}\left(\alpha\right)\right]} & q_{x}\neq0\,\mathsf{or}\,q_{y}\neq0\\ e^{i\mathbf{q}\cdot\mathbf{r}} & q_{x}=q_{y}=0,\quad J_{z}\left(\alpha\right)=m_{f} \end{cases}\label{eq:SAB defintion} \end{equation} \end{widetext} where $\overleftrightarrow{\mathbf{R}}_{\!l}$ is the rotation by $l\frac{2\pi}{6}$ around the z-axis. Equation \ref{eq:SAB defintion} distinguishes wave vectors that are purely along the z-axis from those that have an xy-component, which we denote by $\mathbf{q}_{z}$ and $\mathbf{q}$, respectively. These two cases differ because a z-axis rotation leaves $\mathbf{q}_{z}$ invariant while sending $\mathbf{q}$ to a new wave vector $\overleftrightarrow{\mathbf{R}}_{\!l}\mathbf{q}$. The case of $\ket{m_{f},\alpha,\mathbf{q}_{z}}$ with $J_{z}\left(\alpha\right)\neq m_{f}$ does not exist in the basis set. Using the symmetry adapted basis, the eigenstates can be written \begin{equation} \ket{\psi_{i,m_{f}}}=\sum\limits _{\alpha=1}^{8}\sum_{\mathbf{q}}A_{im_{f}}^{\alpha}\left(\mathbf{q}\right)\ket{m_{f},\alpha,\mathbf{q}}\label{eq:Solutions in SAB} \end{equation} where the $\mathbf{q}$ summation is restricted to the sextant shown in Fig.\ \ref{fig:Example-of-SAB-sampling}. Writing the envelope functions in the symmetry adapted basis, the eigenvalue problem in Eq.\ \ref{eq: Quantum dot eigenvalue problem 1} can then be written \begin{equation} \sum_{\alpha=1}^{8}\sum_{\mathbf{q}}\mathcal{H}_{m_{f}\alpha^{\prime}\alpha}\left(\mathbf{q}^{\prime},\mathbf{q}\right)A_{im_{f}}^{\alpha}\left(\mathbf{q}\right)=E_{i}A_{im_{f}}^{\alpha^{\prime}}\left(\mathbf{q}^{\prime}\right)\label{eq:Eigenvalue problem in basis} \end{equation} with \begin{align} \mathcal{H}_{m_{f}\alpha^{\prime}\alpha}\left(\mathbf{q}^{\prime},\mathbf{q}\right) & \equiv\braket{m_{f},\alpha^{\prime},\mathbf{q}^{\prime}|\hat{\mathcal{H}}|m_{f},\alpha,\mathbf{q}}\nonumber \\ & =\frac{1}{V_{\mathsf{e}}}\int\limits _{V_{\mathsf{e}}}\mathsf{d}^{3}\mathbf{r}\,\Lambda^{*}\left(\mathbf{r}\right)H_{\alpha^{\prime}\alpha}\Lambda\left(\mathbf{r}\right)\label{eq:General Hamiltonian matrix element} \end{align} where the $H_{\alpha^{\prime}\alpha}$ are the bulk Hamiltonian matrix elements presented in Sec.\ \ref{subsec:Bulk k.p}. Expressions for $\mathcal{H}_{m_{f}\alpha^{\prime}\alpha}\left(\mathbf{q}^{\prime},\mathbf{q}\right)$ are fully written out in Appendix \ref{sec:QD-k.p-Hamiltonian} in terms of the bulk Hamiltonian matrix elements and the quantum dot characteristic function. \begin{figure}[tbh] \includegraphics[width=8.6cm]{Symmetry_adapted_basis_samplingP1} \caption{Fourier space sampling used for the symmetry adapted basis.
Red circles are the Fourier space points $\mathbf{q}$ used in the symmetry adapted basis. Blue dots are the full Fourier space sampled by the rotations $\protect\overleftrightarrow{\mathbf{R}}_{\!l}\mathbf{q}$. Dashed lines highlight the 6 sextants.\label{fig:Example-of-SAB-sampling}} \end{figure} \subsection{Including strain and piezoelectric effects\label{subsec:Including-strain-and-piezo-in-k.p}} Deformation potentials and piezoelectric effects, which are both strain-driven, are important for accurate calculations of the electronic structure in III-N materials. However, including deformation potentials can be computationally costly for the case of isolated dots. The two-unit-cell approach presented in Sec.\ \ref{subsec:Quantum-dot-superlattice} allows for the study of isolated dots, but at the cost of computationally expensive convolutions. Additionally, another layer of convolutions appears in the Hamiltonian matrix elements, leading to composed convolutions. Here, we present the matrix elements due to strain and show our computationally efficient approach to dealing with these composed convolutions by choosing the linear dimensions of the real-space strain cell $\Omega_{\mathsf{s}}$ to be integer multiples of those of the electronic cell $\Omega_{\mathsf{e}}$. The bulk strain Hamiltonian matrix elements in Eqs.\ \ref{eq: bulk Hamiltonian} and \ref{eq: Bulk strain Hamiltonian} can be written as \[ H_{\alpha^{\prime}\alpha}=\sum_{ij}f_{\alpha^{\prime}\alpha}^{ij}\epsilon_{ij}\left(\mathbf{r}\right) \] where the $f_{\alpha^{\prime}\alpha}^{ij}$ consist of $\mathbf{k}\cdot\mathbf{p}$ parameters ($a_{i}$, $l_{i}$, $m_{i}$ and $n_{i}$). Using the prescription of Sec.\ \ref{subsec:Quantum dot k.p}, the strain contributions to the quantum dot Hamiltonian are \begin{widetext} \[ \mathcal{H}_{m_{f}\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\mathbf{q}^{\prime},\mathbf{q}\right)=\frac{1}{6}\sum_{l^{\prime}=0}^{5}\sum_{l=0}^{5}\mathsf{e}^{i\frac{2\pi}{6}\left\{ l\left[m_{f}-J_{z}\left(\alpha\right)\right]-l^{\prime}\left[m_{f}-J_{z}\left(\alpha^{\prime}\right)\right]\right\} }h_{\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\overleftrightarrow{\mathbf{R}}_{\!l^{\prime}}\mathbf{q}^{\prime},\overleftrightarrow{\mathbf{R}}_{\!l}\mathbf{q}\right) \] \[ \mathcal{H}_{m_{f}\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\mathbf{q}^{\prime},\mathbf{q}_{z}\right)=\frac{1}{\sqrt{6}}\sum_{l^{\prime}=0}^{5}\mathsf{e}^{-il^{\prime}\frac{2\pi}{6}\left[m_{f}-J_{z}\left(\alpha^{\prime}\right)\right]}h_{\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\overleftrightarrow{\mathbf{R}}_{\!l^{\prime}}\mathbf{q}^{\prime},\mathbf{q}_{z}\right) \] \[ \mathcal{H}_{m_{f}\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\mathbf{q}_{z}^{\prime},\mathbf{q}_{z}\right)=h_{\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\mathbf{q}_{z}^{\prime},\mathbf{q}_{z}\right) \] where \begin{align} h_{\alpha^{\prime}\alpha}^{ij,\mathsf{st}}\left(\mathbf{q}^{\prime},\mathbf{q}\right) & =\frac{\left(2\pi\right)^{3}f_{\alpha^{\prime}\alpha}^{ij,\mathsf{h}}}{V_{\mathsf{e}}}\tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{q}^{\prime}-\mathbf{q}\right)+\frac{\left(2\pi\right)^{6}\left(f_{\alpha^{\prime}\alpha}^{ij,\mathsf{d}}-f_{\alpha^{\prime}\alpha}^{ij,\mathsf{h}}\right)}{V_{\mathsf{e}}^{2}}\left(\tilde{\chi}_{\mathsf{d}}\ast\tilde{\epsilon}_{ij}^{\mathsf{a}}\right)_{\mathsf{e}}\left(\mathbf{q}^{\prime}-\mathbf{q}\right).\label{eq:Strain matrix element} \end{align} \end{widetext} Here, $f_{\alpha^{\prime}\alpha}^{ij,\mathsf{h}}$ and $f_{\alpha^{\prime}\alpha}^{ij,\mathsf{d}}$ are the
$\mathbf{k}\cdot\mathbf{p}$ parameters for bulk host and dot materials, respectively. $\tilde{\epsilon}_{ij}^{\mathsf{a}}$ is the strain produced by the quantum dot array calculated in Sec.\ \ref{subsec:strain }. The subscript ``e'' in $\left(\tilde{\chi}_{\mathsf{d}}\ast\tilde{\epsilon}_{ij}^{\mathsf{a}}\right)_{\mathsf{e}}$ indicates that the convolution is over the wave vectors $\mathbf{q}\in\Omega_{\mathsf{e}}^{-1}$. Inserting the superlattice strain from Eq.\ \ref{eq:QD superlattice strain} into the $\mathbf{k}\cdot\mathbf{p}$ strain matrix elements from Eq.\ \ref{eq:Strain matrix element} leads to composed convolutions, \begin{align} \left(\tilde{\chi}_{\mathsf{d}}\ast\tilde{\epsilon}_{ij}^{\mathsf{a}}\right)_{\mathsf{e}} & \left(\mathbf{q}\right)=\sum_{\mathbf{q}^{\prime}\in\Omega_{\mathsf{e}}^{-1}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}^{\prime}\right)\tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{q}-\mathbf{q}^{\prime}\right)\label{eq:Composed convolutions 1}\\ & =\frac{1}{V_{\mathsf{s}}}\sum_{\mathbf{q}^{\prime}\in\Omega_{\mathsf{e}}^{-1}}\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}^{\prime}\right)\sum_{\mathbf{Q}\in\Omega_{\mathsf{s}}^{-1}}\tilde{\epsilon}_{ij}\left(\mathbf{Q}\right)\tilde{\chi}_{\mathsf{e}}\left(\mathbf{q}-\mathbf{q}^{\prime}-\mathbf{Q}\right)\label{eq:Composed convolutions 2} \end{align} which can be computationally demanding depending on the number of wave vectors used. The original proposal of using a large strain cell with a smaller electronic cell imposed no relationship between their sizes \cite{Vukmirovc2008}. Equation \ref{eq:Composed convolutions 2} then requires evaluating $\tilde{\chi}_{\mathsf{e}}$ at points $\mathbf{q}-\mathbf{q}^{\prime}-\mathbf{Q}$, which lie on neither the electronic nor the strain mesh, requiring that a unique convolution be calculated for every $\mathbf{q}^{\prime}$. It is well known that using the convolution theorem to compute a convolution between two vectors of length $N$ has a computational cost that scales as $N\log\left(N\right)$. Similarly, the computational cost for a convolution on a 3D $N\times N\times N$ mesh scales as $N^{3}\log\left(N\right)$. Computing the composed convolutions in Eq.\ \ref{eq:Composed convolutions 2} would then scale as $N_{\mathsf{e}}^{3}\log\left(N_{\mathsf{e}}\right)N_{\mathsf{s}}^{3}\log\left(N_{\mathsf{s}}\right)$, since a convolution in $\mathbf{Q}$ has to be calculated for each individual $\mathbf{q}^{\prime}$. Note that the convolutions in Eqs.\ \ref{eq:Composed convolutions 1}-\ref{eq:Composed convolutions 2} are linear convolutions, which implies that the arrays of function values must be padded with zeros before using the convolution theorem, as detailed in Appendix \ref{sec:Conventions}. This zero padding increases both $N_{\mathsf{e}}$ and $N_{\mathsf{s}}$. We show that choosing the strain unit cell to be a supercell of the electronic unit cell reduces the number of convolutions to compute, leading to an improved scaling of $N_{\mathsf{e}}^{3}\log\left(N_{\mathsf{e}}\right)+N_{\mathsf{s}}^{3}\log\left(N_{\mathsf{s}}\right)$. Choosing the strain unit cell linear dimensions to be integer multiples of those of the electronic cell, we have \begin{equation} L_{i}^{\mathsf{s}}=n_{i}L_{i}^{\mathsf{e}}\qquad i=12,3\label{eq: MeshFact definition} \end{equation} where the $n_{i}$ take positive integer values.
This choice of real-space unit cells leads to the electronic Fourier-space mesh being contained in the strain mesh, $\mathbf{\Omega}_{\mathsf{e}}^{-1}\subset\mathbf{\Omega}_{\mathsf{s}}^{-1}$. The wave vectors $\mathbf{Q}$ then have a spacing that is a fraction of the spacing of the electronic wave vectors $\mathbf{q}$, \begin{equation} \Delta Q_{i}=\frac{\Delta q_{i}}{n_{i}}.\label{eq: Overlapping q meshes} \end{equation} Note that from Eq.\ \ref{eq:Composed convolutions 1}, $\tilde{\epsilon}_{ij}^{\mathsf{a}}$ is only sampled at points $\Delta\mathbf{q}=\mathbf{q}-\mathbf{q}^{\prime}$, which belong to the electronic mesh. Our procedure starts by using the convolution theorem (see Appendix \ref{sec:Conventions}) to efficiently calculate the inner convolution $\left(\tilde{\epsilon}_{ij}\ast\tilde{\chi}_{\mathsf{e}}\right)_{\mathsf{s}}\left(\mathbf{Q}\right)$ on the strain mesh, which yields $\tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{Q}\right)$. Since the set of wave vectors $\mathbf{Q}$ contains the wave vectors $\mathbf{q}$, we can then extract the points that lie on the electronic mesh to obtain $\tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{q}\right)$. Lastly, we perform the second convolution $\left(\tilde{\chi}_{\mathsf{d}}\ast\tilde{\epsilon}_{ij}^{\mathsf{a}}\right)_{\mathsf{e}}\left(\mathbf{q}\right)$, again utilizing the convolution theorem. This workflow is shown in Fig.\ \ref{fig: Strain and piezo workflows}(a). In our method, we compute only two 3D convolutions and so obtain a complexity scaling of $N_{\mathsf{e}}^{3}\log\left(N_{\mathsf{e}}\right)+N_{\mathsf{s}}^{3}\log\left(N_{\mathsf{s}}\right)$, a considerable improvement compared to the non-overlapping case. Note that $N_{\mathsf{s}}$ is generally much larger than $N_{\mathsf{e}}$ to obtain appropriate convergence, so the computational cost is dominated by the convolutions on $\Omega_{\mathsf{s}}^{-1}$. \begin{figure}[tbh] \includegraphics[width=8.6cm]{k_space_meshes} \caption{Example of Fourier space meshes where $n_{12}=2$, leading to $\Delta Q_{i}=\frac{\Delta q_{i}}{2}$, which implies that the vectors $\mathbf{Q}$ contain all the vectors $\mathbf{q}$.} \end{figure} The piezoelectric potential brings no additional complexity, and the workflow for calculating it is shown in Fig.\ \ref{fig: Strain and piezo workflows}(b). The potential is initially calculated on the strain mesh, and the electronic mesh portion is extracted to calculate the piezoelectric potential contributions to the Hamiltonian, which are written out in Appendix \ref{sec:QD-k.p-Hamiltonian}.
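To make the mesh nesting concrete, the following minimal 1D Python sketch (ours, not part of the published method; array contents, sizes and variable names are placeholders) mimics the three steps of Fig.\ \ref{fig: Strain and piezo workflows}(a): an inner convolution on the fine strain mesh, a strided extraction to the coarser electronic mesh, and an outer convolution carried out entirely on the electronic mesh. For brevity it uses direct linear convolutions; in practice each one would be a zero-padded FFT convolution as described in Appendix \ref{sec:Conventions}.
\begin{verbatim}
import numpy as np

# Mesh factor n: L_s = n * L_e, so dQ = dq / n and the electronic
# mesh is every n-th point of the strain mesh.
n, Ns = 4, 64
Ne = Ns // n

eps_s   = np.random.rand(Ns)  # placeholder for eps~_ij(Q) on the strain mesh
chi_e_s = np.random.rand(Ns)  # placeholder for chi~_e sampled on the strain mesh
chi_d_e = np.random.rand(Ne)  # placeholder for chi~_d on the electronic mesh

# Step 1: inner convolution entirely on the strain mesh.
eps_a_s = np.convolve(eps_s, chi_e_s)[:Ns]

# Step 2: restriction to the electronic mesh is a strided view, since
# Omega_e^{-1} is a subset of Omega_s^{-1}; no interpolation is needed.
eps_a_e = eps_a_s[::n]

# Step 3: outer convolution on the coarse electronic mesh only.
result = np.convolve(chi_d_e, eps_a_e)[:Ne]
print(result.shape)
\end{verbatim}
The essential saving is in step 2: without the supercell relation of Eq.\ \ref{eq: MeshFact definition}, this extraction would instead require a separate convolution for every $\mathbf{q}^{\prime}$.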
\begin{figure}[tbh] (a) \[ \begin{array}{ccccc} & \text{Extract}\\ & \text{ to e-mesh}\\ \underbrace{\tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{Q}\right)} & \longrightarrow & \tilde{\epsilon}_{ij}^{\mathsf{a}}\left(\mathbf{q}\right) & \longrightarrow & \underbrace{\left(\tilde{\chi}_{\mathsf{d}}\ast\tilde{\epsilon}_{ij}^{\mathsf{a}}\right)_{\mathsf{e}}\left(\mathbf{q}\right)}\\ \text{Array strain } & & & & \text{Calculate on e-mesh}\\ \text{on s-mesh} \end{array} \] (b) \[ \begin{array}{ccccc} \underbrace{\tilde{\epsilon}^{\mathsf{arr}}\left(\mathbf{Q}\right)\longrightarrow\tilde{\varphi}\left(\mathbf{Q}\right)} & \longrightarrow & \underbrace{\tilde{\varphi}^{\mathsf{arr}}\left(\mathbf{Q}\right)=\left(\tilde{\varphi}\ast\tilde{\chi}_{\mathsf{e}}\right)_{\mathsf{s}}\left(\mathbf{Q}\right)} & \longrightarrow & \underbrace{\tilde{\varphi}^{\mathsf{arr}}\left(\mathbf{q}\right)}\\ \text{Strain gives potential } & & \text{Truncate to } & & \text{Extract }\\ \text{in strain box} & & \text{electronic unit cell} & & \text{on e-mesh} \end{array} \] \caption{(a) Workflow for the calculation of composed convolutions of the form shown in Eq.\ \ref{eq:Composed convolutions 1}. (b) Workflow used to obtain the piezoelectric potential on the electronic mesh from the strain on the strain mesh. e-mesh and s-mesh signify the Fourier space electronic and strain meshes, respectively. \label{fig: Strain and piezo workflows}} \end{figure} \section{Smooth alloy profile\label{sec:Smooth-indium-profile}} \begin{figure}[tbh] \includegraphics[width=8.6cm]{Band_gap_approx_CylinderSmoothingRedP3_Edited} \includegraphics[width=8.6cm]{Band_gap_approx_difference_CylinderSmoothingRedP4} \caption{(a) Band gap in a quantum dot system obtained by bowed interpolation using Eq.\ \ref{eq: General smoothed bandgap} along the z and y axes through the center of the dot. The band gap obtained from linear interpolation using Eq.\ \ref{eq: Approx band gap} is visually indistinguishable. (b) Difference of the bowed and linearly interpolated band gaps along the z and x axes. Quantum dot parameters are listed in Table \ref{tab:System-parameters}; however, for computational simplicity and smooth curves, these results were obtained using a rectangular real-space unit cell with dimensions $L_{\mathsf{x}}=L_{\mathsf{y}}=500\mathring{A}$ and $L_{\mathsf{z}}=70\mathring{A}$ and a smoothing of $\boldsymbol{\delta}=[3,3,5]\mathring{A}$. \label{fig:(a) smoothing and bandgap}} \end{figure} \begin{figure}[tbh] \includegraphics[width=8.6cm]{DielectricCylRedoneP1_Edited} \includegraphics[width=8.6cm]{DielectricCylRedoneP3} \caption{(a) Inverse dielectric constant from Eq.\ \ref{eq: Exact smooth dielectric constant} along the z and x axes through the center of the dot. The linearly interpolated inverse from Eq.\ \ref{eq: Approx smoothed inverse dielectric constant} is visually indistinguishable. (b) Relative difference of the linear and nonlinear dielectric constants along the z and x axes. Quantum dot parameters are listed in Table \ref{tab:System-parameters}; however, for computational simplicity and smooth curves, these results were obtained using a rectangular real-space unit cell with dimensions $L_{\mathsf{x}}=L_{\mathsf{y}}=500\mathring{A}$ and $L_{\mathsf{z}}=70\mathring{A}$ and a smoothing of $\boldsymbol{\delta}=[3,3,5]\mathring{A}$. \label{fig: Smoothing dielectric constant}} \end{figure} When InGaN devices are grown by molecular beam epitaxy (MBE), indium diffuses between layers \cite{Nguyen2011}.
While most studies of MBE-grown materials simulate abrupt junctions, this diffusion smooths the material interfaces and produces a continuously varying alloy fraction. The varying alloy fraction changes the local band properties and lattice constant, which in turn change the strain and polarization fields, so this smooth alloy profile must be included for accurate modeling. Smooth indium profiles also provide a computational benefit: since sharp features of the confining potentials are removed, fewer wave vectors are required to attain convergence. In this section, we present a method to include alloy diffusion effects by effectively smoothing the characteristic function of the dot. We focus on indium alloying in the examples, but the methods are general to all $\mathbf{k}\cdot\mathbf{p}$ calculations of alloyed structures. \subsubsection{Smoothing method} In the case of a sharp material interface, the local alloy fraction $X\left(\mathbf{r}\right)$ can be defined by the characteristic function of the dot \[ X\left(\mathbf{r}\right)=X_{0}\chi_{\mathsf{d}}\left(\mathbf{r}\right), \] where the characteristic function $\chi_{\mathsf{d}}\left(\mathbf{r}\right)$ defines the geometry of the dot with indium fraction $X_{0}$. By convolving with a Gaussian $G\left(\mathbf{r},\boldsymbol{\delta}\right)=\frac{1}{\left(2\pi\right)^{3/2}\delta_{x}\delta_{y}\delta_{z}}\mathsf{e}^{-\frac{1}{2}\left(\frac{x^{2}}{\delta_{x}^{2}}+\frac{y^{2}}{\delta_{y}^{2}}+\frac{z^{2}}{\delta_{z}^{2}}\right)}$ or other kernel, we can obtain a smooth version of the characteristic function \begin{align*} X_{\mathsf{sm}}\left(\mathbf{r}\right) & =\left(X_{0}\chi_{\mathsf{d}}\ast G\right)\left(\mathbf{r}\right)\\ & =X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right), \end{align*} where $\boldsymbol{\delta}=\left[\delta_{x},\delta_{y},\delta_{z}\right]$ controls the radius of smoothing and needs to be chosen to model the desired alloy diffusion. $G\left(\mathbf{r},\boldsymbol{\delta}\right)$ is normalized to preserve the total amount of alloying element, and $\chi_{\mathsf{sm}}\left(\mathbf{r}\right)$ is a smoothed characteristic function. Using the convolution theorem, the smoothed characteristic function satisfies \begin{align*} \tilde{\chi}_{\mathsf{sm}}\left(\mathbf{q}\right) & =\tilde{\chi}_{\mathsf{d}}\left(\mathbf{q}\right)\mathsf{e}^{-\frac{\left(\delta_{x}^{2}q_{x}^{2}+\delta_{y}^{2}q_{y}^{2}+\delta_{z}^{2}q_{z}^{2}\right)}{2}}. \end{align*} Note that $\chi_{\mathsf{sm}}\left(\mathbf{r}\right)$ is no longer strictly a characteristic function, as it takes values continuously between 0 and 1. We now show that it can be inserted in place of the characteristic function in the previous sections to give the $\mathbf{k}\cdot\mathbf{p}$ parameters, strain and piezoelectric fields accurately with a smooth alloy profile. \subsubsection{Material parameters} We now focus on the case of InGaN to illustrate the interpolation of material parameters. In the case of sharp material interfaces, the host and dot regions each consist of uniform material. The host material is a binary material and has well-defined parameters. The dot region consists of alloyed InGaN, and its parameters are obtained by either linear or bowed interpolation of bulk GaN and InN parameters, which are listed in Appendix \ref{sec:Bulk-k.p-parameters}. In the case of a smooth alloy profile, the dot and host regions are no longer uniform, giving the material parameters a smooth spatial dependence.
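The Fourier-space smoothing step above is simple to implement. The following minimal Python sketch (ours, not the production code; the 1D geometry and all numbers are placeholders) attaches the Gaussian factor to the Fourier coefficients of a sharp characteristic function and transforms back, illustrating $\tilde{\chi}_{\mathsf{sm}}\left(q\right)=\tilde{\chi}_{\mathsf{d}}\left(q\right)\mathsf{e}^{-\delta^{2}q^{2}/2}$ in one dimension.
\begin{verbatim}
import numpy as np

L, N, delta = 100.0, 256, 3.0                    # box length, mesh points, smoothing
x = np.linspace(0.0, L, N, endpoint=False)
chi_d = ((x > 40.0) & (x < 60.0)).astype(float)  # sharp characteristic function

q = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # Fourier-space mesh
chi_sm_q = np.fft.fft(chi_d) * np.exp(-0.5 * delta**2 * q**2)
chi_sm = np.fft.ifft(chi_sm_q).real

# The smoothed profile interpolates continuously between 0 and 1.
assert chi_sm.min() > -1e-9 and chi_sm.max() < 1.0 + 1e-9
\end{verbatim}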
Parameters that were linearly interpolated in the sharp interface case can still be obtained from a simple linear interpolation based on the local alloy fraction $X\left(\mathbf{r}\right)$. The band gap $E_{\mathsf{g}}$ is nonlinear in the alloy fraction due to a bowing factor. This nonlinearity prevents us from using the convolution theorem in calculating the Hamiltonian matrix elements. However, we show that neglecting the bowing parameters in the alloy-smoothing region can still give computationally efficient and accurate smoothed profiles when the alloy fraction is not too large. The local value of any of the linearly interpolated material parameters depends on the local alloy fraction, \begin{equation} f\left(\mathbf{r}\right)=f^{\mathsf{B}}X\left(\mathbf{r}\right)+\left[1-X\left(\mathbf{r}\right)\right]f^{\mathsf{A}}\label{eq: Definition of linear parameter} \end{equation} where $f$ can be a parameter such as the lattice constant, and the superscripts A and B stand for the two binary materials, GaN and InN for example. For this case of linearly interpolated quantities, smoothed parameters can be written \begin{comment} Proof: \begin{align*} f\left(\mathbf{r},X_{0}\right) & =f^{InN}X\left(\mathbf{r},X_{0}\right)+\left[1-X\left(\mathbf{r},X_{0}\right)\right]f^{GaN}\\ & =f^{InN}X_{0}\chi_{sm}\left(\mathbf{r}\right)+\left[1-X_{0}\chi_{sm}\left(\mathbf{r}\right)\right]f^{GaN}\\ & =\left(X_{0}f^{InN}-X_{0}f^{GaN}\right)\chi_{sm}\left(\mathbf{r}\right)+f^{GaN}\\ & =\left(X_{0}f^{InN}-X_{0}f^{GaN}+f^{GaN}\right)\chi_{sm}\left(\mathbf{r}\right)-f^{GaN}\chi_{sm}\left(\mathbf{r}\right)+f^{GaN}\\ & =\left[X_{0}f^{InN}+\left(1-X_{0}\right)f^{GaN}\right]\chi_{sm}\left(\mathbf{r}\right)+\left[1-\chi_{sm}\left(\mathbf{r}\right)\right]f^{GaN}\\ & =f^{d}\chi_{sm}\left(\mathbf{r}\right)+\left[1-\chi_{sm}\left(\mathbf{r}\right)\right]f^{GaN} \end{align*} \end{comment} \begin{equation} f\left(\mathbf{r}\right)=f^{\mathsf{d}}\left(X_{0}\right)\chi_{\mathsf{sm}}\left(\mathbf{r}\right)+\left[1-\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]f^{\mathsf{A}}\label{eq: smoothed linear parameters} \end{equation} where $f^{\mathsf{d}}\left(X_{0}\right)$ is the linearly interpolated material parameter at the nominal alloy fraction $X_{0}$ of the quantum dot. Band gaps do not vary linearly with alloy fraction and are generally well described with a bowing term, as \begin{equation} E_{\mathsf{g}}\left(\mathbf{r}\right)=E_{\mathsf{g}}^{\mathsf{B}}X\left(\mathbf{r}\right)+\left[1-X\left(\mathbf{r}\right)\right]E_{\mathsf{g}}^{\mathsf{A}}-X\left(\mathbf{r}\right)\left[1-X\left(\mathbf{r}\right)\right]C\label{eq: Definition of bowed parameter} \end{equation} where $C$ is a bowing constant. Following the same procedure as in Eq.\ \ref{eq: smoothed linear parameters}, a smoothed version can be written: \begin{align} E_{\mathsf{g}}\left(\mathbf{r}\right) & =X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)E_{\mathsf{g}}^{\mathsf{B}}+\left[1-X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]E_{\mathsf{g}}^{\mathsf{A}}-CX_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\left[1-X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]\nonumber \\ & =E_{\mathsf{g}}^{\mathsf{A}}+\left[E_{\mathsf{g}}^{\mathsf{B}}-E_{\mathsf{g}}^{\mathsf{A}}\right]X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)-CX_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\left[1-X_{0}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]\label{eq: General smoothed bandgap} \end{align} where the first two terms are the linear interpolation and the last term is the bowing.
This bowing term brings additional complexity when performing $\mathbf{k}\cdot\mathbf{p}$ calculations due to the nonlinearity in $\chi_{\mathsf{sm}}\left(\mathbf{r}\right)$. We approximate the band gap by a linear interpolation between the host and dot band gaps, \begin{equation} E_{\mathsf{g}}\left(\mathbf{r}\right)\approx E_{\mathsf{g}}^{\mathsf{d}}\left(X_{0}\right)\chi_{\mathsf{sm}}\left(\mathbf{r}\right)+\left[1-\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]E_{\mathsf{g}}^{\mathsf{h}}.\label{eq: Approx band gap} \end{equation} Here, $E_{\mathsf{g}}^{\mathsf{d}}$ is the bulk band gap at an alloy fraction of $X_{0}$ and $E_{\mathsf{g}}^{\mathsf{h}}$ is the bulk band gap of the host material. This linear interpolation gives a good approximation for the band gap in most regions and for moderate indium fractions, as shown in Fig.\ \ref{fig:(a) smoothing and bandgap}. The regions with the largest deviation coincide with the locations where $E_{\mathsf{g}}$ changes by over 1.5 eV, so we expect the slight shift in the position at which each band gap value occurs to have minimal effect. Neglecting the $\chi_{\mathsf{sm}}^{2}$ term allows the theory to stay linear and therefore to be calculated efficiently with the convolution theorem. \subsubsection{Strain and the piezoelectric potential} Here we show how smoothing is included in the strain and piezoelectric potential calculations. Once calculated, these strains and piezoelectric potentials can be included in the $\mathbf{k}\cdot\mathbf{p}$ model exactly as shown in Sec.\ \ref{subsec:Including-strain-and-piezo-in-k.p}. In the derivations of Refs.\ \cite{Andreev1999,Andreev2000}, it is not obvious how smoothing is to be implemented in the strain calculations, since they begin from the stress at the sharp dot/barrier interface. However, Ref.\ \cite{Nenashev2018} presents an alternative derivation of the same strain calculation, indicating that $\chi\left(\mathbf{r}\right)$ in the strain expressions can be exchanged for the smoothed version $\chi_{\mathsf{sm}}\left(\mathbf{r}\right)$ without any further changes. For the piezoelectric potential, Eq.\ \ref{eq: inverse dielectric constant} for the spatially varying inverse dielectric constant assumed sharp interfaces. In the case of a smooth indium profile, we use Eq.\ \ref{eq: Definition of linear parameter} to write \begin{equation} \varepsilon\left(\mathbf{r}\right)=\varepsilon^{\mathsf{B}}X\left(\mathbf{r}\right)+\left[1-X\left(\mathbf{r}\right)\right]\varepsilon^{\mathsf{A}}.\label{eq: Exact smooth dielectric constant} \end{equation} When $X\left(\mathbf{r}\right)$ is spatially varying, Eq.\ \ref{eq: inverse dielectric constant} can no longer be applied, because the inverse of the dielectric constant is not a linear function of the indium fraction. However, similar to the band gap, we find that \begin{equation} \frac{1}{\varepsilon\left(\mathbf{r}\right)}\approx\frac{1}{\varepsilon^{\mathsf{d}}\left(X_{0}\right)}\chi_{\mathsf{sm}}\left(\mathbf{r}\right)+\left[1-\chi_{\mathsf{sm}}\left(\mathbf{r}\right)\right]\frac{1}{\varepsilon^{\mathsf{A}}}\label{eq: Approx smoothed inverse dielectric constant} \end{equation} still gives an accurate representation of $\varepsilon^{-1}\left(\mathbf{r}\right)$. Figure \ref{fig: Smoothing dielectric constant} shows a disagreement of less than 1\% between the inverse dielectric constant from Eq.\ \ref{eq: Exact smooth dielectric constant} and the linear interpolation in Eq.\ \ref{eq: Approx smoothed inverse dielectric constant}.
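Both linearizations are easy to test in isolation. The short Python sketch below (ours; the material numbers are round placeholders, not the parameters of Table \ref{tab:System-parameters}) evaluates the bowed band gap of Eq.\ \ref{eq: General smoothed bandgap} against the linear form of Eq.\ \ref{eq: Approx band gap}, and the exact inverse of Eq.\ \ref{eq: Exact smooth dielectric constant} against Eq.\ \ref{eq: Approx smoothed inverse dielectric constant}, along a smoothed profile.
\begin{verbatim}
import numpy as np

# Placeholder parameters (A = host, B = dot binary; gaps in eV).
Eg_A, Eg_B, C, X0 = 3.5, 0.8, 1.4, 0.2
eps_A, eps_B = 9.0, 15.0

chi = np.linspace(0.0, 1.0, 201)    # values taken by chi_sm(r) along a cut

Eg_dot = X0 * Eg_B + (1 - X0) * Eg_A - X0 * (1 - X0) * C   # bulk gap at X0
Eg_bow = Eg_A + (Eg_B - Eg_A) * X0 * chi - C * X0 * chi * (1 - X0 * chi)
Eg_lin = Eg_dot * chi + (1 - chi) * Eg_A

inv_exact = 1.0 / (eps_B * X0 * chi + (1 - X0 * chi) * eps_A)
eps_dot   = eps_B * X0 + (1 - X0) * eps_A
inv_lin   = chi / eps_dot + (1 - chi) / eps_A

# Both approximations are exact at chi = 0 and chi = 1 and deviate in between.
print(np.max(np.abs(Eg_bow - Eg_lin)))                   # band gap error (eV)
print(np.max(np.abs(inv_exact - inv_lin) / inv_exact))   # relative eps^-1 error
\end{verbatim}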
The form of Eq.\ \ref{eq: Approx smoothed inverse dielectric constant} allows us to use Eqs.\ \ref{eq: piezoelectric potential 2}-\ref{eq:delta piezo} for the piezoelectric potential with a simple substitution of $\chi\left(\mathbf{r}\right)$ by $\chi_{\mathsf{sm}}\left(\mathbf{r}\right)$. \begin{figure}[tbh] \includegraphics[width=8.6cm]{k12max_study_R200_y2020m07d23KP1} \includegraphics[width=8.6cm]{k12max_study_R40_y2020m07d23LP1} \caption{Convergence of the fundamental gap $E_{\mathsf{0}}$ of a 1D array of quantum dots as a function of the maximum magnitude of $q_{12}$ included in the $\mathbf{k}\cdot\mathbf{p}$ calculations. $L_{12}^{\mathsf{e}}$ and $n_{12}$ vary, while (a) has $R=200\mathring{A}$ with the strain box size held constant at $L_{12}^{\mathsf{s}}=10R$ and (b) has $R=40\mathring{A}$ with $L_{12}^{\mathsf{s}}=16R$. Plane-wave samplings $m_{12}$ from 3 to 11 are shown for each choice of $L_{12}^{\mathsf{e}}$; the number next to each point indicates $m_{12}$. Different values of $m_{12}$ and $L_{12}^{\mathsf{e}}$ that produce the same $q_{12}^{\mathsf{max}}$ can be seen to produce approximately the same $E_{\mathsf{0}}$, showing that $q_{12}^{\mathsf{max}}$ is a useful metric for convergence of these states. Since $q_{12}^{\mathsf{max}}=m_{12}\pi/L_{12}^{\mathsf{e}}$ and the computational cost scales with $m_{12}$, a smaller $L_{12}^{\mathsf{e}}$ allows easier access to large $q_{12}^{\mathsf{max}}$. In both panels, the black curves have $L_{12}^{\mathsf{e}}=2R$, so the dots touch each other at the 6 edges of the hexagonal unit cell. For the larger dot, there is no visible deviation of $E_{\mathsf{0}}$ from the trend of the separated dots. For the smaller dot, tunneling of the wavefunctions into neighboring dots causes a significant change in $E_{\mathsf{0}}$, labeled $\Delta$. \label{fig: kmax study}} \end{figure} \begin{figure}[tbh] \includegraphics[width=8.6cm]{Electronic_structure_y2020m07d23CRerunP1V2} \caption{Electronic structure of the QD superlattice system along the central axis of the dot for $\boldsymbol{\delta}=[1.5,1.5,2.5]\mathring{A}$, corresponding to $s=1$ in Fig.\ \ref{fig: Smoothing and edge states}(a). Other system parameters are given in Table \ref{tab:System-parameters}. The energies of the lowest bound electron and hole states are shown by the horizontal dashed lines. Thick solid lines are the bulk band edges under the influence of the piezoelectric field and strain. Thin solid lines are z-axis projections of the probability distributions obtained from the envelope functions. Dashed vertical lines are the nominal material interfaces before smoothing. These calculations include spatially varying elastic and dielectric constants. \label{fig: electronic structure}} \end{figure} \begin{table}[h] \caption{Energy shifts due to spatially varying $\lambda\left(\mathbf{r}\right)$ and $\varepsilon\left(\mathbf{r}\right)$ relative to the case with uniform constants of the host material, $\lambda^{\mathsf{GaN}}$ and $\varepsilon^{\mathsf{GaN}}$, respectively. System parameters are given in Table \ref{tab:System-parameters}.
\label{tab: Energy shifts}} \begin{tabular}{cccc} \hline \hline Energy shifts & $\quad\lambda\left(\mathbf{r}\right)$ \& $\varepsilon^{\mathsf{GaN}}$ & $\quad\lambda^{\mathsf{GaN}}$ \& $\varepsilon\left(\mathbf{r}\right)$ & $\quad\lambda\left(\mathbf{r}\right)$ \& $\varepsilon\left(\mathbf{r}\right)$\tabularnewline \hline $\Delta E_{\mathsf{c}}$ (meV) & 16.7 & 46.7 & 64.7\tabularnewline $\Delta E_{\mathsf{v}}$ (meV) & -5.1 & -30.6 & -37.4\tabularnewline $\Delta E_{\mathsf{0}}$ (meV) & 21.7 & 77.4 & 102.1\tabularnewline \hline \end{tabular} \end{table} \begin{figure}[tbh] \includegraphics[width=8.6cm]{Smoothing_sweep_y2020m07d23FP10Edited} \includegraphics[width=8.6cm]{Smoothing_Confining_potentials_y2020m07d23FRerunP15} \caption{Effects of indium diffusion, given by $\boldsymbol{\delta}=s[1.5,1.5,2.5]\mathring{A}$, on (a) $E_{\mathsf{c}}$ and $E_{\mathsf{v}}$ and (b) the bulk conduction band edge. The remaining parameters are as listed in Table \ref{tab:System-parameters}. An increase in indium diffusion leads to less confinement, which pushes the two states apart in energy, widening the electronic gap $E_{\mathsf{0}}$. The electronic structure for $s=1$ is shown in Fig.\ \ref{fig: electronic structure}. \label{fig: Smoothing and edge states}} \end{figure} \section{Impacts of corrections\label{sec:Energy-shifts-from-corrections}} In this section, we apply our methodology to study the case of a 1D array of quantum dots, such as that described in Ref.~\cite{Nguyen2011}, though we do not consider the boundaries of the nanowire. We achieve this 1D array by taking $n_{3}=1$ to fully couple the dots in the z-direction and $n_{12}>1$ to avoid strain effects from neighboring dots in the xy-plane. We investigate the convergence of the lowest electron and hole state energies $E_{\mathsf{c}}$ and $E_{\mathsf{v}}$, which define the fundamental gap of the dot, $E_{\mathsf{0}}=E_{\mathsf{c}}-E_{\mathsf{v}}$. More specifically, we show that the largest wave vector sampled plays a dominant role in convergence. We also show the energy shifts experienced by these two states when using uniform or spatially varying material parameters and when including alloy smoothing. We model an infinite 1D quantum dot array with the parameters listed in Table \ref{tab:System-parameters}. The 1D dot array has an experimentally well characterized dot-dot spacing in z, which fixes $L_{3}^{\mathsf{e}}=L_{3}^{\mathsf{s}}=L_{3}$, leaving $L_{12}$ and $n_{12}$ to be chosen. These quantum dots have a rather large radius, so the smallest spatial feature that we need to resolve is the decay of the bound wavefunctions into the classically forbidden region. Given that bound wavefunctions decay faster than strain, we need relatively large wave vectors to resolve the wavefunctions. Increasing $m_{12}$ increases the maximum wave vector contained in the mesh, but we can also sample at larger wave vectors by using a smaller $L_{12}^{\mathsf{e}}$. However, if the electronic cell is chosen too small, then there can be electronic wavefunction overlap between states of neighboring dots. We must therefore choose $L_{12}^{\mathsf{e}}$ as small as possible while also avoiding dot-dot interactions. As for the strain, in order to study a 1D array we must choose $n_{12}$ sufficiently large that $L_{12}^{\mathsf{s}}=n_{12}L_{12}^{\mathsf{e}}$ is large enough that the strain of the quantum dot superlattice does not extend across neighboring strain unit cells in the xy-plane.
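As a quick numerical aside (ours; the numbers are purely illustrative), the convergence metric used below, $q_{12}^{\mathsf{max}}=m_{12}\pi/L_{12}^{\mathsf{e}}$, can be tabulated for candidate meshes before any diagonalization:
\begin{verbatim}
import numpy as np

# Different (m12, L12e) pairs with the same ratio share q12max and should
# therefore give nearly the same E0 for well-separated dots.
for m12, L12e in [(5, 400.0), (8, 640.0), (10, 800.0)]:   # L12e in Angstrom
    print(m12, L12e, m12 * np.pi / L12e)                  # same q12max for all three
\end{verbatim}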
With this intuition, we turn to the convergence of $E_{0}$ in terms of $m_{12}$, $L_{12}^{\mathsf{e}}$ and $n_{12}$. Figure \ref{fig: kmax study}(a) shows the importance of the largest $\mathbf{q}$ in the electronic mesh, $q_{12}^{\mathsf{max}}$, for the convergence of $E_{0}$. In this study, $L_{12}^{\mathsf{e}}$ and $n_{12}$ are chosen to keep a constant $L_{12}^{\mathsf{s}}=L_{12}^{\mathsf{e}}n_{12}=2000\mathring{A}$. We observe that $E_{0}$ is to good approximation a function of $q_{12}^{\mathsf{max}}$ alone, rather than of $m_{12}$ and $L_{12}^{\mathsf{e}}$ separately, converging towards the same value for all choices of $L_{12}^{\mathsf{e}}$. We also observe that the smallest $L_{12}^{\mathsf{e}}$ with the highest $m_{12}$ gives the most converged $E_{0}$, since $q_{12}^{\mathsf{max}}=m_{12}\pi/L_{12}^{\mathsf{e}}$. The black line in Fig.\ \ref{fig: kmax study}(a) represents the case of dots touching in the xy plane and, interestingly, does not break the convergence trend. However, we do find a break in the convergence trend for the smaller dots in Fig.\ \ref{fig: kmax study}(b). This difference in convergence is due to the larger quantum dots having better-confined states than the smaller dots. Smaller dots have wavefunctions that extend further outside the dot region, which makes them more able to tunnel into a neighboring dot. Consequently, care has to be taken in choosing the unit cell dimensions for small quantum dots. The lowest quantum dot confined electron and hole energies, $E_{\mathsf{c}}$ and $E_{\mathsf{v}}$, have each been converged to within $5$ meV by choosing $m_{12}$, $m_{3}$ and $n_{12}$ sufficiently large; see Table \ref{tab:System-parameters}. Material parameters are listed in Appendix \ref{sec:Bulk-k.p-parameters}. Band edges and the lowest-energy confined states are shown in Fig.\ \ref{fig: electronic structure}. The thick black solid lines represent the bulk band edges modified by the piezoelectric potential and strain. To include strain effects in the bulk band edges, we have used the $(1,1)$ matrix element from Eq.\ \ref{eq: Bulk strain Hamiltonian} to modify the conduction band edge, and a third of the trace of the $3\times3$ valence band block for the valence band edge. The modifications of both the strain and the piezoelectric potential due to spatially varying elastic and dielectric constants also affect the electronic structure. Table \ref{tab: Energy shifts} shows how much $E_{\mathsf{v}}$ and $E_{\mathsf{c}}$ shift due to these corrections. We find that both corrections push the states apart, leading to an energy gap roughly 100 meV larger than that from simpler calculations with uniform $\varepsilon$ and $\lambda$, a significant change that shows the importance of accurate modeling of the dielectric and elastic parameters. Figure \ref{fig: Smoothing and edge states}(a) shows that indium diffusion pushes the lowest electron and hole states apart, which is due to changes in the confining potentials. From Fig.\ \ref{fig: Smoothing and edge states}(b), we see that indium diffusion reduces the depth of the confining potential. We have observed similar behavior for the hole state confining potential, leading to $E_{\mathsf{v}}$ being pushed down in energy. Consequently, the gap $E_{0}$ increases as indium diffusion is increased. In the case of sharp material interfaces, large wave vectors are needed to resolve the discontinuous parameter profiles. Smoothing removes the sharp interfaces and yields smoothly varying material parameters.
Consequently, the $q_{12}^{\mathsf{max}}$ required for the same degree of convergence is smaller, which means that smaller $m_{12}$ and $m_{3}$, and therefore a reduced computational cost, suffice as $\boldsymbol{\delta}$ increases. In Figs.\ \ref{fig: Smoothing and edge states}(a) and (b), we converged the calculations for the case $s=1$, which guarantees convergence for the rest of the sweep. \section{Conclusions} We have demonstrated techniques for and results of four modifications of standard quantum dot $\mathbf{k}\cdot\mathbf{p}$ theory. We have included, in the strain and piezoelectric potential calculations, elastic and dielectric constants that vary spatially with the alloy fraction. These spatially varying parameters have non-negligible effects on the strain and piezoelectric potential and also produce important shifts of the lowest electron and hole states, significantly changing the calculated gap of the quantum dot. We have also presented a method to include smoothly varying alloy profiles in Fourier-based strain, piezoelectric potential and $\mathbf{k}\cdot\mathbf{p}$ calculations. The smoothing has to be chosen to represent the device of interest, such as indium diffusion in InGaN systems. For the case of $\mathbf{k}\cdot\mathbf{p}$ theory for isolated dots, we have presented a new methodology of overlapping electronic and strain meshes to facilitate the coupling of strain into the $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian, greatly reducing the computational cost of calculating the Hamiltonian matrix elements. Lastly, we have shown that the maximum wave vector contained in the electronic sampling mesh is the most important criterion for determining the convergence of quantum dot levels.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \seclabel{intro} Let $Q$ be a non-empty set equipped with a binary operation (denoted multiplicatively throughout the paper). For each $a \in Q$, the left and right translations $L_a$ and $R_a$ are defined by $L_a x = ax$ and $R_a x = xa$ for all $x \in Q$. The structure $(Q,\cdot)$ is called a \emph{quasigroup} if all of the right and left translations are permutations of $Q$ \cite{Br, Pf}. In a quasigroup $(Q,\cdot)$, there exist transformations $\alpha, \beta : Q\to Q$ such that $x \alpha(x) = x = \beta(x) x$ for all $x\in Q$. A quasigroup $Q$ is called a \emph{left F-quasigroup} if \[ x \cdot yz = xy \cdot \alpha(x)z \tag{$F_l$} \] for all $x, y, z \in Q$. Dually, $Q$ is called a \emph{right F-quasigroup} if \[ zy \cdot x = z\beta(x) \cdot yx \tag{$F_r$} \] for all $x,y,z \in Q$. If $Q$ is both a left F- and right F-quasigroup, then $Q$ is called a (two-sided) \emph{F-quasigroup} \cite{BF, Go, Ke, KKP1, KKP2, Mu, Sa}. Recall that for a quasigroup $(Q,\cdot)$ and for fixed $a,b\in Q$, the structure $(Q,+)$ consisting of the set $Q$ endowed with the binary operation $+ : Q\times Q\to Q$ defined by $x + y = R_b\iv x \cdot L_a\iv y$ is called a \emph{principal isotope} of $(Q,\cdot)$. Here $(Q, +)$ is a quasigroup with neutral element $0 = ab$, that is, $(Q, +)$ is a \emph{loop} \cite{Br}. (Throughout this paper, we will use additive notation for loops, including groups, even if the operation is not commutative.) To study any particular class of quasigroups, it is useful to understand the loops isotopic to the quasigroups in the class. In \cite{KKP1}, we have shown that every loop isotopic to an F-quasigroup is a Moufang loop. In this paper, which is in some sense a prequel to \cite{KKP1}, we study the structure of a particular subclass of F-quasigroups, namely those which are isotopic to groups. An F-quasigroup isotopic to a group will be called an \emph{FG-quasigroup} in the sequel. A quasigroup $Q$ is called \emph{medial} if $xa \cdot by = xb \cdot ay$ for all $x,y,a,b \in Q$. We see that \Fl and \Fr are generalizations of the medial identity. The main result of {\S}\secref{basics} is that the class of FG-quasigroups is axiomatized by two stronger generalizations of the medial identity. In particular, we will show (Theorem \thmref{main}) that a quasigroup is an FG-quasigroup if and only if \[ xy \cdot \alpha(u) v = x\alpha(u) \cdot yv \tag{$A$} \] and \[ xy \cdot \beta(u) v = x\beta(u) \cdot yv \tag{$B$} \] hold for all $x,y,u,v$. In {\S}\secref{FG-linear}, we will show that FG-quasigroups are more than just isotopic to groups; they are, in fact, linear over groups. A quasigroup $Q$ is said to be \emph{linear} over a group $(Q,+)$ if there exist $f,g\in \Aut(Q,+)$ and $e\in Q$ such that $xy = f(x) + e + g(y)$ for all $x,y\in Q$. In {\S}\secref{linear-quasi}, we give necessary and sufficient conditions in terms of $f, g,$ and $e$ for a quasigroup $Q$ linear over a group $(Q,+)$ to be an FG-quasigroup. In {\S}\secref{structure}, we will use the linearity of FG-quasigroups to describe their structure. For a quasigroup $Q$, set $M(Q) = \{a \in Q : xa \cdot yx = xy \cdot ax \; \forall x,y \in Q\}$. We will show (Proposition \prpref{structure}) that in an FG-quasigroup $Q$, $M(Q)$ is a medial, normal subquasigroup and $Q/M(Q)$ is a group. In particular, this gives us a complete description of simple FG-quasigroups (Corollary \corref{simple}) up to an understanding of simple groups.
In {\S}\secref{forms} we codify the relationship between FG-quasigroups and groups by introducing the notion of \emph{arithmetic form} for an FG-quasigroup (Definition \defref{form}). This enables us to show an equivalence of equational classes between (pointed) FG-quasigroups and certain types of groups with operators (Theorem \thmref{correspondence} and Lemma \lemref{homom-forms}). Finally, motivated by this equivalence, we introduce in {\S}\secref{modules} a notion of \emph{central generalized module} over an associative ring, and we show an equivalence of equational classes between (pointed) FG-quasigroups and central generalized modules over a particular ring (Theorem \thmref{mod-equiv}). In \cite{KKP2}, which is the sequel to \cite{KKP1}, we will examine the more general situation for arbitrary F-quasigroups and introduce a correspondingly generalized notion of module. \section{Characterizations of FG-quasigroups} \seclabel{basics} \begin{proposition} \prplabel{basic} Let $Q$ be a left F-quasigroup. Then \begin{enumerate} \item[1.] \qquad $\alpha\beta = \beta\alpha$ and $\alpha$ is an endomorphism of $Q$. \item[2.] \qquad $R_aL_b = L_bR_a$ for $a,b \in Q$ if and only if $\alpha(b) = \beta(a)$. \item[3.] \qquad $R_{\alpha(a)}L_{\beta(a)} = L_{\beta(a)}R_{\alpha(a)}$ for every $a \in Q$. \end{enumerate} \end{proposition} \begin{proof} For (1): $x \cdot \alpha\beta(x)\alpha(x) = \beta(x)x \cdot \alpha\beta(x)\alpha(x) = \beta(x) \cdot x\alpha(x) = \beta(x)x = x = x\alpha(x)$ and so $\alpha\beta(x) = \beta\alpha(x)$. Further, $xy \cdot \alpha(x)\alpha(y) = x \cdot y\alpha(x) = xy = xy \cdot \alpha(xy)$ and $\alpha(x)\alpha(y) = \alpha(xy)$. For (2): If $R_aL_b = L_bR_a$, then $ba = R_aL_b\alpha(b) = L_bR_a\alpha(b) = b \cdot \alpha(b)a, a = \alpha(b)a$ and $\beta(a) = \alpha(b)$. Conversely, if $\beta(a) = \alpha(b)$ then $b \cdot xa = bx \cdot \alpha(b)a = bx \cdot \beta(a)a = bx \cdot a$. Finally (3), follows from (1) and (2). \end{proof} \begin{corollary} \corlabel{commute} If $Q$ is an F-quasigroup, then $\alpha$ and $\beta$ are endomorphisms of $Q$, and $\alpha \beta = \beta \alpha$. \end{corollary} For a quasigroup $(Q,\cdot)$, if the loop isotope $(Q,+)$ given by $x + y = L_b\iv x\cdot R_a\iv y$ for all $x,y\in Q$ is a associative (\textit{i.e.}, a group), then $L_b\iv x \cdot R_a\iv (L_b\iv y\cdot R_a\iv z) = L_b\iv (L_b\iv x\cdot R_a\iv y) \cdot R_a\iv z$ for all $x,y,z\in Q$. Replacing $x$ with $L_b x$ and $z$ with $R_a z$, we have that associativity of $(Q,\circ)$ is characterized by the equation \begin{equation} \eqnlabel{assoc} x \cdot L_b\iv (R_a\iv y\cdot z) = R_a\iv (x\cdot L_b\iv y)\cdot z \end{equation} for all $x,y,z \in Q$, or equivalently, \begin{equation} \eqnlabel{assoc2} L_x L_b\iv R_z R_a\iv = R_z R_a\iv L_x L_b\iv \end{equation} for all $x,z in Q$. \begin{lemma} \lemlabel{group-iso} Let $Q$ be a quasigroup. The following are equivalent: \begin{enumerate} \item[1.] \qquad Every loop isotopic to $Q$ is a group. \item[2.] \qquad Some loop isotopic to $Q$ is a group. \item[3.] \qquad For all $x,y,z,a,b \in Q$, \peqref{assoc} holds. \item[4.] \qquad There exist $a,b\in Q$ such that \peqref{assoc} holds for all $x,y,z \in Q$. \end{enumerate} \end{lemma} \begin{proof} The equivalence of (1) and (2) is well known \cite{Br}. (3) and (4) simply express (1) and (2), respectively, in the form of equations. \end{proof} \begin{lemma} \lemlabel{FG-char0} Let $Q$ be an F-quasigroup. The following are equivalent: \begin{enumerate} \item[1.] 
\qquad $Q$ is an FG-quasigroup, \item[2.] \qquad $x \beta(a) \cdot (L_b\iv R_a\iv y\cdot z) = (x \cdot R_a\iv L_b\iv y) \cdot \alpha(b) z$ for all $x,y,z\in Q$. \end{enumerate} \end{lemma} \begin{proof} Starting with Lemma \lemref{group-iso}, observe that \Fr and \Fl give $R_a\iv (uv) = R_{\beta(a)}\iv u\cdot R_a\iv v$ and $L_b\iv (uv) = L_b\iv u\cdot L_{\alpha(b)}\iv v$ for all $u,v \in Q$. Replace $x$ with $x \beta(a)$ and replace $z$ with $\alpha(b) z$. The result follows. \end{proof} \begin{lemma} \lemlabel{FG-char} Let $Q$ be an F-quasigroup and let $a,b \in Q$ be such that $\alpha(b) = \beta(a)$. Then $Q$ is an FG-quasigroup if and only if $x \beta(a) \cdot y z = x y \cdot \alpha(b) z$ for all $x,y,z\in Q$. \end{lemma} \begin{proof} By Proposition \prpref{basic}(2), $R_aL_b = L_bR_a$ and so $R_a\iv L_b = L_b R_a\iv$. The result follows from Lemma \lemref{FG-char0} upon replacing $y$ with $R_a L_b y$. \end{proof} \begin{proposition} \prplabel{F-as-FG} The following conditions are equivalent for an F-quasigroup $Q$: \begin{enumerate} \item[1.] \quad $Q$ is an FG-quasigroup, \item[2.] \quad $x\alpha\beta(w) \cdot yz = xy \cdot \alpha\beta(w)z$ for all $x,y,z,w \in Q$. \item[3.] \quad There exists $w \in Q$ such that $x\alpha\beta(w) \cdot yz = xy \cdot \alpha\beta(w)z$ for all $x,y,z \in Q$. \end{enumerate} \end{proposition} \begin{proof} For given $w\in Q$, set $a = \alpha(w)$ and $b = \beta(w)$. By Corollary \corref{commute}, $\alpha(b) = \beta(a)$, and so the result follows from Lemma \lemref{FG-char}. \end{proof} The preceding results characterize FG-quasigroups among F-quasigroups. Thus the F-quasigroup laws together with Proposition \prpref{F-as-FG}(2) form an axiom base for FG-quasigroups. Now we turn to the main result of this section, a two axiom base for FG-quasigroups. \begin{lemma} \lemlabel{rearrange} Let $Q$ be an FG-quasigroup. For all $x,y,u,v\in Q$, $L_x L_y\iv R_v\iv R_u = R_v\iv R_u L_x L_y\iv$. \end{lemma} \begin{proof} Another expression for \Fr is $R_v\iv R_u = R_{\beta(u)} R_{R_u\iv v}\iv$, and so the result follows from \peqref{assoc2}. \end{proof} \begin{theorem} \thmlabel{main} A quasigroup $Q$ is an FG-quasigroup if and only if the identities $(A)$ and $(B)$ hold. \end{theorem} \begin{proof} Suppose first that $Q$ is an FG-quasigroup. We first verify the following special case of ($A$): for all $x,y,u,v\in Q$, \begin{equation} \eqnlabel{special} \alpha(x) y \cdot \alpha(u) v = \alpha(x) \alpha(u) \cdot y v . \end{equation} Indeed, \Fl implies $y = L_u\iv R_{\alpha(u) v}\iv R_{yv} u$. Using this and Lemma \lemref{rearrange}, we compute \[ \alpha(x) y \cdot \alpha(u) v = R_{\alpha(u) v} L_{\alpha(x)} L_u\iv R_{\alpha(u) v}\iv R_{yv} u = R_{yv} L_{\alpha(x)} L_u\iv u = \alpha(x) \alpha(u) \cdot yv \] as claimed. Next we verify ($B$). For all $x,y,u,v\in Q$, \[ \begin{array}{rcll} x \beta(\alpha(u) y)\cdot (u\cdot vy) &=& x \beta(\alpha(u) y)\cdot (uv\cdot \alpha(u)y) & \text{by \Fl} \\ &=& (x\cdot uv) \cdot \alpha(u)y & \text{by \Fr} \\ &=& (xu\cdot \alpha(x)v) \cdot \alpha(u)y & \text{by \Fl} \\ &=& (xu\cdot \beta(\alpha(u)y))\cdot (\alpha(x)v \cdot \alpha(u)y) & \text{by \Fr} \\ &=& (xu\cdot \beta(\alpha(u)y))\cdot (\alpha(x)\alpha(u) \cdot vy) & \text{by \peqref{special}} \\ &=& xu \cdot (\beta(\alpha(u)y) \cdot vy) & \text{by \Fl} \end{array} \] where we have also used Corollary \corref{commute} in the last step. Replacing $v$ with $R_y\iv v$ and then $y$ with $L_{\alpha(u)}\iv y$, we have ($B$). The proof of ($A$) is similar.
Conversely, suppose $Q$ satisfies ($A$) and ($B$). Obviously, ($A$) implies \Fl and ($B$) implies \Fr. Moreover, taking $u = \beta(w)$ in ($A$) gives $x\alpha\beta(w) \cdot yv = xy \cdot \alpha\beta(w) v$, and so we may apply Proposition \prpref{F-as-FG} to get that $Q$ is an FG-quasigroup. \end{proof} \section{Quasigroups linear over groups} \seclabel{linear-quasi} Throughout this section, let $Q$ be a quasigroup and $(Q,+)$ a group, possibly noncommutative, but with the same underlying set as $Q$. Assume that $Q$ is linear over $(Q,+)$, that is, there exist $f,g \in \Aut(Q,+)$, $e \in Q$ such that $xy = f(x) + e + g(y)$ for all $x,y\in Q$. Let $\Phi\in \Aut(Q,+)$ be given by $\Phi(x) = -e + x + e$ for all $x \in Q$. If we define a multiplication on $Q$ by $x\cdot_1 y = f(x) + g(y) + e$ for all $x,y \in Q$, then $x\cdot_1 y = f(x) + e - e + g(y) + e = f(x) + e + \Phi g(y)$. On the other hand, if we define a multiplication on $Q$ by $x\cdot_2 y = e + f(x) + g(y)$ for all $x,y \in Q$, then $x\cdot_2 y = \Phi\iv f(x) + e + g(y)$. In particular, there is nothing special about our convention for quasigroups linear over groups; we could have used $(Q,\cdot_1)$ or $(Q,\cdot_2)$ instead. \begin{lemma} \lemlabel{F-char-linear} With the notation conventions of this section, \begin{enumerate} \item[1.] $Q$ is a left F-quasigroup if and only if $fg = gf$ and $-x + f(x) \in Z(Q,+)$ for all $x \in Q$, \item[2.] $Q$ is a right F-quasigroup if and only if $fg = gf$ and $-x + g(x) \in Z(Q,+)$ for all $x \in Q$, \item[3.] $Q$ is an F-quasigroup if and only if $fg = gf$ and $-x + f(x), -x + g(x) \in Z(Q,+)$ for all $x \in Q$. \end{enumerate} \end{lemma} \begin{proof} First, note that $\alpha(u) = -g\iv(e) - g\iv f(u) + g\iv(u)$ and $\beta(u) = f\iv(u) - f\iv g(u) - f\iv(e)$ for all $u\in Q$. For (1): Fix $u,v,w\in Q$ and set $x = f(u)$ and $y = gf(v)$. We have \[ u \cdot vw = f(u) + e + gf(v) + g(e) + g^2(w) \] and \[ uv \cdot \alpha(u)w = f^2(u) + f(e) + fg(v) + e - gfg^{-1}(e) - gfg^{-1}f(u) + gfg^{-1}(u) + g(e) + g^2(w). \] Thus \Fl holds if and only if \begin{equation} \eqnlabel{tmp} x + e + y = f(x) + f(e) + fgf^{-1}g^{-1}(y) + e - gfg^{-1}(e) - gfg^{-1}(x) + gfg^{-1}f^{-1}(x) \end{equation} for all $x,y \in Q$. Suppose \Fl holds. Then setting $x = 0$ in \peqref{tmp} yields $e + y = f(e) + fgf\iv g\iv (y) + e - gfg\iv(e)$ and $x = 0 = y$ yields $-f(e) + e = e - gfg\iv(e)$. Thus $-f(e) + e + y = fgf\iv g\iv (y) - f(e) + e$ and $x + e + y = f(x) + e + y - gfg\iv (x) + gfg\iv f\iv (x)$. Setting $y = -e$ in the latter equality, we get $-f(x) + x = -gfg\iv (x) + gfg\iv f\iv (x)$ and hence $-f(x) + x + e + y = e + y - f(x) + x$. Consequently, $-f(x) + x \in Z(Q,+)$ for all $x \in Q$ and, looking again at the already derived equalities, we conclude that $fg = gf$. For the converse, suppose $fg = gf$. Then \peqref{tmp}, after some rearranging, becomes \[ (-f(x) + x) + e + y = f(e) + y + (e - f(e)) + (- f(x) + x) . \] If we also suppose $-x + f(x) \in Z(Q,+)$ for all $x \in Q$, then the latter equation reduces to a triviality, and so \Fl holds. The proof of (2) is dual to that of (1), and (3) follows from (1) and (2). \end{proof} It is straightforward to characterize F-quasigroups among quasigroups linear over groups for the alternative definitions $(Q,\cdot_1)$ and $(Q,\cdot_2)$ above. Recalling that $\Phi(x) = -e + x + e$, observe that if $-z + f(z) \in Z(Q,+)$ for all $z\in Q$, then $fg = gf$ if and only if $f\Phi g = \Phi gf$.
Using this observation and Lemma \lemref{F-char-linear}(3), we get the following assertion: $(Q,\cdot_1)$ is an F-quasigroup if and only if $fg = gf$ and $-x + f(x), -x + \Phi g(x) \in Z(Q,+)$ for all $x \in Q$. Similarly, $(Q,\cdot_2)$ is an F-quasigroup if and only if $fg = gf$ and $-x + \Phi^{-1}f(x), -x + g(x) \in Z(Q,+)$ for all $x \in Q$. \section{FG-quasigroups are linear over groups} \seclabel{FG-linear} Let $h$ and $k$ be permutations of a group $(Q,+)$. Define a multiplication on $Q$ by $xy = h(x) + k(y)$ for all $x,y \in Q$. Clearly, $Q$ is a quasigroup. \begin{lemma} \lemlabel{RF-linear} Assume that $Q$ is a right F-quasigroup. Then: \begin{enumerate} \item[1.] $h(x + y) = h(x) - h(0) + h(y)$ for all $x,y \in Q$. \item[2.] The transformations $x \mapsto h(x) - h(0)$ and $x \mapsto -h(0) + h(x)$ are automorphisms of $(Q,+)$. \end{enumerate} \end{lemma} \begin{proof} We have $\beta(u) = h^{-1}(u - k(u))$ and $h(h(w) + k(v)) + k(u) = wv \cdot u = w\beta(u) \cdot vu = h(h(w) + kh^{-1}(u - k(u))) + k(h(v) + k(u))$ for all $u,v,w \in Q$. Hence $h(x + y) + z = h(x + kh^{-1}(k^{-1}(z) - z)) + k(hk^{-1}(y) + z)$ for all $x,y,z \in Q$. Setting $z = 0$, we get $h(x + y) = h(x + t) + khk^{-1}(y)$, where $t = kh^{-1}k^{-1}(0)$. Consequently, $h(y) = h(t) + khk^{-1}(y)$, so that $khk^{-1}(y) = -h(t) + h(y)$. Similarly, $h(x) = h(x + t) + khk^{-1}(0) = h(x + t) - h(t) + h(0)$, so that $h(x + t) = h(x) - h(0) + h(t)$. Thus, $h(x + y) = h(x) - h(0) + h(t) - h(t) + h(y) = h(x) - h(0) + h(y)$. This establishes (1). (2) follows immediately from (1). \end{proof} \begin{lemma} \lemlabel{LF-linear} Assume that $Q$ is a left F-quasigroup. Then: \begin{enumerate} \item[1.] $k(x + y) = k(x) - k(0) + k(y)$ for all $x,y \in Q$. \item[2.] The transformations $x \mapsto k(x) - k(0)$ and $x \mapsto -k(0) + k(x)$ are automorphisms of $(Q,+)$. \end{enumerate} \end{lemma} \begin{proof} Dual to the proof of Lemma \lemref{RF-linear}. \end{proof} Now let $Q$ be an FG-quasigroup, $a,b \in Q$, $h = R_a$, $k = L_b$ and $x + y = h^{-1}(x) \cdot k^{-1}(y)$ for all $x,y \in Q$. Then $(Q,+)$ is a group (every principal loop isotope of $Q$ is of this form), $0 = ba$ and $xy = h(x) + k(y)$ for all $x,y \in Q$. Moreover, by Lemmas \lemref{RF-linear} and \lemref{LF-linear}, the transformations $f:x \mapsto h(x) - h(0)$ and $g:x \mapsto -k(0) + k(x)$ are automorphisms of $(Q,+)$. We have $xy = f(x) + e + g(y)$ for all $x,y \in Q$ where $e = h(0) + k(0) = 0 \cdot 0 = ba \cdot ba$. \begin{corollary} \corlabel{FG-linear} Every FG-quasigroup is linear over a group. \end{corollary} \section{Structure of FG-quasigroups} \seclabel{structure} Throughout this section, let $Q$ be an FG-quasigroup. By Corollary \corref{FG-linear}, $Q$ is linear over a group $(Q,+)$, that is, there exist $f,g\in \Aut(Q,+)$, $e\in Q$ such that $xy = f(x) + e + g(y)$ for all $x,y\in Q$. Recall the definition \[ M(Q) = \{a \in Q : xa \cdot yx = xy \cdot ax \; \forall x,y \in Q\} . \] \begin{lemma} \lemlabel{M=Z-e} $M(Q) = Z(Q,+) - e = \{a \in Q: xa \cdot yz = xy \cdot az \; \forall x,y,z \in Q\}$. \end{lemma} \begin{proof} If $a \in M(Q)$, then $f^2(x) + f(e) + fg(a) + e + fg(y) + g(e) + g^2(x) = xa \cdot yx = xy \cdot ax = f^2(x) + f(e) + fg(y) + e + fg(a) + g(e) + g^2(x)$ or, equivalently, $fg(a) + e + z = z + e + fg(a)$ for all $z \in Q$. The latter equality is equivalent to the fact that $fg(a) + e \in Z(Q,+)$ or $a \in f\iv g\iv (Z(Q,+) - e) = Z(Q,+) - f\iv g\iv (e) = Z(Q,+) - e$, since $f\iv g\iv (e) - e \in Z(Q,+)$. We have shown that $M(Q) \subseteq Z(Q,+) - e$.
Proceeding conversely, we show that $Z(Q,+) - e \subseteq \{a \in Q: xa \cdot yz = xy \cdot az\}$, and the latter subset is clearly contained in $M(Q)$. \end{proof} \begin{corollary} \corlabel{M-equiv} The following conditions are equivalent: \begin{enumerate} \item[1.] \qquad $M(Q) = Z(Q,+)$. \item[2.] \qquad $e \in Z(Q,+)$. \item[3.] \qquad $0 \in M(Q)$. \end{enumerate} \end{corollary} \begin{lemma} \lemlabel{ab-in-M} $\alpha(Q) \cup \beta(Q) \subseteq M(Q)$. \end{lemma} \begin{proof} This follows from Theorem \thmref{main}. \end{proof} \begin{lemma} \lemlabel{M-subquasi} $M(Q)$ is a medial subquasigroup of $Q$. \end{lemma} \begin{proof} If $u,v \in Z(Q,+)$ then $(u - e) \cdot (v - e) = f(u) - f(e) + e + g(v) - g(e) \in Z(Q,+) - g(e) = Z(Q,+) - e = M(Q)$, since $f(u)$, $g(v)$ and $-f(e) + e$ all lie in $Z(Q,+)$. Thus $M(Q) = Z(Q,+) - e$ (Lemma \lemref{M=Z-e}) is closed under multiplication, and it is easy to see that for each $a,b\in Z(Q,+)$, the equations $(a-e)\cdot (x-e) = b-e$ and $(y-e)\cdot (a-e) = b-e$ have unique solutions $x,y\in Z(Q,+)$. We conclude that $M(Q)$ is a subquasigroup of $Q$. Applying Lemma \lemref{M=Z-e} again, $M(Q)$ is medial. \end{proof} \begin{lemma} \lemlabel{M-normal} $M(Q)$ is a normal subquasigroup of $Q$, and $Q/M(Q)$ is a group. \end{lemma} \begin{proof} $Z(Q,+)$ is a normal subgroup of the group $(Q,+)$, and if $\rho$ denotes the (normal) congruence of $(Q,+)$ corresponding to $Z(Q,+)$, it is easy to check that $\rho$ is a normal congruence of the quasigroup $Q$, too. Finally, by Lemma \lemref{ab-in-M}, $Q/M(Q)$ is a loop, and hence it is a group. \end{proof} Putting together Lemmas \lemref{M=Z-e}, \lemref{ab-in-M}, \lemref{M-subquasi}, and \lemref{M-normal}, we have the following. \begin{proposition} \prplabel{structure} Let $Q$ be an FG-quasigroup. Then $\alpha(Q) \cup \beta(Q) \subseteq M(Q) = \{a \in Q: xa \cdot yz = xy \cdot az \; \forall x,y,z \in Q\}$, $M(Q)$ is a medial, normal subquasigroup of $Q$, and $Q/M(Q)$ is a group. \end{proposition} \begin{corollary} \corlabel{simple} A simple FG-quasigroup is medial or is a group. \end{corollary} \section{Arithmetic forms of FG-quasigroups} \seclabel{forms} \begin{definition} \deflabel{form} An ordered five-tuple $(Q,+,f,g,e)$ will be called an \emph{arithmetic form} of a quasigroup $Q$ if the following conditions are satisfied: \begin{enumerate} \item The binary structures $(Q,+)$ and $Q$ share the same underlying set (denoted by $Q$ again); \item $(Q,+)$ is a (possibly noncommutative) group; \item $f,g \in \Aut(Q,+)$; \item $fg = gf$; \item $-x + f(x), -x + g(x) \in Z(Q,+)$ for all $x \in Q$; \item $e \in Q$; \item $xy = f(x) + e + g(y)$ for all $x,y \in Q$. \end{enumerate} If, moreover, $e \in Z(Q,+)$, then the arithmetic form will be called \emph{strong}. \end{definition} \begin{theorem} \thmlabel{FG-forms} The following conditions are equivalent for a quasigroup $Q$: \begin{enumerate} \item[1.] $Q$ is an FG-quasigroup. \item[2.] $Q$ has at least one strong arithmetic form. \item[3.] $Q$ has at least one arithmetic form. \end{enumerate} \end{theorem} \begin{proof} Assume (1). From Corollary \corref{FG-linear} and Lemma \lemref{F-char-linear}(3), we know that for all $a,b \in Q$, $Q$ has an arithmetic form $(Q,+,f,g,e)$ such that $0 = ba$. Further, by Lemma \lemref{ab-in-M}, $\alpha(Q) \cup \beta(Q) \subseteq M(Q)$.
Now, if the elements $a$ and $b$ are chosen so that $ba \in \alpha(Q) \cup \beta(Q)$ (for instance, choose $a = b = \alpha\beta(c)$ for some $c \in Q$ and use Corollary \corref{commute}), or merely that $ba \in M(Q)$, then the form is strong by Corollary \corref{M-equiv}. Thus (2) holds. (2) implies (3) trivially, and (3) implies (1) by Lemma \lemref{F-char-linear}(3). \end{proof} \begin{lemma} \lemlabel{rigid} Let $(Q,+,f_1,g_1,e_1)$ and $(Q,*,f_2,g_2,e_2)$ be arithmetic forms of the same FG-quasigroup $Q$. If the groups $(Q,+)$ and $(Q,*)$ have the same neutral element $0$, then $(Q,+) = (Q,*)$, $f_1 = f_2, g_1 = g_2$, and $e_1 = e_2$. \end{lemma} \begin{proof} We have $f_1(x) + e_1 + g_1(y) = xy = f_2(x) * e_2 * g_2(y)$ for all $x,y \in Q$. Setting $x = 0 = y$, we get $e_1 = e_2 = e$. Setting $x = 0$, we get $p(y) = e + g_1(y) = e_2 * g_2(y)$, and so $f_1(x) + p(y) = f_2(x) * p(y)$. But $p$ is a permutation of $Q$, and choosing $y$ with $p(y) = 0$ yields $f_1 = f_2$. Similarly, $g_1 = g_2$ and, finally, $(Q,+) = (Q,*)$. \end{proof} \begin{theorem} \thmlabel{correspondence} Let $Q$ be an FG-quasigroup. Then there exists a biunique correspondence between arithmetic forms of $Q$ and elements from $Q$. This correspondence restricts to a biunique correspondence between strong arithmetic forms of $Q$ and elements from $M(Q)$. \end{theorem} \begin{proof} Combine Corollary \corref{FG-linear}, Lemma \lemref{F-char-linear}(3), and Corollary \corref{M-equiv}. \end{proof} \begin{lemma} \lemlabel{homom-forms} Let $Q$ and $P$ be FG-quasigroups with arithmetic forms $(Q,+,f,g,e_1)$ and $(P,+,h,k,e_2)$, respectively. Let $\varphi:Q \to P$ be a mapping such that $\varphi(0) = 0$. Then $\varphi$ is a homomorphism of the quasigroups if and only if $\varphi$ is a homomorphism of the groups, $\varphi f = h\varphi, \varphi g = k\varphi$ and $\varphi(e_1) = e_2$. \end{lemma} \begin{proof} This generalization of Lemma \lemref{rigid} has a similar proof. \end{proof} Denote by $\mathcal{F}_{g,p}$ the equational class (and category) of pointed FG-quasigroups. That is, $\mathcal{F}_{g,p}$ consists of pairs $(Q,a)$, $Q$ being an FG-quasigroup and $a \in Q$ a fixed element. If $(P,b) \in \mathcal{F}_{g,p}$ then a mapping $\varphi:Q \to P$ is a homomorphism in $\mathcal{F}_{g,p}$ if and only if $\varphi$ is a homomorphism of the quasigroups and $\varphi(a) = b$. Further, put $\mathcal{F}_{g,m} = \{(Q,a) \in \mathcal{F}_{g,p}: a \in M(Q)\}$. Clearly $\mathcal{F}_{g,m}$ is an equational subclass (and also a full subcategory) of $\mathcal{F}_{g,p}$. Let $\varphi:Q \to P$ be a homomorphism of FG-quasigroups. For every $a \in Q$ we have $(Q, \alpha(a)), (P,\alpha\varphi(a)) \in \mathcal{F}_{g,m}$, and $\varphi\alpha(a) = \alpha\varphi(a)$. Thus $\varphi$ is a homomorphism in $\mathcal{F}_{g,m}$. Similarly, $(Q,\beta(a)), (P,\beta\varphi(a)) \in \mathcal{F}_{g,m}$ and $\varphi\beta(a) = \beta\varphi(a)$. Denote by $\mathcal{G}$ the equational class (and category) of algebras $Q(+,f,g,f\iv ,g\iv ,e)$ where $(Q,+)$ is a group and conditions (2)-(6) of Definition \defref{form} are satisfied. If $P(+,h,k,h\iv ,k\iv ,e_1) \in \mathcal{G}$, then a mapping $\varphi:Q \to P$ is a homomorphism in $\mathcal{G}$ if and only if $\varphi$ is a homomorphism of the groups such that $\varphi f = h\varphi, \varphi g = k \varphi$ and $\varphi(e) = e_1$. Finally, denote by $\mathcal{G}_c$ the equational subclass of $\mathcal{G}$ given by $e \in Z(Q,+)$.
It follows from Theorem \thmref{correspondence} and Lemma \lemref{homom-forms} that the classes $\mathcal{F}_{g,p}$ and $\mathcal{G}$ are equivalent. That means that there exists a biunique correspondence $\Phi:\mathcal{F}_{g,p} \to \mathcal{G}$ such that for every algebra $A \in \mathcal{F}_{g,p}$, the algebras $A$ and $\Phi(A)$ have the same underlying set, and if $B \in \mathcal{F}_{g,p}$, then a mapping $\varphi:A \to B$ is an $\mathcal{F}_{g,p}$-homomorphism if and only if it is a $\mathcal{G}$-homomorphism. \begin{corollary} \corlabel{equiv-classes} The equational classes $\mathcal{F}_{g,p}$ and $\mathcal{G}$ are equivalent. The equivalence restricts to an equivalence between $\mathcal{F}_{g,m}$ and $\mathcal{G}_c$. \end{corollary} \section{Generalized modules} \seclabel{modules} Let $(G,+)$ be a (possibly noncommutative) group. An endomorphism $\varphi\in \End(G,+)$ will be called \emph{central} if $\varphi(G) \subseteq Z(G,+)$. We denote by $\ZEnd(G,+)$ the set of central endomorphisms of $(G,+)$. Clearly, the composition of central endomorphisms is again a central endomorphism and $\ZEnd(G,+)$ becomes a multiplicative semigroup under the operation of composition. Furthermore, if $\varphi \in \ZEnd(G,+)$ and $\psi \in \End(G,+)$ then $\varphi + \psi \in \End(G,+)$ where $(\varphi + \psi)(x) = \varphi(x) + \psi(x)$ for all $x \in G$. Consequently, $\ZEnd(G,+)$ becomes an abelian group under pointwise addition, and, altogether, $\ZEnd(G,+)$ becomes an associative ring (possibly without unity). Let $R$ be an associative ring (with or without unity). A \emph{central generalized (left)} $R$\emph{-module} will be a group $(G,+)$ equipped with an $R$-scalar multiplication $R \times G \to G$ such that $a(x + y) = ax + ay$, $(a + b)x = ax + bx$, $a(bx) = (ab)x$ and $ax \in Z(G,+)$ for all $a,b \in R$ and $x,y \in G$. If $G$ is a central generalized $R$-module, then define the \emph{annihilator} of $G$ to be $\Ann(G) = \{a \in R : aG = 0\}$. It is easy to see that $\Ann(G)$ is an ideal of the ring $R$. Let $\textbf{S} = \mathbb{Z}[\textbf{x},\textbf{y},\textbf{u},\textbf{v}]$ denote the polynomial ring in four commuting indeterminates $\textbf{x}, \textbf{y}, \textbf{u}, \textbf{v}$ over the ring $\mathbb{Z}$ of integers. Put $\textbf{R} = \textbf{S}\textbf{x} + \textbf{S}\textbf{y} + \textbf{S}\textbf{u} + \textbf{S}\textbf{v}$. That is, $\textbf{R}$ is the ideal of $\textbf{S}$ generated by the indeterminates. On the other hand, $\textbf{R}$ is a commutative and associative ring (without unity) freely generated by the indeterminates. Let $\mathcal{M}$ be the equational class (and category) of central generalized $\textbf{R}$-modules $G$ such that $\textbf{x} + \textbf{u} + \textbf{x} \textbf{u} \in \Ann(G)$ and $\textbf{y} + \textbf{v} + \textbf{y} \textbf{v} \in \Ann(G)$. Further, let $\mathcal{M}_p$ be the equational class of pointed objects from $\mathcal{M}$. That is, $\mathcal{M}_p$ consists of ordered pairs $(G,e)$ where $G \in \mathcal{M}$ and $e \in G$. Let $\mathcal{M}_c$ denote the subclass of centrally pointed objects from $\mathcal{M}_p$, \textit{i.e.}, $(G,e)\in \mathcal{M}_c$ iff $(G,e)\in \mathcal{M}_p$ and $e\in Z(G,+)$. \begin{theorem} \thmlabel{mod-equiv} The equational classes $\mathcal{F}_{g,p}$ and $\mathcal{M}_p$ are equivalent. This equivalence restricts to an equivalence between $\mathcal{F}_{g,m}$ and $\mathcal{M}_c$. \end{theorem} \begin{proof} Firstly, take $(Q,a) \in \mathcal{F}_{g,p}$. Let $(Q,+,f,g,e)$ be the arithmetic form of the FG-quasigroup $Q$ such that $a = 0$ in $(Q,+)$.
Define mappings $\varphi, \mu, \psi, \nu : Q\to Q$ by $\varphi(x) = -x + f(x)$, $\mu(x) = -x + f^{-1}(x)$, $\psi(x) = -x + g(x)$ and $\nu(x) = -x + g^{-1}(x)$ for all $x\in Q$. It is straightforward to check that $\varphi, \mu, \psi, \nu$ are central endomorphisms of $(Q,+)$, that they commute pairwise, and that $\varphi(x) + \mu(x) + \varphi\mu(x) = 0$ and $\psi(x) + \nu(x) + \psi\nu(x) = 0$ for all $x \in Q$. Consequently, these endomorphisms generate a commutative subring of the ring $\ZEnd(Q,+)$, and there exists a (uniquely determined) homomorphism $\lambda:\textbf{R} \to \ZEnd(Q,+)$ such that $\lambda(\textbf{x}) = \varphi$, $\lambda(\textbf{y}) = \psi$, $\lambda(\textbf{u}) = \mu$, and $\lambda(\textbf{v}) = \nu$. The homomorphism $\lambda$ induces an $\textbf{R}$-scalar multiplication on the group $(Q,+)$ and the resulting central generalized $\textbf{R}$-module will be denoted by $\bar{Q}$. We have $\lambda(\textbf{x} + \textbf{u} + \textbf{x} \textbf{u}) = 0 = \lambda(\textbf{y} + \textbf{v} + \textbf{y}\textbf{v})$ and so $\bar{Q} \in \mathcal{M}$. Now define $\rho : \mathcal{F}_{g,p} \to \mathcal{M}_p$ by $\rho(Q,a) = (\bar{Q},e)$, and observe that $(\bar{Q},e) \in \mathcal{M}_c$ if and only if $e \in Z(Q,+)$. Next, take $(\bar{Q},e) \in \mathcal{M}_p$ and define $f,g : Q\to Q$ by $f(x) = x + \textbf{x}x$ and $g(x) = x + \textbf{y}x$ for all $x \in Q$. We have $f(x + y) = x + y + \textbf{x}x + \textbf{x}y = x + \textbf{x}x + y + \textbf{x}y = f(x) + f(y)$ for all $x,y \in Q$, and so $f\in \End(Q,+)$. Similarly, $g\in \End(Q,+)$. Moreover, $fg(x) = f(x + \textbf{y}x) = x + \textbf{y}x + \textbf{x}x + \textbf{x}\textbf{y}x = x + \textbf{x}x + \textbf{y}x + \textbf{y}\textbf{x}x = gf(x)$, and therefore $fg = gf$. Still further, if we define $k : Q\to Q$ by $k(x) = x + \textbf{u}x$ for $x\in Q$, then $fk(x) = x + (\textbf{x} + \textbf{u} + \textbf{x} \textbf{u})x = x = kf(x)$, and it follows that $k = f\iv$ and so $f\in \Aut(Q,+)$. Similarly, $g\in \Aut(Q,+)$. Of course, $-x + f(x) = \textbf{x}x \in Z(Q,+)$ and $-x + g(x) \in Z(Q,+)$. Consequently, $Q$ becomes an FG-quasigroup under the multiplication $xy = f(x) + e + g(y)$. Define $\sigma : \mathcal{M}_p\to \mathcal{F}_{g,p}$ by $\sigma(\bar{Q},e) = (Q,0)$. Using Theorem \thmref{correspondence} and Lemma \lemref{homom-forms}, it is easy to check that the operators $\rho$ and $\sigma$ represent an equivalence between $\mathcal{F}_{g,p}$ and $\mathcal{M}_p$. Further, $0 \in M(Q)$ if and only if $e \in Z(Q,+)$, so that the equivalence restricts to $\mathcal{F}_{g,m}$ and $\mathcal{M}_c$. \end{proof}
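To see the construction of this proof in the smallest nontrivial setting, the following computational sketch (ours, not part of the original argument) takes $(Q,+) = \mathbb{Z}_n$, where every endomorphism is automatically central, with $f(x) = rx$ and $g(x) = sx$ for $r,s$ invertible modulo $n$; the specific values of $n, r, s, e$ are arbitrary illustrative assumptions. It verifies the annihilator relations $\textbf{x} + \textbf{u} + \textbf{x}\textbf{u}, \textbf{y} + \textbf{v} + \textbf{y}\textbf{v} \in \Ann(\bar{Q})$ and that $xy = f(x) + e + g(y)$ indeed defines a quasigroup.

\begin{verbatim}
# Illustrative sketch: the construction in the proof above for a linear
# quasigroup over Z_n.  Since (Z_n,+) is abelian, every endomorphism is
# central; n, r, s, e are arbitrary choices with gcd(r,n) = gcd(s,n) = 1.
n, r, s, e = 12, 5, 7, 3
rinv = pow(r, -1, n)                 # f^{-1}(x) = rinv*x  (Python >= 3.8)
sinv = pow(s, -1, n)

# phi, mu, psi, nu from the proof: phi(x) = -x + f(x) = (r-1)x, etc.
phi, mu = (r - 1) % n, (rinv - 1) % n
psi, nu = (s - 1) % n, (sinv - 1) % n

# The annihilator relations x + u + xu = 0 and y + v + yv = 0 on Q-bar:
assert (phi + mu + phi * mu) % n == 0
assert (psi + nu + psi * nu) % n == 0

# The multiplication xy = f(x) + e + g(y) is a quasigroup product:
# every row and column of its table is a permutation of Z_n.
mult = lambda x, y: (r * x + e + s * y) % n
for x in range(n):
    assert sorted(mult(x, y) for y in range(n)) == list(range(n))
    assert sorted(mult(y, x) for y in range(n)) == list(range(n))
\end{verbatim}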
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Relation of the mean coherence to quantities defined in the literature} For Fock states (i.e. eigenstates of all number operators $\hat{N}_{p\alpha}=\hat{a}^\dagger_{p\alpha} \hat{a}_{p\alpha}$), the \textit{degree of indistinguishability} \[ \mathcal{I} =\sum_{m \neq n\in \mathcal{B}{}_\mathrm{ext}}\sum_{\alpha \in \mathcal{B}{}_\mathrm{int}} N_{m\alpha}N_{n\alpha}\ \Big/ \! \sum_{m \neq n\in \mathcal{B}{}_\mathrm{ext}} N_{m}N_{n} \] was introduced in \cite{brunner_signatures_2018,dufour_many-body_2020} to quantify partial distinguishability in multi-component bosonic systems. In particular, $\mathcal{I}$ was shown to correlate with the time-average of the density variances $\braket{N_m^2(t)}-\braket{N_m(t)}^2$, which probe the $2P$ reduced state evolving from the initial Fock state. The degree of indistinguishability is related to the 2P mean coherence for arbitrary definite external mode occupations $N_p \in \mathbb{N}$, by \[ \mathcal{I} =N (N-1) (\mathcal{W}^{(2)}-1)\ \Big/ \! \sum_{m \neq n\in \mathcal{B}{}_\mathrm{ext}} N_{m}N_{n}~. \] For an $N$P state $\rho$, the reduced external state $\rho{}_\mathrm{ext}=\rho{}_\mathrm{ext}^{(N)}$ coincides with $1/N!$ times the $J$ matrix introduced in \cite{shchesnovich_sufficient_2014,shchesnovich_tight_2015,shchesnovich_partial_2015} if ideal detectors are assumed. Actually, the author of \cite{shchesnovich_partial_2015} writes ``Note that quantum coherence of photon paths is reflected in the $J$ matrix in a way very similar as in the usual density matrix of a quantum system'' but does not push the connection further. One can measure the bosonic character of the external reduced state $\rho{}_\mathrm{ext}$ by its projection onto the symmetric subspace \cite{shchesnovich_tight_2015,dittel_wave-particle_2019} \[ p_s=\mathrm{tr}(\rho{}_\mathrm{ext} P_S) \,, \] where $P_S=\frac{1}{N!}\sum_{\pi\in S_N} \pi$ is the symmetrizer and $\pi$ acts on $\bm{m}\in \mathcal{H}{}_\mathrm{ext}^{(N)}$ as $\pi\ket{\bm{m}}=\ket{ m_{\pi^{-1}(1)} ,\dots, m_{\pi^{-1}(N)} }$. This quantity is proportional to the $N$P mean coherence, with \[ p_s=\mathcal{W}^{(N)}\prod_{m\in\mathcal{B}{}_\mathrm{ext}} N_m!/N! \,. \] For particles with individual pure internal states $\ket{\phi_i}$, this is also equal to $1/N!$ times the permanent of the distinguishability matrix $\mathcal{S}=(\braket{\phi_i|\phi_j})_{i,j}$ introduced in \cite{tichy_sampling_2015}. \section{Sampling of internal states}\label{app:sampling_intstates} To map out the full transition from indistinguishable fermions to bosons via the intermediate case of distinguishable particles, in terms of the $k$P mean coherences $\mathcal{W}^{(k)}$ as uniformly as possible, we use the following two-step procedure for sampling the pure internal states of the individual particles; the dimension of the internal Hilbert space has to be taken larger than the particle number. To map out the neighborhood of indistinguishable particles, we start from a unit vector $\ket{e}\in\mathcal{H}{}_\mathrm{int}$ and add a perturbation $\ket{f_i}$, with the real and imaginary parts of the components of $\ket{f_i}$ sampled from a normal distribution with zero mean and variance $\epsilon$. By choosing $\epsilon$ sufficiently small, the resulting internal states $\ket{\phi_i}= \ket{e} + \ket{f_i}$, after normalization, are almost parallel.
The larger $\epsilon$ gets, the smaller the relative contribution of the constant vector $\ket{e}$ becomes after renormalization, and we sample the unit sphere in $\mathcal{H}{}_\mathrm{int}$ almost uniformly. As a second step, we sample the neighborhood of perfectly distinguishable particles by choosing orthogonal unit vectors $\ket{e_i}\in\mathcal{H}{}_\mathrm{int}$ for each particle, perturbed by vectors $\ket{f_i}$ sampled as before with normally distributed components in $\mathbb{C}$, followed by renormalization. As before, for large $\epsilon$ the contributions from the constant vectors $\ket{e_i}$ in $\ket{\phi_i}= \ket{e_i} + \ket{f_i}$ become negligible after renormalization, and we approach uniform sampling of the unit sphere in $\mathcal{H}{}_\mathrm{int}$. For sufficiently small $\epsilon$, we generate states $\ket{\phi_i}$ in the vicinity of perfect distinguishability.
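As a concrete illustration (ours, not part of the original text), the following sketch implements this sampling and evaluates $p_s$ through the permanent relation $p_s = \mathrm{perm}(\mathcal{S})/N!$ quoted in the previous section; NumPy, the seed, and the parameter values are incidental assumptions, and the brute-force permanent is only adequate for small $N$.

\begin{verbatim}
# Illustrative sketch: sample internal states by the two-step recipe and
# evaluate p_s = perm(S)/N! for pure internal states.  All parameter
# values and the RNG seed are arbitrary illustrative assumptions.
import math
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

def sample_states(N, dim, eps, around="indistinguishable"):
    # Step one: perturb a common unit vector |e> (nearly parallel states);
    # step two ("distinguishable"): perturb orthogonal unit vectors |e_i>.
    # Components of |f_i> are complex Gaussians with zero mean and
    # variance eps; each state is renormalized afterwards.
    states = []
    for i in range(N):
        e = np.zeros(dim, dtype=complex)
        e[0 if around == "indistinguishable" else i] = 1.0
        f = (rng.normal(0.0, np.sqrt(eps), dim)
             + 1j * rng.normal(0.0, np.sqrt(eps), dim))
        phi = e + f
        states.append(phi / np.linalg.norm(phi))
    return np.array(states)

def permanent(S):
    # Brute force over permutations; fine for the small N used here.
    N = S.shape[0]
    return sum(np.prod([S[i, p[i]] for i in range(N)])
               for p in permutations(range(N)))

N, dim = 3, 5                     # internal dimension > particle number
phi = sample_states(N, dim, eps=0.01)
S = phi.conj() @ phi.T            # distinguishability matrix <phi_i|phi_j>
print(permanent(S).real / math.factorial(N))   # p_s, close to 1 here
\end{verbatim}

\end{document}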
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Lepton flavor violating (LFV) processes such as the neutrinoless decay of the $\tau$ lepton have long been identified as unambiguous signatures of new physics, because no known fundamental local gauge symmetry forbids such a decay. While such decays are forbidden in the Standard Model (SM) because of the vanishing neutrino masses of the three lepton generations, extensions that include current knowledge of neutrino mass and mixing imply $\BR(\taumg) \sim {\cal{O}} (10^{-54})$~\cite{Aubert:2005ye}, which is many orders of magnitude below the experimental sensitivity. However, many new theories, as tabulated below, allow for LFV decays: $\tau \to \ell \gamma$, $\tau \to \ell \ell \ell$, $\tau \to \ell h h$ (where $\ell = e, \mu; h = \pi, K$) up to their existing experimental bounds $\sim {\cal{O}} (10^{-7})$:\\ \vspace*{-.5cm} \begin{table}[!h] \begin{center} \begin{tabular}{l|c|c} & {$\BR(\taulg)$} & {$\BR(\taulll)$}\\\hline mSUGRA + seesaw~\cite{Ellis:1999uq,Ellis:2002fe} & $10^{-7}$ & $10^{-9}$ \\ SUSY + SO(10)~\cite{Masiero:2002jn,Fukuyama:2003hn} & $10^{-8}$ & $10^{-10}$ \\ SM + seesaw~\cite{Cvetic:2002jy} & $10^{-9}$ & $10^{-10}$ \\ Non-Universal Z$^\prime$~\cite{Yue:2002ja} & $10^{-9}$ & $10^{-8}$ \\ SUSY + Higgs~\cite{Dedes:2002rh,Brignole:2003iv} & $10^{-10}$ & $10^{-7}$ \\ \end{tabular} \end{center} \end{table} \vspace*{-.25cm} Feynman diagrams for $\tau \to \mu \gamma$ decay via s-neutrino mixing in the minimal supergravity model with heavy $\nu_{\rm{R}}$ (seesaw mechanism) and for $\tau \to \mu \mu \mu$ decay via neutral Higgs exchange in a supersymmetric model are shown in Figure 1. \begin{figure}[!h] \begin{center} \begin{minipage}[l]{.4\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.15\textheight\epsfbox{fig1a.eps} \end{center} \end{minipage} \hspace*{.1\textwidth} \begin{minipage}[r]{.4\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.15\textheight\epsfbox{fig1b.eps} \end{center} \end{minipage} \end{center} Figure 1: Illustrative scenarios for $\tau \to \mu \gamma$ (left) and $\tau \to \mu \mu \mu$ (right). \end{figure} \section{Signal Identification} \label{sec:signal} Searches for LFV decay in the modes $\tau \to \ell \gamma$~\cite{Aubert:2005ye,Aubert:2005wa}, $\tau \to \ell \ell \ell$~\cite{Aubert:2003pc} and $\tau \to \ell h h$~\cite{Aubert:2005tp} have been performed with 232.2 $\mathrm{fb^{-1}}$, 91.6 $\mathrm{fb^{-1}}$ and 221.4 $\mathrm{fb^{-1}}$ of data collected by the \mbox{\sl B\hspace{-0.4em} {\footnotesize\sl A}\hspace{-0.4em} B\hspace{-0.4em} {\footnotesize\sl A\hspace{-0.05em}R}}\ experiment at $\sqrt{s}$ $\approx$ 10.58 \GeV, respectively. The characteristic feature of these decays is that both the energy and the mass of the $\tau$-daughters are known in such an $\mathrm{e^+e^-}$ annihilation environment. In terms of the two independent variables, the beam-energy-constrained mass $(\mathrm{m}_{\mathrm EC})$ and the energy variable ${\Delta {\rm{E}}} = \mathrm{E}_{\tau} - \sqrt{s}/2$, where $\mathrm{E}_{\tau}$ is the energy of the $\tau$-daughters in the center-of-mass system, the signal is clustered around $(m_\tau,0)$ in the ($\mathrm{m}_{\mathrm EC},\Delta {\rm{E}}$) plane. The identification of the daughters from signal $\tau$ decays is optimized separately for each search. The electrons are identified from the energy deposited in the electromagnetic calorimeter and the momentum of the track measured in the drift chamber, with an efficiency of 91\% for the $\tau \to e \gamma$ and $\tau \to \ell \ell \ell$ searches.
The muons are identified by their minimum-ionizing-particle signature in the calorimeter and by hits in the instrumented flux return, with an efficiency of 82\%, 63\% and 44\% in the $\tau \to \mu \gamma$, $\tau \to \ell \ell \ell$ and $\tau \to \ell h h$ searches, respectively. The kaons are identified using the measured rate of ionization loss in the drift chamber and the measured Cherenkov angle in a ring-imaging detector, with an efficiency of 81\%. The rates for a pion track to be mis-identified as an electron, a muon or a kaon are 0.1\%, 1.0$-$4.8\% and 1.4\%, respectively. \section{Background estimation} \label{sec:bkg} The primary backgrounds are from Bhabha or di-muon events, which are restricted to a narrow band at small values of $|\Delta {\rm{E}}|$, or from $e^+e^- \to \tau^+\tau^-$ events, which are restricted to negative values of $\Delta {\rm{E}}$, because the signal topology reconstruction does not account for the missing neutrinos. The remaining backgrounds from $e^+e^- \to q\bar{q}$ are uniformly distributed. \begin{figure}[!h] \begin{center} \begin{minipage}[l]{.49\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.240\textheight\epsfbox{fig2a.eps} \end{center} \end{minipage} \begin{minipage}[r]{.49\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.230\textheight\epsfbox{fig2b.eps} \end{center} \end{minipage} \end{center} Figure 2: $\mathrm{m}_{\mathrm EC}$ distribution inside a $\pm 2 \sigma$ band in $\Delta {\rm{E}}$ for $\tau \to \ell \gamma$ searches. \end{figure} For $\tau \to \ell \gamma$ searches, the signal probability density function (PDF) is described by a double Gaussian shape in $\mathrm{m}_{\mathrm EC}$, and the background is well described by a PDF that is constant or has a small slope in $\mathrm{m}_{\mathrm EC}$ inside a $\pm 2 \sigma$ band in $\Delta {\rm{E}}$, as shown in Figure 2 for $\tau \to e \gamma$ (left) and $\tau \to \mu \gamma$ (right) decays. For $\tau \to \ell \ell \ell ~(\ell h h)$ searches, the background PDFs are analytically parameterized as functions of $\Delta {\rm{E}}$ and $\Delta {\rm{M}} = \mathrm{m}_{dau} - \mathrm{m}_\tau$, where $\mathrm{m}_{dau}$ is the reconstructed mass of the $\tau$ daughters. The background rates are determined by un-binned maximum likelihood fits to the data, shown along with the selected signal MC events in Figures 3 and 4 for the $\tau \to \ell \ell \ell$ and $\tau \to \ell h h$ decay modes, respectively. All these searches are performed in a blinded manner: the background predictions from sideband data are compared to the data inside the signal region only after the optimization and systematic studies have been completed. \begin{figure}[!h] \begin{center} \begin{minipage}[c]{.75\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.24\textheight\epsfbox{fig3.eps} \end{center} \end{minipage} \end{center} Figure 3: Observed data as dots and the boundaries of the signal region in $(\Delta {\rm{M}}, \Delta {\rm E})$ plane for $\tau \to \ell \ell \ell$ searches. The dark and light shading indicates contours containing 50\% and 90\% of the selected MC signal events, respectively. \end{figure} \vspace*{-.75cm} \begin{figure}[!h] \begin{center} \begin{minipage}[c]{\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.24\textheight\epsfbox{fig4.eps} \end{center} \end{minipage} \end{center} Figure 4: Observed data as dots and the boundaries of the signal region in $(\Delta {\rm{M}}, \Delta {\rm E})$ plane for $\tau \to \ell h h$ searches.
The dark and light shading indicates contours containing 50\% and 90\% of the selected MC signal events, respectively. \end{figure} \section{Results} \label{sec:res} No signal has been observed. Upper limits at 90\% confidence level (C.L.) are set using: ${\cal{B}}^{90}_{UL}=N^{90}_{UL}/(2\varepsilon {\cal{L}} \sigma_{\tau\tau})$, where $N^{90}_{UL}$ is the 90\% C.L. upper limit on the number of signal events for $N_{\rm obs}$ events observed when $N_{\rm bgd}$ background events are expected, $\varepsilon$ is the signal efficiency, ${\cal{L}}$ is the integrated luminosity and $\sigma_{\tau\tau}$ is the $\tau$-pair production cross section; the factor of two accounts for the two $\tau$ leptons produced per event. Efficiency estimates, the number of expected background events $(N_{\rm bgd})$ in the signal region (with total uncertainties), the number of observed events $(N_{\rm obs})$ in the signal region, and the 90\% C.L. upper limit $({\cal{B}}_{\mathrm{UL}}^{90})$ for each decay mode are tabulated below:\\ \vspace*{-.5cm} \begin{table}[!h] \begin{center} \begin{tabular}{lcccc}\hline\hline Mode & Efficiency [\%]& $N_{\rm bgd}$ & $N_{\rm obs}$ & ${\cal{B}}_{\mathrm{UL}}^{90} (10^{-7})$ \\\hline $e^- \gamma$ &$ 4.7 \pm 0.3 $ & $ 1.9 \pm 0.4 $ & 1 & 1.1 \\ $\mu^- \gamma$ &$ 7.4 \pm 0.7 $ & $ 6.2 \pm 0.5 $ & 4 & 0.7 \\\hline $e^- e^+ e^-$ &$ 7.3 \pm 0.2 $ & $ 1.5 \pm 0.1 $ & 1 & 2.0 \\ $\mu^+ e^- e^- $ &$11.6 \pm 0.4 $ & $ 0.4 \pm 0.1 $ & 0 & 1.1 \\ $\mu^- e^+ e^- $ &$ 7.7 \pm 0.3 $ & $ 0.6 \pm 0.1 $ & 1 & 2.7 \\ $e^+ \mu^- \mu^-$ &$ 9.8 \pm 0.5 $ & $ 0.2 \pm 0.1 $ & 0 & 1.3 \\ $e^- \mu^+ \mu^-$ &$ 6.8 \pm 0.4 $ & $ 0.4 \pm 0.1 $ & 1 & 3.3 \\ $\mu^- \mu^+ \mu^-$ &$ 6.7 \pm 0.5 $ & $ 0.3 \pm 0.1 $ & 0 & 1.9 \\\hline $e^- K^+K^-$ &$ 3.8 \pm 0.2 $ & $ 0.2 \pm 0.1 $ & 0 & 1.4 \\ $e^- K^+\pi^-$ &$ 3.1 \pm 0.1 $ & $ 0.3 \pm 0.1 $ & 0 & 1.7 \\ $e^- \pi^+K^-$ &$ 3.1 \pm 0.1 $ & $ 0.1 \pm 0.1 $ & 1 & 3.2 \\ $e^- \pi^+\pi^-$ &$ 3.3 \pm 0.2 $ & $ 0.8 \pm 0.1 $ & 0 & 1.2 \\ $\mu^- K^+K^-$ &$ 2.2 \pm 0.1 $ & $ 0.2 \pm 0.1 $ & 0 & 2.5 \\ $\mu^- K^+\pi^-$ &$ 3.0 \pm 0.2 $ & $ 1.7 \pm 0.3 $ & 2 & 3.2 \\ $\mu^- \pi^+K^-$ &$ 2.9 \pm 0.2 $ & $ 1.0 \pm 0.2 $ & 1 & 2.6 \\ $\mu^- \pi^+\pi^-$ &$ 3.4 \pm 0.2 $ & $ 3.0 \pm 0.4 $ & 3 & 2.9 \\ $e^+ K^-K^-$ &$ 3.9 \pm 0.2 $ & $ 0.0 \pm 0.0 $ & 0 & 1.5 \\ $e^+ K^-\pi^-$ &$ 3.2 \pm 0.1 $ & $ 0.2 \pm 0.1 $ & 0 & 1.8 \\ $e^+ \pi^-\pi^-$ &$ 3.4 \pm 0.2 $ & $ 0.4 \pm 0.1 $ & 1 & 2.7 \\ $\mu^+ K^-K^-$ &$ 2.1 \pm 0.1 $ & $ 0.1 \pm 0.1 $ & 1 & 4.8 \\ $\mu^+ K^-\pi^-$ &$ 2.9 \pm 0.2 $ & $ 1.5 \pm 0.3 $ & 1 & 2.2 \\ $\mu^+ \pi^-\pi^-$ &$ 3.3 \pm 0.2 $ & $ 1.5 \pm 0.3 $ & 0 & 0.7 \\\hline\hline \end{tabular} \end{center} \end{table} \section{Summary} \label{sec:sum} An improvement of five orders of magnitude in the upper limits for four LFV $\tau$ decay modes over the past two decades is shown in Figure 5. The next five years promise to be the most interesting phase in this evolution, as experimental sensitivities approach the predictions of different theoretical models.
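To make the normalization explicit, here is a schematic sketch (ours, not from the analyses) of how the formula above converts $N^{90}_{UL}$ into a branching-ratio limit; the cross-section value and the $N^{90}_{UL}$ input below are illustrative assumptions only, not the values used in the searches.

\begin{verbatim}
# Schematic sketch of B_UL = N_UL / (2 eps L sigma_tautau); the factor
# of two counts the two tau leptons per e+e- -> tau+tau- event.  The
# inputs are illustrative assumptions (sigma_tautau ~ 0.89 nb assumed).

def br_upper_limit(n_ul, eff, lumi_fb, sigma_tautau_nb=0.89):
    n_tau = 2.0 * lumi_fb * sigma_tautau_nb * 1e6   # 1 nb = 1e6 fb
    return n_ul / (eff * n_tau)

# e.g. an assumed N_UL of 3.0 events at 7.4% efficiency and 232.2 fb^-1:
print(br_upper_limit(n_ul=3.0, eff=0.074, lumi_fb=232.2))   # ~1e-7
\end{verbatim}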
\begin{figure}[!h] \begin{center} \begin{minipage}[l]{.495\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.22\textheight\epsfbox{fig5a.eps} \end{center} \vspace*{-.5cm} \end{minipage} \begin{minipage}[r]{.495\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.22\textheight\epsfbox{fig5b.eps} \end{center} \vspace*{-.5cm} \end{minipage} \begin{minipage}[l]{.495\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.22\textheight\epsfbox{fig5c.eps} \end{center} \vspace*{-.5cm} \end{minipage} \begin{minipage}[r]{.495\textwidth} \begin{center} \epsfxsize=\textwidth\epsfysize=.22\textheight\epsfbox{fig5d.eps} \end{center} \vspace*{-.5cm} \end{minipage} \end{center} Figure 5: Evolution of experimental bounds (${\cal{B}}_{\mathrm{UL}}^{90}$) and some predictions. \end{figure} \section*{References}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{#1}}% \def \AMRsection#1{\section{#1}}% \def \lastfigno{Figure~\hbox{\the\figno}}% \def \nextfigno{{\advance \figno by 1 \lastfigno}}% \def \AMRfigurelabel#1{\expandafter\xdef\csname #1\endcsname{\lastfigno}% \symbol{\lastfigno}{#1}}% \newif\ifAMRfigurenaturalsize \AMRfigurenaturalsizefalse \newif\ifAMRfigureysize \AMRfigureysizefalse \newdimen\AMRfigurewidth \AMRfigurewidth=\hsize \advance\AMRfigurewidth by -\secindent \def\AMRfigure #1 #2\par{% \topinsert \hbox to \hsize{\hfil\begingroup \ifAMRfigurenaturalsize \def\epsfsize##1##2{0pt} \else \ifAMRfigureysize\else \epsfxsize \AMRfigurewidth \fi \fi \hbox to \AMRfigurewidth{\hfil\epsffile{#1}\hfil} \endgroup}% \figure{#2}% \AMRfigurenaturalsizefalse \AMRfigureysizefalse \symbol{\the\figno}{#1} \endinsert}% \newbox\AMRFigurePreboxedBox \def\AMRfigurepreboxed #1\par{% \topinsert \unvbox\AMRFigurePreboxedBox% \figure{#1}% \endinsert}% \def\lasttabno{Table~\hbox{\the\tabno}}% \def \nexttabno{{\advance \tabno by 1 \lasttabno}}% \def \AMRtablelabel#1{\expandafter\xdef\csname #1\endcsname{\lasttabno}}% \title{Instabilities of periodic orbits with spatio-temporal symmetries}[Instabilities of periodic orbits with spatio-temporal symmetries] \author{A M Rucklidge\dag\footnote{\S}{email: A.M.Rucklidge@damtp.cam.ac.uk} and M Silber\ddag \footnote{\P}{email: silber@nimbus.esam.nwu.edu}}[A M Rucklidge and M Silber] \address{\dag\ Department of Applied Mathematics and Theoretical Physics,\hfil\break University of Cambridge, Cambridge CB3 9EW, UK} \address{\ddag\ Department of Engineering Sciences and Applied Mathematics,\hfil\break Northwestern University, Evanston, IL 60208, USA} \abs Motivated by recent analytical and numerical work on two- and three-dimensional convection with imposed spatial periodicity, we analyse three examples of bifurcations from a continuous group orbit of spatio-temporally symmetric periodic solutions of partial differential equations. Our approach is based on centre manifold reduction for maps, and is in the spirit of earlier work by Iooss (1986) on bifurcations of group orbits of spatially symmetric equilibria. Two examples, two-dimensional pulsating waves (PW) and three-dimensional alternating pulsating waves (APW), have discrete spatio-temporal symmetries characterized by the cyclic groups $Z_n$, $n=2$ (PW) and $n=4$ (APW). These symmetries force the Poincar\'e return map $\pmb{\cal M}$ to be the $n^{th}$ iterate of a map~${\widetilde{\pmb{\cal G}}}$: $\pmb{\cal M}={\widetilde{\pmb{\cal G}}}^n$. The group orbits of PW and APW are generated by translations in the horizontal directions and correspond to a circle and a two-torus, respectively. An instability of pulsating waves can lead to solutions that drift along the group orbit, while bifurcations with Floquet multiplier~$+1$ of alternating pulsating waves do not lead to drifting solutions. The third example we consider, alternating rolls, has the spatio-temporal symmetry of alternating pulsating waves as well as being invariant under reflections in two vertical planes. When the bifurcation breaks these reflections, the map ${\widetilde{\pmb{\cal G}}}$ has a ``two-symmetry,'' as analysed by Lamb (1996). This leads to a doubling of the marginal Floquet multiplier and the possibility of bifurcation to two distinct types of drifting solutions. 
\endabs \submitted \noindent \today \AMRsection{Introduction} Techniques for analysing symmetry-breaking bifurcations of $\Gamma$-invariant equilibria of $\Gamma$-equivariant differential equations are well-developed in the case of compact Lie groups~$\Gamma$ (Golubitsky \etal 1988). The motivation for developing these methods comes, in large part, from problems of pattern formation in fluid dynamics (see, for example, Crawford and\ Knobloch 1991). In the simplest cases, the symmetry-breaking bifurcation corresponds to a pattern-forming instability of a basic state that is both time-independent and fully symmetric, for example, a spatially uniform equilibrium solution of the governing equations. A symmetry-breaking Hopf bifurcation of this spatially uniform state often leads to time-periodic solutions that break the translation invariance of the governing equations and that have spatio-temporal and spatial symmetries. In this paper we address bifurcations of such periodic orbits, which have broken the translation invariance but have retained a discrete group of spatio-temporal symmetries. \addref{refG59} \addref{refC50} We consider problems posed with periodic boundary conditions, for which there is an $S^1$ symmetry associated with each direction of imposed periodicity. If this symmetry is broken by an equilibrium solution, then the solution is not isolated; there is a continuous family of equilibria related through the translations. An instability of this solution can excite the neutral translation mode(s) and lead to new solutions that drift along the translation group orbit. This is the case, for example, in the ``parity-breaking bifurcation'': a reflection-symmetric steady state undergoes a symmetry-breaking bifurcation to a uniformly translating solution. Another example of a bifurcation leading to drift has been observed in two-dimensional convection: when the vertical mirror plane of symmetry that separates steady counter-rotating rolls is broken in a Hopf bifurcation, the resulting solution, called a direction-reversing travelling wave or pulsating wave (PW), drifts to and fro (Landsberg and\ Knobloch 1991; Matthews \etal 1993). This periodic orbit is invariant under the combination of advance of half the period in time with a reflection; any drift in one direction in the first half of the oscillation is exactly balanced by a drift in the other direction in the second half, so there is no net drift during the oscillation. Similarly, in three-dimensional convection with spatial periodicity imposed, for example, on a square lattice, a symmetry-breaking Hopf bifurcation from steady convection in a square pattern can lead to alternating pulsating waves (APW), which are invariant under the combination of advance of one quarter of the period and rotation by~$90^\circ$ (Rucklidge 1997). These solutions drift alternately along the two horizontal coordinate directions, but again have no net drift over the whole period of the oscillation. \addref{refL23} \addref{refM48} \addref{refR37} There have been a number of studies of bifurcations of compact group orbits of (relative) equilibria. Iooss (1986) developed an approach based on centre manifold reduction to investigate bifurcations of Taylor vortices in the Taylor--Couette problem. Specifically, he analysed bifurcations in directions orthogonal to the tangent space to the group orbit of equilibria, with the neutral translation mode incorporated explicitly in the bifurcation problem.
Krupa (1990) provided a general setting for investigating bifurcations of relative equilibria that focuses on the local dynamics in directions orthogonal to the tangent space to the group orbit. He showed that the resulting bifurcation problem is $\Sigma$-equivariant, where $\Sigma$ is the isotropy subgroup of symmetries of the relative equilibrium, and, building on work of Field (1980), provided a group-theoretic method for determining whether or not the bifurcating solutions drift. Aston \etal (1992) and Amdjadi \etal (1997) develop a technique for numerically investigating bifurcations of relative equilibria in $O(2)$-equivariant partial differential equations, and apply their method to the Kuramoto--Sivashinsky equation. Their approach isolates one solution on a group orbit, while still keeping track of any constant drift along the group orbit. \addref{refI3} \addref{refK61} \addref{refF33} \addref{refA40} \addref{refA36} In this paper we investigate bifurcations of time-periodic solutions that are not isolated, as they have broken the translation invariance, but that do possess a discrete group of spatio-temporal symmetries. Our approach is similar to that of Iooss (1986). However, we are interested in instabilities of periodic solutions, so we use centre manifold reduction for Poincar\'e maps. We are particularly interested in determining whether the symmetries of the basic state place any restrictions on the types of bifurcations that occur, and whether the bifurcating solutions drift along the underlying group orbit or not. We consider three examples that are motivated by numerical studies of convection with periodic boundary conditions in the horizontal direction(s). First we investigate bifurcations of the pulsating waves and alternating pulsating waves described above. These solutions have discrete spatio-temporal symmetries $Z_2$ and $Z_4$, respectively. The group orbit of the pulsating waves is $S^1$, while the group orbit of the alternating pulsating waves is a two-torus, due to imposed periodicity in two horizontal directions. The third example we treat in this paper is alternating rolls (AR), which have the same spatio-temporal symmetry as APW but are also invariant under reflection in two orthogonal vertical planes (Silber and\ Knobloch 1991). \addref{refS40} The $Z_n$ ($n=2,4$) spatio-temporal symmetry of the basic state places restrictions on the Poincar\'e return map $\pmb{\cal M}$; specifically, we show that it is the $n^{th}$ iterate of a map ${\widetilde{\pmb{\cal G}}}$. A direct consequence of this is that period-doubling bifurcations are nongeneric (Swift and\ Wiesenfeld 1984). Throughout the paper we restrict our analysis to bifurcation with Floquet multiplier~$+1$; we do not consider Hopf bifurcations. We also restrict attention to bifurcations that preserve the spatial periodicity of the basic state. \addref{refI3} \addref{refS20} Our paper is organized as follows. In the next section we lay the framework for our analysis in the setting of a simple example, namely bifurcation of pulsating waves. We show how the spatio-temporal symmetry is manifest in the Poincar\'e return map. Section 3 considers bifurcation of the three-dimensional analogue of pulsating waves, namely alternating pulsating waves. Section 4 considers bifurcations of alternating rolls. For this problem we need to consider six different cases, which we classify by the degree to which the spatial and spatio-temporal symmetries are broken.
In the case that the spatial reflection symmetries are fully broken by the neutral modes, the Floquet multiplier~$+1$ is forced to have multiplicity two, and more than one solution branch bifurcates from the basic AR state. In one case we find a bifurcation of the AR state leading to two distinct drifting solutions. We present an example of one of the drifting patterns that is obtained by numerically integrating the equations of three-dimensional compressible magnetoconvection. In the course of the analysis of bifurcations of alternating rolls, we make contact with the work on $k$-symmetries of Lamb and\ Quispel (1994) and Lamb (1996). Section 5 contains a summary and indicates some directions for future work. \addref{refL43} \addref{refL41} \AMRsection{Two dimensions: pulsating waves} We write the partial differential equations (PDEs) for two-dimensional convection symbolically as: $$ {{\rm d}\pmb{U}\over{\rm d} t} = \pmb{\cal F}(\pmb{U};\mu),\eqno(\AMReqno)\AMReqlabel{thePDE} $$ where $\pmb{U}$ represents velocity, temperature, density, etc.\ as functions of the horizontal coordinate~$x$, the vertical coordinate~$z$ and time~$t$; $\mu$ represents a parameter of the problem; and $\pmb{\cal F}$ is a nonlinear operator between suitably chosen function spaces. We assume periodic boundary conditions, with spatial period $\ell$, in the $x$-direction. The symmetry group of the problem is $O(2)$, which is the semi-direct product of $Z_2$, generated by a reflection~$\kappa_x$, and an $SO(2)$ group of translations~$\tau_a$, which act as $$ \kappa_x\colon x\to-x,\qquad \tau_a\colon x\to x+a\pmod\ell,\eqno(\AMReqno) $$ where $\tau_\ell$ is the identity and $\tau_a\kappa_x=\kappa_x\tau_{-a}$. The PDEs~\thePDE\ are equivariant under the action of these symmetry operators, so $\pmb{\cal F}(\tau_a\pmb{U};\mu)=\tau_a\pmb{\cal F}(\pmb{U};\mu)$ and $\pmb{\cal F}(\kappa_x\pmb{U};\mu)=\kappa_x\pmb{\cal F}(\pmb{U};\mu)$, where $\tau_a$ and $\kappa_x$ act on the functions as follows: $$ \tau_a\pmb{U}(x,z,t)\equiv\pmb{U}(x-a,z,t),\qquad \kappa_x\pmb{U}(x,z,t)\equiv M_{\kappa_x}\pmb{U}(-x,z,t).\eqno(\AMReqno) $$ Here $M_{\kappa_x}$ is a matrix representing~$\kappa_x$; it reverses the sign of the horizontal component of velocity and leaves all other fields in $\pmb{U}$ unchanged. Suppose that when the parameter $\mu=0$, there is a known pulsating wave solution~$\pmb{U}_0(x,z,t)$ of~\thePDE\ with temporal period~$T$ and spatial period~$\lambda=\ell/N$, where $N$ specifies the number of PWs that fit into the periodic box. The symmetries of $\pmb{U}_0$ are summarized as follows: $$ \pmb{U}_0(x,z,t)= \kappa_x \pmb{U}_0(x,z,t+\hbox{$1\over2$} T)= \pmb{U}_0(x,z,t+T)= \tau_\lambda \pmb{U}_0(x,z,t).\eqno(\AMReqno) $$ There is a continuous group orbit of PWs generated by translations: $\pmb{U}_a=\tau_a \pmb{U}_0$. We are interested in bifurcations from this group orbit. Following the approach developed by Iooss (1986) and Chossat and\ Iooss (1994) for studying instabilities of continuous group orbits of steady solutions, we expand about the group orbit of periodic solutions as follows: $$ \pmb{U}(x,z,t)=\tau_{c(t)}(\pmb{U}_0(x,z,t) + \pmb{A}(x,z,t)). \eqno(\AMReqno)\AMReqlabel{theExpansion} $$ Here translation along the group orbit is given by~$\tau_{c(t)}$, where $c$ is a coordinate parameterizing the group orbit. Small perturbations, orthogonal to the tangent direction of the group orbit, are specified by $\pmb{A}(x,z,t)$. 
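As a quick consistency check on these definitions (ours, not from the original paper), the following sketch verifies numerically that the action on fields defined above satisfies the group relation $\tau_a\kappa_x=\kappa_x\tau_{-a}$; the two field components used are arbitrary trigonometric examples on a periodic domain.

\begin{verbatim}
# Sketch: numerical check of tau_a U(x) = U(x-a) and
# kappa_x U(x) = M U(-x), with M flipping the sign of the horizontal
# velocity component.  The sample field U is a made-up trig example.
import numpy as np

ell = 2 * np.pi
U = lambda x: np.array([np.sin(3 * x) + 0.2 * np.cos(x),   # horiz. velocity
                        np.cos(2 * x) - 0.5 * np.sin(x)])  # another field
M = np.diag([-1.0, 1.0])

def tau(a, F):
    return lambda x: F((x - a) % ell)

def kappa(F):
    return lambda x: M @ F((-x) % ell)

# Group relation tau_a kappa_x = kappa_x tau_{-a}:
a = 0.7
xs = np.linspace(0.0, ell, 50, endpoint=False)
lhs, rhs = tau(a, kappa(U)), kappa(tau(-a, U))
assert all(np.allclose(lhs(x), rhs(x)) for x in xs)
\end{verbatim}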
The expansion~\theExpansion\ is substituted into the PDEs~\thePDE\ and, after suitable projection that separates translations along the group orbit from the evolution of the perturbation orthogonal to it, we obtain equations of the form (see Chossat and\ Iooss (1994)): $$ {{\rm d} \pmb{A}\over{\rm d} t} = \pmb{\cal G}(\pmb{A},\pmb{U}_0;\mu),\qquad {{\rm d} c\over{\rm d} t} = h(\pmb{A},\pmb{U}_0;\mu), \eqno(\AMReqno)\AMReqlabel{thePDEprojected} $$ where $\pmb{\cal G}$ and $h$ satisfy $\pmb{\cal G}(0,\pmb{U}_0;0)=0$ and $h(0,\pmb{U}_0;0)=0$. An important consequence of the translation invariance of the original PDEs is that $\pmb{\cal G}$ and $h$ do not depend on the position $c$ along the group orbit; the equation for the drift $c$ is decoupled from the equation for the amplitude of the perturbation $\pmb{A}$. Here we find it convenient to keep track of the explicit time dependence of $\pmb{\cal G}$ and $h$, which enters through their dependence on the basic state $\pmb{U}_0$, by listing $\pmb{U}_0$ as one of the arguments of $\pmb{\cal G}$ and $h$. We determine how the spatio-temporal reflection symmetry of $\pmb{U}_0$ is manifest in the equations for $c$ and $\pmb{A}$ by noting that if $\tau_{c(t)}(\pmb{U}_0(x,z,t)+\pmb{A}(x,z,t))$ is a solution of the PDEs~\thePDE\ , then so is $$ \kappa_x\tau_{c(t)}(\pmb{U}_0(x,z,t)+\pmb{A}(x,z,t)) =\tau_{-c(t)}(\kappa_x\pmb{U}_0(x,z,t)+\kappa_x\pmb{A}(x,z,t)). \eqno(\AMReqno) $$ Hence $$\eqalign{ \pmb{\cal G}(\kappa_x\pmb{A},\kappa_x\pmb{U}_0;\mu)&=\kappa_x\pmb{\cal G}(\pmb{A},\pmb{U}_0;\mu),\cr h(\kappa_x\pmb{A},\kappa_x\pmb{U}_0;\mu)&= -h(\pmb{A},\pmb{U}_0;\mu).} \eqno(\AMReqno) $$ \addref{refI3} \addref{refC72} \AMRfigure rsfig01.eps Illustration of $\kappa_x\pmb{\cal M}_0^t = \pmb{\cal M}_{T/2}^{T/2+t}\kappa_x.$ In this example, the reflection~$\kappa_x$ changes the sign of the horizontal coordinate. The PW periodic solution is shown as a dotted line. (a)~A perturbation at $t=0$ is advanced in time by an amount~$t$ (the solid line, which stays close to the broken line on the periodic orbit), then the system is reflected. (b)~We arrive at the same final position if we reflect (so now the perturbation is about the PW at $t=\hbox{$1\over2$} T$) and then advance in time by the same amount. \AMRfigurelabel{FigureTimeAdvanceSymmetry} Since our basic state $\pmb{U}_0$ is $T$-periodic, we seek a map that gives the perturbation $\pmb{A}$ at time $t=T$ given a perturbation $\pmb{A}(0)$ at some initial time $t=0$. Specifically, we define a time advance map $\pmb{\cal M}_0^t$ acting on the perturbation $\pmb{A}$ by $\pmb{A}(t)=\pmb{\cal M}_0^t(\pmb{A}(0))$. We adopt the approach of Swift and\ Wiesenfeld (1984) and split the time interval from $0$ to $T$ into two stages using the symmetry property of the underlying pulsating waves. Specifically, since $\kappa_x\pmb{A}(t)$ satisfies ${{\rm d} (\kappa_x \pmb{A})\over{\rm d} t} =\pmb{\cal G}(\kappa_x\pmb{A},\kappa_x\pmb{U}_0;\mu)$ and $\kappa_x\pmb{U}_0(x,z,t)=\pmb{U}_0(x,z,t+{T\over 2})$, we have $\kappa_x \pmb{A}(t)=\pmb{\cal M}_{T/2}^{t+T/2}(\kappa_x\pmb{A}(0))$; hence $$ \kappa_x\pmb{\cal M}_0^t = \pmb{\cal M}_{T/2}^{T/2+t}\kappa_x.\eqno(\AMReqno) $$ Advancing the perturbation by a time~$t$ starting from time~$0$ and then reflecting the whole system is equivalent to reflecting the whole system then advancing by a time~$t$ starting from time~$\hbox{$1\over2$} T$ (see \FigureTimeAdvanceSymmetry). 
It follows immediately that the full period map $\pmb{\cal M}_0^T$ can be written as the second iterate of a map ${\widetilde{\pmb{\cal G}}}$: $$ \pmb{\cal M}_0^T=\pmb{\cal M}_{T/2}^T\kappa_x^2\pmb{\cal M}_0^{T/2} =\left(\kappa_x\pmb{\cal M}_0^{T/2}\right)^2\equiv{\widetilde{\pmb{\cal G}}}^2. \eqno(\AMReqno) $$ Rather than consider the full period map~$\pmb{\cal M}_0^T$, we will consider the map ${\widetilde{\pmb{\cal G}}}\equiv\kappa_x\pmb{\cal M}_0^{T/2}$. The map~${\widetilde{\pmb{\cal G}}}$ has no special property under reflections, but it commutes with translations $\tau_\lambda$, which leave the underlying pulsating waves invariant: ${\widetilde{\pmb{\cal G}}}\tau_\lambda=\tau_\lambda{\widetilde{\pmb{\cal G}}}$. \addref{refS20} The dynamics of the perturbation is now given by the map~${\widetilde{\pmb{\cal G}}}$: $\pmb{\cal A}_{n+1}={\widetilde{\pmb{\cal G}}}(\pmb{\cal A}_n;\mu)$, where each iterate corresponds to advancing in time by $\hbox{$1\over2$} T$ and reflecting; thus $\pmb{A}(\hbox{$1\over2$} T)=\kappa_x\pmb{\cal A}_1$, starting from $\pmb{\cal A}_0$ at time~$0$. In order to compute the drift~$c_1$ of the solution at time~$\hbox{$1\over2$} T$, we integrate the ${\rm d} c/{\rm d} t$ equation~\thePDEprojected\ for a time $\hbox{$1\over2$} T$, starting at a position~$c_0$ and with initial perturbation~$\pmb{A}(0)=\pmb{\cal A}_0$: $$ c_1=c_0 + \int_0^{T/2}h(\pmb{\cal M}_0^t(\pmb{\cal A}_0),\pmb{U}_0(t);\mu)\,{\rm d} t \equiv c_0 + {\tilde h}(\pmb{\cal A}_0;\mu).\eqno(\AMReqno) $$ Then, after a second half-period, $$\eqalignno{ c_2&=c_1 + \int_{T/2}^T h(\pmb{\cal M}_{T/2}^t(\pmb{A}(\hbox{$1\over2$} T)),\pmb{U}_0(t);\mu)\,{\rm d} t\cr &=c_1 + \int_{T/2}^T h(\pmb{\cal M}_{T/2}^t(\kappa_x\pmb{\cal A}_1), \pmb{U}_0(t);\mu)\,{\rm d} t\cr &=c_1 + \int_{T/2}^T h(\kappa_x\pmb{\cal M}_0^{t-T/2}(\pmb{\cal A}_1), \kappa_x\pmb{U}_0(t-T/2);\mu)\,{\rm d} t\cr &=c_1 - \int_0^{T/2} h(\pmb{\cal M}_0^{t'}(\pmb{\cal A}_1),\pmb{U}_0(t');\mu)\,{\rm d} t'\cr &=c_1 - {\tilde h}(\pmb{\cal A}_1;\mu).&(\AMReqno)\cr} $$ Thus the combined dynamics of the perturbation and translation can be written as $$ \pmb{\cal A}_{n+1}={\widetilde{\pmb{\cal G}}}(\pmb{\cal A}_n;\mu),\qquad c_{n+1}=c_n + (-1)^n{\tilde h}(\pmb{\cal A}_n;\mu). \eqno(\AMReqno)\AMReqlabel{theCombinedMap} $$ Since the unperturbed PW is a non-drifting solution of the problem at $\mu=0$ we have ${\widetilde{\pmb{\cal G}}}(0;0)=0$ and ${\tilde h}(0;0)=0$. Moreover, the spatial periodicity of $\pmb{U}_0$ places some symmetry restrictions on ${\widetilde{\pmb{\cal G}}}$ and ${\tilde h}$; specifically, ${\widetilde{\pmb{\cal G}}}(\tau_\lambda\pmb{\cal A}; \mu)=\tau_\lambda{\widetilde{\pmb{\cal G}}}(\pmb{\cal A};\mu)$ and ${\tilde h}(\tau_\lambda\pmb{\cal A};\mu)={\tilde h}(\pmb{\cal A};\mu)$. We turn now to the codimension-one bifurcations of the PW, which are the trivial fixed points $\pmb{\cal A}=0$, $c=c_0$ of \theCombinedMap\ when $\mu=0$. The map~\theCombinedMap\ always has one Floquet multiplier (FM) equal to one because of the translation invariance of the $c$~part of the map. Bifurcations occur when a FM of the linearization of ${\widetilde{\pmb{\cal G}}}$ crosses the unit circle: either a $\hbox{FM}=1$, or a $\hbox{FM}=-1$, or there is a pair of complex conjugate FMs with unit modulus. Because we have assumed periodic boundary conditions in the original PDEs, we expect the spectrum of the linearization to be discrete and the centre manifold theorem for maps to apply. 
(See Chossat and\ Iooss (1994) for a discussion of the centre manifold reduction in the similar problem of bifurcations from Taylor vortices.) Let $\zeta$ be the eigenfunction associated with the critical \hbox{FM}, so that on the centre manifold, we can write $$ \pmb{\cal A}_n = a_n\zeta + \Phi(a_n),\eqno(\AMReqno) $$ where $\Phi$ is the graph of the centre manifold. The unfolded dynamics takes the form $$ a_{n+1}={\hat g}(a_n;\mu),\qquad c_{n+1}=c_n + (-1)^n{\hat h}(a_n;\mu), \eqno(\AMReqno)\AMReqlabel{theCMMap} $$ where ${\hat g}$ and ${\hat h}$ are the maps ${\widetilde{\pmb{\cal G}}}$ and ${\tilde h}$ reduced to the centre manifold; ${\hat g}$ and ${\hat h}$ share the same symmetry properties as ${\widetilde{\pmb{\cal G}}}$ and~${\tilde h}$. \addref{refC72} In this paper, we only consider the case where $\tau_\lambda$ acts trivially. We therefore expect only generic bifurcations in the map ${\hat g}$: saddle-node when $\hbox{FM}=1$, period-doubling when $\hbox{FM}=-1$ and Hopf when there are a pair of complex FMs. The FMs for the full period map $\pmb{\cal M}_0^T$, which are the squares of the FMs of ${\hat g}$, will generically be either one or come in complex conjugate pairs. In particular, we do not expect $\pmb{\cal M}_0^T$ to have a $\hbox{FM}=-1$; this mechanism for suppressing period-doubling bifurcations was discussed by Swift and\ Wiesenfeld (1984). \addref{refS20} Here we consider only the cases where ${\hat g}$ has a $\hbox{FM}=+1$ or~$-1$. The normal form in the case $\hbox{FM}=1$ is: $$ a_{n+1}=\mu + a_n - a_n^2,\qquad c_{n+1}=c_n + (-1)^n{\hat h}(a_n;\mu),\eqno(\AMReqno) $$ to within a rescaling and a change of sign. The parameter $\mu$ is zero at the bifurcation point, and the fixed points of the $a$~part of the map are $a=\pm\sqrt{\mu}$ when $\mu$~is positive. The spatial translations are $$ c_0,\quad c_1=c_0 + {\hat h}(a;\mu),\quad c_2=c_0,\quad \dots\eqno(\AMReqno) $$ We therefore have a $c_0$-parameterized family of solutions that vanish, in pairs, as $\mu$ is decreased through $\mu=0$. We interpret this bifurcation by considering the solutions with $c_0=-{1\over 2}{\hat h}(a;\mu)$, $a=\pm\sqrt{\mu}$. In this case, we have a pair of pulsating wave solutions, translated with respect to the original PW by $\hbox{$1\over2$}{\hat h}(\pm\sqrt{\mu};\mu)$, which collide in a saddle-node bifurcation at $\mu=0$. The remainder of the family of solutions is obtained by translating this pair. The case $\hbox{FM}=-1$ is more interesting. The normal form in the supercritical case is $$ a_{n+1}=(-1+\mu)a_n - a_n^3,\qquad c_{n+1}=c_n + (-1)^n{\hat h}(a_n;\mu),\eqno(\AMReqno) $$ with a fixed point $a=0$ and a period-two orbit $a_n=(-1)^n\sqrt{\mu}$. The dynamics of the spatial translations are $$\eqalign{ c_0,\quad &c_1=c_0 + {\hat h}(a_0;\mu),\quad c_2=c_0 + {\hat h}(a_0;\mu) - {\hat h}(-a_0;\mu),\quad\cr &c_3=c_0 + 2{\hat h}(a_0;\mu) - {\hat h}(-a_0;\mu),\quad \dots\cr}\eqno(\AMReqno) $$ Since ${\hat h}(0;0)=0$, and generically ${\partial {\hat h}\over\partial a}(0;0)\ne 0$, ${\hat h}(a_0;\mu)$ and ${\hat h}(-a_0;\mu)$ have opposite sign for small $\mu$; this represents a symmetry-breaking bifurcation that leads to a solution that drifts along the group orbit of the PW. 
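The drift mechanism can be made explicit in a few lines of iteration (our sketch, not from the paper): taking an assumed generic linear drift map ${\hat h}(a;\mu)=h_1 a$ and iterating the $\hbox{FM}=-1$ normal form above on its period-two orbit shows $c$ advancing by $2h_1\sqrt{\mu}$ every full period.

\begin{verbatim}
# Sketch: iterate the FM = -1 normal form with an assumed generic
# linear drift map hhat(a) = h1*a; mu and h1 are illustrative values.
import numpy as np

mu, h1 = 0.04, 0.3
hhat = lambda a: h1 * a            # generic: d(hhat)/da (0;0) != 0

a, c = np.sqrt(mu), 0.0            # start on the period-two orbit
for n in range(6):
    c += (-1) ** n * hhat(a)
    a = (-1 + mu) * a - a ** 3
    print(n + 1, round(a, 4), round(c, 4))
# c grows by 2*h1*sqrt(mu) every two iterates (one full period T):
# the bifurcating solution drifts along the group orbit, as claimed.
\end{verbatim}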
The main points of interest in this section are the approach that we have taken in analysing the instabilities of the group orbit of the spatio-temporally symmetric periodic orbit, and the observation that an instability of the pulsating wave with $\hbox{FM}=1$ in the full-period map can lead to drifting solutions or not. Whether solutions drift can only be determined by examining the half-period map. In the next two sections, we apply our method to three-dimensional alternating pulsating waves and to alternating rolls, the latter having spatial as well as spatio-temporal symmetries. \AMRsection{Three dimensions: alternating pulsating waves} Alternating pulsating waves (APW) are the simplest three-dimensional analogue of the pulsating waves discussed in the previous section. These periodic oscillations have been observed in numerical simulations of three-dimensional compressible magnetoconvection with periodic boundary conditions in the two horizontal directions (Matthews \etal 1995). They appear either after a series of global bifurcations (Rucklidge and\ Matthews 1995; Matthews \etal 1996) or in a Hopf bifurcation from convection in a square pattern (Rucklidge 1997), and are invariant under the combined operation of advancing one quarter period in time and rotating $90^\circ$ in space. \addref{refM42} \addref{refR29} \addref{refR37} \addref{refM66} The full symmetry group of the problem is the semi-direct product of the $D_4$~symmetry group of the square lattice and a two-torus~$T^2$ of translations in the two horizontal directions, $x$ and~$y$. $D_4$ is generated by a reflection~$\kappa_x$ and a clockwise rotation by $90^\circ$,~$\rho$; these generators and the translations~$\tau_{a,b}$ act as $$\eqalign{ &\kappa_x\colon (x,y)\to(-x,y),\qquad \rho\colon (x,y)\to(y,-x),\cr &\tau_{a,b}\colon (x,y)\to (x+a\pmod\ell,\ y+b\pmod\ell),}\eqno(\AMReqno) $$ where $\rho\tau_{a,b}=\tau_{b,-a}\rho$. As before, we assume that at $\mu=0$, we have a known APW solution $\pmb{U}_{0,0}(x,y,z,t)$ with spatial period~$\lambda$ in each direction and temporal period~$T$; then $\pmb{U}_{0,0}$ satisfies: $$\eqalign{ \pmb{U}_{0,0}(x,y,z,t)&= \rho \pmb{U}_{0,0}(x,y,z,t+\hbox{$1\over4$} T)= \pmb{U}_{0,0}(x,y,z,t+T)\cr &=\tau_{\lambda,0} \pmb{U}_{0,0}(x,y,z,t)= \tau_{0,\lambda} \pmb{U}_{0,0}(x,y,z,t).}\eqno(\AMReqno) $$ There is a two-parameter continuous group orbit of APWs generated by translations: $\pmb{U}_{a,b}=\tau_{a,b}\pmb{U}_{0,0}$. We expand about this group orbit: $$ \pmb{U}(x,y,z,t)=\tau_{c_x(t),c_y(t)}(\pmb{U}_{0,0}(x,y,z,t) + \pmb{A}(x,y,z,t)), \eqno(\AMReqno)\AMReqlabel{theExpansionThreeD} $$ where $(c_x,c_y)$ is a time-dependent translation around the group orbit and $\pmb{A}$ is the perturbation orthogonal to the tangent plane to the group orbit. As before, we separate the evolution of the translations from that of the perturbation: $$ {{\rm d} \pmb{A}\over{\rm d} t} = \pmb{\cal G}(\pmb{A},\pmb{U}_{0,0};\mu),\quad {{\rm d} c_x\over{\rm d} t} = h_x(\pmb{A},\pmb{U}_{0,0};\mu),\quad {{\rm d} c_y\over{\rm d} t} = h_y(\pmb{A},\pmb{U}_{0,0};\mu), \eqno(\AMReqno)\AMReqlabel{thePDEprojectedThreeD} $$ where we keep track of the explicit time-dependence of $\pmb{\cal G}$, $h_x$, and $h_y$ through the argument $\pmb{U}_{0,0}$.
The spatio-temporal symmetry of the basic state $\pmb{U}_{0,0}$ is manifest in $\pmb{\cal G}$, $h_x$ and $h_y$ as follows: $$\eqalign{ \pmb{\cal G}(\rho\pmb{A},\rho\pmb{U}_{0,0};\mu)&=\rho\pmb{\cal G}(\pmb{A},\pmb{U}_{0,0};\mu),\cr h_x(\rho\pmb{A},\rho\pmb{U}_{0,0};\mu)&= h_y(\pmb{A},\pmb{U}_{0,0};\mu),\cr h_y(\rho\pmb{A},\rho\pmb{U}_{0,0};\mu)&= -h_x(\pmb{A},\pmb{U}_{0,0};\mu),\cr }\eqno(\AMReqno) $$ where $\rho\pmb{U}_{0,0}(t+{1\over 4}T)=\pmb{U}_{0,0}(t)$. It is convenient to introduce a complex translation $c\equiv c_x+\ii c_y$ and a corresponding $h\equiv h_x+\ii h_y$, so $\rho\tau_c=\tau_{-\ii c}\rho$. As before, we define a time advance map acting on the perturbation so $\pmb{A}(t)=\pmb{\cal M}_0^t(\pmb{A}(0))$; this has the property $$ \pmb{\cal M}_0^t\rho =\rho \pmb{\cal M}_{T/4}^{T/4+t},\qquad \pmb{\cal M}_0^t\rho^2=\rho^2\pmb{\cal M}_{T/2}^{T/2+t},\qquad \pmb{\cal M}_0^t\rho^3=\rho^3\pmb{\cal M}_{3T/4}^{3T/4+t}\eqno(\AMReqno) $$ because of the underlying spatio-temporal symmetry of the \hbox{APW}. The full period map $\pmb{\cal M}_0^T$ is then the fourth iterate of a map ${\widetilde{\pmb{\cal G}}}$: $$ \pmb{\cal M}_0^T=\rho^4\pmb{\cal M}_{3T/4}^T\pmb{\cal M}_0^{3T/4}=\rho\pmb{\cal M}_0^{T/4}\rho^3\pmb{\cal M}_0^{3T/4} =\left(\rho\pmb{\cal M}_0^{T/4}\right)^4\equiv{\widetilde{\pmb{\cal G}}}^4. \eqno(\AMReqno)\AMReqlabel{thebGtdefinition} $$ Instead of $\pmb{\cal M}_0^T$, we consider ${\widetilde{\pmb{\cal G}}}\equiv\rho\pmb{\cal M}_0^{T/4}$, which has no special properties under reflections and rotations, but does commute with $\tau_{\lambda,0}$ and~$\tau_{0,\lambda}$, which leave the underlying APW invariant. The dynamics of the perturbation is given by $\pmb{\cal A}_{n+1}={\widetilde{\pmb{\cal G}}}(\pmb{\cal A}_n)$, where $\pmb{A}(\hbox{$1\over4$} T)=\rho^3\pmb{\cal A}_1$, etc. Then $$ c_1=c_0 + \int_0^{T/4}h(\pmb{\cal M}_0^t(\pmb{\cal A}_0),\pmb{U}_{0,0}(t);\mu)\,{\rm d} t \equiv c_0 + {\tilde h}(\pmb{\cal A}_0;\mu),\eqno(\AMReqno) $$ where the map ${\tilde h}={\tilde h_x}+\ii{\tilde h_y}$ is invariant under the translations $\tau_{\lambda,0}$ and~$\tau_{0,\lambda}$. After the next quarter period, we find $$\eqalignno{ c_2&=c_1 + \int_{T/4}^{T/2} h(\pmb{\cal M}_{T/4}^t(\pmb{A}(\hbox{$1\over4$} T)),\pmb{U}_{0,0}(t);\mu)\,{\rm d} t\cr &=c_1 + \int_{T/4}^{T/2} h(\pmb{\cal M}_{T/4}^t(\rho^3\pmb{\cal A}_1),\pmb{U}_{0,0}(t);\mu)\,{\rm d} t\cr &=c_1 + \int_{T/4}^{T/2} h(\rho^3\pmb{\cal M}_0^{t-T/4}(\pmb{\cal A}_1),\rho^3\pmb{U}_{0,0}(t-T/4);\mu)\,{\rm d} t\cr &=c_1 + \ii{\tilde h}(\pmb{\cal A}_1;\mu).&(\AMReqno)\cr} $$ So the combined dynamics of the perturbation and the translation can be written as $$ \pmb{\cal A}_{n+1}={\widetilde{\pmb{\cal G}}}(\pmb{\cal A}_n;\mu),\qquad c_{n+1}=c_n + \ii^n{\tilde h}(\pmb{\cal A}_n;\mu),\eqno(\AMReqno) \AMReqlabel{theCombinedMapThreeD} $$ where ${\widetilde{\pmb{\cal G}}}(0;0)={\tilde h}(0;0)=0$. We consider the bifurcations of~\theCombinedMapThreeD\ only in the case where $\tau_{\lambda,0}$ and~$\tau_{0,\lambda}$ act trivially. Note that, as in the case of pulsating waves, the generic bifurcations of APW are either steady state ($\hbox{FM}=+1$) or Hopf, since $\pmb{\cal M}_0^T={\widetilde{\pmb{\cal G}}}^4$. We consider bifurcations with $\hbox{FM}=+1$ of $\pmb{\cal M}_0^T$ only; generically, these occur when the linearization of ${\widetilde{\pmb{\cal G}}}$ has a FM of $+1$ or~$-1$. 
Near a bifurcation point we reduce the dynamics onto the centre manifold $$ a_{n+1}={\hat g}(a_n;\mu),\qquad c_{n+1}=c_n + \ii^n{\hat h}(a_n;\mu).\eqno(\AMReqno)\AMReqlabel{theCMMapThreeD} $$ When a $\hbox{FM}=1$, once again we have a saddle-node bifurcation, this time involving pairs of APWs that are translated relative to each other. If a FM is $-1$, we have $a_n=(-1)^n\sqrt{\mu}$, and the spatial translations are: $$\vcenter{\openup1\jot\halign{% $\hfil#$\quad&$\hfil#$&${}#$\hfil&\quad$\hfil#$&${}#$\hfil\cr c_0,& c_1&=c_0 + {\hat h}(a_0;\mu), &c_2&=c_0 + {\hat h}(a_0;\mu) +\ii {\hat h}(-a_0;\mu),\cr &c_3&=c_0 + \ii {\hat h}(-a_0;\mu), &c_4&=c_0, \quad\dots\cr}}\eqno(\AMReqno) $$ This solution has no net drift (unlike in the two-dimensional problem), but travels back and forth different amounts in the two horizontal directions since, generically, ${\hat h}_x(a_0;\mu)\ne{\hat h}_y(a_0;\mu)$. The solution remains invariant under advance of half its period in time combined with a rotation of~$180^\circ$. To see this, we construct the solution $\pmb{U}(x,y,z,t)$ at $t=0$ and $t=\hbox{$1\over2$} T$ using the solution in the $c_0$-parameterized family that satisfies $c_0=-c_2$. Specifically, we insert the centre manifold solution $\pmb{A}(0)=\pmb{\cal A}_0=a_0\zeta +\Phi(a_0)$, $\pmb{A}(\hbox{$1\over2$} T)=\rho^2\pmb{\cal A}_2=\rho^2(a_0\zeta+\Phi(a_0))$ in~\theExpansionThreeD. We obtain $$\eqalignno{ \pmb{U}(0)&=\tau_{c_0}(\pmb{U}_{0,0}(0)+a_0\zeta+\Phi(a_0))\cr \pmb{U}(\hbox{$1\over2$} T)&=\tau_{-c_0}(\pmb{U}_{0,0}(\hbox{$1\over2$} T)+\rho^2 a_0\zeta+\rho^2\Phi(a_0))\cr &=\tau_{-c_0}\rho^2(\pmb{U}_{0,0}(0)+a_0\zeta+\Phi(a_0))\cr &=\rho^2\pmb{U}(0),&(\AMReqno)\cr} $$ where we have suppressed the $(x,y,z)$-dependence of $\pmb{U}$, retaining only its $t$-dependence. Thus, in the simple case of APW, we cannot get drifting solutions in a bifurcation with $\hbox{FM}=1$ for the time-$T$ return map. We next consider the same bifurcation for the more complicated example of alternating rolls. This solution has the same spatio-temporal symmetry as APW but has extra spatial reflection symmetries. We shall see that in this case a particular symmetry-breaking bifurcation leads to two distinct types of drifting solutions. \AMRsection{Additional spatial symmetries: alternating rolls} Alternating rolls (AR) are created in a primary Hopf bifurcation from a $D_4\AMRsemidirectprod T^2$ invariant trivial solution (Silber and\ Knobloch 1991). Like alternating pulsating waves, alternating rolls are invariant under the spatio-temporal symmetry of advancing one-quarter period in time and rotating $90^\circ$ in space, but have the additional property of being invariant under reflections in two orthogonal vertical planes. Alternating rolls have been observed in three-dimensional incompressible and compressible magnetoconvection (Clune and\ Knobloch 1994; Matthews \etal 1995). \addref{refS40} \addref{refM42} \addref{refC49} For convenience in this section, we define ${\tilde\rho}$ to be the combined advance of one quarter period in time followed by a $90^\circ$ clockwise rotation about the line $(x,y)=(0,0)$. 
Reflecting in the planes $x=\hbox{$1\over4$}\lambda$ or $y=\hbox{$1\over4$}\lambda$ leaves alternating rolls unchanged at all times, so the sixteen-element group that leaves AR invariant is generated by ${\kappa_x'}$, ${\kappa_y'}$ and ${\tilde\rho}$, where $$\eqalign{ {\kappa_x'}&\colon (x,y,z,t)\to(\hbox{$1\over2$}\lambda-x,y,z,t),\cr {\kappa_y'}&\colon (x,y,z,t)\to(x,\hbox{$1\over2$}\lambda-y,z,t),\cr {\tilde\rho}&\colon (x,y,z,t)\to(y,-x,z,t+\hbox{$1\over4$} T).\cr}\eqno(\AMReqno) $$ The basic AR solution $\pmb{U}_{0,0}(x,y,z,t)$ exists at $\mu=0$ and satisfies $$\eqalign{ \pmb{U}_{0,0}(x,y,z,t)&= \rho \pmb{U}_{0,0}(x,y,z,t+\hbox{$1\over4$} T)= \pmb{U}_{0,0}(x,y,z,t+T)\cr &={\kappa_x'} \pmb{U}_{0,0}(x,y,z,t) = {\kappa_y'} \pmb{U}_{0,0}(x,y,z,t)\cr &=\tau_{\lambda,0} \pmb{U}_{0,0}(x,y,z,t)= \tau_{0,\lambda} \pmb{U}_{0,0}(x,y,z,t).}\eqno(\AMReqno) $$ As in section~3, we expand about this basic solution and recover the map~\theCombinedMapThreeD. The presence of extra reflection symmetries of the underlying solution manifests itself in the following way: $$\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr {\widetilde{\pmb{\cal G}}}({\kappa_x'}\pmb{\cal A}) &={\kappa_y'}{\widetilde{\pmb{\cal G}}}(\pmb{\cal A}), &{\widetilde{\pmb{\cal G}}}({\kappa_y'}\pmb{\cal A})&={\kappa_x'}{\widetilde{\pmb{\cal G}}}(\pmb{\cal A}),\cr {\tilde h_x}({\kappa_x'}\pmb{\cal A})&=-{\tilde h_x}(\pmb{\cal A}), &{\tilde h_x}({\kappa_y'}\pmb{\cal A})&={\tilde h_x}(\pmb{\cal A}),\cr {\tilde h_y}({\kappa_x'}\pmb{\cal A})&={\tilde h_y}(\pmb{\cal A}), &{\tilde h_y}({\kappa_y'}\pmb{\cal A})&=-{\tilde h_y}(\pmb{\cal A}).\cr }}\eqno(\AMReqno)\AMReqlabel{theARSymmetryAction} $$ Note that the rotation in the definition of ${\widetilde{\pmb{\cal G}}}$ \thebGtdefinition\ implies that reflecting with ${\kappa_x'}$ then applying ${\widetilde{\pmb{\cal G}}}$ is equivalent to applying ${\widetilde{\pmb{\cal G}}}$ then reflecting with~${\kappa_y'}$, since $\rho{\kappa_x'}={\kappa_y'}\rho$. In the terminology of Lamb and\ Quispel (1994), ${\kappa_x'}$ and~${\kappa_y'}$ are 2-symmetries of ${\widetilde{\pmb{\cal G}}}$, that is, ${\widetilde{\pmb{\cal G}}}^2({\kappa_x'}\pmb{\cal A})={\kappa_x'}{\widetilde{\pmb{\cal G}}}^2(\pmb{\cal A})$. In general, $k$-symmetries arise when the spatial part of the spatio-temporal symmetry of a time-periodic solution does not commute with its purely spatial symmetries (Lamb 1997). \addref{refL43} \addref{refL42} \topinsert \table{Summary of six types of bifurcations of alternating rolls, distinguished by the action of ${\kappa_x'}$ and ${\kappa_y'}$ on the critical modes, and by the critical Floquet multipliers of ${\widetilde{\pmb{\cal G}}}$. 
Isotropy subgroups (up to conjugacy) of the bifurcating solution branches are indicated, along with their order; in cases ${\rm B}+$\ and ${\rm B}-$, there are two distinct solution branches.} \line{\hfil\vbox{\hrule\smallskip \halign{ \quad#\hfil\quad&% #\hfil\quad&#\hfil\quad&#\hfil\quad &#\hfil\quad\cr Case&Action of ${\kappa_x'}$, ${\kappa_y'}$ on&Floquet&Bifurcation&Isotropy \cr &marginal modes&multiplier(s)& (drift or not) &subgroup (order)\cr \noalign{\smallskip} \noalign{\hrule} \noalign{\smallskip} ${\rm A}+$($+1$)&${\kappa_x'}{\kappa_y'}\zeta=\zeta$&$\hbox{FM}=+1$&Saddle-node&$\langle{\kappa_x'},{\kappa_y'},{\tilde\rho}\rangle$\hfill(16)\cr &${\kappa_x'}\zeta={\kappa_y'}\zeta=\zeta$& &(no drift)&\cr \noalign{\medskip} ${\rm A}+$($-1$)&as ${\rm A}+$($+1$)&$\hbox{FM}=-1$&Symmetry-breaking&$\langle{\kappa_x'},{\kappa_y'},{\tilde\rho}^2\rangle$\hfill(8)\cr & & &(no drift)&\cr \noalign{\medskip} ${\rm A}-$($+1$)&${\kappa_x'}{\kappa_y'}\zeta=\zeta$&$\hbox{FM}=+1$&Symmetry-breaking&$\langle{\kappa_x'}{\kappa_y'},{\tilde\rho}\rangle$\hfill(8)\cr &${\kappa_x'}\zeta={\kappa_y'}\zeta=-\zeta$& &(no drift)&\cr \noalign{\medskip} ${\rm A}-$($-1$)&as ${\rm A}-$($+1$)&$\hbox{FM}=-1$&Symmetry-breaking&$\langle{\kappa_x'}{\kappa_y'},{\kappa_x'}{\tilde\rho}\rangle$\hfill(8)\cr & & &(no drift)&\cr \noalign{\medskip} ${\rm B}+$&${\kappa_x'}{\kappa_y'}\zeta_\pm=-\zeta_\pm$&$\hbox{FM}=\pm1$&Symmetry-breaking&$\langle{\kappa_x'},{\tilde\rho}^2\rangle$\hfill(4)\cr &$\zeta_-={\kappa_x'}\zeta_+=-{\kappa_y'}\zeta_+$& &(no net drift)&$\langle{\tilde\rho}\rangle$\hfill(4)\cr \noalign{\medskip} ${\rm B}-$&as ${\rm B}+$&$\hbox{FM}=\pm\ii$&Symmetry-breaking&$\langle{\kappa_y'}\rangle$\hfill(2)\cr & & &(drift)&Trivial\hfill(1)\cr \noalign{\smallskip} }\smallskip\hrule}\hfil} \endinsert \AMRtablelabel{TableInstabilities} The remainder of this section is devoted to the discussion of the codimension-one steady bifurcations of this problem. We do not consider bifurcations that break the spatial periodicity, so $\tau_{\lambda,0}$ and $\tau_{0,\lambda}$ act trivially, nor do we consider Hopf bifurcations. The results are summarised in \TableInstabilities. We begin by noting that ${\widetilde{\pmb{\cal G}}}({\kappa_x'}{\kappa_y'}\pmb{\cal A})={\kappa_x'}{\kappa_y'}{\widetilde{\pmb{\cal G}}}(\pmb{\cal A})$, so ${\kappa_x'}{\kappa_y'}$ commutes with the linearisation~${\widetilde{\pmb{\cal L}}}$ of~${\widetilde{\pmb{\cal G}}}$, whereas ${\kappa_x'}{\widetilde{\pmb{\cal L}}}={\widetilde{\pmb{\cal L}}}{\kappa_y'}$. The eigenspaces of ${\widetilde{\pmb{\cal L}}}$ are invariant under the reflection ${\kappa_x'}{\kappa_y'}$. We assume the generic situation of one-dimensional eigenspaces; then each eigenfunction $\zeta$ must be either even or odd under the reflection ${\kappa_x'}{\kappa_y'}$, i.e., ${\kappa_x'}{\kappa_y'}\zeta=\zeta$ (case~A) or ${\kappa_x'}{\kappa_y'}\zeta=-\zeta$ (case~B), since $({\kappa_x'}{\kappa_y'})^2$ is the identity. In case~A, if $\zeta={\kappa_x'}{\kappa_y'}\zeta$ is an eigenfunction of ${\widetilde{\pmb{\cal L}}}$ with FM~$s$, then ${\kappa_x'}\zeta={\kappa_y'}\zeta$ has the same FM: $$ {\widetilde{\pmb{\cal L}}}{\kappa_x'}\zeta={\kappa_y'}{\widetilde{\pmb{\cal L}}}\zeta=s{\kappa_y'}\zeta=s{\kappa_x'}\zeta. \eqno(\AMReqno) $$ Therefore, $\zeta$ and ${\kappa_x'}\zeta$ are linearly dependent; moreover, ${\kappa_x'}^2$ is the identity, so either ${\kappa_x'}\zeta=\zeta$ (case~${\rm A}+$) or ${\kappa_x'}\zeta=-\zeta$ (case~${\rm A}-$).
Finally, these two cases are subdivided according to the value of the critical Floquet multiplier of ${\widetilde{\pmb{\cal L}}}$ (either $+1$ or~$-1$) at the bifurcation point. Case~B is rather different. Here we have ${\kappa_x'}\zeta=-{\kappa_y'}\zeta$, so $$ {\widetilde{\pmb{\cal L}}}{\kappa_x'}\zeta={\kappa_y'}{\widetilde{\pmb{\cal L}}}\zeta=s{\kappa_y'}\zeta=-s{\kappa_x'}\zeta. \eqno(\AMReqno) $$ Thus ${\kappa_x'}\zeta$ has $\hbox{FM}=-s$ and is linearly independent of $\zeta$, which has $\hbox{FM}=s$. We define $\zeta_+$ to be the eigenfunction with FM~$s$ and $\zeta_-$ to be the eigenfunction with FM~$-s$, with $\zeta_-={\kappa_x'}\zeta_+=-{\kappa_y'}\zeta_+$. There are two ways in which two Floquet multipliers $s$ and $-s$ can cross the unit circle: either at $+1$ and~$-1$ (case~${\rm B}+$) or at $+\ii$ and $-\ii$ (case~${\rm B}-$). Note that in the absence of the reflection symmetries these bifurcations would be codimension-two; here they occur as generic bifurcations. Since the FMs of the time-$T$ map $\pmb{\cal M}_0^T$ are the fourth powers of the FMs of~${\widetilde{\pmb{\cal G}}}$, the effect of the symmetry in case~B is to force a repeated $\hbox{FM}=+1$ in the map $\pmb{\cal M}_0^T$. In case~A, we write $$ \pmb{\cal A}_n = a_n\zeta + \Phi(a_n), \eqno(\AMReqno) $$ near the bifurcation point, where $\Phi$ is the graph of the centre manifold. On the centre manifold we have $\pmb{\cal A}={\kappa_x'}{\kappa_y'}\pmb{\cal A}$, so $$ {\tilde h_x}(\pmb{\cal A})={\tilde h_x}({\kappa_x'}{\kappa_y'}\pmb{\cal A})=-{\tilde h_x}({\kappa_y'}\pmb{\cal A})=-{\tilde h_x}(\pmb{\cal A})=0, \eqno(\AMReqno) $$ where we have used~\theARSymmetryAction. Thus in case~A, ${\tilde h_x}$ and ${\tilde h_y}$ are identically zero, and no bifurcation will lead to drift along the group orbit of alternating rolls. The reflections ${\kappa_x'}$ and ${\kappa_y'}$ act trivially in case~${\rm A}+$. An $\hbox{FM}=+1$ leads to a saddle-node bifurcation of alternating rolls. The normal form in the case $\hbox{FM}=-1$ gives $a_n=(-1)^na_0$, from which the bifurcating solution $\pmb{U}(t)$ can be reconstructed. Choosing the initial translation $c_0$ to be zero, and suppressing the $(x,y,z)$-dependence of $\pmb{U}$, we have $$\fl\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \pmb{U}_{0,0}(0) + a_0\zeta+\Phi(a_0), &\pmb{U}(\hbox{$1\over4$} T) &= \rho^3(\pmb{U}_{0,0}(0) - a_0\zeta+\Phi(-a_0)), \cr \pmb{U}(\hbox{$1\over2$} T) &= \rho^2(\pmb{U}_{0,0}(0) + a_0\zeta+\Phi(a_0)), &\pmb{U}(\hbox{$3\over4$} T)&= \rho (\pmb{U}_{0,0}(0) - a_0\zeta+\Phi(-a_0)). \cr }}\eqno(\AMReqno) $$ Here it should be recalled that $\pmb{U}_{0,0}(\hbox{$1\over4$} T)=\rho^3\pmb{U}_{0,0}(0)$, and that on the centre manifold $$ \pmb{\cal A}(\hbox{$1\over4$} T)=\rho^3\pmb{\cal A}_1 =\rho^3(a_1\zeta+\Phi(a_1))=\rho^3(-a_0\zeta+\Phi(-a_0)). \eqno(\AMReqno) $$ This solution satisfies $$ \pmb{U}(t)={\kappa_x'}\pmb{U}(t)={\kappa_y'}\pmb{U}(t)=\rho^2\pmb{U}(t+\hbox{$1\over2$} T), \eqno(\AMReqno) $$ and thus has the same symmetries as ``standing cross-rolls'', described by Silber and\ Knobloch (1991). \addref{refS40} In case ${\rm A}-$, ${\kappa_x'}$ and ${\kappa_y'}$ act nontrivially, so the behaviour on the centre manifold is governed by a pitchfork normal form ($a_n=a_0$) when the $\hbox{FM}=+1$ and by a period-doubling normal form ($a_n=(-1)^na_0$) when the $\hbox{FM}=-1$.
At leading order in $a_0$, the bifurcating solutions $\pmb{U}(t)$ in the two cases are $$\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \pmb{U}_{0,0}(0) + a_0\zeta, &\pmb{U}(\hbox{$1\over4$} T) &= \rho^3(\pmb{U}_{0,0}(0) \pm a_0\zeta), \cr \pmb{U}(\hbox{$1\over2$} T) &= \rho^2(\pmb{U}_{0,0}(0) + a_0\zeta), &\pmb{U}(\hbox{$3\over4$} T)&= \rho (\pmb{U}_{0,0}(0) \pm a_0\zeta). \cr }}\eqno(\AMReqno) $$ These solutions are not invariant under ${\kappa_x'}$ or ${\kappa_y'}$ (since these change the sign of~$\zeta$), but are invariant under the product~${\kappa_x'}{\kappa_y'}$. In addition, $\pmb{U}(t)=\rho\pmb{U}(t+\hbox{$1\over4$} T)$ in the case $\hbox{FM}=+1$ and $\pmb{U}(t)={\kappa_x'}\rho\pmb{U}(t+\hbox{$1\over4$} T)$ in the case $\hbox{FM}=-1$. Case~B is more interesting. On the two-dimensional centre manifold, we write $$ \pmb{\cal A}_n = (-a_n+b_n)\zeta_+ + (a_n+b_n)\zeta_- + \Phi(a_n,b_n); \eqno(\AMReqno) $$ the form of this expression is chosen for later convenience. The map~\theCombinedMapThreeD\ reduces to $$\fl (a_{n+1},b_{n+1})={\hat g}(a_n,b_n;\mu),\qquad c_{n+1}=c_n + \ii^n({\hat h}_x(a_n,b_n;\mu)+\ii{\hat h}_y(a_n,b_n;\mu)). \eqno(\AMReqno)\AMReqlabel{theCMMapCaseB} $$ Since $\zeta_-={\kappa_x'}\zeta_+=-{\kappa_y'}\zeta_+$, we have $$\eqalign{ {\kappa_x'}\pmb{\cal A}_n &= ( a_n+b_n)\zeta_+ + (-a_n+b_n)\zeta_- + {\kappa_x'}\Phi(a_n,b_n),\cr {\kappa_y'}\pmb{\cal A}_n &= (-a_n-b_n)\zeta_+ + ( a_n-b_n)\zeta_- + {\kappa_y'}\Phi(a_n,b_n);\cr }\eqno(\AMReqno) $$ thus $$ {\kappa_x'}(a_n,b_n)=(-a_n,b_n),\qquad {\kappa_y'}(a_n,b_n)=(a_n,-b_n).\eqno(\AMReqno) $$ From this and from \theARSymmetryAction, we deduce that on the centre manifold $$\eqalign{ {\hat h}_x(a_n,b_n)&=-{\hat h}_x(-a_n,b_n)={\hat h}_x(a_n,-b_n)\cr {\hat h}_y(a_n,b_n)&=-{\hat h}_y(a_n,-b_n)={\hat h}_y(-a_n,b_n),\cr}\eqno(\AMReqno) \AMReqlabel{symmetryhhatxy} $$ implying that ${\hat h}_x(0,b;\mu)=0$ and ${\hat h}_y(a,0;\mu)=0$. Moreover, ${\hat g}$ inherits the symmetries \theARSymmetryAction\ of ${\widetilde{\pmb{\cal G}}}$: $$ {\kappa_y'}{\hat g}(a_n,b_n)={\hat g}({\kappa_x'}(a_n,b_n)),\qquad {\kappa_x'}{\hat g} (a_n,b_n)= {\hat g}({\kappa_y'}(a_n,b_n)).\qquad \eqno(\AMReqno) $$ Thus the linearisation ${\hat{\cal L}}$ of ${\hat g}$ satisfies $$ \left(\matrix{1&0\cr0&-1\cr}\right){\hat{\cal L}} ={\hat{\cal L}}\left(\matrix{-1&0\cr0&1\cr}\right), \eqno(\AMReqno) $$ which forces ${\hat{\cal L}}$ to be of the form $$ {\hat{\cal L}}=\left(\matrix{0&\alpha\cr\beta&0\cr}\right), \eqno(\AMReqno) $$ where $a_n$, $b_n$ can be scaled so that $\alpha=1$. There is a bifurcation when $\beta=+1$ or $\beta=-1$, yielding FMs $\pm 1$ (case~${\rm B}+$) or $\pm\ii$ (case~${\rm B}-$), respectively. In order to analyse the dynamics near the bifurcation point, we compute the normal form of the bifurcation problems, expanding~${\hat g}$ as a Taylor series in $a$ and~$b$. The reflection symmetry ${\kappa_x'}{\kappa_y'}$ prohibits quadratic terms, and all but two of the cubic terms can be removed by near-identity transformations. We thus have the unfolded normal form, truncated at cubic order, in the two cases ${\rm B}+$\ and~${\rm B}-$: $$\eqalign{ a_{n+1}&=b_n,\cr b_{n+1}&=\pm(1+\mu)a_n + P a_n^3 + Q a_n b_n^2,\cr c_{n+1}&=c_n + \ii^n({\hat h}_x(a_n,b_n;\mu)+\ii{\hat h}_y(a_n,b_n;\mu)),\cr }\eqno(\AMReqno)\AMReqlabel{theCMMapCaseBNormalForm} $$ where $\mu=0$ at the bifurcation point and $P$ and $Q$ are constants. 
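The truncated normal form~\theCMMapCaseBNormalForm\ is easy to iterate numerically. The following short Python sketch is ours and purely illustrative: the values of $P$, $Q$ and $\mu$ are arbitrary choices, not derived from the convection problem. It confirms the period-two orbit $(a_0,0)\leftrightarrow(0,a_0)$ with $0=\mu+Pa_0^2$ that is analysed below for case ${\rm B}+$.

import math

# Iterate the (a,b) part of the truncated normal form in case B+ :
#   a_{n+1} = b_n,   b_{n+1} = (1+mu)*a_n + P*a_n**3 + Q*a_n*b_n**2
# P, Q, mu are illustrative constants (an assumption, not fitted values).
P, Q, mu = -1.0, 0.5, 0.01
a0 = math.sqrt(-mu / P)        # period-two amplitude: 0 = mu + P*a0**2

def step(a, b):
    return b, (1.0 + mu) * a + P * a**3 + Q * a * b**2

a, b = a0, 0.0                 # start on the predicted orbit (a0, 0)
for n in range(4):
    print(f"n={n}:  a={a:+.6f}  b={b:+.6f}")
    a, b = step(a, b)
# The printout alternates between (a0, 0) and (0, a0), as expected.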
Lamb (1996) deduced the $(a,b)$ part of this normal form for a local bifurcation of a map with a $Z_2\times Z_2$ 2-symmetry group, appropriate to the case under study here. We have chosen our scalings and near-identity transformations to match Lamb's notation. Lamb described the period-one, two and four orbits that are created in the bifurcation at $\mu=0$ and calculated their stability as a function of the constants $P$ and~$Q$; we will interpret those results in terms of bifurcations from, and drift along, the group orbit of alternating rolls. \addref{refL41} In case ${\rm B}+$, there are three types of orbits created. The first is a period-two orbit $(a_0,0)\leftrightarrow(0,a_0)$, with $0=\mu+Pa_0^2$. From this and the symmetries \symmetryhhatxy\ of ${\hat h}$, we deduce the drift of the solution at each iterate: $$\vcenter{\openup1\jot\halign{% $\hfil#$\quad&$\hfil#$&${}#$\hfil&\quad$\hfil#$&${}#$\hfil\cr c_0,& c_1&=c_0 + {\hat h}_x(a_0,0;\mu), &c_2&=c_0 + {\hat h}_x(a_0,0;\mu) - {\hat h}_y(0,a_0;\mu),\cr &c_3&=c_0 - {\hat h}_y(0,a_0;\mu), &c_4&=c_0, \quad\dots\cr}}\eqno(\AMReqno) $$ There is no net drift along the group orbit in this case. Moreover, the $c_0$-parameterized family of solutions drifts to and fro in the $x$~direction only, since $c_n-c_0$ is real. Consider $c_0=\hbox{$1\over2$}(-{\hat h}_x(a_0,0;\mu) + {\hat h}_y(0,a_0;\mu))=- c_2$, where $c_0$ is real and thus corresponds to a translation in the $x$-direction. The reconstructed solution $\pmb{U}(t)$, at leading order in $a_0$, satisfies $$\fl\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \tau_{c_0}(\pmb{U}_{0,0}(0) - a_0\zeta_+ + a_0\zeta_-), &\pmb{U}(\hbox{$1\over4$} T) &= \tau_{c_1}\rho^3(\pmb{U}_{0,0}(0) + a_0\zeta_+ + a_0\zeta_-), \cr \pmb{U}(\hbox{$1\over2$} T) &= \tau_{-c_0}\rho^2(\pmb{U}_{0,0}(0) - a_0\zeta_+ + a_0\zeta_-), &\pmb{U}(\hbox{$3\over4$} T)&= \tau_{-c_1}\rho (\pmb{U}_{0,0}(0) + a_0\zeta_+ + a_0\zeta_-), \cr }}\eqno(\AMReqno) $$ so we have $\pmb{U}(t)={\kappa_y'}\pmb{U}(t)=\rho^2\pmb{U}(t+\hbox{$1\over2$} T)$. The conjugate orbit, $(0,a_0)\leftrightarrow(a_0,0)$, has symmetry $\langle{\kappa_x'},{\tilde\rho}^2\rangle$ and does not drift at all in the $x$~direction. The second and third types of orbit created in case ${\rm B}+$\ are a period-one orbit $(a_0,a_0)$ and a period-two orbit $(a_0,-a_0)\leftrightarrow(-a_0,a_0)$, with $0=\mu+(P+Q)a_0^2$ in both cases. These orbits are mapped to each other by ${\kappa_x'}$ or by ${\kappa_y'}$, so we consider only the orbit with period one. Writing ${\hat h}={\hat h}_x+\ii\,{\hat h}_y$ for brevity, the translations at each iterate are $$\vcenter{\openup1\jot\halign{% $\hfil#$\quad&$\hfil#$&${}#$\hfil&\quad$\hfil#$&${}#$\hfil\cr c_0,& c_1&=c_0 + {\hat h}(a_0,a_0;\mu), &c_2&=c_0 + {\hat h}(a_0,a_0;\mu) + \ii{\hat h}(a_0,a_0;\mu),\cr &c_3&=c_0 + \ii{\hat h}(a_0,a_0;\mu), &c_4&=c_0, \quad\dots\cr}}\eqno(\AMReqno) $$ This orbit also has no net drift, and by choosing $c_0=-\hbox{$1\over2$}(1+\ii) {\hat h}(a_0,a_0;\mu)$, we have $c_1=\ii c_0$.
The reconstructed solution $\pmb{U}(t)$, at leading order in $a_0$, satisfies $$\fl\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \tau_{c_0}(\pmb{U}_{0,0}(0) + 2a_0\zeta_-), &\pmb{U}(\hbox{$1\over4$} T) &= \tau_{\ii c_0}\rho^3(\pmb{U}_{0,0}(0) + 2a_0\zeta_-), \cr \pmb{U}(\hbox{$1\over2$} T) &= \tau_{-c_0}\rho^2(\pmb{U}_{0,0}(0) + 2a_0\zeta_-), &\pmb{U}(\hbox{$3\over4$} T)&= \tau_{-\ii c_0}\rho (\pmb{U}_{0,0}(0) + 2a_0\zeta_-), \cr }}\eqno(\AMReqno) $$ so $\pmb{U}(t)=\rho\pmb{U}(t+\hbox{$1\over4$} T)$, and the isotropy subgroup is $\langle{\tilde\rho}\rangle$. This solution has the same symmetries as the alternating pulsating waves described in section~3, so alternating pulsating waves may be created in a symmetry-breaking bifurcation of alternating rolls. The period-two orbit $(a_0,-a_0)\leftrightarrow(-a_0,a_0)$ has the conjugate isotropy subgroup $\langle{\kappa_x'}{\kappa_y'}{\tilde\rho}\rangle$. Finally, we turn to case~${\rm B}-$. Here, there are two types of periodic orbit created in the bifurcation at $\mu=0$, and in this case they are both of period four. The first orbit is $(a_0,0)\rightarrow(0,-a_0)\rightarrow(-a_0,0)\rightarrow(0,a_0)$, with $0=-\mu+Pa_0^2$ in \theCMMapCaseBNormalForm. The translations are $$\fl\vcenter{\openup1\jot\halign{% $\hfil#$\quad&$\hfil#$&${}#$\hfil&\quad$\hfil#$&${}#$\hfil\cr c_0,& c_1&=c_0 + {\hat h}_x(a_0,0;\mu), &c_2&=c_0 + {\hat h}_x(a_0,0;\mu) + {\hat h}_y(0,a_0;\mu),\cr &c_3&=c_0 +2{\hat h}_x(a_0,0;\mu) + {\hat h}_y(0,a_0;\mu), &c_4&=c_0 +2{\hat h}_x(a_0,0;\mu) +2{\hat h}_y(0,a_0;\mu), \quad\dots\cr}}\eqno(\AMReqno) $$ Note that $c_n-c_0$~is real so there is no drift at all in the $y$~direction, but there is a systematic drift in the $x$~direction. The reconstructed solution $\pmb{U}(t)$ satisfies $$\fl\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \tau_{c_0}(\pmb{U}_{0,0}(0) - a_0\zeta_+ + a_0\zeta_-), &\pmb{U}(\hbox{$1\over4$} T) &= \tau_{c_1}\rho^3(\pmb{U}_{0,0}(0) - a_0\zeta_+ - a_0\zeta_-), \cr \pmb{U}(\hbox{$1\over2$} T) &= \tau_{c_2}\rho^2(\pmb{U}_{0,0}(0) + a_0\zeta_+ - a_0\zeta_-), &\pmb{U}(\hbox{$3\over4$} T)&= \tau_{c_3}\rho (\pmb{U}_{0,0}(0) + a_0\zeta_+ + a_0\zeta_-), \cr }}\eqno(\AMReqno)\AMReqlabel{firstBminus} $$ so we have $\pmb{U}(t)={\kappa_y'}\pmb{U}(t)$. A conjugate orbit, started a quarter period later, has isotropy subgroup $\langle{\kappa_x'}\rangle$ and drifts systematically in the $y$~direction. The second type of orbit created in case ${\rm B}-$\ is $(a_0,a_0)\rightarrow(a_0,-a_0)\rightarrow(-a_0,-a_0)\rightarrow(-a_0,a_0)$, with $0=-\mu+(P+Q)a_0^2$. The translations are $$\eqalign{ c_0,\qquad c_1&=c_0 + {\hat h}_x(a_0,a_0;\mu) + \ii{\hat h}_y(a_0,a_0;\mu),\cr c_2&=c_0 + (1+\ii)({\hat h}_x(a_0,a_0;\mu) + {\hat h}_y(a_0,a_0;\mu)),\cr c_3&=c_0 + (2+\ii){\hat h}_x(a_0,a_0;\mu) + (1+2\ii){\hat h}_y(a_0,a_0;\mu),\cr c_4&=c_0 + (2+2\ii)({\hat h}_x(a_0,a_0;\mu) + {\hat h}_y(a_0,a_0;\mu)), \quad\dots\cr}\eqno(\AMReqno) $$ This corresponds to a solution that drifts along the diagonal, with a wobble from side to side as it goes.
At leading order in $a_0$, the reconstructed solution $\pmb{U}(t)$ satisfies $$\vcenter{\openup1\jot\halign{% $\hfil#$&${}#$\hfil&\qquad$\hfil#$&${}#$\hfil\cr \pmb{U}(0) &= \tau_{c_0}(\pmb{U}_{0,0}(0) + 2a_0\zeta_-), &\pmb{U}(\hbox{$1\over4$} T) &= \tau_{c_1}\rho^3(\pmb{U}_{0,0}(0) - 2a_0\zeta_+), \cr \pmb{U}(\hbox{$1\over2$} T) &= \tau_{c_2}\rho^2(\pmb{U}_{0,0}(0) - 2a_0\zeta_-), &\pmb{U}(\hbox{$3\over4$} T)&= \tau_{c_3}\rho (\pmb{U}_{0,0}(0) + 2a_0\zeta_+), \cr }}\eqno(\AMReqno)\AMReqlabel{secondBminus} $$ which has fully broken the spatial and spatio-temporal symmetries of the underlying alternating rolls solution. We close our discussion of case~${\rm B}-$\ by considering spatio-temporal symmetries of the drifting solutions that are obtained by moving to an appropriate travelling frame. These symmetries are not listed in \TableInstabilities. We first consider the solution \firstBminus\ that drifts in the $x$~direction; it has the following spatio-temporal symmetry $$ \pmb{U}(t)={\kappa_x'}\rho^2\tau_{c_0-c_2}\pmb{U}(t+\hbox{$1\over2$} T). \eqno(\AMReqno)\AMReqlabel{driftsymmetryone} $$ Next we consider the solution \secondBminus\ that drifts along the diagonal; it has spatio-temporal symmetry $$ \pmb{U}(t)={\kappa_y'}\rho\tau_{\ii c_0^*-c_1}\pmb{U}(t+\hbox{$1\over4$} T). \eqno(\AMReqno)\AMReqlabel{driftsymmetrytwo} $$ In summary, we have examined the six different cases in which alternating rolls undergo a bifurcation with $\hbox{FM}=+1$ in the full period map. All six bifurcations preserve the underlying spatial periodicity of the alternating rolls, but may break the spatial and spatio-temporal symmetries. The 2-symmetry present in the B cases forces two Floquet multipliers to cross the unit circle together, and we find two branches of bifurcating solutions, with distinct symmetry properties. It is only in case~${\rm B}-$, with Floquet multipliers $\pm\ii$ in the map ${\widetilde{\pmb{\cal G}}}$, that the bifurcation leads to systematically drifting solutions: one solution drifts along a coordinate axis, while the other drifts along a diagonal. \AMRfigure rsfig02.eps Alternating rolls in three-dimensional compressible magnetoconvection, starting with parameter values from Matthews \etal (1995). The four frames are (approximately) at times (a)~$t=0$, (b)~$t=\hbox{$1\over4$} T$, (c)~$t=\hbox{$1\over2$} T$ and (d)~$t=\hbox{$3\over4$} T$. The frames show contours of the vertical velocity in a horizontal plane in the middle of the layer: solid lines denote fluid travelling upwards, dashed lines denote fluid travelling downwards, and the dotted line denotes zero vertical velocity. The spatial symmetries ${\kappa_x'}$ and ${\kappa_y'}$ are manifest, as is the spatio-temporal symmetry of advancing a quarter period in time followed by a $90^\circ$ rotation (counter-clockwise in this example). The dimensionless parameters are: the mid-layer Rayleigh number (proportional to the temperature difference across the layer) $R=2324$; the Chandrasekhar number (proportional to the square of the imposed magnetic field) $Q=1033$; the Prandtl number $\sigma=0.1$; the mid-layer magnetic diffusivity ratio $\zeta=0.1$; the adiabatic exponent $\gamma=5/3$; the polytropic index $m=1/4$; the thermal stratification $\theta=6$; the mid-layer plasma beta $\beta=32$; and the horizontal wavelengths are $\lambda=2$ in units of the layer depth. \AMRfigurelabel{FigureAR} \AMRfigure rsfig03.eps After a bifurcation of type ${\rm B}-$, the alternating rolls begin to drift.
The parameter values are as in \FigureAR, but with a higher thermal forcing: $R=3000$ and $Q=1333$. The frames are (approximately) at times (a)~$t=0$, (b)~$t=\hbox{$1\over4$} T$, (c)~$t=\hbox{$1\over2$} T$, (d)~$t=\hbox{$3\over4$} T$, (e)~$t=T$ and (f)~$t=2T$. Note how all spatial and spatio-temporal symmetries have been broken, with the exception of ${\kappa_y'}$, a reflection in the plane $y=\hbox{$1\over4$}\lambda$ (modulo a slight shift in the periodic box). The slow leftward drift of the pattern can be seen by comparing frames (a), (e) and~(f). In addition, a drift symmetry ${\kappa_x'}\rho^2\tau_{c_0-c_2}$, conjugate to~\driftsymmetryone, can be seen by comparing frames (a) and~(c) or (b) and~(d). \AMRfigurelabel{FigureTAR} We finish this section by presenting an example of a bifurcation from alternating rolls to a drifting pattern, which we interpret as an instance of a ${\rm B}-$\ bifurcation. We have solved the PDEs for three-dimensional compressible magnetoconvection in a periodic $2\times2\times1$ box, using the code of Matthews \etal (1995). The PDEs and description of the parameters and numerical method can be found in that paper. \FigureAR\ shows an example of an alternating roll at times approximately $0$, $\hbox{$1\over4$} T$, $\hbox{$1\over2$} T$ and~$\hbox{$3\over4$} T$; the two reflection symmetries ${\kappa_x'}$ and ${\kappa_y'}$ in planes $x=\hbox{$1\over4$}\lambda$ and $y=\hbox{$1\over4$}\lambda$ and the spatio-temporal symmetry of advancing a quarter period in time followed by a $90^\circ$ rotation about the centre of the box are manifest. Increasing the controlling parameter, the temperature difference across the layer, leads to the solution in \FigureTAR: the data are shown at times $0$, $\hbox{$1\over4$} T$, $\hbox{$1\over2$} T$, $\hbox{$3\over4$} T$, $T$ and~$2T$. The only spatial symmetry remaining is the invariance under~${\kappa_y'}$, and the spatio-temporal symmetry has been broken. By comparing frames (a), (e) and (f) at times $0$, $T$ and~$2T$, it can be seen that the solution is drifting slowly leftwards along the $x$-axis. Moreover, a drift symmetry ${\kappa_x'}\rho^2\tau_{c_0-c_2}$ conjugate to~\driftsymmetryone\ can be seen by comparing frames (a) and~(c). All evidence points to the bifurcation being of type ${\rm B}-$\ (though we have not computed the Floquet multipliers and critical eigenfunctions in the PDEs). \addref{refM42} \AMRsection{Conclusion} We have developed a technique for investigating the possible instabilities from continuous group orbits of spatio-temporally symmetric time-periodic solutions of partial differential equations in periodic domains. Our approach is based on centre manifold reduction and symmetry arguments. It is in the spirit of earlier work by Iooss (1986) on bifurcations from continuous group orbits of spatially symmetric steady solutions of partial differential equations. We have treated three examples that arise in convection problems: pulsating waves in two dimensions, and alternating pulsating waves and alternating rolls in three dimensions. A simple bifurcation can lead to drifting solutions in the case of pulsating waves but not alternating pulsating waves. The additional spatial symmetries of alternating rolls can force two Floquet multipliers to cross the unit circle together; this degeneracy can lead to drifting solutions, as in the numerical example presented in the previous section. We have related our work to the theory of $k$-symmetries developed by Lamb and\ Quispel (1994).
\addref{refL43} Our approach can readily be applied to other problems. In the future, we plan to tackle spatial period doubling and multiplying, where the $\tau_\lambda$ symmetries do not act trivially; such instabilities are relevant to simulations of convection carried out in larger boxes (Weiss \etal 1996), and will be related to the study of the long-wavelength instabilities of alternating rolls (Hoyle 1994). We also plan to examine the case of the hexagonal lattice: a Hopf bifurcation on a hexagonal lattice leads to a wide variety of periodic orbits with different spatio-temporal symmetries (Roberts \etal 1986). Finally, we plan to investigate the effect of including the extra $Z_2$ mid-layer reflection symmetry that arises when making the Boussinesq approximation for incompressible fluids. \addref{refW31} \addref{refH26} \addref{refR26} \ack It is a great pleasure to acknowledge valuable discussions with G\'erard Iooss, Jeroen Lamb, and Michael Proctor. We are very grateful to Paul Matthews for the use of his code for solving the PDEs for three-dimensional compressible magnetoconvection. This research was supported by a NATO collaborative research grant CRG-950227. The research of AMR is supported by the Royal Astronomical Society. The research of MS is supported by NSF grant DMS-9404266, and by an NSF CAREER award DMS-9502266. \vfill\eject \references
\newcount\AMRreferenceno \def\genericref #1#2{\ifundefined{#1}\else% \global \advance \AMRreferenceno by 1% {#2}\fi}% \def\author#1#2{#1~#2}% \outer\def\refarticle#1#2#3#4#5#6#7{% \genericref{#1}% {\refjl{#2 #3 #4}{#5}{#6}{#7}}} \outer\def\refarticleshort#1#2#3#4#5{% \genericref{#1}% {\refjl{#2 #3 #4}{#5}{}{}}}% \outer\def\refbook#1#2#3#4#5#6{% \genericref{#1}% {\refbk{#2 #3}{#4}{(#6: #5)}}}% \outer\def\refartbook#1#2#3#4#5#6#7#8#9{% \genericref{#1}% {\refbk{#2 #3 #4}{#5}{ed #6 (#8: #7) pp~\hbox{#9}}}}% \outer\def\refartbooktobe#1#2#3#4#5#6#7#8{% \genericref{#1}% {\refbk{#2 #3 #4}{#5}{ed #6 (#8:#7)}}}% \outer\def\refrussianartbook#1#2#3#4#5#6#7{% \genericref{#1}% {\refbk{#2 #3 #4 (in Russian)}{#5}{#6, #7}}}% \outer\def\refphd#1#2#3#4#5{% \genericref{#1}% {\refbk{#2 #3 #4}{PhD dissertation}{#5}}}% \begingroup % \spaceskip=0.3333em plus 0.25em minus 0.2em \xspaceskip=0.5em plus 0.15em% \refarticleshort{refA36} {\author{F.}{Amdjadi}, \author{P.J.}{Aston} and\ \author{P.}{Plech\'a$\check{\rm c}$}}{1997} {Symmetry breaking Hopf bifurcations in equations with O(2) symmetry with applications to the Kuramoto-Sivashinsky equation} {{\it J.~Comp.~Phys. \rm (in press)}} \refarticle{refA40} {\author{P.J.}{Aston}, \author{A.}{Spence} and\ \author{W.}{Wu}}{1992} {Bifurcation to rotating waves in equations with O(2)-symmetry} {SIAM J.~Appl. Math.}{52}{792--809} \refbook{refC72} {\author{P.}{Chossat} and\ \author{G.}{Iooss}}{1994} {The Couette--Taylor Problem} {Springer}{New York} \refarticle{refC49} {\author{T.}{Clune} and\ \author{E.}{Knobloch}}{1994} {Pattern selection in three-dimensional magnetoconvection} {Physica}{74D}{151--176} \refarticle{refC50} {\author{J.D.}{Crawford} and\ \author{E.}{Knobloch}}{1991} {Symmetry and symmetry-breaking bifurcations in fluid dynamics} {Annu. Rev. Fluid Mech.}{23}{341--387} \refarticle{refF33} {\author{M.J.}{Field}}{1980} {Equivariant dynamical systems} {Trans. Am. Math. Soc.}{259}{185--205} \refbook{refG59} {\author{M.}{Golubitsky}, \author{I.}{Stewart} and\ \author{D.G.}{Schaeffer}} {1988} {Singularities and Groups in Bifurcation Theory. Volume~II} {Springer}{New York} \refarticle{refH26} {\author{R.B.}{Hoyle}}{1994} {Phase instabilities of oscillatory standing squares and alternating rolls} {Phys. Rev.}{49E}{2875--2880} \refarticle{refI3} {\author{G.}{Iooss}}{1986} {Secondary instabilities of Taylor vortices into wavy inflow or outflow boundaries} {J.~Fluid Mech.}{173}{273--288} \refarticle{refK61} {\author{M.}{Krupa}}{1990} {Bifurcations of relative equilibria} {SIAM J.~Math. Anal.}{21}{1453--1486} \refarticle{refL41} {\author{J.S.W.}{Lamb}}{1996} {Local bifurcations in $k$-symmetric dynamical systems} {Nonlinearity}{9}{537--557} \refarticleshort{refL42} {\author{J.S.W.}{Lamb}}{1997} {$k$-Symmetry and return maps of space-time symmetric flows} {{\rm (preprint)}} \refarticle{refL43} {\author{J.S.W.}{Lamb} and\ \author{G.R.W.}{Quispel}}{1994} {Reversing $k$-symmetries in dynamical systems} {Physica}{73D}{277--304} \refarticle{refL23} {\author{A.S.}{Landsberg} and\ \author{E.}{Knobloch}}{1991} {Direction-reversing traveling waves} {Phys.
Lett.}{159A}{17--20} \refarticle{refM48} {\author{P.C.}{Matthews}, \author{M.R.E.}{Proctor}, \author{A.M.}{Rucklidge} and\ \author{N.O.}{Weiss}}{1993} {Pulsating waves in nonlinear magnetoconvection} {Phys. Lett.}{183A}{69--75} \refarticle{refM42} {\author{P.C.}{Matthews}, \author{M.R.E.}{Proctor} and\ \author{N.O.}{Weiss}}{1995} {Compressible magnetoconvection in three dimensions: planforms and nonlinear behaviour} {J.~Fluid Mech.}{305}{281--305} \refarticle{refM66} {\author{P.C.}{Matthews}, \author{A.M.}{Rucklidge}, \author{N.O.}{Weiss} and\ \author{M.R.E.}{Proctor}}{1996} {The three-dimensional development of the shearing instability of convection} {Phys. Fluids}{8}{1350--1352} \refarticle{refR26} {\author{M.}{Roberts}, \author{J.W.}{Swift} and\ \author{D.H.}{Wagner}}{1986} {The Hopf bifurcation on a hexagonal lattice} {Contemp. Math.}{56}{283--318} \refarticle{refR37} {\author{A.M.}{Rucklidge}}{1997} {Symmetry-breaking instabilities of convection in squares} {Proc. R.~Soc. Lond.~A}{453}{107--118} \refartbook{refR29} {\author{A.M.}{Rucklidge} and\ \author{P.C.}{Matthews}}{1995} {The shearing instability in magnetoconvection} {Double-Diffusive Convection} {\author{A.}{Brandt} and\ \author{H.J.S.}{Fernando}} {American Geophysical Union}{Washington}{171--184} \refarticle{refS40} {\author{M.}{Silber} and\ \author{E.}{Knobloch}}{1991} {Hopf bifurcation on square lattices} {Nonlinearity}{4}{1063--1106} \refarticle{refS20} {\author{J.W.}{Swift} and\ \author{K.}{Wiesenfeld}}{1984} {Suppression of period doubling in symmetric systems} {Phys. Rev. Lett.}{52}{705--708} \refarticle{refW31} {\author{N.O.}{Weiss}, \author{D.P.}{Brownjohn}, \author{P.C.}{Matthews} and\ \author{M.R.E.}{Proctor}}{1996} {Photospheric convection in strong magnetic fields} {Mon. Not. R. Astron.~Soc.}{283}{1153--1164} \message{These should be the same: \the\AMRreferenceno\space\the\AMRreferencecount} \endgroup \bye
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Graph burning is a discrete-time process introduced by Bonato, Janssen, and Roshanbin~\cite{bonato2014burning, bonato2016how, roshanbin2016burning} that can be viewed as a simplified model for the spread of contagion in a network. Given an undirected finite graph $G$ without loops and multiple edges, each vertex of the graph is either \emph{burned} or \emph{unburned} throughout the process. At first, every vertex of $G$ has the unburned status. Then, at the start of every round $t\ge 1$, a \emph{burning source} (or simply \emph{source}) is placed at an unburned vertex to burn it. If a vertex is burned in round $t-1$, then in round $t$, each of its unburned neighbors becomes burned. A burned vertex is assumed to remain burned throughout the burning process. The process terminates when all vertices of $G$ have acquired the burned status, in which case we say the graph $G$ is \emph{burned}. The least number of rounds required to complete the burning process is called the \emph{burning number of $G$} and it is denoted by $b(G)$. While the burning numbers of graphs do not satisfy a general hereditary property with respect to taking subgraphs, Bonato, Janssen, and Roshanbin \cite{bonato2016how} showed that in order to burn a connected graph, it suffices to burn its spanning trees. Therefore, much attention in the study of graph burning has been devoted to trees. As one of their initial results, they also determined the exact burning numbers of paths (and thus cycles or hamiltonian graphs), showing that $b(P_m) = \lceil\sqrt{m}\rceil$ with $P_m$ being the path of order $m$. This marked the starting point of their main conjecture on graph burning, which remains unresolved. \begin{burning-conjecture}[\cite{bonato2016how}] If $G$ is a connected graph of order $m$, then $b(G)$ is at most $\lceil\sqrt{m}\rceil$. \end{burning-conjecture} In the literature on graph burning, a graph that satisfies the burning number conjecture is said to be \emph{well-burnable}. So paths and hamiltonian graphs are well-burnable. As remarked above, the burning number conjecture holds if all trees are well-burnable. Classes of trees known to be well-burnable include spiders \cite{bonato2019bounds, das2018burning} and caterpillars \cite{hiller2019burning, liu2020burning}. Here, a \emph{spider} is a tree with exactly one vertex of degree greater than two, and a \emph{caterpillar} is a tree where a path remains after deleting all vertices of degree one. For general connected graphs $G$ of order $m$, since the initial bound of $b(G)\le 2\sqrt{m}-1$ in \cite{bonato2016how}, some attempts~\cite{bessy2018bounds, land2016upper} have been made towards proving the burning number conjecture, with the currently best known upper bound being roughly $\frac{\sqrt{6}}{2}\sqrt{m}$ by Land and Lu~\cite{land2016upper}. Beyond being well-burnable, there are of course trees of order greater than $m^2$ having burning numbers at most $m$, the simplest such trees being stars, each of which has burning number exactly two. Together with the intuition that a tree that deviates from a path should be easier to burn, the following slightly stronger conjecture was made in \cite{tan2020graph}. For convenience, if the burning number of a graph is at most $m$, then we say the graph is \emph{$m$-burnable}. \begin{conjecture}\label{Tree Conjecture} Let $m>n\ge 2$. If $T$ is a tree with $n$ leaves and its order is at most $m^2+n-2$, then $T$ is $m$-burnable.
\end{conjecture} This conjecture obviously holds for stars and paths, and it has been verified in the same paper for spiders. Calling a spider with $n$ leaves an \emph{$n$-spider}, we note that for any $m,n\ge 2$, there are $n$-spiders of order $m^2+n-1$ that are not $m$-burnable, which implies that the bound in the conjecture is tight. While not all $n$-spiders of order $m^2+n-2$ are $m$-burnable when $m \le n$, those that are not $m$-burnable must contain, as a subgraph, the $m$-spider of order $m^2+1$ such that the distance between any two leaves is exactly $2m$. These are the results on the burnability of spiders in \cite{tan2020graph}. One of the purposes of this paper is to further verify Conjecture~\ref{Tree Conjecture} for the next natural class of trees called double spiders: the union of two spiders with an edge joining their respective vertices of maximum degree. More precisely, a \emph{double spider} is a tree that has two special adjacent vertices called \emph{heads} with the property that every other vertex has degree at most two. Note that in a double spider, every leaf that is not a head must be connected to the closest head by a unique path, which we will call an \emph{arm} of the double spider. An \emph{$n$-double spider} is a double spider with $n$ arms. With this definition, paths and spiders can be viewed as double spiders. We also remark that while most $n$-double spiders have exactly $n$ leaves, those where all the arms are joined to the same head have $n+1$ leaves. Our main result here verifies Conjecture~\ref{Tree Conjecture} for double spiders. \begin{theorem}\label{double_spiders} Let $m>n\ge 2$. If $T$ is an $n$-double spider of order at most $m^2 + n - 2$, then $T$ is $m$-burnable. \end{theorem} For the case when $m\le n$, the same examples as those for spiders show that not all $n$-double spiders of order $m^2 + n - 2$ are $m$-burnable. To complete our study on the burnability of double spiders, we show in Theorem~\ref{double_m<n} that, apart from similar exceptional cases, every $n$-double spider of order $m^2 + n -2$ is $m$-burnable. We note that the bound of $m^2+n-2$ is again tight for any $m,n\ge 2$, simply by attaching $n-2$ leaves to two adjacent vertices on a path of order $m^2 + 1$. A \emph{path forest} is a disjoint union of paths. The study of graph burning of spiders is naturally related to that of path forests, as observed in the previous studies~\cite{bonato2019bounds,das2018burning,tan2020graph} on burning spiders. In particular, it was shown in \cite{tan2020graph} that while every path forest consisting of $n$ paths and of order $m^2 - (n-1)^2$ is \mbox{$m$-burnable}, the only path forest with $n$ paths and of order one larger than $m^2 - (n-1)^2$ that is not \mbox{$m$-burnable} contains $n-1$ independent edges as components. This prompts us to investigate the burnability of larger path forests. Could it be possible that the only larger path forests that are not $m$-burnable form a small collection of nicely described path forests? Therefore, our second focus in this paper is on the burnability of path forests. To this end, we first slightly refine the above result in \cite{tan2020graph}, showing that if $T$ is a path forest with $n$ paths such that its shortest path has order at least six and $$|T|\le m^2 - (n-1)(n-2)+1, $$ then $T$ is $m$-burnable. (See Theorem~\ref{n-paths} for the precise version of the result.) Of course, no path forest of order larger than $m^2$ is $m$-burnable, since the source placed in round $t$ burns at most $2(m-t)+1$ vertices by round $m$, for a total of at most $\sum_{t=1}^{m}\left(2(m-t)+1\right)=m^2$ vertices.
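As an aside, the burning process just described is easy to simulate. The following Python sketch is ours and purely illustrative (it is not part of the results of this paper): it checks whether a given burning sequence burns a graph, and confirms on $P_9$ that three rounds suffice, in line with $b(P_m)=\lceil\sqrt{m}\rceil$.
\begin{verbatim}
def burns(adj, sequence):
    """Simulate graph burning: adj maps each vertex to its neighbours,
    and sequence lists where the sources are placed in rounds 1, 2, ...
    Returns True if every vertex is burned at the end of the process."""
    burned, frontier = set(), set()
    for source in sequence:
        # fire first spreads from the vertices burned in the previous round
        spread = {w for v in frontier for w in adj[v] if w not in burned}
        burned |= spread
        # then a new source is placed at an unburned vertex, if possible
        if source not in burned:
            burned.add(source)
            spread.add(source)
        frontier = spread
    return len(burned) == len(adj)

# The path P_9: igniting vertices 2, 6, 8 burns it in three rounds,
# matching b(P_9) = ceil(sqrt(9)) = 3.
path9 = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 8] for i in range(9)}
print(burns(path9, [2, 6, 8]))  # True
\end{verbatim}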
We observe from existing results that a path forest consisting of two paths and of order $m^2$ is \mbox{$m$-burnable}, provided that the shorter path is not just an edge. What about path forests with more paths? Is it true that every path forest of order $m^2$ with sufficiently long paths is $m$-burnable? While we are not able to quantify ``sufficiently long'' here, even when the number of paths in the path forest is fixed, we show that the answer to this question is affirmative. \begin{theorem}\label{long-path-forests} Let $n\geq 2$. There exists $L\in \mathbb{N}$ with the following property. For every path forest $T$ with $n$ paths, if the shortest path of $T$ has length at least $L$ and $|T| = m^2$, then $T$ is $m$-burnable. \end{theorem} As one may notice, the above problem on burning path forests is equivalent to the following partition problem. Given $n$ positive integers $l_1,l_2,\ldots,l_n$ summing to $m^2$, we wish to decide if the set of the first $m$ odd positive integers can be partitioned into $n$ sets $S_1, S_2, \ldots, S_n$ such that for every $1\le i \le n$, the sum of the numbers in $S_i$ is equal to $l_i$. The plan of the paper is as follows. In Section~\ref{path-forests}, we observe some simple results on path forests that are useful for burning double spiders, and we investigate \mbox{$m$-burnable} path forests with $n$ paths and of order larger than $m^2-(n-1)^2 + 1$. Then in Section~\ref{double}, we prove our results on burning double spiders. Our result on the burnability of path forests with sufficiently long shortest paths will be the content of Section~\ref{path-forests-long}. Finally, we mention some remarks and open problems in Section~\ref{conclusion}. While our work focuses on burning trees and path forests, we remark that graph burning of various other classes of graphs has been studied, such as graph products~\cite{mitsche2018burning}, hypercubes~\cite{mitsche2018burning}, and random graphs~\cite{mitsche2017burning}. Since the introduction of graph burning less than a decade ago, a considerable amount of work on this topic has produced many results and algorithms, including results on its variants. An excellent survey on the topic of graph burning can be found in \cite{MR4233796}. We recall some terminology in the context of graph burning. In a burning process, a \emph{burning sequence} is the sequence of vertices at which the burning sources are placed in each round. The shortest such sequence is said to be \emph{optimal}. So the length of an optimal burning sequence of a graph is the burning number of the graph. For a path forest, its \emph{path orders} indicate the respective order of each of its paths. For an arm of a double spider, we usually consider its vertices in order, with the vertex adjacent to the head as the first vertex, and so forth; the $0$th vertex of an arm is the head to which the arm is joined. \section{Path Forests}\label{path-forests} In this section, we study the burnability of path forests. We begin with the burnability of some special classes of path forests, which will be useful in proving the burnability of double spiders in Section~\ref{double}. For the main result of this section, we obtain a strengthening of the following result in \cite{tan2020graph} to accommodate larger path forests but with more exceptional cases. \begin{theorem}\cite{tan2020graph}\label{092021a} Suppose $m\geq n\geq 2$ and let $T$ be a path forest with $n$ paths.
If $$\vert T\vert \leq m^2-(n-1)^2+1,$$ then $T$ is $m$-burnable unless equality holds and the path orders of $T$ are $$m^2-n^2+2, \underbrace{2,2,\dotsc, 2}_{n-1 \text{ times}}.$$ \end{theorem} We first recall a simple result from \cite{tan2020graph} on the burnability of path forests with $n$ paths and of order at most $3n-2$. \begin{proposition}[\cite{tan2020graph}]\label{n-paths-3n} Suppose $n\geq 2$ and let $T$ be a path forest with $n$ paths. If $\vert T\vert\leq 3n-2$ and the shortest path of $T$ has one vertex, then $T$ is $n$-burnable. \end{proposition} This simple result was verified by a straightforward induction on the number of paths of $T$. Using almost identical arguments by induction, we have the following simple burnability results for path forests. \begin{proposition}\label{n-paths-linear} Suppose $n\ge 2$ and suppose $T$ is a path forest with path orders $l_1\ge l_2\ge \cdots \ge l_n$. \begin{enumerate}[(i)] \item\label{n-paths-4n} If $\vert T\vert \leq 4n-4$ with $l_n = 1$ and $l_{n-1}\ge 2$, then $T$ is $n$-burnable. \item\label{n-paths-5na} If $\vert T\vert\leq 5n-6$ with $l_n = 1$ and $l_{n-1} = 3$, then $T$ is $n$-burnable. \item\label{n-paths-5nb} If $\vert T\vert\leq 5n-1$ with $l_n \ge 3$, then $T$ is $(n+1)$-burnable. \end{enumerate} \end{proposition} \begin{proof} For each of the three statements, the base case $n=2$ is straightforward. We suppose now that $n>2$ and the statements are all true for $n-1$. Let $T$ be a path forest as in \eqref{n-paths-4n}. It is obvious that $T$ is $n$-burnable when $l_1\le 3$, and so we consider the case when $l_1\ge 4$. Noting that $l_1\le 2n-1$ and $l_2+l_3+\cdots +l_n \le 4(n-1) - 4$, we see that $T$ is $n$-burnable since by removing the first path of $T$, the resulting path forest is $(n-1)$-burnable by induction hypothesis. For a path forest $T$ as in \eqref{n-paths-5na}, we must also have $l_1\le 2n-1$. So if $l_1\ge 5$, we similarly deduce that $T$ is $n$-burnable by induction hypothesis. Since $n\ge 3$, it is also obvious that $T$ is $n$-burnable when $l_1\le 4$. Finally, suppose $T$ is a path forest as in \eqref{n-paths-5nb}. Again, if $5\le l_1\le 2n+1$, $T$ is $(n+1)$-burnable by induction hypothesis, and if $l_1\le 4$, $T$ is also $(n+1)$-burnable where one of the paths can be burned using the last two burning sources. As $l_n\ge 3$, the only case left is when $l_1 = 2n+2$ and $l_2=l_3=\cdots=l_n=3$, which is clearly $(n+1)$-burnable where the first path can be burned using the first and the last burning sources. \end{proof} \begin{remark} Suppose $T$ is a path forest with path orders $l_1, l_2, \ldots, l_n$ and $T'$ is a path forest with path orders $l_1', l_2', \ldots, l_n'$ such that $l_i'\le l_i$ for each $1\le i\le n$. If $T$ is $m$-burnable, then $T'$ is also clearly $m$-burnable. \end{remark} Before we prove the main result of this section, we shall introduce some notation on path forests. For $m\ge n\ge 2$, let $\mathcal{T}_{n,m}$ be the set of all path forests with $n$ paths where the path orders $l_1,l_2,\ldots,l_n$ are such that \begin{enumerate}[(i)] \item $l_1 = m^2-(n-1)^2+1$ and $l_2=l_3=\cdots=l_n=1$, or \item $l_1= m^2-n^2+2$ and $2\le l_2, l_3,\ldots, l_n \le 3$, or \item $l_1= m^2-(n-1)(n+3)+1$ and $l_2=l_3=\dotsb = l_n=5$. \end{enumerate} (When $m=n$, only (i) is applicable.) It is straightforward to verify that each of the path forests in $\mathcal{T}_{n,m}$ is not \mbox{$m$-burnable}.
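This verification can also be carried out by brute force, in the same spirit as the partition problem described in the introduction: the source ignited in round $t$ burns at most $2(m-t)+1$ consecutive vertices of one path, and sources that are not needed can be placed at vertices that will in any case be burned by earlier sources, so a path forest with path orders $l_1,\dotsc,l_n$ is $m$-burnable precisely when the sizes $1,3,\dotsc,2m-1$ can be distributed among the paths (with some possibly discarded) so that path $i$ receives total size at least $l_i$. The following Python sketch, which is ours and purely illustrative, confirms in this way that no member of $\mathcal{T}_{3,4}$ is $4$-burnable.
\begin{verbatim}
from itertools import product

def m_burnable(path_orders, m):
    """Brute force: assign each source size 1, 3, ..., 2m-1 to one of the
    n paths or discard it (index n); path i must receive total >= l_i."""
    n = len(path_orders)
    sizes = [2 * k + 1 for k in range(m)]
    for assignment in product(range(n + 1), repeat=m):
        totals = [0] * n
        for size, part in zip(sizes, assignment):
            if part < n:
                totals[part] += size
        if all(t >= l for t, l in zip(totals, path_orders)):
            return True
    return False

# No member of T_{3,4} is 4-burnable ...
for orders in [(13, 1, 1), (9, 2, 2), (9, 3, 2), (9, 3, 3), (5, 5, 5)]:
    print(orders, m_burnable(orders, 4))      # all False
# ... while, e.g., the path forest (10, 5, 1) of order 16 is 4-burnable.
print((10, 5, 1), m_burnable((10, 5, 1), 4))  # True
\end{verbatim}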
For convenience, for a path forest $T$, we define $t_T$ to be the number of paths of order two in $T$ if the shortest path of $T$ has order two, and $t_T = 0$ otherwise. So every path forest $T$ in $\mathcal{T}_{n,m}$ has order exactly $m^2-(n-1)(n-2)+1-t_T$. The improvement over Theorem~\ref{092021a} states that if $T$ is a path forest with $n$ paths and the order of $T$ is at most $m^2-(n-1)(n-2)+1-t_T$, then $T$ is \mbox{$m$-burnable}, provided $T$ is not in $\mathcal{T}_{n,m}$. The following lemma on path forests with three paths deals with the base case of this main result. \begin{lemma}\label{3-paths} Let $m\ge 3$ and let $T$ be a path forest consisting of three paths. If $$|T|\le m^2-1-t_T\quad\mbox{and}\quad T\notin\mathcal{T}_{3,m},$$ then $T$ is $m$-burnable. \end{lemma} \begin{proof} We prove by mathematical induction on $m$. Note that for the base case $m=3$, we must have $t_T=0$. Since $T\notin \mathcal{T}_{3,3}$ is a path forest of order at most eight, its path orders $l_1\ge l_2\ge l_3$ must satisfy $l_1\leq 5$, $l_2\leq 3$, and $l_3=1$, and thus $T$ is $3$-burnable. Now, let $m\geq 4$ and suppose the lemma is true for $m-1$. Let $T$ be a path forest with path orders $l_1\geq l_2\geq l_3$ such that $|T|\le m^2-1-t_T$ and $T\notin\mathcal{T}_{3,m}$. If $|T|\le m^2 - 3$, we see that $T$ is $m$-burnable by Theorem~\ref{092021a} (note that the exceptional path orders in that theorem belong to $\mathcal{T}_{3,m}$). So we will only need to consider the case when $|T|\ge m^2 - 2$, and in particular, we may assume $t_T\le 1$. \begin{case} $t_T=1$ \end{case} In this case, $l_1+l_2+l_3 = m^2-2$, $l_3=2$, and $l_2\ge 4$. If $l_1\ge 2m+3$, the path forest $T'$ with path orders $l_1-(2m-1)\ge 4, l_2,l_3$ is $(m-1)$-burnable by induction hypothesis as $|T'|=(m-1)^2 - 2 = (m-1)^2 - 1 - t_{T'}$ and $T'$ is clearly not in $\mathcal{T}_{3,m-1}$. Observe that we can have $l_1\le 2m + 2$ here only when $m\le 5$ as $m^2 - 2 > 2(2m+2) + 2$ whenever $m\ge 6$. When $m=4$, since $|T|=14$, the path orders of $T$ can be $(8,4,2)$, $(7,5,2)$, or $(6,6,2)$, each of which is clearly $4$-burnable. Similarly for $m=5$, the path orders of $T$ can only be $(12,9,2)$ or $(11,10,2)$, and so $T$ is $5$-burnable. \begin{case} $t_T=0$. \end{case} In this case, $m^2-2\le l_1+l_2+l_3\le m^2-1$ and $l_3\neq 2$. If $l_3 = 1$, the path forest $T'$ with path orders $l_1+1$ and $l_2+1$ has order at most $m^2$. Since $T\notin\mathcal{T}_{3,m}$, the forest $T'$ cannot have order exactly $m^2$ with path orders $m^2-2$ and $2$, as this would force $T$ to have path orders $m^2-3$, $1$, $1$; hence $T'$ is $m$-burnable by Theorem~\ref{092021a}. We can then deduce that the first two paths of $T$ can be burned using the first $m-1$ burning sources, and the last path can be burned using the last burning source. So we may assume that $l_3\ge 3$. Observe that $l_1\le 2m-3$ is possible only when $m=4$ as $m^2-2 > 3(2m-3)$ whenever $m\ge 5$. If $m = 4$, the only possible $T$ with $l_1\le 2m-3 = 5$ has path orders $(5,5,4)$, which is clearly $4$-burnable. So we may further assume that $l_1\ge 2m - 2$. Consider the path forest $T'$ obtained from $T$ as follows: delete a path of order $2m-2$ or $2m-1$ if there is such a path in $T$, or else delete $2m-1$ vertices from the first path of $T$. Clearly, $T$ is $m$-burnable if $T'$ is $(m-1)$-burnable. Observe that either $T'$ is a path forest with two paths and $\vert T'\vert \leq (m-1)^2$, or $T'$ is a path forest with three paths and $\vert T'\vert \leq (m-1)^2 - 1$.
For the former, $T'$ is $(m-1)$-burnable by Theorem~\ref{092021a}, and for the latter, we see that $T'$ is $(m-1)$-burnable from induction hypothesis unless $T'\in\mathcal{T}_{3,m-1}$ or $T'$ has order $(m-1)^2 - 1$ with $t_{T'} = 1$. Since $l_3\ge 3$ and $T\notin\mathcal{T}_{3,m}$, for $T'$ to be a path forest in $\mathcal{T}_{3,m-1}$, it must either be the case that $l_1\in \{2m+1,2m+2\}, l_2\ge 4, l_3=3$ or $l_1 = 2m+4, l_2\ge 6, l_3=5$. It is now straightforward to verify that $T$ is $m$-burnable in either of the cases. In the latter case, we must have $m\ge 6$ since $m^2 - 1 \ge l_1+l_2+l_3\ge 2m+15$, and so the first path of $T$ can be burned with the second and the $(m-3)$th burning sources (which would burn $2m-3 + 7 = 2m+4$ vertices) and the last path of $T$ can be burned with the $(m-2)$th burning source, while the remaining burning sources are enough to burn the second path of $T$. The other case can be verified similarly. The final case left is when $T'\notin\mathcal{T}_{3,m-1}$ is of order $(m-1)^2-1$ with $t_{T'} = 1$. We must then have $l_1 = 2m+1$, and since $l_3\ge 3$ and $T\notin\mathcal{T}_{3,m}$, this is only possible when $m=5$ or $m=6$. By the construction of $T'$, we note that none of the paths in $T$ has order $2m-1$ or $2m-2$, and so the only possible such $T$ here has path orders $(13,13,9)$ for $m=6$, or path orders $(11,7,6)$ or $(11,10,3)$ for $m=5$, each of which is clearly $m$-burnable. This completes the proof of the lemma. \end{proof} We are now ready to prove the main result on the burnability of path forests. \begin{theorem}\label{n-paths} Suppose $m\ge n\ge 3$ and let $T$ be a path forest with $n$ paths. If $$|T|\le m^2-(n-1)(n-2)+1-t_T\quad\mbox{and}\quad T\notin\mathcal{T}_{n,m},$$ then $T$ is $m$-burnable. \end{theorem} \begin{proof} We shall prove the result by mathematical induction on $n$. By Lemma~\ref{3-paths}, the base case $n=3$ follows. Let $n\ge 4$ and suppose the result holds for $n-1$. We will now proceed by mathematical induction on $m\ge n$. When $m$ equals $n$, suppose $T$ is a path forest with path orders $l_1\ge l_2\ge \cdots\ge l_n$ and $|T|\le 3n - 1 - t_T$. This gives $t_T = 0$ for otherwise, $|T|\ge 3(n-t_T) + 2t_T = 3n-t_T > |T|$. Hence, $|T|\le 3n - 1$ and $l_n = 1$ (as $t_T=0$ rules out $l_n=2$, while $l_n\ge 3$ would give $|T|\ge 3n$). If $l_1\le 3$, $T$ is clearly $n$-burnable, and so we may assume that $l_1\ge 4$. Since $T\notin\mathcal{T}_{n,n}$, we also have $l_1\le 2n-1$: if $l_2\ge 2$, this follows by counting, while if $l_2=\dotsb=l_n=1$, then $l_1\le |T|-(n-1)\le 2n$, and $l_1=2n$ is excluded as the path orders would then be those of a member of $\mathcal{T}_{n,n}$. Now the path forest with $n-1$ paths obtained from $T$ by deleting its first path has order at most $3n - 1 - 4 = 3(n-1) - 2$, and so is $(n-1)$-burnable by Proposition~\ref{n-paths-3n}. This implies that $T$ is $n$-burnable, and so the base case $m=n$ holds. Now, consider $m\ge n+1$ and suppose the result holds for $m-1$. Let $T$ be a path forest with path orders $l_1\ge l_2\ge \cdots\ge l_n$ such that $|T|\le m^2 - (n-1)(n-2) +1 - t_T$ and $T\notin\mathcal{T}_{n,m}$. By Theorem~\ref{092021a}, we may assume that $|T| \ge m^2 - (n-1)^2 + 2$. Now if $l_n = 1$, we have $t_T = 0$. Consider the path forest $T'$ with $n-1$ paths with path orders $l_1+1, l_2+1, \ldots, l_{n-1} +1$. Note that \begin{align*} |T'| &\le m^2 - (n-1)(n-2) + (n-1)\le m^2 - (n-2)^2 + 1, \end{align*} and so $T'$ is $m$-burnable by Theorem~\ref{092021a}; indeed, the exceptional path orders in that theorem would force $l_2=\dotsb=l_{n-1}=1$ and $l_1=m^2-(n-1)^2+1$, making $T$ a member of $\mathcal{T}_{n,m}$. This shows that $T$ is $m$-burnable, where the first $n-1$ paths can be burned with the first $m-1$ burning sources, while the last path can be burned with the last burning source. So for the rest of the proof, we may assume that $l_n\ge 2$.
Observe that $l_1 \ge 2m-2n+3$, as otherwise we have $m^2 - (n-1)^2 + 2 \le |T| \le n(2m-2n + 2)$, which gives the contradiction that $(m-n)^2 + 1 \le 0$. We now consider a few cases based on the order of the longest path of $T$. \setcounter{case}{0} \begin{case} $l_1\le 2m-1$. \end{case} Consider the path forest $T'$ with path orders $l_2,l_3,\ldots,l_n$. Note that $t_{T'} = t_{T}$ and $|T'|\le (m-1)^2 - (n-2)(n-3) + 1 - t_{T'}$, with equality only if $l_1 = 2m-2n+3$. Hence, $T'$ is $(m-1)$-burnable whenever $l_1 > 2m-2n+3$ by the first induction hypothesis, and thus $T$ is $m$-burnable. Now for the case when $l_1 = 2m-2n+3$ and $m\ge n+2$, we can see directly that $T$ is $m$-burnable: each of the first $n-1$ paths can be burned with one of the first $n-1$ burning sources, as $2m - 2(n-1) + 1 \ge l_1$, while the last path can be burned with the $n$th and the $(n+1)$th burning sources. The only case left is when $m = n+1$ and $l_1 = 2m - 2n + 3 = 5$. Since such a $T$ is not in $\mathcal{T}_{n,n+1}$, we have $l_n \le 4$, and so $T$ is clearly $(n+1)$-burnable. \begin{case} $l_1\ge 2m$ and $l_1\neq 2m + 1$. \end{case} By considering the path forest $T'$ with path orders $l_1 - (2m-1), l_2, \ldots, l_n$, we see that $t_{T'} \le t_T$ and $|T'|\le (m-1)^2 - (n-1)(n-2) + 1 - t_{T'}$. Hence, $T'$ is \mbox{$(m-1)$-burnable} whenever $T'\notin\mathcal{T}_{n,m-1}$ by the second induction hypothesis, and thus $T$ is $m$-burnable. Since $l_n\ge 2$ and $T\notin\mathcal{T}_{n,m}$, we see that for $T'\in\mathcal{T}_{n,m-1}$, we must have $l_1 = 2m+4$ or $l_1 = 2m+2$. Consider such a path forest $T$. If $l_1 = 2m+4$, we have $l_2 = (m-1)^2 - (n-1)(n+3) + 1$ and $l_3 = l_4 = \cdots = l_n = 5$. Since $T\notin\mathcal{T}_{n,m}$, we also have $l_2\ge 6$. Now observe that $2m+4 = l_1\ge l_2 = (m-n-2)(m+n)+5\ge 6$ is only possible when $m = n+3$. So we have $l_1 = 2m+4 = (2m-3) + 7$, $l_2=2m+2 = (2m-1) + 3$, and it is straightforward to verify that $T$ is $(n+3)$-burnable using only the first $n+2$ burning sources. If $l_1 = 2m+2$, we have $l_2 = (m-1)^2 - n^2 + 2$ and $3\ge l_3, l_4, \ldots, l_n \ge 2$. Since $T\notin\mathcal{T}_{n,m}$, we also have $l_2\ge 4$. Now observe that $2m+2 = l_1\ge l_2 = (m-n-1)(m+n-1)+2\ge 4$ is only possible when $m = n+2$. So we have $l_1 = 2m+2 = (2m-3) + 5$, $l_2=2m-1$, and it is straightforward to verify that $T$ is $(n+2)$-burnable using only the first $n+1$ burning sources. \begin{case} $l_1 = 2m+1$. \end{case} We first observe that in this case, we have $m\ge n+2$, since if $m = n+1$, then $$|T|\ge (2n+3) + 3(n-t_T-1) + 2t_T = m^2 - (n-1)(n-2) + 1 - t_T\ge |T|,$$ which would only be possible if $T\in\mathcal{T}_{n,n+1}$. We now proceed with the following claim using similar arguments as in the previous cases. \begin{claim} If $2m-2n+3\le l_{i_0}\le 2m$ for some $2\le i_0\le n$, then $T$ is $m$-burnable. \end{claim} (Proof of Claim) If there is such an $i_0$, we consider the path forest $T'$ obtained from $T$ by deleting $\min\{2m-1, l_{i_0}\}$ vertices from its $i_0$th path. Then either $T'$ is a path forest consisting of $n$ paths where $ \vert T'\vert\leq (m-1)^2 - (n-1)(n-2) + 1$ and the shortest path of $T'$ has one vertex, or $T'$ is a path forest consisting of $n-1$ paths and $\vert T'\vert \leq (m-1)^2 - (n-2)(n-3) + 1 - t_{T'}$ with $t_{T'}= t_T$. For the former, noting that $l_n\ge 2$ and so $T'\notin\mathcal{T}_{n,m-1}$, we see that $T'$ is $(m-1)$-burnable by the second induction hypothesis.
For the latter, we further delete $2m-3$ vertices from the longest path of $T'$ (it is also the first path of $T$) to obtain a path forest $T''$ consisting of $n-1$ paths where one of its paths has order $l_1 - (2m-3)=4$ and $t_{T''} = t_T$. It can be verified now that $T''\notin\mathcal{T}_{n-1,m-2}$ and so noting that $|T''|\le (m-2)^2 - (n-2)(n-3) + 1 - t_{T''}$, we see that $T''$ is $(m-2)$-burnable by the first induction hypothesis. This in turn implies that $T'$ is $(m-1)$-burnable. In both cases, $T'$ is $(m-1)$-burnable. It follows that $T$ is $m$-burnable, completing the proof of the claim. With the claim above, it remains to consider $T$ with path orders $$ \underbrace{2m+1,2m+1,\dotsc, 2m+1}_{k \text{ times for some } k\geq 1 }, \underbrace{l_{k+1}, l_{k+2},\dotsc, l_n}_{\text{each } \leq 2m-2n+2}.$$ We now show that $m \ge n+k+1$. Recalling that $m\ge n+2$, this is true when $k=1$. For $k\ge 2$, we have \begin{align*} m^2 - (n-1)(n-2) + 1 - t_T\ge |T| &\ge k(2m+1) + 3(n-k) - t_T, \end{align*} which gives $$(m-k)^2\ge n^2 + (k-1)^2 > n^2,$$ and thus $m>n+k$. Finally, we show directly that $T$ is $m$-burnable as follows. Noting that $2m + 1 \le (2m - 2i + 1) + (2i + 1)$, for each $1\le i\le k$, the $i$th path (of order $2m+1$) is burned with the $i$th and the $(m-i)$th burning sources. In this way, the $(k+1)$th through the $n$th burning sources, together with the last burning source, are yet to be used. As the $(n-1)$th burning source can burn $2m - 2n + 3$ vertices, while the $n$th and the last burning sources can collectively burn $2m-2n+2$ vertices, we see that the last $n-k$ paths of $T$, each of which has order at most $2m-2n+2$, can be burned with these $n-k+1$ burning sources. Therefore, $T$ is $m$-burnable. This completes the proof of the theorem. \end{proof} \section{Double Spiders}\label{double} We study the burnability of double spiders in this section. We start with \mbox{$n$-double} spiders of order $m^2+n-2$ where $m>n$, showing that such double spiders are $m$-burnable. This verifies Conjecture~\ref{Tree Conjecture} for double spiders. Clearly, a $2$-double spider is either a path or a $3$-spider, and so it is $m$-burnable when its order is at most $m^2$. The following easy lemma asserts that this can in fact be done in such a way that the heads are burned in the earlier rounds. This lemma will serve as the base case of our first result on double spiders. \begin{lemma}\label{double_n=2} Let $m\geq 3$. Then every $2$-double spider of order at most $m^2$ is \mbox{$m$-burnable}. Furthermore, if the shortest arm has length $l$, then there is a way to burn the double spider in $m$ rounds such that at least $\min\{l,m-2\}$ rounds still remain after both its heads are burned. \end{lemma} \begin{proof} Suppose $T$ is a $2$-double spider with arms of lengths $l_1\geq l_2$ and its order is at most $m^2$. For convenience, we may as well assume that $|T| = m^2$. Regardless of whether $T$ is a path or a $3$-spider, the following burning strategy works. If $l_2\le m-2$, then we put the first burning source at the $(m-2-l_2)$th vertex on the first arm. In $m$ rounds, this first burning source would burn the entire second arm, and clearly after both the heads are burned by this first burning source, there are still $l_2$ rounds left. The remaining vertices unburned by the first burning source form a path of order $(m-1)^2$, and so is clearly $(m-1)$-burnable. Suppose now that $l_2\ge m-1$.
Then we put the first burning source at one of the heads so that the remaining at most $(m-1)^2$ vertices unburned by the first burning source form one of the following graphs $T'$. Either $T'$ is a path (possible when $l_2 = m-1$), or $T'$ is a path forest with two paths such that its path orders are not $(m-1)^2 - 2$ and $2$. Therefore, $T'$ is $(m-1)$-burnable, and thus the lemma follows. \end{proof} In the above lemma, we see that when both arms of a $2$-double spider are just long enough (at least $m-2$), we have an optimal burning sequence starting at one of its heads. But just like burning spiders, this is not always the case for double spiders with more than two arms. Consider the $3$-spider with arms of lengths $5$, $5$, and $6$. Then any optimal burning sequence (of length four) has to start from the first vertex next to the head on one of the arms of length five. Hence, if we regard this as a $3$-double spider with three arms of equal lengths, we see that for any optimal burning sequence, after both heads are burned, there is only one round left. So we may not always start the burning sequence at one of the heads for an $n$-double spider with $n\geq 3$ even when all of its arms are just long enough, in slight contrast to Lemma~\ref{double_n=2}. As with the corresponding result on spiders in \cite{tan2020graph}, extending Theorem~\ref{double_spiders} to keep track of when the heads are burned in a double spider simplifies the proof. \begin{theorem}\label{double_m>n} Suppose $m>n\geq 2$. Then every $n$-double spider of order at most $m^2+n-2$ is $m$-burnable. Furthermore, if the shortest arm has length $l$, then there is a way to burn the double spider in $m$ rounds such that at least $\min\{l,m-3\}$ rounds still remain after both its heads are burned. \end{theorem} \begin{proof} We prove by mathematical induction on $n$. By Lemma~\ref{double_n=2}, the base case $n=2$ follows. Suppose $n\ge 3$ and the result holds for $n-1$. Let $m>n$ and suppose $T$ is an $n$-double spider with arms of lengths $l_1\ge l_2\ge \cdots\ge l_n$ and $\vert T\vert \leq m^2+n-2$. We may as well assume that $|T|=m^2+n-2$. We start with the following claim that deals with the case when the shortest arm is not too long. \begin{claim} We may assume that $l_n\ge m$. \end{claim} (Proof of Claim) First, we suppose $l_n\le m-3$. Let $T'$ be the $(n-1)$-double spider obtained from $T$ by deleting its $n$th arm. Since $|T'| \le m^2+(n-1)-2$, we see from induction hypothesis that there is a way to burn $T'$ in $m$ rounds such that at least $\min\{l_{n-1}, m-3\}\ge l_n$ rounds remain after both its heads are burned. Clearly, $T$ can be burned by the same burning sequence. Now, we suppose $l_n$ is either $m-2$ or $m-1$. If $l_n=m-1$, we put the first source at the head to which the $n$th arm is joined, while if $l_n= m-2$, we put the first source at a head to which some arm of length at least $m-1$ is joined. In either case, in $m$ rounds, the first source would burn at least $n(m-2)+1$ vertices from the $n$ arms as well as both the heads, and the rest of the vertices form a path forest $T'$ with at most $n-1$ paths. Note that \begin{align*} |T'| &\leq m^2+n-2-n(m-2)-3\\ &=(m-1)^2+3n-m(n-2)-6\\ &\le (m-1)^2+3n-(n^2-n-2)-6 &\text{(because $m\geq n+1$)} \\ &= (m-1)^2-(n-2)^2. \end{align*} By Theorem~\ref{092021a}, $T'$ is $(m-1)$-burnable and therefore $T$ can be burned in $m$ rounds starting at one of its heads. This completes the proof of the claim.
With the above claim, we now have $l_i\ge m$ for all $1\le i\le n$. We will show that $T$ is $m$-burnable, and that in most cases, $T$ has an optimal burning sequence starting at one of its heads, while in the remaining few cases, the optimal burning sequence starts at the first vertex of an arm, next to the head it is joined to. With such burning sequences, at least $m-3$ rounds are left after both heads of $T$ are burned, which would complete the induction step. Our burning strategy for every case is similar. We first put the burning source at one of the heads or adjacent to them (i.e.~at the first vertex of an arm), and we observe that the remaining vertices unburned by this first burning source (after $m$ rounds) form a path forest with at most $n$ paths, which we denote by $T'$. Finally, by showing that $T'$ is $(m-1)$-burnable using Theorem~\ref{092021a} and Proposition~\ref{n-paths-linear}, we see that $T$ is $m$-burnable. We now consider these cases. \setcounter{case}{0} \begin{case} $m\geq n+3$. \end{case} The first source is placed at a head of $T$ to which at least two arms are joined (possible as $n\geq 3$). This first source would burn at least $m-1$ vertices from each of the two aforementioned arms in $m$ rounds, and at least $m-2$ vertices from each of the other arms. As before, we see that \begin{align*} |T'| &\leq m^2+n-4-2(m-1)-(n-2)(m-2)\\ &= (m-1)^2+n-(n-2)(m-2)-3\\ &\leq (m-1)^2+n-(n^2-n-2)-3 &\text{(because $m\geq n+3$)} \\ &= (m-1)^2-(n-1)^2. \end{align*} Hence, $T'$ is $(m-1)$-burnable by Theorem~\ref{092021a}. \begin{case} $m=n+1$. \end{case} If $l_n = m$, then we put the first source this time at the head to which the $n$th arm is joined. Letting the path orders of $T'$ be $l_1',l_2',\ldots,l_n'$, we see that $l_i'\le l_i - (m-2) =: l_i''$ for $1\le i\le n-1$ and $l_n' = 1 =: l_n''$. Now, since $l_i''\ge 2$ for $1\le i\le n-1$, $l_n''=1$, and $$\sum_{i=1}^n l_i'' = \left(\sum_{i=1}^n l_i\right) - n(m-2) - 1 = m^2 + n - 4 - n(m-2) - 1 = 4n - 4,$$ Proposition~\ref{n-paths-linear}\eqref{n-paths-4n} implies that the path forest with path orders $l_1'',l_2'',\ldots,l_n''$ is $n$-burnable. This in turn implies that $T'$ is also $n$-burnable. If $l_n\ge m+1$, then it is straightforward to see that $T$ has at least three arms of length exactly $m+1$. Indeed, letting $k$ be the number of arms of $T$ of length $m+1$, we see that $$ n^2 + 3n - 3 = m^2 + n - 4 \ge (n-k)(m+2) + k(m+1) = n(n+3)-k,$$ which clearly implies $k\ge 3$. So we may suppose that the $(n-1)$th and the $n$th arms are joined to the same head, and we put the first source at the vertex on the $n$th arm next to the head. Observe that this first burning source would burn exactly $m-2$ vertices of the $(n-1)$th arm and $m$ vertices of the $n$th arm, and at least $m-3$ vertices from each of the other arms. As above, to show that $T'$ is $n$-burnable, it suffices to show that the path forest with path orders $$l_1-(m-3),l_2-(m-3),\ldots,l_{n-2}-(m-3), 3, 1$$ is $n$-burnable. Since this path forest has order $m^2 + n - 4 - n(m-3) - 4 = 5n-7$, it is $n$-burnable by Proposition~\ref{n-paths-linear}\eqref{n-paths-5na}. \begin{case} $m=n+2$. \end{case} If $l_n=m$, the first burning source is again put at the vertex on the $n$th arm next to the head. Observe that this first burning source would burn the $n$th arm completely and at least $m-3$ vertices from each of the other arms. So $T'$ has $n-1$ paths and $$ |T'| \le m^2 + n - 4 - (n-1)(m-3) - m = 6n - 3= (n+1)^2 - (n-2)^2,$$ and so $T'$ is $(n+1)$-burnable by Theorem~\ref{092021a}.
For the final case when $l_n\ge m+1$, we first note that $l_1\ge n+5 = m+3$, as $|T| = m^2+n - 2 = n^2+5n+2$. We put our first burning source at the head of $T$ to which the first arm is joined. Observe that this first source would burn the first $m-1$ vertices of the first arm, and at least $m-2$ vertices from each of the other arms. As in Case 2, we want to show that the path forest with path orders $$l_1 - (m-1), l_2 - (m-2), l_3-(m-2), \ldots, l_n - (m-2)$$ is $(n+1)$-burnable. Noticing that this path forest has order $n^2+5n - n(m-2) - 1 = 5n -1$ and each of its paths has order at least three, we see that it is $(n+1)$-burnable by Proposition~\ref{n-paths-linear}\eqref{n-paths-5nb}. This of course implies that $T'$ is $(n+1)$-burnable. From these cases, the proof of the theorem is now complete. \end{proof} For the remainder of the section, we consider $n$-double spiders of order $m^2+n-2$ where $m\le n$, proving the analogue of the corresponding result on spiders in \cite{tan2020graph}. In this case, not all such double spiders are $m$-burnable, as the following simple lemma shows. \begin{lemma} Let $m\geq 2$. If $T$ is an $m$-double spider such that the distance between any two leaves is at least $2m$, then $T$ is not $m$-burnable. \end{lemma} \begin{proof} Suppose $T$ is an $m$-double spider as in the lemma. The unique path joining any two leaves has order at least $2m+1$. Consider a burning process that takes $m$ rounds. Suppose a source is put on this path and it burns one of the two leaves when the burning process ends. Then at least two vertices at the opposite end of the path are left unburned by this source. Moreover, since any two leaves are at distance at least $2m$ apart and a source burns only vertices within distance at most $m-1$ of it, each burning source can burn at most one leaf. It follows that in a burning process of $m$ rounds in which $m-1$ of the leaves are burned by the first $m-1$ sources at the end of the process, the last remaining leaf together with its neighbor would not be burned by these $m-1$ burning sources. However, the last burning source burns only one vertex, and thus $T$ would not be completely burned at the end of the process. Therefore, $T$ is not $m$-burnable. \end{proof} The following theorem completes our results on the burnability of double spiders. \begin{theorem}\label{double_m<n} Suppose $n\ge m\ge 3$ and let $T$ be an $n$-double spider such that $\vert T\vert \leq m^2+n-2$. Then $T$ is $m$-burnable unless it includes, as a subgraph, an $m$-double spider such that the distance between any two leaves is at least $2m$. \end{theorem} \begin{proof} Let $T$ be an $n$-double spider of order $m^2+n-2$ and suppose it does not contain an $m$-double spider such that the distance between any two leaves is at least $2m$. Letting $l_1\ge l_2\ge \cdots \ge l_n$ be the lengths of the arms of $T$ as usual, we note that it must be the case that $l_m\le m-1$. We shall assume that $l_1\ge m-1$, for otherwise $T$ is clearly $m$-burnable. Now, we consider a few cases based on $l_m$. \setcounter{case}{0} \begin{case} $l_m\le m-3$. \end{case} Note that $m\ge 4$ in this case. Let $T'$ be the $(m-1)$-double spider obtained from $T$ by deleting its last $n-m+1$ arms. Hence, $$|T'| \le m^2 + n - 2 - (n-m+1) = m^2 + (m-1) - 2.$$ So by Theorem~\ref{double_m>n}, there is a way to burn $T'$ in $m$ rounds such that after both of its heads are burned, at least $\min\{l_{m-1}, m-3\}$ rounds remain. Since $\min\{ l_{m-1}, m-3\} \ge l_m$, it follows that the same burning sequence will burn $T$ as well. Thus, $T$ is $m$-burnable. \begin{case} $l_m = m-2$.
\end{case} Note that in this case, we must have $l_{m-1}\le m$, for otherwise $|T| \ge (m-1)(m+1) + (m-2) + n - m + 2 = m^2 + n - 1 > |T|$. To show that $T$ is $m$-burnable, we will place the first burning source at one of its heads. Observe that in $m$ rounds, this burning source would clearly burn the last $n-m+1$ arms of $T$ completely and at least $m-2$ vertices from each of the first $m-1$ arms. By appropriately choosing the head for the first burning source as follows, it can be guaranteed that one extra vertex is burned by this burning source. If $l_{m-1} = m-2$, we choose the head to which the first arm is joined, and if $l_{m-1}\ge m-1$, we choose the head to which the $(m-1)$th arm is joined. This way, we also ensure that at most one vertex of the $(m-1)$th arm is left unburned by the first burning source. Letting $T'$ be the path forest formed by the remaining vertices unburned by the first burning source, we see that $T'$ has at most $m-1$ paths, and if it has exactly $m-1$ paths (only possible if $l_{m-1} = m$), its shortest path has order one. Of course, $T'$ contains, as a spanning subgraph, a path forest with exactly $m-1$ paths whose shortest path has order one. Since $$|T'|\le m^2 + n - 4 - m(m-2) - 1 - (n-m) = 3m - 5 = 3(m-1) - 2,$$ this spanning subgraph, and thus $T'$, is $(m-1)$-burnable by Proposition~\ref{n-paths-3n}. It follows that $T$ is $m$-burnable. \begin{case} $l_m = m-1$. \end{case} We will show in this case that $T$ is $m$-burnable by putting the first source at the head to which the $m$th arm is joined. As in Case 2, the remaining vertices unburned by this first burning source (after $m$ rounds) form a path forest $T'$ of order at most $3m - 5$. We claim that either $T'$ has at most $m-2$ paths, or $T'$ has exactly $m-1$ paths with the shortest path having order one; so, as in Case 2, $T'$ is $(m-1)$-burnable. If $l_{m+1}\le m-2$ (or if $m = n$), we note that among the first $m-1$ arms of $T$, either some arm joined to the same head as the $m$th arm has length at most $m$, or some arm joined to the other head has length at most $m-1$. This is due to the property of $T$ that it does not contain an $m$-double spider in which any two leaves have distance at least $2m$. It is now clear that $T'$ is as claimed. If $l_{m+1} = m - 1$ and $l_{m+2}\le m-2$ (or if $l_{m+1} = m-1$ and $n=m+1$), we note by simple counting that we must also have $l_{m-1} = m-1$. So we may assume that both the $m$th and the $(m+1)$th arms of $T$ are joined to the same head, and again, we see that $T'$ is as claimed. Finally, if $l_{m+2} = m-1$, it must be the case that the first $m+2$ arms of $T$ each have length exactly $m - 1$, while the last $n - m - 2$ arms each have length exactly one. So we may assume that the $m$th arm of $T$ is joined to the head to which the majority of the longer arms are joined. Since $m+2\ge 5$, $T'$ is as claimed. In fact, $T'$ consists of at most $\lfloor\frac{m}{2}\rfloor+1$ isolated vertices. From the above cases, the proof of the theorem is complete. \end{proof} \section{Path Forests with Sufficiently Long Paths}\label{path-forests-long} This section aims to show that a path forest of order $m^2$ with a sufficiently long shortest path is always $m$-burnable, as stated in Theorem~\ref{long-path-forests}. For this purpose, we may assume in this section that every path forest under our consideration has order $m^2$ for some $m\in\mathbb{N}$.
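Before developing the machinery of this section, it may help to make $m$-burnability of path forests concrete. The following Python sketch, which is ours and added for illustration only (the function name \texttt{is\_m\_burnable} is not from the paper), decides $m$-burnability by brute force. It relies on the standard reduction used implicitly throughout this paper: the burning source placed in round $i$ burns at most $2(m-i)+1$ consecutive vertices of a single path, so a path forest is $m$-burnable if and only if the odd amounts $2m-1, 2m-3, \dotsc, 1$ can be split into disjoint groups, one per path, whose sums cover the path orders (leftover sources may be wasted on already-burned vertices).
\begin{verbatim}
def is_m_burnable(path_orders, m):
    """Decide whether the path forest with the given path orders is
    m-burnable: the i-th source contributes an interval of at most
    2*(m-i)+1 vertices on a single path."""
    sources = list(range(2 * m - 1, 0, -2))      # 2m-1, 2m-3, ..., 3, 1
    start = tuple(sorted(path_orders, reverse=True))

    def solve(i, remaining):
        # remaining[j] = number of vertices of path j not yet covered
        if all(r <= 0 for r in remaining):
            return True
        if i == len(sources):
            return False
        tried = set()
        for j, r in enumerate(remaining):
            if r > 0 and r not in tried:         # skip symmetric branches
                tried.add(r)
                nxt = remaining[:j] + (r - sources[i],) + remaining[j+1:]
                if solve(i + 1, nxt):
                    return True
        return solve(i + 1, remaining)           # waste the i-th source

    return solve(0, start)

# (8, 5, 3) has order 16 and is 4-burnable; the deficient forest
# (17, 15, 4) of order 36 is not 6-burnable.
assert is_m_burnable((8, 5, 3), 4)
assert not is_m_burnable((17, 15, 4), 6)
\end{verbatim}
For instance, the path forest $(8,5,3)$ of order $16$ is $4$-burnable, while $(17,15,4)$ of order $36$ is not $6$-burnable; both facts are verified by the assertions above.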
Additionally, we often identify a path forest having $n$ paths with the $n$-tuple listing its path orders $(l_1,l_2, \ldots, l_n)$, and we \emph{do not} always assume $l_1\ge l_2\ge \cdots \ge l_n$. Generalizing from the context of burning connected graphs, we also say that a path forest of order $m^2$ is \emph{well-burnable} if it is $m$-burnable. When a path forest is not well-burnable, we say it is \emph{deficient}. We start by defining a relation for comparing `successive' deficient path forests. \begin{definition} Let $n\geq 2$. Suppose the deficient path forests $T =(l_1,l_2, \ldots, l_n)$ and $T'=(l'_1,l'_2, \ldots, l'_n)$ have orders $m^2$ and $(m+1)^2$, respectively. We write $T\prec T'$ if there exists $1\leq i\leq n$ such that $l'_i= l_i+(2m+1)$ and $l'_j=l_j$ for all $j\neq i$. We say that $T'$ is obtained from $T$ by \emph{$\prec$-extension at the $i$th component}, and we also write $T'\succ T$. \end{definition} Starting from a deficient path forest $T$, there are two possibilities for successive $\prec$-extensions: either \begin{enumerate} \item there exists an infinite chain $T=T_0 \prec T_1 \prec T_2 \prec \dotsb$ starting from $T$; or \item no such infinite $\prec$-chain exists starting from $T$. \end{enumerate} For example, $(5,2,2)\prec (12,2,2) \prec (21,2,2) \prec (32,2,2)\prec \dotsb $ is an infinite chain starting from the deficient path forest $(5,2,2)$, whereas $(17,15,4)\prec (17,15,17)\prec (17,30,17)$, $(17,15,4)\prec (17,28,4)\prec (17,43,4)$ and $(17,15,4)\prec (30,15,4)\prec (30,30,4)$ are chains starting from the deficient path forest $(17,15,4)$ which cannot be extended further. The following notation is convenient for stating and proving our results here. \begin{notation} For $n,L\in\mathbb{N}$, let $\mathcal{H}(n,L)$ be the set of all path forests with $n$ paths, each of which has order at least $L$. The set of all deficient path forests in $\mathcal{H}(n,L)$ is denoted by $\mathcal{H}_{\text{def}}(n,L)$. \end{notation} \begin{theorem}\label{110322c} For each $n\geq 2$, the following are equivalent. \begin{enumerate} \item\label{first} There exists $L\in\mathbb{N}$ such that every $T\in \mathcal{H}(n,L) $ is well-burnable. \item\label{second} There exists $L'\in\mathbb{N}$ such that there are no infinite $\prec$-chains starting from a path forest $T$ in $\mathcal{H}_{\text{def}}(n,L')$. \end{enumerate} \end{theorem} \begin{proof} Fix the number of paths $n$. Assume \eqref{first} holds and let $L$ have the property that every $T\in\mathcal{H}(n,L) $ is well-burnable. Then \eqref{second} follows vacuously, as $\mathcal{H}_{\text{def}}(n,L)$ is empty and so we can pick $L'=L$. Conversely, assume \eqref{second} holds and let $L'$ have the property that there are no infinite $\prec$-chains starting from a path forest in $\mathcal{H}_{\text{def}}(n,L')$. For each $T \in \mathcal{H}_{\text{def}}(n,L') $, we see from K\"{o}nig's tree lemma that the tree rooted at $T$ induced by the relation $\prec$ is finite. Indeed, every vertex of such a rooted tree has at most $n$ children, and there is no infinite path by our assumption. Now, observe that for sufficiently large $m$, the longest path of a path forest of order $m^2$ has order at least $L' + (2m-1)$. Hence, if $T \in \mathcal{H}_{\text{def}}(n,L')$ has large enough order, then there exists $T' \in \mathcal{H}_{\text{def}}(n,L')$ such that $T'\prec T$. (Such a $T'$, obtained by removing $2m-1$ vertices from the longest path of $T$, is indeed deficient: if $T'$ were well-burnable, then so would be $T$, as the first of the $m$ burning sources could burn the $2m-1$ removed vertices.)
This implies that there is some $M\in \mathbb{N}$ such that for every $T \in \mathcal{H}_{\text{def}}(n,L')$ of order $m^2$ where $m\ge M$, there is a decreasing chain $T =T_m \succ T_{m-1}\succ \cdots \succ T_M$ with $T_i \in \mathcal{H}_{\text{def}}(n,L')$ and $|T_i| = i^2$ for every $M\leq i \leq m$; in other words, $T$ is a vertex of the finite tree rooted at $T_M$. Since there are only finitely many $T\in \mathcal{H}_{\text{def}}(n,L')$ of order at most $M^2$, the set $\mathcal{H}_{\text{def}}(n,L')$ is finite. Pick a large enough integer $L$ such that $\mathcal{H}(n, L) \cap \mathcal{H}_{\text{def}}(n,L') =\emptyset $. It follows that every $T\in \mathcal{H}(n, L)$ is well-burnable, completing the proof of the theorem. \end{proof} The above theorem reduces our main result to an equivalent statement that we will actually prove, namely \eqref{second} of Theorem~\ref{110322c}. To show that there are no such infinite $\prec$-chains, we will need a series of technical lemmas, starting with Lemma~\ref{090322a} and Lemma~\ref{080322a}, which essentially imply that a deficient path forest with sufficiently long paths cannot be $\prec$-extended infinitely often at one and the same component. \begin{lemma}\label{090322a} Let $n\in \mathbb{N}$, and suppose $l_1,l_2,\ldots, l_n$ are even numbers, each at least $8n$. Then there is some $m$ such that the path forest $(l_1, l_2, \ldots, l_n, m^2-\sum_{i=1}^n l_i)$ has an optimal burning sequence in which the last $n$ burning sources are not used to burn the first $n$ paths. \end{lemma} \begin{proof} We may assume that $l_1\ge l_2\ge \cdots \ge l_n$. For each $1\le i \le n$, let $t_i$ be the integer such that $l_i = (2t_i-1) + (2(n+i)-1)$. We can take $m=t_1$. It suffices to verify that $m = t_1>t_2>\cdots > t_n>2n$, as this would imply that the first $n$ paths can be burned using $2n$ pairwise distinct burning sources (the $i$th path with the two sources burning $2t_i-1$ and $2(n+i)-1$ vertices, respectively), none of which is among the last $n$ burning sources. Clearly, $t_i>t_{i+1}$ as $l_{i}\ge l_{i+1}$ for every $1\le i\le n-1$, and the inequality $t_n>2n$ follows from the condition $l_n\ge 8n$. \end{proof} \begin{lemma}\label{080322a} Let $n\in \mathbb{N}$. There exists $L\in \mathbb{N}$ such that whenever $l_1,l_2,\ldots, l_n$ are integers, each at least $L$, the path forest $(l_1, l_2, \ldots, l_n, m^2-\sum_{i=1}^n l_i)$ is $m$-burnable for some $m\in\mathbb{N}$. \end{lemma} \begin{proof} For a fixed $n$, we can take $L=10n-1$. Let $l_1,l_2,\ldots,l_n$ be given integers, each at least $10n-1$. Without loss of generality, suppose $l_1, l_2, \ldots, l_k$ are odd while $l_{k+1}, l_{k+2}, \dotsc, l_n$ are even, where $0\leq k \leq n$. Let $l'_i= l_i-(2i-1)$ for $1\le i \le k$ and $l'_i= l_i$ for $k<i\leq n$. Note that $l'_i \geq 8n$ for every $1\leq i\leq n$. By Lemma~\ref{090322a}, for some $m\in \mathbb{N}$, the path forest $(l'_1, l'_2, \dotsc, l'_n, m^2-\sum_{i=1}^n l'_i)$ has an optimal burning sequence in which the last $n$ burning sources are not used to burn the first $n$ paths. Therefore, by reallocating the last $k$ burning sources accordingly, it follows that the path forest $(l_1, l_2, \dotsc, l_n, m^2-\sum_{i=1}^n l_i)$ is $m$-burnable. \end{proof} We remark that with some careful analysis, one can show that for $n=2$, the least $L$ satisfying the property stated in Lemma~\ref{080322a} is $L=8$.
However, as we will see later, Lemma~\ref{080322a} does not give any quantifiable bound for our main result, and so we do not attempt to optimize the bound on $L$ in the lemma. The main technical lemma needed for our main result is Lemma~\ref{070322b}, which allows us to show that if we go far enough along a $\prec$-chain starting from a path forest with $n$ paths, we gain the flexibility to allocate burning sources so that any of the $n$ potential next extensions is well-burnable, thereby terminating the $\prec$-chain. The following lemma is essentially the base case for the induction proof of Lemma~\ref{070322b}. \begin{lemma}\label{070322a} Suppose $\langle a_i\rangle_{i=1}^\infty$ and $\langle b_i\rangle_{i=1}^\infty$ are strictly increasing sequences of odd integers without common terms such that $a_i+2\in \{ b_j \mid j\in \mathbb{N}\}$ for infinitely many $i$. Then for any given integer $x$, there exist indices $N_1$ and $N_2$ such that the set $\{a_i \mid 1\le i\le N_1\}\cup \{b_i \mid 1\le i\le N_2\}$ can be partitioned into two sets $C\cup D$ with the property that $$\summation(C)= \sum_{i=1}^{N_1} a_i +x \quad\mbox{ and }\quad \summation(D)= \sum_{i=1}^{N_2} b_i-x.$$ \end{lemma} \begin{proof} The result is trivial for $x=0$. So we suppose first that $x=2k$ for some positive integer $k$. Let $i_1 < i_2 < \cdots < i_k$ be the least $k$ indices $i$ with $a_{i}+2 \in \{ b_j \mid j\in \mathbb{N}\}$. Let $N_1= i_k$ and let $N_2$ be the index in $\langle b_i\rangle_{i=1}^\infty$ such that $b_{N_2}= a_{i_k}+2$. By taking the partition $\{a_i \mid 1\le i\le N_1\}\cup \{b_i \mid 1\le i\le N_2\} = C\cup D$, where \begin{align*} C &= \left(\{ a_i \mid 1\le i\le N_1 \}\backslash \{ a_{i_j}\mid 1\leq j\leq k \} \right) \cup \{ a_{i_j}+2\mid 1\le j\le k \} \text{ and } \\ D &=\left(\{ b_i \mid 1\le i\le N_2 \}\backslash \{ a_{i_j}+2\mid 1\leq j\leq k \} \right)\cup \{ a_{i_j}\mid 1\le j\le k \}, \end{align*} it is easy to see that $C$ and $D$ have the required property. Now, suppose $x$ is odd or negative. Choose $l$ such that $x + \sum_{i=1}^l a_i$ is even and nonnegative. (Clearly, such an $l$ exists.) Applying what we have proved above to the sequences $\langle a_i\rangle_{i=l+1}^\infty$ and $\langle b_i\rangle_{i=1}^\infty$, it follows that for some $N_1$ and $N_2$, the set $\{ a_i \mid l+1\leq i\leq N_1 \}\cup \{ b_i \mid 1\leq i\leq N_2\}$ can be partitioned into $C'\cup D'$ with $\summation(C')= \sum_{i=l+1}^{N_1} a_i + (x+ \sum_{i=1}^l a_i) $ and $\summation(D')= \sum_{i=1}^{N_2} b_i- (x+\sum_{i=1}^l a_i)$. It is now straightforward to see that the sets $C := C'$ and $D := D' \cup \{a_i \mid 1\le i\le l\}$ have the required property. \end{proof} \begin{lemma}\label{070322b} Let $n\geq 2$. Suppose $\langle a_{1,i}\rangle_{i=1}^\infty, \langle a_{2,i}\rangle_{i=1}^\infty, \ldots, \langle a_{n,i}\rangle_{i=1}^\infty$ are $n$ strictly increasing sequences of odd integers without common terms such that their union $\bigcup_{j=1}^n\{a_{j,i} \mid i\in \mathbb{N}\}$ is the set of all odd integers at least $\min\{a_{1,1}, a_{2,1}, \ldots, a_{n,1}\}$.
Then for any $n$ integers $x_1, x_2,\ldots, x_{n}$ such that $\sum_{j=1}^n x_j=0$, there exist indices $N_1, N_2, \ldots, N_n$ such that the set $\bigcup_{j=1}^n \{ a_{j,i} \mid 1\le i\le N_j \}$ can be partitioned into $C_1\cup C_2\cup \cdots \cup C_n$ with the property that for every $1\le j\le n$, $$\summation(C_j) = \sum_{i=1}^{N_j} a_{j,i} +x_j.$$ \end{lemma} \begin{proof} We proceed by induction on $n$. The case $n=2$ follows from Lemma~\ref{070322a}, as it can be deduced in this case that $a_{1,i}+2\in \{ a_{2,i'} \mid i'\in \mathbb{N}\}$ for infinitely many $i$. For the induction step, suppose $\langle a_{1,i}\rangle_{i=1}^\infty, \langle a_{2,i}\rangle_{i=1}^\infty, \ldots, \langle a_{n+1,i}\rangle_{i=1}^\infty$ are $n+1$ strictly increasing sequences of odd integers without common terms such that the union of their terms $\bigcup_{j=1}^{n+1}\{ a_{j,i} \mid i\in \mathbb{N} \}$ is the set of all odd integers at least $\min\{a_{1,1}, a_{2,1}, \ldots, a_{n+1,1}\}$. We first note that there are infinitely many elements $z$ in $\bigcup_{j=1}^n \{ a_{j,i} \mid i\in \mathbb{N} \}$ such that $z+2\in \{ a_{n+1,i} \mid i\in \mathbb{N} \} $, for otherwise $\{ a_{n+1,i} \mid i\in \mathbb{N} \}$ would be either finite or cofinite (in the set of all odd integers under consideration), which is impossible. By the infinite pigeonhole principle, we may suppose that $a_{n,i}+2\in \{ a_{n+1,i'} \mid i'\in \mathbb{N}\}$ for infinitely many $i$. Let integers $x_1, x_2,\ldots, x_{n+1}$ with $\sum_{j=1}^{n+1} x_j=0$ be given. We merge $\langle a_{n,i}\rangle_{i=1}^\infty $ and $\langle a_{n+1,i}\rangle_{i=1}^\infty$ into one strictly increasing sequence $\langle b_{i}\rangle_{i=1}^\infty$. Consider the $n$ sequences $\langle a_{1,i}\rangle_{i=1}^\infty , \langle a_{2,i}\rangle_{i=1}^\infty, \ldots, \langle a_{n-1,i}\rangle_{i=1}^\infty, \langle b_i\rangle_{i=1}^\infty$ and the $n$ integers $x_1, x_2,\ldots, x_{n-1}, x_n+ x_{n+1}$. By the induction hypothesis, for some indices $N_1, N_2, \ldots, N_n$, there exists a partition of $$\bigcup_{j=1}^{n-1} \{ a_{j,i} \mid 1\le i\le N_j \} \cup \{b_i\mid 1\le i\le N_n\} = C_1\cup C_2\cup \cdots \cup C_n$$ with the property that for each $1\le j\le n-1$, $$\summation(C_j)= \sum_{i=1}^{N_j} a_{j,i} + x_j, \quad\text{and}\quad \summation(C_n)= \sum_{i=1}^{N_n} b_{i} + x_n+x_{n+1}.$$ By construction, we have $\{b_i\mid 1\le i\le N_n\}= \{ a_{n,i}\mid 1\le i\le M_1\}\cup \{ a_{n+1,i}\mid 1\le i\le M_2\}$ for some $M_1$ and $M_2$. Now, consider the two increasing sequences $\langle a_{n,i}\rangle_{i=M_1+1}^\infty$ and $\langle a_{n+1,i}\rangle_{i=M_2+1}^\infty$ and set $x= -\sum_{i=1}^{M_2} a_{n+1,i} -x_{n+1}$. By Lemma~\ref{070322a}, for some indices $N'_n$ and $N'_{n+1}$, there exists a partition of $$\{a_{n,i} \mid M_1+1\le i\le N'_n\}\cup \{a_{n+1,i} \mid M_2+1\le i\le N'_{n+1}\} = D_1\cup D_2$$ with the property that $$\summation(D_1)= \sum_{i=M_1+1}^{N'_n} a_{n,i} +x \quad \text{and}\quad \summation(D_2)= \sum_{i=M_2+1}^{N'_{n+1}} a_{n+1,i}-x.$$ Finally, we let $C'_n= C_n\cup D_1$ and $C'_{n+1}= D_2$, so that $\summation(C'_n)= \sum_{i=1}^{N'_{n}} a_{n,i} +x_{n}$ and $\summation(C'_{n+1})= \sum_{i=1}^{N'_{n+1}} a_{n+1,i} +x_{n+1}$.
Observing that $\{b_i\mid 1\le i\le N_n\}\cup D_1\cup D_2$ is the union of the first $N'_n$ terms of $\langle a_{n,i}\rangle_{i=1}^\infty $ and the first $N'_{n+1}$ terms of $\langle a_{n+1,i}\rangle_{i=1}^\infty $, we conclude that $C_1\cup C_2\cup\cdots \cup C_{n-1}\cup C'_n\cup C'_{n+1}$ is a partition of $$\bigcup_{j=1}^{n-1} \{ a_{j,i} \mid 1\le i\le N_j \} \cup \{ a_{n,i}\mid 1\le i\le N'_n \}\cup \{ a_{n+1,i}\mid 1\leq i\leq N'_{n+1}\}$$ with the desired property. This completes the induction step and therefore the proof of the lemma. \end{proof} We are finally ready to prove Theorem~\ref{long-path-forests}, restated using the assumptions and notation of this section as follows. \begin{theorem}\label{030922c} Let $n\ge 2$. There exists $L\in \mathbb{N}$ such that every path forest in $\mathcal{H}(n,L)$ is well-burnable. \end{theorem} \begin{proof} Fix the number of paths $n$. Let $L$ be an integer satisfying the property stated in Lemma~\ref{080322a} for $n-1$. By Theorem~\ref{110322c}, it suffices to show that there are no infinite $\prec$-chains starting from a path forest in $\mathcal{H}_{\text{def}}(n,L)$. Assume to the contrary that $T=T_0 \prec T_1 \prec T_2 \prec \dotsb$ is an infinite chain starting from some path forest $T$ in $\mathcal{H}_{\text{def}}(n,L)$. Along the chain, $\prec$-extensions occur infinitely often at some of the components. By starting at some path forest further along the chain if necessary, we may assume without loss of generality that in this infinite chain, \mbox{$\prec$-extensions} occur infinitely often at each of the first $k$ components, while no \mbox{$\prec$-extension} occurs at any of the last $n-k$ components. We say that the first $k$ components are \emph{nonstationary} and the rest are \emph{stationary} components. Suppose $T=(l_1, l_2, \ldots, l_n)$. By the choice of $L$, we can choose an $m\in\mathbb{N}$ so that the path forest $(m^2-\sum_{j=2}^{n} l_j, l_{2}, l_{3}, \ldots, l_{n})$ is $m$-burnable. Moreover, we may take this $m$ to be as large as we please: if the displayed path forest is $m$-burnable, then the corresponding path forest for $m+1$ is $(m+1)$-burnable, since the new first burning source can burn the additional $2m+1$ vertices of its first path. In particular, we may assume that the deficient forest $T$ has order less than $m^2$. Consider the path forest $T'= (l'_1, l'_2, \ldots, l'_n)$ of order $m^2$ along the infinite chain. Since the last $n-k$ components are stationary, we have $(l'_{k+1}, l'_{k+2}, \ldots, l'_n)= (l_{k+1}, l_{k+2}, \ldots, l_n)$. If there is only one nonstationary component, then it follows that $l'_1= m^2-\sum_{j=2}^{n} l_j$, and thus $T'$ is $m$-burnable, which gives a contradiction. Hence, there are at least two nonstationary components. Starting from $T'$, the nonstationary components induce $k$ strictly increasing sequences of odd integers without common terms such that the union of all their terms is the set of all odd integers at least $2m+1$. Let $\langle a_{j,i}\rangle_{i=1}^\infty$ be the sequence induced by the $j$th component for $1\leq j\leq k$. Set $x_j= l'_j$ for each $1\le j\le k-1$ and $x_k= l'_k - \left(m^2-\sum_{j=k+1}^n l'_j\right)$ so that $\sum_{j=1}^k x_j = 0$. By Lemma~\ref{070322b}, for some indices $N_1, N_2, \ldots, N_k$, the set $\bigcup_{j=1}^k \{ a_{j,i} \mid 1\le i\le N_j \}$ has a partition $C_1\cup C_2\cup \cdots \cup C_k$ with the property that $\summation(C_j)= \sum_{i=1}^{N_j} a_{j,i} +x_j$ for every $1\le j\le k$. Furthermore, we can choose $N_1, N_2, \dotsc, N_k$ appropriately so that $\bigcup_{j=1}^k \{ a_{j,i} \mid 1\leq i\leq N_j \}$ is a set consisting of consecutive odd integers, say up to $2M-1$ for some $M\in\mathbb{N}$.
Now we consider the larger path forest $T''= (l''_1, l''_2, \ldots, l''_n)$ of order $M^2$ further along the infinite chain. We claim that $T''$ is $M$-burnable, which gives the contradiction we need to complete the proof of the theorem. To see this, observe first that by the choice of $m$ and the fact that $(l''_{k+1}, l''_{k+2}, \ldots, l''_n)= (l_{k+1}, l_{k+2}, \ldots, l_n)$, we can place the last $m$ burning sources at the last $n-k+1$ paths of $T''$ in such a way that the last $n-k$ paths are completely burned at the end of the process, while exactly $m^2-\sum_{j=k+1}^n l_j$ vertices of the $k$th path are burned. (We can always choose $N_k$ large enough so that the $k$th path of $T''$ has more than $m^2-\sum_{j=k+1}^n l_j$ vertices.) Note that by the construction of our sequences, $l''_j=\sum_{i=1}^{N_j} a_{j,i}+l'_j$ for every $1\leq j \leq k$. So for each $1\le j\le k-1$, we have $l''_j= \summation(C_j)$, and thus the $j$th path of $T''$ can be burned using the burning sources corresponding to the odd integers in $C_j$. Finally, the remaining $l''_k - (m^2-\sum_{j=k+1}^n l_j) = \summation(C_k)$ unburned vertices in the $k$th path of $T''$ can be burned using the burning sources corresponding to the odd integers in $C_k$. Therefore, $T''$ is $M$-burnable as claimed. \end{proof} \section{Remarks}\label{conclusion} We have shown in Theorem~\ref{double_m>n} and Theorem~\ref{double_m<n} that the burning number conjecture holds for double spiders. Moreover, double spiders satisfy the stronger Conjecture~\ref{Tree Conjecture}. While it will be interesting to see how our work on spiders and double spiders can help in making progress towards the burning number conjecture, we believe the immediate next step is to verify Conjecture~\ref{Tree Conjecture} for the larger class of trees with at most two vertices of degree greater than two. This family of trees includes paths, spiders, and double spiders, but we have yet to consider the most general such trees: the union of two spiders, together with a path connecting their respective maximum-degree vertices. \begin{question} Suppose $m>n$. Consider a tree $T$ of order $m^2+n-2$ with $n$ leaves such that $T$ has exactly two vertices of degree at least three. Must $T$ be $m$-burnable? \end{question} On path forests, our main result in this work shows that every path forest $T$ with a sufficiently long shortest path is well-burnable. In view of this, we introduce the following definition to study bounds on $L$ in Theorem~\ref{long-path-forests}. \begin{definition} For $n\geq 2$, define $L_n$ to be the least integer with the following property: if $T$ is a path forest with $n$ paths such that its shortest path has order at least $L_n$, then $T$ is well-burnable. \end{definition} We know that $L_2 = 3$. With careful analysis and a little help from a computer, we are also able to determine that $L_3 = 18$ and $L_4 = 26$. Unfortunately, we do not see how these analyses and arguments can be generalized. \begin{question} What are the values of $L_n$ for $n\ge 5$? \end{question} \section{Acknowledgment} The second author acknowledges the support of this research by the Research University Grant No.~1001/PMATHS/8011129 of Universiti Sains Malaysia.
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{document} \title{The Eigencurve is Proper at Integral Weights} \author{Frank Calegari\footnote{Supported in part by the American Institute of Mathematics.}} \maketitle \section{Introduction} The eigencurve $\mathcal{E}$ is a rigid analytic space parameterizing overconvergent and therefore classical modular eigenforms of finite slope. Since Coleman and Mazur's original work~\cite{eigencurve}, there have been numerous generalizations~\cite{eigenvarieties,chenevier,urban}, as well as alternative constructions using modular symbols~\cite{ashstevens} and $p$-adic representation theory~\cite{emerton}. In spite of these advances, several elementary questions about the geometry of $\mathcal{E}$ remain. One such question was raised by Coleman and Mazur: does there exist a $p$-adic family of finite slope overconvergent eigenforms over a punctured disk that converges, at the puncture, to an overconvergent eigenform of infinite slope? Another way of phrasing this question is to ask whether the projection $\pi: \mathcal{E} \rightarrow \mathcal W$ satisfies the valuative criterion for properness\footnote{The curve $\mathcal{E}$ has infinite degree over weight space $\mathcal W$, and so the projection $\pi:\mathcal{E} \rightarrow \mathcal W$ cannot technically be proper.}. In~\cite{buzcal}, this was answered in the affirmative for the particular case of tame level $N = 1$ and $p = 2$. The proof, however, was quite explicit and required (at least indirectly) that the curve $X_0(Np)$ have genus zero. In this paper, we work with general $p$ and arbitrary tame level, although our result only applies at certain arithmetic weights in the center of weight space. Recall that the $\mathbf{C}_p$-points of $\mathcal W$ are the continuous homomorphisms from the Iwasawa algebra $\displaystyle{\Lambda:=\mathbf{Z}_p[[\lim_{\leftarrow} (\mathbf{Z}/Np^k\mathbf{Z})^{\times}]]}$ to $\mathbf{C}_p$. Let $\chi$ denote the cyclotomic character. Our main theorem is: \begin{theorem} Let $\mathcal{E}$ be the $p$-adic eigencurve of tame level $N$. Let $D$ denote the closed unit disk, and let $D^{\times}$ denote $D$ with the origin removed. Let $h:D^{\times} \rightarrow \mathcal{E}$ be a morphism such that $\pi \circ h$ extends to $D$.
Suppose, moreover, that $(\pi \circ h)(0) = \kappa$, where $\kappa$ is of the form $$\kappa = \chi^k \cdot \psi,$$ for $k \in \mathbf{Z}$ and $\psi$ a finite order character of conductor dividing $N$. Then there exists a map $\widetilde{h}:D \rightarrow \mathcal{E}$ making the following diagram commute: $$ \xymatrix@=4em{ D^{\times} \ar[r]^{h} \ar[d] & \mathcal{E} \ar[d]^{\pi} \\ D \ar[r] \ar[ru]_{\widetilde{h}} & \mathcal W \\} $$ \label{theorem:main} \end{theorem} The new idea of this paper is, roughly speaking, to specialize the family $h$ of overconvergent modular forms to an infinitesimal neighbourhood of the punctured point. Using the techniques of~\cite{buzcal}, we conclude that the limiting form ``$h(0)$'' will be an overconvergent modular form $G_0$, and thus it suffices to prove that the $U_p$ eigenvalue of this form is not $0$. However, if $U_p G_0 = 0$, the infinitesimal deformations of $G_0$ will have nilpotent but nonzero $U_p$ eigenvalue. We are able to deduce a contradiction by combining this idea with the philosophy of~\cite{buzcal} that finite slope forms have large radius of convergence while infinite slope forms have small radius of convergence. This idea was inspired by work of Bella\"{\i}che and Chenevier~\cite{frenchies}, who study deformations of trianguline $\mathrm{Gal}(\overline{\Q}_p/\mathbf{Q}_p)$-representations by considering deformations of finite dimensional $(\varphi,\Gamma)$-modules over Artinian extensions of $\mathbf{Q}_p$. Their goal was to study the tangent spaces of eigencurves using local techniques from $p$-adic Hodge theory. It is plausible that the properness of the eigencurve is a global manifestation of a purely local theorem; such an idea was suggested to the author --- at least at integral weights --- by Mark Kisin in 2001 and was discussed in several tea room conversations during the Durham symposium on Galois representations in 2004 and the eigenvarieties semester at Harvard in 2006. However, even with current advances in the technology of local Galois representations, a natural conjectural statement implying properness has not yet been formulated. One issue to bear in mind is that slightly stronger statements that one might conjecture are false. For example, there exists a sequence of finite slope forms converging pointwise to an infinite slope form~\cite{wascol}. \medskip It is a pleasure to thank Kevin Buzzard for many fruitful discussions; the debt this paper owes to~\cite{buzcal} is clear. I would also like to thank Matthew Emerton, Toby Gee, and Mark Kisin for useful conversations. \section{Overconvergent Modular Forms} \label{section:over} Let $N \ge 5$ be an integer co-prime to $p$; let $X = X_1(N)$; and let $X_0(p) = X(\Gamma_1(N) \cap \Gamma_0(p))$. Since $N \ge 5$, the curves $X$ and $X_0(p)$ are the compactifications of smooth moduli spaces. The curve $X$ comes equipped with a natural sheaf $\omega$, which, away from the cusps, is the pushforward of the sheaf of relative differentials on the universal elliptic curve. Let $A$ be a characteristic zero lift of the Hasse invariant with coefficients in $W(\overline{\F}_p)[[q]]$, and thus $A \in H^0(X/W(\overline{\F}_p),\omega^{\otimes (p-1)})$ by the $q$-expansion principle. We further insist that $A$ has trivial character if $p > 2$, and that $A^2$ has trivial character if $p = 2$; this is possible since $N > 1$. Let $X_0(p,r) \subseteq X^{\mathrm{an}}_0(p)$ denote the connected component containing $\infty$ of the affinoid $\{x \in X^{\mathrm{an}}_0(p); \ |A(x)| \ge |r|\}$.
Standard arguments imply that $|A(x)|$ on $X_0(p,r)$ is independent of the choice of $A$, provided that $v(r) < p/(p+1)$. \medskip Let $r \in \mathbf{C}_p$ be an element with $p/(p+1) > v(r) > 0$. Let $\chi$ denote the cyclotomic character; let $\psi$ denote a finite order character of conductor dividing $N$; and let $k \in \mathbf{Z}$. \begin{df} The overconvergent modular forms of weight $\chi^k \cdot \psi$, level $N$, and radius of convergence $r$ are sections of $H^0(X_0(p,r),\omega^{\otimes k})$ on which the diamond operators act via $\psi$. We denote this space by $M(\mathbf{C}_p,N,\chi^k \cdot \psi;r)$. The space of overconvergent modular forms of weight $\chi^k \cdot \psi$ and level $N$ is $$M(\mathbf{C}_p,N,\chi^k \cdot \psi) := \bigcup_{|r|<1} M(\mathbf{C}_p,N,\chi^k \cdot \psi;r).$$ \end{df} The space $M(\mathbf{C}_p,N,\chi^k \cdot \psi;r)$ has a natural Banach space structure. If $\chi^k = 1$, the norm $\|\cdot\|$ is the supremum norm. \medskip Let $\kappa \in \mathcal W(\mathbf{C}_p)$ denote a point in weight space. Recall that the Eisenstein series $E(\kappa)$ is defined away from the zeroes of the Kubota--Leopoldt zeta function by the following formulas: $$E(\kappa) = 1 + \frac{2}{\zeta(\kappa)} \sum_{n=1}^{\infty} \sigma^{*}_{\kappa}(n) q^n, \qquad \sigma^*_{\kappa}(n) = \sum_{\substack{d \mid n \\ (d,p)=1}} \kappa(d) d^{-1}.$$ The coefficients of $E(\kappa)$ are rigid analytic functions of $\kappa$ on $\mathcal W$. If $\kappa$ is trivial on the roots of unity in $\mathbf{Q}_p$, then, as a $q$-expansion, $E(\kappa)$ is congruent to $1$ modulo the maximal ideal of $\overline{\Z}_p$. Coleman's idea is to define overconvergent forms of weight $\kappa$ using the formal $q$-expansion $E(\kappa)$. Before we recall the definition, we also recall some elementary constructions related to weight space. If $$\mathbf{Z}_{p,N}:= \lim_{\leftarrow} (\mathbf{Z}/Np^k\mathbf{Z})^{\times},$$ then there is a natural isomorphism $\mathbf{Z}_{p,N} \simeq (\mathbf{Z}/N \mathbf{q} \mathbf{Z})^{\times} \times (1 + \mathbf{q} \mathbf{Z}_p)$, where $\mathbf{q} = p$ if $p$ is odd, and $\mathbf{q} = 4$ otherwise. If $a \in \mathbf{Z}_{p,N}$, then $\langle \kern-0.05em{\langle} a \rangle \kern-0.05em{\rangle}$ denotes the projection of $a$ onto the second factor, and $\tau(a) = a/\langle \kern-0.05em{\langle} a \rangle \kern-0.05em{\rangle}$ the projection onto the first. The rigid analytic space $\mathcal W$ has a natural group structure. Denote the connected component of the identity in $\mathcal W$ by $\mathcal{B}$; the component group of $\mathcal W$ is $(\mathbf{Z}/N \mathbf{q} \mathbf{Z})^{\times}$. If $\kappa \in \mathcal W(\mathbf{C}_p)$, then let $\langle \kappa \rangle$ denote the weight $a \mapsto \kappa(\langle \kern-0.05em{\langle} a \rangle \kern-0.05em{\rangle})$ and $\tau(\kappa)$ the weight $a \mapsto \kappa(\tau(a))$; here $\langle \kappa \rangle$ is the natural projection of $\kappa$ onto $\mathcal{B}$. If $\chi$ denotes the cyclotomic character, then for any character $\psi$ of $(\mathbf{Z}/\mathbf{q} N \mathbf{Z})^{\times}$, there is a unique congruence class modulo $p-1$ (or modulo $2$ if $p = 2$) such that for any $k \in \mathbf{Z}$ in this congruence class, $\tau(\psi \cdot \chi^{-k})$ has conductor dividing $N$. We fix once and for all a choice of representative $k \in \mathbf{Z}$ for this congruence class.
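To make the divisor sums above concrete: for the weights $\kappa = \chi^k$ relevant to Theorem~\ref{theorem:main}, one has $\kappa(d) d^{-1} = d^{k-1}$ for $d$ prime to $p$, and $\sigma^*_{\kappa}(n)$ can be computed directly. The following Python sketch is ours and purely illustrative; it takes $\kappa$ as a callable on positive integers prime to $p$ (a simplification of a character of $\mathbf{Z}_{p,N}$), and it leaves aside the normalizing factor $2/\zeta(\kappa)$, which involves the Kubota--Leopoldt zeta value.
\begin{verbatim}
from fractions import Fraction

def sigma_star(n, p, kappa):
    """sigma*_kappa(n): sum of kappa(d)/d over divisors d of n
    with d prime to p, as in the q-expansion of E(kappa) above."""
    return sum(Fraction(kappa(d), d)
               for d in range(1, n + 1)
               if n % d == 0 and d % p != 0)

# Weight kappa = chi^k: kappa(d) = d^k, so kappa(d)/d = d^(k-1).
p, k = 5, 4
kappa = lambda d: d ** k
coeffs = [sigma_star(n, p, kappa) for n in range(1, 8)]
# e.g. sigma*_kappa(1) = 1, and sigma*_kappa(5) = 1, since the
# divisor d = 5 = p is excluded from the divisor sum.
\end{verbatim}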
\medskip We now recall the definition of overconvergent modular forms of weight $\kappa$: \begin{df} Overconvergent modular forms of weight $\kappa$ and tame level $N$ are $q$-expansions of the form $V E_{\langle \kappa \cdot \chi^{-k} \rangle} \cdot F$, where $F \in M(\mathbf{C}_p,N, \chi^k \cdot \tau(\kappa \cdot \chi^{-k}))$. \end{df} Note that this is not the exact definition that occurs in \S 2.4 of~\cite{eigencurve}, since we have chosen to work with $\Gamma_0(p)$ structure rather than $\Gamma_1(p)$ structure. Yet both definitions are easily seen to be equivalent, using, for example, Theorem 2.2.2 of \emph{ibid}. We do not define the radius of convergence of an overconvergent form of general weight. \section{Hasse Invariants} In this section, we prove some estimates for the convergence of certain overconvergent modular forms related to Hasse invariants. As in Section~\ref{section:over}, let $A$ be a characteristic zero lift of the Hasse invariant with coefficients in $W(\overline{\F}_p)[[q]]$. \begin{lemma} Let $v(r) < 1/(p+1)$, and let $x$ be a point on $X_0(p,r)$. Then $$\frac{A(x)}{V\kern-0.15em{A}(x)} \equiv 1 \kern-0.5em{\mod \frac{p}{A(x)^{p+1}}}.$$ \label{lemma:one} \end{lemma} \begin{proof} The weight of $A$ is $p -1$. Let $E$ be the elliptic curve associated to $x$, and $H$ the canonical subgroup. Let $\omega_E$ be a N\'{e}ron differential of $E$, and let $a = A(E,\omega_E)$ (we implicitly trivialize $H^0(E,\Omega^1)$). By Theorem 3.1 of Katz (\cite{katz}, p.113), we deduce that $E/H$ is isomorphic to $E^{(p)}$ modulo $p/a$, where $E^{(p)}$ is the image of $E$ under Frobenius. Hence, $$A(E/H,\omega_{E/H}) \equiv a^{p} \kern-0.5em{\mod p/a},$$ where $\omega_{E/H}$ is any differential that can be identified with the inverse image of $\omega_E$ under Frobenius modulo $p/a$. By definition, $$V\kern-0.15em{A}(E,\omega_E) = p^{1-p} \cdot A(E/H,\pi^*\omega_E) = p^{1-p} \cdot \lambda^{1-p} \cdot A(E/H,\omega_{E/H}),$$ where $\pi^{*} \omega_E = \lambda \cdot \omega_{E/H}$. Remark 3.6.5.0 of Katz (\cite{katz}, p.116) identifies $\lambda$ with $$a_{p-1}/p \equiv A(E,\omega_E)/p \kern-0.5em{\mod 1} \equiv a/p \kern-0.5em{\mod 1}.$$ The factor of $p$ comes from the identity $dx^p/x^p = p(dx/x)$. It follows directly that $ V\kern-0.15em{A}(E,\omega_E) \equiv a \kern-0.5em{\mod p/a^p}$, and the lemma follows after dividing by $a$. \end{proof} \medskip \begin{cor} Suppose that $v(r) < 1/(p+1)$. Then $\log(A/V\kern-0.15em{A}) \in M(\mathbf{C}_p,N,1,r)$. If $s \in \mathbf{C}_p$ is sufficiently small, then $(A/V\kern-0.15em{A})^s \in M(\mathbf{C}_p,N,1,r)$. \label{cor:corny} \end{cor} \begin{proof} From Lemma~\ref{lemma:one}, we deduce that $A/V\kern-0.15em{A} - 1$ has norm $< 1$ on $X_0(p,r)$, which implies the first claim. Moreover, $\|s \cdot \log(A/V\kern-0.15em{A})\| \ll 1$ for sufficiently small $s$, and hence, if $s$ is sufficiently small, $$(A/V\kern-0.15em{A})^s = \exp\left(s \cdot \log\left(A/V\kern-0.15em{A}\right)\right)$$ is well-defined and lies in $M(\mathbf{C}_p,N,1,r)$.
\end{proof} \section{Families of Eigenforms} Let $h: D^{\times} \rightarrow \mathcal{E}$ denote an analytic family of overconvergent modular eigenforms of finite slope such that $\pi \circ h$ extends to $D$, and suppose that $(\pi \circ h)(0) =\kappa$, where $\kappa$ is of the form $\kappa = \chi^k \cdot \psi$ with $k \in \mathbf{Z}$ and $\psi$ a finite order character of conductor dividing $N$. We assume that the image of $h$ lies in the cuspidal locus, since the Eisenstein locus is easily seen to be proper (cf.~\cite{buzcal}, Theorem 8.2). Any weight in $\mathcal W(\mathbf{C}_p)$ sufficiently close to $\kappa$ is of the form $\kappa \cdot \eta(s)$ with $\eta(s) \in \mathcal{B}^{*}$, where $$\mathcal{B}^{*}:= \left\{ \eta(s): a \mapsto \langle \kern-0.05em{\langle} a \rangle \kern-0.05em{\rangle}^{s(p-1)} \ | \ s \in \mathbf{C}_p, v(s) > -1 + \frac{1}{p-1} \right\}$$ (the inequality should be $v(s) > - 1$ when $p = 2$). Our definition of $\mathcal{B}^*$ is normalized slightly differently from~\cite{eigencurve}~p.28, as we have included an extra factor of $p-1$ in the exponent. After shrinking $D$, if necessary, we may assume that $(\pi \circ h)(D^{\times}) \subset \kappa \cdot \mathcal{B}^{*}$. Given $t \in D^{\times}$, we may consider $h(t)$ to be a normalized eigenform in $M(\mathbf{C}_p,N,\kappa \cdot \eta(s(t)))$, for some $\eta(s(t)) \in \mathcal{B}^*(\mathbf{C}_p)$ and analytic function $s(t)$. By assumption, $U h(t) = \lambda(t) h(t)$ for some analytic function $\lambda(t)$ which does not vanish on $D^{\times}$. By considering $q$-expansions, we deduce that $h(0)$ exists as a $p$-adic modular form in the sense of Katz~\cite{katz} (for a more detailed proof, see~\cite{buzcal}, p.229). The modular form $A$ has weight $\chi^{p-1} = \eta(1)$ if $p > 2$, and $A^2$ has weight $\chi^2 = \eta(2)$ if $p = 2$. Thus (shrinking $D$ again if necessary), we may construct a map $$g: D^{\times} \rightarrow M(\mathbf{C}_p,N,\kappa)$$ via the formula $g(t) = h(t)/V\kern-0.15em{A}^{s(t)}$. This map is well-defined as an easy consequence of Corollary B4.2.5 of~\cite{coleman}, namely that $E_s/A^s$ is overconvergent of weight zero, where $E_s$ is the Eisenstein series of weight $\eta(s)$. \begin{lemma} Suppose that $v(r) < 1/(p+1)$. After shrinking $D$, if necessary, the image of $g$ lands in $M(\mathbf{C}_p,N,\kappa,r)$. \label{lemma:extend} \end{lemma} \begin{proof} By construction, $g(t)$ lies in $M(\mathbf{C}_p,N,\kappa,\mu)$ for some $\mu$ with $v(\mu) > 0$. Since $\kappa$ is of the form $\chi^k \cdot \psi$, we may therefore realize $g(t)$ as a section of $H^0(X_0(p,\mu),\omega^{\otimes k})$. Here we use the fact that $\psi$ has conductor co-prime to $p$. Consider the operator $U_t = (A/V\kern-0.15em{A})^{s(t)} U$, where $U$ is the usual operator on overconvergent modular forms~\cite{coleman,coleman1}. If $s(t)$ is sufficiently small, then by Corollary~\ref{cor:corny}, the factor $(A/V\kern-0.15em{A})^{s(t)}$ lies in $M(\mathbf{C}_p,N,1,r)$. On the other hand, $$U_t(g(t)) = (A/V\kern-0.15em{A})^{s(t)} U\left(h(t)/V\kern-0.15em{A}^{s(t)}\right) = (A/V\kern-0.15em{A})^{s(t)} \left(\lambda(t) h(t)/A^{s(t)}\right) = \lambda(t) g(t).$$ If $v(\mu) < v(r)$, then $U$ maps $M(\mathbf{C}_p,N,\kappa,\mu)$ to $M(\mathbf{C}_p,N,\kappa,\mu^{p})$. Thus, since $\lambda(t) \ne 0$ for $t \in D^{\times}$, we deduce from the equality $g(t) = \lambda(t)^{-1} U_t(g(t))$ that if $g(t)$ lies in $M(\mathbf{C}_p,N,\kappa,\mu)$, then $g(t)$ lies in $M(\mathbf{C}_p,N,\kappa,\max\{\mu^p,r\})$. Thus, by induction, $g(t)$ lies in $M(\mathbf{C}_p,N,\kappa,r)$.
\end{proof} \medskip Let $Y$ be a connected affinoid variety, and let $V$ be a non-empty admissible open affinoid subdomain of $Y$. Let $B = \mathrm{Spf}(\mathbf{C}_p\langle T\rangle)$, and $A = \mathrm{Spf}(\mathbf{C}_p\langle T,T^{-1} \rangle)$. Let $\sF$ denote a sheaf on $Y$ such that $\sF(Y) \rightarrow \sF(V)$ is an inclusion. The following is an immediate generalization of~\cite{buzcal}, Lemma 8.1. \begin{lemma} Let $\mathcal{G}$ be the pullback of $\sF$ to $Y \times B$. If $g$ is a section of $\mathcal{G}(V \times B)$ that extends to a section of $\mathcal{G}(Y \times A)$, then $g$ extends to $\mathcal{G}(Y \times B)$. \label{lemma:buzz} \end{lemma} \begin{proof} By assumption, $g \in \mathcal{G}(V \times B) = \sF(V)\langle T \rangle$ and $g \in \mathcal{G}(Y \times A) = \sF(Y) \langle T,T^{-1} \rangle$. Since $\sF(Y) \subset \sF(V)$, the intersection of these two modules inside $\mathcal{G}(V \times A) = \sF(V)\langle T,T^{-1} \rangle$ is $\sF(Y)\langle T \rangle = \mathcal{G}(Y \times B)$. \end{proof} \medskip As remarked above, the $q$-expansion $g(0) = h(0)$ is a Katz $p$-adic modular form of weight $\kappa$. Let $Y = X_0(p,r)$; let $V = Y^{\mathrm{ord}}$ be the ordinary locus of $Y$; and let $\sF = \omega^{\otimes k}$. Since $B(\mathbf{C}_p) = D$, the map $g$ extends to a morphism $B \rightarrow \sF(V)$. On the ``boundary'' $A$ of $B$ (or on any annulus contained in $B$ and not containing zero), $g$ extends to a morphism $A \rightarrow \sF(Y)$. Since morphisms from $B$ to $\sF(Y)$ may be identified with $\mathcal{G}(Y \times B)$, we deduce from Lemma~\ref{lemma:buzz} that $g$ extends to a morphism $B \rightarrow \sF(Y) = M(\mathbf{C}_p,N,\kappa,r)$. Thus, to complete the proof of Theorem~\ref{theorem:main}, it suffices to prove that $g(0)$ has finite slope or, equivalently, that $\lambda(0) \ne 0$. Hence, suppose for contradiction that $\lambda(0) = 0$. Since $\lambda$ does not vanish on $D^{\times}$, it is not identically zero, and thus $$\lambda(T) = \lambda_m T^m + \ldots,$$ for some $m \in \mathbf{N}_{>0}$ such that $\lambda_{m} \ne 0$. There is, moreover, an identity $$U_{T}(g(T)) = \exp\left(s(T) \cdot \log\left(\frac{A}{V\kern-0.15em{A}}\right) \right) U(g(T)) = \lambda(T) g(T).$$ We now specialize this identity to $\mathbf{C}_p[\epsilon]/\epsilon^{m+1}$ via the map $T \mapsto \epsilon$. This specialization is not strictly necessary, as one could simply work with the first $m$ coefficients of the Taylor expansion of $g(T)$. We persist, however, for psychological reasons, in order to view $g(\epsilon)$ as associated to a form with weight in some infinitesimal neighbourhood of $\kappa$. Suppose that $\displaystyle{g(\epsilon) = \sum_{i=0}^{m} G_i \cdot \epsilon^i}$. Then, since $s(0) = 0$, it follows that $s(\epsilon) \equiv 0 \mod \epsilon$, and thus, $$U g(\epsilon) = \exp\left(-s(\epsilon) \cdot \log\left(\frac{A}{V\kern-0.15em{A}}\right) \right) \cdot \lambda(\epsilon) \cdot g(\epsilon) = \lambda_m \epsilon^m G_0.$$ By equating coefficients, we find that $U G_0 = 0$ and $U G_m = \lambda_m G_0$. Since $U$ increases overconvergence (and $\lambda_m \ne 0$), it follows that $G_0 \in M(\mathbf{C}_p,N,\kappa,r^p)$. Our only condition on $r$ so far is that $v(r) < 1/(p+1)$. Thus, we take $p \cdot v(r) = v(r^p) = 1/(p+1)$. \begin{lemma} If $g(0) = G_0$ does not have finite slope, then \begin{enumerate} \item $G_0 \in M(\mathbf{C}_p,N,\chi^k \cdot \psi,p^{1/(p+1)}) = H^0(X_0(p,p^{1/(p+1)}),\omega^{\otimes k})^{\langle \rangle = \psi}$. \item $U G_0 = 0$. \item $G_0 = q + \ldots \ne 0$.
\label{lemma:contra} \end{enumerate} \end{lemma} \begin{proof} The first two claims are proved above. For the final claim, note that $g(0)$ is a limit of normalized cuspidal eigenforms, and so its $q$-expansion begins with $q$. \end{proof} To complete the proof, we note that the conclusions of Lemma~\ref{lemma:contra} contradict the following result from~\cite{buzcal}. \begin{lemma} If $k\in\mathbf{Z}$ and $G\in H^0(X_0(p,p^{1/(p+1)}),\omega^{\otimes k})$ is in the kernel of $U$, then $G = 0$. \end{lemma} \begin{proof} Suppose $G\in H^0(X_0(p,p^{1/(p+1)}),\omega^{\otimes k})$ is arbitrary. Let $E$ be an elliptic curve over a finite extension of $\mathbf{Q}_p$, equipped with a subgroup $C$ of order $p$ and with level $N$ structure $L$. If the corresponding point $(E,C,L)$ lies in $X_0(p,p^{1/(p+1)})$, then one can regard $G(E,C,L)$ as an element of $H^0(E,\Omega^1)^{\otimes k}$. Now define $F\in H^0(X_0(p,p^{(p/(p+1))}),\omega^{\otimes k})$ by $$F(E,C,L)=\sum_{D\not=C}\pi^*G(E/D,\overline{C},\overline{L}),$$ where the sum is over the subgroups~$D\not=C$ of $E$ of order~$p$; $\pi$ denotes the projection map $E\to E/D$; $\pi^{*}$ denotes the pullback from $H^0(E/D,\Omega^1)^{\otimes k}$ to $H^0(E,\Omega^1)^{\otimes k}$; and a bar over a level structure denotes its natural pushforward. An easy calculation using Tate curves (see, for example, Proposition~5.1 of~\cite{wild}) shows that $F=p \kern+0.07em{U G}$, and hence, if $U G=0$, then $F=0$. If $E$ is an elliptic curve with no canonical subgroup, and we fix a level $N$ structure $L$ on $E$, then $(E,C,L)\in X_0(p,p^{(p/(p+1))})$ for all $C$. Thus, $F(E,C,L)=0$ for such $E$, and hence, $$\sum_{D\not=C}\pi^*G(E/D,E[p]/D,\overline{L})=0$$ for all $C$. Summing over $C$, one deduces that $G(E/D,E[p]/D,\overline{L})=0$ for all $D$ of order~$p$. This implies that $G$ is identically zero on the ``boundary'' of $X_0(p,p^{1/(p+1)})$ and, hence, that $G$ is identically zero. \end{proof}
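For the reader's convenience, we recall how the operators $U$ and $V$ used above act on $q$-expansions: $U(\sum a_n q^n) = \sum a_{pn} q^n$ and $V(\sum a_n q^n) = \sum a_n q^{pn}$, so that $UV = \mathrm{id}$, and $UG = 0$ means precisely that $a_n(G) = 0$ whenever $p \mid n$. The following toy Python sketch on truncated $q$-expansions is ours and purely illustrative.
\begin{verbatim}
def U(coeffs, p):
    """U on a truncated q-expansion [a_1, ..., a_N] (a_n at index n-1):
    (U f)_n = a_{p*n}."""
    return [coeffs[p * n - 1] for n in range(1, len(coeffs) // p + 1)]

def V(coeffs, p):
    """V on a truncated q-expansion: (V f)_n = a_{n/p} if p | n, else 0."""
    return [coeffs[n // p - 1] if n % p == 0 else 0
            for n in range(1, p * len(coeffs) + 1)]

p = 3
f = [1, 0, 5, 0, 0, 7, 2, 0, 0]                  # a_1, ..., a_9
assert U(V(f, p), p) == f                        # U V = identity
g = [0 if n % p == 0 else 1 for n in range(1, 10)]
assert U(g, p) == [0, 0, 0]                      # g lies in the kernel of U
\end{verbatim}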
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Conclusions} \subsubsection*{Acknowledgements} We thank Vivek Balasubramanian and Jumana Dakka for helpful discussions and early contributions. We also thank Chris Layton for helping set up our runs on the NVIDIA DGX-2 compute systems. We also acknowledge support by NSF DIBBS 1443054 and NSF RADICAL-Cybertools 1440677. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of the manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. \section{Discussion} As artificial intelligence (AI) and deep learning (DL) techniques become more pervasive for analyzing scientific datasets, there is an emerging need to support coupling AI/DL workflows to traditional HPC applications such as MD simulations. Our approach provides a proof-of-concept for how we can guide MD simulations to sample the folded state ensemble of small proteins using DL techniques. The approach that we chose was based on building a generative model for protein conformations and identifying new starting conformations for additional MD sampling. Although the generative model was only used to identify novel conformations for extending our MD simulations, it nevertheless allowed us to guide the MD simulations towards sampling folded conformations of the protein systems we considered. \begin{table} \centering \begin{tabularx}{\linewidth}{c|LLL|L} \hline System & DL training \newline (100 epochs; minutes) & Time per \newline epoch (seconds) & Inference time (ms/frame) & MD simulations \newline (ns per minute) \\ \hline Fs-peptide & 7 & 5 & 5.13 & 1.25 \\ BBA & 11 & 7 & 1.27 & 1.20 \\ \hline \end{tabularx} \caption{Summary statistics of the time taken by the individual components of our workflow: (1) training and inference for the CVAE for each system, and (2) running the MD simulations.} \label{tab:2}\vspace{-0.2in} \end{table} Although DL approaches can take significantly longer to train, we deliberately chose a prototypical DL approach, namely the CVAE, to train on our MD simulation data (Table \ref{tab:2}). As can be seen from the table, the computational cost of training and inference for the CVAE model is on par with that of running our MD simulations. That is, within the time required to train our CVAE model, our MD simulations progress only by about a nanosecond. Thus, starting new MD simulations based on guidance from our CVAE model does not affect the workflow's overall performance. Further, our MD simulations were run using implicit solvent models, which also significantly reduces their computational cost.
Further, each contact map is no more than a couple of kilobytes of data, and hence we did not need to exploit the intrinsic capabilities of the NVIDIA DGX-2, such as the ability to stream data across GPUs/processors. A primary motivation for this work was to use ML/DL based analysis to drive MD simulations, and to calibrate results against non ML/DL driven approaches. In Ref.~\cite{fox2019learning}, Fox et al.~introduced the concept of ``Effective Performance'' that is achieved by combining learning with simulation, without changing the traditional system characteristics. Our selection of physical systems, in particular the BBA peptide, allows us to provide a coarse-grained estimate of the effective performance of CVAE based adaptive sampling. Using Ref.~\cite{hruska2019extensible} as reference data, we find that the effective performance of CVAE based sampling is at least a factor of 20 greater than that of ``vanilla'' MSM based sampling approaches. Our estimate is based upon the convergence of simulated BBA structures to the reference structure to within 4.5~\AA. Note that the ExTASY based simulations in Ref.~\cite{hruska2019extensible} are at least two orders of magnitude more efficient than the reference D.~E.~Shaw simulations. In future work, we will extend our effective performance estimate to the villin headpiece (VHP) and refine our estimates for BBA. We expect that the concomitant increase in data sizes for larger MD simulations would necessitate the use of streaming approaches. Similarly, as the time required to train our DL models with data-/model-parallel approaches increases, emerging memory-hierarchy architectures will be needed to facilitate efficient data handling/transfer across compute nodes dedicated to training and simulation. Further, data intensive techniques such as reinforcement learning and/or active learning could also have been used to guide our MD simulations. The requirements of the ML/DL driven simulations outlined in this paper are representative of ML/DL driven adaptive workflows, where the status of the intermediate data analysis drives subsequent computations. Adaptive workflows pose significant challenges~\cite{bala2019implementing} compared to workflows whose execution trajectory is predetermined a priori. Further, the integration of diverse ML/DL approaches as the intermediate analysis driving subsequent computations adds complexity, which includes but is not limited to heterogeneous workloads, load balancing, and resource management. Scalable execution on modern HPC platforms implies the need for specialized middleware to address these challenges. We are addressing these aspects as part of ongoing work and future development built upon RADICAL-Cybertools~\cite{bala2019implementing,turilli2019middleware}. The source code and associated datasets, including the generated simulations and deep learning models, will be made available at the time of publication on http://www.github.com. \section{Introduction} Multiscale molecular simulations are widely used to model complex biological phenomena, such as protein folding, protein-ligand interactions (e.g., with small molecules, drugs, or other proteins), and self-assembly~\cite{Dror_2012,Lee_2009}. However, many of these phenomena occur at timescales that are fundamentally challenging for molecular simulations to access, even with advances in both hardware and software technologies~\cite{Bowman_2009}.
Hence, there is a need to develop scalable, adaptive simulation strategies that can enable sampling of timescales relevant to these biological phenomena. Many adaptive sampling techniques~\cite{hruska2019extensible,Singhal_2009,Weber_2011,Doerr_2017,Mittal_2018,Shamzi_2018} have been proposed. All of these techniques share similar characteristics, including (a) the need for efficient and automated approaches to identify a small number of relevant conformational coordinates (either through clustering and/or dimensionality reduction techniques)~\cite{Shirts_2001,Savol_2011_ISMB,fox2019learning}, and (b) the identification of the ``next'' set of simulations to run such that more trajectories are successful in attaining a specific end goal (e.g., a protein that is well folded, a protein bound to its target ligand, etc.)~\cite{Mittal_2018,Shamzi_2018}. These adaptive simulations present methodological and infrastructural challenges. Ref.~\cite{hruska2019extensible} provides important validation of the power of adaptive methods over traditional ``vanilla'' molecular dynamics (MD) simulations or ``ensemble'' simulations. Ref.~\cite{bala2019implementing} highlights the challenges of such workflows on high-performance computing platforms. We recently developed a deep learning based approach that uses convolutions and a variational autoencoder (CVAE) to cluster simulations in an unsupervised manner~\cite{Bhowmik_2018}. We have shown that our CVAE can discover intermediate states from protein folding pathways; further, the CVAE-learned latent dimensions cluster conformations according to biophysically relevant features such as the number of native contacts, or the root mean squared deviation (RMSD) to the native state. We posit that the CVAE-learned latent features can be used to drive adaptive sampling within MD simulations, where the next set of simulations to run is decided based on a measure of ``novelty'' of the observed simulation/trajectory frames. Integrating the CVAE concurrently with large-scale ensemble simulations on high-performance computing platforms entails the aforementioned complexity of adaptive workflows~\cite{bala2019implementing}, while introducing additional infrastructural challenges. These arise from the concurrent and adaptive execution of heterogeneous simulation and learning workloads requiring sophisticated workload and performance balancing, inter alia. In this paper, we implement a baseline version of our deep learning driven adaptive sampling workflow with multiple concurrent instances of MD simulations and CVAEs. Our contributions can be summarized as follows: \begin{itemize} \item We demonstrate that deep learning based approaches can be used to drive adaptive MD simulations at scale. We demonstrate our approach by folding small proteins, namely the Fs-peptide and the $\beta$-$\beta$-$\alpha$-fold (BBA) protein, and show that it is possible to fold them using a deep learning driven adaptive sampling strategy. \item We highlight parallel computing challenges arising from the unique characteristics of the workflow, viz., training of deep learning algorithms can take almost as much time as running simulations, necessitating novel developments to deal with heterogeneous task placement, resource management and scheduling. \end{itemize} Taken together, our approach demonstrates the feasibility of coupling deep learning (DL) and artificial intelligence (AI) workflows with conventional all-atom MD simulations.
\section*{Abstract} Significant progress in computer hardware and software has enabled molecular dynamics (MD) simulations to model complex biological phenomena such as protein folding. However, enabling MD simulations to access biologically relevant timescales (e.g., beyond milliseconds) still remains challenging. Key limitations include (1) quantifying which set of states have already been (sufficiently) sampled in an ensemble of MD runs, and (2) identifying novel states from which simulations can be initiated to sample rare events (e.g., folding events). With the recent success of deep learning and artificial intelligence techniques in analyzing large datasets, we posit that these techniques can also be used to adaptively guide MD simulations to model such complex biological phenomena. Leveraging our recently developed unsupervised deep learning technique to cluster protein folding trajectories into partially folded intermediates, we build an iterative workflow that enables our generative model to be coupled with all-atom MD simulations to fold small protein systems on emerging high performance computing platforms. We demonstrate our approach in folding the Fs-peptide and the $\beta\beta\alpha$ (BBA) fold, FSD-EY. Our adaptive workflow enables us to achieve an overall root mean squared deviation (RMSD) to the native state of 1.6~\AA\ and 4.4~\AA, respectively, for Fs-peptide and FSD-EY. We also highlight some emerging challenges in the context of designing scalable workflows when data intensive deep learning techniques are coupled to compute intensive MD simulations. \input{Introduction} \input{Methods} \input{Results} \input{Discussion} \input{Conclusions} \section{Methods} \vspace{-0.1in} \subsection{Workflow description} \vspace{-0.1in} The two key components of the workflow are the MD simulation module and the deep-learning based CVAE module, which are described below. \paragraph{Molecular dynamics (MD) simulations:} The MD simulations are performed on GPUs with OpenMM 7.3.0~\cite{Eastman_2017}. Both the Fs-peptide and BBA systems were modeled using the Amber ff99SB-ildn force field~\cite{Larsen_2011} in the implicit Onufriev-Bashford-Case GBSA solvent model~\cite{Onufriev_2004}. The non-bonded interactions were cut off at 10.0~\AA\ and no periodic boundary conditions were applied. All bonds to hydrogen were fixed at their equilibrium values and simulations were run using a 2 fs time step. A Langevin integrator was used to maintain the system temperature at 300 K with a friction coefficient of 91 ps$^{-1}$. The initial configuration was optimized using the L-BFGS local energy minimizer with a tolerance of 10 kJ/mol and a maximum of 100 iterations. The initial velocity of each atom was assigned from a Boltzmann distribution at 300 K. We also added a new reporter to calculate the contact matrix of $C_{\alpha}$ atoms in the protein (using a distance cut-off of 8~\AA) and write it in HDF5 format using the MDAnalysis module~\cite{Michaud-Agrawal_2011,Beckstein_2016}, so that it could be used as input to the deep learning module (described below). Each simulation run outputs a frame every 50 ps. \paragraph{Convolutional Variational Autoencoder (CVAE):} An autoencoder is a deep neural network architecture that can represent high dimensional data in a low dimensional latent space while retaining the key information~\cite{Doersch_2016}. With its hourglass shaped architecture, an autoencoder compresses input data into a latent space of reduced dimension and then reconstructs the original data from it.
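To make this concrete, a minimal convolutional VAE in Keras/TensorFlow is sketched below. The input shape, filter counts, and latent dimension are illustrative placeholders rather than the exact architecture of our CVAE; TensorFlow 2.x is assumed.

\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 3  # varied as a hyperparameter (3-6) in our runs

def sampling(args):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
    z_mean, z_log_var = args
    eps = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: contact maps (illustrative 24x24x1 shape) -> latent space.
inputs = layers.Input(shape=(24, 24, 1))
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(LATENT_DIM)(x)
z_log_var = layers.Dense(LATENT_DIM)(x)
z = layers.Lambda(sampling)([z_mean, z_log_var])

# Decoder: latent space -> reconstructed contact map.
x = layers.Dense(6 * 6 * 64, activation="relu")(z)
x = layers.Reshape((6, 6, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                           activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                           activation="relu")(x)
outputs = layers.Conv2DTranspose(1, 3, padding="same",
                                 activation="sigmoid")(x)

vae = Model(inputs, outputs)
# Loss = reconstruction (binary cross-entropy over contact-map entries)
#        + KL divergence of the latent distribution from N(0, 1).
rec = tf.reduce_mean(tf.reduce_sum(
    tf.keras.losses.binary_crossentropy(inputs, outputs), axis=(1, 2)))
kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
vae.add_loss(rec + kl)
vae.compile(optimizer="rmsprop")
\end{verbatim}

In a workflow like ours, the encoder half of such a model (the \texttt{z\_mean} output) would be used at inference time to embed new contact maps into the latent space.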
We use the CVAE to cluster the conformations from our simulations in an unsupervised manner~\cite{Bhowmik_2018,Romero_2019}. Currently in our workflow, we treat the number of latent dimensions as a hyperparameter (varying between $\{3, 4, 5, 6\}$) and use the CVAE that most accurately reconstructs the input contact maps~\cite{Bhowmik_2018,Romero_2019}. The CVAE was implemented using Keras/TensorFlow and trained on a V100 GPU for 100 epochs. \begin{wrapfigure}{L}{7.5cm} \centering \includegraphics[width=7.5cm]{FigureWorkflow.pdf} \caption{Deep generative model driven protein folding simulation workflow.} \label{fig:1} \vspace{-0.2in} \end{wrapfigure} \paragraph{Assembling our workflow:} As illustrated in Figure~\ref{fig:1}, our prototype workflow couples the two components. In the first stage, the objective is to train the CVAE to determine the optimal number of latent dimensions required to faithfully reconstruct the simulation data. We commence our runs as an ensemble of equilibrium MD simulations. Ensemble MD simulations are known to enable better sampling of the conformational landscape, and can also be run in an embarrassingly parallel manner. The simulation data are converted into a contact map representation (to overcome issues with rotation/translation within the simulation box) and are streamed at regular intervals into the CVAE module. The output from the first stage is an optimally learned latent representation of the simulation data, which organizes the landscape into clusters consisting of conformations with similar biophysical features (e.g., RMSD to the native state). Note that this is an emergent property of the clustering; the RMSD to the native state is not used as part of the training data. In the second stage, our objective is to identify the most viable/promising next set of starting states for propagating our MD simulations towards the folded state. We switch the CVAE to inference mode on newly generated contact maps (from simulations) and observe how they are clustered. Based on their similarity to the native state (measured by the RMSD), a subset of these conformations is selected for propagating additional MD runs. The workflow is continued until the protein is folded (i.e., conformations reach a user-defined RMSD value to the native state). \subsection{Implementation, Software and Compute Platform} We used the Celery software to implement the aforementioned workflow. Celery is an asynchronous task scheduler with a flexible distributed system to process messages and manage operations, which enables real-time task processing and scheduling. The tasks can be executed and controlled by the Celery worker server asynchronously or synchronously. Celery applications use callables to represent the modules that are part of the workflow. Once a task is called, the client adds a message to the task queue that refers to the task's unique name, so that the worker can locate the right function to execute. The flexibility of the Celery framework enables real-time interfacing to manage resources and exercise control over task scheduling and execution. All tasks can be monitored and controlled directly through the task objects. By calling the tasks at different stages of the program, we can simply build multi-task workflows that support a large volume of concurrent tasks with real-time interfacing. The use of the Celery framework allows us to establish a baseline for estimating the compute requirements of our workflow.
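A minimal sketch of how the two workflow components can be expressed as Celery tasks is shown below; the application name, broker URL, function names, and arguments are hypothetical placeholders rather than our exact implementation.

\begin{verbatim}
from celery import Celery

# Hypothetical application name and broker URL; we use RabbitMQ
# as the message broker.
app = Celery("folding_workflow", broker="amqp://localhost")

@app.task
def run_md(start_pdb, n_frames):
    """Propagate one MD simulation from the given starting structure
    and return the path to the produced contact-map (HDF5) file."""
    ...  # call into the OpenMM-based simulation module

@app.task
def train_cvae(h5_files, latent_dim):
    """(Re)train a CVAE on the accumulated contact maps and return
    the path to the saved model weights."""
    ...  # call into the Keras/TensorFlow-based CVAE module

# Tasks are submitted asynchronously; Celery workers (e.g., one per
# GPU) pick them up from the queue.
result = run_md.delay("outlier_0001.pdb", n_frames=20000)
\end{verbatim}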
Our workflow comprises two callables: the MD simulations, and the CVAE used either in training or inference mode. We tested our deep learning driven adaptive simulation framework on the NVIDIA DGX-2 system at Oak Ridge National Laboratory (ORNL). The DGX-2 system provides more than 2 petaflops of computational power from a single node that leverages its 16 interconnected NVIDIA Tesla V100-SXM3-32GB GPUs. This enables us to distribute the MD simulations and CVAE training onto 12 and 4 GPUs, respectively. All the components in the workflow are encapsulated within a Python script that manages the various tasks through Celery. It first initializes the Celery worker along with the selected broker, RabbitMQ. All 16 GPUs are then employed for MD simulations to first generate 100,000 conformers as the initial training data for the CVAE. With a 5 minute interval between iterations, the trained CVAEs continuously compress the C$_{\alpha}$ contact maps of conformers from the MD trajectories into data points in latent space, which are subsequently evaluated with density-based spatial clustering of applications with noise (DBSCAN) to identify outlier conformations~\cite{Ester_96}. We used DBSCAN for its relative simplicity and also to establish a baseline implementation of our code. For Fs-peptide, outliers were collected from all four trained CVAE models, while only the CVAE with a 6 dimensional latent space was applied for the BBA outlier search. In each iteration, the MD runs are examined for outliers. Simulations that pass an initial threshold of 20,000 frames (1 $\mu$s) for Fs-peptide and 10,000 frames (0.5 $\mu$s) for BBA, but do not produce any outliers for the last 5000 frames (250 ns of simulation time), are purged. With the GPUs freed from such MD runs, new MD simulations are launched from the outliers to ensure appropriate resource management and usage. \section{Results} In previous work~\cite{Bhowmik_2018}, we have shown that the CVAE can learn a latent space from the Fs-peptide simulations such that the conformations from the simulations separate into distinct clusters consisting of folded and unfolded states. When parameters such as the RMSD (to the native state) and the fraction of native contacts are used to annotate the latent dimensions~\cite{Gsponer6719}, we showed that these latent representations correspond to reaction coordinates that describe how a protein may fold (beginning with the unfolded state ensemble). Thus, we posit that we can propagate the simulations along these low-dimensional representations and can drive simulations to sample folded states of the protein in a relatively small number of iterations. \begin{figure} \centering \includegraphics[width=\textwidth]{Fs-peptide-results.pdf} \caption{CVAE-driven folding simulations of Fs-peptide. (A) Root mean squared deviation (RMSD) with respect to the native/folded state from the 31 trajectories generated using our adaptive workflow for the Fs-peptide system. Only productive simulations -- i.e., simulations that achieve an RMSD cut-off of 4.5~\AA\ or less -- are highlighted for clarity. The rest of the simulations are shown in light gray. (B) A histogram of the RMSD values in panel (A) depicting the RMSD cut-offs for identifying folded, partially folded, and unfolded ensembles from the data. The corresponding regions are also marked in panel (A). (C) Using the RMSD to the native state as a measure of foldedness of the system, we project the simulation data onto a three dimensional latent representation learned by the CVAE.
Note that the folded states (low RMSD values highlighted in deeper shades of blue) are separated from the folding intermediates (shades of green and yellow) and the unfolded states (darker shades of red). (D) A zoomed-in projection of the last 0.5 $\mu$s of simulations generated, shown along with the original projections (in pale gray, subsampled at every 100$^{th}$ snapshot). (E) The same projection showing only the samples from the last 0.5 $\mu$s, to highlight the differences between folded and unfolded states. (F) Representative snapshots from our simulations with respect to the unfolded, partially folded, and native state ensembles. Note that the cartoon representation shown in orange represents the native state (minimum RMSD of 1.6~\AA\ to the reference structure) determined from our simulations. } \label{fig:Fs-peptide-results} \vspace{-0.1in} \end{figure} Figure \ref{fig:Fs-peptide-results} summarizes the results of our folding simulations of Fs-peptide. The peptide consists of 21 residues -- Ace-A$_5$(AAARA)$_3$A-NME -- where Ace and NME represent the N- and C-terminal end caps of the peptide respectively, A represents the amino acid alanine, and R represents the amino acid arginine. It is often used as a prototypical system for protein folding and adopts a fully helical structure as part of its native state ensemble~\cite{McGibbon2014}. Previous studies used implicit solvent simulations with the GBSA-OBC potentials and the AMBER-FF99SB-ILDN force field, with an aggregate simulation time of 14 $\mu$s at 300 K~\cite{McGibbon2014}. We used the same settings for our MD simulations and initiated our workflow. Summary statistics of the simulations are provided in Table \ref{tab:1}. A total of 90 iterations of the workflow were run to obtain a total sampling of 54.198 $\mu$s. Note that the sampling time of the MD simulations is an aggregate measure similar to the ones reported in previous studies. We began by examining the RMSD with respect to the native state from all of our simulations. As shown in Figure \ref{fig:Fs-peptide-results}A, 13 of the 31 simulations are unproductive -- i.e., they do not sample the native state consisting of the fully formed $\alpha$-helix. This is not entirely surprising given that the starting state consists of a nearly linear peptide with no residual secondary structure. Based on this observation, we posited that our CVAE model can be used to identify partially folded states from the simulations. We also examined the histogram of the RMSD values computed for each conformation with respect to the native state ensemble (Figure \ref{fig:Fs-peptide-results}B). Based on the histograms, we can reasonably choose a threshold of 3.1~\AA\ or less to depict the folded state ensemble, followed by 4.6~\AA\ for partially folded states, and 8.3~\AA\ for the unfolded states. Any trajectory that shows RMSD values beyond 8.3~\AA\ is only sampling the unfolded state of the protein. \begin{table}[] \centering \begin{tabularx}{\linewidth}{c|LLLLL} \hline System & Total no. \newline simulations & Total \newline simulation time ($\mu$s) & (Shortest*, Longest) \newline simulations ($\mu$s) & Iterations & Min. RMSD (\AA) \\ \hline Fs-peptide & 31 & 54.198 & 1.01, 3.4 & 90 & 1.6 \\ BBA (FSD-EY) & 45 & 18.562 & 0.517, 0.873 & 100 & 4.44 \\ \hline \end{tabularx} \caption{Summary statistics of simulations. *Only considering the simulations that pass the initial threshold.
} \label{tab:1} \vspace{-0.2in} \end{table} The projections of all 31 simulations onto the learned CVAE latent space are depicted in Figure \ref{fig:Fs-peptide-results}C. Collectively, $z_1$--$z_3$ provide a description of the Fs-peptide folding process. Notably, most of the folded conformational states (highlighted in blue, indicating low RMSD to the native state) are clustered together. Similarly, the unfolded conformations (colored in darker shades of red, with higher RMSD to the native state ensemble) are also clustered together. Taking this further, we examined whether the similarity in the conformations holds even for a smaller partition of the data (see Figures \ref{fig:Fs-peptide-results}D and E), namely the last 10\% of the overall simulation data. This can be treated as a test set from which new simulations are initiated. Notably, from these simulations we observe the presence of roughly three arms in the projections (Figure \ref{fig:Fs-peptide-results}E) consisting of: (1) partially folded states highlighted in shades of green/yellow, (2) the unfolded state ensemble highlighted in shades of red, and (3) a much smaller ensemble of folded states (highlighted in blue). For each of these states, we can also extract the structural characteristics with respect to the folded state (Figure \ref{fig:Fs-peptide-results}F). Many of the unfolded states do not exhibit any secondary structural features (top and bottom left panels). The partially folded states consist of partial turns/helical structures. The final folded state (with RMSD of 1.6~\AA) consists of most (if not all) helical turns in the protein. \vspace{-0.1in} \subsection{Folding simulations of FSD-EY} \vspace{-0.1in} The BBA protein, namely FSD-EY, is a designed protein that adopts a $\beta$-$\beta$-$\alpha$-fold in its native state; however, this protein tends to be dynamic in solution~\cite{Lindorff-Larsen_2012,Sarisky_2011}. Similar to other zinc-finger proteins, the structure of the protein can potentially vary, and it represents a challenging use-case for testing our workflow. As shown in Figure \ref{fig:BBAresults}, our simulations do start with a completely unfolded state of the protein (the average RMSD to the native state is about 12~\AA). Using an aggregated MD sampling time of 18 $\mu$s, we note that we reach an RMSD value of 4.44~\AA. \begin{figure} \includegraphics[width=\textwidth]{Figure_BBA_Results.pdf} \caption{CVAE-driven folding simulations of the BBA-fold, FSD-EY. (A) RMSD plots with respect to the native state of FSD-EY depicting the near-native state (blue), partially folded (green) and unfolded (red) trajectories, similar to Figure \ref{fig:Fs-peptide-results}. (B) A histogram of the RMSD values to the native state. (C) The learned projections from the CVAE for the trajectories; similar to the Fs-peptide system, we can observe the clustering of conformations based on their RMSD values to the native state. We have used an RMSD cut-off of 10~\AA\ to highlight states closer to the native state. (D) Although we could not fully fold the protein, we do observe the presence of a well-formed hydrophobic core, except for one residue (F25) at the C-terminal end of the protein.} \label{fig:BBAresults} \vspace{-0.1in} \end{figure} Although we do not sample the native state of the protein consisting of the $\beta$-$\beta$-$\alpha$-fold, we are still able to sample regions of the landscape that consist of a defined hydrophobic core consisting of the highlighted residues in Figure \ref{fig:BBAresults}D.
Except for the dynamic C-terminal end, where the hydrophobic interactions between F21 and F25 are not entirely stable, the conformations that exhibit low RMSD values to the native state depict the presence of this hydrophobic core. We expect that extending these simulations further using the CVAE-driven protocol will enhance these interactions, allowing the protein to fold completely.
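Since the RMSD to the native state is the yardstick used throughout this work, we note that such a measurement can be made in a few lines with MDAnalysis; the sketch below is illustrative (file names are hypothetical, and a C$_\alpha$ selection is assumed).

\begin{verbatim}
import MDAnalysis as mda
from MDAnalysis.analysis.rms import RMSD

# Hypothetical file names; ref.pdb holds the native/reference structure.
traj = mda.Universe("topology.pdb", "trajectory.dcd")
ref = mda.Universe("ref.pdb")

# Superpose on C-alpha atoms and compute the RMSD for every frame.
rmsd = RMSD(traj, ref, select="name CA")
rmsd.run()

# Columns are (frame, time, RMSD); the RMSD is in Angstrom.
# (In older MDAnalysis versions this array is rmsd.rmsd.)
print(rmsd.results.rmsd[:, 2])
\end{verbatim}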
\section*{I. Introduction} Binary Ising systems have attracted considerable interest from both the bond- and site-disorder perspectives [1-6]. In the bond-disordered models the lattice sites are equivalent and the interaction energies between neighbouring sites are randomly assigned from a set of possible values. In the site-disordered model the lattice sites are randomly occupied by two different types of magnetic ions, $A$ and $B$, with spins $S_A$ and $S_B$, and the interaction parameters between two neighbouring spins are completely determined by their species. The randomness in these systems can be considered either quenched or annealed. Annealed systems have proven much easier to handle theoretically by mean-field like methods, and so they are much better understood than the quenched ones. Unfortunately, for practical applications the quenched systems are much more appropriate. This is the main reason why we limit our discussion to the case of quenched systems. In the case of only ferromagnetic interactions between the spins, these models were used with success to describe the magnetic properties of quenched and disordered magnetic alloys of the form $A_xB_{1-x}$, where $A$ and $B$ are magnetic atoms \cite{6,7}. When antiferromagnetic and ferromagnetic interactions compete, frustration appears, and the system becomes a Mattis-Luttinger type spin-glass model \cite{8,9}. For real physical cases the site-disordered models are more realistic, and so we propose to study the Ising version of this model, considering the simplest case $S_A=S_B=\frac{1}{2}$ with all exchange interactions of ferromagnetic type. The Hamiltonian of our problem is: \begin{equation} H=-\sum_{<i,j>} [J_{AA} \cdot \delta_{iA} \cdot \delta_{jA} + J_{BB} \cdot \delta_{iB} \cdot \delta_{jB} + J_{AB}\cdot (\delta_{iA} \cdot \delta_{jB} + \delta_{iB} \cdot \delta_{jA})]\cdot S_i^z \cdot S_j^z , \end{equation} where $\delta_{ix}=1$ if the spin $i$ is of type $x$, and $0$ otherwise, and the sum runs over all nearest-neighbour pairs. In this paper we consider the real three-dimensional version of the model; for results concerning the two-dimensional case we refer the reader to \cite{10}. The model considered by us has already been investigated by many authors, using different methods. The first molecular-field approximations were due to Vonsovskii \cite{11,12}. Frustrated systems were studied by Aharony using renormalization-group techniques \cite{13} and by Tatsumi with Monte Carlo simulations \cite{9}. The case of only ferromagnetic interactions was studied using a mean-field like approach by Kouvel \cite{14}, and with the coherent potential approximation by Foo and Wu \cite{15}. Mean-field theoretical approaches were also made in the works of Thorpe and McGurn \cite{3} and of Tahir-Kheli and Kawasaki \cite{2}. Ishikawa and Oguchi \cite{4} considered a Bethe-Peierls approach, and in the work of Honmura, Khater, Fittipaldi and Kaneyoshi \cite{5} we find an effective-field theory for the two-dimensional model. Monte Carlo simulations were performed by Scholten \cite{16} to study the critical temperatures of two-dimensional binary Ising ferromagnets as a function of the relative species concentration and the relative interaction energy between unlike ions. Scholten also studied the phase diagram of the three-dimensional problem on cubic lattices for frustrated systems \cite{17}, including next-nearest-neighbour interactions.
The phase diagrams of binary Ising ferromagnets were studied by Thorpe and McGurn \cite{3} in both the site-disorder and bond-disorder cases. They pointed out that the phase diagrams can be usefully cataloged in terms of the initial slope $\frac{\partial \ln T_c}{\partial q}$ of the transition temperature $T_c$, considered as a function of the concentration $q$, at the two points $q=0$ and $q=1$. With the help of perturbation theory they also determined the initial slopes for two-dimensional systems. The phase diagrams of binary Ising systems with randomly distributed exchange parameters were investigated by Kaneyoshi and Li using effective-field theory with correlations \cite{18}. In the book by Vonsovskii \cite{7} and in the paper by Luborsky \cite{6} one can find promising comparisons between experimental data and mean-field type predictions. Diluted systems, where one of the two components is non-magnetic, have also been a field of interest [19-21]. Recently there has been much interest in systems of mixed $S_A$ and $S_B$ spins, where $S_A \neq S_B$ [22-25]. In spite of all these earlier works there remain some not completely clarified questions even for the simplest ferromagnetic case. The main problems concern the values of the critical exponents and the determination of the critical temperature of the system in general cases. Our work is intended to study the dependence of the critical temperature on the system composition and on the values of the coupling constants. We do this in a review context by comparing our high-accuracy Monte Carlo simulations with the available theoretical formulas. In this manner we will give a practically useful and easy method of approximating the Curie temperature of these systems for general composition and general interaction parameters. We will also check the validity and limitations of the different mean-field type approximations available for the Curie temperature of binary magnetic alloys. \section*{II. Used theoretical formulas} The localized model of ferromagnetism involving nearest-neighbour exchange integrals has an attractive simplicity for describing some magnetic systems. Although this approach to magnetism in metallic systems is not completely acceptable, due to the partially itinerant nature of the magnetic electrons, the obtained results are usually in good agreement with experimental data. In the case of binary magnetic alloys we are in a similar situation. The localized model based on the Heisenberg or Ising Hamiltonian (1) with nearest-neighbour exchange, and the molecular-field theories, proved to be applicable in describing the variation of the critical temperature as a function of the alloy's composition. The first formula based on the molecular-field approximation was derived, as we stated earlier, by Vonsovskii \cite{11,12}, and used with success to describe transition temperatures of binary magnetic alloys. The proposed formula was: \begin{equation} T_c(q)=T_c(A,A)-2 \cdot [T_c(A,A)-T_c(A,B)]\cdot q + [T_c(A,A)+T_c(B,B)- 2 \cdot T_c(A,B)] \cdot q^2 , \end{equation} where $T_c(A,A)$ and $T_c(B,B)$ are the Curie temperatures of the pure $A$ and $B$ systems, $T_c(A,B)$ is the Curie temperature of a pure system characterized by all exchange interactions equal to the ones between the $A$ and $B$ magnetic ions ($J_{AB}$), $T_c(q)$ is the Curie temperature of the mixture, and $q$ is the concentration of the $B$ component.
We mention here that the critical temperature $T_c$ of an Ising system on the simple-cubic lattice, characterized by an exchange interaction constant $J$ (considering just nearest-neighbour interactions), is given by $T_c\approx 4.44425 \cdot \frac{J}{k_B}$, with $k_B$ the Boltzmann constant. Using a phenomenological model based on mean-field theory, suitably modified so that the individual atomic moments are allowed to vary in magnitude with their local environment, and considering only nearest-neighbour interactions, Kouvel \cite{14} proposed the formula: \begin{eqnarray} & T_c(q)=\frac{1}{2} \cdot [T_c(A,A) \cdot (1-q)+T_c(B,B) \cdot q] + \nonumber \\ & + \{ \frac{1}{4}\cdot [T_c(A,A) \cdot (1-q) - T_c(B,B) \cdot q]^2+ T_c(A,B)^2 \cdot q \cdot (1-q) \} ^{\frac{1}{2}} . \end{eqnarray} In the work of Foo and Wu \cite{15} the disordered, composition-dependent exchange interaction is treated in a coherent potential approximation (CPA). In the limit of weak scattering their method gives the mean-field like results, but in the strong scattering limit it predicts such effects as a critical concentration for the appearance of ferromagnetism in the diluted models \cite{21}, which are not obtained in mean-field theories. They proposed the following cubic equation for $T_c(q)$ \begin{eqnarray} & \alpha^2 \cdot T_c(q)^3 + \nonumber \\ & +[\alpha \cdot (T_c(A,A)+T_c(B,B)+T_c(A,B))- \alpha \cdot (1+\alpha) \cdot \langle T_c \rangle] \cdot T_c(q)^2+ \nonumber \\ & + [(1+\alpha) \cdot T_c(A,A) \cdot T_c(B,B) \cdot T_c(A,B) \cdot \langle \frac{1}{T_c} \rangle - \nonumber \\ & -\alpha \cdot (T_c(A,A) \cdot T_c(B,B) + T_c(A,B) \cdot T_c(A,A) + T_c(A,B) \cdot T_c(B,B))] \cdot T_c(q) - \nonumber \\ & -T_c(A,A) \cdot T_c(B,B) \cdot T_c(A,B)=0, \end{eqnarray} where \begin{equation} \alpha=\frac{z}{2}-1, \end{equation} with $z$ the coordination number of the lattice (in our case $z=6$), and \begin{eqnarray} & \langle T_c \rangle=(1-q)^2 \cdot T_c(A,A) + 2 \cdot q \cdot (1-q) \cdot T_c(A,B) + q^2 \cdot T_c(B,B) ,\\ & \langle \frac{1}{T_c} \rangle= \frac{(1-q)^2}{T_c(A,A)}+ \frac{2\cdot q \cdot (1-q)}{T_c(A,B)} + \frac{q^2}{T_c(B,B)}. \end{eqnarray} We mention that there are also other, more elaborate possibilities for calculating the Curie temperature, based on the Ising model (1) of the system, such as mean-field like renormalization-group techniques, series expansions and perturbation methods. Unfortunately these are all very technical, and do not give practically usable formulas. \section*{III. The computer simulation method} As stated earlier, Monte Carlo simulations were performed on the considered model (1) in the ferromagnetic case ($ T_c(A,A)>0,\: T_c(A,B)>0$ and $T_c(B,B)>0$ ) by Scholten \cite{16}, and on frustrated systems by Tatsumi \cite{9} and Scholten \cite{17}. Scholten's work for purely ferromagnetic systems refers to the two-dimensional case. He used the classical single spin-flip Metropolis algorithm \cite{26}, and due to this his calculations were rather time-consuming. So, he considered just a few choices for the interaction parameters. He compared his Monte Carlo results with the ones obtained in \cite{2,3}, \cite{4} and \cite{5}. In the present work we study, by high-accuracy Monte Carlo simulations, the three-dimensional case of simple-cubic lattices, completing in some sense the earlier works. We used the more efficient cluster-flip Swendsen-Wang Monte Carlo method \cite{27} with an original recursion type algorithm. We compare our results with the ones given in \cite{11,12}, \cite{14} and \cite{15}.
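Before describing the simulations, we note that the closed-form estimates of Sec. II are straightforward to evaluate numerically. The following Python sketch illustrates this; the parameter values are illustrative only, and the choice of the largest positive real root of the cubic (4) as the physical one is our assumption.

\begin{verbatim}
import numpy as np

def tc_vonsovskii(q, tAA, tBB, tAB):
    """Molecular-field estimate, Eq. (2)."""
    return tAA - 2.0 * (tAA - tAB) * q + (tAA + tBB - 2.0 * tAB) * q**2

def tc_kouvel(q, tAA, tBB, tAB):
    """Modified mean-field estimate, Eq. (3)."""
    s = 0.5 * (tAA * (1.0 - q) + tBB * q)
    d = 0.25 * (tAA * (1.0 - q) - tBB * q) ** 2
    return s + np.sqrt(d + tAB**2 * q * (1.0 - q))

def tc_cpa(q, tAA, tBB, tAB, z=6):
    """CPA estimate, Eqs. (4)-(7), for a lattice of coordination z."""
    a = z / 2.0 - 1.0                                     # Eq. (5)
    t_avg = (1-q)**2 * tAA + 2*q*(1-q)*tAB + q**2 * tBB   # Eq. (6)
    t_inv = (1-q)**2 / tAA + 2*q*(1-q)/tAB + q**2 / tBB   # Eq. (7)
    coeffs = [a**2,
              a * (tAA + tBB + tAB) - a * (1 + a) * t_avg,
              (1 + a) * tAA * tBB * tAB * t_inv
              - a * (tAA*tBB + tAB*tAA + tAB*tBB),
              -tAA * tBB * tAB]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0].max()   # assumed physical root

# 19 uniform concentration points, as in our simulations; the
# interaction parameters (100, 60, 80) are illustrative.
q = np.linspace(0.05, 0.95, 19)
tc_mean = 0.5 * (tc_vonsovskii(q, 100, 60, 80)
                 + tc_kouvel(q, 100, 60, 80))
\end{verbatim}

The arithmetic mean computed in the last lines is the estimate whose quality is examined in Sec. IV.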
Our simulations were performed on relatively large $50 \times 50 \times 50$ simple-cubic lattices. The critical temperature was found by detecting the maximum in the fluctuation of the absolute value of the magnetization. For achieving statistical equilibrium we considered up to $600$ cluster-flips, and then studied the fluctuation for $1000$ more iterations. The sensitivity in the determination of the critical temperature was in general of the order of $0.01 \cdot T_c(A,A)$. We usually chose $T_c(A,A)=100$ (units) and considered various values of the $T_c(A,B)$ and $T_c(B,B)$ parameters. For every chosen set of interaction parameters we covered the $q \in (0,1)$ concentration interval uniformly with $19$ simulation points. The program was written in C and the simulations were performed on a CRAY Y-MP4D/464 computer and IBM R-6000 RISC workstations. \section*{IV. Results} Our Monte Carlo results for the variation of the Curie temperature as a function of the $B$ component's concentration are plotted with different symbols in Figs. 1--7. The simulations, as stated earlier, were made on a simple-cubic lattice. The curves indicate theoretical results obtained from equations (2) and (3). In Fig. 1, considering four choices of the $J_{AB}$ interaction parameter ($J_{BB}$ and $J_{AA}$ fixed), we compare our Monte Carlo results with the ones obtained from equation (2). Fig. 2 presents the same results in comparison with the theoretical data given by equation (3). One can observe that formula (2) in general predicts lower values, while (3) predicts higher values for the transition temperature than the real ones. We also checked that equation (4) gives even lower values than (2), so it is much less appropriate for our model. As a first observation we conclude that in these cases the real transition temperatures are bracketed by the two curves given by equations (2) and (3). We are also able to confirm that in these three-dimensional cases the mean-field like results give quite good estimates for the Curie temperature. In Fig. 3 we illustrate that an almost perfect fit with the real Curie temperatures can be obtained if we use the arithmetic mean of the values obtained from equations (2) and (3). In Figs. 4--6 we test our previous statements considering quite exotic values of the exchange interaction parameters, and thus of the $T_c(A,B)$ and $T_c(B,B)$ critical temperatures. We draw with thin dashed lines the results given by equations (2) and (3) (dense dashes correspond to the curve obtained from (2)). The continuous darker line represents the arithmetic mean of formulas (2) and (3). We conclude again that in general the values given by equations (2) and (3) nicely bracket our simulation points, and their arithmetic mean gives a fairly good estimate of the real Curie temperature. In the case when $J_{AB} \not\in [J_{AA},\:J_{BB}]$, one can also observe that the strongest difference between the arithmetic mean proposed by us and the simulation results occurs for concentration values where the critical temperature has an extremum, the real values being lower. In Fig. 7 we present studies concerning the extreme case, when the $J_{AB}$ interaction parameter, and thus the $T_c(A,B)$ critical temperature, becomes small (weak coupling between the two components).
In this case, as expected, we find that the simulation curves in the limits $q=0$ and $q=1$ tend to straight lines with the slope $\frac{1}{T_c(0)}\cdot \frac {\partial T_c(q)}{\partial q}$ equal to $1.13$, characteristic of site-diluted systems \cite{21}. \section*{V. Conclusions} From the comparison of the computer simulation data with the results given by formulas (2) and (3) we conclude that, in the three-dimensional case, the mean-field like approaches work satisfactorily well. In this light, the good fit of the mean-field like predictions to the experimental data presented in \cite{6} and \cite{7} is not surprising. Generally the curves obtained for the critical temperature from equations (2) and (3) bracket the real values rather nicely. Exceptions are the cases where the $J_{AB}$ interaction parameter does not lie in the interval bounded by $J_{AA}$ and $J_{BB}$. In these situations, in the vicinity of the extremum of the Curie temperature curve, our previous statement might not be true. The theoretical curve constructed from the arithmetic mean of formulas (2) and (3) proved to be a good approximation for easily obtaining the Curie temperature of quenched, binary Ising ferromagnets. In the limit of small couplings between the two components ($J_{AB}$ small), we obtained results in good agreement with the site-diluted model. Our study is intended to complement the earlier ones by giving a practically useful method of estimating the Curie temperature of the proposed system. We also illustrated the validity of our method, and studied many possible choices of the interaction parameters. \section*{Acknowledgements} \samepage This study was finished during a bursary offered by the Norwegian Research Council. We thank Y. Brechet, A. Coniglio, L. Csernai, and L. Peliti for their continuous help and useful discussions. \newpage
\section{Introduction} \label{sec:intro} As the lowest-mass galaxies, the numbers and properties of ultrafaint dwarfs (UFDs; $M_V>-7.7$; \citealt{Simon2019,DW2020}) are extremely sensitive to critical aspects of our theoretical understanding of galaxy formation \citep{Agertz2020} and are among the best existing probes of the nature of dark matter (DM; \citealt{Bullock2017,Nadler2021}). Due to their low masses, small variations in galaxy formation physics result in orders-of-magnitude scatter in galaxy stellar-to-halo mass ratios \citep{Bullock2017,Fitts2017,Munshi2019,Agertz2020} --- a behavior that will manifest itself in the luminosity function and properties of UFD satellites \citep{Smercina2018,Bose2020,Carlsten2021}. Because UFDs are intrinsically faint and have extremely low surface brightnesses, the only way of discovering them has been by seeking concentrations of individual resolved stars in survey datasets \citep[e.g.,][]{belokurov2007,Koposov2015,DW2020}, limiting the most sensitive searches to the Milky Way \citep{DW2020} and M31 \citep{McConnachie2018}. There are signatures in both satellite systems of substructure and the accretion of satellites in groups: the delivery of satellites by the Magellanic Clouds \citep{Koposov2015,Patel2020}, differences between the star formation quenching times \citep{Weisz2019,DSouza2021} and radial profiles of the Milky Way and M31 satellites \citep{Samuel2020}, and claims of alignments or planes of satellites \citep{Pawlowski2012,Ibata2013}. Because the Milky Way and M31 experienced particular growth and accretion histories, our models --- which have been calibrated entirely in the Local Group by necessity --- may not accurately describe the satellite populations of a wider, more representative set of groups \citep[e.g.,][]{Carlsten2022,Smercina2022}. While the survey power of the Vera C. Rubin Observatory and Nancy Grace Roman Space Telescope will spur rapid progress in this field, current facilities (e.g., Magellan's Megacam or Subaru's Hyper Suprime-Cam (HSC)) already allow the discovery of faint (e.g., \citealt{Smercina2017}, \citealt{Okamoto2019} in the M81 group) and ultrafaint (e.g., \citealt{Mutlu2022}, \citealt{Sand2022}) galaxies in the Local Volume ($D<5\,$Mpc). The M81 group ($D=3.6$\,Mpc; \citealt{RS11}) is particularly interesting to study. M81 has a rich satellite population; diffuse light searches with the Canada-France-Hawaii Telescope \citep{Chiboucas2013} and more recent resolved-star work \citep{Smercina2017,Okamoto2019} have revealed 17(!) new group members in the past decade. Deep multiband data \citep{Okamoto2015,Smercina2020} are available, allowing the development and testing of satellite search methods (see also \citealt{Mutlu2021}). Furthermore, M81 is undergoing a group-scale interaction involving the recent arrival and tidal disruption of at least two large satellites \citep{yun1994,Okamoto2015,Smercina2020}, offering a possibility to study the impacts of satellite delivery in group accretions \citep{LiHelmi2008,Deason2015,DSouza2021}. Here, we report the discovery using resolved-star techniques of one new M81 group UFD, and we present five lower surface brightness candidate UFDs, with absolute magnitudes reaching toward $M_V \sim -6$. \section{Observations} \label{sec:reduction} We combine two datasets from HSC \citep{HSC}. \citet{Okamoto2015} surveyed the M81 group in $g$ and $i$ bands using 7 HSC pointings (each $\sim$1\fdg5 field of view), centered on M81.
The four eastern pointings have excellent image quality with point-source sizes of 0\farcs7--0\farcs9, and 50\% completeness limits $i\sim26.2$. The three western pointings have worse image quality, and while we analyze them, they do not have competitive depth for dwarf searches \citep{Okamoto2019}. \citet{Smercina2020} surveyed two pointings in each of three ($g,r,i$) filters, chosen to cover the outer regions of M81, M82, and NGC 3077. Image depth was nearly uniform across the two fields, yielding extinction-corrected point-source detection limits of $g\,{=}\,27$, $r\,{=}\,26.5$, and $i\,{=}\,26.2$, measured at $\sim$5$\sigma$. Seeing was relatively stable, resulting in consistent point-source sizes of 0\farcs7--0\farcs8. Both datasets were reduced with the HSC optical imaging pipeline, which is a fork of the Legacy Survey of Space and Time (LSST) pipeline whose main features are regularly re-integrated with the LSST pipeline \citep{Bosch2018}. The pipeline performs photometric and astrometric calibration using the Pan-STARRS1 catalog \citep{Magnier2013}, reporting final magnitudes in the HSC natural system. Sources detected in $i$ band determine reference positions for forced photometry, which is performed on co-added image stacks in all available passbands. All magnitudes were corrected for Galactic extinction following \cite{SFD} adopting the updated extinction coefficients from \cite{Schlafly2011}. For this work, we use size measurements derived using the \texttt{ext\_shapeHSM\_HsmSourceMoments} algorithms, which have been optimized for applications such as weak lensing where shape measurements are critical \citep{Hirata2003,Mandelbaum2005}. In order to search for UFD satellites, we must differentiate between stars at the distance of the M81 group and much more numerous unresolved background galaxies. There is no single perfect method; consequently, we use three different methods. \begin{enumerate} \item Morphology: In each passband, we determine the spatially variable point-spread function (PSF) on $0\fdg25$/$0\fdg4$ scales (in the \citealt{Okamoto2015} and \citealt{Smercina2020} data sets respectively) using bright stars $19<i<22$. Morphologically selected stars are then those objects with sizes in each passband smaller than the PSF size plus $(0\farcs3,0\farcs4)$ in the $(g,i)$ bands for the \citet{Okamoto2015} data set, and $(0\farcs3,0\farcs2,0\farcs3)$ in $(g,r,i)$ for the \citet{Smercina2020} data set. In artificial galaxy detection experiments, these thresholds yielded the faintest detection limits that could be achieved with modest contamination. \item Stellar Locus: For the \citet{Smercina2020} dataset, the three-passband coverage allows one to further select sources to have $g-r$ and $r-i$ colors similar to stars. This selection is described in more detail in \citet{Smercina2020}. \item Nearest Neighbor: The \citet{Smercina2020} data are of uniform enough quality that supervised machine-learning techniques can be used. Nine quantities are used: $i$-band magnitude, $g-r$, $r-i$, and the object size in the R.A.\ and decl.\ directions for each of the $g$, $r$, and $i$ bands. Each quantity was normalized to have an outlier-resistant standard deviation of unity to carry similar weights in the classification. We select a training set of $\sim26,000$ background objects in areas distant from the M81 group galaxies.
In order to assemble a training set of likely stars, we statistically subtract these background objects from the population of objects in a star-rich region of equal area in the stellar envelopes of M82 and NGC 3077. For each background object, the nearest match in the star-rich region is identified in 9-dimensional space, and this object is discarded from the dataset, leaving a sample that is likely to contain primarily stars ($\sim13,000$ objects). These star and background training sets are then used to classify all objects via the majority vote of the 11 nearest neighbors in this 9-dimensional space (using \texttt{sklearn.neighbors}; a minimal sketch is given at the end of this section). Classifications for the background and star-rich regions were generated using alternate regions. The detailed choice of training regions does not affect any of our conclusions. \end{enumerate} \begin{figure*}[ht!] \centering \includegraphics[width=0.85\textwidth]{M81_compcont.pdf} \caption{Left: completeness, quantified using Gaussian kernel density estimation with $\sigma = 0.15$\,mag, as measured using M81 halo stars with HST imaging. While most HST stars have a counterpart in the Subaru catalog (black line), star--galaxy separation techniques are so selective that they discard many real stars up to 1 mag brighter than the nominal 50\% completeness limit of $i\sim 26.3$ (gray line). Right: contamination by galaxies from each selection. The gray line shows the case when the number of contaminants equals the number of recovered stars. Galaxies dramatically outnumber stars in M81's halo; star--galaxy separation cuts down the contamination considerably. \label{fig:compcont}} \end{figure*} We illustrate the performance of the three different star--galaxy separation techniques in Fig.\ \ref{fig:compcont}. We use as ground truth stars in uncrowded regions of M81's halo with Hubble Space Telescope (HST) imaging from the GHOSTS survey \citep{RS11}\footnote{The relatively small number of HST stars allows star--galaxy separation testing but was insufficient to act as a training set for star--galaxy separation.}. The overall completeness remains above 90\% for $i<25.4$, dropping rapidly thereafter (black curve; left panel). The right panels show the contamination $\Sigma_{contamination}$ divided by the number of detected stars, which is the position-dependent density of stars $\Sigma_{stars} (\alpha,\delta)$ multiplied by the completeness $c_{stars}$ (we analyze uncrowded regions and can neglect spatial variations in completeness; \citealt{Smercina2020}). The GHOSTS fields used to quantify contamination have low stellar density and so give contamination measures $\frac{\Sigma_{contamination}}{\Sigma_{stars} (\alpha,\delta) c_{stars}}$ that are higher than would be expected at the positions of our UFD candidates, but give a robust measure of the {\it relative} performance of star--galaxy separation methods. Notwithstanding this limitation, it is clear that star--galaxy separation is crucial --- in the GHOSTS fields, there are 8--14$\times$ more galaxies than stars at such limits (black curve, right panel), motivating stringent star--galaxy separation. The three star--galaxy selections already lower completeness at $i<25.5$ and strongly reduce it for $i>25.5$; contamination by background galaxies is, however, dramatically reduced, permitting a search for UFD candidates that will be less overwhelmed by the clustering of background compact sources.
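To illustrate the nearest-neighbor classification above, a minimal sketch using \texttt{scikit-learn} follows. The feature arrays (\texttt{star\_feats}, \texttt{bkg\_feats}, \texttt{all\_feats}) are hypothetical $(N, 9)$ arrays of the nine quantities listed above, and the IQR-based outlier-resistant scaling is an illustrative choice rather than necessarily the exact estimator we used.

\begin{verbatim}
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# star_feats, bkg_feats, all_feats: hypothetical (N, 9) arrays of
# i, g-r, r-i, and per-band object sizes in R.A. and decl.
X = np.vstack([star_feats, bkg_feats])
y = np.hstack([np.ones(len(star_feats)),    # 1 = star
               np.zeros(len(bkg_feats))])   # 0 = background

# Outlier-resistant normalization: IQR/1.349 approximates a Gaussian
# sigma, so each feature carries a similar weight in the distance.
q25, q75 = np.percentile(X, [25, 75], axis=0)
scale = (q75 - q25) / 1.349

clf = KNeighborsClassifier(n_neighbors=11)  # majority vote of 11
clf.fit(X / scale, y)
is_star = clf.predict(all_feats / scale).astype(bool)
\end{verbatim}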
\section{UFD candidate identification} \label{sec:cand_sel} \begin{figure*} \gridline{\fig{mosaic.pdf}{0.5\textwidth}{} \fig{mag-size.pdf}{0.5\textwidth}{} } \caption{Left: the distribution of known galaxies (orange) and candidate UFDs (red) in the M81 group. The KDE map with a 400\,pc kernel is shown; the $x$- and $y$-axes are the projected distance at the distance of the M81 group in the R.A.\ and decl.\ direction in kpc. The outer parts are from \citet{Okamoto2015}; inside the purple outline the $gri$ \citet{Smercina2020} dataset is used. The definite dwarf M81-dw J0954+6821 is in brick red. The large gray circle shows the location of the background galaxy UGC 5423; small gray circles show rejected overdensities. Top right: the magnitude--size relation for Local Group galaxies (green stars), known M81 satellites (orange), the new M81 definite UFD M81-dw J0954+6821 (brick red), and M81 UFD candidates (red). Lines of constant enclosed surface brightness within the half-light radius are shown in gray. Bottom right: the luminosity function of Milky Way, M31, and M81 galaxies within $D<100$\,kpc for the Milky Way (purple), and $R_{proj} < 100$\,kpc for M31 (cyan) and M81's known satellites (black) and candidates$+$new UFD (red). \label{fig:cand_sel}} \end{figure*} Candidate UFDs will appear as overdensities of objects with the colors and magnitudes expected for metal-poor red giant branch (RGB) stars \citep[e.g.,][]{Martin2008}. We select stars within the $(g-i,i)$ polygon with corners $(0.75, 26.5), (1.45, 26.5), (1.85, 24.2)$, and $(1.4, 24.2)$\footnote{This selection region encloses the metal-poor RGB population characteristic of M81's outer stellar halo (see the 25--35\,kpc and 35--45\,kpc radial bins in Fig.\ 7 of \citealt{Smercina2020}).}. This color--magnitude cut is sufficiently red to avoid contamination by the young stars that are widespread across the whole M81 group \citep[e.g.,][]{Okamoto2015}. In order to identify overdensities with sizes comparable to the half-light radii of $M_V \sim -7$ UFDs in the Local Group, we determine the density of metal-poor RGB stars using kernel density estimation with top-hat kernels of radius 200 and 400\,pc; we sample the distribution on 100\,pc scales\footnote{Tests show that these kernels recover artificial galaxies with properties similar to Local Group UFDs.}. We demand that an overdensity has a Poisson probability of being drawn from the spatially varying background (assessed using a 4\,kpc top-hat radius) of $P<10^{-6}$. Rapid changes in density in the inner parts of bright galaxies yield spurious overdensities; we therefore conservatively exclude any recovered overdensity within 22, 12, 10, 5, and 4 projected kpc from M81, M82, NGC 3077, IKN, and KDG61, respectively. We then determine a more tailored measure of significance by allowing modest shifts in the center and choosing the best significance in a range of apertures between 100 and 800\,pc. Candidates are those objects that have a final probability $P<10^{-7}$ of being drawn from the background by chance alone. We search each dataset separately: the \citet{Okamoto2015} dataset using the Morphology cut, and the \citet{Smercina2020} dataset using all three star--galaxy separation methods. In areas where both datasets overlap, we choose only candidates detected in the \citet{Smercina2020} dataset. Given the search area, probing $\sim 2\times10^5$ independent 200\,pc radius apertures, $\sim 0.02$ candidates would be expected from chance alone.
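The significance criterion above is an upper-tail Poisson probability that can be evaluated directly; in the sketch below the star count, background density, and aperture radius are illustrative values only, and the full procedure also involves the local background estimate and aperture optimization just described.

\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def overdensity_probability(n_stars, bkg_density, radius_pc):
    """P(N >= n_stars) if the aperture counts were drawn purely from
    the local background: the quantity thresholded at P < 1e-7."""
    mu = bkg_density * np.pi * radius_pc**2   # expected background count
    return poisson.sf(n_stars - 1, mu)        # survival fn, P(N >= n)

# e.g., 15 RGB stars in a 200 pc aperture over a background of
# 2e-5 stars per pc^2 (illustrative numbers):
p = overdensity_probability(15, 2e-5, 200.0)
print(f"P = {p:.2e}")   # candidate if P < 1e-7
\end{verbatim}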
We quantify the expected degree of contamination by artifacts or groups of background galaxies by analyzing $gri$ archival data taken by the Subaru Strategic Program: a four-pointing deep HSC mosaic in the COSMOS field \citep{Aihara2022}, where the probability of finding a real Local Volume UFD is very low. This has similar area and depth but slightly worse seeing. The morphological (stellar locus) cut gives three (one) candidates across the four HSC fields with $P<10^{-7}$. In the M81 datasets, we recover all known dwarf galaxies in the search area (Fig.\ \ref{fig:cand_sel}, left) in addition to the background galaxy UGC 5423 ($D \sim 9$\,Mpc; large gray circle). We find 11 additional candidates in the seven-pointing \citet{Okamoto2015} mosaic using the morphology-only cut ($\sim6$ would be expected from our analysis of COSMOS). After experimentation, we found no robust algorithmic methods for rejecting clumps of background objects or spurious detections, so we visually inspect each candidate. Eight candidates are clearly spurious (artifacts near bright stars, field edges, or galactic cirrus) and are discarded completely. Two overlap with the deep coverage with more stringent star--galaxy separation, and are vetoed by that deeper dataset. One candidate --- M81-dw J0954+6821 --- is compact enough to show diffuse surface brightness; furthermore, the diffuse brightness is bluer in color than the resolved RGB stars, as is expected for a partially resolved dwarf galaxy (as the diffuse light is dominated by bluer subgiants and main-sequence turnoff stars; see e.g., Figure 1 of \citealt{Sand2022}). On this basis, we argue that it is a clear dwarf galaxy (brick red circle in Fig.\ \ref{fig:cand_sel}; Table \ref{Table}). We find eight candidates in the two-pointing deep dataset with $\ge5$ stars after background subtraction, where we would expect only 0.5 spurious candidates from our COSMOS analysis. One candidate is close to a bright star and is rejected outright. Seven candidates remain (Table \ref{Table}). One of them is close enough to a bright background galaxy that we are concerned the point sources might be globular clusters around that galaxy (an important contaminant of candidate M31 UFDs; \citealt{Martin2013}). Another appears to be a background galaxy group, given a clear concentration of blue compact sources in that candidate's color--magnitude diagram (CMD). The remaining five appear to be stellar in nature, and we retain them as candidates, shown as red circles in Fig.\ \ref{fig:cand_sel} (see Table \ref{Table}). We show postage stamps of the candidates in Fig.\ \ref{fig:postage_stamps}. Fig.\ \ref{fig:CMDs} shows background-corrected CMDs for each candidate. We show objects within the 80\% light radius (determined from the fits described in \S \ref{sec:prop_dist}). In order to background-subtract the CMD, we choose a background annulus of equal area (with inner radius $3.5\,r_{e}$), and for every object in that background annulus we find the closest match in color--magnitude space among objects within the 80\% light radius and discard it from the CMD. Only the remaining objects --- those that are in excess of the background in that region --- are shown in Fig.\ \ref{fig:CMDs}. Gray symbols show background-corrected morphologically selected `stars'; red symbols (in panels 2--6) show stellar-locus-selected stars\footnote{The blue `stars' in M81-dw J1004+6835's CMD are likely to be real, from young stars in M81's {\sc Hi} tidal field \citep{Okamoto2015}.}.
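This statistical CMD subtraction can be implemented with a nearest-neighbor match in color--magnitude space; a minimal sketch follows (array names hypothetical; the brute-force tree rebuild is adequate for the small samples involved).

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def subtract_background_cmd(cand_cmd, bkg_cmd):
    """Statistically background-subtract a candidate's CMD.

    cand_cmd: (N, 2) array of (g - i, i) inside the 80% light radius.
    bkg_cmd:  (M, 2) array for an equal-area background annulus.
    For each background object, the closest remaining candidate object
    in color-magnitude space is removed; the survivors are the excess.
    """
    remaining = np.asarray(cand_cmd, dtype=float).copy()
    for point in bkg_cmd:
        if len(remaining) == 0:
            break
        _, idx = cKDTree(remaining).query(point)  # nearest CMD match
        remaining = np.delete(remaining, idx, axis=0)
    return remaining
\end{verbatim}

One design note: for a stricter match one might weight color and magnitude differently before the distance query; the unweighted Euclidean metric here is an assumption.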
\newcolumntype{s}{!{\extracolsep{-4pt}}l!{\extracolsep{0pt}}} \newcolumntype{p}{!{\extracolsep{-4pt}}c!{\extracolsep{0pt}}} \begin{deluxetable*}{spssssss} \tablecaption{Dwarf galaxy candidates and likely contaminants in the M81 group\label{Table}} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{Number} & \colhead{R.A.\ (J2000)} & \colhead{Decl.\ (J2000)} & \colhead{$M_V$} & \colhead{$r_e$} & \colhead{Data Set} & \colhead{Note} \\ \colhead{} & \colhead{} &\colhead{(deg)} & \colhead{(deg)} & \colhead{(mag)} & \colhead{(pc)} & \colhead{} & \colhead{} } \startdata M81-dw J0954+6821$^a$ & 1 & $148.5292\pm0.0008$ & $68.3641 \pm 0.0003$ & $-7.1\pm0.25$ & $78\pm8$ & Okamoto & Definite dwarf \\ M81-dw J0959+6837 & 2 & $149.7931\pm0.0037$ & $68.6212_{-0.0013}^{+0.0022}$ & $-6.5_{-0.6}^{+0.7}$ & $230_{-130}^{+310}$ & Smercina & \\ M81-dw J1000+6841 & 3 & $150.0402_{-0.0055}^{+0.0070}$ & $68.6855_{-0.0014}^{+0.0017}$ & $-7.0_{-0.5}^{+0.6}$ & $360_{-140}^{+190}$ & Smercina & \\ M81-dw J1001+6907 & 4 & $150.4039\pm0.0037$ & $69.1224\pm0.0014$ & $-6.6_{-0.4}^{+0.5}$ & $220_{-80}^{+190}$ & Smercina & \\ M81-dw J1002+6903 & 5 & $150.7405\pm0.0035$ & $69.0559_{-0.001}^{+0.0014}$ & $-6.4_{-0.4}^{+0.5}$ & $200_{-70}^{+150}$ & Smercina & \\ M81-dw J1004+6835 & 6 & $151.1613\pm0.0071$ & $68.5916\pm0.0024$ & $-7.5_{-0.4}^{+0.5}$ & $520\pm160$ & Smercina & Superimposed on \\ & & & & & & & NGC 3077 tidal debris \\ \hline M81-dw J1008+6856 & \nodata & $152.0104_{-0.0080}^{+0.0033}$ & $68.9367_{-0.0019}^{+0.0022}$ & $-6.5_{-0.5}^{+0.8}$ & $470_{-410}^{+230}$ & Smercina & Likely Background \\ & & & & & & & Galaxy Cluster \\ M81-dw J1003+6901 & \nodata & $150.9062\pm0.0038$ & $69.0187\pm0.0018$ & $-6.2_{-0.5}^{+0.6}$ & $220_{-100}^{+270}$ & Smercina & Possible background GCs \\ \enddata \tablecomments{$^a$ M81-dw J0954+6821 has high enough surface brightness to have a well-measured ellipticity $b/a \sim 0.55\pm0.1$ and PA$\sim 25\pm8$. } \end{deluxetable*} \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{m81_gri_ps.pdf} \caption{The distribution of known dwarf galaxies (orange) and our sample (brick red shows the definite dwarf M81-dw J0954+6821, labeled 1; red shows the other candidates, labeled 2--6). The background is the SDSS $gri$ mosaic of the M81 region. We also show $gri$ postage stamps of the candidates. Red circles show likely metal-poor RGB stars in the postage stamps, and the orange circles show the 50\% and 90\% light radii. \label{fig:postage_stamps}} \end{figure*} \begin{figure*} \plotone{M81_dw_cmds.pdf} \caption{Background-subtracted CMDs within the 80\% light radius. Morphologically selected stars are shown in gray; in candidates 2--6 with $gri$ coverage, stellar-locus-selected stars with colors consistent with a metal-poor RGB are shown in red. Representative error bars are shown in the rightmost panels. A 13.1\,Gyr ${\rm[M/H]}=-1.6$ isochrone is shown in red, and the selection region for RGB stars is shown in blue. In all cases, objects with close color--magnitude matches in a background annulus of equal area were subtracted from the CMD. \label{fig:CMDs}} \end{figure*} \section{The properties and distribution of dwarf and ultrafaint dwarf candidates} \label{sec:prop_dist} \subsection{Candidate properties} In order to estimate candidate properties, we assume that each is at the M81 group distance of $D=3.6$\,Mpc. We follow \citet{Martin2008} in fitting a two-dimensional exponential profile with uniform background to the distribution of detected stars. 
The candidates have so few stars that their ellipticity and position angle are virtually unconstrained. We therefore instead fit a four-parameter model: R.A., decl., the major-axis half-light radius $r_e$, and the number of stars. The fit is performed with the Markov Chain Monte Carlo sampler \texttt{emcee} \citep{ForemanMackey2013}, using uniform priors on all parameters except position; a Gaussian prior on position is applied with $\sigma = 400$\,pc in each direction. We estimate the absolute magnitude by scaling the observed number of stars by the magnitude-dependent completeness (Fig.\ \ref{fig:compcont}; this correction is roughly a factor of two) and the absolute magnitude per detected RGB star, estimated from a 13.3\,Gyr old ${\rm [M/H]}=-1.6$ Padova isochrone\footnote{\url{http://stev.oapd.inaf.it/cmd}, using the PARSEC evolutionary tracks version 1.2S \citep{Bressan2012}.} (given the ages and metallicities of similarly luminous UFDs; \citealt{Brown2014}, \citealt{Simon2019}; shown in Fig.\ \ref{fig:CMDs}). Dwarf galaxy M81-dw J0954+6821 has high surface brightness and is crowded; we therefore directly calculate the flux and half-light radius from the image itself. The results of these fits are given in Table \ref{Table}. M81-dw J0954+6821 has a high surface brightness within the half-light radius ($\langle \mu_V \rangle_{<r_e} \sim $26 mag\,arcsec$^{-2}$; Fig.\ \ref{fig:cand_sel}a) and is a definite dwarf galaxy. It has well-measured parameters: $r_e=78\pm8$\,pc, $b/a = 0.5\pm0.1$, PA$=25\deg$, and $M_V = -7.1\pm0.25$. Its magnitude error is dominated by background uncertainty. Its CMD (Fig.\ \ref{fig:CMDs}) is sparse, at least partially due to crowding. In contrast, our most diffuse ($\langle \mu_V \rangle_{<r_e} \sim $29.5 mag\,arcsec$^{-2}$) candidate is M81-dw J1004+6835 --- it is just $\sim 10\,$kpc south of NGC 3077 and is superimposed on a rich stellar population from NGC 3077 itself. The other four candidates have $M_V$ between $-6.5$ and $-7$ and $r_e$ values between 200 and 350\,pc. Owing to their extremely low surface brightnesses ($\langle \mu_V \rangle_{<r_e} \sim $29 mag\,arcsec$^{-2}$), no diffuse light can be detected; deeper CMD data from HST or JWST will be required for confirmation. It is clear why these candidates have so far evaded detection: all are fainter than all known M81 group dwarf galaxies, and all but one have much lower surface brightness than known M81 dwarfs. Yet the candidates' properties are not unexpected, overlapping with the ranges of magnitudes and sizes of dwarf galaxies and UFDs in the Local Group (Fig.\ \ref{fig:cand_sel}, top right). Within or near the Local Group, there are two analogs to the relatively compact galaxy M81-dw J0954+6821: Pegasus V/Andromeda XXXIV ($D=690$\,kpc; \citealt{Collins2022}) and Tucana B ($D=1.4$\,Mpc; \citealt{Sand2022}). In their relatively high luminosities and large sizes, M81-dw J1004+6835 and M81-dw J1000+6841 are most similar to Canes Venatici I/Andromeda IX and Hercules/Andromeda XXIV, respectively. The remaining dwarfs are analogs of Bo\"otes I, or equivalently Andromeda XIII or XXII. \subsection{The spatial distribution of candidates} Fig.\ \ref{fig:cand_sel} shows that candidates are not distributed uniformly in the M81 group. One might have expected the M81 group satellites to be clustered around M81 itself, or potentially, given the evidence for satellite infall with the Magellanic Clouds, around M82, M81's largest satellite.
Yet, instead, they are clustered around NGC 3077. One immediate implication is that most of these candidates are likely to be real --- our COSMOS blank-field study shows that spurious candidates should be much more uniformly distributed. Given the spatially varying seeing, fully accounting for completeness requires forward modeling of expected satellite distributions and is beyond the scope of this work. The \citet{Smercina2020} dataset has uniform depth, permitting instead a preliminary estimate of significance (sketched in code below). In the northern field, excluding M82, there is one known fainter satellite. In NGC 3077's field, excluding NGC 3077, there are five known fainter satellites and five new candidates (10 total). The chance of drawing 10 satellites from a Poisson distribution if the mean is 1 is $\sim 10^{-8}$; alternatively, the chance of drawing one satellite if the mean is 10 is $\sim 5 \times 10^{-4}$. If the mean is 5.5 (the average), the chance of drawing one or fewer for one draw and 10 or more for the other is $\sim 0.027 \times 0.025$, or $7 \times 10^{-4}$. We conclude that there is less than a $\sim 7 \times 10^{-4}$ chance that this difference in satellite counts arises from chance alone\footnote{Including satellites or candidates outside the \citet{Smercina2020} footprint, excluding those galaxies that are closer to NGC 2976 with R.A.$<148^{\circ}$ or decl.$<68^{\circ}$ (2 vs.\ 14), $P<2 \times 10^{-4}$ instead. Restricting our attention conservatively to clear dwarf galaxies in this area (2 vs.\ 9) gives $P<0.5\%$.}. \section{Discussion} \label{sec:disc} Assuming that these candidates are real, and neglecting completeness corrections, we illustrate the impact of these new discoveries on the M81 group luminosity function within a projected radius of $R_{proj} < 100$\,kpc (Fig.\ \ref{fig:cand_sel}, bottom right), in comparison with $D < 100$\,kpc Milky Way satellites (purple; \citealt{DW2020}) and $R_{proj} < 100$\,kpc M31 satellites (cyan; \citealt{McConnachie2018}). The new dwarf and these candidates extend the M81 group luminosity function faintward by 1.5 mag (or a factor of 4 in luminosity). Despite M81's stellar mass being comparable to those of the Milky Way and M31 (with $M_*/10^{10} M_{\odot} \sim 6$, 6, and 10, respectively; \citealt{Bell2017}), the M81 group is richer than either the Milky Way or M31 within equivalent radii. At this stage, we refrain from ascribing this difference to a single cause, as several factors should (or have been observed to) correlate with the overall number of satellites --- e.g., the stellar mass of the host, the virial mass of the DM halo, and the delivery of satellites by recent group accretions \citep{Carlsten2021,DSouza2021,Smercina2022}. One of the most interesting features of these candidates is that all of them are projected close to NGC 3077. This clustering strengthens the recognition by \citet{Chiboucas2013} of the M81 group's highly flattened satellite system and is reminiscent of M31's asymmetric satellite system, where 80\% of its satellites lie on the near side of M31 (Fig.\ 13 in \citealt{Savino2022}). Clearly, confirmation of the candidates via deep high-resolution HST or JWST photometry and velocity measurements (via semi-resolved spectroscopy, following, e.g., \citealt{Toloba2016}) would be very valuable for understanding this association with NGC 3077. The spatial coincidence of NGC 3077 and most of the M81 group satellites is surprising.
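As an aside, the satellite-count Poisson estimates above can be reproduced with a few lines of Python (our sketch; the quoted rounded values depend on whether the upper tail is taken at exactly 10 or at $\ge 10$):
\begin{verbatim}
# Sketch of the satellite-count significance estimate; the counts
# (1 vs. 10 fainter satellites) follow the text.
from scipy.stats import poisson

# Northern field: P(>= 10 satellites) if the true mean were 1.
p_north = poisson.sf(9, mu=1.0)    # sf(9) = P(X >= 10)
# NGC 3077 field: P(<= 1 satellite) if the true mean were 10.
p_south = poisson.cdf(1, mu=10.0)  # ~5e-4
# Split the 11 satellites evenly (mean 5.5) between the fields.
p_le1 = poisson.cdf(1, mu=5.5)     # ~0.027
p_ge10 = poisson.sf(9, mu=5.5)     # upper tail at >= 10
print(p_north, p_south, p_le1 * p_ge10)
\end{verbatim}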
Since satellite number should scale with DM halo mass (e.g., \citealt{Jiang2015}), M81 should host most of the satellites. Yet the satellite distribution is extremely asymmetric, indicating that many of the M81 group's satellites were recently accreted as a group and have not yet had time to phase-mix (as discussed by, e.g., \citealt{DSouza2021}). This work adds further evidence that satellites of satellites are important in building up the satellite populations of Milky Way--mass galaxies (see also, e.g., \citealt{LiHelmi2008}, \citealt{Deason2015}, \citealt{Patel2020}, \citealt{DSouza2021}). In this picture, one would expect most of the recently arrived satellites to be associated with M82, which is clearly undergoing tidal disruption \citep{Okamoto2015,Smercina2020} and has a stellar mass 10$\times$ larger than that of the next most massive satellite, NGC 3077 \citep{Smercina2020}. M82 should then have had a more massive DM halo, and therefore should have delivered a substantial number of satellites \citep{Smercina2022}. Yet the satellites are spatially clustered around NGC 3077. It is possible that these satellites were instead delivered by NGC 3077, and that for some reason M82 had few satellites. Yet it is also possible that these satellites, and NGC 3077 itself, were M82's satellites. Satellites are stripped relatively early as a group falls into a larger potential well, while M82, owing to its much larger mass, may be subject to much stronger dynamical friction \citep{DSouza2021}. It is therefore possible that these satellites were in fact previously all part of M82's group but were `left behind' as M82 lost energy to dynamical friction while merging with M81. \section{Conclusions} \label{sec:conc} In this letter, we report the discovery, using resolved-star techniques, of one new M81 group UFD (similar to Tucana B) and present five lower surface brightness candidate UFDs (similar to Canes Venatici I, Hercules, and Bo\"otes I), with absolute magnitudes reaching toward $M_V \sim -6$. While these candidates, with $\langle \mu_V \rangle_{<r_e}$ typically between 28 and 29.5 mag\,arcsec$^{-2}$, require HST or JWST follow-up for confirmation, blank-field searches with comparable areas yield $<1$ candidate, and the properties of these candidates overlap with those of Local Group UFDs, suggesting that most or all of them should be real. The candidates are not distributed uniformly but instead cluster strongly around NGC 3077 --- the third-brightest galaxy in the central parts of the M81 group --- at the $>$99.9\% significance level. This underlines the importance of group accretion in shaping the satellite populations of nearby galaxies. However, it also raises a puzzle that remains unresolved: why do M81 and M82 --- both more massive in stars, and likely more massive in DM --- not host more satellites? \begin{acknowledgments} This work was partly supported by the National Science Foundation through grant NSF-AST 2007065 and by the WFIRST Infrared Nearby Galaxies Survey (WINGS) collaboration through NASA grant NNG16PJ28C through subcontract from the University of Washington. AM gratefully acknowledges support by FONDECYT Regular grant 1212046 and by the ANID BASAL project FB210003, as well as funding from the Max Planck Society through a ``PartnerGroup'' grant. This research has made use of NASA's Astrophysics Data System Bibliographic Services. Based on observations utilizing the Pan-STARRS1 Survey.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard--Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. Based on observations obtained at the Subaru Observatory, which is operated by the National Astronomical Observatory of Japan, via the Gemini/Subaru Time Exchange Program. We thank the Subaru support staff for invaluable help preparing and carrying out the observing run. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \software{\texttt{HSC Pipeline} \citep{Bosch2018}, \texttt{Matplotlib} \citep{matplotlib}, \texttt{NumPy} \citep{numpy}, \texttt{Astropy} \citep{astropy}, \texttt{SciPy} \citep{scipy}, \texttt{Scikit-learn} \citep{scikit-learn}} \end{acknowledgments} \input{m81ufd.bbl} \bibliographystyle{aasjournal} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In string theory, it has been suggested that the landscape of vacua is vast, and a consistent quantum theory of gravity is believed to admit a formulation in terms of consistent low-energy effective field theories (EFTs). More recently, the authors of Refs.\cite{Obied:2018sgi,Agrawal:2018own,Ooguri:2018wrx} suggested that the landscape is possibly surrounded by an even vaster neighborhood called the \lq\lq swampland\rq\rq, where seemingly consistent EFTs coupled to gravity are nevertheless inconsistent with a quantum theory of gravity. More concretely, the swampland can be formulated as the set of consistent effective field theories which cannot be completed into any quantum theory of gravity in the high-energy regime. Hence, it is desirable for consistent EFTs not to lie in the swampland. See Ref.\cite{Palti:2019pca} for a comprehensive review of the swampland. In light of recently proposed swampland conjectures, which can be translated into inequalities on the potential for the scalar field driving inflation, we consider \begin{eqnarray} \frac{|\Delta \chi|}{M_{P}}&\lesssim&\Delta \sim 1,\label{e1}\\ M_{P}\frac{|U'(\chi)|}{U(\chi)}&\gtrsim& c \sim 1,\label{e2} \end{eqnarray} where $M_{P}$ is the reduced Planck mass, $\Delta, c$ are positive constants of order unity, and $U(\chi)$ is the potential of a canonically normalized field $\chi$. Eq.(\ref{e1}) requires the effective field theory description to remain valid across the traversed region of field space. The conjectures require the above inequalities to be satisfied by any EFT which has a self-consistent ultraviolet (UV) completion. Notice that single-field inflation apparently satisfies the first condition, and the field variation is in excellent agreement with the well-known Lyth bound for single-field inflation \cite{Lyth:1996im}. However, the second condition poses difficulties for the single-field paradigm \cite{Agrawal:2018own,Garg:2018reu,Dimopoulos:2018upl,Matsui:2018bsy}, since it is in tension with the slow-roll condition. This tension has been explored in a number of recent works; see for example Refs.\cite{Kehagias:2018uem,Achucarro:2018vey,Kinney:2018nny,Das:2018hqy,Ashoorioon:2018sqb,Brahma:2018hrd,Lin:2018rnx,Motaharfar:2018zyb,Lin:2018kjm,Holman:2018inr}. Interestingly, however, more complex models like, among many others, multifield inflation \cite{Achucarro:2018vey} and warm inflation \cite{Motaharfar:2018zyb} are still allowed. The swampland criteria have also been investigated in alternative theories of gravity, see for instance \cite{Yi:2018dhl,Heisenberg:2019qxz,Brahma:2019kch}. In \cite{Artymowski:2019vfy}, the viability of $f(R)$ and Brans-Dicke theories of gravity was also investigated. Very recently, the \lq\lq refined\rq\rq\ version of the swampland conjecture has been suggested \cite{Garg:2018reu,Ooguri:2018wrx}. It was found that the refined conjecture imposes a slightly weaker criterion on the scalar field potential in inflation, and is consistent with the existence of a tachyonic instability. In light of the refined swampland conjecture, a scalar field potential associated with a self-consistent UV-complete effective field theory must satisfy one of the two conditions: \begin{eqnarray} \Bigg(M_{P}\frac{|U'|}{U}\gtrsim c\Bigg)\quad{\rm or}\quad \Bigg(M^{2}_{P}\frac{U''}{U}\lesssim-c'\Bigg) \end{eqnarray} where $c$ and $c'$ are constants of order unity. In the present work, we consider the implications of this slightly weaker constraint on a deformation of Starobinsky gravity.
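As a concrete illustration of how the two conditions are applied (our sketch, not part of any cited analysis), the following Python snippet evaluates both quantities for a toy exponential potential $U(\chi) \propto e^{-k\chi}$ in units where $M_{P}=1$:
\begin{verbatim}
# Toy check of the refined swampland conditions for
# U(chi) = exp(-k*chi), with M_P = 1 (illustrative choice).
import sympy as sp

chi, k = sp.symbols('chi k', positive=True)
U = sp.exp(-k * chi)

slope = sp.simplify(sp.Abs(sp.diff(U, chi)) / U)   # |U'|/U
curv = sp.simplify(sp.diff(U, chi, 2) / U)         # U''/U

print(slope)  # k    -> first condition holds iff k >~ c ~ 1
print(curv)   # k**2 -> positive, so the second condition fails
\end{verbatim}
A pure exponential thus satisfies the first condition for a steep enough slope but can never satisfy the second, which requires a tachyonic (negative) curvature.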
It is worth noting that a different model of deformed Starobinsky inflation was studied in Ref.\cite{Sebastiani:2013eqa}, while log corrections to $R^{2}$ gravity were investigated in Refs.\cite{Myrzakulov:2014hca,Bamba:2014jia}. Regarding the refined version, there exist relevant discussions of general constraints on inflation and other cosmological/astrophysical models, see Refs.\cite{Kinney:2018kew,Haque:2019prw,Andriot:2018mav,Fukuda:2018haz,Garg:2018zdg, Park:2018fuj,Schimmrigk:2018gch,Cheong:2018udx, Chiang:2018lqx}. This paper is organized as follows. In Sec.\ref{sec2}, we present a summary of deformed Starobinsky gravity, following Ref.\cite{Codello:2014sua}. Here we linearize the action by introducing an auxiliary field and use conformal transformations to map the theory from the Jordan frame to the Einstein frame. This transformation connects the two frames and allows us to rewrite the action in terms of a propagating scalar field minimally coupled to gravity. Sec.\ref{swa} is devoted to discussing the theoretical viability of deformed Starobinsky gravity in light of the recent refined swampland conjectures; here we constrain the model parameters using the conjecture. Our conclusions are reported in the last section. \section{Deformed Starobinsky gravity: a short recap}\label{sec2} Among many other possibilities, an intriguing one is that gravity itself can be directly responsible for the inflationary period of the universe. Examining this possibility requires us to go beyond the time-honored Einstein--Hilbert (EH) action. This can be achieved by adding an $R^{2}$ term to the original EH theory, as in the Starobinsky model \cite{Starobinsky:1980te}. The model is highly natural since gravity itself drives cosmic inflation without the need for additional scalar fields. It is worth noting that the model predicts a nearly vanishing tensor-to-scalar ratio, which is in excellent agreement with observations, e.g., PLANCK data \cite{Ade:2015lrj,Akrami:2018odb}. In light of the observations, a discovery of primordial tensor modes can be used to constrain the cosmological parameters at the inflationary scale, which turn out to be close to or at the grand unification energy scale. In general, the effective action for gravity can in principle be derived by considering a Taylor expansion in the Ricci scalar $R$. Here, without assuming a concrete form for the function $f(R)$, we consider \begin{eqnarray} {\cal S}=\int d^{4}x\sqrt{-g}f(R)\equiv\int d^{4}x\sqrt{-g}\Big[a_{0}+a_{1}R+a_{2}R^{2}+...\Big]\,.\label{R} \end{eqnarray} The first term $a_{0}$ acts like a cosmological constant and must be small. The next coefficient $a_{1}$ can be set to one as in general relativity. Regarding Starobinsky gravity, we have $a_{2}=1/(6M^{2})$, where the constant $M$ has the dimensions of mass; see Ref.\cite{Chatrabhuti:2015mws} for cosmological implications of the model. Here the ellipsis may include the Weyl tensor squared $C^{2}$ and the Euler topological term $E$. As mentioned in Ref.\cite{Codello:2014sua}, the $E$ term can be ignored since it is just a total derivative. Moreover, the Weyl terms are subleading when gravity is quantized around a flat background. Higher powers of $R,\,C^{2}$ and $E$ are naturally suppressed by the Planck mass. Interestingly, the authors of Ref.\cite{Codello:2014sua} also take into account marginal deformations of the action (\ref{R}) by including logarithmic corrections.
The authors consider a simple form of the gravitational action formulated in the Jordan frame: \begin{eqnarray} {\cal S}_{J}=\int d^{4}x\sqrt{-g}\Big[-\frac{M^{2}_{P}}{2}R+hM^{4\alpha}_{P}R^{2(1-\alpha)}\Big]\,,\label{RJ} \end{eqnarray} where $h$ is a dimensionless parameter and $\alpha$ is a real parameter assumed to satisfy $2|\alpha|<1$. Note that this condition on the parameter $\alpha$ is further examined in the context of gravity's rainbow \cite{Channuie:2019kus}. One can linearize the above action by introducing an auxiliary field $y$ such that ${\cal S}_{J}=\int d^{4}x\sqrt{-g}\Big[f(y)+f'(y)(R-y)\Big]$ with $f(R)=-M^{2}_{P}R/2+hM^{4\alpha}_{P}R^{2(1-\alpha)}$, where $f'(y)=df(y)/dy$. Here the equation of motion for $y$ implies $R=y$ provided $f''(y)$ does not vanish. The relation between (\ref{R}) and the effective quantum-corrected nonminimally coupled scalar field theory used in Ref.\cite{Joergensen:2014rya} can be made explicit by introducing the conformal mode $\psi=-f'(y)$ with $V(\psi)=-y(\psi)\psi-f(y(\psi))$ and the mass-dimension-one real scalar field $\varphi$ defined via $2\psi-M^{2}_{P}=\xi\varphi^{2}$ \cite{Codello:2014sua}, obtaining: \begin{eqnarray} {\cal S}_{J}=\int d^{4}x\sqrt{-g}\Big[-\frac{M^{2}_{P}+\xi\varphi^{2}}{2}R+V(\varphi)\Big]\,,\label{RJp} \end{eqnarray} where \begin{eqnarray} V(\varphi)=\lambda\varphi^{4}\Big(\frac{\varphi}{M_{P}}\Big)^{4\gamma}\,\,\,{\rm with}\,\,\,\alpha=\frac{\gamma}{1+2\gamma},\label{RJpa} \end{eqnarray} and \begin{eqnarray} h^{1+2\gamma}=\Big(\frac{\xi(1+2\gamma)}{4(1+\gamma)}\Big)^{2(1+\gamma)}\frac{1}{\lambda(1+2\gamma)}.\label{RJpb} \end{eqnarray} Notice from Eq.(\ref{RJp}) that the kinetic term for the field $\varphi$ is absent in the Jordan frame. However, a kinetic term for the field is generated by the following conformal transformation of the metric: \begin{eqnarray} {\tilde g}_{\mu\nu}=\Omega(\varphi)^{2}g_{\mu\nu}\,,\,\,{\rm with}\,\,\Omega(\varphi)=1+\frac{\xi\varphi^{2}}{M^{2}_{P}}.\label{conf} \end{eqnarray} The above transformation connects both frames and allows us to rewrite the action in terms of a propagating scalar field minimally coupled to gravity. The resulting action is written in the Einstein frame and takes the form: \begin{eqnarray} {\cal S}_{E}=\int d^{4}x\sqrt{-g}\Big[-\frac{M^{2}_{P}}{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi-U(\chi)\Big]\,,\,\,\,U(\chi)=\Omega^{-4}V(\varphi(\chi)).\label{REp} \end{eqnarray} Notice that the action is written in terms of the canonically normalized field $\chi$, which is related to $\varphi$ via \cite{Codello:2014sua} \begin{eqnarray} \frac{1}{2}\Big(\frac{d\chi}{d\varphi}\Big)^{2}=\frac{M^{2}_{P}(\sigma M^{2}_{P}+(\sigma+3\xi)\xi\varphi^{2})}{(M^{2}_{P}+\xi\varphi^{2})^{2}}.\label{kin} \end{eqnarray} It is worth noting that when setting $\sigma = 0$, a map from the Jordan frame of $f(R)$ gravity to the Einstein frame with a canonically normalized field is obtained. Throughout this work, we set $\sigma=0$. An explicit relation between $\chi$ and $\varphi$ can be obtained by assuming that inflation occurs at large values of the scalar field, i.e., $\varphi\gg M_{P}/\sqrt{\xi}$, and we obtain \begin{eqnarray} \chi\simeq \sqrt{6} M_{P}\log\Big[\frac{\sqrt{\xi}\varphi}{M_{P}}\Big].
\end{eqnarray} Substituting the above canonically normalized field into Eq.(\ref{RJpa}), the Einstein-frame potential takes the form \begin{eqnarray} U(\chi) = \frac{\lambda M^{4}_{P}}{\xi^{2}}\Bigg(1+\exp\Bigg(\frac{-2\chi}{\sqrt{6} M_{P} }\Bigg)\Bigg)^{-2}\xi^{-2\gamma}\exp\Bigg(\frac{4\gamma\chi}{\sqrt{6} M_{P} }\Bigg)\,,\,\,\gamma=\frac{\alpha}{1-2\alpha}.\label{UU} \end{eqnarray} It is worth noting that in the limit $\gamma=0$ (or equivalently $\alpha=0$) one recovers the Starobinsky model. It is pointed out in Ref.\cite{Codello:2014sua} that for $0 < \alpha < 0.5$ the potential grows exponentially, so that an inflationary model with nonzero primordial tensor modes can be successfully obtained. Note that the exact Einstein-frame potential of $f(R)=R+\lambda R^{p}$, with $p$ not necessarily an integer, was also derived in Ref.\cite{Motohashi:2014tra}. \section{The refined Swampland criteria in deformed Starobinsky gravity}\label{swa} We note that in the previous section we adopted a conformal transformation and mapped the theory from the Jordan to the Einstein frame. Differentiating the potential with respect to the field $\chi$, we obtain \begin{eqnarray} \frac{\partial U(\chi)/\partial\chi}{U} = \frac{4 \gamma +4 \gamma e^{\frac{2 \chi }{\sqrt{6} M_{P}}}+4}{\sqrt{6} M_{P} e^{\frac{2 \chi }{\sqrt{6} M_{P}}}+\sqrt{6} M_{P}}.\label{rat} \end{eqnarray} Hence the first condition of the refined swampland conjecture (\ref{e2}) suggests the following inequality: \begin{eqnarray} c\lesssim \frac{4 \gamma +4 \gamma \exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)+4}{\sqrt{6}\left(1 +\exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)\right)}.\label{c2} \end{eqnarray} In establishing the connection between the swampland conditions and the parameters of the model, we consider the well-known inflationary parameters, i.e., the scalar spectral index $n_{s}$ and the tensor-to-scalar ratio $r$, which in the standard formulation read \begin{eqnarray} n_{s}=1-6\epsilon_{U}+2\eta_{U},\,\,r=16\epsilon_{U}, \label{ns} \end{eqnarray} where in terms of the potential the slow-roll parameters are defined as \begin{eqnarray} \epsilon_{U}=\frac{M^{2}_{P}}{2}\left(\frac{U'}{U}\right)^{2}\,,\,\,\eta_{U}=M^{2}_{P}\left(\frac{U''}{U}\right), \end{eqnarray} where primes denote derivatives with respect to the field $\chi$. Using the above expressions, we can recast Eq.(\ref{ns}) in terms of the inflaton field $\chi$ as \begin{eqnarray} 1-n_{s}&=&\frac{16 \left(2 \gamma +\gamma ^2 \left(\exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)+1\right)+1\right)}{\sqrt{6}^2 \left(\exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)+1\right)},\nonumber\\r&\lesssim& 8c^{2}\approx 8\Bigg(\frac{4 \gamma +4 \gamma \exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)+4}{\sqrt{6}\left(1 +\exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)\right)}\Bigg)^{2},\label{nsr} \end{eqnarray} where we have used the potential from Eq.(\ref{UU}). The first expression of (\ref{nsr}) allows us to rewrite the inflaton field in terms of the parameters $\gamma$ and $n_{s}$: \begin{eqnarray} \chi = \frac{1}{2} \sqrt{6} M_{P}\log \left(\left|\frac{16 \gamma ^2+32 \gamma -\sqrt{6}^2 (1-n_{s})+16}{\sqrt{6}^2 (1-n_{s})-16 \gamma ^2}\right|\right).\label{ci} \end{eqnarray} Combining Eqs.(\ref{ci}) and (\ref{c2}) shows that particular values of $c$ yield values of the scalar spectral index $n_{s}$ that fit within the region allowed at the $2\sigma$ level; a numerical cross-check of this substitution is sketched below.
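Before quoting the closed form, we note that the substitution can be checked numerically. The following sketch (ours; $M_{P}=1$, valid for small $\alpha$ where the argument of the logarithm in Eq.(\ref{ci}) is positive) compares the bound built from Eqs.(\ref{ci}) and (\ref{c2}) with the closed-form expression given below:
\begin{verbatim}
# Numerical cross-check of the substitution (M_P = 1).
import numpy as np

def c_bound_from_chi(gamma, ns):
    # exp(2*chi/sqrt(6)) from Eq. (ci); abs() is harmless here
    # because the argument is positive for small alpha.
    E = abs((16*gamma**2 + 32*gamma - 6*(1 - ns) + 16)
            / (6*(1 - ns) - 16*gamma**2))
    return (4*gamma*(1 + E) + 4) / (np.sqrt(6)*(1 + E))

def c_bound_closed_form(alpha, ns):
    return ((4*alpha**2 - 4*alpha - 3*(1 - 2*alpha)**2*ns + 3)
            / (2*np.sqrt(6)*(1 - 2*alpha)))

alpha, ns = 0.05, 0.964
gamma = alpha / (1 - 2*alpha)
print(c_bound_from_chi(gamma, ns))     # ~0.106
print(c_bound_closed_form(alpha, ns))  # ~0.106 (agrees)
\end{verbatim}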
Here we find \begin{eqnarray} c\lesssim\frac{4 \alpha ^2-4 \alpha -3 (1-2 \alpha )^2 n_{s}+3}{2 \sqrt{6} (1 -2 \alpha)},\label{cns} \end{eqnarray} where the constant $c$ is rewritten in terms of the parameters $\alpha$ and $n_{s}$. Together with the swampland condition, this also allows us to use observational data, e.g., from the PLANCK satellite, to constrain the model. The behavior of $c$ in Eq.(\ref{cns}) is illustrated in Fig.(\ref{sep1}). \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{cal010.png} \includegraphics[width=0.48\linewidth]{cal020.png} \includegraphics[width=0.48\linewidth]{cal030.png} \includegraphics[width=0.48\linewidth]{cal032.png} \caption{We plot the allowed values of $c$ as a function of $n_{s}$, parametrized by Eq.(\ref{cns}), for various values of $\alpha$. The gray shaded region represents the allowed values of $n_{s}$ at the $2\sigma$ level observed by PLANCK 2018 \cite{Akrami:2018odb}.} \label{sep1} \end{center} \end{figure} Similarly, from the second condition of the conjecture, we find the following further constraint on the inflaton field value: \begin{eqnarray} \frac{8 \left(2 (\gamma +1)^2+2 \gamma^2 \exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)+\left(4 \gamma ^2+4 \gamma -1\right) \exp\Big(\frac{2 \chi }{\sqrt{6} M_{P}}\Big)\right)}{\sqrt{6}^2 \left(\exp\Big(\frac{2 \chi }{\sqrt{6}M_{P}}\Big)+1\right)^2}\lesssim -c'.\label{cp} \end{eqnarray} Combining Eqs.(\ref{ci}), (\ref{cns}) and (\ref{cp}), we are able to find constraints on $c$ and $c'$. The combined equation can be written as \begin{eqnarray} c^{2}\,c'^{2}&\lesssim&f(\alpha)\equiv\frac{(1-2 \alpha )^6}{16384 \sqrt{6}^6}\Bigg(\frac{12288 (\alpha -1)^3 \alpha ^3}{(1-2 \alpha )^6}-3 \sqrt{6}^6 n_{s}^3+\frac{16 (9 (\alpha -1) \alpha +1) \sqrt{6}^4 n_{s}^2}{(1-2 \alpha )^2}\nonumber\\&&\quad\quad\quad\quad\quad\quad\quad\quad-\frac{256 (\alpha -1) \alpha (9 (\alpha -1) \alpha +1) \sqrt{6} ^2 n_{s}}{(1-2 \alpha )^4}\Bigg)^2. \label{ccp} \end{eqnarray} Using the above constraint relation along with either of the swampland conjectures, we obtain the allowed region of the $(c,\,c')$ space shown in Fig.\ref{sep2}. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{ccp025n.png} \includegraphics[width=0.48\linewidth]{ccp032n.png} \caption{We show the resulting constraints on $c$ and $c'$ from Eq.(\ref{ccp}), using the central value $n_{s}=0.964$ \cite{Akrami:2018odb}. The allowed values of $c$ and $c'$ are displayed as color-shaded regions controlled by the refined swampland conditions.} \label{sep2} \end{center} \end{figure} Turning to the swampland distance conjecture (\ref{e1}), we find, in units of the Planck mass, that the condition $|\Delta \chi|\lesssim {\cal O}(1)$ is satisfied for the model we are considering. Using Eq.(\ref{rat}) and the relation $\Delta U=\Big(U_{\chi}/U\Big)\Delta\chi$ with $U_{\chi}\equiv \partial U/\partial\chi$, we can write \begin{eqnarray} \frac{\Delta\chi}{M_{P}} = \frac{4\sqrt{6} (1+ 2 \gamma)}{16 \gamma^2+16 \gamma +\sqrt{6}^2 (1-n_{s})}\frac{\Delta U}{U}\lesssim \Delta \sim 1. \label{delU0} \end{eqnarray} Therefore, we obtain in terms of $\alpha$: \begin{eqnarray} \Delta U\lesssim \frac{4 \alpha ^2-4 \alpha -3 (1-2 \alpha )^2 n_{s}+3}{2 \sqrt{6} (1-2 \alpha)}U. \label{delU} \end{eqnarray} \begin{figure}[h] \begin{center} \includegraphics[width=0.58\linewidth]{DelU.png} \caption{We display the relation between $\Delta U/U$ and $n_{s}$ given in Eq.(\ref{delU}).
Here we use $\alpha=0.32$ and $\alpha=0.25$ for comparison. The gray shaded region represents the values of $n_{s}$ allowed at the $2\sigma$ level.} \label{sep4} \end{center} \end{figure} Taking $|\Delta \chi|\sim 1$ as the upper limit in the equation above, we see that for $\alpha\sim 0.32$ and $n_{s}= 0.964$ the first swampland condition is satisfied whenever \begin{eqnarray} \Delta U \sim U\,\,\longrightarrow \,\,\frac{\Delta U}{U}\sim {\cal O}(1). \end{eqnarray} However, we obtain a stronger condition if we take $\alpha \ll 0.32$. For example, using $\alpha=0.032$ we find $\frac{\Delta U}{U}\sim 0.07 \ll {\cal O}(1)$. The inequality (\ref{delU}) is displayed in Fig.(\ref{sep4}). We can further use the relations of Eq.(\ref{ns}) to write \begin{eqnarray} \alpha\lesssim \frac{1}{4(3n_{s}-1)}\Big(6n_{s}+\sqrt{3r}-2-\sqrt{24 n_{s}+3 r-8}\Big). \label{ine} \end{eqnarray} \begin{figure}[h] \begin{center} \includegraphics[width=0.75\linewidth]{rnsalpha.png} \caption{We plot the allowed region parametrized by Eq.(\ref{ine}). Values of the model parameters below the region satisfy the conjecture.} \label{sep5} \end{center} \end{figure} The region satisfying the inequality of Eq.(\ref{ine}) is shown in Fig.(\ref{sep5}). Using the upper bound on the tensor-to-scalar ratio $r<0.064$ and the scalar spectral index $n_{s}=0.964$ \cite{Akrami:2018odb}, we find the upper bound on the $\alpha$ parameter to be \begin{eqnarray} \alpha < 4.1\times 10^{-2}.\label{up} \end{eqnarray} In this case, we find that the model under consideration may be in tension with the first swampland conjecture, since using $\alpha=0.04$ we find from Eq.(\ref{cns}) that $c\simeq 0.1$. As mentioned in Ref.\cite{Kehagias:2018uem}, however, the requirement that no critical de Sitter vacua exist allows any value of $c$ as long as $c$ is positive, even tiny values. Given that no quantitative argument for having $c\simeq 1$ is currently available, $c\simeq 0.1$ seems as good as $c\simeq 1$ to us. \begin{figure}[h] \begin{center} \includegraphics[width=0.48\linewidth]{rns1.png} \includegraphics[width=0.48 \linewidth]{rns2.png} \caption{We show 2D plots of the region satisfying the inequality of Eq.(\ref{ine}) for $\alpha=0.04$ (left panel) and $\alpha=0.045$ (right panel).} \label{sep6} \end{center} \end{figure} We display the relation between $r$ and $n_{s}$ given by Eq.(\ref{ine}) in Fig.(\ref{sep6}), comparing $\alpha=0.04$ and $\alpha=0.045$. We find that for $\alpha=0.04$ both $r$ and $n_{s}$ simultaneously satisfy the bounds reported by PLANCK \cite{Akrami:2018odb}, while for $\alpha=0.045$ they cannot both be satisfied. Therefore the upper bound on $\alpha$ given by Eq.(\ref{up}) is satisfactorily verified. \section{Conclusion} Recently, the swampland conjecture has attracted significant attention. The conjecture allows us to validate or invalidate a large class of low-energy effective theories; it can be formulated as inequalities on the potential of a scalar field. In this work, we first considered a deformation of the form $f(R)\sim R^{2(1-\alpha)}$ with $\alpha$ a constant. We linearized the original action by introducing an auxiliary field and used conformal transformations to map the theory from the Jordan frame to the Einstein frame.
We rewrote the action in terms of the canonically normalized scalar field minimally coupled to gravity. We discussed the theoretical viability of deformed Starobinsky gravity in light of the recent refined swampland conjectures. Our analysis showed that the deformed gravity considered here is consistent with the refined swampland conjecture. We showed the resulting constraints on $c$ and $c'$ for the central value of the scalar spectral index $n_{s}$. For the model under consideration, the value of $c$ turned out to be bounded within $(0,0.1)$. Interestingly, we found the upper bound on the parameter $\alpha$ to be $\alpha <0.041$. \acknowledgments This research was partially supported by the New Strategic Research (P2P) project, Walailak University, Thailand.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Most of the tools and methods developed for causal discovery rely on a graphical representation based on Bayesian networks, which assume independent and identically distributed (i.i.d.) instances. Probabilistic relational models~\cite{getoor-book07} have been developed that relax this assumption. The key advantage of these \textit{relational} models is that they can represent systems involving multiple types of entities interacting with each other with some probabilistic dependence. Causal reasoning over such relational systems is key to understanding many real-world phenomena, such as social influence. Influence in complex dynamic systems is often mutual and represented by a feedback loop or cycle in the relational model. Identifying mutual influence in relational models is of great interest to the research community. For example, social scientists and marketing experts are interested in studying the social dynamics between people and products in social networks~\cite{bakshy-wsdm11,bakshy-science15,ogburn-stats20}. However, there is a lack of available methods for discovering mutual influence or cycles in complex relational systems. Sound and complete algorithms have been proposed for learning relational causal models from observational data~\cite{maier-uai13,lee-uai16,lee-aaai16}. However, they assume acyclicity and thus cannot reason about mutual influence or cycles. In recent work, \citet{ahsan-clear22} develop the $\sigma$-abstract ground graph (\text{$\sigma$-AGG}{}), a sound and complete representation for cyclic relational causal models. Even though the \text{$\sigma$-AGG}{} is shown to be sound and complete for cyclic relational causal models, to the best of our knowledge no work exists in the literature on discovering \text{$\sigma$-AGG}{}s or identifying relational cycles from observational data. The closest works on cyclic causal discovery are mostly from the domain of Bayesian networks. \citet{richardson-uai96} develop a cyclic causal discovery (CCD) algorithm which is shown to be sound but not complete. In recent work, \citet{mooij-uai20} provide necessary conditions for constraint-based causal discovery algorithms developed for acyclic causal models, such as PC~\cite{pearl-book00} and FCI~\cite{spirtes-mitp00}, to be sound and complete for cyclic causal models under the $\sigma$-separation criterion. There are several other algorithms for cyclic causal discovery from i.i.d.\ samples~\cite{rothenhausler-nips15,strobl-springer19}, but no such algorithm exists for cyclic relational causal models. In this work, we examine the necessary and sufficient conditions under which constraint-based relational causal discovery can be shown to be sound and complete for cyclic relational causal models under $\sigma$-separation. We introduce \textit{relational acyclification}, an operation that helps delineate the scope of cyclic relational models that are identifiable with constraint-based causal discovery algorithms. Following this criterion, we show that RCD~\cite{maier-uai13}, a pioneering relational causal discovery algorithm for acyclic relational models, is sound and complete for cyclic relational models under $\sigma$-separation and the causal sufficiency assumption. We provide experimental results on synthetic relational models in support of our claims. We also demonstrate the effectiveness of the algorithm on a real-world dataset.
\section{Related Work} \citet{richardson-uai96} develop the cyclic causal discovery (CCD) algorithm for directed cyclic graphs under the causal sufficiency assumption. They provide a characterization of the equivalence class of cyclic causal models and show that a class of graphs called Partial Ancestral Graphs (PAGs) is sufficient to represent the equivalence class of cyclic causal models. Two models are equivalent if they entail the same set of $d$-separation relationships. There are a few caveats about CCD and its choice of representation. The Markov property under $d$-separation holds for directed cyclic graphs only with linear structural equation models (SEMs), so it is unclear how to extend CCD to more general models beyond linear SEMs. Moreover, CCD is not complete in the sense that it is not guaranteed to produce the maximally oriented PAG. \citet{rothenhausler-nips15} develop a general discovery algorithm (BackShift) that allows latent confounders in addition to cycles. The method relies on equilibrium data of the model recorded under a specific kind of intervention called \textit{shift interventions}. \citet{strobl-ijdsa19} develop the CCI algorithm, which allows both latent confounders and selection bias in addition to cycles. Similar to CCD, both BackShift and CCI are restricted to linear SEMs. CCI considers a different representation of the equivalence class, called the maximal almost ancestral graph (MAAG), which is carefully chosen to allow for both latent confounders and selection bias. The FCI algorithm is a constraint-based causal discovery algorithm designed specifically for acyclic causal models with latent confounders~\cite{spirtes-mitp00}. \citet{mooij-uai20} show that FCI is sound and complete for cyclic models under the $\sigma$-separation criterion, which differs from $d$-separation and is not restricted to linear models. They also show that any constraint-based causal discovery algorithm (e.g., PC or FCI~\cite{spirtes-mitp00}) which is sound and complete for acyclic causal models can be shown to be sound and complete for cyclic causal models under some background knowledge (i.e., sufficiency) and assumptions. \citet{maier-uai13} develop the first sound and complete algorithm that can discover the dependencies of a relational causal model under the assumptions of $d$-faithfulness, sufficiency, and acyclicity. It is designed based on the PC algorithm with some additional steps introduced specifically to handle relational aspects of the representation. They utilize the \textit{abstract ground graph}, an abstract representation that allows answering relational queries based on the $d$-separation criterion~\cite{maier-arxiv13}. RCD introduces Relational Bivariate Orientation (RBO), an orientation rule specifically designed for relational models. \citet{lee-aaai16} develop an efficient version of RCD named RCD-Light which requires polynomial time to run. They also develop an alternative algorithm, RpCD, based on \textit{path semantics}, which describes a unique way of defining relational paths~\cite{lee-uai16}. \citet{ahsan-clear22} develop the $\sigma$-abstract ground graph, a sound and complete abstract representation for cyclic relational causal models under $\sigma$-separation. They introduce relational $\sigma$-separation and show that this criterion can consistently answer relational queries on cyclic relational models.
\section{Preliminaries} We present a comprehensive description of the necessary notation and terminology used in graphical causal modeling and relational causal models in the Appendix. Here, we present a high-level overview of the important notions used in this paper. For more details, we refer the reader to the literature (\cite{pearl-book00,spirtes-mitp00,richardson-uai96,forre-arxiv17,mooij-uai20,maier-uai13,lee-uai15}). \subsection{Cyclic Graphical Causal Models} The most common graphical representation for causal models is \textit{directed acyclic graphs (DAGs)}. DAGs admit a natural causal interpretation and satisfy the Markov property under $d$-separation. A more general class of graphs is \textit{directed cyclic graphs (DCGs)}, which drop the assumption of acyclicity and allow feedback loops. These graphs are appropriate for (possibly cyclic) structural causal models (SCMs), where the corresponding Markov properties and causal interpretation are more subtle~\cite{bongers-arxiv21}. Cyclic SCMs are useful for representing the causal semantics of equilibrium states in dynamical systems~\cite{bongers-arxiv21}. Directed cyclic graphs offer certain properties that help model cyclic causal models. Given a directed cyclic graph $G = (V,E)$, all nodes on directed cycles passing through node $i \in V$ together form the strongly connected component $SC_G(i) = AN_G(i) \cap DE_G(i)$ of $i$, where $AN_G(i)$ and $DE_G(i)$ refer to the ancestors and descendants of node $i \in V$, respectively. The set of conditional independence relations entailed by a DCG $G$ is referred to as its independence model $IM(G)$. Unlike DAGs, DCGs are not guaranteed to satisfy the Markov property under $d$-separation in general. Instead, a different notion of separation, called $\sigma$-separation, satisfies the Markov property for DCGs~\cite{forre-arxiv17}. The $\sigma$-separation criterion is very similar to the $d$-separation criterion; the main difference is that, under $\sigma$-separation, a non-collider blocks a path only if it additionally points to a node in a different strongly connected component~\cite{mooij-uai20}. $\sigma$-\textit{faithfulness} refers to the property that all statistical dependencies found in the distribution generated by a given causal model are entailed by the $\sigma$-connection relationships of its graph. \citet{richardson-uai96} show that a class of graphs called Partial Ancestral Graphs (PAGs) is a sufficient representation for the equivalence class of cyclic causal models represented by DCGs. PAGs have also been shown to be a sufficient representation for causal discovery with cycles and unobserved confounders~\cite{mooij-uai20}. Since we assume no selection bias for simplicity, we will only discuss directed PAGs (DPAGs) in this study. \citet{forre-arxiv17} introduced an operation called \textit{acyclification} for directed cyclic graphs that generates DAGs with independence models equivalent to that of the given DCG. It allows a single DPAG to represent the ancestral relationships of a DCG $G$ and all its acyclifications $G^\prime$.
\begin{definition} [Acyclification \cite{forre-arxiv17}] \label{dfn:acy} Given a DCG $G = (\mathcal{V}, \mathcal{E})$, an acyclification of $G$ is a DAG $G^{\prime} = (\mathcal{V}, \mathcal{E}^{\prime})$ with \begin{enumerate}[i] \item the same nodes $\mathcal{V}$; \item for any pair of nodes $\{i, j\}$ such that $i \notin SC_G(j)$: $i \rightarrow j \in \mathcal{E}^{\prime}$ iff there exists a node $k$ such that $k \in SC_G(j)$ and $i \rightarrow k \in \mathcal{E}$; \item for any pair of distinct nodes $\{i, j\}$ such that $i \in SC_G(j)$: $i \rightarrow j \in \mathcal{E}^{\prime}$ or $i \leftarrow j \in \mathcal{E}^{\prime}$; \end{enumerate} \end{definition} \begin{proposition}[\cite{mooij-uai20}] \label{prop:im} For any DCG $G$ and any acyclification $G^\prime$ of $G$, $IM_\sigma(G) = IM_\sigma(G^\prime) = IM_d(G^\prime)$, where $IM_\sigma(G)$ and $IM_d(G)$ denote the independence model of the given DCG $G$ under $\sigma$-separation and $d$-separation, respectively. \end{proposition} \citet{mooij-uai20} provide the necessary conditions under which constraint-based causal discovery algorithms for acyclic causal models, such as PC~\cite{pearl-book00} and FCI~\cite{spirtes-mitp00}, are sound and complete in the presence of cycles under $\sigma$-separation. Their result depends on the following assumptions: \begin{assumption} \label{asm:sfaith} The underlying causal model is $\sigma$-faithful. \end{assumption} \begin{assumption} \label{asm:acy} There exist one or more valid acyclifications of the given causal model which contain the same set of ancestral relationships as the given model.~\footnote{See Appendix for details} \end{assumption} \begin{corollary}[\cite{mooij-uai20}] \label{cor:pc_gen} The PC algorithm with Meek's orientation rules is sound, arrowhead-complete, tail-complete, and Markov complete (in the $\sigma$-separation setting without selection bias) for directed cyclic graphs. \end{corollary} \begin{figure}[!ht] \centering \subfloat{ \label{sfig:rcm} \includegraphics[width=\linewidth]{images/crcm.pdf} } \caption{ Example of a cyclic relational model. There are three entity types (USER, POST, MEDIA) and two relationship types (REACTS, CREATES) among them. Attributes are shown in oval shapes. The bold arrows refer to the relational dependencies. } \label{fig:rcm} \end{figure} \begin{figure*}[!ht] \centering \subfloat[Skeleton]{ \label{sfig:skel} \includegraphics[width=0.25\linewidth]{images/skel.pdf} } \subfloat[Ground Graph]{ \label{sfig:gg} \includegraphics[width=0.25\linewidth]{images/cgg.pdf} } \subfloat[Abstract Ground Graph]{ \label{sfig:agg} \includegraphics[width=0.50\linewidth]{images/cagg.pdf} } \caption{ Fragments of a relational skeleton, ground graph, and $\sigma$-abstract ground graph corresponding to the relational causal model from Figure \ref{fig:rcm}. The arrows represent relational dependencies. } \label{fig:rcm_all} \end{figure*} \subsection{Relational Causal Models (RCMs)} \label{sec:rcm} Relational causal models are an expressive class of graphical models that can represent probabilistic dependence among instances of different entity types interacting with each other following a specific schema. We use a simplified Entity-Relationship model to describe relational models following previous work~\citep{heckerman-isrl07,maier-uai13,lee-uai20}.
A relational schema $\mathcal{S} = \langle \bm{\mathcal{E}}, \bm{\mathcal{R}}, \bm{\mathcal{A}}, card \rangle$ represents a relational domain, where $\bm{\mathcal{E}}$, $\bm{\mathcal{R}}$ and $\bm{\mathcal{A}}$ refer to the sets of entity, relationship, and attribute classes, respectively. Figure \ref{fig:rcm} shows an example relational model that describes a simplified user--media engagement system. The cardinality constraints are shown with crow's-feet notation: a user can react to multiple posts, multiple users can react to a post, and a post can be created by only a single media entity. A relational causal model $\mathcal{M} = \langle \mathcal{S},\mathcal{D} \rangle$ is a collection of relational dependencies defined over schema $\mathcal{S}$. \textit{Relational dependencies} consist of two relational variables, a cause and an effect. As an example, consider the relational dependency [Post, Reacts, User].Sentiment $\rightarrow$ [Post].Engagement, which states that the engagement of a post is affected by the sentiments of the users who react to that post. Note that all causal dependencies are defined with respect to a specific \textit{perspective} (entity type). A relational model $\mathcal{M} = (\mathcal{S}, \mathcal{D})$ is said to be \textit{cyclic} if the set of relational dependencies $\mathcal{D}$ forms one or more directed cycles of arbitrary length. There is a direct feedback loop in the relational model of Figure \ref{fig:rcm}, making it a cyclic relational causal model. \sloppy A realization of a relational model $\mathcal{M}$ with a relational skeleton is referred to as the \textit{ground graph} $GG_\mathcal{M}$. It is a directed graph consisting of attributes of entities in the skeleton as nodes and relational dependencies among them as edges. Figure \ref{sfig:gg} shows the ground graph for the relational model from Figure \ref{fig:rcm}. A $\sigma$-\textit{abstract ground graph} ($\sigma$-AGG) is an abstract representation that captures the dependencies consistent in all possible ground graphs and represents them in a directed graph. $\sigma$-AGGs are defined for a specific perspective and \textit{hop threshold} $h$, where the hop threshold refers to the maximum length of the relational paths. Figure \ref{sfig:agg} presents the $\sigma$-AGG from the perspective of USER with $h = 6$. Conditional independence facts are only useful when they hold across all ground graphs that are consistent with the model. \citet{maier-arxiv13} show that relational $d$-separation is sufficient to achieve this for acyclic models. In recent work, \citet{ahsan-clear22} introduced the relational $\sigma$-separation criterion specifically for cyclic relational models; it follows directly from the definition of relational $d$-separation, except that it uses the $\sigma$-separation criterion instead of $d$-separation. \subsection{Relational Causal Discovery (RCD)} The RCD algorithm developed by \citet{maier-uai13} is the first sound and complete algorithm that can discover the dependencies of a relational causal model (RCM) under the assumptions of $d$-faithfulness, sufficiency, acyclicity, and a maximum hop threshold $h$. It is designed based on the PC algorithm with some additional steps introduced specifically to handle relational aspects of the representation. \citet{maier-uai13} provide theoretical guarantees for the soundness and completeness of RCD.
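To make these notions concrete, the following toy Python sketch (our illustration, not the implementation used in prior work) grounds the dependency [Post, Reacts, User].Sentiment $\rightarrow$ [Post].Engagement over a tiny skeleton; the instance names are hypothetical:
\begin{verbatim}
# Toy grounding of a single relational dependency. The skeleton
# lists REACTS relationship instances as (user, post) pairs.
skeleton_reacts = [("u1", "p1"), ("u2", "p1"), ("u1", "p2")]

def ground_dependency(reacts):
    # One ground-graph edge (user.Sentiment -> post.Engagement)
    # per (user, post) pair in the skeleton.
    return [((u, "Sentiment"), (p, "Engagement"))
            for u, p in reacts]

print(ground_dependency(skeleton_reacts))
# [(('u1','Sentiment'), ('p1','Engagement')), ...] -- a fragment
# of the ground graph GG_M (cf. the ground-graph panel above).
\end{verbatim}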
\section{Relational Causal Discovery with Cycles} Cyclic relational causal models (CRCMs) are relational causal models whose dependencies form one or more directed cycles~\citep{ahsan-clear22}. The cycles or feedback loops can represent equilibrium states in dynamic systems. Consider the example from Figure \ref{fig:rcm}, where the sentiments of users and the engagement of a media post may reach an equilibrium. Identifying such cycles or feedback loops from observational samples requires a proper representation and a learning algorithm. \citet{ahsan-clear22} introduce an abstract representation, the \text{$\sigma$-AGG}{}, that entails all the conditional independence relations consistent across all ground graphs of the model, and show that it is sound and complete under $\sigma$-separation. Given the \text{$\sigma$-AGG}{} representation, discovering a CRCM becomes the problem of learning the \text{$\sigma$-AGG}{} from observational samples of a relational model. Since the \text{$\sigma$-AGG}{} is a DCG, we can use DPAGs to represent the equivalence class of \text{$\sigma$-AGG}{}s, following the previous work of \citet{richardson-uai96}. \begin{pblm} [Cyclic Relational Causal Discovery] Given observational samples from a $\sigma$-faithful cyclic relational causal model $\mathcal{M} = \langle \mathcal{S},\mathcal{D} \rangle$ with hop threshold $h$, learn the maximally oriented DPAG that contains the corresponding $\sigma$-AGGs of $\mathcal{M}$. \end{pblm} \begin{figure*}[!ht] \centering \subfloat[Cyclic RCM]{\label{sfig:counter_model}\includegraphics[width=.30\textwidth]{images/counter_crcm.pdf}}\hfill \subfloat[True AGG for perspective A]{\label{sfig:true_agg}\includegraphics[width=.30\textwidth]{images/true_agg_a.pdf}}\hfill \subfloat[RCD output for perspective A]{\label{sfig:rcd_agg}\includegraphics[width=.30\textwidth]{images/rcd_agg_a1.pdf}}\\ \caption{Counterexample showing RCD produces incorrect output for cyclic RCM under $\sigma$-separation. } \label{fig:rcd_counter} \end{figure*} \subsection{RCD for cyclic relational causal models} As summarized above, the RCD algorithm of \citet{maier-uai13} is the first sound and complete constraint-based algorithm that can learn the relational dependencies of an RCM under the assumptions of $d$-faithfulness, sufficiency, acyclicity, and a maximum hop threshold $h$, and it is designed based on the PC algorithm with additional steps introduced specifically to handle relational aspects of the representation. Following the recent development by \citet{mooij-uai20} (Corollary \ref{cor:pc_gen}), and considering that RCD is developed based on the PC algorithm, a natural question arises: \textit{Is RCD sound and complete for cyclic relational causal models?} To the best of our knowledge, no prior work addresses this question. More generally, the effectiveness and theoretical guarantees of existing relational causal structure learning algorithms for cyclic RCMs under $\sigma$-separation have not been studied in the current literature. \subsubsection{A counterexample} We present a counterexample showing that RCD is not sound and complete for discovering cyclic relational causal models in general. Figure \ref{sfig:counter_model} shows a CRCM with three entity types A, B, and C, two relationship types AB and BC, and maximum hop threshold $h = 2$. The attribute types X1, Y1, and Z1 refer to the attributes of entity types A, B, and C respectively.
There are three relational dependencies: 1) [A, AB, B].Y1 $\rightarrow$ [A].X1, 2) [B, AB, A].X1 $\rightarrow$ [B].Y1, and 3) [B, BC, C].Z1 $\rightarrow$ [B].Y1. The first two dependencies form a feedback loop. Figure \ref{sfig:true_agg} shows the true \text{$\sigma$-AGG}{} built from perspective A with maximum hop threshold $h = 4$~\footnote{\citet{maier-arxiv13} showed that the AGG needs to include nodes with higher hop thresholds than the model in order for it to be a sound and complete representation. However, the hop threshold of relational dependencies should be bounded by the hop threshold of the model. The same arguments hold for $\sigma$-AGGs as well. We refer readers to Theorem E.2 of \cite{maier-arxiv13}}. Figure \ref{sfig:rcd_agg} shows the output of RCD with a $\sigma$-separation oracle. We see that RCD orients the arrows [A, AB, B].Y1 $\rightarrow$ [A].X1 and [A, AB, B].Y1 $\rightarrow$ [A, AB, B, AB, A].X1, which refer to the relational dependency [A, AB, B].Y1 $\rightarrow$ [A].X1. However, the true model contains a feedback loop between [A, AB, B].Y1 and [A].X1. This example shows that RCD, even with a $\sigma$-separation oracle, produces incorrect edge orientations. \section{Relational Acyclification for Cyclic Relational Causal Models} In this section, we present relational acyclification, which enables the discovery of relational causal models with cycles. We also discuss how to read off features of the true model from the output of the discovery algorithm. \subsection{Relational Acyclification} The counterexample in the previous subsection shows that the RCD algorithm is not sound and complete for general cyclic RCMs under $\sigma$-separation: for the given counterexample, RCD orients edges that contradict the given relational model. In order to understand what causes this error and to find a solution, we focus on the acyclification operation introduced by \citet{forre-arxiv17}, which is a key tool in the generalization results of \citet{mooij-uai20}. \begin{figure}[ht!] \centering \includegraphics[width=.30\textwidth]{images/agg_acy.pdf} \caption{Invalid acyclification of the \text{$\sigma$-AGG}{} from Figure \ref{sfig:true_agg}} \label{fig:acy} \end{figure} Figure \ref{fig:acy} shows an acyclification of the \text{$\sigma$-AGG}{} presented in Figure \ref{sfig:true_agg}, following Definition \ref{dfn:acy}. Here we see the edges [A, AB, B, BC, C].Z1 $\rightarrow$ [A].X1 and [A, AB, B, BC, C].Z1 $\rightarrow$ [A, AB, B, AB, A].X1, which do not follow the relational model since the hop threshold of such dependencies ($h$ = 4) exceeds the hop threshold of the given model ($h$ = 2). The definition of acyclification, as given by \citet{forre-arxiv17}, essentially considers all the nodes or entities to be of the same entity type. As a result, applying it directly to relational models produces erroneous results. We propose a new definition of acyclification for relational models which explicitly allows the maximum hop threshold of an acyclification to differ from the hop threshold of the original model.
\begin{definition} [Relational Acyclification] \label{dfn:rel_acy} Given a relational schema $\mathcal{S} = (\mathcal{E}, \mathcal{R}, \mathcal{A}, card)$, a $\sigma$-AGG $G = (V, E)$, and a hop threshold $h$, a relational acyclification of $G$ is a $\sigma$-AGG $G^{\prime} = (V, E^{\prime})$ with hop threshold $h^\prime \ge h$ containing \begin{enumerate}[i] \item the same nodes $V$; \item for any pair of nodes $\{P.X, Q.Y\}$ such that $P.X \notin SC_G(Q.Y)$: $P.X \rightarrow Q.Y \in E^{\prime}$ iff there exists a node $R.Z$ such that $R.Z \in SC_G(Q.Y)$ and $P.X \rightarrow R.Z \in E$ and $P.X \rightarrow Q.Y$ is a valid relational dependency with maximum hop threshold $h^\prime$; \item for any pair of distinct nodes $\{P.X, Q.Y\}$ such that $P.X \in SC_G(Q.Y)$: $P.X \rightarrow Q.Y \in E^{\prime}$ or $P.X \leftarrow Q.Y \in E^{\prime}$. \end{enumerate} \end{definition} The definition of relational acyclification follows from Definition \ref{dfn:acy}, the main distinction being that it allows a new bound on the maximum hop threshold, different from that of the original model. The implication is that the potential dependencies RCD considers in building the skeleton may not be sufficient for soundness and completeness. \subsection{Maximum hop threshold for relational acyclification} Definition \ref{dfn:rel_acy} suggests that the maximum hop threshold used in a relational acyclification of a $\sigma$-AGG may be higher than the hop threshold of the given model. It is important to characterize this bound in order to allow a practical implementation of the RCD algorithm for cyclic models. The following proposition provides an upper bound on the hop threshold of relational acyclifications. \begin{proposition} \label{prop:rel_hop} Given a relational model $\mathcal{M} = (\mathcal{S}, \mathcal{D})$ with hop threshold $h$ and corresponding \text{$\sigma$-AGG}{} $G = (V, E)$ with a given perspective, the hop threshold $h^\prime$ of any relational acyclification $G^\prime$ of $G$ can be at most $\lfloor \frac{2 + l^c}{2} \rfloor h$, where $l^c$ refers to the length of the longest cycle of dependencies in the relational model $\mathcal{M}$. \end{proposition} The need for higher hop thresholds arises from the additional edges drawn for incoming edges to a strongly connected component (Definition \ref{dfn:acy}). Any such incoming edge has at most the hop threshold $h$ of the given model. In order to reach the farthest node in the cycle, where each dependency can have hop threshold at most $h$, we need at most an additional $\lfloor \frac{l^c}{2} \rfloor h$, where $l^c$ refers to the length of the cycle. So, in total, the hop threshold can be at most $h + \lfloor \frac{l^c}{2} \rfloor h = \lfloor \frac{2 + l^c}{2} \rfloor h$. For the counterexample above ($h = 2$, $l^c = 2$), this bound gives $h^\prime \leq \lfloor \frac{4}{2} \rfloor \cdot 2 = 4$, which matches the length-four dependencies appearing in the acyclification of Figure \ref{fig:acy}. Note that in order to calculate an upper bound on the hop threshold of a relational acyclification we need to assume a maximum length $l^c$ of any cycle in the given relational model. \subsection{Soundness and completeness of RCD for cyclic relational causal models} We consider RCD as a mapping $\mathcal{P}_{RCD}$ from independence models (on variables $V$) to DPAGs (with vertex set $V$), which maps the independence model of a \text{$\sigma$-AGG}{} $G$ to the DPAG $\mathcal{P}_{RCD}(IM_\sigma(G))$. We assume the following: \begin{assumption} \label{asm:rel_acy} There exist one or more valid relational acyclifications with hop threshold not exceeding the hop threshold of the given relational causal model ($h^\prime = h$).
\end{assumption} \begin{assumption} \label{asm:sagg} The degree of any entity in the relational skeleton is greater than one. \end{assumption} Assumption \ref{asm:rel_acy} follows from Assumption \ref{asm:acy} and also limits the set of relational causal models for which RCD can be shown to be sound and complete. Assumption \ref{asm:sagg} ensures the soundness and completeness of the \text{$\sigma$-AGG}{} representation~\cite{ahsan-clear22}. \begin{theorem} Assuming that Assumptions \ref{asm:sfaith}, \ref{asm:rel_acy}, \ref{asm:sagg}, and causal sufficiency hold, RCD is \begin{enumerate}[(i)] \item sound: for all \text{$\sigma$-AGG}{}s $G$, \mr{G} contains $G$; \item arrowhead complete: for all \text{$\sigma$-AGG}{}s $G$: if $i \notin AN_{\tilde{G}}(j)$ for any DCG $\tilde{G}$ that is $\sigma$-Markov equivalent to $G$, then there is an arrowhead $j\; \circarrow \;i$ in \mr{G}; \item tail complete: for all \text{$\sigma$-AGG}{}s $G$, if $i \in AN_{\tilde{G}}(j)$ in any DCG $\tilde{G}$ that is $\sigma$-Markov equivalent to $G$, then there is a tail $i \rightarrow j$ in \mr{G}; \item Markov complete: for all \text{$\sigma$-AGG}{}s $G_1$ and $G_2$, $G_1$ is $\sigma$-Markov equivalent to $G_2$ iff $\mr{G_1} = \mr{G_2}$; \end{enumerate} in the $\sigma$-separation setting, given a sufficient hop threshold. \end{theorem} \begin{proof} The main idea of the proof is very similar to that of Theorem 1 of \citet{mooij-uai20}, where the soundness and completeness of FCI for cyclic models under $\sigma$-separation is proved. To prove soundness, let $G$ be a \text{$\sigma$-AGG}{} and $\mathcal{P} = \mr{G}$. The acyclic soundness of RCD means that for all AGGs $G^\prime$, $\mr{G^\prime}$ contains $G^\prime$. Hence, by Definition \ref{dfn:rel_acy} and Assumption \ref{asm:rel_acy}, $\mathcal{P}$ contains $G^\prime$ for all acyclifications $G^\prime$. But then $\mathcal{P}$ must contain $G$, which can be shown using Proposition 3 of \citet{mooij-uai20}. To prove arrowhead completeness, let $G$ be a \text{$\sigma$-AGG}{} and suppose that $i \notin AN_{\tilde{G}}(j)$ for any DCG $\tilde{G}$ that is $\sigma$-Markov equivalent to $G$. Let $G^\prime$ be a relational acyclification of $G$; since $G^\prime$ is $\sigma$-Markov equivalent to $G$, this implies in particular that for all AGGs $\tilde{G}$ that are $d$-Markov equivalent to $G^\prime$, $i \notin AN_{\tilde{G}}(j)$. Because of the acyclic arrowhead completeness of RCD, there must be an arrowhead $j\; \stararrow \;i$ in $ \mr{G^\prime} = \mr{G}$. Tail completeness is proved similarly. To prove Markov completeness: Definition \ref{dfn:rel_acy} and Proposition \ref{prop:im} imply both $\im{G_1} = \im[d]{G_1^\prime}$ and $\im{G_2} = \im[d]{G_2^\prime}$. From the acyclic Markov completeness of RCD\footnote{Since relational $d$-separation is equivalent to the Markov condition and is sound and complete on abstract ground graphs~\cite{maier-arxiv13}.}, it then follows that $G_1^\prime$ must be $d$-Markov equivalent to $G_2^\prime$, and hence $G_1$ must be $\sigma$-Markov equivalent to $G_2$. \end{proof} The statement of this theorem can be seen as a special case of the generalization claim (Theorem 2) of \citet{mooij-uai20}. There is an important point to discuss regarding Assumption \ref{asm:rel_acy}. Even though Assumption \ref{asm:rel_acy} limits the scope of possible relational causal models, it is possible to modify RCD so that it works for models whose relational acyclifications have hop thresholds higher than that of the given model ($h^\prime > h$).
The intuition here is that the skeleton-building process should consider this new hop threshold $h^\prime$ (which is upper bounded by $\lfloor \frac{2 + l^c}{2} \rfloor h$) rather than the true hop threshold $h$. However, this modified skeleton phase requires its own proof of soundness and completeness. We leave this for future work. \subsection{Identification of relational (non-)cycles} \citet{mooij-uai20} show that the patterns in strongly connected components in DCGs can be used as a sufficient condition for identifying the absence of certain cyclic causal relations in a complete DPAG. Given Definition \ref{dfn:rel_acy}, the same condition holds for relational models and \text{$\sigma$-AGG}{}s as well. We present the necessary and sufficient conditions for identifying non-cycles in the output of RCD, following Proposition 10 of \citet{mooij-uai20}: \begin{proposition} \label{prop:id} Let $G$ be a \text{$\sigma$-AGG}{} and denote by $\mathcal{P} = \mr{G}$ the corresponding complete DPAG output by RCD. Let $i \neq j$ be two nodes in $\mathcal{P}$. If there is an edge $i\; \circ$---$\circ \;j$ in $\mathcal{P}$, and all nodes $k$ for which $k\; \stararrow \;i$ is in $\mathcal{P}$ also have an edge of the same type $k\; \stararrow \;j$ (i.e., the two edge marks at $k$ are the same) in $\mathcal{P}$, then there exists a DCG $\tilde{G}$ with $j \in SC_{\tilde{G}}(i)$ that is $\sigma$-Markov equivalent to $G$, but also a DCG $H$ with $j \notin SC_H(i)$ that is $\sigma$-Markov equivalent to $G$. \end{proposition} \begin{figure} \centering \subfloat[Cyclic RCM]{\label{sfig:id_crcm}\includegraphics[width=.17\textwidth]{images/crcm_id1.pdf}}\hfill \subfloat[RCD output DPAG]{\label{sfig:id_pag}\includegraphics[width=.30\textwidth]{images/pag_id1.pdf}}\hfill \caption{An example cyclic relational model and its corresponding DPAG output by RCD under $\sigma$-separation.} \label{fig:id} \end{figure} In other words, under the conditions of this proposition, it is not identifiable from $\mathcal{P}$ alone whether $j$ and $i$ are part of a causal cycle, but they are candidates for being part of a cycle. Figure \ref{fig:id} shows an example of this identifiability criterion. Figure \ref{sfig:id_pag} shows the output DPAG for the example cyclic RCM of Figure \ref{sfig:id_crcm}. The edges between nodes [A].X1, [A].X2 and [A, AB, B, AB, A].X1, [A, AB, B, AB, A].X2 satisfy the conditions given in Proposition \ref{prop:id}. This means they could be part of a cycle, but it is not possible to confirm this from the output alone. \section{Experiments} In this section, we examine the effectiveness of RCD for cyclic RCMs using synthetically generated cyclic RCMs that satisfy the relational acyclification criteria, as well as a demonstration on a real-world dataset. Since there is no other algorithm designed to discover cyclic RCMs, we compare against the vanilla RCD with a $d$-separation oracle. \subsection{Experimental Setup} We follow the procedure introduced by \citet{maier-uai13} for the synthetic experiments, except that we allow feedback loops in the model. We generate 100 random cyclic causal models over randomly generated schemas for each of the following combinations: entities (1--3); relationships (one less than the number of entities) with cardinalities selected uniformly at random; attributes per item drawn from Pois($\lambda$ = 1) + 1; and the number of relational dependencies (4, 6, 8, 10, 12) limited by a hop threshold of 2 and at most 3 parents per variable. We enforce a feedback loop among the dependencies.
Note that a single feedback loop can introduce cycles of arbitrary length, depending on the structure of the model. This procedure yields a total of 15,000 synthetic models. Note that this generates simple Bayesian networks when there is a single entity type. We refer to the versions of RCD with $d$-separation and $\sigma$-separation oracles as $d$-RCD and $\sigma$-RCD respectively.\footnote{Code available at https://github.com/edgeslab/sRCD} \subsection{Evaluation} The goal of the evaluation is to compare the learned causal models with the true causal models. However, the output for cyclic RCMs consists of PAGs instead of CPDAGs. Moreover, it is expected that the skeleton of the output PAG might differ from that of the true causal model. For this reason, we evaluate the algorithms based on ancestral relationships. We identify the ancestral relationships entailed by the output and by the \text{$\sigma$-AGG}{} of the true model, and report what percentage (recall) of the actual ancestral relationships is contained in the output. For a sound and complete algorithm, we expect to see 100\% recall. We omit precision since we are only comparing to the true model, not to all the models in the equivalence class. Moreover, we consider the identification criterion given in Proposition \ref{prop:id} and evaluate the algorithms based on their ability to correctly identify edges as possible cycle candidates. We report recall for this evaluation as well. \subsection{Results} \begin{figure}[!ht] \centering \includegraphics[width=.40\textwidth]{images/recalls_all.eps} \caption{ Comparison of $d$-RCD and $\sigma$-RCD based on the recall of \textit{isPossibleAncestor} (top row) and \textit{isPossibleCycle} (bottom row) queries. The number of entity types increases from left to right. } \label{fig:recalls} \end{figure} Figure \ref{fig:recalls} shows the comparison of $d$-RCD and $\sigma$-RCD based on \textit{isPossibleAncestor} (top row) and \textit{isPossibleCycle} (bottom row) queries on synthetically generated relational models. The columns represent an increasing number of entity types (left to right). The x-axis shows the number of dependencies and the y-axis shows recall. In the leftmost column, we see the results for single-entity models. The top-left and bottom-left figures are equivalent to running the PC algorithm with $d$- and $\sigma$-separation oracles respectively. The rest of the figures represent proper relational models. As expected, we see 100\% recall for $\sigma$-RCD in all these plots. The results for $d$-RCD, however, show some intuitive patterns. For a single entity, $d$-RCD suffers most for 6 and 8 dependencies and achieves relatively better recall at the lower and higher extremes of the x-axis. On the other hand, for multi-entity relational models, we see a general upward trend from left to right, which is intuitive since a higher number of dependencies makes the models denser. The difference in the trend between the non-relational and relational cases for low numbers of dependencies is due to the nature of relational data. Because of multiple entities and overlapping relational paths, there are usually more nodes in a \text{$\sigma$-AGG}{} than in a DCG with the same number of dependencies. \begin{figure}[!ht] \centering \includegraphics[width=.40\textwidth]{images/rcd_rules.eps} \caption{ Frequency of edge orientation rules for $d$-RCD (top) and $\sigma$-RCD (bottom) for different numbers of entity types and dependencies.
} \label{fig:rules} \end{figure} Figure \ref{fig:rules} shows the percentage of orientation rules used by $d$-RCD (top row) and $\sigma$-RCD (bottom row). The leftmost column refers to the single-entity case, where no RBO is in effect. We can see some subtle differences in the distribution of rules for $d$-RCD and $\sigma$-RCD. For a small number of dependencies (i.e., 4), only the CD (collider detection) rule is activated for $\sigma$-RCD, whereas $d$-RCD utilizes both CD and KNC (known non-collider). As the number of dependencies increases, the overall distributions diverge further. For the middle and right columns, a significant difference is seen in the percentage of times rule MR3 (Meek rule 3) is executed for $\sigma$-RCD compared to $d$-RCD. These differences indicate that the algorithms learn fundamentally different structures. \subsection{Demonstration on Real-world Data} \begin{figure}[ht] \centering \includegraphics[width=.40\textwidth]{images/demo.pdf} \caption{ A possible cyclic relational model of MovieLens+ based on the output of RCD~\cite{maier-uai13}. } \label{fig:demo} \end{figure} \citet{maier-uai13} show the output of RCD on a sample of the MovieLens dataset (www.grouplens.org) based on an approximate conditional independence test using the significance of coefficients in linear regressions\footnote{The original output is given in the Appendix.}. Their output contains undirected edges, which are potential candidates for cycle edges. Figure \ref{fig:demo} shows a possible cyclic relational model which corresponds to the original output. Following Proposition \ref{prop:id}, we can infer that the edge between \textit{[Movie].Rating Count} and \textit{[Movie].Genre} cannot be part of any cycle or feedback loop. Some undirected edges can be oriented based on domain knowledge (e.g., budget can cause gross income but not the other way around). There exist many possible orientations of dependencies that agree with the RCD output. We show one plausible case with a feedback loop between the \textit{user rating} and \textit{critic rating} of a movie. It is plausible that rating information is public and that users and critics influence each other with their ratings. \section{Conclusion} Despite several methods developed for cyclic causal discovery from i.i.d.\ samples, no such algorithm exists for cyclic relational causal models, even though cycles are ubiquitous in real-world relational systems. In this work, we investigate the necessary conditions for discovering cyclic relational causal models from observational samples. We introduce the relational acyclification operation, which enables theoretical guarantees for the identifiability of such models. We prove that an existing state-of-the-art relational discovery algorithm, RCD, is sound and complete for cyclic relational models for which a valid relational acyclification exists. To the best of our knowledge, this result is the first of its kind. We hope that this work will play an important role in the study of mutual influence and interference in complex relational systems. \section{Graphs} \subsection{Directed Cyclic Graphs} A Directed Cyclic Graph (DCG) is a graph $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$ with nodes $\mathcal{V}$ and edges $\mathcal{E} \subseteq \{(u, v) : u, v \in \mathcal{V}, u \neq v\}$, where $(u,v)$ is an ordered pair of nodes. We will denote a directed edge $(u, v) \in \mathcal{E}$ as $u \rightarrow v$ or $v \leftarrow u$, and call $u$ a parent of $v$. In this work, we restrict ourselves to DCGs as the causal graphical model.
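As a concrete illustration (our own sketch, not part of the original formulation; it assumes the Python library networkx), a DCG with a feedback loop can be encoded directly, and the directed cycle then surfaces as a non-trivial strongly connected component, a notion formally defined below:

\begin{verbatim}
import networkx as nx

# A DCG with a feedback loop u <-> v and an additional parent w -> v.
G = nx.DiGraph([("u", "v"), ("v", "u"), ("w", "v")])

print(sorted(G.predecessors("v")))        # parents of v: ['u', 'w']
print(nx.is_directed_acyclic_graph(G))    # False: a directed cycle is present
# Non-trivial strongly connected components expose the cycle:
print([c for c in nx.strongly_connected_components(G) if len(c) > 1])
# [{'u', 'v'}]
\end{verbatim}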
A walk between two nodes $u, v \in \mathcal{V}$ is a tuple $\langle v_0, e_1, v_1, e_2, v_2, \ldots, e_n, v_n \rangle$ of alternating nodes and edges in $\mathcal{G}$ ($n \geq 0$), such that $v_0, \ldots, v_n \in \mathcal{V}$ and $e_1, \ldots, e_n \in \mathcal{E}$, starting with node $v_0 = u$ and ending with node $v_n = v$, where the edge $e_k$ connects the nodes $v_{k-1}$ and $v_k$ for all $k = 1, \ldots, n$. If the walk contains each node at most once, it is called a \textit{path}. A \textit{directed walk (path)} from $v_i \in \mathcal{V}$ to $v_j \in \mathcal{V}$ is a walk (path) between $v_i$ and $v_j$ such that every edge $e_k$ on the walk (path) is of the form $v_{k - 1} \rightarrow v_k$, i.e., every edge is directed and points away from $v_i$. \sloppy We get the \textit{ancestors} of node $v_j$ by repeatedly following the path(s) through the parents: $AN_\mathcal{G}(v_j) := \{v_i \in \mathcal{V} : v_i = v_0 \rightarrow v_1 \rightarrow \ldots \rightarrow v_n = v_j \in \mathcal{G}\}$. Similarly, we define the \textit{descendants} of $v_i$: $DE_\mathcal{G}(v_i) := \{v_j \in \mathcal{V} : v_i = v_0 \rightarrow v_1 \rightarrow \ldots \rightarrow v_n = v_j \in \mathcal{G}\}$. Each node is an ancestor and a descendant of itself. A directed cycle is a directed path from $v_i$ to $v_j$ such that, in addition, $v_j \rightarrow v_i \in \mathcal{E}$. All nodes on directed cycles passing through $v_i \in \mathcal{V}$ together form the strongly connected component $SC_\mathcal{G}(v_i) := AN_\mathcal{G}(v_i) \cap DE_\mathcal{G}(v_i)$ of $v_i$. \begin{definition} [Strongly connected component (SC)] Given a directed cyclic graph $G = (V,E)$, all nodes on directed cycles passing through node $i \in V$ together form the strongly connected component $SC_G(i) = AN_G(i) \cap DE_G(i)$ of $i$. \end{definition} \begin{definition} [Acyclification \cite{forre-arxiv17}] Given a DCG $G = (\mathcal{V}, \mathcal{E})$, an acyclification of $G$ is a DAG $G^{\prime} = (\mathcal{V}, \mathcal{E}^{\prime})$ with \begin{enumerate}[i] \item the same nodes $\mathcal{V}$; \item for any pair of nodes $\{i, j\}$ such that $i \notin SC_G(j)$: $i \rightarrow j \in \mathcal{E}^{\prime}$ iff there exists a node $k$ such that $k \in SC_G(j)$ and $i \rightarrow k \in \mathcal{E}$; \item for any pair of distinct nodes $\{i, j\}$ such that $i \in SC_G(j)$: $i \rightarrow j \in \mathcal{E}^{\prime}$ or $i \leftarrow j \in \mathcal{E}^{\prime}$. \end{enumerate} \end{definition} \citet{mooij-uai20} present some important properties of acyclifications: \begin{proposition} \label{prop:acy} Let $G$ be a DCG and $i$, $j$ two nodes in $G$. \begin{enumerate}[i] \item If $i \in AN_G(j)$ then there exists an acyclification $G^\prime$ of $G$ with $i \in AN_{G^\prime}(j)$; \item If $i \notin AN_G(j)$ then $i \notin AN_{G^\prime}(j)$ for all acyclifications $G^\prime$ of $G$; \item There is an inducing path between $i$ and $j$ in $G$ if and only if there is an inducing path between $i$ and $j$ in $G^\prime$ for any acyclification $G^\prime$ of $G$~\footnote{Inducing paths are special kinds of paths in cyclic graphs. We refer readers to Section 3.1 of \citet{mooij-uai20} for further details.}.
\end{enumerate} \end{proposition} \begin{definition} [Independence Model \cite{mooij-uai20}] An independence model of a DCG $\mathcal{H}$ is given by \[ IM_d(\mathcal{H}) := \{ \langle A, B, C \rangle : A, B, C \subset \mathcal{V}, A \overset{d}{\underset{\mathcal{H}}{\rotatebox[origin=c]{90}{$\models$}}} B | C \}. \] Similarly, a $\sigma$-independence model is defined as \[ IM_\sigma(\mathcal{H}) := \{ \langle A, B, C \rangle : A, B, C \subset \mathcal{V}, A \overset{\sigma}{\underset{\mathcal{H}}{\rotatebox[origin=c]{90}{$\models$}}} B | C \}. \] \end{definition} The following assumption is an elaboration of Assumption \ref{asm:acy}; it states that the given background knowledge is \textit{compatible with acyclification}, where $\Psi(G) = 1$ denotes that the given background knowledge (e.g., sufficiency) holds \cite{mooij-uai20}. \begin{assumption} \label{asm:compat} For all DCGs $G$ with $\Psi(G) = 1$, the following three conditions hold: \begin{enumerate}[i] \item There exists an acyclification $G^\prime$ of $G$ with $\Psi(G^\prime) = 1$; \item For all nodes $i, j \in G$: if $i \in AN_G(j)$ then there exists an acyclification $G^\prime$ of $G$ with $\Psi(G^\prime) = 1$ such that $i \in AN_{G^\prime}(j)$; \item For all nodes $i, j \in G$: if $i \notin AN_G(j)$ then there exists an acyclification $G^\prime$ of $G$ with $\Psi(G^\prime) = 1$ such that $i \notin AN_{G^\prime}(j)$. \end{enumerate} \end{assumption} \subsection{PAGs as equivalence class} \begin{figure}[ht] \centering \subfloat[DCG $\mathcal{G}$]{\label{sfig:dcg}\includegraphics[width=.18\textwidth]{images/dcg.pdf}}\hfill \subfloat[PAG]{\label{sfig:pag}\includegraphics[width=.18\textwidth]{images/pag.pdf}} \caption{A simple DCG with a feedback loop (left) and its equivalence class (right).} \label{fig:ccd} \end{figure} The Cyclic Causal Discovery (CCD) algorithm, introduced by \citet{richardson-uai96}, is one of the earliest causal discovery algorithms that do not assume acyclicity. They show that a class of graphs called Partial Ancestral Graphs (PAGs) is a sufficient representation for the equivalence class of cyclic causal models represented by DCGs. PAGs have also been shown to be a sufficient representation for causal discovery with cycles and unobserved confounders~\cite{mooij-uai20}. Since we assume no selection bias for simplicity, we will only discuss directed PAGs (DPAGs) in this study. Note that DPAGs can only represent some features (ancestral relationships) of the true causal model and cannot identify the model itself~\cite{richardson-uai96}. Figure \ref{fig:ccd} shows a simple causal model with a feedback loop and its corresponding equivalence class depicted by a DPAG. From Figure \ref{sfig:pag} we can read that both A and B are ancestors of X and Y, but X and Y cannot be ancestors of A or B. Some key features of a DPAG $\Psi$ are: \begin{enumerate} \item There is an edge between A and B in $\Psi$ iff A and B are connected in the DCG $\mathcal{G}$. \item If there is an edge $B \edgestar A$ in $\Psi$, out of A (not necessarily into B), then in every graph in Equiv($\mathcal{G}$), A is an ancestor of B. \item If there is an edge $A \stararrow B$ in $\Psi$, into B, then in every graph in Equiv($\mathcal{G}$), B is not an ancestor of A. \end{enumerate} \subsection{$\sigma$-separation} The idea of $\sigma$-separation follows from $d$-separation, a fundamental notion for DAGs first introduced by~\citet{pearl-book88}.
$d$-separation exhibits the global Markov property in DAGs, which states that if two variables $X$ and $Y$ are $d$-separated given another variable $Z$ in a DAG representation, then $X$ and $Y$ are conditionally independent given $Z$ in the corresponding distribution of the variables. However, \citet{spirtes-uai95,neal-jair00} show that without any specific assumption regarding the nature of the dependence (e.g., linear, polynomial), the $d$-separation relations are not sufficient to entail all the corresponding conditional independence relations in a DCG. In recent work, an alternative formulation called $\sigma$-separation was introduced, which holds in a very general graphical setting \citep{forre-arxiv17}. Here, we consider a simplified version of the formal definition of $\sigma$-separation: \begin{definition} [$\sigma$-separation] \citep{forre-arxiv17} A walk $\langle v_0, \ldots, v_n \rangle$ in a DCG $\mathcal{G} = \langle \mathcal{V}, \mathcal{E} \rangle$ is $\sigma$-blocked by $C \subseteq \mathcal{V}$ if: \begin{enumerate}[parsep=0pt] \item its first node $v_0 \in C$ or its last node $v_n \in C$, or \item it contains a collider $v_k \notin AN_\mathcal{G}(C)$, or \item it contains a non-collider $v_k \in C$ that points to a node on the walk in another strongly connected component (i.e., $v_{k-1} \rightarrow v_k \rightarrow v_{k+1}$ with $v_{k+1} \notin SC_\mathcal{G}(v_k)$, $v_{k-1} \leftarrow v_k \leftarrow v_{k+1}$ with $v_{k-1} \notin SC_\mathcal{G}(v_k)$, or $v_{k-1} \leftarrow v_k \rightarrow v_{k+1}$ with $v_{k-1} \notin SC_\mathcal{G}(v_k)$ or $v_{k+1} \notin SC_\mathcal{G}(v_k)$). \end{enumerate} \noindent If all walks in $\mathcal{G}$ between any node in a set $A \subseteq \mathcal{V}$ and any node in a set $B \subseteq \mathcal{V}$ are $\sigma$-blocked by a set $C \subseteq \mathcal{V}$, we say that $A$ is $\sigma$-separated from $B$ by $C$, and we write $A \overset{\sigma}{\underset{\mathcal{G}}{\rotatebox[origin=c]{90}{$\models$}}} B | C$. \end{definition} \subsection{$\sigma$-faithfulness} $\sigma$-\textit{faithfulness} refers to the property that every conditional independence found in the distribution generated by a given causal model is entailed by a corresponding $\sigma$-separation relationship. \begin{definition} [$\sigma$-faithfulness] Given $\mathcal{X}_A$, $\mathcal{X}_B$, $\mathcal{X}_C$ as the distributions of variables $A$, $B$, $C$ respectively in a solution $\mathcal{X}$ of a causal model $\mathcal{M}$, $\sigma$-\textit{faithfulness} states that if $\mathcal{X}_A$ and $\mathcal{X}_B$ are conditionally independent given $\mathcal{X}_C$, then $A$ and $B$ are $\sigma$-separated by $C$ in the corresponding, possibly cyclic, graphical model $\mathcal{G}$ of $\mathcal{M}$. \end{definition} \section{Relational Causal Models (RCMs)} We adopt the definition of relational causal models used by previous work on relational causal discovery \citep{maier-uai13, lee-uai20}. We denote random variables and their realizations with uppercase and lowercase letters respectively, and use boldface to denote sets. We use a simplified Entity-Relationship model to describe relational data, following previous work~\citep{heckerman-isrl07}. A relational schema $\mathcal{S} = \langle \bm{\mathcal{E}}, \bm{\mathcal{R}}, \bm{\mathcal{A}}, card \rangle$ represents a relational domain, where $\bm{\mathcal{E}}$, $\bm{\mathcal{R}}$ and $\bm{\mathcal{A}}$ refer to the sets of entity, relationship and attribute classes respectively.
It includes a cardinality function that constrains the number of times an entity instance can participate in a relationship. Figure \ref{fig:rcm} shows an example relational model that describes a simplified user-media engagement system. The model consists of three entity classes (User, Post, and Media) and two relationship classes (Reacts and Creates). Each entity class has a single attribute. The cardinality constraints are shown with crow's foot notation: a user can react to multiple posts, multiple users can react to a post, and only a single media entity can create a given post. \sloppy A \textit{relational skeleton} $s$ is an instantiation of a relational schema $\mathcal{S}$, represented by an undirected graph of entities and relationships. Figure \ref{sfig:skel} shows an example skeleton of the relational model from Figure \ref{fig:rcm}. It shows that Alice and Bob both react to post P1. Alice also reacts to post P2. P1 and P2 are both created by media M1. There can be infinitely many possible skeletons for a given RCM. We denote the set of all skeletons for schema $\mathcal{S}$ as $\Sigma_\mathcal{S}$. Given a relational schema, we can specify relational paths, which intuitively correspond to ways of traversing the schema. For the schema shown in Figure \ref{fig:rcm}, possible paths include [User, Reacts, Post] (the posts a user reacts to), as well as [User, Reacts, Post, Reacts, User] (other users who react to the same post). \textit{Relational variables} consist of a relational path and an attribute. For example, the relational variable [User, Reacts, Post].Engagement corresponds to the overall engagement of the posts that a user reacts to. The first item (i.e., $User$) in the relational path corresponds to the \textit{perspective} of the relational variable. The terminal set $P|_{i_j}$ of a relational path $P = [I_j, \ldots, I_k]$ for an instance $i_j$ is the set of instances of the terminal class $I_k \in \bm{\mathcal{E}} \cup \bm{\mathcal{R}}$ reached by traversing $P$ in the skeleton starting from $i_j$. A relational causal model $\mathcal{M} = \langle \mathcal{S},\mathcal{D} \rangle$ is a collection of relational dependencies defined over a schema $\mathcal{S}$. \textit{Relational dependencies} consist of two relational variables, a cause and an effect. As an example, consider the relational dependency [Post, Reacts, User].Sentiment $\rightarrow$ [Post].Engagement, which states that the engagement of a post is affected by the sentiments of the users who react to it. In Figure \ref{fig:rcm}, the arrows represent relational dependencies. Note that all causal dependencies are defined with respect to a specific perspective. A relational model $\mathcal{M} = (\mathcal{S}, \mathcal{D})$ is said to be cyclic if the set of relational dependencies $\mathcal{D}$ forms one or more directed cycles of arbitrary length. There is a direct feedback loop in the relational model of Figure \ref{fig:rcm}, making it a cyclic relational causal model. \subsection{Ground Graph and $\sigma$-Abstract Ground Graph} \label{sec:gg_agg} \sloppy A realization of a relational model $\mathcal{M}$ with a relational skeleton is referred to as a \textit{ground graph} $GG_\mathcal{M}$. It is a directed graph consisting of the attributes of the entities in the skeleton as nodes and the relational dependencies among them as edges. A single relational model is in fact a template for a set of possible ground graphs based on the given schema. A ground graph has the same semantics as a graphical model.
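To make the grounding operation concrete, the following minimal sketch (our own illustration, not from the original text; plain Python tuples stand in for a real skeleton data structure) applies the dependency template [Post, Reacts, User].Sentiment $\rightarrow$ [Post].Engagement to the small skeleton of Figure \ref{sfig:skel}:

\begin{verbatim}
# Reacts edges of the skeleton: (user, post) pairs.
reacts = [("Alice", "P1"), ("Bob", "P1"), ("Alice", "P2")]

# Grounding [Post, Reacts, User].Sentiment -> [Post].Engagement:
# for every post, the sentiment of each reacting user becomes a
# parent of that post's engagement in the ground graph.
edges = [(u + ".Sentiment", p + ".Engagement") for (u, p) in reacts]
print(edges)
# [('Alice.Sentiment', 'P1.Engagement'),
#  ('Bob.Sentiment', 'P1.Engagement'),
#  ('Alice.Sentiment', 'P2.Engagement')]
\end{verbatim}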
Given a relational model $\mathcal{M}$ and a relational skeleton $s$, we can construct a ground graph \text{$GG_{\mathcal{M}_s}$}{} by applying the relational dependencies specified in the model to the specific instances of the relational skeleton. Figure \ref{sfig:gg} shows the ground graph for the relational model from Figure \ref{fig:rcm}. The relational dependencies present in the given RCM may tempt one to conclude the conditional independence statement \textit{[User].Sentiment $\,\rotatebox[origin=c]{90}{$\models$}\,$ [Media].Preference | [Post].Engagement}. However, when the model is unrolled into a ground graph, we see that the corresponding statement is not true (i.e., \textit{[Bob].Sentiment $\,\not\!\perp\!\!\!\perp$ [M1].Preference | [P1].Engagement}), since there is an alternative path through \textit{[Alice].Sentiment} and \textit{[P2].Engagement} which is activated by conditioning on \textit{[P1].Engagement}. This shows why generalizing over all possible ground graphs is hard. A $\sigma$-\textit{abstract ground graph} ($\sigma$-AGG) is an abstract representation that solves the problem of generalization by capturing the dependencies consistent across all possible ground graphs and representing them as a directed graph. $\sigma$-AGGs are defined for a specific perspective and \textit{hop threshold} $h$. The hop threshold refers to the maximum length of the relational paths allowed in a specific $\sigma$-AGG. There are two types of nodes in a $\sigma$-AGG: relational variables and intersection variables. Intersection variables are constructed from pairs of relational variables with non-empty intersections~\citep{maier-arxiv13}. For example, [User, Reacts, Post] refers to the set of posts a user reacts to, whereas [User, Reacts, Post, Reacts, User, Reacts, Post] refers to the set of other posts reacted to by other users who also reacted to the same post as the given user. These two sets of posts can overlap, which is reflected by the corresponding intersection variable. An edge between a pair of $\sigma$-AGG nodes exists if the instantiations of the constituent relational variables contain a dependent pair in all ground graphs. Figure \ref{sfig:agg} presents the $\sigma$-AGG from the perspective of $User$ and with $h = 6$ corresponding to the model from Figure \ref{fig:rcm}. The $\sigma$-AGG shows that the sentiment of a user is no longer independent of media preference given just the engagements of the posts the user reacts to; we also need to condition on the sentiments of the other users who reacted to the same posts. \subsection{Relational $\sigma$-separation} Conditional independence facts are only useful when they hold across all ground graphs that are consistent with the model. \citet{maier-arxiv13} show that relational $d$-separation is sufficient to achieve this for acyclic models. However, such an abstraction is not possible for cyclic models, since the correctness of $d$-separation is not guaranteed for cyclic graphical models with general forms of dependence~\citep{spirtes-uai95, neal-jair00}. In recent work, \citet{ahsan-clear22} introduced a relational $\sigma$-separation criterion specifically for cyclic relational models: \begin{definition} [Relational $\sigma$-separation] Let $\bm{X}$, $\bm{Y}$, and $\bm{Z}$ be three distinct sets of relational variables with the same perspective $B \in \bm{\mathcal{E}} \cup \bm{\mathcal{R}}$ defined over relational schema $\mathcal{S}$.
Then, for relational model structure $\mathcal{M}$, $\bm{X}$ and $\bm{Y}$ are $\sigma$-separated by $\bm{Z}$ if and only if, for all skeletons $s \in \Sigma_\mathcal{S}$, $\bm{X}|_b$ and $\bm{Y}|_b$ are $\sigma$-separated by $\bm{Z}|_b$ in the ground graph $GG_{\mathcal{M}_s}$ for all instances $b \in s(B)$, where $s(B)$ refers to the instances of $B$ in skeleton $s$. \end{definition} The definition directly follows the definition of relational $d$-separation. If there exists even one skeleton and faithful distribution represented by the relational model for which $\bm{X} \not\!\perp\!\!\!\perp \bm{Y} | \bm{Z}$, then $\bm{X}|_b$ and $\bm{Y}|_b$ are not $\sigma$-separated by $\bm{Z}|_b$ for some $b \in s(B)$. \section{Relational Causal Discovery (RCD)} The RCD algorithm developed by \citet{maier-uai13} is the first sound and complete algorithm that can discover the abstract ground graph of a relational causal model (RCM) under the assumptions of $d$-faithfulness, sufficiency, acyclicity, and a maximum hop threshold $h$. It is designed based on the PC algorithm, with some additional steps introduced specifically to handle the relational aspects of the representation. Similar to the PC algorithm, the steps of RCD are divided into two phases: 1) skeleton detection and 2) edge orientation. The first phase is identical to the PC algorithm. The second phase is also inspired by PC and uses all four orientation rules from the PC algorithm. An additional important step that RCD performs is the propagation of edge orientations, which refers to orienting all edges associated with a given relational dependency in all possible AGGs. This helps ensure completeness and reduces redundant iterations of the algorithm. Moreover, RCD introduces Relational Bivariate Orientation (RBO), an orientation rule specifically designed for relational models. It applies to unshielded triples whose end nodes have the same attribute type and entity type. Unlike \textit{collider detection} in PC, RBO can orient an unshielded triple as either a collider or a fork structure, which yields significantly more orientations than using only the four PC orientation rules. \citet{maier-uai13} provide theoretical guarantees for the soundness and completeness of RCD. \subsection{Demonstration on Real Data} \begin{figure}[!ht] \centering \includegraphics[width=.40\textwidth]{images/demo_rcd.pdf} \caption{ RCD-learned model of MovieLens+~\cite{maier-uai13}. } \label{fig:demo_rcd} \end{figure} \citet{maier-uai13} applied RCD to the MovieLens+ database, a combination of the UMN MovieLens database (www.grouplens.org); box office, director, and actor information collected from IMDb (www.imdb.com); and average critic ratings from Rotten Tomatoes (www.rottentomatoes.com). They used a sample of 75,000 ratings. For testing conditional independence, they used the significance of coefficients in linear regressions and considered the average aggregation function for relational variables. They ran RCD with a hop threshold of 4, a maximum depth of 3, and an effect size threshold of 0.01. The RCD-generated output is given in Figure \ref{fig:demo_rcd}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} We shall be interested in this work in the so-called normalized Fr\'{e}chet distribution in the form \begin{equation}\label{eq1} Fr(\gamma, x) = \gamma x^{-(1+\gamma)} \exp(-x^{-\gamma}), \quad 0 \leq x < \infty, \end{equation} for the real shape parameter $\gamma > 0$. It was introduced by Fr\'{e}chet in 1927 [\onlinecite{MFrechet27}] and has been recognized as one of the three types of probability distribution functions (pdf's) characterizing extreme value phenomena [\onlinecite{SKotz00}]. The field of applications of the Fr\'{e}chet distribution is immense and the reader should consult [\onlinecite{SKotz00}] for references on its role in probability and statistics. Recently the Fr\'{e}chet distribution has been invoked and studied in [\onlinecite{TSimon14}] because of its apparent similarity to one-sided L\'{e}vy stable pdf's, which are playing an increasingly important role in non-equilibrium Statistical Mechanics [\onlinecite{KAPenson10, KGorska12, KGorska12a, EBarkai}]. The common features shared between $Fr(\gamma, x)$ and its L\'{e}vy stable counterparts include the essential singularity for $x\to 0$, unimodality and heavy-tailed algebraic decay for $x~\to~\infty$. We shall return to this comparison below. Since $Fr(\gamma, x)$ is an elementary function, expectation values of many functions on $(0, \infty)$ can be obtained exactly. However, this does not apply to the Laplace transform $\mathbb{E}(e^{-px})$, $\Re(p)>0$, which is explicitly known only in the case $\gamma = 1$ [\onlinecite{TSimon14}]. The knowledge of the Laplace transform is an important issue, as it is linked to the alternating moment-generating function. The purpose of this note is to present an exact calculation of the Laplace transform of $Fr(\gamma, x)$ for arbitrary rational values of the shape parameter $\gamma$, i.e. for $\gamma = l/k$ with $l, k = 1,2, \ldots$. The paper is organized as follows: In section II we present the definitions and preliminaries on integral transforms that will serve to obtain the desired results. In section III we give the derivation of the main result and of some relations satisfied by the Laplace transform. We also furnish graphical representations for selected cases. In section IV we introduce the Fr\'{e}chet transform and discuss some of its properties. We conclude the paper in section V, where we also give more elements of comparison with L\'{e}vy stable pdf's. \section{Definitions and preliminaries} In the following we give some definitions and information about the Mellin transform of a function $f(x)$ defined for $x\geq 0$. The Mellin transform is defined for complex $s$ as [\onlinecite{INSneddon72}] \begin{equation}\label{27.10.13-1} \mathcal{M}[f(x); s] = f^{\star}(s) = \int_{0}^{\infty} x^{s-1} f(x) dx, \end{equation} along with its inverse \begin{equation}\label{27.10.14-2} \mathcal{M}^{-1}[f^{\star}(s); x] = f(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} x^{-s} f^{\star}(s) ds. \end{equation} For the role of the constant $c$ consult [\onlinecite{INSneddon72}]. For fixed $a>0$, $h\neq 0$, the Mellin transform satisfies the following scaling property: \begin{equation}\label{27.10.13-3} \mathcal{M}[x^{b}f(ax^{h}); s] = \frac{1}{|h|} a^{-\frac{s+b}{h}} f^{\star}(\ulamek{s+b}{h}). \end{equation} The knowledge of the Mellin transform permits one in many cases to obtain the Laplace transform $\mathcal{L}[f(x); p]~=~\int_{0}^{\infty} e^{-px}f(x) dx$, $\Re(p)>0$.
This is due to a powerful relation between the Laplace and inverse Mellin transforms: \begin{equation}\label{27.10.13-4} \mathcal{M}^{-1}[f^{\star}(1-s)\Gamma(s); x] = \mathcal{L}[f(t); x], \end{equation} which belongs to a set of convolution-type theorems discussed in [\onlinecite{LDebnath07}], and is mentioned in [\onlinecite{INSneddon72}], see exercise 4-2 (a) on p.~288. The proof of \eqref{27.10.13-4} is immediate, as it suffices to calculate $\mathcal{M}[\mathcal{L}[f(t); x]; s]$: \begin{align}\label{27.10.13-5} \mathcal{M}[\mathcal{L}[f(t); x]; s] &= \mathcal{M}\left[\int_{0}^{\infty} e^{-xt}f(t) dt; s\right] = \int_{0}^{\infty} x^{s-1}\left(\int_{0}^{\infty} e^{-xt}f(t)dt\right)dx \nonumber \\ & = \int_{0}^{\infty} f(t) \Gamma(s) t^{-s} dt = \Gamma(s) \int_{0}^{\infty} t^{-s} f(t) dt = \Gamma(s) f^{\star}(1-s). \end{align} The relation \eqref{27.10.13-5} has been previously used in connection with Stieltjes moment problems related to lognormal distributions [\onlinecite{KAPenson99}]. Among known Mellin transforms a very special role is played by those entirely expressible through products and ratios of Euler's gamma functions. The Meijer G function is defined as an inverse Mellin transform [\onlinecite{APPrudnikov-v3}]: \begin{align}\label{27.10.13-6} G^{m, n}_{p, q}\left(z\Big\vert{\alpha_{1} \ldots \alpha_{p} \atop \beta_{1} \ldots \beta_{q}}\right) &= \mathcal{M}^{-1}\left[\frac{\prod_{j=1}^{m}\Gamma(\beta_{j}+s)\, \prod_{j=1}^{n}\Gamma(1-\alpha_{j}-s)}{\prod_{j=m+1}^{q}\Gamma(1-\beta_{j}-s)\, \prod_{j=n+1}^{p}\Gamma(\alpha_{j}+s)}; z\right] \\ \label{27.10.13-7} & = {MeijerG}([[\alpha_{1}, \ldots, \alpha_{n}], [\alpha_{n+1}, \ldots, \alpha_{p}]], [[\beta_{1}, \ldots, \beta_{m}], [\beta_{m+1}, \ldots, \beta_{q}]], z), \end{align} where in Eq. \eqref{27.10.13-6} empty products are taken to be equal to 1. In Eqs. \eqref{27.10.13-6} and \eqref{27.10.13-7} the parameters are subject to the conditions: \begin{align}\label{27.10.13-8} &z\neq 0, \quad 0 \leq m \leq q, \quad 0 \leq n \leq p, \nonumber\\ &\alpha_{j} \in \mathbb{C}, \quad j=1,\ldots, p; \quad \beta_{j}\in\mathbb{C}, \quad j=1,\ldots, q. \end{align} For a full description of the integration contours in Eq. \eqref{27.10.13-6}, general properties and special cases of the Meijer G functions, see Ref. [\onlinecite{APPrudnikov-v3}]. In Eq. \eqref{27.10.13-7} we present a transparent notation, inspired by computer algebra systems [\onlinecite{CAS}], which we will use in parallel henceforth. We quote for reference the Gauss-Legendre multiplication formula for gamma functions encountered in this work: \begin{align}\label{27.10.13-9} &\Gamma(nz) = (2\pi)^{\frac{1-n}{2}} n^{nz-\frac{1}{2}} \prod_{j=0}^{n-1}\Gamma\left(z+\frac{j}{n}\right), \nonumber\\ &z\neq 0, -1, -2, \ldots, \quad n=1, 2, \ldots. \end{align} A frequently occurring special list of $k$ elements of the form \begin{equation}\label{27.10.13-10} \frac{a}{k}, \frac{a+1}{k}, \ldots, \frac{a + k-1}{k} \end{equation} is denoted by $\Delta(k, a)$, $k\neq 0$, see [\onlinecite{APPrudnikov-v3}]. \section{Derivation of the main result} It turns out that the Laplace transform of $Fr(\gamma, x)$ can be obtained for $\gamma=l/k$ with $l, k = 1, 2,\ldots$, and the corresponding pdf will from now on be denoted by $Fr(l, k, x)$.
As a first step we calculate the Mellin transform $\mathcal{M}[Fr(l, k, x); s]$, using the property \eqref{27.10.13-3}: \begin{equation}\label{eq2} \mathcal{M}[Fr(l, k, x); s] = \Gamma\left[k(\ulamek{1-s}{l} + \ulamek{1}{k})\right], \end{equation} which we rewrite as $\Gamma(kz)$ with $z = \ulamek{1-s}{l} + \ulamek{1}{k}$, $z \neq 0, -1, -2, \ldots$. To this last expression we apply the Gauss-Legendre formula \eqref{27.10.13-9}, and \eqref{eq2} transforms into \begin{equation}\label{eq3} \Gamma(kz) = \frac{k^{k/l + 1/2}}{(2\pi)^{(k-1)/2}} (k^{k})^{-s/l} \prod_{r=1}^{k} \Gamma\left(\frac{1}{l} + \frac{r}{k} - \frac{s}{l}\right) = \mathcal{M}[Fr(l, k, x); s]. \end{equation} As the next step we calculate $\mathcal{M}[Fr(l, k, x); 1-s]$, and in order to apply the formula \eqref{27.10.13-4} we apply \eqref{27.10.13-9} once again, now to $\Gamma(s)$, and obtain: \begin{equation}\label{eq4} \Gamma(s) = (2\pi)^{(1-l)/2} l^{-1/2} (l^{l})^{s/l} \prod_{j=1}^{l}\Gamma\left(\frac{s}{l} + \frac{j-1}{l}\right), \quad l=1, 2, \ldots. \end{equation} We are now in a position to rewrite \begin{align}\label{eq5} \begin{split} \mathcal{M}[Fr(l, k, x); 1-s] \Gamma(s) &= \frac{k^{1/2} l^{-1/2}}{(2\pi)^{(k+l)/2-1}} (k^{k} l^{l})^{s/l} \left[\prod_{r=1}^{k}\Gamma\left(\frac{s}{l} + \frac{r}{k}\right)\right]\, \left[\prod_{j=1}^{l} \Gamma\left(\frac{s}{l} + \frac{j-1}{l}\right)\right]. \end{split} \end{align} Observe that in Eq. \eqref{eq5} the variable $s$ appears only in the combination $s/l$. Consequently, we invert Eq. \eqref{eq5} with a special case of Eq. \eqref{27.10.13-3}: \begin{equation}\label{eq6} \mathcal{M}^{-1}\left[\frac{1}{a^{s/l}} f^{\star}\left(\frac{s}{l}\right); x\right] = l f(a x^{l}), \quad a = (k^{k} l^{l})^{-1}. \end{equation} This last expression permits one to use Eqs. \eqref{27.10.13-6} and \eqref{27.10.13-7}, and we obtain the final result for the Laplace transform of $Fr(l, k, x)$: \begin{align}\label{eq7} \begin{split} \mathcal{L}[Fr(l, k, x); p] & = \mathcal{M}^{-1}\left[\mathcal{M}[Fr(l, k, x); 1-s] \Gamma(s); p\right] \\ & = \frac{\sqrt{kl}}{(2\pi)^{(k+l)/2 - 1}} G^{k+l, 0}_{0, k+l}\left(\frac{p^{l}}{k^{k} l^{l}} \Big\vert {- \atop \Delta(k, 1), \Delta(l, 0)}\right) \\ & = \frac{\sqrt{kl}}{(2\pi)^{(k+l)/2-1}} MeijerG\left([[\,\,\,], [\,\,\,]], [[\{\ulamek{r}{k}\}_{r=1}^{k}, \{\ulamek{j-1}{l}\}_{j=1}^{l}], [\,\,\,]], \ulamek{p^{l}}{k^{k} l^{l}}\right), \end{split} \end{align} where $\Re(p)>0$. We note that $\{\ulamek{r}{k}\}_{r=1}^{k} = \ulamek{1}{k}, \ulamek{2}{k}, \ulamek{3}{k}, \ldots, 1 = \Delta(k, 1)$ and similarly $\{\ulamek{j-1}{l}\}_{j=1}^{l}=\Delta(l, 0)$, compare the definition Eq. \eqref{27.10.13-10}. The final result in the computer algebra systems notation takes a particularly transparent form: \begin{equation}\label{eq8} \mathcal{L}[Fr(l, k, x); p] = \frac{\sqrt{kl}}{(2\pi)^{(k+l)/2 -1}} MeijerG\left([[\,\,\,], [\,\,\,]], [[\Delta(k, 1), \Delta(l, 0)], [\,\,\,]], \ulamek{p^{l}}{k^{k} l^{l}}\right), \quad \Re(p) > 0. \end{equation} There is a somewhat hidden symmetry in Eqs. \eqref{eq7} and \eqref{eq8}, which becomes apparent by rewriting the list of $k+l$ terms in the third bracket of \eqref{eq8} as \begin{align}\label{eq9} \begin{split} \Delta(k, 1), \Delta(l, 0) & = [\ulamek{1}{k}, \ulamek{2}{k}, \ldots, \ulamek{k-1}{k}], 0, 1, [\ulamek{1}{l}, \ulamek{2}{l}, \ldots, \ulamek{l-1}{l}]\\ & = [\ulamek{1}{l}, \ulamek{2}{l}, \ldots, \ulamek{l-1}{l}], 0, 1, [\ulamek{1}{k}, \ulamek{2}{k}, \ldots, \ulamek{k-1}{k}] \\ & = \Delta(l, 1), \Delta(k, 0).
\end{split} \end{align} This last observation leads to a neat transformation law of $\mathcal{L}[Fr(l, k, x); p]$ under the transmutation $l\leftrightarrow k$: \begin{equation}\label{eq10} \mathcal{L}[Fr(l, k, x); p] = \mathcal{L}[Fr(k, l, x); p^{l/k}], \quad \Re(p)>0, \end{equation} which links the case $l/k<1$ with that of $l/k>1$. For $l=k=1$ the appropriate Meijer G function can be related to the modified Bessel function $K_{1}(z)$, as then \begin{align}\label{eq11} \begin{split} \mathcal{L}[Fr(1, 1, x); p] &= G^{2, 0}_{0, 2}\left(p\Big\vert {- \atop 1, 0}\right)\\ & = MeijerG([[\,\,\,], [\,\,\,]], [[0, 1], [\,\,\,]], p) \\ & = 2\sqrt{p} K_{1}(2\sqrt{p}), \quad \Re(p) > 0, \end{split} \end{align} which is a consequence of formula 8.4.23.1 of [\onlinecite{APPrudnikov-v3}]. The above Laplace transform is the only case that can be represented by a standard special function. Below we list some transforms for several low-lying values of $l, k \geq 1$, with the condition $\Re(p)>0$ applying to all of them: \begin{align}\label{eq12} \mathcal{L}[Fr(1, 2, x); p] &= \pi^{-1/2} G^{3, 0}_{0, 3} \left(\frac{p}{4}\Big\vert {- \atop 0, \ulamek{1}{2}, 1}\right) = \pi^{-1/2} MeijerG([[\,\,\,], [\,\,\,]], [[0, \ulamek{1}{2}, 1], [\,\,\,]], \ulamek{p}{4}),\\ \label{eq13} \mathcal{L}[Fr(2, 1, x); p] &= \pi^{-1/2} G^{3, 0}_{0, 3} \left(\frac{p^{2}}{4}\Big\vert {- \atop 0, \ulamek{1}{2}, 1}\right) = \pi^{-1/2} MeijerG([[\,\,\,], [\,\,\,]], [[0, \ulamek{1}{2}, 1], [\,\,\,]], \ulamek{p^{2}}{4}), \end{align} \begin{align} \mathcal{L}[Fr(1, 3, x); p] &= \frac{\sqrt{3}}{2\pi} G^{4, 0}_{0, 4} \left(\frac{p}{27}\Big\vert {- \atop 0, \ulamek{1}{3}, \ulamek{2}{3}, 1}\right) \nonumber\\ &= \frac{\sqrt{3}}{2\pi} MeijerG([[\,\,\,], [\,\,\,]], [[0, \ulamek{1}{3}, \ulamek{2}{3}, 1], [\,\,\,]], \ulamek{p}{27}), \label{eq14} \\[0.6\baselineskip] \mathcal{L}[Fr(2, 3, x); p] &= \frac{\sqrt{12}}{4\pi^{3/2}} G^{5, 0}_{0, 5} \left(\frac{p^{2}}{108}\Big\vert {- \atop 0, \ulamek{1}{3}, \ulamek{1}{2}, \ulamek{2}{3}, 1}\right) \nonumber\\ &= \frac{\sqrt{12}}{4\pi^{3/2}} MeijerG([[\,\,\,], [\,\,\,]], [[0, \ulamek{1}{3}, \ulamek{1}{2}, \ulamek{2}{3}, 1], [\,\,\,]], \ulamek{p^{2}}{108}), \label{eq15} \\[0.6\baselineskip] \mathcal{L}[Fr(3, 2, x); p] &= \frac{\sqrt{12}}{4\pi^{3/2}} G^{5, 0}_{0, 5} \left(\frac{p^{3}}{108}\Big\vert {- \atop 0, \ulamek{1}{3}, \ulamek{1}{2}, \ulamek{2}{3}, 1}\right) \nonumber\\ & = \frac{\sqrt{12}}{4\pi^{3/2}} MeijerG([[\,\,\,], [\,\,\,]], [[0, \ulamek{1}{3}, \ulamek{1}{2}, \ulamek{2}{3}, 1], [\,\,\,]], \ulamek{p^{3}}{108}).\label{eq16} \end{align} The reason why the expressions \eqref{eq12}, \eqref{eq13}, \eqref{eq14}, \eqref{eq15} and \eqref{eq16}, as well as the corresponding ones for $l, k~>~3$ (not listed here), cannot be represented by more conventional special functions (as for example the generalized hypergeometric functions ${_{p} F_{q}}$), is that from the list $[\Delta(k, 1), \Delta(l, 0)]$ one can form differences of indices (members of this list) that are equal to zero or an integer (positive or negative). This excludes the application of the decoupling formulas 8.2.2.3 of [\onlinecite{APPrudnikov-v3}]. Therefore the forms \eqref{eq12}-\eqref{eq16} are the final ones. (The reader might be tempted to use the tabulated integral in the formula 2.3.2.14 on p.~322 of [\onlinecite{APPrudnikov-v1}] which at first sight could correspond to the required Laplace transform.
However, this formula is not appropriate, as it applies only for $\alpha \neq nr$, $n=0, \pm 1, \pm 2, \ldots$, as can be seen from the arguments of the gamma functions.) The Meijer G functions are implemented in computer algebra systems [\onlinecite{CAS}] and should be considered as full-fledged special functions allowing differentiation, integration, etc., as well as plotting. We have illustrated our results of Eq. \eqref{eq8} by representing graphically the Laplace transforms for selected values of $l$ and $k$. This is done in Figs. \ref{fig1} and \ref{fig2}. Observe that for all $l$ and $k$, $\lim\limits_{p\to 0}\mathcal{L}[Fr(l, k, x); p] = 1$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.56]{rys1.eps} \caption{\label{fig1} (Color Online) Plot of Eq. \eqref{eq8} for $k=4$, $l=1$ (red line), $l=2$ (blue line), $l=3$ (green line), and $l=4$ (gold line).} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.56]{rys2.eps} \caption{\label{fig2} (Color Online) Semi-logarithmic plot of Eq. \eqref{eq8} for $k=1$, $l=1$ (red line), $l=2$ (blue line), and $l=3$ (green line).} \end{center} \end{figure} \section{Fr\'{e}chet transform} The knowledge of the Laplace transform of $Fr(\gamma, x)$ permits one to consider a new type of integral transform with a function related to $Fr(\gamma, x)$ as a kernel. In this section we introduce the Fr\'{e}chet integral transform. In analogy to the L\'{e}vy kernel [\onlinecite{KGorska12a}], the Fr\'{e}chet kernel $\sigma_{\gamma}(x, t)$ is equal to a suitably rescaled two-variable Fr\'{e}chet pdf, namely: \begin{equation}\label{30.10.13-1} \sigma_{\gamma}(x, t) = \frac{1}{t^{1/\gamma}} Fr\left(\gamma, \frac{x}{t^{1/\gamma}}\right) = \gamma t x^{-(1+\gamma)} e^{-t x^{-\gamma}}, \quad \gamma, t, x > 0, \end{equation} which is a positive function. For a function $f(x)$ given for $x > 0$, we define its Fr\'{e}chet transform as \begin{equation}\label{30.10.13-2} \bar{f}(\gamma, x) = \int_{0}^{\infty} \sigma_{\gamma}(x, t) f(t) dt. \end{equation} Since the kernel $\sigma_{\gamma}(x, t)$ contains the exponential function, the Fr\'{e}chet transform is related to the Laplace transform. Indeed, in general we can write \begin{align}\label{17Feb2014-1} \bar{f}(\gamma, x) &= \int_{0}^{\infty} \sigma_{\gamma}(x, t) f(t) dt = \int_{0}^{\infty} \gamma t x^{-(1+\gamma)} e^{-t x^{-\gamma}} f(t) dt \\ & = -\gamma x^{-(1+\gamma)} \int_{0}^{\infty} \left[\frac{d}{d u} e^{-t u}\right]_{u=x^{-\gamma}}\!\! f(t) dt \\ & = -\gamma x^{-(1+\gamma)} \left[\frac{d}{d u} \int_{0}^{\infty} e^{-t u} f(t) dt \right]_{u=x^{-\gamma}} \\ & = -\gamma x^{-(1+\gamma)} \left\{\frac{d}{d u} \mathcal{L}[f(t); u]\right\}_{u=x^{-\gamma}}. \end{align} For many functions their Fr\'{e}chet transforms can be obtained readily. Among them the one-sided L\'{e}vy stable pdf's $g_{\alpha}(x)$ play a special role. The one-sided L\'{e}vy stable pdf for $0 < \alpha < 1$ by definition satisfies $\mathcal{L}[g_{\alpha}(x); p] = \exp(-p^{\alpha})$, see [\onlinecite{KAPenson10, KGorska12a, EBarkai, Zolotariev-b, Uchaikin-b}].
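For $\alpha = 1/2$ the stable density is elementary, $g_{1/2}(t) = \frac{1}{2\sqrt{\pi}}\, t^{-3/2} e^{-1/(4t)}$, which allows a quick numerical sanity check of this defining property. The following minimal sketch (our illustration, not part of the original text; it assumes the Python library mpmath) compares direct quadrature with $\exp(-\sqrt{p})$:

\begin{verbatim}
from mpmath import mp, quad, exp, sqrt, pi, inf

mp.dps = 30
p = mp.mpf(2)

# Levy-Smirnov density: the alpha = 1/2 one-sided stable pdf,
# g_{1/2}(t) = t^(-3/2) exp(-1/(4t)) / (2 sqrt(pi)).
g_half = lambda t: t**mp.mpf(-1.5) * exp(-1/(4*t)) / (2*sqrt(pi))

lhs = quad(lambda t: exp(-p*t) * g_half(t), [0, 1, inf])   # L[g_{1/2}](p)
rhs = exp(-sqrt(p))                                        # exp(-p^alpha)
print(lhs, rhs)  # the two values agree to working precision
\end{verbatim}

The same pattern, with the integrand replaced by $e^{-pt}\,Fr(l/k, t)$, can be used to spot-check the Meijer G expressions of Eq. \eqref{eq8} numerically.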
The Fr\'{e}chet transform of the one-sided L\'{e}vy stable distribution $g_{\alpha}(t)$ is calculated below, with $0 < \alpha < 1$: \begin{align}\label{30.10.13-3} \bar{g}_{\alpha}(\gamma, x) & = \int_{0}^{\infty} \sigma_{\gamma}(x, t) g_{\alpha}(t) dt = \int_{0}^{\infty} \gamma t x^{-(1+\gamma)} e^{-t x^{-\gamma}} g_{\alpha}(t) dt \\ \label{30.10.13-3b} & = \gamma x^{-(1+\gamma)} \left[-\frac{d}{du} \int_{0}^{\infty} e^{- t u} g_{\alpha}(t) dt\right]_{u=x^{-\gamma}} \\ \label{30.10.13-3c} & = \gamma x^{-(1+\gamma)} \left[-\frac{d}{du} e^{-u^{\alpha}}\right]_{u=x^{-\gamma}} \\ \label{30.10.13-3d} & = \gamma x^{-(1+\gamma)} \left[\alpha u^{\alpha-1} e^{-u^{\alpha}}\right]_{u=x^{-\gamma}} \\ \label{30.10.13-3e} & = \gamma\alpha x^{-(1+\gamma\alpha)} e^{-x^{-\gamma\alpha}}, \quad \gamma > 0, \end{align} so it is again given by a Fr\'{e}chet pdf, $Fr(\gamma\alpha, x)$. Therefore, the Laplace transform of $\bar{g}_{\alpha}(\gamma, x)$ is given by \begin{equation}\label{30.10.13-4} \mathcal{L}[\bar{g}_{\alpha}(\gamma, x); p] = \mathcal{L}[Fr(\gamma\alpha, x); p], \end{equation} and according to Eq. \eqref{eq8} can be computed exactly for rational $\gamma\alpha$. The Fr\'{e}chet transform of another Fr\'{e}chet distribution is equal to: \begin{align}\label{30.10.13-5} \bar{Fr}_{\gamma}(\beta, x) & = \int_{0}^{\infty} \sigma_{\gamma}(x, t) Fr(\beta, t) dt = \int_{0}^{\infty} \gamma t x^{-(1+\gamma)} e^{-tx^{-\gamma}} Fr(\beta, t) dt \nonumber\\ & = \gamma x^{-(1+\gamma)} \left\{-\frac{d}{du} \mathcal{L}[Fr(\beta, t); u]\right\}_{u=x^{-\gamma}}, \end{align} and again can be calculated, for arbitrary positive $\gamma$ and rational $\beta$, from derivatives of Eq. \eqref{eq8}. Since the derivatives of Meijer G functions are again expressible by Meijer G functions (see formula 8.2.2.32 of [\onlinecite{APPrudnikov-v3}]), transforms of the type of Eq. \eqref{30.10.13-5} can be evaluated exactly. If we choose for illustration $\beta=1/2$, then for arbitrary $\gamma > 0$ \begin{align}\label{16.02.2014-1} \begin{split} \bar{Fr}_{\gamma}(1/2, x) = \int_{0}^{\infty} \sigma_{\gamma}(x, t) Fr(1/2, t) dt &= \ulamek{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} MeijerG([[\,\,\,], [\,\,\,]], [[-\ulamek{1}{2}, 0, 0], [\,\,\,]], \ulamek{x^{-\gamma}}{4}) \\ & = \ulamek{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} G^{3, 0}_{0, 3} \left(\ulamek{x^{-\gamma}}{4}\Big\vert{- \atop -\ulamek{1}{2}, 0, 0}\right). \end{split} \end{align} The graphical representation of Eq. \eqref{16.02.2014-1} is given in Fig. \ref{fig3} for $\gamma = 1/3$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.56]{rys3.eps} \caption{\label{fig3}Plot of Eq. \eqref{16.02.2014-1} for $\gamma = 1/3$.} \end{center} \end{figure} We provide below, for completeness, the demonstration of Eq.
\eqref{16.02.2014-1}: \begin{align}\label{27Feb14-1} \bar{Fr}_{\gamma}(1/2, x) &= \gamma x^{-(1+\gamma)} \left[-\frac{d}{du} \pi^{-1/2} G^{3, 0}_{0, 3}\left(\frac{u}{4}\Big\vert {- \atop 0, \ulamek{1}{2}, 1}\right)\right]_{u=x^{-\gamma}} \\ \label{27Feb14-2} &= -\frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} \left[\frac{d}{dv} G^{3, 0}_{0, 3}\left(v \Big\vert {- \atop 0, \ulamek{1}{2}, 1}\right)\right]_{v=x^{-\gamma}/4}\\ \label{27Feb14-3} & = \frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} \left[-v^{-1} G^{3, 1}_{1, 4}\left(v \Big\vert {0 \atop 0, \ulamek{1}{2}, 1, 1}\right)\right]_{v=x^{-\gamma}/4} \\ \label{27Feb14-4} & = \frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} \left[v^{-1} G^{4, 0}_{1, 4}\left(v \Big\vert {0 \atop 0, \ulamek{1}{2}, 1, 1}\right)\right]_{v=x^{-\gamma}/4} \\ \label{27Feb14-5} &= \frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} \left[v^{-1} G^{3, 0}_{0, 3}\left(v \Big\vert {- \atop \ulamek{1}{2}, 1, 1}\right)\right]_{v=x^{-\gamma}/4} \\ \label{27Feb14-6} & = \frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} \left[G^{3, 0}_{0, 3}\left(v \Big\vert {- \atop -\ulamek{1}{2}, 0, 0}\right)\right]_{v=x^{-\gamma}/4} \\ \label{27Feb14-7} & = \frac{\gamma}{4\sqrt{\pi}} x^{-(1+\gamma)} G^{3, 0}_{0, 3}\left(\frac{x^{-\gamma}}{4} \Big\vert {- \atop -\ulamek{1}{2}, 0, 0}\right). \end{align} Eq. \eqref{27Feb14-3} has been obtained using the differentiation formula (8.2.2.32) of [\onlinecite{APPrudnikov-v3}], Eq. \eqref{27Feb14-4} results from formula (8.2.2.16) of [\onlinecite{APPrudnikov-v3}], and Eq. \eqref{27Feb14-6} is based on the translation formula (8.2.2.15) of [\onlinecite{APPrudnikov-v3}]. \section{Summary and Conclusions} We are now in a position to confront several characteristics of the Fr\'{e}chet and L\'{e}vy stable laws. The functional form of the Fr\'{e}chet pdf, see Eq. \eqref{eq1}, is an elementary function displaying an essential singularity at $x\to 0$, unimodality, and heavy-tailed decay at $x\to\infty$ for all values of $\gamma > 0$. The one-sided L\'{e}vy laws $g_{\alpha}(x)$ can be represented by an elementary function only for $\alpha = 1/2$. Their explicit forms can be expressed by standard special functions only for $\alpha = 1/3, 2/3$; for other rational values of $\alpha$ they can be represented by a finite sum of generalized hypergeometric functions [\onlinecite{KAPenson10}]. For arbitrary real $0 < \alpha < 1$ the explicit form of $g_{\alpha}(x)$ is unknown. The Laplace transform of the Fr\'{e}chet distribution, as shown in the present work, can be expressed by a higher transcendental function, the Meijer G function, but only for rational values of the shape parameter $\gamma$. The Laplace transform for general real $\gamma > 0$ is unknown. In contrast, the Laplace transform of the L\'{e}vy stable law $g_{\alpha}(x)$ is given for any $0 < \alpha < 1$ by the stretched exponential $\exp(-p^{\alpha})$. This comparison can be carried even further by exploiting the asymptotic forms of $g_{\alpha}(x)$, $x\to 0$, obtained in [\onlinecite{Mikusinski}].
In fact, the asymptotic formula \begin{equation}\label{4Mar14-1} g^{\rm a}_{\alpha}(t) = \frac{1}{\sqrt{2\pi}} \alpha^{\frac{1}{2-2\alpha}} t^{-\frac{2-\alpha}{2-2\alpha}} \exp\left[-(1-\alpha) \alpha^{\frac{\alpha}{1-\alpha}} t^{-\frac{\alpha}{1-\alpha}}\right], \end{equation} upon the substitutions $\alpha = \gamma/(1+\gamma)$ and $t = \gamma x/(1+\gamma)^{1+1/\gamma}$, furnishes \begin{equation}\label{4Mar14-2} g^{\rm a}_{\gamma/(1+\gamma)}\left(\frac{\gamma x}{(1+\gamma)^{1+1/\gamma}}\right) = \frac{(1+\gamma)^{1/\gamma}}{\sqrt{2\pi}} \left(\frac{1+\gamma}{\gamma}\right)^{3/2} x^{\gamma/2} Fr(\gamma, x). \end{equation} Therefore, for given $\gamma$ and small $x$, it is convenient to compare $Fr(\gamma, x)$ with the right hand side of Eq. \eqref{4Mar14-2}. Spurious effects can be eliminated by considering reduced distributions, obtained by dividing the original quantities by their corresponding maximal values. This is done in Fig. \ref{fig4}. As expected from Eq. \eqref{4Mar14-2}, the two distributions approach each other more closely as $\gamma$ decreases. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.56]{rys4.eps} \caption{\label{fig4} (Color online) Comparison of the reduced $g^{\rm a}_{\alpha}(t)$ with the reduced $Fr(\gamma, x)$ for $\alpha = 1/2$, $t=x/4$ (blue line), and $\gamma =1$ (red line), and for $\alpha = 1/4$, $t=27x/256$ (green line), and $\gamma=1/3$ (gold line).} \end{center} \end{figure} Both of these distributions have known expressions for their $\mu$-th power moments: for the L\'{e}vy laws the only non-diverging moments are equal to \begin{equation}\label{28Feb14-1} \int_{0}^{\infty} x^{\mu} g_{\alpha}(x) dx = \frac{\Gamma(1-\ulamek{\mu}{\alpha})}{\Gamma(1-\mu)}, \quad -\infty < \mu < \alpha. \end{equation} The corresponding expression for the Fr\'{e}chet law is \begin{equation}\label{28Feb14-2} \int_{0}^{\infty} x^{\mu} Fr(\gamma, x) dx = \Gamma(1-\ulamek{\mu}{\gamma}), \quad -\infty < \mu < \gamma. \end{equation} Thus, the divergence of sufficiently high moments is a common feature of both of these distributions. For a deeper probabilistic analysis of the comparison between the L\'{e}vy and Fr\'{e}chet laws the reader is referred to [\onlinecite{TSimon14}]. Last but not least, we would like to underline the utility of the Meijer G functions, which are the most important calculational tool in the present work. For the reasons explored above, our key results cannot be represented by other, better-known functions. In our opinion the Meijer G functions must be considered as known special functions whose diverse applications are becoming truly universal: for a choice of recent applications see [\onlinecite{DBermudez14, PJForrester14, WMlotkowski13, GAkemann13}] and the references therein. In a recent exposition [\onlinecite{RBeals13}] their use is strongly advocated as being ``both natural and attractive''. Needless to say, we unreservedly share this conviction. \section*{Acknowledgment} The authors acknowledge support from the PHC Polonium, Campus France, project no. 28837QA.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Acknowledgements} This work was supported by EPSRC under the New Investigator Award grant, EP/R030073/1 (Tarapore). The authors acknowledge the IRIDIS High Performance Computing Facility and thank Arnold Benedict for initial work on the simulator. \bibliographystyle{ACM-Reference-Format} \section{Conclusion} In quality-diversity algorithms such as MAP-Elites, a key challenge is to design a behaviour space and a set of evolutionary operators. We present a novel QD meta-evolution system that automatically determines the behaviour space and evolutionary parameters for the MAP-Elites algorithm. The system extends Meta-evolution with CMA-ES \citep{Bossens2020a}, a prior quality-diversity meta-evolution algorithm, with three modifications: (i) a reformulated database that selectively maintains solutions based on their contribution to quality and diversity; (ii) feature-maps that can represent arbitrary functions of the base-features and the meta-genotype; and (iii) parameter control to adjust the mutation rate and the number of generations per meta-generation of the MAP-Elites algorithms. Improved QD meta-evolution is demonstrated on an 8-joint planar robot arm, studying various feature-maps (linear, non-linear, and feature-selection) as well as various parameter control strategies (static, endogenous, reinforcement learning, and annealing). In evolution, non-linear feature-maps yield a 15-fold improvement over the traditional linear transformation used in prior meta-evolution work, although they require more function evaluations. Such feature-maps cannot be hand-crafted and are unique representations customised to a meta-level objective. Feature-selection provides a 3-fold improvement over linear feature-maps without requiring more function evaluations. Parameter control yields up to 90\% meta-fitness improvement for the mutation rate and up to 40\% meta-fitness improvement for the number of generations per meta-generation. Reinforcement learning consistently ranks among the top parameter control methods and is therefore recommended as a generic method to avoid tuning. Subsequent damage-recovery tests demonstrate that behaviour-performance maps evolved by QD meta-evolution with a non-linear feature-map can reach diverse target positions, regardless of the severity of the damage sustained by the robot arm. \section{Discussion} \section{Experiment setup} \label{sec: experimental-setup} \subsection{Simulation environment} We use DART (Dynamic Animation and Robotics Toolkit; \cite{Lee2018}) to simulate an 8-joint robot arm. The robot arm has eight segments, each of length $\SI{0.0775}{m}$, for a total length of $L=\SI{0.62}{m}$. Each of its eight joints can rotate in $[-\pi/2,\pi/2]\SI{}{rad}$ to position its end-point, a gripper, at any position within the semi-circle spanned by the orientation $[-3\pi/2,2\pi]\SI{}{rad}$. The robot arm is controlled by an 8-dimensional vector denoting the desired angle for each joint; this movement is assumed to always succeed. During evolution, the simulation environment evaluates the fitness of bottom-level solutions (i.e., the genotypes) as well as the meta-fitness (i.e., the meta-genotypes). A genotype, $\mathbf{g}$, is an 8-dimensional vector of commands, representing the desired angles for each of the joints.
The fitness of a genotype, $f(\mathbf{g})$, representing the performance of a controller of the robot arm, is the negative variance of the angles made by the joints of the robot arm, \begin{equation} \label{eq: fitness} f(\mathbf{g}) = - \frac{1}{8} \sum_{i=1}^{8} (g_i - \bar{g} )^2 \,, \end{equation} where $\bar{g} = \frac{1}{8} \sum_{i=1}^{8} g_i$. The fitness discourages zigzag motions, which saves energy, and promotes distributing movement equally among the joints, which is more robust to damages and other perturbations.\footnote{For low-variance motions, the angles will be closer to zero and therefore the maximal absolute change in a joint's angle is reduced.} When the robot arm collides with itself or with the wall, the corresponding genotype is considered unsafe and is not added to MAP-Elites' behaviour-performance map or the database. Based on a pre-defined damage-set $\mathcal{D}$ that injects damages of different types into the robot arm, the meta-fitness estimates the overall adaptation performance of a meta-genotype $\mathbf{w}$ as follows. First, a batch of genotypes, $\mathcal{B} = \{\mathbf{g}^1,\dots,\mathbf{g}^{N_g}\}$, comprising 10\% of the bottom-level individuals, is sampled without replacement from the behaviour-performance map. Second, each solution $\mathbf{g} \in \mathcal{B}$ is evaluated on all $d \in \mathcal{D}$, and the final position of the end-point of the robot arm, denoted by $P(\mathbf{g}; d)$, is recorded in each case. Then, the meta-fitness is evaluated as the summed pairwise Euclidean distance of the robots' final positions, averaged across damages in~$\mathcal{D}$: \begin{equation} \label{eq: meta-fitness} \mathcal{F}(\mathbf{w}) = \frac{1}{|\mathcal{D}|} \sum_{d \in \mathcal{D}} \sum_{i=1}^{N_g - 1} \sum_{j=i+1}^{N_g} || P(\mathbf{g}^i; d) - P(\mathbf{g}^j; d) || \,. \end{equation} The damage set in meta-evolution includes 16 damages, forcing an individual joint $i \in \{1,\dots,8\}$ to be stuck at one of two angles, $\theta_{i,1} \sim U(-\pi/2,0)\SI{}{rad}$ and $\theta_{i,2} \sim U(0,\pi/2)\SI{}{rad}$. The damages in $\mathcal{D}$ therefore cover the legal range of the joint $[-\pi/2,\pi/2]\SI{}{rad}$ across evolution. Partitioning the range into two aims to (i) include both a positive and a negative angle, thereby not biasing the behaviour-performance map to damage of any particular direction; and (ii) reduce the variance of meta-fitness evaluations over time to ensure the meta-fitness is not too deceptive. If a solution collides with itself or the wall, then the corresponding genotype does not contribute to the sum. The highest meta-fitness represents a behaviour-performance map for which all behavioural bins are filled with valid solutions and for which the solutions are uniformly distributed across the entire semi-circle span of the robot arm regardless of any damage. \subsection{Experimental conditions} \label{sec: experimental-conditions} We include four baselines, which apply traditional MAP-Elites with a different behaviour space. In contrast to the genotypic space, these are low-dimensional and most of them avoid unsafe solutions that would be discarded during evolution. \textbf{Position} is a 2-dimensional space denoting the Cartesian coordinate $(x,y)$, normalised from $[-L,L] \times [-L,0]$ to $[0,1]^2$ based on the robot's length $L$, according to $x \gets \frac{x + L}{2L}$ and $y \gets \frac{-y}{L}$.
\textbf{Polar} is a 2-dimensional space denoting the polar coordinate $(r,\theta)$, normalised to $[0,1]^2$ based on the length $L$ and the semi-circle range $[\pi,2\pi]\SI{}{rad}$, according to $r \gets r / L$ and $\theta \gets \frac{\theta - \pi}{\pi}$. \textbf{Joint-pair-angle} is the angle spanned by connecting joint $i-2$ to joint $i$, for all $i \in \{2,4,6,8\}$. Although two consecutive joints can at most both turn $\frac{\pi}{2}\SI{}{rad}$, the Joint-pair-angle ignores the orientation of the previous joint-pair and therefore the angle ranges in $[0,2\pi]\SI{}{rad}$ and is normalised to $[0,1]$ based on $\theta \gets \frac{\theta}{2\pi}$. \textbf{AngleSum} is a 6-dimensional space that computes the average value for each triplet of bottom-level genes, $\langle 1, 2, 3 \rangle, \langle 2, 3, 4 \rangle, \dots, \langle 6, 7, 8 \rangle$. The AngleSum can thus be considered as a lower-dimensional formulation of the genotype. \\ We then include a variety of conditions based on QD meta-evolution, which are identified by the \textbf{Meta} prefix and two further suffixes. One suffix denotes the type of feature-map, which can be \textbf{Linear}, \textbf{NonLinear}, or \textbf{Selection} (see Section \ref{subsec: featuremaps}). A second suffix describes the type of parameter control. As a default, if the suffix is not included, then the mutation rate is $0.125$ and there are 5 generations per meta-generation.\footnote{The number of MAP-Elites iterations is divided into distinct generations, each of which has a fixed batch-size (see Section \ref{sec: experimental-parameters}).} Four static parameter conditions are included per parameter, where the number of generations per meta-generation is set in $\{5,10,25,50\}$ and the mutation rate is set in $\{0.125,0.25,0.50,1.0\}$, which will be given a distinct suffix (e.g., \textbf{Mutation rate 0.50}, or \textbf{50 generations}). Three dynamic parameter control settings are included, in which the indicated parameter is allowed to vary within its range, $[1,50]$ for the number of generations and $[0.001,1]$ for the mutation rate. \textbf{Annealing} linearly decreases the parameter $P$ as a function of the number of function evaluations $E$ according to $P(E) = m_{P} + (M_P - m_P)\frac{M_{E} - E}{M_E}$, where $m_P$ and $M_P$ are the minimum and maximum of $P$, respectively, and $M_E$ is the maximal number of function evaluations. \textbf{Endogenous} parameter control adds one gene to the meta-genotype to encode either the number of generations or the mutation rate. Reinforcement learning (\textbf{RL}) follows \cite{Karafotias2014}. It uses the SARSA algorithm \citep{Rummery1994,Sutton2018b} to select the best action (an interval of the parameter range) for a given state (data about the progress of evolution). The state of the RL agent is formed from a tree-based discretisation of the continuous observation space, such that states represent different partitions with distinct Q-values \citep{Uther1998}. Our RL setup includes 5 actions, corresponding to 5 equally-spaced bins of the parameter range. Observations track the progress of meta-evolution, including the maximum, mean, and standard-deviation of meta-fitness, the meta-genotypic diversity, the number of consecutive meta-generations for which the maximal meta-fitness has not improved, and the reward. The reward is the ratio of the improvement in maximal meta-fitness to the number of added function evaluations.
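For concreteness, the annealing schedule defined above can be written as a one-line function. The following minimal Python sketch (our illustration only; argument names are ours) maps the number of consumed function evaluations $E$ to the parameter value and shows the endpoints for the mutation-rate range $[0.001,1]$:
\begin{verbatim}
def annealed(E, m_P, M_P, M_E):
    # P(E) = m_P + (M_P - m_P) * (M_E - E) / M_E: linear decrease from
    # the maximum M_P at E = 0 to the minimum m_P at E = M_E.
    return m_P + (M_P - m_P) * (M_E - E) / M_E

# Mutation rate annealed over the budget of 1e8 function evaluations:
print(annealed(0, 0.001, 1.0, 1e8))    # 1.0 at the start of the run
print(annealed(1e8, 0.001, 1.0, 1e8))  # 0.001 at the end of the run
\end{verbatim}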
Finally, analogous to the Meta conditions, we also include Random conditions, which perform traditional MAP-Elites but have a randomly initialised feature-map that is not evolved over time. The Random conditions similarly include a suffix for the type of feature-map (Linear, Non-Linear, or Selection).\\ \subsection{Experimental parameters} \label{sec: experimental-parameters} We now summarise the parameter settings defining the experiments (see also Table~S1 in Supplemental Materials). Each solution is identified by an 8-dimensional genotype, the angles for each of the joints of the robot arm. The mutation operator is a random increment or decrement of a gene by a step of $0.025$, applied with a mutation rate of $0.125$ such that each child solution differs on average in one gene from its parent. The maximal map coverage allowed for each algorithm is 4,096 solutions. In the grid-based geometry of MAP-Elites, assuming an equal number of bins per dimension, this allows 64 bins for 2D spaces (Position and Polar conditions), 8 bins for 4D spaces (Meta conditions), and 4 bins for 6D spaces (AngleSum condition). All conditions have 400 bottom-level individuals per generation and an initial population of 2,000 randomly generated individuals. The large budget of 100,000,000 function evaluations allows evolution to run its full course. For Meta conditions, the meta-population size is set to $\lambda=5$ based on the trade-off between global search and convergence. Due to the experimental conditions (see Section \ref{sec: experimental-conditions}) there are $N_b=14$ base-features and $D=4$ target-features, so the meta-genotype comprises a total of 56 genes, except for the non-linear feature-maps, which require 182 ($N_b \times N_h + N_h \times D + 2$) genes based on $N_h=10$ hidden units. For linear feature-maps, a normalisation range of $[0.20,0.80]$ is set based on the empirical range of the target-features. For non-linear feature-maps, the scaling factor in Eq. \ref{eq: nonlinfm} is set to $\alpha_s = 30$, which yields a theoretical range of $[-30,30]$ and an empirical range of $[-5,5]$. This setting allows covering the entire output range $[0,1]^D$ with sufficient precision and it is the smallest setting of $\alpha_s$ for which extreme feature values (close to 0 or 1) are as frequent as the middle feature values. Endogenous parameter control adds one gene to the meta-genotype. Initialisation of CMA-ES starts with the middle of the parameter range as the mean and a third of the range as the standard-deviation. The meta-genotype parameters' ranges are $[-1,1]$ for non-linear feature-maps and $[0,1]$ otherwise. Further parameters of CMA-ES use the default settings. The database has a capacity of just below 5 million, allowing 3 bins for each of the 14 features with a bin width of $\delta=1/3$, and initially, the number of solutions per coarse bin is set to $k=5000$. \section{Introduction} Historically, most evolutionary algorithms (EAs) were designed to optimise a fitness function, solving a single problem without consideration of generalisation to unseen problems or robustness to perturbations of the evaluation environment. However, it was widely known that successfully converging to the maximum of that fitness function requires maintaining genetic diversity in the population of solutions (see e.g., \cite{Laumanns2002,Gupta2012,Ursem2002,Ginley2011}). Moreover, the use of niching demonstrated how maintaining subpopulations could help find multiple solutions to a single problem \citep{Mahfoud1995}.
Some studies included genetic diversity as one of the objectives of the EA \citep{Toffolo2003}. Researchers in evolutionary robotics, artificial life, and neuro-evolution realised that genetic diversity does not necessarily imply a diversity of solutions, since (i) different genotypes may encode the same behaviour and vice versa (especially for complex genotypes such as neural networks); and (ii) many genotypes may encode unsafe or undesirable solutions that should be discarded during evolution (e.g., self-collisions on a multi-joint robot arm). Such approaches began to emphasise \textit{behavioural diversity} \citep{Mouret2009a,Gomez2009,Mouret2009,Mouret2012a}, not only as a driver for objective-based evolution but also as the enabler for diversity- or novelty-driven evolution \citep{Lehman2011}. In \textit{quality-diversity algorithms} such as MAP-Elites \citep{Mouret2015} and Novelty Search with Local Competition \citep{LehmanStanley2011}, the behavioural diversity approach is combined with local competition such that for each local region in the behaviour space the best solution is stored, forming a large archive of solutions -- a \textit{behaviour-performance map} in the case of MAP-Elites. The development of quality-diversity algorithms has enabled a plethora of useful applications. In robotics, this includes the design of robot morphologies and controllers \citep{Mouret2015,NordmoenEllefsen2018} and behaviour adaptation \citep{Cully2015b}, in which a robot recovers from environmental changes or damages to its sensory-motor system by searching for high-performing controllers across the evolved archive of solutions. Two important design choices of a quality-diversity algorithm are its behaviour space (the behavioural features that define the behavioural diversity across the solutions) and its various hyperparameters, such as the mutation rate, the population size, or the crossover or mutation operator. Traditionally, the behavioural features are chosen by the user to fit the purpose. However, complex features that are non-intuitive to the user may optimise the intended purpose better. Similarly, the best evolutionary parameters to achieve the highest quality-diversity metrics are typically unknown. Therefore, an automated approach to the behaviour space and evolutionary parameters may be required. This paper explores an approach to \textit{quality-diversity meta-evolution}, in which the behaviour space and evolutionary parameters are automatically determined to optimise a meta-objective. We explore (i) how to evolve a low-dimensional behaviour space by means of a feature-map, a function taking as input the ``meta-genotype'' and a larger number of behavioural base-features, and outputting the target-features, and (ii) control strategies to automatically determine the mutation rate and the number of generations (per meta-generation) of the quality-diversity algorithm represented by the meta-genotype. As an application, we consider an 8-joint planar robotic arm where the meta-objective is adaptation to damages. The meta-objective is to be achieved by a combination of diversity, defined as a wide variety of robot poses, and quality, defined as robust and efficient movements characterised by low variance. \section{Quality-diversity meta-evolution with feature-maps and parameter control} MAP-Elites (ME) evolves a behaviour-performance map, storing the highest-fitness controllers for each hypercube in a discretised behaviour space \citep{Mouret2015}.
Since ME is not explicitly optimised for generalisation, we use Meta-evolution with CMA-ES \citep{Bossens2020a} to evolve a population of MEs with generalisation as a meta-objective. In Meta-evolution with CMA-ES, low-dimensional behaviour spaces were automatically generated from a weighted sum of a higher-dimensional base-behavioural space. Because all solutions generated by prior MAP-Elites algorithms are stored in a database, Meta-evolution with CMA-ES can populate behaviour-performance maps efficiently on-the-fly, without the need to evolve solutions from scratch. To construct an improved implementation, we propose three modifications to Meta-evolution with CMA-ES: \begin{itemize}[noitemsep,nolistsep] \item a novel type of database that prevents the loss of behaviourally diverse and high-performing solutions (see Section~\ref{subsec: database}); \item a more generic feature-map that allows non-linear transformations of the base-features (see Section~\ref{subsec: featuremaps}); \item the use of parameter control, or dynamic optimisation, of evolutionary hyperparameters, namely the number of generations per meta-generation and the mutation rate (see Section~\ref{subsec: meta-optimisation}). \end{itemize} \subsection{MAP-Elites algorithm} The MAP-Elites algorithm discretises the behaviour space into behavioural bins, which are equally-sized hypercubes, and then maintains for each behavioural bin the elite solution (i.e., the solution with the highest fitness), leading to quality-diversity. MAP-Elites first randomly generates an initial population of genotypes. Then, each genotype in the initial population is evaluated, resulting in a fitness score $f$ and a behavioural descriptor $\mathbf{\beta}$. Each genotype is then added to the behaviour-performance map $\mathcal{M}$ based on the following replacement rule: if the behavioural bin for $\mathbf{\beta}$ is empty (i.e., $\mathcal{M}[\mathbf{\beta}]=\emptyset$) or if the fitness is higher than the current genotype in that bin (i.e., $ f > f(\mathcal{M}[\mathbf{\beta}])$), then place the genotype $\mathbf{g}$ in that bin of the behaviour-performance map (i.e., $\mathcal{M}[\mathbf{\beta}] \gets \mathbf{g}$). After initialisation, the algorithm applies repeated cycles of random selection, genetic variation, evaluation, and replacement. Random selection is implemented by randomly selecting genotypes from non-empty behavioural bins in the behaviour-performance maps. Genetic variation is based on mutations to the genotypes. Evaluation of genotypes is based on a user-defined fitness function $f(\cdot)$. Replacement is based on the above-mentioned replacement rule. After many repetitions of this cycle, the behaviour-performance map is gradually filled with behaviourally diverse and high-quality solutions. \subsection{Meta-evolution with CMA-ES} \label{subsec: meta-optimisation} The behaviour space in MAP-Elites is not necessarily optimised for generalisation. To automatically improve the behaviour space towards a generalisation-based meta-objective, we use the Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES; \cite{Hansen2007,Hansen2016}), following in this regard Meta-evolution with CMA-ES \citep{Bossens2020a}. The algorithm first applies an initialisation phase to populate the behaviour-performance maps at the first meta-generation. A large number of random genotypes are sampled, evaluated, and then added to the database $\mathcal{D}$ (see Section~\ref{subsec: database} for details).
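Both the database initialisation and the bottom-level search inside each meta-generation rely on the MAP-Elites cycle summarised above. The following self-contained Python sketch is our illustration only, not the experimental code: \texttt{evaluate} is a toy stand-in for the DART simulator, while the mutation settings follow Section~\ref{sec: experimental-parameters}. It shows the cycle, including the replacement rule:
\begin{verbatim}
import numpy as np

N_BINS = 8                      # bins per behaviour dimension
rng = np.random.default_rng(0)

def evaluate(g):
    # Toy stand-in for the simulator: behavioural descriptor in [0,1]^2
    # and fitness as the negative variance of the angles.
    return g[:2], -np.var(g)

def add_to_map(M, beta, g, f):
    # Replacement rule: fill an empty bin, or replace a lower-fitness elite.
    key = tuple((np.asarray(beta) * N_BINS).astype(int).clip(0, N_BINS - 1))
    if key not in M or f > M[key][1]:
        M[key] = (g, f)

M = {}                          # behaviour-performance map: bin -> (g, fitness)
for _ in range(2000):           # random initialisation
    g = rng.random(8)
    beta, f = evaluate(g)
    add_to_map(M, beta, g, f)

for _ in range(100000):         # selection, variation, evaluation, replacement
    parent, _ = M[list(M.keys())[rng.integers(len(M))]]
    mask = rng.random(8) < 0.125                  # mutation rate
    step = rng.choice([-0.025, 0.025], size=8)    # mutation step
    child = np.clip(parent + mask * step, 0.0, 1.0)
    beta, f = evaluate(child)
    add_to_map(M, beta, child, f)
\end{verbatim}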
After initialisation, the meta-evolution algorithm performs a large number of \textit{meta-generations}, consisting of four main steps. In the first step of the meta-generation, the algorithm constructs new maps $\mathcal{M}_i$, for each $i \in \{1,\dots,\lambda\}$ in the meta-population (see l.~8-14 in Algorithm~\ref{alg: meta-CMAES}), based on the meta-genotype and the existing solutions in the database. After initialising an empty map, new meta-genotypes $\mathbf{w} \in \mathbb{R}^n$ are sampled based on a multivariate normal distribution, \begin{equation} \label{eq: multivariate normal} \mathbf{w} \sim \mathcal{N} \! \left( \mathbf{m}, \sigma \mathbf{C} \right) \,, \end{equation} where $\mathbf{m} \in \mathbb{R}^n$ is the mean meta-genotype, $\mathbf{C}$ is the covariance matrix, and $\sigma > 0$ is a scalar representing the step-size. Each meta-genotype $\mathbf{w} \in \mathbb{R}^n$ is then transformed from a vector into a more useful format, denoted by $\mathbf{W}$ -- for example, for the linear feature-map in Eq.~\ref{eq: linfm} $\mathbf{W}$ is a matrix, whereas for the non-linear feature-map in Eq.~\ref{eq: nonlinfm} $\mathbf{W}$ consists of two weight matrices and two biases. Each entry in the database $\langle \mathbf{g}, \mathbf{b}, f \rangle \in \mathcal{D}$ is then processed to obtain the resulting behavioural descriptor $\mathbf{\beta}$ via the feature-map in Eq.~\ref{eq: featuremap}, and if MAP-Elites' replacement rule is satisfied then the entry is added to the map according to $\mathcal{M}[\mathbf{\beta}] \gets \mathbf{g}$. \sloppy In the second step of the meta-generation, after all behaviour-performance maps are filled with database entries,\footnote{Empirical tests show that populating one behaviour-performance map from the large database of around 5 million solutions consumes on average \SI{6}{s}.} the meta-individuals $i \in \{1,\dots,\lambda \}$ independently apply MAP-Elites, evolving their own behaviour-performance map $\mathcal{M}_i$ for $I$ iterations. During these iterations of MAP-Elites (see l.~30-38 in Algorithm~\ref{alg: meta-CMAES}), new bottom-level genotypes are formed by mutating existing bottom-level genotypes in the map. Each created solution is given a behavioural description according to the feature-map $\mathbf{\beta} \gets \phi(\mathbf{W}, \mathbf{b})$ (see Section~\ref{subsec: featuremaps}), put in the behaviour-performance map if the replacement rule is satisfied, and added to the database (see l.~23-29 in Algorithm~\ref{alg: meta-CMAES}). In the third step, each meta-individual is evaluated on the meta-fitness $\mathcal{F}(\mathbf{w})$, which represents a map-level objective such as the adaptation performance of the behaviour-performance map in various unseen contexts (see Section \ref{sec: experimental-setup} for its implementation). In the final step of the meta-generation, CMA-ES updates the mean, covariance and step size parameters (see l.~19-21 in Algorithm~\ref{alg: meta-CMAES}), applying the $(\mu/\mu_W, \lambda)$-CMA Evolution Strategy \citep{Hansen2016} to optimise the meta-fitness. One selects the $\mu \leq \lambda$ individuals with the highest meta-fitness for reproduction.
Reproduction involves mutating the mean towards the best of the selected meta-individuals, \begin{equation} \label{eq: update-mean} \mathbf{m} \gets \mathbf{m} + c_m \sigma \sum_{i=1}^{\mu} v_i (\mathbf{w}^{i} - \mathbf{m}) \,, \end{equation} where $v_i > 0$, $\sum_{i=1}^{\mu} v_i = 1$; $\mathbf{w}^{i}$ is the $i$-th best meta-genotype; $\sigma$ is the step size; and $c_m \in [0,1]$ is a learning rate. The covariance matrix is adapted based on a combination of the active rank-$\mu$ update \citep{Jastrebski2006}, which exploits information from the entire population by assigning positive weights to highest-ranking individuals and negative weights to lowest-ranking individuals, and the rank-one update \citep{Hansen1996}, which exploits the correlations between generations based on the evolution path, \begin{equation} \label{eq: update-covariance} \mathbf{C} \gets \left(1 - c_1 - c_{\mu} \sum_j v_j\right) \mathbf{C} + c_{\mu} \sum_{i=1}^{\lambda} v_i \mathbf{s}_i \mathbf{s}_i^{\intercal} + c_1 \mathbf{p}_c \mathbf{p}_c^{\intercal} \,, \end{equation} where $c_{\mu}$ and $c_1$ are positive weights reflecting the importance of the rank-$\mu$ and rank-one term, respectively; $\mathbf{s}_i \sim \mathcal{N}(\mathbf{0},\mathbf{C})$ is the difference of the sampled meta-genotype from the old mean, divided by the step size $\sigma$; $v_i$ is a positive scalar in case $i \leq \mu$ and a negative scalar otherwise; and $\mathbf{p}_c \in \mathbb{R}^n$ is the evolution path, a weighted sum of the past mutation steps. Finally, the step-size is controlled with cumulative step-size control, \begin{equation} \label{eq: update-sigma} \sigma \gets \sigma \exp \left( \frac{c_{\sigma}}{d_{\sigma}} \left( \frac{|| \mathbf{p}_{\sigma} ||}{\mathbb{E}\left[|| \mathcal{N} \! \left( \mathbf{0}, \mathbf{I} \right) || \right]} - 1 \right) \right) \,, \end{equation} where $c_{\sigma}$ and $d_{\sigma}$ are parameters that affect the damping and $\mathbf{p}_{\sigma} \in \mathbb{R}^n$ is the conjugate evolution path -- a weighted sum of the past mutation steps which, unlike $\mathbf{p}_c$, is formulated such that its expected Euclidean norm is independent of its direction. When the conjugate evolution path is longer than expected, successive steps were positively correlated, and the step size will be increased to reduce the number of steps to reach a promising region in search space. When the conjugate evolution path is shorter than expected, successive steps were negatively correlated, and the step size will be decreased to avoid successive steps cancelling each other out. \begin{algorithm} \caption{Meta-evolution with CMA-ES.} \label{alg: meta-CMAES} \begin{algorithmic}[1] \State $\mathcal{D} \gets \emptyset$. \Comment{Create empty database.} \For{$i=1$ to $p$} \Comment{Create initial database.} \State $\mathbf{g} \gets \texttt{random-genotype()}$. \State $\mathbf{b}, f \gets \texttt{eval}(\mathbf{g})$. \Comment{Base-features and fitness.} \State Insert $\langle \mathbf{g}, \mathbf{b}, f \rangle$ into $\mathcal{D}$. \Comment{Fill the database (see Section~\ref{subsec: database}).} \EndFor \For{$j=1$ to $G$ } \Comment{Loop over meta-generations.} \For {$i=1$ to $\lambda$} \State Set $\mathcal{M}^i \gets \emptyset$. \Comment{Empty the map.} \State $\mathbf{w} \sim \mathcal{N} \! \left( \mathbf{m}, \sigma \mathbf{C} \right)$.
\Comment{Sample meta-genotype.} \For { $\langle \mathbf{g},\mathbf{b},f \rangle \in \mathcal{D}$ } \Comment{Construct map from database.} \State \texttt{add-to-map}($\mathcal{M}^i$, $\mathbf{w}$, $\mathbf{g}$, $\mathbf{b}$, $f$). \EndFor \EndFor \For {$i=1$ to $\lambda$} \State Perform \texttt{MAP-Elites-iterations}($\mathcal{M}^i$,$\mathbf{w}^i$). \State $\mathcal{F}_i \gets$ \texttt{Meta-fitness}($\mathcal{M}^i$). \EndFor \State $\mathbf{m} \gets $ \texttt{Update-mean}(). \Comment{ See Eq.~\ref{eq: update-mean}.} \State $\mathbf{C} \gets $ \texttt{Update-covariance}(). \Comment{ See Eq.~\ref{eq: update-covariance}.} \State $\sigma \gets $ \texttt{Update-step}().\Comment{ See Eq.~\ref{eq: update-sigma}.} \EndFor \Procedure{add-to-map}{$\mathcal{M}$, $\mathbf{w}$, $\mathbf{g}$, $\mathbf{b}$, $f$} \State $\mathbf{W} \gets \texttt{transform}(\mathbf{w})$. \Comment{Convert to useful format (see Section~\ref{subsec: featuremaps}).} \State $\mathbf{\beta} \gets \phi(\mathbf{W},\mathbf{b})$. \Comment{Apply feature-map to get target features (see Eq.~\ref{eq: featuremap}-\ref{eq: nonlinfm}).} \If {$\mathcal{M}[\mathbf{\beta}] = \emptyset$ \textbf{ or } $f > f(\mathcal{M}[\mathbf{\beta}])$} \State $\mathcal{M}[\beta] \gets \mathbf{g}$. \Comment{Add genotype $\mathbf{g}$ to the map $\mathcal{M}$.} \EndIf \EndProcedure \Procedure{MAP-Elites-iterations}{$\mathcal{M}$, $\mathbf{w}$} \For {$i=1$ to $I$} \Comment{$I$ is the number of iterations} \State $\mathbf{g} \sim \mathcal{M}$. \Comment{Sample genotype randomly from map.} \State $\mathbf{g}' \gets \texttt{mutate}(\mathbf{g})$. \Comment{Mutation.} \State $\mathbf{b}, f \gets \texttt{eval}(\mathbf{g}')$. \Comment{Base-features and fitness.} \State \texttt{add-to-map}($\mathcal{M}$, $\mathbf{w}$, $\mathbf{g}'$, $\mathbf{b}$, $f$). \State Insert $\langle \mathbf{g}', \mathbf{b}, f \rangle$ into $\mathcal{D}$. \Comment{Fill the database (see Section~\ref{subsec: database}).} \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsection{$k$-best database} \label{subsec: database} To enable rapidly generating new behaviour-performance maps, the database $\mathscr{D}$ stores a large number of previously found solutions. Each such solution is a tuple $\langle \mathbf{g}, \mathbf{b}, f \rangle$, where $\mathbf{g}$ is the bottom-level genotype (e.g., the parameters of a controller), $\mathbf{b}$ is an extended behavioural description of the solution according to a large number of $N_b$ user-defined behavioural base-features, and $f$ is the fitness (e.g., the performance of a controller). To retain behavioural diversity, the database $\mathscr{D}$ divides the base-behavioural space $[0,1]^{N_b}$ into coarse-grained bins of equal width $\delta$, in which it stores up to $k$ solutions. The hypercube partitioning is geometrically similar to the behaviour-performance maps in MAP-Elites except that (a) only a small number of bins per dimension, such as 2 or 3 (corresponding to a width of $\delta=1/2$ or $\delta=1/3$), are allowed to limit the maximal capacity of the database to $2^{N_b}$ or $3^{N_b}$ given the many base-features; and (b) for each hypercube, an array containing at most $k$ solutions is stored within a single bin. When a new solution $\langle \mathbf{g}, \mathbf{b}, f \rangle$ is presented to the database, its corresponding coarse-grained bin is looked up, yielding the array of solutions $\mathscr{C} = \mathscr{D}[\mathbf{b}]$.
Then an additional check for fitness and diversity is performed: if there is another solution in $\mathscr{C}$ that is in the same base-behavioural hypercube of width $\delta / k$, then the solution is only added if it has higher fitness; if there is no such similar solution, then the solution is always added.\footnote{Since there are $k^{N_b}$ possible hypercubes of width $\delta / k$ within each coarse-grained bin of width $\delta$ and up to $k$ solutions are allowed per coarse-grained bin, the check for diversity and fitness is not too restrictive.} If $\mathscr{C}$ now has $k+1$ solutions, then its lowest-fitness solution is removed. If the database's capacity is exceeded, the number of allowed solutions per bin is decremented, $k \gets k -1$, and for each coarse-grained bin the lowest-fitness solution is removed. While $k$ is initially large, e.g., $k=1000$, $k$ shrinks progressively as the run continues. \subsection{Feature-maps} \label{subsec: featuremaps} To transform the base-behavioural features $[0,1]^{N_b}$ to a low-dimensional behavioural descriptor $\mathbf{\beta} \in [0,1]^D$, we use a feature-map $\phi$, \begin{equation} \label{eq: featuremap} \mathbf{\beta} = \phi(\mathbf{W}, \mathbf{b}) \,, \end{equation} which is parametrised by the meta-genotype $\mathbf{W}$. We explore three instantiations of the feature-map. A first instance is the linear transformation as replicated from \cite{Bossens2020a}, \begin{equation} \label{eq: linfm} \mathbf{\beta} = \mathbf{W} \mathbf{b} \,, \end{equation} where $\mathbf{W} \in \mathbb{R}^{D \times N_b}$. To allow improved coverage of the behaviour space, we add an expanding normalisation $\beta_i \gets (\beta_i - m)/(M - m)$ for all $i \in \{1,\dots,D\}$, where $m$ and $M$ are the minimum and maximum target-feature values, respectively, estimated from empirical data (see Section \ref{sec: experimental-parameters}). A second included feature-map, chosen to demonstrate whether or not it is necessary to combine features, is a feature selector according to \begin{equation} \label{eq: selectionfm} \begin{split} j^{i} & = \argmax_{j \in \{1,\dots,N_b\}} W_{ij} \\ \mathbf{\beta} & = (b_{j^{1}},\dots, b_{j^{D}})\,, \end{split} \end{equation} such that for each feature of the resulting descriptor, $i=1,\dots,D$, the base-feature $b_j$ with the highest $\mathbf{W}_{ij}$ is chosen. A third included feature-map is a non-linear transformation using a neural network with a single hidden layer with sigmoid activations. It is chosen to demonstrate the need for non-trivial feature-maps, and because neural networks of this type can represent arbitrary mappings due to their representational capacity. The neural network is given by \begin{equation} \label{eq: nonlinfm} \begin{split} f(\mathbf{W}, \mathbf{b}) & = \mathbf{W}^{2} S_{N_b}(\mathbf{W}^{1} \mathbf{b} + B^1) + B^2 \\ \mathbf{\beta} & = S_{N_h}(f(\mathbf{W}, \mathbf{b})) \,, \end{split} \end{equation} where $S_N(\mathbf{x}) = 1 / \left( 1 + \exp\left(-\alpha_{s} \mathbf{x}/(N+1)\right)\right)$ is an elementwise sigmoid function; $\alpha_s$ is an empirically defined scaling factor; and now $\mathbf{W}$ is composed of a weight matrix from input to hidden layer, $\mathbf{W}^{1} \in \mathbb{R}^{N_h \times N_b}$, a weight matrix from hidden layer to output, $\mathbf{W}^{2} \in \mathbb{R}^{D \times N_h}$, and the corresponding bias units $B^1, B^2 \in \mathbb{R}$.
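The three feature-maps can also be stated compactly in code. The Python sketch below is our illustration only; the array shapes follow the notation above, the scalar biases follow Eq.~\ref{eq: nonlinfm}, and clipping the normalised linear output to $[0,1]$ is our own assumption:
\begin{verbatim}
import numpy as np

def linear_fm(W, b, m=0.20, M=0.80):
    # Linear map: beta = W b, followed by the expanding normalisation
    # (beta - m) / (M - m); clipping to [0,1] is our assumption.
    return np.clip((W @ b - m) / (M - m), 0.0, 1.0)   # W: (D, N_b)

def selection_fm(W, b):
    # Selection map: per target-feature i, pick the base-feature with
    # the largest weight W_{ij}.
    return b[np.argmax(W, axis=1)]                    # W: (D, N_b)

def scaled_sigmoid(x, N, alpha_s=30.0):
    # S_N(x) = 1 / (1 + exp(-alpha_s * x / (N + 1))), elementwise.
    return 1.0 / (1.0 + np.exp(-alpha_s * x / (N + 1.0)))

def nonlinear_fm(W1, W2, B1, B2, b):
    # Non-linear map: one hidden layer with scaled sigmoid activations;
    # W1: (N_h, N_b), W2: (D, N_h), B1 and B2 scalar biases as in the text.
    hidden = scaled_sigmoid(W1 @ b + B1, N=b.size)          # S_{N_b}
    return scaled_sigmoid(W2 @ hidden + B2, N=W1.shape[0])  # S_{N_h}
\end{verbatim}
With $N_b=14$, $D=4$, and $N_h=10$, these shapes account for the 56 and 182 meta-genotype genes reported in Section~\ref{sec: experimental-parameters}.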
For networks such as $f(\mathbf{W}, \mathbf{b})$, but also those replacing the activation function with the wider class of non-polynomial piecewise continuous functions, universal approximation theorems (see e.g., \cite{Leshno1993,Hornik1989a}) show that in principle all multi-variate functions over closed and bounded intervals can be represented to arbitrary precision, assuming a sufficient number of neurons. Adding a sigmoid transformation as we have implemented is therefore not strictly necessary for representational capacity.\footnote{However, the representational capacity is preserved. Let $\{(\mathbf{b}_1,\mathbf{\beta}_1),\dots,(\mathbf{b}_n,\mathbf{\beta}_n)\}$ be a finite set of input-output pairs. Since the sigmoid function is monotonically increasing, for every $\mathbf{\beta}_i \in [0,1]^D$, $i \in \{1,\dots,n\}$, there exists a unique $\mathbf{\beta}_i' \in \mathbb{R}^D$ such that $\mathbf{\beta}_i' = S^{-1}(\mathbf{\beta}_i)$ and $\mathbf{\beta}_i' \neq S^{-1}(\mathbf{\beta})$ for all $\mathbf{\beta} \neq \mathbf{\beta}_i$. Due to not having an activation function at the output units, the universal approximation theorems apply and the function that maps pairs $\{(\mathbf{b}_1,\mathbf{\beta}_1'),\dots,(\mathbf{b}_n, \mathbf{\beta}_n')\}$ can therefore be represented with arbitrary precision. In practice, restricting $\mathbf{\beta}_i' \in [-5,5]^D$ for all $i \in \{1,\dots,n\}$ is sufficient to approximate any $\mathbf{\beta}_i \in [0,1]^D$ accurately.} We require that for a sizeable proportion of mappings, varying the base-behavioural features in $[0,1]^{N_b}$ ensures (i) diversity, such that the entire output range in $[0,1]^D$ can be reached with high statistical probability, and (ii) quality, such that each behavioural bin has a large enough number of solutions for local competition. If the frequency of visits is extremely low for some behavioural bins and extremely high for others, then there cannot be both quality and diversity. The output sigmoid activation in Eq.~\ref{eq: nonlinfm} accounts for the high frequencies of near-zero values of $f(\mathbf{W}, \mathbf{b})$ as it increases steeply for values close to zero and slowly for extreme values. This ensures that for a sizeable proportion of mappings, each bin in $[0,1]^D$ is frequently represented, leading to quality-diversity. \subsection{Dynamic parameter control} While Meta-evolution with CMA-ES optimises the behavioural features of MAP-Elites, there are a few parameter settings which are difficult to tune and possibly require dynamic changes throughout the evolutionary run. Therefore, an additional change is the dynamic control of parameter settings for the MAP-Elites meta-individuals. In particular, we aim to optimise MAP-Elites' mutation rate and the number of iterations using dynamic parameter control strategies. The following algorithms are included (more details in Section \ref{sec: experimental-setup}): \begin{itemize}[noitemsep,nolistsep] \item Endogenous: an additional gene in the meta-genotype encodes the parameter. \item Annealing: the parameter is annealed linearly from its maximal to its minimal value depending on the number of function evaluations. \item Reinforcement learning: a reinforcement learning agent optimises the parameter based on the progress of the algorithm. \item Static: we try out a limited number of static parameter settings (i.e., traditional tuning).
\end{itemize} \section{Related work} Behaviour adaptation to unforeseen damages is one of the main applications of our work. Prior work has shown that performing a search across the behaviour-performance maps evolved by MAP-Elites can give high-performing recovery solutions within a limited number of function evaluations \citep{Cully2015b}. We are interested in how automating the behaviour space and the evolutionary hyperparameters affects the adaptation. In automating the behaviour space, there are three competing classes of methods. The first class formulates the behaviour space by means of unsupervised learning, forming a generative model of behaviour \citep{Nguyen2016,Cully2019,Cully2018a}, for example, based on an auto-encoder neural network that represents a compressed and low-dimensional model of sensory data \citep{Cully2019,Cully2018a}. Such approaches can reduce dimensionality and form a robust denoised behaviour space. The second class models the genotypes of the elites (i.e., the highest-quality solutions) and biases the search to find such elites (e.g., by adapting the crossover operator based on a generative model of the behaviour-performance map) \citep{Gaier2020}. This class allows rapidly finding high-performing solutions according to the fitness function with a high coverage of solutions. Neither class addresses how the behaviour space should be selected to maximise a custom archive-level objective. The third class, of which our method is a member, formulates the behaviour space to maximise an archive-level objective within a particular domain of problems. \cite{Meyerson2016} proposed to learn a behavioural distance function for Novelty Search to suit a particular domain of problems. In Meyerson et al. (2016), the weights given to a large number of base-features depended heuristically on the behaviours that were successful on the target domain. \cite{Bossens2020a} proposed a quality-diversity meta-evolution system, where one also adapts weights given to a large number of base-features but where the weights are evolved by an evolutionary algorithm to define a lower-dimensional behaviour space for MAP-Elites. We expand on this work by (i) reformulating the database to prevent the loss of quality-diversity, (ii) generalising the linear combination of base-features to a feature-map, a parametrised function of the base-features, and (iii) considering a more complete meta-evolution system that further optimises the MAP-Elites algorithm by dynamically controlling its various hyperparameters. Our study investigates the effects of various parameter control strategies. Although this topic has been explored widely (for an overview, see \cite{Eiben2003,Karafotias2015,Doerr19tutorial}), it is rarely explored in quality-diversity algorithms. To the best of our knowledge, one prior work has explored parameter control of the mutation rate in quality-diversity algorithms \citep{Nordmoen2018}. We investigate the mutation rate as well as the generations per meta-generation within a quality-diversity meta-evolution system.\\ \section{Results} We now demonstrate the improved meta-evolution system on a planar 8-joint robot arm. First, we evaluate the quality of the evolved maps by their coverage of the behaviour space and their fitness. We then investigate the evolution of meta-fitness depending on the type of feature-map and parameter control. Finally, we evaluate the experimental conditions on a damage recovery test.
Further meta-fitness and damage recovery comparisons can be found in Supplemental Materials. \subsection{Quality of evolved maps} Among the Meta-conditions, the non-linear feature-map obtains the highest coverage of 3000 solutions, compared to 1000 and 1500 for linear and feature-selection feature-maps, respectively (see Figure~\ref{fig: evolution}a) -- this despite random non-linear feature-maps having extremely low coverage, often of only 1 or 2 solutions, as they output a similar value for a large proportion of their inputs. This is explained by the success in optimising the meta-fitness in Eq.~\ref{eq: meta-fitness}, which is strongly related to the coverage. The highest coverages are obtained by Polar and Position (all 4096 bins for Polar and around 3300 for Position) and the lowest by JointPairAngle and AngleSum (around 800 and 500, respectively). All conditions are able to find at least one controller with near maximal fitness ($-0.01$ is the lowest maximum observed; see Figure~\ref{fig: evolution}b). The average fitness of Meta-conditions is relatively low, ranging from $-0.05$ to $-0.04$, whereas Polar and Position range from $-0.01$ to $-0.005$ and AngleSum and JointPairAngle range from $-0.03$ to $-0.02$ (see Figure~\ref{fig: evolution}c). This is likely due to a combination of two factors: (i) the features may depend strongly on the angle rather than the end-position, and so certain behavioural bins can only have low fitness; (ii) compared to other angle-based behaviour-spaces (AngleSum and JointPairAngle), the coverages are higher and still increasing, which slows down fitness improvements.\\ \begin{figure}[htbp!] \centering \includegraphics[width=0.65\linewidth]{figures/FMrecovery_m.pdf} \caption{Effect of the feature-map on meta-evolution. The $x$-axis represents the number of function evaluations and the $y$-axis represents the meta-fitness, the summed pairwise distance across 10\% of solutions in the map averaged across the damages in which it is assessed. The average meta-fitness in the $\lambda=5$ behaviour-performance maps in the meta-population is first computed and then this average is aggregated over 5 replicates as Mean $\pm$ SD. Default hyperparameters are used (mutation rate 0.125 and 5 generations per meta-generation). } \label{fig: meta-fitness} \end{figure} \subsection{Effect of feature-maps on meta-fitness} \label{subsec: feature-maps} The optimisation of meta-fitness, the summed pairwise distance among safe solutions, is strongly dependent on the feature-map (see Figure \ref{fig: meta-fitness}). While linear and feature-selection feature-maps enable rapid improvements in meta-fitness early on, they then stagnate on a plateau for the rest of meta-evolution. By contrast, non-linear feature-maps start slowly but between 10 million and 20 million function evaluations they improve rapidly. Non-linear feature-maps reach the highest meta-fitness of around $\SI{15000}{m}$ in summed pairwise distance, a 6-fold improvement over feature-selection with meta-fitness of around $\SI{2700}{m}$ and a 15-fold improvement over the linear feature-map with meta-fitness of around $\SI{1000}{m}$. This illustrates the trade-off of high-complexity functions: they can in principle represent the required function to optimise an objective but they require more data. \subsection{Effect of parameter control on meta-fitness} \label{subsec: parameter control} The best parameter control method varies depending on the feature-map and the type of parameter, but RL is consistently top-ranked.
For linear feature-maps, RL obtains the highest scores by a large margin, with a final meta-fitness of $1088.7 \pm 373.4$ when controlling the number of generations per meta-generation and $1640.5 \pm 340.1$ when controlling the mutation rate. In these cases, RL converges to around 25 generations and around 0.75 mutation rate in all of the replicates. For non-linear feature-maps, static control with a lower number of generations of 5--10 and higher mutation rates of 0.25--1 is preferable, yielding meta-fitness of $14600$--$14700$ and $16300$--$16500$, respectively. For feature-selection feature-maps, 50 generations and mutation rate 0.50 yield the highest mean meta-fitness of $3817.6 \pm 520.2$ and $5067.7 \pm 263.2$, respectively. These settings of the number of generations per meta-generation can be intuitively understood, because a smaller number of generations per meta-generation is preferred when the space of feature-maps is larger (non-linear feature-maps) and a higher number of generations per meta-generation is preferred otherwise (feature-selection feature-maps), as it provides more reliable estimates of a particular meta-genotype's meta-fitness. In sum, control of the mutation rate can yield up to a $90\%$ improvement in the meta-fitness while control of the number of generations per meta-generation yields up to $40\%$ improvement. \subsection{Recovery from a priori unknown damages} We now assess the robot arm in the same simulation environment, albeit with a different damage set than in meta-evolution. Rather than forcing a joint to be stuck at a particular angle, the damage set $\mathcal{D}_{\text{test}}$ applies an offset $\epsilon \in [-\pi,\pi]\SI{}{rad}$ to each joint angle, such that the resulting angle of each joint is $\theta_i \gets \max(-\pi/2, \min(\pi/2, \theta_i + \epsilon))\SI{}{rad}$. The included offset angles are the 20 equally spaced values in the range, excluding zero, i.e. $\{-1, -0.9,\dots, -0.1, 0.1, \dots, 0.9, 1 \} \SI{\pi}{rad}$. Given the observed evolutionary trends, we first investigate QD meta-evolution with the non-linear feature-map. For high-severity damages such as offsets of 180 degrees on the upper joint, QD meta-evolution reaches at least 60\% of the target positions in its semi-circle span (see Figure \ref{fig: testperformance}), whereas Polar and Position, the highest performers of traditional MAP-Elites algorithms, have a minimum of 30\%. For smaller offsets below 45 degrees, Polar and Position behaviour-performance maps achieve the highest reach of up to 95\% of targets, followed closely by QD meta-evolution, which has a reach of 80--90\%. MAP-Elites with a random non-linear feature-map fails completely, with only 1\% of targets being reached. \\ \begin{figure}[htbp!] \centering \subfigure[Joint 1]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg1_NEW.pdf}} \subfigure[Joint 5]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg5_NEW.pdf}} \subfigure[Joint 8]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg8_NEW.pdf}} \includegraphics[width=0.90\linewidth]{figures/legend_test.pdf} \caption{Test on unseen damages that offset the joint by a particular angle. The $x$-axis represents the offset in $[-180,180]$ degrees and the $y$-axis represents the percentage of targets reached within the semi-circle span of the robot. For each offset the Mean $\pm$ SD is aggregated over 5 replicates.
For Meta, the behaviour-performance map is formed from the mean meta-genotype (see $\mathbf{m}$ in Eq.~\ref{eq: multivariate normal}) and the default hyperparameters are used (mutation rate 0.125 and 5 generations per meta-generation). \textbf{Optimised} indicates the best setting from parameter control (i.e., mutation rate 0.25). }\label{fig: testperformance} \end{figure} To assess statistical significance, we aggregate the data across all joint damages and replicates, and report the mean and standard deviation, statistical significance, and effect size (see Table \ref{tab: significance}). QD meta-evolution outperforms all other included algorithms with statistical significance, $p<0.001$, and large effect size, Cliff's delta greater than 0.5. \\ \begin{table}[htbp!] \centering \caption{Summary statistics of the test on unseen damages. For each condition, we show the percentage of targets reached (Mean $\pm$ SD) within the semi-circle span of the robot arm and, to assess the effect compared to QD meta-evolution, the Wilcoxon rank-sum test's significance value and Cliff's delta as an effect size. Bold highlights large effect sizes. For Meta, the behaviour-performance map is generated from the mean meta-genotype (see $\mathbf{m}$ in Eq. \ref{eq: multivariate normal}) and default hyperparameters are used (mutation rate 0.125 and 5 generations per meta-generation). \textbf{Optimised} indicates the best setting from parameter control (i.e., mutation rate 0.25).} \label{tab: significance} \begin{tabular}{l l l l} \toprule \textbf{Condition} & Targets reached ($\%$) & Significance & Cliff's delta \\ \hline Meta (Optimised) & $79.16 \pm 11.38$ & \quad / & \quad / \\ Meta & $79.10 \pm 10.66$ & $p=0.338$ & $0.03$ \\ Random NonLinear & $5.94 \pm 4.64$ & $p<0.001$ & $\mathbf{1.00}$ \\ Position & $58.81 \pm 19.39$ & $p<0.001$ & $\mathbf{0.57}$\\ Polar & $57.87 \pm 19.49$ & $p<0.001$ & $\mathbf{0.60}$ \\ JointPairAngle & $60.53 \pm 13.38$ & $p<0.001$ & $\mathbf{0.71}$\\ AngleSum & $30.04 \pm 6.63$ & $p<0.001$ & $\mathbf{1.00}$\\ \bottomrule \end{tabular} \end{table} A similar comparison of Meta-conditions demonstrates that Meta NonLinear significantly outperforms the other Meta-conditions. Meta NonLinear and Meta Linear generalise as expected, whereas Meta Selection has selected features that are optimised for the training damage set but generalise poorly to the test damage set. After test damages, only 5\% to 25\% of the solutions in the archive remain safe for Meta Selection, whereas Meta Linear and Meta NonLinear retain 25\% to 100\% safe solutions (roughly 50--250, 250--1000 and 1000--3500 solutions, respectively). The high test-performance of Meta NonLinear is not due to overlap in train- and test-damages, as we observe comparable test performances after dropping the damage set from meta-evolution such that only behavioural diversity and not environmental diversity is included in meta-fitness evaluations. The near equivalence in test-performance with and without damages may be explained by the strong relation between behavioural diversity and environmental diversity (see e.g., \cite{Bossens2020}). \section{Experimental parameters} For convenience, Table~\ref{tab: evolutionparameters} includes all the parameter settings for the experimental setup. \begin{table}[htbp!] \centering \caption{Parameter settings for evolution.
The top half shows settings common to all conditions, while the bottom half shows settings for the Meta-conditions.} \label{tab: evolutionparameters} \begin{tabular}{l p{4.4cm}} \toprule \textbf{Parameter} & \textbf{Setting} \\ \hline Genotype ($\mathbf{g}$) & discretised in $[0,1]^{8}$ \\ Mutation rate & $0.125$ (unless otherwise \newline indicated) \\ Mutation type & random increment/decrement with step of $0.025$ \\ Maximal map coverage & 4,096 solutions \\ Function evaluations & 100,000,000 \\ Batch size per generation & 400 bottom-level individuals \\ Initial population ($p$) & 2,000 bottom-level individuals \\ \hline Meta-population size ($\lambda$) & 5 \\ Meta-genotype ($\mathbf{w}$) & $[-1,1]^{182}$ for non-linear feature-map \newline $[0,1]^{56}$ otherwise \\ Number of base-features ($N_b$) & 14\\ Number of target-features ($D$) & 4\\ Normalisation range ($[m,M]$) & $[0.20,0.80]$ (linear feature-maps) \\ Number of hidden units ($N_h$) & 10 (non-linear feature-maps)\\ Sigmoid scaling factor ($\alpha_s$) & 30 (non-linear feature-maps)\\ Database settings & initial $k=5000$; bin width $\delta=1/3$; capacity $3^{14}$ (just below 5 million) \\ \bottomrule \end{tabular} \end{table} \newpage \section{Meta-fitness development} This section provides more data on the development of meta-fitness, the summed pairwise distance across 10\% of solutions in the map. Table~\ref{tab: meta-fitness} displays the final meta-fitness of Meta-evolution algorithms as a function of the parameter control strategy. Fig.~\ref{fig: epochs-control} illustrates the effect of controlling the number of generations per meta-generation, with one separate plot for each feature-map (linear, non-linear, and feature-selection). Fig.~\ref{fig: mr-control} analogously illustrates the effect of controlling the mutation rate for different feature-maps. \begin{table*}[htbp!] \centering \caption{Comparison of parameter control methods. The data are divided into 6 groups based on the type of feature-map and the parameter controlled, either the number of generations or the mutation rate. Within each group, bold highlights the condition with the highest meta-fitness. Final meta-fitness is averaged over the final 10\% (i.e., the final 10 million) function evaluations.
} \label{tab: meta-fitness} \subtable[Linear feature-map] { \resizebox{!}{0.08\paperheight}{ \begin{tabular}{l l } \toprule \textbf{Parameter control} & \textbf{Meta-fitness} \\ \hline 5 generations & $1037.7 \pm 215.9$ \\ 10 generations & $1037.9 \pm 310.8$ \\ 25 generations & $952.3 \pm 168.7$ \\ 50 generations & $718.8 \pm 86.0$ \\ Annealing generations & $1036.0 \pm 262.9$ \\ RL generations & $\mathbf{1088.7 \pm 373.4}$ \\ Endogenous generations & $886.2 \pm 207.6$ \\ \hline Mutation rate 0.125 & $1037.7 \pm 215.9$ \\ Mutation rate 0.25 & $1093.7 \pm 499.7$ \\ Mutation rate 0.50 & $1185.0 \pm 258.7$ \\ Mutation rate 1.0 & $1581.3 \pm 531.4$ \\ Annealing mutation rate & $1292.4 \pm 435.6$ \\ RL mutation rate & $\mathbf{1640.5 \pm 340.1}$ \\ Endogenous mutation rate & $1113.2 \pm 266.4$ \\ \bottomrule \end{tabular} } } \subtable[Non-linear feature-map]{ \resizebox{!}{0.08\paperheight}{ \begin{tabular}{l l } \toprule \textbf{Parameter control} & \textbf{Meta-fitness} \\ \hline 5 generations & $14562.2 \pm 1084.8$ \\ 10 generations & $\mathbf{14716.5 \pm 847.5}$ \\ 25 generations & $13282.5 \pm 1276.7$ \\ 50 generations & $12014.3 \pm 1020.3$ \\ Annealing generations & $12985.3 \pm 1697.6$ \\ RL generations & $13442.5 \pm 868.1$ \\ Endogenous generations & $13390.7 \pm 1034.7$ \\ \hline Mutation rate 0.125 & $14562.2 \pm 1084.8$ \\ Mutation rate 0.25 & $\mathbf{16470.0 \pm 653.6}$ \\ Mutation rate 0.50 & $16327.3 \pm 785.0$ \\ Mutation rate 1.0 & $16431.4 \pm 1459.5$ \\ Annealing mutation rate & $15292.1 \pm 1198.2$ \\ RL mutation rate & $15054.2 \pm 1365.9$ \\ Endogenous mutation rate & $14574.3 \pm 1447.8$ \\ \bottomrule \end{tabular} } } \subtable[Selection feature-map]{ \resizebox{!}{0.08\paperheight}{ \begin{tabular}{l l } \toprule \textbf{Parameter control} & \textbf{Meta-fitness} \\ \hline 5 generations & $2719.1 \pm 149.0$ \\ 10 generations & $3009.4 \pm 745.4$ \\ 25 generations & $2795.7 \pm 515.0$ \\ 50 generations & $\mathbf{3817.6 \pm 520.2}$ \\ Annealing generations & $3356.3 \pm 442.1$ \\ RL generations & $3701.7 \pm 662.2$ \\ Endogenous generations & $3198.0 \pm 729.0$ \\ \hline Mutation rate 0.125 & $2719.1 \pm 149.0$ \\ Mutation rate 0.25 & $4050.7 \pm 350.1$ \\ Mutation rate 0.50 & $\mathbf{5067.7 \pm 263.2}$ \\ Mutation rate 1.0 & $4755.9 \pm 982.5$ \\ Annealing mutation rate & $4896.3 \pm 1019.3$ \\ RL mutation rate & $4901.4 \pm 739.8$ \\ Endogenous mutation rate & $4781.6 \pm 449.3$ \\ \bottomrule \end{tabular} } } \end{table*} \begin{figure*}[htbp!] \centering \subfigure[Linear]{\includegraphics[width=0.31\linewidth]{figures/LinearEPOCHSrecovery_m.pdf}} \label{fig: linfm_epochs} \subfigure[Non-linear]{\includegraphics[width=0.31\linewidth]{figures/NonLinearEPOCHSrecovery_m.pdf}} \label{fig: nonlinfm_epochs} \subfigure[Feature selection]{\includegraphics[width=0.31\linewidth]{figures/SelectionEPOCHSrecovery_m.pdf}} \label{fig: selection_epochs} \includegraphics[width=0.90\linewidth]{figures/legend_generationcontrol.pdf} \caption{Analysis of parameter control of the bottom-level epochs, the number of generations per meta-generation. The $x$-axis represents the number of function evaluations and the $y$-axis represents the meta-fitness, the summed pairwise distance across 10\% of solutions in the map. To compute a single number and indicate variability, Mean $\pm$ SD of the population average meta-fitness is aggregated over 5 replicates.} \label{fig: epochs-control} \end{figure*} \begin{figure*}[htbp!] 
\centering \subfigure[Linear]{\includegraphics[width=0.31\linewidth]{figures/LinearMRrecovery_m.pdf}} \label{fig: linfm_mutationrate} \subfigure[Non-linear]{\includegraphics[width=0.31\linewidth]{figures/NonLinearMRrecovery_m.pdf}} \label{fig: nonlinfm_mutationrate} \subfigure[Feature selection]{\includegraphics[width=0.31\linewidth]{figures/SelectionMRrecovery_m.pdf}} \label{fig: selection_mutationrate} \includegraphics[width=0.90\linewidth]{figures/legend_mutationcontrol.pdf} \caption{Analysis of parameter control of the mutation rate. The $x$-axis represents the number of function evaluations and the $y$-axis represents the meta-fitness, the summed pairwise distance across 10\% of solutions in the map. To compute a single number and indicate variability, Mean $\pm$ SD of the population average meta-fitness is aggregated over 5 replicates.} \label{fig: mr-control} \end{figure*} \newpage \section{Test comparison of meta-conditions} This section provides additional data about the performance of the meta-conditions as they are tested on unseen damages, which offset the desired angle by a particular error in $[-180,180]$ degrees. Table~\ref{tab: significanceMETA} shows summary and significance statistics of damage recovery, with data being aggregated across all offsets. Fig.~\ref{fig: testperformanceMETA} presents the damage recovery for three selected joints as the offset is varied. \begin{table}[htbp!] \centering \caption{Summary statistics of the test on unseen damages. For each meta-condition, we show the percentage of targets reached within the semi-circle span of the robot and, to assess the effect compared to Meta NonLinear (Optimised), the Wilcoxon rank-sum test's significance value and Cliff's delta as an effect size. Bold highlights large effect sizes. The behaviour-performance map is generated from the mean meta-genotype (see $\mathbf{m}$ in Eq.~5). Default parameter settings are a mutation rate of $0.125$ and 5 generations per meta-generation. \textbf{Optimised} indicates the best setting from parameter control: \textbf{RL mutation rate} for Meta Linear, \textbf{Mutation rate 0.25} for Meta NonLinear, and \textbf{Mutation rate 0.50} for Meta Selection.} \label{tab: significanceMETA} \begin{tabular}{l l l l} \toprule \textbf{Condition} & Targets reached ($\%$) & Significance & Cliff's delta \\ \hline Meta NonLinear (Optimised) & $79.16 \pm 11.38$ & \quad / & \quad / \\ Meta NonLinear & $79.10 \pm 10.66$ & $p=0.338$ & $0.03$ \\ Meta Selection (Optimised) & $26.86 \pm 8.02$ & $p<0.001$ & $\mathbf{1.00}$ \\ Meta Selection & $25.82 \pm 7.80$ & $p<0.001$ & $\mathbf{1.00}$\\ Meta Linear (Optimised) & $54.85 \pm 21.13$ & $p<0.001$ & $\mathbf{0.72}$ \\ Meta Linear & $59.35 \pm 12.50$ & $p<0.001$ & $\mathbf{0.74}$ \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htbp!] \centering \subfigure[Joint 1]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg1_META.pdf}} \subfigure[Joint 5]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg5_META.pdf}} \subfigure[Joint 8]{\includegraphics[width=0.30\linewidth]{figures/testperformance_leg8_META.pdf}} \includegraphics[width=0.90\linewidth]{figures/legend_testMETA.pdf} \caption{Test on unseen damages that offset the joint by a particular angle. The $x$-axis represents the offset in $[-180,180]$ degrees and the $y$-axis represents the percentage of targets reached within the semi-circle span of the robot.
For each offset the Mean $\pm$ SD is aggregated over 5 replicates. The behaviour-performance map is formed from the mean meta-genotype (see $\mathbf{m}$ in Eq.~5). Default parameter settings are a mutation rate of $0.125$ and 5 generations per meta-generation. \textbf{Optimised} indicates the best setting from parameter control: \textbf{RL mutation rate} for Meta Linear, \textbf{Mutation rate 0.25} for Meta NonLinear, and \textbf{Mutation rate 0.50} for Meta Selection. }\label{fig: testperformanceMETA} \end{figure} \section{Source code} Source code for the experiments is publicly available at \url{https://github.com/resilient-swarms/planar_metacmaes}. \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} ``Coincidence of value similarity with locational similarity'' is how Anselin and Bera \cite{anselin1998spatial} loosely describe spatial autocorrelation. For illustration we show the Burning Index, a measure of fire danger, for different locations in the US (Figure \ref{bi}). We observe that similar values cluster together, indicating (positive) spatial autocorrelation. Spatial autocorrelation occurs in many different types of data, for example in climate (fire danger, droughts) or economics (unemployment) data. This is why statistical methods that can deal with spatial autocorrelation are of high interest. A first contribution to this field was made by Whittle \cite{whittle1954stationary}, who provided a framework for stochastic processes on the plane. Whittle introduced autoregressive models in two dimensions. Following this idea, Ord \cite{ord1975estimation} proposed the simultaneous autoregressive (SAR) model. This model allows us to capture not only the spatial dependency structure of a response variable but also the influence of covariates on this variable. This property of the SAR model makes it very attractive and has led to extensions. Pace and Barry \cite{pace1997sparse} studied how sparse spatial weight matrices can speed up the estimation procedure, and De Oliveira and Song \cite{de2008bayesian} provide a Bayesian framework for the SAR model. \begin{figure}[t] \center \includegraphics[width=1\textwidth, trim = 0 2.5cm 0 3cm, clip]{spcor2.pdf} \caption{Spatial distribution of the Burning Index visualized on the map. We have one observation for every location. The cutpoints of the symbol key are the 20\%, 40\%, 60\% and 80\% quantiles of the variable.} \label{bi} \end{figure} This work was motivated by an attempt to investigate the influence of weather conditions on fire danger in the continental US while accounting for spatial dependency. Data are obtained from the Wildland Fire Assessment System (WFAS). WFAS generates maps for observed and forecasted weather, fuel moisture and fire danger in the US. The SAR model is based on the assumption that the error follows a normal distribution, an assumption that cannot always be met. In our fitted model we observed residuals with heavier tails than the normal distribution. This is why we propose an extension of the SAR model that allows for a $t$-distributed error. We call this the tSAR model (Section \ref{tsardef}). We show how the parameters of the tSAR model can be estimated and how the fitted model can be used for prediction (Sections \ref{paresttsar} and \ref{pr}). Furthermore, we provide a spatially varying variance estimate which serves as input to our models (Section \ref{sigmaeps}). In a simulation study (Section \ref{sim}), we show that our proposed estimators for the tSAR model are reasonable, and the application (Section \ref{application}) shows that the model fit can improve on the standard SAR model. \section{The SAR model} \label{sarmodel} We recall some basic concepts related to the SAR model. First, we need to be able to determine how certain locations are related to each other, i.e., whether there is a link between them and, if so, how strong the connection is. This is usually encoded in a proximity matrix (cf., Waller and Gotway \cite{waller2004applied} p.\ 224 ff.). For \(n\) spatial locations \(l_1, \ldots, l_n \), the \emph{proximity matrix} is an \(n \times n\) matrix whose entry \((i,j)\) indicates whether and how strongly location \(l_i\) is connected to location \(l_j\).
A value of zero means that there is no connection from \(l_i\) to \(l_j\). The diagonal of the proximity matrix is set to zero, so that a location is not connected to itself. Since this matrix need not be symmetric, we have to distinguish between a connection from \(l_i\) to \(l_j\) and a connection from \(l_j\) to \(l_i\). For a given proximity matrix \(W\) with entries $w_{ij}$, we can introduce the neighbors of location \(l_i\), which are all locations \(l_j\) such that \(w_{ij} \neq 0\). We denote the \emph{set of neighbors of location \(i\)} by \(N_i\), i.e., \begin{equation*} N_i \colonequals \{j \in \{1, \ldots, n\}|w_{ij} \neq 0\}. \end{equation*} We now provide two possible choices of proximity matrices. In both cases we measure the strength of a connection by the inverse distance between the two corresponding locations. We use the great circle distance (cf., Banerjee \cite{banerjee2005geodetic}) since our locations are specified as longitude/latitude pairs. For the first example we consider \(N_i(k)\), the set of the \(k\) nearest neighbors of \(l_i\), i.e., the \(k\) locations (excluding \(l_i\)) which have the smallest distance to \(l_i\). Let \(d_{ij}\) denote the distance between \(l_i\) and \(l_j\). For given \(k\), entry \((i,j)\) of the \emph{non-standardized nearest neighbors based proximity matrix \(\tilde W\)} is then given by \begin{equation*} \tilde w_{ij} \colonequals \left\{\begin{array}{cl} \frac{1}{d_{ij}}, & \mbox{if } l_j \in N_i(k) \\ 0, & \mbox{else} \end{array}\right. , \end{equation*} and entry \((i,j)\) of the \emph{row-standardized nearest neighbors based proximity matrix \(W\)} is defined by \begin{equation*} w_{ij} \colonequals \frac{\tilde w_{ij}}{\tilde w_{i.}}, \end{equation*} where \(\tilde w_{i.} = \sum_{j=1}^n \tilde w_{ij}\) is the sum of the \(i\)-th row of the non-standardized nearest neighbors based proximity matrix \(\tilde W\). By defining the proximity matrix in this way, we ensure that each location has the same number of neighbors. This is no longer the case if we use a radius to determine the set of neighbors. For a given radius \(r\), entry \((i,j)\) of the \emph{non-standardized radius based proximity matrix \(\tilde W\)} is given by \begin{equation*} \tilde w_{ij} \colonequals \left\{\begin{array}{cl} \frac{1}{d_{ij}}, & \mbox{if } i \neq j ~ \mbox{and} ~ d_{ij} \le r \\ 0, & \mbox{else} \end{array}\right. . \end{equation*} As above, entry \((i,j)\) of the \emph{row-standardized radius based proximity matrix \(W\)} is then defined by \begin{equation*} w_{ij} \colonequals \frac{\tilde w_{ij}}{\tilde w_{i.}}, \end{equation*} where \(\tilde w_{i.} = \sum_{j=1}^n \tilde w_{ij}\). A property of the non-standardized radius based proximity matrix is that it is symmetric, which is not necessarily the case for its nearest neighbors based counterpart; symmetry is in general lost after row-standardization. In the following, we will always consider row-standardized proximity matrices and refer to them as \emph{nearest neighbors matrices} and \emph{radius matrices}. This standardization allows us to interpret a sum of values weighted with the corresponding entries of the proximity matrix as a weighted average, as we will see in the SAR model. In the following we recall the classical SAR model. By $\boldsymbol Z \sim N_n(\boldsymbol \mu, S)$ we denote that the random vector $\boldsymbol Z$ follows an $n$-dimensional normal distribution with mean vector $\boldsymbol \mu$ and covariance matrix $S$.
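As a concrete illustration of these constructions, the following sketch builds a row-standardized nearest neighbors matrix (an illustration of ours in Python, not part of the original estimation software; we assume the locations are given as longitude/latitude pairs in degrees and are pairwise distinct):

\begin{verbatim}
import numpy as np

def great_circle_distance(lon1, lat1, lon2, lat2, radius=6371.0):
    # Haversine formula; great circle distance in km between
    # two longitude/latitude points given in degrees.
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius * np.arcsin(np.sqrt(a))

def knn_proximity_matrix(lon, lat, k):
    # Row-standardized nearest neighbors matrix W with
    # inverse-distance weights.
    n = len(lon)
    D = np.array([[great_circle_distance(lon[i], lat[i], lon[j], lat[j])
                   for j in range(n)] for i in range(n)])
    W_tilde = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of location i; index 0 of the
        # sorted distances is location i itself (distance zero).
        neighbors = np.argsort(D[i])[1:k + 1]
        W_tilde[i, neighbors] = 1.0 / D[i, neighbors]
    return W_tilde / W_tilde.sum(axis=1, keepdims=True)  # rows sum to one
\end{verbatim}

A radius matrix is obtained analogously by keeping all locations with \(d_{ij} \le r\) instead of the \(k\) nearest neighbors.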
\begin{definition}[The simultaneous autoregressive (SAR) model] Let \(\boldsymbol Y = (Y_1,\ldots,Y_n)^T\) be an \(n\)-dimensional random vector and \(x_{i1},\ldots,x_{ip}\) for \(i=1, \ldots, n\) associated (fixed) covariates. Let \(X \in \mathbb{R}^{n \times (p+1)}\) be a matrix whose \(i\)-th row is given by \(\boldsymbol x_i^T\), \( \boldsymbol x_i \colonequals (1,x_{i1},\ldots,x_{ip})^T \). Then the \emph{simultaneous autoregressive (SAR) model} is given by \begin{equation} \boldsymbol Y = X \boldsymbol\beta + \lambda W(\boldsymbol Y - X \boldsymbol\beta) + \boldsymbol\epsilon , \label{sar} \end{equation} where \( \lambda \in \mathbb{R}\) is the spatial dependence parameter, \( W \in \mathbb{R}^{n \times n} \) is the proximity matrix and \( \boldsymbol\beta \in \mathbb{R}^{p+1} \) is the unknown vector of regression coefficients. For the error vector we assume \(\boldsymbol\epsilon \sim N_n(0,\sigma^2\Sigma_{\epsilon})\) with a positive scalar \(\sigma\) and a diagonal matrix \( \Sigma_{\epsilon} \in \mathbb{R}^{n \times n}\) with positive diagonal entries. \label{sardef} \end{definition} Since \(\Sigma_{\epsilon}\) is diagonal, the components of \(\boldsymbol\epsilon\) are independent. In our application we need to allow for different error variances per location, i.e., the diagonal elements of $\Sigma_{\epsilon}$ may differ. Furthermore we require the matrix \( (\Id_n - \lambda W) \) to have full rank in order to ensure that the model is well defined. Here $\Id_n$ denotes the $n$-dimensional identity matrix. Writing Equation \eqref{sar} component-wise yields \begin{equation} \begin{split} Y_i &= \boldsymbol \beta^T \boldsymbol x_i + \lambda \sum_{j \in N_i} w_{ij} (Y_j - \boldsymbol \beta^T \boldsymbol x_j ) + \epsilon_i \text{ for } i = 1, \ldots, n , \end{split} \label{sarcomp} \end{equation} where \(N_i = \{j|w_{ij} \neq 0 \}\) is the set of neighbors of the \(i\)-th location as introduced above. As we consider row-standardized proximity matrices, the \emph{spatial component} $\lambda \sum_{j \in N_i} w_{ij} (Y_j - \boldsymbol \beta^T \boldsymbol x_j ) $ can be seen as a weighted average of the deviations of the \emph{linear component} $X \boldsymbol \beta $ from the response in the corresponding neighborhood. In the following, we always assume the proximity matrix $W$ and $\Sigma_{\epsilon}$ to be known. \subsection[Parameter estimation]{Parameter estimation} We briefly sketch how the parameters of the SAR model are estimated, since we want to approach parameter estimation for the tSAR model in a similar way. We follow Waller and Gotway \cite{waller2004applied} (p.\ 365 ff.), who estimate the parameters by maximizing the likelihood. This requires deriving the likelihood function. Since \( (\Id_n - \lambda W) \) has full rank, we can express Equation \eqref{sar} as \begin{equation} \boldsymbol Y = (\Id_n - \lambda W)^{-1} \boldsymbol\epsilon + X \boldsymbol\beta, \label{ssaralty} \end{equation} and we see that \(\boldsymbol Y\) (as a full-rank affine transformation of a normal random vector) is normally distributed with mean vector \begin{equation*} \E(\boldsymbol Y) = X \boldsymbol\beta , \end{equation*} and covariance matrix \begin{equation} \Var(\boldsymbol Y) = \sigma^2 \Sigma_Y(\lambda), \label{ssarvar} \end{equation} where \(\Sigma_Y(\lambda) := (\Id_n - \lambda W)^{-1} \Sigma_{\epsilon} (\Id_n - \lambda W^T)^{-1}\).
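Representation \eqref{ssaralty} also suggests a direct way to simulate from the model and to evaluate \(\Sigma_Y(\lambda)\); a minimal sketch (an illustration of ours in Python, assuming \(\Id_n - \lambda W\) has full rank):

\begin{verbatim}
import numpy as np

def simulate_sar(X, beta, lam, W, Sigma_eps, sigma, rng):
    # One draw from Y = X beta + (I - lam W)^{-1} eps, with independent
    # normal errors eps_i ~ N(0, sigma^2 (Sigma_eps)_{ii}).
    n = W.shape[0]
    eps = rng.normal(scale=sigma * np.sqrt(np.diag(Sigma_eps)), size=n)
    A = np.eye(n) - lam * W
    return X @ beta + np.linalg.solve(A, eps)

def sigma_Y(lam, W, Sigma_eps):
    # Sigma_Y(lam) = (I - lam W)^{-1} Sigma_eps (I - lam W^T)^{-1}
    A_inv = np.linalg.inv(np.eye(W.shape[0]) - lam * W)
    return A_inv @ Sigma_eps @ A_inv.T
\end{verbatim}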
Knowing the distribution of \(\boldsymbol Y\), the likelihood function for $(\boldsymbol\beta,\sigma,\lambda)$ for given data $\boldsymbol y$ is given by \begin{equation*} \begin{split} L(\boldsymbol y|\boldsymbol\beta,\sigma,\lambda) \colonequals & (2\pi)^{-\frac{n}{2}} \det[\sigma^2 \Sigma_{Y}(\lambda)]^{-\frac{1}{2}} \cdot \\ & \cdot \exp\left[-\frac{1}{2}(\boldsymbol y - X \boldsymbol\beta)^T \frac{1}{\sigma^2} \Sigma_{Y}(\lambda)^{-1} (\boldsymbol y - X\boldsymbol\beta) \right]. \end{split} \end{equation*} Instead of maximizing the likelihood function, we minimize the negative log-likelihood given by \begin{equation} \begin{split} \ell(\boldsymbol y|\boldsymbol\beta,\sigma,\lambda) \colonequals & - \log\left[L(\boldsymbol y|\boldsymbol\beta,\sigma,\lambda)\right] \\ =& \frac{n}{2} \log(2\pi) + \frac{n}{2}\log(\sigma^2) + \frac{1}{2}\log\left\{\det[\Sigma_Y(\lambda)]\right\} + \\ &+ \frac{1}{2\sigma^2} (\boldsymbol y-X\boldsymbol\beta)^T \Sigma_Y(\lambda)^{-1} (\boldsymbol y-X\boldsymbol \beta) . \end{split} \label{ssarnll} \end{equation} \subsubsection*{Estimation of $\boldsymbol\beta$} First we take the derivative of \(\ell(\boldsymbol y|\boldsymbol\beta,\sigma,\lambda)\) with respect to \(\boldsymbol\beta\) and set it to zero. Solving for \(\boldsymbol\beta\) yields the \(\lambda\)-dependent estimate \begin{equation} \hat{\boldsymbol\beta}(\lambda) = \left[X^T \Sigma_Y(\lambda)^{-1} X \right]^{-1} X^T \Sigma_Y(\lambda)^{-1} \boldsymbol y , \label{ssarbeta} \end{equation} which is independent of \(\sigma\). For fixed $\lambda$, this is the generalized least squares estimator for $\boldsymbol\beta$ (cf., Kariya and Kurata \cite{kariya2004generalized} p.\ 35). \subsubsection*{Estimation of $\sigma$} We proceed in the same way for \(\sigma^2\) and obtain the (\(\boldsymbol\beta\)- and \(\lambda\)-dependent) estimate \begin{equation} \hat\sigma^2(\boldsymbol\beta, \lambda) =\frac{1}{n}(\boldsymbol y-X\boldsymbol\beta)^T \Sigma_Y(\lambda)^{-1} (\boldsymbol y-X\boldsymbol \beta). \label{ssarsigmaest} \end{equation} The estimate for \(\sigma\) is given by its positive square root, i.e., \begin{equation*} \hat\sigma(\boldsymbol\beta, \lambda) =\sqrt{\frac{1}{n}(\boldsymbol y-X\boldsymbol\beta)^T \Sigma_Y(\lambda)^{-1} (\boldsymbol y-X\boldsymbol \beta)}. \end{equation*} \subsubsection*{Estimation of $\lambda$} There is no closed-form solution for \( \lambda \). So we focus on the negative profile log-likelihood given by \begin{equation*} \begin{split} &\frac{n}{2} \log(2\pi) + \frac{n}{2}\log \left[ \hat{\sigma}(\hat{\boldsymbol\beta}(\lambda), \lambda)^2 \right] + \frac{1}{2}\log\left\{\det[\Sigma_Y(\lambda)]\right\} + \\ &+ \frac{\left[\boldsymbol y-X\hat{\boldsymbol\beta}(\lambda)\right]^T \Sigma_Y(\lambda)^{-1} \left[\boldsymbol y-X\hat{\boldsymbol\beta}(\lambda)\right]}{2\hat{\sigma}(\hat{\boldsymbol\beta}(\lambda), \lambda)^2} , \end{split} \end{equation*} which is obtained by replacing \(\boldsymbol\beta\) by \(\hat {\boldsymbol{\beta}}(\lambda) \) and \(\sigma\) by \( \hat{\sigma}(\hat{\boldsymbol\beta}(\lambda), \lambda) \) in the negative log-likelihood function \eqref{ssarnll}. This one-dimensional nonlinear minimization problem can be solved by appropriate optimization algorithms and yields \(\hat\lambda\), the estimate of \(\lambda\). The estimation procedure is implemented in the \(\texttt{R}\) package \(\texttt{spdep}\) (see Bivand \cite{bivandspdep}).
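For illustration only, a self-contained numerical sketch of this profile-likelihood minimization (in Python; this is our own illustration and not the \(\texttt{spdep}\) implementation, and the search interval for \(\lambda\) is an assumption):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def profile_negloglik(lam, y, X, W, Sigma_eps):
    # Negative profile log-likelihood of the SAR model in lambda.
    n = len(y)
    A = np.eye(n) - lam * W
    # Sigma_Y(lam)^{-1} = (I - lam W^T) Sigma_eps^{-1} (I - lam W)
    Sigma_Y_inv = A.T @ np.diag(1.0 / np.diag(Sigma_eps)) @ A
    XtS = X.T @ Sigma_Y_inv
    beta = np.linalg.solve(XtS @ X, XtS @ y)  # GLS estimate for this lambda
    r = y - X @ beta
    sigma2 = (r @ Sigma_Y_inv @ r) / n        # profiled-out variance
    # log det Sigma_Y(lam) = log det Sigma_eps - 2 log|det(I - lam W)|
    _, logdet_A = np.linalg.slogdet(A)
    logdet_Sigma_Y = np.log(np.diag(Sigma_eps)).sum() - 2.0 * logdet_A
    return 0.5 * (n * np.log(2.0 * np.pi) + n * np.log(sigma2)
                  + logdet_Sigma_Y + n)

# Example usage:
# lam_hat = minimize_scalar(profile_negloglik, bounds=(-0.99, 0.99),
#                           method="bounded", args=(y, X, W, Sigma_eps)).x
\end{verbatim}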
Within \(\texttt{spdep}\), the optimization is carried out by the \(\texttt{R}\) function \(\texttt{optimize}\), which combines golden section search and successive parabolic interpolation (see Brent \cite{brent1973algorithms}). The final estimate of \(\boldsymbol\beta\) is then given by \(\hat{\boldsymbol\beta} = \hat{\boldsymbol\beta}(\hat\lambda) \) and the final estimate of \(\sigma\) is given by \(\hat\sigma = \hat\sigma(\hat{\boldsymbol\beta}, \hat\lambda)\). \subsection{Prediction and residuals} From Equation \eqref{sarcomp} it follows that the conditional expectation of $\boldsymbol Y$ at spatial location $i$, given the values at all other spatial locations, is \begin{equation*} \begin{split} \E(Y_i|\boldsymbol Y_{-i} = \boldsymbol y_{-i}) &= \E(Y_i|Y_j = y_j, j \in N_i) \\ &= \boldsymbol \beta^T \boldsymbol x_i + \lambda \sum_{j \in N_i} w_{ij} (y_j - \boldsymbol \beta^T \boldsymbol x_j ) , \end{split} \end{equation*} where $\boldsymbol z_{-i} = \{z_1, \ldots, z_n\} \setminus \{z_i\}$ for an $n$-dimensional vector $\boldsymbol z$. So we define the \emph{\(i\)-th local prediction of $Y$}, where the neighbors' values are observed, by \begin{equation*} \hat y_{i|N_i} \colonequals \hat{\boldsymbol \beta}^T \boldsymbol x_i + \hat\lambda \sum_{j \in N_i} w_{ij} (y_j - \hat{\boldsymbol \beta}^T \boldsymbol x_j ) , \end{equation*} and the corresponding \emph{vector of local predictions} is defined by \begin{equation*} \hat {\boldsymbol y}_{|N} \colonequals X \hat{\boldsymbol\beta} + \hat\lambda W(\boldsymbol y - X \hat{\boldsymbol\beta}) . \end{equation*} Based on the prediction we can define the \emph{\(i\)-th local residual} as \begin{equation*} \hat \epsilon_i \colonequals y_i - \hat y_{i|N_i}. \end{equation*} Since the local residual is the only type of residual we consider, we also refer to it simply as the \emph{\(i\)-th residual}. The \emph{\(i\)-th standardized residual} is given by \begin{equation*} \tilde{\epsilon}_i \colonequals \frac{\hat \epsilon_i}{\sqrt{\hat\sigma^2 (\Sigma_{\epsilon})_{ii}}}, \end{equation*} since \(\Var(\epsilon_i) = \sigma^2 (\Sigma_{\epsilon})_{ii}\). For a good fit we expect the standardized residuals to be approximately independent and identically standard normally distributed. Furthermore, an estimate for the \emph{standard error of \(\hat\beta_i\)} is provided by \begin{equation*} \hat\se(\hat\beta_i) \colonequals \hat\sigma \sqrt{ \left(\left[X^T \Sigma_Y(\hat\lambda)^{-1} X\right]^{-1}\right)_{ii}}, \end{equation*} since \begin{equation} \begin{split} \Var(\hat{\boldsymbol \beta}(\lambda)) =& \Var\left( \left[X^T \Sigma_Y(\lambda)^{-1} X \right]^{-1} X^T \Sigma_Y(\lambda)^{-1} \boldsymbol Y\right) \\ =& \left[X^T \Sigma_Y(\lambda)^{-1} X \right]^{-1} X^T \Sigma_Y(\lambda)^{-1} \Var(\boldsymbol Y) \cdot \\ & \cdot \left\{\left[X^T \Sigma_Y(\lambda)^{-1} X \right]^{-1} X^T \Sigma_Y(\lambda)^{-1}\right\}^T \\ =& \sigma^2 \left[X^T \Sigma_Y(\lambda)^{-1} X\right]^{-1} . \end{split} \label{ssarvarbeta} \end{equation} This can be used to test the significance of \(\beta_i\). For fixed \(\lambda\), \( \hat{\boldsymbol\beta}\) is normally distributed (as a linear transformation of the normally distributed vector \(\boldsymbol Y\)). We use the following test for the significance of \(\beta_i\) with significance level \(\alpha\), null hypothesis \(H_0: \beta_i = 0\) and alternative \(H_1: \beta_i \neq 0\).
We reject \(H_0\) if \begin{equation*} \left |\frac{\hat\beta_i}{\hat\se(\hat\beta_i)}\right | > \Phi^{-1}\left(1-\frac{\alpha}{2}\right) , \end{equation*} where \(\Phi^{-1}(1-\frac{\alpha}{2})\) denotes the \(1-\frac{\alpha}{2}\) quantile of the $N(0,1)$ distribution. This test has to be used with caution, however, because the standard error was estimated under the assumption that $\lambda$ is known. The standard error is thus too small, since we do not account for the variation in \(\hat\lambda\). \section{The tSAR model} \label{tsar} The tSAR model extends the SAR model to allow for a Student \(t\) error distribution. We replace the assumption that the error vector is normally distributed by the assumption that the components of the error vector are univariate $t$-distributed. This allows for heavier-tailed errors in our model. \subsection{Model definition} \label{tsardef} We say that the one-dimensional random variable \(X\) follows a \emph{\(t\)-distribution} with mean \(\mu ~ (\mu \in \mathbb{R})\), scale parameter \(\Sigma ~ (\Sigma \in \mathbb{R}, \Sigma >0)\) and \(\nu\) (\(\nu > 0\)) degrees of freedom if \(X\) has the density \begin{equation*} t( x|\mu, \Sigma, \nu) \colonequals \Gamma\left(\frac{\nu + 1}{2}\right) \Gamma\left(\frac{\nu}{2}\right)^{-1} \frac{1}{ \sqrt{ \nu \pi \Sigma }}\left[1 + \frac{( x - \mu)^2}{\nu \Sigma} \right]^{-\frac{\nu + 1}{2}}, \end{equation*} where \(\Gamma(x) \colonequals \int_0^{\infty}s^{x-1}e^{-s}ds\) is the gamma function. We write \( X \sim t( \mu, \Sigma, \nu) \). Furthermore, we denote by $Sc( X)$ the scale parameter of $X$. \label{tdist} According to Kotz and Nadarajah \cite{kotz2004multivariate} (p.\ 10 ff.) it holds that \begin{equation} \E(X) = \mu \quad \text{for } \nu > 1, \end{equation} and \begin{equation} \Var(X) = \frac{\nu}{\nu -2} \Sigma \quad \text{for } \nu > 2. \end{equation} \begin{definition}[tSAR model] In the \emph{tSAR model} we assume that \begin{equation*} \boldsymbol Y= X \boldsymbol\beta + \lambda W(\boldsymbol Y - X \boldsymbol\beta) + \boldsymbol\epsilon , \end{equation*} with \(\epsilon_i \sim t(0,\sigma^2 (\Sigma_{\epsilon})_{ii}, \nu)\), where \(\sigma\) is a positive scalar, \( \Sigma_{\epsilon} \in \mathbb{R}^{n \times n}\) is a diagonal matrix with positive diagonal entries and \(\nu > 2 \) is the degrees of freedom. Furthermore, we assume that the components of the vector \(\boldsymbol\epsilon\) are independent. \(X, \lambda, W\) and \(\boldsymbol \beta\) are defined as in Definition \ref{sardef}. \end{definition} \subsection[Parameter estimation]{Parameter estimation} \label{paresttsar} As in the SAR model, we estimate the parameters by maximizing the likelihood while assuming $W$, $\Sigma_{\epsilon}$ and the degrees of freedom $\nu$ to be known. We start by deriving the likelihood function. Since \begin{equation*} \boldsymbol\epsilon = (\Id_n - \lambda W) (\boldsymbol Y - X \boldsymbol\beta) , \end{equation*} the components of the vector \begin{equation*} \boldsymbol Z := \Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) (\boldsymbol Y - X \boldsymbol\beta), \end{equation*} where \(\Sigma_{\epsilon}^{-\frac{1}{2}}\) is a diagonal matrix with \(i\)-th diagonal entry \((\Sigma_{\epsilon})_{ii}^{-\frac{1}{2}}\), are independent and identically \(t(0,\sigma^2, \nu)\) distributed. So the density \(f_{\boldsymbol Z}\) of \( \boldsymbol Z\) is the product of its marginal densities. Furthermore, we have that \begin{equation*} \boldsymbol Y = (\Id_n - \lambda W)^{-1} \sqrt{\Sigma_{\epsilon}} \boldsymbol Z + X \boldsymbol\beta .
\end{equation*} We obtain the density of \( \boldsymbol Y \) from \(f_{\boldsymbol Z}\) by density transformation, \begin{equation*} \begin{split} f_Y (\boldsymbol y)=& \left|{\det\left[\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) \right]}\right| \prod_{i=1}^n t\left( \left(\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) (\boldsymbol y - X \boldsymbol\beta)\right)_i| 0, \sigma^2, \nu \right) . \end{split} \end{equation*} Hence the negative log-likelihood of data $\boldsymbol y$ given the model parameters $(\boldsymbol\beta,\sigma,\lambda)$ is \begin{equation} \begin{split} \ell(\boldsymbol y|\boldsymbol\beta, \lambda, \sigma) \colonequals& -\log\left\{\left|{\det\left[\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) \right]}\right|\right\}\\ &- \sum_{i=1}^n \log\left[t\left( \left(\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) (\boldsymbol y - X \boldsymbol\beta)\right)_i| 0, \sigma^2, \nu \right) \right]. \label{tsarnll} \end{split} \end{equation} Unfortunately we cannot proceed as before (i.e., take the derivatives with respect to \(\boldsymbol\beta \) and \(\sigma\), set them to zero and solve analytically for the parameters) due to the more complex form of the likelihood function. To illustrate this problem we write down the derivative with respect to \(\boldsymbol \beta\), \begin{equation*} \frac{d}{d\boldsymbol\beta} \ell(\boldsymbol y|\boldsymbol\beta, \lambda, \sigma) = -\frac{\nu+1}{2} \sum_{i=1}^n \frac{1}{\left[1 + \frac{m_i(\boldsymbol\beta)^2}{\sigma^2\nu} \right]} \frac{2 m_i(\boldsymbol\beta)}{\sigma^2\nu} \left( \Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) X \right)_i , \end{equation*} where \(m_i(\boldsymbol\beta) = \left(\Sigma_{\epsilon}^{-\frac{1}{2}} (\Id_n - \lambda W) (\boldsymbol y - X \boldsymbol\beta)\right)_i \) and \(\left( \Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) X \right)_i\) is the \(i\)-th row of \(\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) X\); note that the first term of \eqref{tsarnll} does not depend on \(\boldsymbol\beta\). If we set this expression to zero, we cannot solve it analytically for \(\boldsymbol \beta\). Numerical optimization over all parameters jointly would be computationally very costly, since $\boldsymbol\beta$ is often high-dimensional. Therefore we suggest to estimate \(\boldsymbol\beta\) and \(\sigma\) as explained in the following. \subsubsection*{Estimation of $\boldsymbol \beta$} A simple analytic estimator for \(\boldsymbol \beta\) is the \(\lambda\)-dependent generalized least squares estimator, i.e., \begin{equation*} \hat{\boldsymbol\beta}(\lambda) = \left[X^T \Sigma_Y(\lambda)^{-1} X \right]^{-1} X^T \Sigma_Y(\lambda)^{-1} \boldsymbol Y , \end{equation*} as in the SAR model. For fixed $\lambda$, this is the best linear unbiased estimator according to the Gau{\ss}-Markov theorem (cf., Kariya and Kurata \cite{kariya2004generalized} p.\ 34).
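For fixed \(\lambda\) this estimator is cheap to evaluate; a minimal sketch (our own Python illustration, exploiting that \(\Sigma_Y(\lambda)^{-1} = (\Id_n - \lambda W^T)\Sigma_{\epsilon}^{-1}(\Id_n - \lambda W)\) and that \(\Sigma_{\epsilon}\) is diagonal):

\begin{verbatim}
import numpy as np

def gls_beta(y, X, lam, W, Sigma_eps):
    # Generalized least squares estimate of beta for fixed lambda,
    # used for both the SAR and the tSAR model.
    n = len(y)
    A = np.eye(n) - lam * W
    Sigma_Y_inv = A.T @ np.diag(1.0 / np.diag(Sigma_eps)) @ A
    XtS = X.T @ Sigma_Y_inv
    return np.linalg.solve(XtS @ X, XtS @ y)
\end{verbatim}

Working with \(\Sigma_Y(\lambda)^{-1}\) in this factored form avoids inverting a dense \(n \times n\) covariance matrix.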
\subsubsection*{Estimation of $\sigma$} For \(\sigma\) we suggest the following estimate, dependent on \(\boldsymbol\beta\) and \(\lambda\), \begin{equation*} \hat\sigma^2(\boldsymbol\beta, \lambda) = \frac{\nu-2}{\nu} \frac{1}{n}(\boldsymbol y-X\boldsymbol\beta)^T \Sigma_Y(\lambda)^{-1} (\boldsymbol y-X\boldsymbol \beta), \end{equation*} since \(\hat\sigma^2(\hat{\boldsymbol\beta}, \hat\lambda)\) can be written as \begin{equation*} \begin{split} \hat\sigma^2(\hat{\boldsymbol\beta}, \hat\lambda) &= \frac{\nu-2}{\nu} \frac{1}{n}(\boldsymbol y-X\hat{\boldsymbol\beta})^T \Sigma_Y(\hat\lambda)^{-1} (\boldsymbol y-X\hat{\boldsymbol\beta}) \\ &= \frac{\nu-2}{\nu} \frac{1}{n}(\boldsymbol y-X\hat{\boldsymbol\beta})^T (\Id_n - \hat\lambda W^T) \Sigma_{\epsilon}^{-1} (\Id_n - \hat\lambda W) (\boldsymbol y-X\hat{\boldsymbol\beta}) \\ &= \frac{\nu-2}{\nu} \frac{1}{n} \hat{\boldsymbol\epsilon}^T \Sigma_{\epsilon}^{-1} \hat{\boldsymbol\epsilon} \\ &= \frac{\nu-2}{\nu} \frac{1}{n}\sum_{i=1}^n \frac{\hat\epsilon_i^2}{(\Sigma_{\epsilon})_{ii}} , \end{split} \end{equation*} where we used the definition of the prediction vector $ \hat {\boldsymbol y}_{|N} \colonequals X \hat{\boldsymbol\beta} + \hat\lambda W(\boldsymbol y - X \hat{\boldsymbol\beta})$ and of the residual $\hat{ \boldsymbol \epsilon} \colonequals \boldsymbol y - \hat{ \boldsymbol y}_{|N}$ to express $\hat{ \boldsymbol\epsilon}$ as \begin{equation*} \begin{split} \hat{ \boldsymbol\epsilon} &= \boldsymbol y - \hat { \boldsymbol y}_{|N} \\ &= \boldsymbol y - X \hat{\boldsymbol\beta} - \hat\lambda W \boldsymbol y + \hat\lambda W X \hat{\boldsymbol\beta} \\ &= (\Id_n - \hat\lambda W)(\boldsymbol y - X \hat{\boldsymbol\beta}). \end{split} \end{equation*} The quantity \(\frac{1}{n}\sum_{i=1}^n \frac{\hat\epsilon_i^2}{(\Sigma_{\epsilon})_{ii}} \) is an estimate of the variance of \( \epsilon_i/\sqrt{(\Sigma_{\epsilon})_{ii}}\), which equals \(\frac{\nu}{\nu-2}\sigma^2\) for a \(t(0,\sigma^2,\nu)\) variable, and so \(\hat\sigma^2 (\hat{\boldsymbol\beta}, \hat\lambda)\) is an estimate of the scale parameter \(\sigma^2\). \subsubsection*{Estimation of $\lambda$} For the estimation of \(\lambda\) we proceed as in the SAR model, i.e., we obtain the negative profile log-likelihood by replacing \(\boldsymbol\beta\) by \( \hat{\boldsymbol \beta}(\lambda) \) and \(\sigma\) by \(\hat\sigma(\hat{\boldsymbol\beta}(\lambda), \lambda)\) in the negative log-likelihood function \eqref{tsarnll}. Then \(\hat\lambda\) is defined as the minimizer of the negative profile log-likelihood, which is found numerically. As before we set \(\hat{\boldsymbol\beta} = \hat{\boldsymbol\beta}(\hat\lambda) \) and \(\hat\sigma = \hat\sigma(\hat{\boldsymbol\beta}, \hat\lambda)\). \subsection{Prediction and residuals} \label{pr} The \emph{vector of local predictions \(\hat{\boldsymbol y}_{|N}\)} and the \emph{residual vector \(\hat{\boldsymbol \epsilon}\)} are defined as for the SAR model, i.e., \begin{equation*} \hat {\boldsymbol y}_{|N} \colonequals X \hat{\boldsymbol\beta} + \hat\lambda W(\boldsymbol y - X \hat{\boldsymbol\beta}) , \end{equation*} and \begin{equation*} \hat \epsilon_i \colonequals y_i - \hat y_{i|N_i}. \end{equation*} Since \(Sc(\epsilon_i) = \sigma^2 (\Sigma_{\epsilon})_{ii} \), we define the \emph{\(i\)-th standardized residual} by \begin{equation*} \tilde{\epsilon}_i \colonequals \frac{\hat \epsilon_i}{\sqrt{\hat\sigma^2 (\Sigma_{\epsilon})_{ii}}}.
\end{equation*} As in \eqref{ssaralty}, we can write \(\boldsymbol Y\) as \begin{equation*} \boldsymbol Y = (\Id_n - \lambda W)^{-1} \boldsymbol\epsilon + X \boldsymbol\beta , \end{equation*} and obtain, similarly to Equation \eqref{ssarvar}, \begin{equation*} \Var(\boldsymbol Y) = \frac{\nu}{\nu - 2}\sigma^2 \Sigma_Y(\lambda), \end{equation*} where \(\Sigma_Y(\lambda) := (\Id_n - \lambda W)^{-1} \Sigma_{\epsilon} (\Id_n - \lambda W^T)^{-1}\). Therefore we get, similarly to \eqref{ssarvarbeta}, \begin{equation*} \Var(\hat{ \boldsymbol\beta}(\lambda)) = \frac{\nu}{\nu - 2}\sigma^2 \left[X^T \Sigma_Y(\lambda)^{-1} X\right]^{-1} , \end{equation*} and thus we estimate the \emph{standard error of \(\hat\beta_i\)} by \begin{equation*} \hat\se(\hat\beta_i) \colonequals \sqrt{\frac{\nu}{\nu - 2}\hat\sigma^2 \left(\left[X^T \Sigma_Y(\hat\lambda)^{-1} X\right]^{-1}\right)_{ii}}. \end{equation*} \subsection{Specifying the matrix $\Sigma_{\epsilon}$} \label{sigmaeps} To estimate a SAR or tSAR model we need to specify the matrix $\Sigma_{\epsilon}$, which is proportional to the covariance matrix of the error vector. One possibility would be to choose it equal to the identity matrix, which forces all locations to have the same error variance. Since we also want to account for different error variances, we provide a variance estimate which uses the restriction to a neighborhood. We define the \emph{local empirical variance} of the spatial variable \(Z_i\) at location \(i\) with respect to the proximity matrix \(W\) as \begin{equation} \hat\sigma^2_{W}(Z_i) \colonequals \frac{1}{|N_i| - 1} \sum_{j \in N_i} (z_j - \bar z_{N_i})^2, \label{empvar} \end{equation} where the \(z_j\) are observations of \(Z\), \(N_i = \{j|w_{ij} \neq 0\} \) is the neighborhood of location \(i\) induced by \(W\), \(|N_i|\) is the cardinality of the set \(N_i\) and \( \bar z_{N_i} = \frac{1}{|N_i|}\sum_{j \in N_i} z_j\). The corresponding \emph{local empirical variance matrix} is a diagonal matrix with \(i\)-th diagonal entry equal to \(\hat\sigma^2_{W}(Z_i)\). \label{empvardef} For a SAR or tSAR model with response $Y$, covariates $x_1,\ldots, x_p$ and proximity matrix $W$, we propose to specify $\Sigma_{\epsilon}$ in the following way. \begin{enumerate} \item We fit a linear regression model with response variable $Y$ and covariates $x_1,\ldots, x_p$, i.e., we assume \begin{equation*} y_i = (x_{1i},\ldots, x_{pi})\boldsymbol \beta_{lm} + \epsilon_{lm,i} \end{equation*} with $\epsilon_{lm,i} \sim N(0,\sigma^2)$, $\boldsymbol \beta_{lm} \in \mathbb{R}^p$, $\sigma \in \mathbb{R}_+$. We obtain $\hat{\boldsymbol\beta}_{lm}$, the estimate of $\boldsymbol\beta_{lm}$, by least squares estimation. The $i$-th residual $r_i$ is given by \begin{equation*} r_i = y_i - (x_{1i},\ldots, x_{pi})\hat{\boldsymbol\beta}_{lm}. \end{equation*} \item Then we set $\Sigma_{\epsilon}$ equal to the local empirical variance matrix of the residual vector $\boldsymbol r$ with respect to $W$. So $\Sigma_{\epsilon}$ is a diagonal matrix with $i$-th diagonal entry $(\Sigma_{\epsilon})_{ii} = \hat\sigma^2_W(r_i)$, where $\hat\sigma^2_W(\cdot)$ is defined in \eqref{empvar}. We call this the \emph{local regression variance matrix of $Y$ with respect to $W$}. \end{enumerate}
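In code, the two steps read as follows (a sketch of ours in Python; \(\texttt{X\_cov}\) denotes the \(n \times p\) covariate matrix of the auxiliary linear model):

\begin{verbatim}
import numpy as np

def local_regression_variance_matrix(y, X_cov, W):
    # Step 1: least squares residuals of the linear model y ~ X_cov.
    beta_lm, *_ = np.linalg.lstsq(X_cov, y, rcond=None)
    r = y - X_cov @ beta_lm
    # Step 2: local empirical variance of the residuals over each
    # neighborhood N_i = {j : w_ij != 0}.
    n = len(y)
    diag = np.empty(n)
    for i in range(n):
        neighbors = np.nonzero(W[i])[0]
        diag[i] = r[neighbors].var(ddof=1)  # divides by |N_i| - 1
    return np.diag(diag)
\end{verbatim}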
\section{Simulation study} \label{sim} In this section we study whether the proposed estimators of the tSAR model behave in a reasonable way and how they compare to the estimators of the existing SAR model.

We simulate from a tSAR model in the following way. \begin{enumerate} \item (number of locations $n$) We specify the number of locations $n$ as 250 or 1500. \item (proximity matrix $W$) We use the first $n$ longitude/latitude values of the WFAS data set introduced in Section \ref{datadesc} to determine locations and corresponding neighborhoods. We set the proximity matrix $W$ equal to a nearest neighbors matrix with $k=30$ neighbors. \item (covariates $\boldsymbol x_1, \ldots, \boldsymbol x_7$) We obtain the covariates $\boldsymbol x_1, \ldots, \boldsymbol x_7$ by sampling $n$ times independently from the following distributions: \begin{equation} \begin{split} \boldsymbol x_1, \ldots, \boldsymbol x_5: & \text{ standard normal}\\ \boldsymbol x_6 \hspace*{0.5cm}:& \text{ Bernoulli with }p=0.3\\ \boldsymbol x_7 \hspace*{0.5cm}:& \text{ Bernoulli with }p=0.7 \end{split} \end{equation} \item (degrees of freedom $\nu$) We specify the degrees of freedom $\nu$ as 4 or 20. \item (simulation of \(\boldsymbol\epsilon\)) To account for a varying variance, we define 6 regions (see Figure \ref{regions}) with corresponding scale parameters \(s_1 = 4, s_2 = 0.6, s_3 = 5, s_4 = 0.3, s_5 = 4, s_6 = 6\) and simulate independently for \(i = 1, \ldots, n \): if location \(i\) belongs to region \(j\), simulate \(\epsilon_{i}\) from \(t(0,s_j^2, \nu)\). \item (coefficients $\boldsymbol\beta$) We set \begin{equation*} \beta_0=3,~ \beta_1=10,~ \beta_2=4,~ \beta_3=5,~ \beta_4=2,~ \beta_5=8,~ \beta_6=1,~ \beta_7=3 . \end{equation*} \item (spatial parameter $\lambda$) We specify $\lambda$ as 0.4 or 0.8. \item (response $\boldsymbol y$) According to the assumptions of the tSAR model we set \begin{equation*} \boldsymbol y = (\Id_n - \lambda W)^{-1} \boldsymbol\epsilon + X\boldsymbol\beta , \end{equation*} where $\boldsymbol\beta=(\beta_0,\beta_1,\ldots,\beta_7)^T$ and $X=(\boldsymbol 1, \boldsymbol x_1, \ldots, \boldsymbol x_7)$. \end{enumerate} \begin{figure}[H] \center \includegraphics[width=1\textwidth]{regions.pdf} \caption{Locations of the weather stations of the WFAS data and the 6 regions used in the simulation study visualized on the map.} \label{regions} \end{figure} Combining the choice between SAR and tSAR with the different choices for $\Sigma_{\epsilon}$ leads to 6 different models (see Table \ref{mod}) that are estimated from the simulated data. \begin{table}[H] \centering \begin{tabular}{ccl} model & type & $\Sigma_{\epsilon}$ \\ \hline 1&SAR&$\Id_n$\\ 2&tSAR&$\Id_n$\\ 3&SAR& local regression variance matrix\\ 4&tSAR&local regression variance matrix\\ 5&SAR&true\\ 6&tSAR&true\\ \end{tabular} \caption{Different models estimated in the simulation study.} \label{mod} \end{table} In the tSAR model we have one additional parameter, the degrees of freedom $\nu$, which was assumed to be known in Section \ref{tsar}. Instead of specifying this parameter, we use numerical optimization to obtain an estimate for it. We use the \(\texttt{R}\) function \(\texttt{optimize}\) with a high tolerance (tolerance = 1) to speed up computation. Here we allow $\nu$ to be a real parameter between 3 and 20. Note that in the SAR model \(\sigma\) is the standard deviation of \(\epsilon_i/\sqrt{(\Sigma_{\epsilon})_{ii}} \), whereas in the tSAR model \(\sigma\) is the square root of the scale parameter of \(\epsilon_i/\sqrt{(\Sigma_{\epsilon})_{ii}} \) and the standard deviation is given by \(\sqrt{\frac{\nu}{\nu - 2}} \sigma\).
For easier comparison we introduce \emph{\(s\), the standard deviation of \(\epsilon_i/\sqrt{(\Sigma_{\epsilon})_{ii}} \)}, and define its estimate \(\hat s\), depending on the model, by \begin{equation} \hat s \colonequals \left\{\begin{array}{cl} \hat \sigma, & \mbox{if the SAR model is used}, \\ \sqrt{\frac{\nu}{\nu - 2}}\, \hat \sigma, & \mbox{if the tSAR model is used}. \end{array}\right. \end{equation} The results of the simulation study are shown in Table \ref{sumsim}. To evaluate the estimates we use the root mean squared error, given by \begin{equation} \text{RMSE}(\hat\theta) = \sqrt{\frac{1}{p}\sum_{j=1}^p\frac{1}{r}\sum_{i=1}^r (\theta_{j} - \hat{\theta}_{ji})^2} , \end{equation} where $r$ is the number of replications (in our case $r=500$), $\theta_j$ is the $j$-th component of the $p$-dimensional vector $\boldsymbol \theta$ and $\hat{\theta}_{ji}$ its estimate in the $i$-th replication. First we analyze the results with respect to the number of locations $n$, comparing models that only differ in the choice of this parameter. One usually expects the root mean squared error to decrease as the number of locations increases. We observe this behavior for all parameters in cases where $\Sigma_{\epsilon}$ is the true value or the local regression variance matrix. If $\Sigma_{\epsilon}$ is the identity matrix, this does not hold for the parameter $s$: the parameter $s$ scales $\Sigma_{\epsilon}$, and if $\Sigma_{\epsilon}$ is specified incorrectly we cannot expect a reasonable estimate for $s$. Furthermore, the results show that the choice of $\Sigma_{\epsilon}$ influences the estimates for $\boldsymbol \beta$. Comparing models that only differ in the choice of $\Sigma_{\epsilon}$, the best estimates are obtained when $\Sigma_{\epsilon}$ is the true value, the second best when $\Sigma_{\epsilon}$ is the local regression variance matrix and the worst when $\Sigma_{\epsilon} = \Id_n$. There is a notable difference in $\RMSE(\hat{\boldsymbol\beta})$ between cases where $\Sigma_{\epsilon} = \Id_n$ and cases where $\Sigma_{\epsilon}$ is equal to the local regression variance matrix. This shows that introducing the local regression variance matrix brings a notable improvement for estimating \(\boldsymbol\beta\) compared to the trivial choice $\Sigma_{\epsilon} = \Id_n$. For $\lambda$, reasonable estimates are provided in all cases, whereas the best estimates are usually obtained when $\Sigma_{\epsilon}$ is the true value. We see that the choice of $\Sigma_{\epsilon}$ also influences the estimates for $s$, in a similar way as for $\boldsymbol\beta$: the best estimates are obtained when $\Sigma_{\epsilon}$ is the true value, the second best when $\Sigma_{\epsilon}$ is equal to the local regression variance matrix and the worst when $\Sigma_{\epsilon} = \Id_n$. The differences in $\RMSE(\hat s)$ are rather large, since it is difficult to estimate $s$, which scales $\Sigma_{\epsilon}$, if $\Sigma_{\epsilon}$ is not specified correctly. Analyzing the estimation of $\nu$, we observe large values of $\RMSE(\hat\nu)$ in cases where $\nu = 20$ and $\Sigma_{\epsilon}$ is not equal to the true value. In these cases $\nu$ was estimated too low: specifying $\Sigma_{\epsilon}$ incorrectly means that the variance of some residuals is over- or underestimated, so that a $t$-distribution with fewer degrees of freedom provides a better fit.
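For reference, the RMSE criterion above is straightforward to evaluate; a minimal sketch (Python; the array layout is our convention):

\begin{verbatim}
import numpy as np

def rmse(theta_true, theta_hat):
    # theta_true: shape (p,); theta_hat: shape (r, p),
    # one row per replication. Averaging the squared errors over both
    # components j and replications i matches the formula above.
    return np.sqrt(np.mean((theta_hat - theta_true) ** 2))
\end{verbatim}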
\begin{table}[H] \centering \resizebox{1.1\columnwidth}{!}{% \begin{tabular}{rrrrrrrrrrrrrrr} \cline{3-6} & & \multicolumn{4}{|c|}{RMSE} & & & & & & & & \\ \hline $n$ & model & $\hat{\boldsymbol\beta}$ & $\hat\lambda$ & $\hat s$ & $\hat\nu$ & $ll$ & $\hat{\boldsymbol\beta}$ & $\boldsymbol\beta$ & $\hat\lambda$ & $\lambda$ & $\hat s$ & $s$ & $\hat\nu$ & $\nu$ \\ 250 & 1 & 0.428 & 0.086 & 2.961 & & -722 & 4.49 & 4.5 & 0.31 & 0.4 & 4.38 & 1.41 & & \\ 250 & 2 & 0.425 & 0.040 & 2.965 & 0.41 & -660 & 4.49 & 4.5 & 0.36 & 0.4 & 4.38 & 1.41 & 3.59 & 4 \\ 250 & 3 & 0.106 & 0.062 & 0.390 & & -567 & 4.50 & 4.5 & 0.34 & 0.4 & 1.02 & 1.41 & & \\ 250 & 4 & 0.107 & 0.047 & 0.389 & 0.41 & -521 & 4.50 & 4.5 & 0.35 & 0.4 & 1.03 & 1.41 & 3.59 & 4 \\ 250 & 5 & 0.069 & 0.032 & 0.032 & & -474 & 4.50 & 4.5 & 0.37 & 0.4 & 1.38 & 1.41 & & \\ 250 & 6 & 0.069 & 0.035 & 0.029 & 2.17 & -457 & 4.50 & 4.5 & 0.36 & 0.4 & 1.39 & 1.41 & 4.98 & 4 \\ 250 & 1 & 0.651 & 0.054 & 2.972 & & -728 & 4.51 & 4.5 & 0.75 & 0.8 & 4.39 & 1.41 & & \\ 250 & 2 & 0.652 & 0.035 & 2.970 & 0.41 & -665 & 4.51 & 4.5 & 0.77 & 0.8 & 4.38 & 1.41 & 3.59 & 4 \\ 250 & 3 & 0.165 & 0.049 & 0.474 & & -576 & 4.50 & 4.5 & 0.75 & 0.8 & 0.94 & 1.41 & & \\ 250 & 4 & 0.164 & 0.034 & 0.478 & 0.41 & -529 & 4.50 & 4.5 & 0.77 & 0.8 & 0.94 & 1.41 & 3.59 & 4 \\ 250 & 5 & 0.113 & 0.022 & 0.035 & & -480 & 4.50 & 4.5 & 0.78 & 0.8 & 1.38 & 1.41 & & \\ 250 & 6 & 0.120 & 0.020 & 0.025 & 2.34 & -464 & 4.50 & 4.5 & 0.78 & 0.8 & 1.39 & 1.41 & 5.05 & 4 \\ 250 & 1 & 0.323 & 0.067 & 2.220 & & -652 & 4.49 & 4.5 & 0.33 & 0.4 & 3.27 & 1.05 & & \\ 250 & 2 & 0.321 & 0.037 & 2.223 & 16.41 & -607 & 4.49 & 4.5 & 0.36 & 0.4 & 3.28 & 1.05 & 3.59 & 20 \\ 250 & 3 & 0.073 & 0.048 & 0.101 & & -485 & 4.50 & 4.5 & 0.35 & 0.4 & 0.95 & 1.05 & & \\ 250 & 4 & 0.074 & 0.043 & 0.098 & 16.14 & -465 & 4.50 & 4.5 & 0.36 & 0.4 & 0.96 & 1.05 & 3.87 & 20 \\ 250 & 5 & 0.052 & 0.032 & 0.022 & & -402 & 4.50 & 4.5 & 0.37 & 0.4 & 1.03 & 1.05 & & \\ 250 & 6 & 0.053 & 0.036 & 0.020 & 5.67 & -402 & 4.50 & 4.5 & 0.36 & 0.4 & 1.03 & 1.05 & 16.41 & 20 \\ 250 & 1 & 0.462 & 0.047 & 2.219 & & -657 & 4.50 & 4.5 & 0.75 & 0.8 & 3.27 & 1.05 & & \\ 250 & 2 & 0.460 & 0.031 & 2.218 & 16.41 & -613 & 4.50 & 4.5 & 0.77 & 0.8 & 3.27 & 1.05 & 3.59 & 20 \\ 250 & 3 & 0.111 & 0.045 & 0.170 & & -499 & 4.50 & 4.5 & 0.76 & 0.8 & 0.88 & 1.05 & & \\ 250 & 4 & 0.113 & 0.036 & 0.169 & 16.25 & -476 & 4.50 & 4.5 & 0.76 & 0.8 & 0.88 & 1.05 & 3.76 & 20 \\ 250 & 5 & 0.080 & 0.021 & 0.022 & & -408 & 4.50 & 4.5 & 0.78 & 0.8 & 1.03 & 1.05 & & \\ 250 & 6 & 0.084 & 0.022 & 0.015 & 5.14 & -409 & 4.50 & 4.5 & 0.78 & 0.8 & 1.04 & 1.05 & 16.82 & 20 \\ 1500 & 1 & 0.175 & 0.009 & 3.213 & & -4430 & 4.50 & 4.5 & 0.39 & 0.4 & 4.63 & 1.41 & & \\ 1500 & 2 & 0.175 & 0.005 & 3.214 & 0.41 & -4028 & 4.50 & 4.5 & 0.39 & 0.4 & 4.63 & 1.41 & 3.59 & 4 \\ 1500 & 3 & 0.034 & 0.012 & 0.385 & & -3162 & 4.50 & 4.5 & 0.39 & 0.4 & 1.03 & 1.41 & & \\ 1500 & 4 & 0.034 & 0.009 & 0.386 & 0.39 & -2986 & 4.50 & 4.5 & 0.39 & 0.4 & 1.03 & 1.41 & 3.64 & 4 \\ 1500 & 5 & 0.029 & 0.004 & 0.009 & & -2981 & 4.50 & 4.5 & 0.40 & 0.4 & 1.40 & 1.41 & & \\ 1500 & 6 & 0.029 & 0.008 & 0.008 & 0.51 & -2865 & 4.50 & 4.5 & 0.39 & 0.4 & 1.41 & 1.41 & 4.18 & 4 \\ 1500 & 1 & 0.274 & 0.012 & 3.237 & & -4464 & 4.50 & 4.5 & 0.79 & 0.8 & 4.65 & 1.41 & & \\ 1500 & 2 & 0.274 & 0.009 & 3.239 & 0.41 & -4060 & 4.50 & 4.5 & 0.79 & 0.8 & 4.65 & 1.41 & 3.59 & 4 \\ 1500 & 3 & 0.051 & 0.010 & 0.446 & & -3205 & 4.50 & 4.5 & 0.79 & 0.8 & 0.97 & 1.41 & & \\ 1500 & 4 & 0.052 & 0.008 & 0.445 & 0.40 & -3027 & 4.50 & 4.5 & 
0.79 & 0.8 & 0.97 & 1.41 & 3.63 & 4 \\ 1500 & 5 & 0.043 & 0.005 & 0.001 & & -3021 & 4.50 & 4.5 & 0.80 & 0.8 & 1.42 & 1.41 & & \\ 1500 & 6 & 0.044 & 0.006 & 0.006 & 0.50 & -2902 & 4.50 & 4.5 & 0.79 & 0.8 & 1.42 & 1.41 & 4.15 & 4 \\ 1500 & 1 & 0.134 & 0.012 & 2.407 & & -3997 & 4.50 & 4.5 & 0.39 & 0.4 & 3.46 & 1.05 & & \\ 1500 & 2 & 0.134 & 0.005 & 2.407 & 16.41 & -3722 & 4.50 & 4.5 & 0.39 & 0.4 & 3.46 & 1.05 & 3.59 & 20 \\ 1500 & 3 & 0.024 & 0.015 & 0.081 & & -2694 & 4.50 & 4.5 & 0.38 & 0.4 & 0.97 & 1.05 & & \\ 1500 & 4 & 0.024 & 0.014 & 0.082 & 11.97 & -2669 & 4.50 & 4.5 & 0.39 & 0.4 & 0.97 & 1.05 & 8.18 & 20 \\ 1500 & 5 & 0.022 & 0.005 & 0.003 & & -2547 & 4.50 & 4.5 & 0.40 & 0.4 & 1.05 & 1.05 & & \\ 1500 & 6 & 0.022 & 0.009 & 0.002 & 3.67 & -2545 & 4.50 & 4.5 & 0.39 & 0.4 & 1.05 & 1.05 & 17.56 & 20 \\ 1500 & 1 & 0.205 & 0.005 & 2.406 & & -4028 & 4.51 & 4.5 & 0.79 & 0.8 & 3.46 & 1.05 & & \\ 1500 & 2 & 0.204 & 0.003 & 2.408 & 16.41 & -3753 & 4.51 & 4.5 & 0.80 & 0.8 & 3.46 & 1.05 & 3.59 & 20 \\ 1500 & 3 & 0.036 & 0.009 & 0.136 & & -2737 & 4.50 & 4.5 & 0.79 & 0.8 & 0.92 & 1.05 & & \\ 1500 & 4 & 0.036 & 0.008 & 0.135 & 12.42 & -2713 & 4.50 & 4.5 & 0.79 & 0.8 & 0.92 & 1.05 & 7.69 & 20 \\ 1500 & 5 & 0.032 & 0.003 & 0.003 & & -2578 & 4.50 & 4.5 & 0.80 & 0.8 & 1.05 & 1.05 & & \\ 1500 & 6 & 0.032 & 0.004 & 0.001 & 3.70 & -2580 & 4.50 & 4.5 & 0.80 & 0.8 & 1.05 & 1.05 & 17.58 & 20 \\ \hline \end{tabular}% } \caption{Results of the simulation study. The first two columns specify the number of locations and the model. The remaining columns show the root mean squared errors, the average log-likelihood ($ll$) and, for each parameter, the average of the estimates next to the true value. For $\boldsymbol \beta$ we average over its components.} \label{sumsim} \end{table} Evaluating the overall fit with the log-likelihood and comparing models that only differ in the choice of one parameter, we see that both the choice between SAR and tSAR and the choice of $\Sigma_{\epsilon}$ have an influence. The tSAR model leads to higher likelihood values when $\nu=4$ and to mostly similar values when $\nu =20$. For $\Sigma_{\epsilon}$, the highest likelihood values are obtained when $\Sigma_{\epsilon}$ is the true value, the second highest when $\Sigma_{\epsilon}$ is equal to the local regression variance matrix and the lowest when $\Sigma_{\epsilon} = \Id_n$. \section{Application} \label{application} We fit the two models, SAR and tSAR, to data used to assess the risk of fire danger in the US. \subsection{Data description} \label{datadesc} The data are obtained from the Wildland Fire Assessment System (WFAS) and contain the following variables, observed at 1542 stations on the 23rd of June 2015. \vspace*{0.4cm} \begin{itemize} \item \(Elev\) = Elevation in feet divided by 100 \item \(Lat\) = Latitude \item \(Long\) = Longitude \item \(Tmp\) = Temperature in Fahrenheit \item \(RH\) = Relative humidity in percent \item \(Wind\) = Wind speed (10 min avg wind) in mi/h \item \(PPT\) = 24h precipitation in inches \item \(BI\) = Burning Index calculated according to the National Fire Danger Rating System (cf., National Wildfire Coordinating Group \cite{nfdrs2002selfstudy}); a number related to the contribution of fire behavior to the effort of containing a fire, closely related to the flame length in feet multiplied by 10 \end{itemize} \subsection{Model fitting} We consider the Burning Index \(BI\) as response variable and the other variables as covariates. These covariates can be measured using simple weather station technology.
In contrast to the calculation of the Burning Index according to the National Fire Danger Rating System, our approach requires no expert knowledge. Fitting several SAR and tSAR models, we observed misbehavior in the residuals: the residuals did not follow the desired normal or $t$-distribution. Figure \ref{bres} illustrates this problem for one case where we fit one SAR and one tSAR model with $\nu = 6$ degrees of freedom. We use $BI$ as response and all other variables as covariates. As proximity matrix $W$ we choose a nearest neighbors matrix with $k=30$ neighbors, and for $\Sigma_{\epsilon}$ we use the local regression variance matrix of $Y$ with respect to $W$. \begin{figure}[H] \centerline{% \includegraphics[width=0.5\textwidth]{resnbsar}% \includegraphics[width=0.5\textwidth]{resnbtsar}% }% \caption{qq-plots for a SAR and a tSAR model. We plot the quantiles of the standard normal distribution against the quantiles of the standardized residuals of the SAR model, and the quantiles of the $t$-distribution with mean zero, scale parameter 1 and $6$ degrees of freedom against the quantiles of the standardized residuals of the tSAR model with $\nu = 6$.} \label{bres} \end{figure} To deal with this problem and to further improve our fit, we now consider Box-Cox transformations of the response variable (cf., Box and Cox \cite{box1964analysis}). We show how Box-Cox transformations, which were developed for linear regression models, can be used for SAR and tSAR models. We are given \(\boldsymbol y =(y_1, \ldots, y_n)^T\), an observation of the random vector \(\boldsymbol Y = (Y_1, \ldots, Y_n)^T\). For \(l \in \mathbb{R}\) and \(m \in \mathbb{R}\) such that \(Y_i > -m\) for all \(i = 1, \ldots,n\), the Box-Cox transformed variable \(Y_i^{m,l}\) is given by \begin{equation} Y_i^{m,l} = \left\{\begin{array}{cl} \frac{(Y_i + m)^{l} - 1}{l}, & \mbox{if } l \neq 0 \\ \log(Y_i + m), & \mbox{else} \end{array}\right. . \label{transfo} \end{equation} We consider \(l\) and \(m\) fixed and assume that \(\boldsymbol Y^{m,l}\) is distributed according to a SAR or tSAR model with parameters \(\boldsymbol \theta^{m,l} = (\boldsymbol\beta^{m,l},\sigma^{m,l},\lambda^{m,l})\). We denote its log-likelihood by \(\ell_{S}(\boldsymbol y^{m,l}|\boldsymbol \theta^{m,l})\), where the observation $ y_i^{m,l}$ of $ Y_i^{m,l}$ is obtained by applying the transformation \eqref{transfo} to the observation $y_i$. The density of $\boldsymbol Y$ can be obtained using the density transformation rule. The log-likelihood of \(\boldsymbol \theta^{m,l}\) with respect to the observations \(y_1, \dots, y_n\) is then given by \begin{equation*} \ell(\boldsymbol y|\boldsymbol \theta^{m,l}) =\sum_{i=1}^n (l - 1) \log(y_i + m) + \ell_{S}(\boldsymbol y^{m,l}|\boldsymbol \theta^{m,l}). \end{equation*} The log-likelihood is a sum of two components, where the first component is independent of \(\boldsymbol \theta^{m,l}\) and therefore not needed for the maximization with respect to \(\boldsymbol \theta^{m,l}\). So we only need to maximize the second component, which we know how to do since it is the log-likelihood of a SAR or tSAR model. Knowing the log-likelihood, the corresponding BIC is \begin{equation*} \BIC(\boldsymbol y, \boldsymbol \theta^{m,l}) = -2 \ell(\boldsymbol y|\boldsymbol \theta^{m,l}) + \dim(\boldsymbol \theta^{m,l}) \log(n), \end{equation*} which can be used for selection among different models corresponding to different \(m\) and \(l\) values.
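In code, the transformation and the BIC on the original scale read as follows (a sketch of ours in Python; \(\texttt{loglik\_transformed}\) stands for the maximized log-likelihood \(\ell_S\) of the SAR or tSAR model fitted to the transformed response):

\begin{verbatim}
import numpy as np

def box_cox(y, m, l):
    # Box-Cox transform of y shifted by m; requires y + m > 0.
    if l != 0:
        return ((y + m) ** l - 1.0) / l
    return np.log(y + m)

def bic_original_scale(y, m, l, loglik_transformed, n_params):
    # Log-likelihood on the scale of y: Jacobian term from the density
    # transformation plus the log-likelihood on the transformed scale.
    jacobian_term = (l - 1.0) * np.log(y + m).sum()
    loglik = jacobian_term + loglik_transformed
    return -2.0 * loglik + n_params * np.log(len(y))
\end{verbatim}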
For fitting SAR models we use a stepwise procedure where we adjust the Box-Cox transformation parameter and eliminate a non-significant covariate in each step. The procedure (Algorithm \ref{algo}) for a given variable \(R\), parameter $m$ and proximity matrix \(W\) is shown in the following. The available covariates are denoted by $x_1, \ldots, x_p$. \begin{algorithm}[H] \caption{Stepwise procedure for SAR models} \label{algo} \begin{algorithmic}[1] \STATE \(\mathcal{X} \gets \{x_1, \ldots, x_p\}\) \STATE $ maxp\gets 1 $ \STATE \(c \gets \{\} \) \WHILE{\(maxp > 0.05\)} \STATE \(\mathcal{X} \gets \mathcal{X} \setminus \{c\}\) \FOR{\(l = -2,-1,-1/2,-1/3,0,1/3,1/2,1,2\)} \STATE \(Y \gets \left\{\begin{array}{cl} \frac{(R + m)^{l} - 1}{l}, & \mbox{if } l \neq 0 \\ \log(R + m), & \mbox{else} \end{array}\right. \) \STATE \(mod_{l} \gets\) fitted SAR model with response variable \(Y\), covariates \(\mathcal{X}\), proximity matrix \(W\), and \(\Sigma_{\epsilon}\) equal to the local regression variance matrix of \(Y\) with respect to \(W\). \ENDFOR \STATE \(mod \gets\) model with lowest BIC among \(\{ mod_{l}|l = -2,-1,-1/2,-1/3,0,1/3,1/2,1,2 \}\) \STATE \(maxp \gets\) maximum of the p-values of the tests for significance of the coefficients in model \(mod\) \STATE \(c \gets\) covariate corresponding to \(maxp\) \ENDWHILE \end{algorithmic} \end{algorithm} Algorithm \ref{algo} is applied to the response variable \(BI\) with $m=10$ and different choices of the proximity matrix \(W\). Instead of iterating over different values for $m$, we fix one value, 10, to reduce computational time. For the proximity matrix we use nearest neighbors matrices with \(k=10, 20, 30, 40, 50\) neighbors and radius matrices with radius \(r=350,500\). So we obtain 7 different models corresponding to the different proximity matrices. After fitting SAR models using the procedure just described, we fit tSAR models. We proceed in the following way. For a given proximity matrix \(W\), we take the same covariates and transformation as in the corresponding SAR model just fitted and fit a tSAR model in which we optimize the degrees of freedom parameter $\nu$ numerically. For the matrix \(\Sigma_{\epsilon}\) we use, as before, the local regression variance matrix of the transformed response variable with respect to \(W\). Table \ref{bibic} shows the BIC values of the models. If we consider only nearest neighbors matrices, we see that the BIC of the worst tSAR model is still lower than the BIC of the best SAR model. The best model is a tSAR model where the proximity matrix is a nearest neighbors matrix with $k = 20$ neighbors. Estimates for this model are given in Table \ref{esttsar}. \begin{table}[H] \centering \resizebox{1\columnwidth}{!}{% \begin{tabular}{l|rrrrr|rr} \hline model type & nn10 & nn20 & nn30 & nn40 & nn50 & r350 & r500 \\ \hline SAR & 12624.72 & 12488.34 & 12480.19 & 12499.37 & 12539.93 & 12617.03 & 12737.47 \\ tSAR & 12424.78 & \textbf{12400.30} & 12418.51 & 12438.61 & 12470.69 & 12561.87 & 12684.05 \\ $l$ & 1/3 & 1/3 & 1/3 & 1/3 & 1/3 & 1/3 & 1/3 \\ \end{tabular}% } \caption{BIC for different models for the transformed \(BI\) and the value of the transformation parameter $l$.
``nnx" means that a nearest neighbors matrix with x neighbors was used and ``rx" means that a radius matrix with radius x was used.} \label{bibic} \end{table} \begin{table}[H] \centering \begin{tabular}{rrrr} \hline & estimate & $\hat{\se}$ & estimate$/\hat{\se}$\\ \hline Intercept & 7.72 & 1.14 & 6.76 \\ Elev & 0.01 & 0.00 & 2.85 \\ Lat & -0.07 & 0.03 & -2.44 \\ Long & -0.03 & 0.01 & -2.56 \\ RH & -0.04 & 0.00 & -12.39 \\ Wind & 0.16 & 0.01 & 20.97 \\ PPT & -0.80 & 0.13 & -6.34 \\ $\lambda$ & 0.85 & & \\ $\sigma$ & 0.84 & & \\ $\nu$ & 6.34 & & \\ \hline \end{tabular} \caption{Parameter estimates, estimated standard errors and their quotient for the model with the best BIC of the Box-Cox transformed Burning Index $(m=10, l=\frac{1}{3})$. } \label{esttsar} \end{table} In Figure \ref{tsarresplot} we check whether the residuals of the best tSAR model follow the expected distribution. As the data points do not deviate far from the $x=y$ line, our fitted model seems to be appropriate. For comparison we also show this plot for the SAR model with the lowest BIC. We see that the tSAR model is preferred not only in terms of BIC. \begin{figure}[H] \centerline{% \includegraphics[width=0.5\textwidth]{resbsar}% \includegraphics[width=0.5\textwidth]{resbtsar}% }% \caption{qq-plots for the SAR and tSAR model with the best BIC value. } \label{tsarresplot} \end{figure} \subsection{Out-of-sample prediction} Now we perform out-of-sample predictions. This allows us to predict the Burning Index at locations where only the covariates are available. To do so, we need to relate a random variable at a location which was not part of the sample to $\boldsymbol Y$, the vector of random variables in the sample. For an out-of-sample random variable $Y_o$ at location $l_o$ we assume that \begin{equation*} Y_o = \boldsymbol \beta^T \boldsymbol x_o + \lambda \sum_{j \in N_o} w_{oj} (Y_j - \boldsymbol \beta^T \boldsymbol x_j ) + \epsilon_o , \end{equation*} where $w_{oj}$ relates location $l_o$ to $l_j$ for $j=1,\ldots,n$ such that $\sum_{j=1}^n w_{oj} =1$, to stay consistent with the row-standardized proximity matrix. We choose $w_{oj}$ similarly to how we chose the entries of the proximity matrix. If $W$ is a $k$ nearest neighbors matrix, $w_{oj}$ is the inverse distance between locations $l_o$ and $l_j$ times a standardization constant if location $l_j$ is among the $k$ nearest neighbors of $l_o$, and zero otherwise. $N_o$ is the neighborhood of location $l_o$ defined as in Section \ref{sarmodel}. For the error we assume \(\epsilon_o \sim N(0,\sigma^2 \Sigma_{o})\) in the case of a SAR model or \(\epsilon_o \sim t(0,\sigma^2 \Sigma_{o}, \nu)\) in the case of a tSAR model. As in the SAR and tSAR models, $\Sigma_{o}$ is assumed to be known, and we specify it similarly to how we specified $\Sigma_{\epsilon}$. If $\Sigma_{\epsilon}$ is the local regression variance matrix of $\boldsymbol Y$, the diagonal entries of $\Sigma_{\epsilon}$ were calculated from linear regression residuals $r_1, \ldots, r_n$; $\Sigma_{o}$ is then the empirical variance of $\{r_j|j \in N_o\}$. With this assumption the expectation of $Y_o$ given $\boldsymbol Y$ is given by \begin{equation*} \E(Y_o|\boldsymbol Y) = \E(Y_o|Y_j = y_j, j \in N_o) = \boldsymbol \beta^T \boldsymbol x_o + \lambda \sum_{j \in N_o} w_{oj} (y_j - \boldsymbol \beta^T \boldsymbol x_j ), \end{equation*} where $\boldsymbol \beta$, $\lambda$, $\sigma$ and $\nu$ are the parameters of the SAR or tSAR model for $\boldsymbol Y$.
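A minimal sketch of this conditional mean in Python (our own helper, not tied to a particular package; with estimates plugged in it yields the local prediction defined next):
\begin{verbatim}
import numpy as np

def conditional_mean(x_o, w_o, y, X, beta, lam):
    # E(Y_o | Y) = beta^T x_o + lam * sum_j w_oj (y_j - beta^T x_j);
    # w_o is the row-standardized weight vector of the out-of-sample
    # location, zero outside its neighborhood N_o.
    resid = y - X @ beta   # in-sample residuals y_j - beta^T x_j
    return x_o @ beta + lam * (w_o @ resid)
\end{verbatim}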
So we define the local prediction of $Y_o$, where the neighbors' values are observed, by \begin{equation*} \hat y_{o|N_o} \colonequals \hat{\boldsymbol \beta}^T \boldsymbol x_o + \hat\lambda \sum_{j \in N_o} w_{oj} (y_j - \hat{\boldsymbol \beta}^T \boldsymbol x_j ) , \end{equation*} where $\hat{\boldsymbol \beta}$ and $\hat\lambda$ are the estimates of the model for $\boldsymbol Y$. In addition to the prediction we provide confidence intervals. The $1 - \alpha$ confidence interval is given by \begin{equation*} \CI(1-\alpha) = \begin{cases} \hat y_{o|N_o} \pm \Phi^{-1}(1-\frac{\alpha}{2},0,\hat\sigma^2 \Sigma_{o}) \text{ ~~for SAR} \\ \hat y_{o|N_o} \pm t^{-1}(1-\frac{\alpha}{2},0,\hat\sigma^2 \Sigma_{o},\nu) \text{ ~~for tSAR} \\ \end{cases}, \end{equation*} where $\Phi^{-1}(1-\frac{\alpha}{2},0,\hat\sigma^2 \Sigma_{o})$ and $t^{-1}(1-\frac{\alpha}{2},0,\hat\sigma^2 \Sigma_{o},\nu)$ are the $1-\frac{\alpha}{2}$ quantiles of the $N(0,\hat\sigma^2 \Sigma_{o})$ and the $t(0,\hat\sigma^2 \Sigma_{o},\nu)$ distribution. To perform out-of-sample prediction, we divide our data set into 10 distinct batches. We use 9 batches for fitting the model and apply the same procedure as before. Our fitted model is the one with the lowest BIC. For the remaining batch we perform out-of-sample prediction. Doing this 10 times gives us an out-of-sample prediction for every location. In every case the fitted model was a tSAR model. For comparison we also take the best SAR model in every case and perform out-of-sample prediction with this model. The predictions are shown in Figure \ref{tvsp}, where we see that there is not a big difference between the SAR and the tSAR model. The prediction is influenced by the estimates of $\lambda$ and $\boldsymbol\beta$, for which the SAR and the tSAR model provide similar values. The two models differ in the specification of the error distribution, which influences the confidence intervals. Figure \ref{ci} shows the confidence intervals and Table \ref{citable} the proportion of data points inside the corresponding confidence interval. We see that, for all three confidence levels, this proportion is closer to the theoretical confidence level for the tSAR based confidence intervals. To support this statement we conduct a likelihood ratio test (see Wilks \cite{wilks1938large}) for binomial data. We consider a theoretical confidence level of $1- \alpha$. Then we test the null hypothesis that the number of points lying outside the confidence interval is binomially distributed with success probability $\alpha$ against the alternative that it is binomially distributed with a success probability different from $\alpha$. The results of this test are shown in Table \ref{pval}. We see that higher $p$-values are obtained when the tSAR model is used. For the $99\%$ confidence interval the SAR model leads to a very small $p$-value and the null hypothesis is rejected at the $0.1 \%$ level. This can be explained by the fact that the normal distribution is not a good choice to model heavy-tailed data. \begin{figure}[H] \centerline{% \includegraphics[width=0.5\textwidth]{tvspsar}% \includegraphics[width=0.5\textwidth]{tvsptsar}% }% \caption{True vs predicted Burning Index (BI). A smoothed curve for the predicted Burning Index was added in red.
For better visualization the Burning Index was ordered.} \label{tvsp} \end{figure} \begin{figure}[H] \centerline{% \includegraphics[width=0.5\textwidth]{ci90sar}% \includegraphics[width=0.5\textwidth]{ci90tsar}% }% \caption{Burning Index (BI) and its $90\%$ confidence intervals. For better visualization the Burning Index was ordered.} \label{ci} \end{figure} \begin{table}[H] \centering \begin{tabular}{rrr} \hline & SAR & tSAR \\ \hline 90$\%$ & 91.05$\%$ & 90.21$\%$\\ 95$\%$ & 94.36$\%$ & 94.55$\%$ \\ 99$\%$ & 97.93$\%$ & 98.96$\%$ \\ \hline \end{tabular} \caption{Comparison of different confidence intervals. The first column gives the level of the confidence interval. The other two columns show the proportion of data points inside the confidence interval.} \label{citable} \end{table} \begin{table}[H] \centering \begin{tabular}{rrr} \hline & SAR & tSAR \\ \hline 90$\%$ &0.1622 &0.7853 \\ 95$\%$ &0.2566 &0.4265 \\ 99$\%$ &0.0002 &0.8827 \\ \hline \end{tabular} \caption{Comparison of different confidence intervals. The first column gives the level of the confidence interval. The other two columns show the $p$-value of the likelihood ratio test.} \label{pval} \end{table} \section{Outlook} We proposed the tSAR model, an extension of the SAR model to $t$-distributed errors, which led to notable improvements in the model fit in our application. The tSAR model showed an improvement in the BIC value, its residuals behaved well and it provided more accurate confidence intervals. A natural question which arises is whether we can extend the SAR model to distributions other than the \(t\)-distribution. Looking more closely at how we approached the tSAR model, we can proceed in a similar way for other distributions. We consider the model \begin{equation*} \boldsymbol Y = X \boldsymbol\beta + \lambda W(\boldsymbol Y - X \boldsymbol\beta) + \boldsymbol\epsilon , \end{equation*} where everything except \(\boldsymbol \epsilon\) is defined as in the SAR model (see Definition \ref{sardef}). We make the more general assumption for the error \(\boldsymbol \epsilon\) that it has expectation zero and a diagonal variance matrix \(\sigma^2 \Sigma_{\epsilon}\), and that the \(\epsilon_i/(\sigma\sqrt{(\Sigma_{\epsilon})_{ii}})\) are independent and identically distributed with density \(\phi(\cdot |\boldsymbol\theta)\), where \(\phi(\cdot |\boldsymbol \theta)\) is the density of a distribution with zero mean, unit variance and parameter vector \(\boldsymbol\theta\). So one could allow for errors that follow, for example, a skew-$t$ distribution. Note that \(\boldsymbol\theta\) is empty for location-scale distributions (e.g. the normal distribution). We obtain the density of \(\boldsymbol Y\) as in Section \ref{paresttsar} using the density transformation rule as \begin{equation*} f_Y(\boldsymbol y) = |\det(\Sigma_{\epsilon}^{-\frac{1}{2}} (\Id_n - \lambda W))| \prod_{i=1}^n \phi\left( \left(\Sigma_{\epsilon}^{-\frac{1}{2}}(\Id_n - \lambda W) (\boldsymbol y - X \boldsymbol\beta)\right)_i | 0,1,\boldsymbol\theta\right). \end{equation*} The regression parameters \(\boldsymbol \beta\) could be estimated by the generalized least squares estimator and \(\sigma^2\) as in the SAR model. Then we can form the profile log-likelihood and estimate \(\lambda\) and \(\boldsymbol\theta\) by numerical optimization. Alternatively, one could think about finding estimators of \(\boldsymbol\theta\) depending on \(\lambda\) such that the dimensionality of the profile log-likelihood can be reduced.
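A minimal sketch of the corresponding log-density in Python (a sketch under the assumptions above; \texttt{log\_phi} is a user-supplied log-density with zero mean and unit variance, e.g. of a standardized skew-$t$ distribution, and $\sigma$ is absorbed into \(\Sigma_{\epsilon}\) for brevity):
\begin{verbatim}
import numpy as np

def general_sar_log_density(y, X, W, beta, lam, Sigma_eps, log_phi, theta):
    # log f_Y(y) = log|det(A)| + sum_i log phi(e_i | theta),
    # where A = Sigma_eps^(-1/2) (I - lam W) and e = A (y - X beta).
    n = len(y)
    S = np.diag(1.0 / np.sqrt(np.diag(Sigma_eps)))  # Sigma_eps is diagonal
    A = S @ (np.eye(n) - lam * W)
    e = A @ (y - X @ beta)                          # standardized errors
    _, logabsdet = np.linalg.slogdet(A)             # log|det(A)|
    return logabsdet + np.sum(log_phi(e, theta))
\end{verbatim}
Plugging in the profiled estimators of \(\boldsymbol\beta\) and \(\sigma^2\) then leaves a low-dimensional numerical optimization over \((\lambda,\boldsymbol\theta)\).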
It would be interesting to investigate this in more detail for various distributions. \section*{Acknowledgment} The first author acknowledges financial support by a research stipend of the Technical University of Munich. The second author is supported by the German Research Foundation through the TUM International Graduate School of Science and Engineering (IGSSE). The third and fourth authors are supported by the German Research Foundation (DFG grants CZ 86/5-1 and CZ 86/4-1). Computations were performed on a Linux cluster supported by DFG grant INST 95/919-1 FUGG. \bibliographystyle{plain}
\section{Introduction and Summary} The five-dimensional supersymmetric gauge theories possess interesting dynamics. Even though they are power-counting nonrenormalizable and are IR trivial in general, some 5d gauge theories are known to arise as a relevant deformation of nontrivial UV 5d SCFTs, while some are even known to be UV completed by 6d SCFTs. Furthermore, there is holographic evidence for the existence of RG flow across dimensions from 5d theories to 3d SCFTs. This duality across dimensions or ``5d/3d correspondence'' has recently become quite an interesting playground to study certain classes of 5d theories. The holographic prediction was verified at large $N$ for certain classes of theories, including Seiberg theories \cite{Seiberg:1996bd} and related quiver theories \cite{Bergman:2012kr}, by explicit field theoretical computation of $S^3_b×Σ_{\fg}$ partition function in \cite{Crichigno:2018adf}. Similar results for different 5d manifolds have been obtained in the large $N$ limit \cite{Hosseini:2018uzp,Jain:2021sdp} and in Cardy limit \cite{Hosseini:2021mnn}. All the above-mentioned results inherently study and compute 5d partition functions without dealing directly with the underlying 3d theory (if any). This issue is tackled head-on in \cite{Sacchi:2021afk,Sacchi:2021wvg} with explicit construction of the 3d theories whose global symmetries, operator spectra, etc. are expected to match the corresponding 5d gauge theories compactified on 2d surfaces (specifically, tubes and tori). The 5d theories considered in these articles do not belong to the classes of theories mentioned above whose 5d partition functions are easily evaluated. The difficulty arises because of insufficient cancellation of nonlocal terms in the matrix models that emerge in the large $N$ limit of the partition functions. However, since explicit 3d theories are conjectured to exist, it seems like a worthwhile exercise to compute partition functions for both 5d and 3d theories and check the correspondence. In this short note, we will focus on the first half of the correspondence and compute $S^5_{\vec{ω}}$ and $S^3_{b}×Σ_{\fg}$ partition functions of a few simple 5d theories; those discussed extensively in \cite{Intriligator:1997pq}. We find that the large $N$ scaling of these theories behaves as $N^2$ or $N^{\frac{3}{2}}$, depending on the number of fundamental matter multiplets, unlike the more familiar $N^{\frac{5}{2}}$ scaling of various quiver theories with $AdS_6$ duals. We will comment on the other half of the correspondence involving $S^3_b$ partition function of 3d theories at the end of this note but leave its detailed analysis to \cite{Jain:2022td}. \paragraph{Outline.} This note is organized as follows. In Section \ref{sec:Review} we review the relevant 5d theories, whose $S^5_{\vec{ω}}$ and $S^3_{b}×Σ_{\fg}$ partition functions are then computed in Section \ref{sec:S5pf} and \ref{sec:S3Spf}, respectively. In Section \ref{sec:S3pf}, we then discuss a sample computation of $S^3_b$ partition function of a conjectured 3d theory associated with the compactification of a specific 5d SCFT. \section{Review}\label{sec:Review} We review some aspects of the relevant 5d supersymmetric gauge theories following \cite{Intriligator:1997pq}. 
The 5d exact low-energy effective prepotential for gauge group $G$ and matter hypermultiplets (HM) in representation $R_I$ with masses $m_I$ is given by \equ{\F =\frac{1}{2g^2}\Tr(\s^2) +\frac{k}{6}\Tr(\s^3) +\frac{1}{12}\bigg(∑_{α∈Ad(G)'}|α·\s|^3 -∑_I∑_{ρ∈R_I}|ρ·\s+m_I|^3\bigg), } where $\s$ is the (adjoint) scalar in the vector multiplet (VM), $α$ are the roots of $G$, and $ρ$ are the weights of $G$ in rep $R_I$. The first two terms arise from the 5d supersymmetric Yang-Mills (YM) and Chern-Simons (CS) actions. The last two terms are the one-loop quantum contributions. It was shown in \cite{Intriligator:1997pq} that the existence of a nontrivial UV fixed point for such theories requires the 5d prepotential to be a convex function over the entire Coulomb branch (Weyl chamber). The convexity of the 5d prepotential is guaranteed when its Hessian $\big(\frac{∂^2\F}{∂\s_i∂\s_j}\big)$ has non-negative eigenvalues. This analysis, carried out at the fixed point where $g^{-2}=0$, leads to constraints on the possible matter content for various gauge groups. These constraints can be relaxed slightly as discussed in \cite{Bergman:2014kza,Hayashi:2015fsa,Bergman:2015dpa}. However, instanton contributions play a crucial role in such relaxations, which we will not discuss in this note. We restrict ourselves to the results from \cite{Intriligator:1997pq}, which are summarized below: \begin{description} \item[$\bm{SU(N)}$.] The Coulomb branch is given by $\s=\text{diag}(\s_1,⋯,\s_N)$ with $∑_i\s_i=0$, modulo permutations, which is the Weyl group action. Thus, we can take the Weyl chamber to be $\s_1≥\s_2≥⋯≥\s_N$. Considering theories with $N_s$ symmetric HMs, $N_{as}$ antisymmetric HMs and $N_f$ fundamental HMs, the Hessian is calculated easily. Along the direction where $\s_i=λ$, $i=1,⋯,N-1$, such that $SU(N)$ is broken to $SU(N-1)×U(1)$, the non-negativity of the eigenvalues of the Hessian produces the following constraints: \equ{N_s=0\,,\quad N_{as}=0\,,\quad N_f≤2N-2|k|\,. \label{SUNconst}} This is the only applicable constraint when $N$ is large. The constraints also allow $N_{as}≠0$ when $N≤8$, but we will not be interested in these `finite $N$' theories. The absolute value on $k$ takes care of the charge conjugation operation, which transforms $\s_i→-\s_{N+1-i}$ and $k→-k$. For the rest of the note, we consider $k$ to be positive, unless otherwise stated, without loss of generality. \item[$\bm{USp(2N)}$.] The Coulomb branch is given by $\s=\text{diag}(\s_1,⋯,\s_N,-\s_1,⋯,-\s_N)$, modulo the Weyl group action. Thus, we can take the Weyl chamber to be $\s_1≥\s_2≥⋯≥\s_N≥0$ and consider theories with $N_{as}$ antisymmetric HMs and $N_f$ fundamental HMs. Along the direction where $\s_i=λ$ for $i=1,⋯,p$ and $\s_{i>p}=0$, such that $USp(2N)$ is broken to $USp(2(N-p))×SU(p)×U(1)$, the constraints turn out to be \eqsa{N_{as}=0\,,& \quad N_f≤2N+4\,; \\ N_{as}=1\,, &\quad N_f≤7\,. \label{USp2Nconst}} The latter case has been studied in great detail starting from the work of \cite{Jafferis:2012iv}, so we will focus only on the former case with no antisymmetric HMs. \item[$\bm{SO(M)}$.] We take $M$ to be of the form $2N+δ$ with $δ$ being 0(1) for even(odd) $M$. The Coulomb branch is given by the Weyl chamber $\s_1≥\s_2≥⋯≥\s_N≥0$ and we consider theories with $N_f$ fundamental (vector) HMs and $N_{sp}$ spinorial HMs. Along the direction where $\s_i=λ$ for $i=1,⋯,p$ and $\s_{i>p}=0$, such that $SO(M)$ is broken to $SO(M-2p)×U(p)$, the constraints turn out to be \eqsa{N_f≤M-4\,,\quad N_{sp}≤2^{6-N-δ}\,.
\label{SOMconst}} Since we are interested in the large $N$ limit, it is clear that spinorial HMs are not allowed. \end{description} Having recalled the various possible 5d theories, we now move on to studying their 5d partition functions. \section{\texorpdfstring{$\bm{S^5_{\vec{ω}}}$ Free Energy}{S⁵ω Free Energy}}\label{sec:S5pf} The construction of the large $N$ expression for the free energy on $S^5$ follows from \cite{Jafferis:2012iv,Imamura:2013xna,Alday:2014bta}. Let us begin with the definition of the free energy $F$: \begingroup \allowdisplaybreaks \eqs{Z_{S^5_{\vec{ω}}} &=\frac{1}{|\W|}∫d\s^i e^{-F_{S^5_{\vec{ω}}}(\s)}\,, \nn F_{S^5_{\vec{ω}}}(\s)&≡\frac{4π^2r}{g^2ω_1ω_2ω_3}\tr_F\s^2 +\frac{πk}{3ω_1ω_2ω_3}\tr_F\s^3 +\tr_{Ad}F_V(\s) +∑_I\tr_{R_I}F_H(\s)\,. } \endgroup The localization computation has reduced the full path integral to just integrals over the scalar $\s$ in the Cartan of the gauge algebra. For large $N$, we only need the $F_V,F_H$ functions at large argument ($|\s|≫1$), which read as follows: \eqsa{F_V(\s) &≈\frac{π}{6ω_1ω_2ω_3}|\s|^3 -\frac{(ω_{sum}^2+ω_{sym}^2)π}{12ω_1ω_2ω_3}|\s|\,, \\ F_H(\s) &≈-\frac{π}{6ω_1ω_2ω_3}|\s|^3 -\frac{(ω_{sum}^2-2ω_{sym}^2)π}{24ω_1ω_2ω_3}|\s|\,, } where we use the notation $ω_{sum}=ω_1+ω_2+ω_3$ and $ω_{sym}^2=ω_1ω_2+ω_2ω_3+ω_3ω_1$. With these building blocks, the free energy can be easily written down for the above-mentioned theories. However, evaluating or extremizing the resulting expressions using the continuum approach as discussed in \cite{Jafferis:2012iv,Uhlemann:2019ypp} does not seem to give sensible results. So we simplify the evaluation procedure drastically by equating some of the eigenvalues $\s_i$ and setting the rest to zero (as done in the analysis of \cite{Intriligator:1997pq} to obtain the constraints reviewed above), to obtain a free energy expression depending on only one variable, which can then be easily extremized.\footnote{This simplified approach also gives the correct $N^{\frac{5}{2}}$ scaling for the `usual' $USp(2N)$ theories satisfying the second set of constraints in \eqref{USp2Nconst}, but the coefficients do not match as one might expect. A somewhat different approach has also appeared in \cite{Minahan:2014hwa,Nedelin:2015mta,Santilli:2021qyt} for 5d theories with adjoint and fundamental matter, which exhibit $N^2$ scaling.} We now specialize to the three classes of 5d gauge theories discussed above. \subsection[\texorpdfstring{$SU(N)_k+N_f$}{SU(N)k+Nf}]{$\bm{SU(N)_k+N_f}$} Consider an $SU(N)$ Yang-Mills theory with Chern-Simons level $k$ (assumed to be $\O(1)$, unless otherwise stated) and $N_f$ fundamental hypermultiplets. The VM scalar is parametrized as $\s=\text{diag}\(\s_1,⋯,\s_N\)$ with $\s_N=-∑_{i=1}^{N-1}\s_i$ such that the $S^5$ free energy can be written down as follows (setting $g^{-2}=0$ from now on): \equ{F^{SU}_{S^5_{\vec{ω}}}(\s) =∑_{i=1}^{N}\left[\frac{πk}{3ω_1ω_2ω_3}\s_i^3 +∑_{j=1}^{i-1}2F_V(\s_i-\s_j) +∑_{I=1}^{N_f}F_H(\s_i)\right]. \label{Fsu}} We used that the adjoint rep of $SU(N)$ has roots $±(e^i-e^j)$ ($j<i$) and the fundamental rep has weights simply given by $e^i$, with $\{e^i\}$ being the basis of unit vectors in $\bR^N$.
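Since the same one-variable extremization recurs for every theory below, let us record it once. After the eigenvalue truncation described above, each simplified free energy takes the cubic-minus-linear form \equ{F(λ)=a\,λ^3 -b\,λ\,,\qquad a,b>0\,, } whose extremum lies at $\bar{λ}=\sqrt{b/(3a)}$ with \equ{\bar{F}=F(\bar{λ}) =-\frac{2\,b^{3/2}}{3\sqrt{3a}}\,, } where $a$ and $b$ stand for the $N$-, $k$- and $N_f$-dependent coefficients that can be read off from the corresponding expressions. In particular, $\bar{F}$ is real and negative precisely when $a>0$, which is the origin of the upper bounds on $N_f$ obtained below by demanding that the denominators do not vanish.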
Further restricting to the Weyl chamber $\s_1≥\s_2≥⋯≥\s_N$ and choosing $\s_i=λ$ for $i=1,⋯,N-1$ (as discussed above), we get \eqst{F^{SU}_{S^5_{\vec{ω}}}(\s) =-\frac{πkN(N-1)(N-2)}{3ω_1ω_2ω_3}λ^3 +\(\frac{π(N-1)N^3}{3ω_1ω_2ω_3}λ^3 -\frac{π(ω_{sum}^2+ω_{sym}^2)N(N-1)}{6ω_1ω_2ω_3}λ\) \\ -N_f\(\frac{π(N-1)(N^2-2N+2)}{6ω_1ω_2ω_3}λ^3 +\frac{π(ω_{sum}^2-2ω_{sym}^2)(N-1)}{12ω_1ω_2ω_3}λ\). \label{Fsusimp}} We can now easily extremize the above free energy wrt $λ$ and obtain the extremized free energy to be \equ{\bar{F}^{SU}_{S^5_{\vec{ω}}} =-\frac{π(N-1)[2N(ω_{sum}^2+ω_{sym}^2)+N_f(ω_{sum}^2-2ω_{sym}^2)]^{\frac{3}{2}}}{18ω_1ω_2ω_3\sqrt{3}\sqrt{4N^3-4k N(N-2) -2N_f(N^2-2N+2)}}\,· } Demanding that the denominator does not vanish reproduces the allowed values of $N_f$: \equ{N_f≤2N+4-2k\,. } Comparing this to \eqref{SUNconst}, we see that the above constraint is less strict but matches the one found in \cite{Bergman:2014kza,Hayashi:2015fsa,Bergman:2015dpa}. Continuing to ignore instanton corrections and assuming $N_f$ to be of the form $n N+f$, we see that the extremized free energy takes the following form: \equ{\bar{F}^{SU}_{S^5_{\vec{ω}}} =-\frac{π(N-1)[N\{(2+n)ω_{sum}^2+2(1-n)ω_{sym}^2\}+f(ω_{sum}^2-2ω_{sym}^2)]^{\frac{3}{2}}}{18ω_1ω_2ω_3\sqrt{3}\sqrt{(4-2n)N^3 +2(2n-2k-f)N^2 -4(n-2k-f)N -4f}}\,· } This allows us to discuss the large $N$ limit of $\bar{F}_{S^5}$ for the sequence of theories considered in \cite{Sacchi:2021wvg}, and this limit falls into three cases: \begin{enumerate} \item When $n=2$ with $f=4-2k$: \equ{\bar{F}^{SU}_{S^5_{\vec{ω}}}=-\frac{π(2ω_{sum}^2-ω_{sym}^2)^{\frac{3}{2}}}{18\sqrt{3}\,ω_1ω_2ω_3}N^2\,. } In the range $2N-2k<N_f≤2N+4-2k$, the above result may not be the whole story owing to instanton contributions. Also, the UV completion is supposed to be a 6d SCFT in this range, so $N^3$ scaling is more likely. \item When $n=2$ with $f<4-2k$:\footnote{If $k$ happens to be $\O(N)$, we also get $N^{\frac{3}{2}}$ scaling.} \equ{\bar{F}^{SU}_{S^5_{\vec{ω}}}=-\frac{π(2ω_{sum}^2-ω_{sym}^2)^{\frac{3}{2}}}{9\sqrt{3(4-2k-f)}\,ω_1ω_2ω_3}N^{\frac{3}{2}}\,. } \item When $0≤n<2$ and $f$ is $\O(1)$: \equ{\bar{F}^{SU}_{S^5_{\vec{ω}}}=-\frac{π[(2+n)ω_{sum}^2+2(1-n)ω_{sym}^2]^{\frac{3}{2}}}{18\sqrt{3(4-2n)}\,ω_1ω_2ω_3}N\,. } This result is definitely not the whole story, as instantons are expected to contribute at $\O(N)$. \end{enumerate} \subsection[\texorpdfstring{$USp(2N)+N_f$}{USp(2N)+Nf}]{$\bm{USp(2N)+N_f}$} This is a $USp(2N)$ Yang-Mills theory with no antisymmetric hypermultiplets and only $N_f$ fundamental hypermultiplets. The VM scalar is parametrized as $\s=\text{diag}\(\s_1,⋯,\s_N,-\s_1,⋯,-\s_N\)$ such that the free energy becomes (with $k=0$): \equ{F^{USp}_{S^5_{\vec{ω}}}(\s) =∑_{±,i,j<i}\left[F_V(±(\s_i-\s_j)) +F_V(±(\s_i+\s_j))\right] +∑_{±,i}F_V(±2\s_i) +∑_{±,i}N_f F_H(±\s_i)\,, \label{Fusp}} where we used that the adjoint rep of $USp(2N)$ has roots $±(e^i±e^j)$ ($j<i$) and $±2e^i$, and the fundamental rep has weights $±e^i$. Further restricting to the Weyl chamber $\s_1≥\s_2≥⋯≥\s_N≥0$ and choosing $\s_i=λ$ for $i=1,⋯,p$ with $\s_{i>p}=0$ (as discussed in the previous section), we get \eqst{F^{USp}_{S^5_{\vec{ω}}}(\s) =\(\frac{2πp(N+p-2)}{3ω_1ω_2ω_3}λ^3 -\frac{πp(2N-p-1)(ω_{sum}^2+ω_{sym}^2)}{6ω_1ω_2ω_3}λ\) \\ +\(\frac{π(8-N_f)p}{3ω_1ω_2ω_3}λ^3 -\frac{πp[(4+N_f)ω_{sum}^2+(4-2N_f)ω_{sym}^2]}{12ω_1ω_2ω_3}λ\).
\label{Fuspsimp}} Again, we can extremize the above free energy straightforwardly and obtain the extremized result as follows \equ{\bar{F}^{USp}_{S^5_{\vec{ω}}} =-\frac{πp[(4N+2-2p+N_f)ω_{sum}^2+2(2N+1-p-N_f)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(2N+4+2p-N_f)}\,ω_1ω_2ω_3}\,· } Demanding that the denominator does not vanish constrains the possible values of $N_f$: \equ{N_f≤2N+4+2p\,, } which is consistent with \eqref{USp2Nconst}. We can now discuss the large $N$ limit of $\bar{F}_{S^5}$ for various possible theories, and this limit again falls into three cases: \begin{enumerate} \item When $N_f=n N+f$ with $0≤n≤2$ and $f$ being $\O(1)$ but $p=N-n_p$ with $0≤n_p≪N$: \equ{\bar{F}^{USp}_{S^5_{\vec{ω}}}=-\frac{π[(2+n)ω_{sum}^2+2(1-n)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(4-n)}\,ω_1ω_2ω_3}N^2\,. } A special case is $n=2$ and $f≤4$, which gives \equ{\bar{F}^{USp}_{S^5_{\vec{ω}}}=-\frac{π(2ω_{sum}^2-ω_{sym}^2)^{\frac{3}{2}}}{18\sqrt{3}\,ω_1ω_2ω_3}N^2\,, } similar to case 1 for the $SU(N)_k+N_f$ theories. \item When $N_f=2N+f$ with $f≤4$ but $p$ being $\O(1)$: \equ{\bar{F}^{USp}_{S^5_{\vec{ω}}}=-\frac{πp\,ω_{sum}^3}{3\sqrt{2(4+2p-f)}\,ω_1ω_2ω_3}N^{\frac{3}{2}}\,. } \item When $N_f=n N+f$ with $0≤n<2$ and $f,p$ both being $\O(1)$: \equ{\bar{F}^{USp}_{S^5_{\vec{ω}}}=-\frac{πp[(4+n)ω_{sum}^2+2(2-n)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(2-n)}\,ω_1ω_2ω_3}N\,. } Once again, this result will get corrected by instanton contributions at this order. \end{enumerate} \subsection[\texorpdfstring{$SO(M)+N_f$}{SO(M)+Nv}]{$\bm{SO(M)+N_f}$} This is an $SO(M)$ Yang-Mills theory with matter consisting of only $N_f$ vector hypermultiplets. We also use $M=2N+δ$ with $δ$ being 0(1) for even(odd) $M$. The VM scalar is then parametrized as $\s=\text{diag}\(\s_1,⋯,\s_N\)$ with the Weyl chamber chosen as $\s_1≥\s_2≥⋯≥\s_N≥0$. As discussed in the previous section, we choose $\s_i=λ$ for $i=1,⋯,p$ with $\s_{i>p}=0$. With this setup, the free energy becomes (again $k=0$) \eqs{F^{SO}_{S^5_{\vec{ω}}}(\s) &=∑_{±,i,j<i}\left[\frac{π}{3ω_1ω_2ω_3}(\s_i±\s_j)^3 -\frac{π(ω_{sum}^2+ω_{sym}^2)}{6ω_1ω_2ω_3}(\s_i±\s_j)\right] \nn &\quad+∑_i\(\frac{π(δ-N_f)}{3ω_1ω_2ω_3}\s_i^3 -\frac{π[(2δ+N_f)ω_{sum}^2+2(δ-N_f)ω_{sym}^2]}{12ω_1ω_2ω_3}\s_i\) \nn &=\(\frac{2πp(N+p-2)}{3ω_1ω_2ω_3}λ^3 -\frac{πp(2N-p-1)(ω_{sum}^2+ω_{sym}^2)}{6ω_1ω_2ω_3}λ\) \nn &\quad+\(\frac{πp(δ-N_f)}{3ω_1ω_2ω_3}λ^3 -\frac{πp[(2δ+N_f)ω_{sum}^2+2(δ-N_f)ω_{sym}^2]}{12ω_1ω_2ω_3}λ\), \label{Fspin}} where we used that the adjoint rep of $SO(M)$ has roots $±(e^i±e^j)$ ($j<i$) and $±δe^i$, and the vector rep has weights $±e^i$. Again, we can extremize the above free energy straightforwardly and obtain the extremized result as follows \equ{\bar{F}^{SO}_{S^5_{\vec{ω}}} =-\frac{πp[(2(2N+δ)-2p-2+N_f)ω_{sum}^2 +(2N+δ-p-1-N_f)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(2N+δ+2p-4-N_f)}\,ω_1ω_2ω_3}\,· } Demanding that the denominator does not vanish constrains the possible values of $N_f$: \equ{N_f≤M-4+2p\,, } which is consistent with \eqref{SOMconst}. We again find that the large $N$ limit of $\bar{F}_{S^5}$ falls into three cases (very similar to the $USp(2N)$ case): \begin{enumerate} \item When $N_f=n N-f$ with $0≤n≤2$ and $f$ being $\O(1)$ but $p=N-n_p$ with $0≤n_p≪N$: \equ{\bar{F}^{SO}_{S^5_{\vec{ω}}}=-\frac{π[(2+n)ω_{sum}^2+2(1-n)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(4-n)}\,ω_1ω_2ω_3}N^2\,. } A special case is $n=2$ and $f≥4-δ$, which gives \equ{\bar{F}^{SO}_{S^5_{\vec{ω}}}=-\frac{π(2ω_{sum}^2-ω_{sym}^2)^{\frac{3}{2}}}{18\sqrt{3}\,ω_1ω_2ω_3}N^2\,.
} \item When $N_f=2N+δ-f$ with $f≥4-δ$ but $p$ being $\O(1)$: \equ{\bar{F}^{SO}_{S^5_{\vec{ω}}}=-\frac{πp\,ω_{sum}^3}{3\sqrt{2(2p+f-4)}\,ω_1ω_2ω_3}N^{\frac{3}{2}}\,. } \item When $N_f=n N+f$ with $0≤n<2$ and $f,p$ both being $\O(1)$: \equ{\bar{F}^{SO}_{S^5_{\vec{ω}}}=-\frac{πp[(4+n)ω_{sum}^2+2(2-n)ω_{sym}^2]^{\frac{3}{2}}}{36\sqrt{3(2-n)}\,ω_1ω_2ω_3}N\,. } This result at $\O(N)$ will get corrected by instanton contributions. \end{enumerate} As we clearly see, the $SO(M)$ results mirror the $USp(2N)$ results in almost every detail, so we will not discuss the $SO(M)$ theories in the upcoming sections. \section{\texorpdfstring{$\bm{S^3_b×Σ_{\fg}}$ Free Energy}{S³b×Σg Free Energy}}\label{sec:S3Spf} We collect some relevant results for the $S^3_b×Σ_{\fg}$ partition function from \cite{Crichigno:2018adf}: \eqst{Z_{S^3_b×Σ_{\fg}} =\frac{1}{|\W|}∑_{\fm^i}∮d{\tilde{u}}^ie^{-\frac{4π^2}{g^2}Q^2\fm·{\tilde{u}} +iπkQ^2\Tr(\fm{\tilde{u}}^2)}∏_{α∈Ad(G)'}s_b\(-iQ(α({\tilde{u}})+1)\)^{1-\fg-α(\fm)} \\ ×∏_I∏_{ρ∈R_I}s_b\(-iQ(ρ({\tilde{u}})+{\tilde{\nu}}_I)\)^{ρ(\fm)+\fn_I(\fg-1)}\,, } where ${\tilde{u}}$ is the gauge variable, $\fm(\fn)$ is the gauge(flavour) magnetic flux and ${\tilde{\nu}}$ is the flavour fugacity. For large $N$, we need the asymptotic behaviour of the $s_b(iQz)=e^{\ell_b(z)}$ function: \equ{\ell_b(a+i λ) ≈∓\(\frac{iπ}{2}λ^2 +πaλ -\frac{iπ}{2}a^2 +\frac{iπ}{24}(b^2+b^{-2})\) \quad\text{ for }λ→±∞. \label{eq4p2S2B22}} With this setup, we now specialize to the 5d gauge theories discussed in the previous sections. \subsection[\texorpdfstring{$SU(N)_k+N_f$}{SU(N)k+Nf}]{$\bm{SU(N)_k+N_f}$} For this theory, we write the $S^3_b×Σ_{\fg}$ free energy by taking the log of the partition function given above ($g^{-2}=0$ as before): \eqst{F_{S^3_b×Σ_{\fg}}^{SU} =∑_{i=1}^N\(-iπkQ^2\fm_i{\tilde{u}}_i^2\) -∑_{i=1}^N∑_{j=1}^{i-1}(1-\fg∓\fm_i ±\fm_j)\ell_b\(1±({\tilde{u}}_i-{\tilde{u}}_j)\) \\ +N_f∑_{i=1}^N (\fm_i+(\fg-1)\fn_f)\ell_b\({\tilde{u}}_i+{\tilde{\nu}}_f\). } One could extremize the twisted superpotential first and then evaluate the free energy on those solutions, but for the sake of swiftness we instead extremize the free energy with respect to both the gauge variable ${\tilde{u}}$ and the gauge flux $\fm$, following \cite{Hosseini:2021mnn}. From past experience, we expect the extremum values of ${\tilde{u}}$ to be imaginary, so we substitute ${\tilde{u}}_i→i\s_i$. Furthermore, we also choose the ansatz $\fm_i→iη\s_i$ and restrict to the Weyl chamber $\s_1≥\s_2≥⋯≥\s_N≥0$ along with $\s_i=λ$ for $i=1,⋯,N-1$ (as before). We now have to extremize wrt both $λ$ and $η$, which is a straightforward exercise after using \eqref{eq4p2S2B22}, and we get \equ{\bar{F}_{S^3_b×Σ_{\fg}}^{SU} =(\fg-1)\frac{2πQ(N-1)(N-N_f\fn_f{\tilde{\nu}}_f)\sqrt{2N(4Q^2+1)+N_f(4Q^2-2) -12Q^2N_f{\tilde{\nu}}_f^2}}{\sqrt{3}\sqrt{4N^3-4k N(N-2) -2N_f(N^2-2N+2)}}\,· } Demanding that the denominator does not vanish reproduces the same constraint on $N_f$ as before: $N_f≤2N+4-2k$. So we can discuss the large $N$ limit of $\bar{F}_{S^3×Σ_{\fg}}$ and compare with $\bar{F}_{S^5}$ for the three cases as follows ($N_f=n N+f$): \begin{enumerate} \item When $n=2$ with $f=4-2k$: \equ{\bar{F}_{S^3_b×Σ_{\fg}}^{SU} =(\fg-1)\frac{πQ}{\sqrt{3}}(1-2\fn_f{\tilde{\nu}}_f)\sqrt{8Q^2-1-12Q^2{\tilde{\nu}}_f^2}\,N^2\,. } \item When $n=2$ with $f<4-2k$: \equ{\bar{F}_{S^3_b×Σ_{\fg}}^{SU} =(\fg-1)\frac{2πQ}{\sqrt{3}\sqrt{4-2k-f}}(1-2\fn_f{\tilde{\nu}}_f)\sqrt{8Q^2-1-12Q^2{\tilde{\nu}}_f^2}\,N^{\frac{3}{2}}\,.
} \item When $0≤n<2$ with $f$ being $\O(1)$: \equ{\bar{F}_{S^3_b×Σ_{\fg}}^{SU} =(\fg-1)\frac{2πQ}{\sqrt{3}\sqrt{4-2n}}(1-n\fn_f{\tilde{\nu}}_f)\sqrt{4(2+n)Q^2+2(1-n)-12nQ^2{\tilde{\nu}}_f^2}\,N \,. } As instantons are expected to contribute at $\O(N)$, we may not trust this result at this order. \end{enumerate} To compare with $\bar{F}_{S^5_{\vec{ω}}}^{SU}$ results, one can set $ω_1=i,ω_2=-i,ω_3=2Q$ $⇒ω_{sum}^2=4Q^2,ω_{sym}^2=1$ and identify the common factors. More concretely, one can construct the twisted superpotential $\W_{S^3_b×\bR^2}$ that turns out to be proportional to $\bar{F}_{S^5_{\vec{ω}}}$ and then one can relate the two free energies in the usual manner: $\bar{F}_{S^3_b×Σ_{\fg}} \propto \frac{∂\bar{\W}_{S^3_b×\bR^2}}{∂{\tilde{\nu}}_f}\,·$ We leave this exercise to the reader. \subsection[\texorpdfstring{$USp(2N)+N_f$}{USp(2N)+Nf}]{$\bm{USp(2N)+N_f}$} Repeating the above analysis with previous sections' conventions for $USp(2N)$ theories, we get \eqst{\bar{F}_{S^3_b×Σ_{\fg}}^{USp} =(\fg-1)\frac{πQp(2N+1-p-N_f\fn_f{\tilde{\nu}}_f)}{\sqrt{3}\sqrt{2N+4+2p -N_f}}× \\ \sqrt{4(4N+2-2p+N_f)Q^2 +2(2N+1-p-N_f) -12Q^2N_f{\tilde{\nu}}_f^2}\,. } This allows us to write down the large $N$ limit of $\bar{F}_{S^3×Σ_{\fg}}$ for the three cases as follows ($N_f=n N+f$): \begin{enumerate} \item When $0≤n≤2$ with $f$ being $\O(1)$ but $p=N-f_p$ with $0≤f_p≪N$: \equ{\bar{F}^{USp}_{S^3×Σ_{\fg}}=(\fg-1)\frac{πQ(1-n\fn_f{\tilde{\nu}}_f)}{\sqrt{3}\sqrt{4-n}}\sqrt{4(2+n)Q^2 +2(1-n) -12nQ^2{\tilde{\nu}}_f^2}\,N^2\,. } \item When $n=2$ with $f≤4$ but $p$ being $\O(1)$: \equ{\bar{F}^{USp}_{S^3×Σ_{\fg}}=(\fg-1)\frac{4\sqrt{2}πQ^2p}{\sqrt{2(p+2)-f}}(1-\fn_f{\tilde{\nu}}_f)\sqrt{1-{\tilde{\nu}}_f^2}\,N^{\frac{3}{2}}\,. } \item When $0≤n<2$ with $f,p$ both being $\O(1)$: \equ{\bar{F}^{USp}_{S^3×Σ_{\fg}}=(\fg-1)\frac{πQp(2-n\fn_f{\tilde{\nu}}_f)}{\sqrt{3(2-n)}}\sqrt{4(4+n)Q^2 -2(2-n)}\,N\,. } Once again, the instanton contributions at $\O(N)$ will modify this result. \end{enumerate} The above expressions can be compared to the appropriate $\bar{F}_{S^5_{\vec{ω}}}^{USp}$ expressions by using the same values for $ω_i$'s as in the case of the $SU$ theories. \section{\texorpdfstring{$\bm{S^3_b}$ Free Energy}{S³b Free Energy}}\label{sec:S3pf} Now we consider the 3d theories constructed in \cite{Sacchi:2021wvg} and compute their 3-sphere free energy at large $N$. The relevant ingredients can be found in \cite{Jain:2019lqb} and references therein. We will discuss these in proper detail in a companion paper \cite{Jain:2022td} dealing with 3d partition functions so this section will act as just a warm-up for that. For now, we just recall that the $S^3_b$ partition function can be written as follows: \equ{Z_{S^3_b} = απρ(ιλ) \label{ZS3bgen}} Let us now see how the 3d free energy computations stack up against the computations of previous sections. \subsection[\texorpdfstring{$SU(N)_k+N_f$}{SU(N)k+Nf}]{$\bm{SU(N)_k+N_f}$} The 3d theory conjectured for this case with $k=1$ and $N_f=2N+2$ is given by the quiver diagram shown in Figure \ref{fig1}. \fig{!h}{\includegraphics[scale=0.55]{3dSUNconj1.pdf} \put(-115,129){$2N+1$}\put(-188,68){$N$}\put(-23,68){$N$}\put(-100,8){$1$} \caption{The 3d theory associated with the compactification of 5d SCFT that UV completes the $SU(N)_1+(2N+2)F$ gauge theory on a torus with flux $(1,-1,0,⋯,0)$.} \label{fig1}} The free energy follows from \eqref{ZS3bgen}: \equ{F_{S^3_b} = φooλ\s! 
} \section*{\centering Acknowledgements} More short notes will be forthcoming monthly till the end of this year, regardless of any kind of scooping (hopefully). So, Like, Comment, and Subscribe to my Author Feed! \references{5dRefs} \end{document}
\section{Introduction} \noindent Deep inelastic lepton-hadron scattering plays a rather unique role in studying the structure of hadrons. Through measurements of the structure functions in such experiments, people have learned$^{1}$ that hadrons are made out of point-like constituents --- quarks. Based on the picture of the quark parton model$^{2}$, a set of relations has been derived between the structure functions and the quark distributions in the infinite momentum frame, and a rather simple interpretation of the Bjorken variable in that frame has been obtained. The parton model itself says nothing about the forms of the structure functions, only about their integrals. It is predicted$^{3}$ that a set of sum rules should be valid. Such sum rules link the structure functions to quantities which can be measured in other kinds of experiments, such as those associated with the static properties of the hadrons. Actually, the sum rules are the only places where structure functions and static properties of hadrons meet each other in the parton model, and it is also the sum rules which can be checked experimentally. With increasing accuracy of the measurements, large discrepancies have been observed$^{4,5,6,7,8}$ between data and theory. The two most well-known examples are the integral of the spin-dependent structure function $g_1(x_B)$ and the Gottfried sum. It is found$^{4-8}$ that both of them are much smaller than those expected in the model. The former led$^{4,5,6,7}$ to the conclusion that quark spin contributes only a very small fraction of the nucleon spin and thus triggered the ``spin crisis''. The latter raised$^{9}$ the question of whether isospin invariance is violated in the nucleon sea. In a series of papers published recently, it has been pointed out$^{10}$ that intrinsic motion of the confined quarks plays a very important role in understanding the polarization phenomena in high energy collisions, and that relativistic quark models can be constructed which reproduce the baryon magnetic moments on the one hand, and describe the observed$^{11}$ left-right asymmetries in inclusive meson or hyperon production in high energy processes on the other. It has been shown$^{10}$ in particular that once we accept that quarks are spin-$1/2$ particles moving in a confined spatial region, we are forced also to accept that orbital motion of such valence quarks is always involved, even when they are in their ground states. In other words, in relativistic quark models, intrinsic motion of the valence quarks appears simply as orbital motion. The average orbital angular momenta of the valence quarks are simply nonzero if the nucleon is polarized. In this connection it is also interesting to see that orbital angular momenta of quarks were expected$^{12}$ to contribute to the proton spin from analyses of different data in the framework of the parton model and others. We recall that intrinsic transverse motion was neglected in the formulation of the parton model$^{2}$, and it is now usually thought that transverse motion contributes only to high twist effects which vanish at high $Q^2$. We are therefore led to the following questions: What kind of effects do such orbital motions have on the structure functions of the nucleon? Can we understand the above-mentioned data if we take them into account? Are the above-mentioned effects observed in polarized$^{4,5,6,7,11}$ and unpolarized$^{8}$ experiments connected with each other? These are the questions we would like to discuss in this note.
We ask in this connection also: Should these questions not be made clear before we seek other dynamical origins of the above-mentioned effects observed experimentally? Since orbital motion of the valence quarks is best described in the nucleon's rest frame, we will discuss these questions also in that frame. We consider the inclusive process $l+N\to l\ +$ anything, where $l$ stands for electron or muon, $N$ for nucleon, and denote the 4-momentum of $N$ and those of the incident and the scattered $l$'s by $P=(M,\vec 0),\ k$, $k'$, and the 4-momentum transfer carried by the virtual photon $\gamma^*$ by $q \equiv (\nu, \vec q)=k-k'$, respectively. We study deep inelastic scattering processes in the Bjorken limit, i.e. $Q^2\equiv -q^2$ is very large ($Q^2\gg M^2$) while $x_B \equiv Q^2/(2M\nu )$, the Bjorken-$x$, is kept fixed. For such events, it is observed$^{1}$ that Bjorken scaling is approximately valid, and thus it is expected that the Bjorken variable $x_B$ should play a special role in describing such events. Hence, the questions we immediately encounter are: What does $x_B$ mean in the rest frame of the nucleon? Why is it particularly useful in describing deep inelastic scattering events? What does the existence of the approximate Bjorken scaling tell us about the structure of the nucleon in its rest frame? We recall that in the parton model$^{2}$, one treats the problem in the infinite momentum frame, where the transverse motion of the quarks and the interactions between them during the lepton-nucleon interaction can be neglected. Because of this, one obtains a very simple interpretation of $x_B$. It is simply the fractional (longitudinal) momentum of the struck quark with respect to the nucleon in that frame. How is the situation in the nucleon's rest frame --- what is still applicable and what is no longer valid here? To answer these questions we note the following: (i) In deep inelastic scattering, $Q^2$ is very large. This implies a high spatial resolution, so that the interaction between the lepton and the quark is point-like. The time interval $\Delta t$, in which such a point-like interaction takes place, is proportional to $1/\nu$. In the Bjorken limit, $\Delta t$ is much shorter than the typical time needed for color propagation between the quarks in the nucleon. The interaction between the lepton and the confined quark is already over before the latter can exchange energy-momentum or any other quantum numbers with the neighboring quarks. In other words, {\it impulse approximation$^{13}$ is valid for such scattering processes, also in the nucleon's rest frame}. The confining potentials determine the initial states of the quarks but can be neglected during the lepton-quark interaction. This is to be compared with the parton model in the infinite momentum frame, where not only are the interactions between the quarks neglected but also the initial states of the quarks are taken as plane waves with momenta in the same direction as the nucleon. (ii) At large $Q^2$ and fixed $x_B\equiv Q^2/(2M\nu )$, we have $\Delta \equiv |\vec q \ |-\nu \approx Q^2/(2\nu)$, so that \begin{equation} x_B\equiv Q^2/(2M\nu )\approx \Delta / M. \end{equation} This means that the virtual photon $\gamma^*$ has an energy deficit$^{14}$, and that {\it $x_B$ is nothing else but the energy deficit of the virtual photon in units of the proton mass.} (iii) During the lepton-quark interaction, the virtual photon $\gamma ^*$ is absorbed by the quark inside the nucleon.
The cross sections, and the structure functions derived from them, are proportional to the probability for such an absorption to take place. The absorption of $\gamma^*$ by a confined point-like quark implies that the latter obtains not only an enormously large amount of momentum, but also the corresponding energy and thus the above-mentioned energy deficit. The initial 4-momentum $p\equiv (\varepsilon , p_{\parallel }, \vec p_\perp)$ of the struck quark is suddenly and drastically changed to $p' \equiv (\varepsilon',p'_{\|},\vec p_\perp )=p+q$ (where $_\parallel $ is defined wrt the direction of $\vec q$). As a consequence, it moves kinematically like a free particle such that a current jet (of hadrons) can be produced. The necessary and sufficient condition for this to occur is ${p'}^2=m_q^2$, which leads$^{15}$ to $ \varepsilon -p_{\|} \approx \Delta $ in the Bjorken limit (indeed, ${p'}^2=p^2+2(\varepsilon \nu -p_\| |\vec q\ |)-Q^2$, and with $|\vec q\ |\approx \nu +\Delta$ and $Q^2\approx 2\nu\Delta$ the terms proportional to $\nu$ dominate as $\nu\to\infty$). This implies that the struck quark should have an ``energy excess'' and this ``energy excess'' approximately compensates the energy deficit $\Delta $ of the virtual photon $\gamma ^*$. In terms of $x_B$, this condition is: \begin{equation} \varepsilon-p_{\|} \approx \Delta \approx Mx_B. \end{equation} That is to say: Among all the (point-like and confined) quarks in the target nucleon, the virtual photon may encounter various quarks in different states, but only those which have the right ``energy excess'' at the moment when they get struck contribute to the observed current jet. The virtual photon $\gamma ^*$ can have different momentum $\vec q$ and energy $\nu $, but for the deep inelastic scattering to take place, only {\it its energy deficit $\Delta $, or the quantity $x_B\approx \Delta /M$, is relevant}. This shows why $x_B$ is particularly useful in describing these events. (iv) The cross section for deep inelastic scattering at fixed $x_B$ is therefore determined by the probability for finding quarks with ``energy excess'' $\varepsilon -p_\| \approx \Delta \approx x_BM$ in the nucleon. It is clear that if $Q^2$ is already large enough, i.e. the spatial resolution is high enough so that the point-like constituents can already be resolved, a further increase of $Q^2$, which means a further increase of the spatial resolution, will not reveal anything new. This implies that the probability for finding such quarks should be independent of $Q^2$, and thus Bjorken scaling is valid in this case. Having seen that the qualitative features of deep inelastic scattering can indeed also be understood without introducing the infinite momentum frame, we now study the influence of the orbital motion of the valence quarks on the (spin-dependent as well as spin-averaged) structure functions. We treat this problem in the rest frame of the nucleon, and recall$^{10}$ the following: In this frame, the valence quark $q_v$ can and should be described by a spherical wave which is an eigenstate of four operators: the Hamiltonian $\hat H$, the total angular momentum squared $\hat {{\vec j}^2}$, its third component $\hat j_z$, and the parity $\hat {\cal P}$, with eigenvalues $\varepsilon $, $j(j+1)$, $m$ and $\cal {P}$ respectively. In momentum space, it is given by$^{16}$ \begin{equation} \tilde \psi_{\varepsilon j m{\cal P}} (\vec p\ |q_v) =(-i)^\ell \left( \matrix{ \tilde f_{\varepsilon\ell} (p|q_v) \Omega^{jm}_{\ell } (\theta,\phi)\cr -i\tilde g_{\varepsilon\ell'} (p|q_v)\Omega^{jm}_{\ell'} (\theta,\phi)\cr} \right), \end{equation} where ${\cal P}=(-1)^l$, $j=l\pm 1/2$ and $l'=l\pm 1$.
Here, as well as in the following, $p$, when used as argument in $\tilde f$ or $\tilde g$, stands for $|\vec p \ |$. We see clearly that orbital motion is always involved even if the quark $q_v$ is in its ground state $\tilde \psi_{\varepsilon j m{\cal P}} $ where $\varepsilon =\varepsilon _0, j=1/2, m=\pm 1/2 $ and ${\cal P}=+$ (i.e. $l=0,l'=1)$. Hence, we consider, as a first step, the following illustrative example: We assume that the valence quarks can be treated, just as in the quark parton model, as free, but that they are in the above-mentioned eigenstates of $\hat H$, $\hat {{\vec j}^2}$, $\hat j_z$ and $\hat {\cal P}$. The two radial functions $\tilde f$ and $\tilde g$ are determined by the Dirac equation for a free particle. We calculate the contribution of one such quark, $q_v$, to the structure functions of the nucleon, and compare the results with those obtained in the parton model. We recall that the S-matrix element for the elementary process $e^-q_v\to e^-q_v$ is given by, \begin{eqnarray} S^{e^-q_v}_{fi} & = & -i\int d^4x\int d^4y \left\{ \Bigl [ -e\Psi^{(e)\dagger }_f(x) \gamma _\alpha \Psi ^{(e)}_i(x) \Bigr ] \right. \nonumber\\ & & \hspace*{3cm}\times \left. D_F(x-y) \Bigl [ ee_{qv}\Psi^{(qv)\dagger }_f(y) \gamma ^\alpha \Psi ^{(qv)}_i(y) \Bigr ] \right\} . \end{eqnarray} Here $D_F(x-y)=\int d^4q (-1/q^2) e^{-iq(x-y)}/(2\pi )^4$ is the photon propagator, and the $\Psi $'s are the initial and the final (denoted by the subscripts $i$ and $f$ respectively) state wave functions for the electron and the quark [denoted by the superscripts $(e)$ and $(qv)$ respectively] in coordinate space. They are chosen as follows: The initial and final states for the electron are plane waves with 4-momenta $k$ and $k'$ respectively. The final state for the quark is a plane wave with 4-momentum $p'$, but the initial state is the spherical wave given by Eq.(3). We insert them into Eq.(4) and obtain the contribution of this elementary process to the hadronic tensor $W^{\alpha \beta}(P,S;q)$ (where $S$ stands for the polarization of the nucleon). Its contribution to the structure functions can then be calculated in a straightforward manner. The results obtained in the Bjorken limit for a $q_v$ in its ground state $\tilde \psi_{\varepsilon_0 \ {1\over 2} \ m\ +}$ are given by, \begin{eqnarray} F_{2(qv)}(x_B|m) & \approx & {Mx_Be^2_{qv} \over 2} \int p_\perp dp_\perp \bigl [\tilde f_{00}^2 (p|q_v)+ \tilde g_{01}^2 (p|q_v) +\nonumber\\ & & \hspace*{3cm} 2\cos \theta \tilde f_{00} (p|q_v) \tilde g_{01}(p|q_v)\bigr ]; \end{eqnarray} \begin{eqnarray} g_{1(qv)}(x_B|m) & \approx & m{Me_{qv}^2\over 2} \int p_\perp dp_\perp \bigl [\tilde f_{00}^2 (p|q_v) +(1-2\sin^2\theta ) \tilde g_{01}^2 (p|q_v) +\nonumber\\ & & \hspace*{3cm} 2\cos\theta \tilde f_{00} (p|q_v)\tilde g_{01}(p|q_v)\bigr ];\\ g_{2(qv)}(x_B|m) & \approx & m{Me_{qv}^2\over 2} \int p_\perp dp_\perp \bigl [ (1-3 \cos^2\theta) \tilde g_{01}^2 (p|q_v) - \nonumber\\ & & \hspace*{3cm} 2\cos\theta \tilde f_{00}(p|q_v)\tilde g_{01}(p|q_v)\bigr ], \end{eqnarray} and $F_{1(qv)}(x_B|m) \approx F_{2(qv)}(x_B|m)/(2x_B)$. Here $\tilde f_{00}(p|q_v) \equiv \tilde f_{\varepsilon\ell} (p|q_v)$ for $\varepsilon=\varepsilon_0,\ell=0$ and $\tilde g_{01}(p|q_v)\equiv \tilde g_{\varepsilon\ell'} (p|q_v)$ for $\varepsilon=\varepsilon_0,\ell '=1$; $\cos\theta\equiv p_\parallel/p $, and $p_\parallel \approx \varepsilon_0 - Mx_B$. The integration over $p_\perp $ is carried out over the region given in [15].
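As an aside, expressions such as Eq.(6) are straightforward to evaluate numerically once the radial functions are specified. The following is a minimal sketch (our own illustration in Python; the model radial functions, the value of $\varepsilon_0$ and the $p_\perp$ cutoff standing in for the exact region of [15] are all illustrative assumptions):
\begin{verbatim}
import numpy as np

def g1_qv(xB, m, e_qv, f00, g01, M=0.938, eps0=0.45, pT_max=1.0):
    # Sketch of Eq.(6): contribution of one ground-state valence quark
    # to g_1.  m = +-1/2 is the spin projection; f00, g01 are model
    # radial functions of p = |p_vec|; eps0 and pT_max are illustrative
    # stand-ins (units: GeV).
    p_par = eps0 - M * xB              # Eq.(2): energy-excess condition
    pT = np.linspace(1e-4, pT_max, 2000)
    p = np.sqrt(p_par**2 + pT**2)
    cos_t = p_par / p
    sin2_t = 1.0 - cos_t**2
    integrand = (f00(p)**2 + (1.0 - 2.0*sin2_t) * g01(p)**2
                 + 2.0*cos_t * f00(p) * g01(p))
    return m * M * e_qv**2 / 2.0 * np.trapz(pT * integrand, pT)
\end{verbatim}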
These results are interesting since they show in particular the following: (A) From Eq.(5), we see: $F_{2(qv)}(x_B|m)$ contains not only terms proportional to the quark density $|\tilde \psi (\vec p\ )|^2\propto \tilde f_{00}^2(p)+\tilde g_{01}^2(p)$ but also the ``mixed term'' $\cos \theta \tilde f_{00}(p)\tilde g_{01}(p)$. Hence, the nucleon structure function $F_2$ is {\it not} just proportional to the number densities of quarks in the nucleon. This is essentially different from the situation in the quark parton model. (B) For $g_{1(qv)}(x_B|m)$, the integrand contains, besides terms like $|\tilde \psi (\vec p\ )|^2$ and the ``mixed term'', an additional term $-2\sin^2 \theta \tilde g_{01}^2(p)$ which is negative in sign and proportional to $\tilde g_{01}^2(p)$. This is expected because $\tilde g_{01}^2(p)$ comes from the lower component of $\tilde \psi $, and such a component corresponds to $l'=1$. Its contribution to the spin-dependent structure functions should be different from that of the upper component, which corresponds to $l=0$. It should be emphasized that, in contrast to the usual expectations, {\it neither of these terms vanishes even in the limit $Q^2\to \infty$.} (C) Since $x_B\approx (\varepsilon _0-p_\parallel )/M$, the interval $0\le x_B\le 1$ corresponds to $\varepsilon _0 \ge p_\| \ge (\varepsilon _0-M)$. This is in general {\it not} necessarily the {\it entire} physical region for the momenta of the bound valence quarks. Hence, the integral over this range is {\it not} the sum over all possible states of the bound quarks! In particular, by integrating $F_{2(qv)}(x_B|m)/x_B$ over $x_B$ from zero to unity, we do not get $e_{qv}^2$ but a number which is in general less than it. It tends to $e_{qv}^2$ in the static limit, where we have $|p_\| |\ll M$. This means that sum rules such as those in the parton model are in general {\it not} valid here. The results of such integrals should be, in most cases, less than those expected in the parton model. This implies, e.g., that the Gottfried sum$^{3}$, which is the integral of $[F_{2}^p(x_B)-F_{2}^n(x_B)]/x_B$, should be less than $1/3$. It is $1/3$ only in the static limit$^{17}$. But, in this limit, the integrand, namely the structure functions, will have the form of a Delta-function --- a distribution which contradicts the existing data$^{1,8,18,19,20}$. (D) Not only because of the facts pointed out in (C) but also due to the presence of the term $-2\sin ^2\theta \tilde g_{01}^2(p)$, the integral of $g^{p}_{1(q_v)}(x_B)$, and thus that of $g_1^p(x_B)$, over $x_B$ from zero to one is expected to be much smaller than that expected in the quark parton model. This is consistent with the experimental observations$^{4-8}$. (E) Discussions similar to those given in (C) and (D) show that, strictly speaking, just as the Gottfried sum rule, the Bjorken sum rule$^{3}$ should also be violated. However, if we compare these two sum rules, we see the following difference: While both sides of the Bjorken sum rule depend on the radial wave functions$^{21}$, the rhs of the Gottfried sum rule does not. Hence, in the relativistic case, both sides of the Bjorken sum rule and the lhs of the Gottfried sum rule should be much smaller than their counterparts in the static limit, while the rhs of the Gottfried sum rule remains the same. Since these sum rules are valid in the static limit, this implies a strong violation of the Gottfried sum rule, but only a weak violation of the Bjorken sum rule in relativistic models.
The latter can even be approximately valid for some particular choices of $\tilde{f}_{00}$ and $\tilde{g}_{01}$. This, too, is consistent with the data$^{5,6}$. Encouraged by these agreements, we consider a valence quark in the mean field caused by the other constituents of the nucleon. We take the mean field as central and describe the valence quark by the spherical wave given by Eq.(3) in this central field. The calculations of the contributions of these valence quarks to the structure of the nucleon can be carried out in exactly the same way as above. For one quark, the results have exactly the same form as those given in Eqs.(5)---(7). The only difference is that now the radial functions $\tilde f$ and $\tilde g$ are solutions of the Dirac equation with the given potentials. To see how the quantitative results from the simplest conventional potentials compare to data, we considered a simple spherical potential well, i.e. $U_S(r)=0, U_V=-0.3M$ for $0\le r\le R$ but $U_S(r)=\infty $ for $r>R$, and obtained the contribution of a valence quark to the structure functions from Eqs. (5)\,--\,(7). The contributions of all the valence quarks are then obtained by summing over all of them, i.e. \begin{equation} F_2(x_B) =\sum_{q_v,m} \rho_0(m;q_v|\rightarrow ) F_{2(qv)}(x_B|m), \end{equation} \begin{equation} g_{1,2}(x_B)=\sum_{q_v,m} \rho_0(m;q_v|\rightarrow ) g_{1,2(qv)}(x_B|m). \end{equation} Here, $\rho_0(m;q_v|\rightarrow )$ is the average number of valence quarks in the state $\tilde \psi_{\varepsilon_0 \ {1\over 2} \ m\ +}$ in the nucleon polarized in the $z$-direction; it is determined$^{10}$ by the nucleon wave function. We first calculated $F_{2}^p(x_B)-F_{2}^n(x_B)$, which is of particular interest not only because it is nothing else but the integrand of the Gottfried sum, but also because it contains only valence quark contributions, provided that isospin invariance is not violated in the quark-antiquark sea. The result is shown in Fig.1. \begin{figure}[htbp] \vspace*{13pt} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{-0.8cm} \begin{center} \psfig{figure=fig1.ps,width=10cm} \end{center} \vspace*{-1.8cm} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{13pt} \fcaption{ The difference $F_2^p(x_B)-F_2^n(x_B)$ as a function of $x_B$. The curve is the result of Eqs.(8) and (5) obtained by using the $f_{00}(p|q_v)$ and $g_{01}(p|q_v)$ from the potential well mentioned in the text with $R=1.23$\,fm for both $u$ and $d$. Data are taken from [8,18,19] and [20]. (Only statistical errors are shown.)} \end{figure} The same solutions have also been used to calculate $g_1^p(x_B)$ and $g_2^p(x_B)$. The results are shown in Figs.2 and 3 respectively. \begin{figure}[htbp] \vspace*{13pt} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{-0.8cm} \begin{center} \psfig{figure=fig2.ps,width=10cm} \end{center} \vspace*{-1.8cm} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{13pt} \fcaption{ The spin-dependent structure function $x_B g_1^p(x_B)$ as a function of $x_B$. The curve is the result of Eqs.(9) and (6) by using the same sets of $f_{00}(p|q_v)$ and $g_{01}(p|q_v)$ as those in Fig.1. The data are taken from Refs.[4,6,7,22] and [23].
(Only statistical errors are shown.)} \end{figure} \begin{figure}[htbp] \vspace*{13pt} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{-0.8cm} \begin{center} \psfig{figure=fig3.ps,width=10cm} \end{center} \vspace*{-1.8cm} \centerline{\vbox{\hrule width 5cm height0.0pt}} \vspace*{13pt} \fcaption{The spin-dependent structure function $x_B g_2^p(x_B)$ as a function of $x_B$. The curves are the results obtained from Eqs.(9) and (7) by using the same sets of $f_{00}(p|q_v)$ and $g_{01}(p|q_v)$ as those used in Figs.1 and 2. The data are taken from [7]. (Only statistical errors are shown.)} \end{figure} It should be emphasized in this connection that our purpose here is to investigate the influence of the intrinsic orbital motion of the valence quarks on the structure functions of the nucleon. No attempt has yet been made to obtain a better fit to the data by making a more suitable choice of parameters for the confining potentials, although such a procedure is clearly possible. (We therefore also give no quantitative predictions for the integrals of the structure functions, since such quantitative results depend very much on the explicit forms of the confining potentials.) No difference between the effective potentials for $u$- and $d$-valence quarks in the nucleon has been taken into account yet. We therefore get $g_1^n(x_B)=g_2^n(x_B)=0$. This indicates that the magnitudes of these two structure functions should be much smaller than those of their counterparts for the proton (i.e. $g_1^p$ and $g_2^p$), which is consistent with the recent experimental findings. Non-zero values of $g_1^n$ and $g_2^n$ may, for example, originate from the existence$^{24}$ of differences between the wave functions of $u$- and $d$-quarks and/or from other effects, which are not discussed here. In summary, together with illustrative examples, we have explicitly demonstrated that the intrinsic orbital motion of the valence quarks can have a profound influence on the structure functions of the nucleon. The obtained results show that the violation of the sum rules derived in the parton model is in fact not surprising, and that the conventional interpretation of the nucleon structure functions may not be the most useful one. This is particularly obvious in connection with the spin structure of the nucleon. \nonumsection{Acknowledgements} \noindent We thank C. Boros and Meng Ta-chung for helpful discussions. This work is supported in part by Deutsche Forschungsgemeinschaft (DFG:Me 470/7-1). \newpage \nonumsection{References} \vspace*{-0.8cm} \noindent
\section{Introduction} The standard cosmology has been going through a challenging phase due to a series of observational results \cite{Riess:1998cb,Perlmutter:1998np,Spergel:2003cb,Tegmark:2003ud,Eisenstein:2005su} over the last two decades. The present accelerated era of expansion, inferred from the observational data, is in tension with the theoretical predictions of standard cosmology (i.e. the $\Lambda$CDM model); for details one may refer to recent works, for example the Snowmass reviews \cite{Cabass:2022avo}. So one group of cosmologists is trying to accommodate this observational fact by considering some exotic matter (known as dark energy) in the framework of Einstein gravity. There are various choices for the dark energy models, namely the phantom scalar field \cite{Caldwell:1999ew}, quintessence scalar field \cite{Carroll:1998zi,Brax:1999gp}, K-essence scalar field \cite{Armendariz-Picon:1999hyi,Chiba:1999ka,Armendariz-Picon:2000ulo}, quintom scalar field \cite{Elizalde:2004mq,Feng:2004ad,Cai:2009zp}, tachyon field \cite{Padmanabhan:2002cp,Padmanabhan:2002sh}, modified Chaplygin gas \cite{Kamenshchik:2001cp,Bilic:2001cg,Bento:2002ps,Debnath:2004cd} and so on. Alternatively, another group of cosmologists is trying to modify the gravity theory itself so that the mismatch between theory and observation can be overcome. The present work takes this second viewpoint. One natural way to address the tension in the present expansion rate of the Universe is to modify the gravity theory; another possibility is to modify the standard model of particle physics through dark matter and other approaches. In the context of Einstein gravity, a natural generalization of the Einstein-Hilbert action is to replace $R$ by an arbitrary function of $R$. This modified gravity theory is the well known $f(R)$ gravity theory. This modified gravity not only explains the late time cosmic acceleration \cite{Carroll:2003wy} but also satisfies local gravitational tests \cite{Nojiri:2007as,Cognola:2007zu,Elizalde:2010ts,Nojiri:2010wj,Sotiriou:2008rp}. In the recent past this modified gravity theory has been extended by Harko \textit{et al.} \cite{Harko:2011kv} by choosing the Lagrangian density as an arbitrary function $f(R,T)$, where $T$ is the trace of the energy-momentum tensor. The justification for introducing the matter part into the gravity Lagrangian comes from quantum effects (the conformal anomaly). Further, due to the coupling between matter and geometry, the gravity model depends explicitly on the source term, and consequently test particles do not follow geodesics (there is an extra force term perpendicular to the four-velocity). In view of the highly complicated form of the field equations, a simple but unorthodox choice of $f(R,T)$, namely $f(R,T)=R+h(T)$, was made in \cite{Chakraborty:2012kj}, keeping in mind that the path of a test particle should remain a geodesic. Further, this particular choice of $f(R,T)$ is not possible for electromagnetic fields, while if the matter is a perfect fluid with constant equation of state, then $h(T)$ turns out to be a power law in $T$ whose exponent depends on the equation of state parameter. In the present work, our motivation is to justify the above simple but unusual choice of $f(R,T)$ from the observational point of view. We shall also derive some cosmological consequences of the present model.
The plan of the paper is as follows: Section \ref{s2} gives an overview of $f(R,T)$ gravity theory, particularly in the context of the present choice. Section \ref{s3} presents a detailed analysis of the cosmological solution from both the theoretical and the observational point of view. Section \ref{s4} examines in detail the nature of the scalar field in the field theoretic description of the model. Section \ref{s5} shows an equivalence between the perfect fluid in $f(R,T)$ gravity theory and the standard modified Chaplygin gas model. The numerical analysis with observational constraints is presented in Section \ref{s6}. Finally, a brief summary of the whole work is given in Section \ref{s7}. \section{$f(R,T)$ gravity theory: A brief description}\label{s2} In this gravity theory, the complete action can be written as \cite{Harko:2011kv} \begin{equation}\label{eq1} \mathcal{A}=\int\left[\frac{1}{16\pi}f(R,T)+\mathcal{L}_m\right]\sqrt{-g}d^4x \end{equation} where $T=T_{\mu\nu}g^{\mu\nu}$ is the trace of the energy-momentum tensor $T_{\mu\nu}$, obtained from the matter Lagrangian density as \cite{Landau:1982dva} \begin{equation} T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_m)}{\delta g^{\mu\nu}}. \end{equation} Further, if $\mathcal{L}_m$ depends only on $g_{\mu\nu}$ but not on its derivatives, then the above form for $T_{\mu\nu}$ simplifies to \begin{equation} T_{\mu\nu}=g_{\mu\nu}\mathcal{L}_m-2\frac{\partial\mathcal{L}_m}{\partial g^{\mu\nu}}. \end{equation} Now the variation of the action (\ref{eq1}) gives the field equations as \cite{Harko:2011kv} \begin{equation}\label{eq6a} f_RR_{\mu\nu}+\left(g_{\mu\nu}\square-\nabla_\mu\nabla_\nu\right)f_R-\frac{1}{2}g_{\mu\nu}f(R,T)=8\pi T_{\mu\nu}-\left(T_{\mu\nu}+\Theta_{\mu\nu}\right)f_T \end{equation} with \begin{eqnarray} \Theta_{\mu\nu}=g^{\alpha\beta}\frac{\delta T_{\alpha\beta}}{\delta g^{\mu\nu}}=-2T_{\mu\nu}+g_{\mu\nu}\mathcal{L}_m-2g^{\alpha\beta}\frac{\partial^2\mathcal{L}_m}{\partial g^{\mu\nu}\partial g^{\alpha\beta}},\nonumber\\ f_R=\dfrac{\partial f(R,T)}{\partial R},\ \ \ \ \ \ \ f_T=\dfrac{\partial f(R,T)}{\partial T}.\ \ \ \ \ \ \ \ \nonumber \end{eqnarray} One should note that the field equations for $f(R)$ gravity can be recovered from equation (\ref{eq6a}) if $f(R,T)$ is replaced by $f(R)$. Further, one may recover GR if $f(R,T)=R$, while the $\Lambda$CDM model is recovered if $f(R,T)=R+2\Lambda$ ($\Lambda$ a cosmological constant) with matter in the form of dust, i.e. $\mathcal{L}_m=\rho$. In the present work, a particular choice, namely $f(R,T)=R+h(T)$, is considered, so that the field equation (\ref{eq6a}) simplifies to \cite{Chakraborty:2012kj} \begin{equation}\label{eq8a} G_{\mu\nu}=8\pi T_{\mu\nu}-h'(T)(T_{\mu\nu}+\Theta_{\mu\nu})+\frac{1}{2}h(T)g_{\mu\nu}. \end{equation} If the divergence of equation (\ref{eq8a}) is taken together with conservation of the energy-momentum tensor (i.e. $\nabla_\mu T^\mu_\nu=0$), then one obtains \begin{equation}\label{eq9a} (T_{\mu\nu}+\Theta_{\mu\nu})\nabla^\mu h'(T)+h'(T)\nabla^\mu\Theta_{\mu\nu}+\frac{1}{2}g_{\mu\nu}\nabla^\mu h(T)=0 \end{equation} Thus it is easy to see that $h(T)$ is not arbitrary; rather, it depends on the choice of the matter field. It is to be mentioned that for this choice of $f(R,T)$ it is not possible to consider the electromagnetic field as the matter field.
In the context of a perfect fluid (which we shall consider in the following sections), the energy-momentum tensor has the following form \begin{equation} T_{\mu\nu}=(\rho+p)u_\mu u_\nu-pg_{\mu\nu} \end{equation} with matter Lagrangian $\mathcal{L}_m=-p$. Here $\rho$ and $p$ are the energy density and thermodynamic pressure of the perfect fluid, with the restrictions on the four-velocity $u^\mu$ as $$u_\mu u^\mu=1~;~~u^\mu\nabla_\nu u_\mu=0.$$ The symmetric (0,2) tensor $\Theta_{\mu\nu}$ simplifies to \begin{equation} \Theta_{\mu\nu}=-2T_{\mu\nu}-pg_{\mu\nu}. \end{equation} Using this form of $\Theta_{\mu\nu}$ in equation (\ref{eq9a}) one obtains \begin{equation} (T_{\mu\nu}+pg_{\mu\nu})\nabla^\mu h'(T)+h'(T)g_{\mu\nu}\nabla^\mu p+\frac{1}{2}g_{\mu\nu}\nabla^\mu h(T)=0. \end{equation} Moreover, if the perfect fluid is assumed to have a barotropic equation of state $p=\omega\rho$, $\omega$ a constant, then in the cosmological context of flat FLRW space-time $h(T)$ has the following simple power-law form \cite{Chakraborty:2012kj} \begin{equation}\label{eq13} h(T)=h_0T^\alpha \end{equation} with $\alpha=\dfrac{1+3\omega}{2(1+\omega)}$, where $h_0$ is an integration constant and $\omega\neq-1,\pm\dfrac{1}{3}$. \section{Cosmology in $f(R,T)$ gravity theory}\label{s3} In the background of a homogeneous and isotropic space-time geometry, and with equation (\ref{eq13}) as the choice for $h(T)$, the modified field equations in $f(R,T)$ gravity theory can be written as \begin{eqnarray} 3H^2=\rho+h_0(1-3\omega)^{\alpha-1}\rho^\alpha\label{eq14}\\ \mbox{and~~~}2\dot{H}+3H^2=-p+\dfrac{1}{2}h_0(1-3\omega)^\alpha\rho^\alpha\label{eq15} \end{eqnarray} with the matter conservation equation \begin{equation} \dot{\rho}+3H(p+\rho)=0.\label{eq16a} \end{equation} Due to the constant equation of state (i.e. $p=\omega\rho$, $\omega$ a constant), equation (\ref{eq16a}) can be integrated to give \begin{equation} \rho=\rho_0(1+z)^{3(1+\omega)}~,~~~~\rho_0,~\mbox{a constant of integration}\label{eq17a} \end{equation} where the redshift parameter $z$ is defined by $\dfrac{a_0}{a}=(1+z)$. Using this solution for $\rho$ in the first modified Friedmann equation (\ref{eq14}) and solving for the redshift parameter one gets \begin{equation} t=t_0+\frac{2}{\sqrt{3\rho_0}}\dfrac{(1+z)^{-\dfrac{3(1+\omega)}{2}}}{(1+\omega)}{}~_2F_1\left(\dfrac{1}{2},\beta-1,\beta,x\right) \end{equation} where $\beta=\dfrac{2}{1-\omega},~x=-h_0\left[\rho_0(1-3\omega)\right]^{\dfrac{\omega-1}{2(1+\omega)}} (1+z)^{\dfrac{3(\omega-1)}{2}}$. Here $t_0$ is a constant of integration and ${}_2F_1$ is the usual Gauss hypergeometric function. Also the Hubble parameter and the measure of acceleration can be expressed in terms of the redshift parameter as \begin{eqnarray} H^2&=&\dfrac{\rho_0}{3}(1+z)^{3(1+\omega)}(1-x)\nonumber\\ &=&\dfrac{\rho_0}{3}\left[(1+z)^{3(1+\omega)}+h_0(1-3\omega)^{\dfrac{\omega-1}{2(1+\omega)}}\rho_0^{\dfrac{\omega-1}{2(1+\omega)}}(1+z)^{\dfrac{3(1-\omega)}{2}}\right] \end{eqnarray} and \begin{equation} \dfrac{\ddot{a}}{a}=-\dfrac{\rho_0}{6}(1+3\omega)(1+z)^{3(1+\omega)}+\dfrac{h_0}{12}(1-9\omega)(1-3\omega)^{\dfrac{\omega-1}{2(1+\omega)}}\rho_0^{\dfrac{1+3\omega}{2(1+\omega)}}(1+z)^{\dfrac{3(1+3\omega)}{2}}. \end{equation} Note that for a viable cosmological solution one must have $\omega<\dfrac{1}{3}$. Also, if $h_0<0$, one can see that the Hubble parameter vanishes at a certain value of the scale factor and the solution does not exist beyond it. Hence the Universe is bounded.
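As a symbolic cross-check of the exponent in equation (\ref{eq13}) (an illustration we add here, not part of the original derivation), one can verify with sympy that the $t$-component of the conservation constraint displayed above equation (\ref{eq13}) vanishes identically for $h(T)=h_0T^{\alpha}$ with $\alpha=\frac{1+3\omega}{2(1+\omega)}$, using $T=(1-3\omega)\rho$, $p=\omega\rho$ and the conservation equation (\ref{eq16a}):
\begin{verbatim}
import sympy as sp

w, h0, H = sp.symbols('omega h_0 H', real=True)
rho = sp.symbols('rho', positive=True)
alpha = (1 + 3*w)/(2*(1 + w))            # the claimed exponent

p = w*rho                                # barotropic equation of state
T = (1 - 3*w)*rho                        # trace for a perfect fluid
h = h0*T**alpha                          # the ansatz (13)
hT = sp.diff(h, rho)/sp.diff(T, rho)     # h'(T) via the chain rule

rho_dot = -3*H*(rho + p)                 # conservation equation (16a)
dt = lambda F: sp.diff(F, rho)*rho_dot   # d/dt of any F(rho) along the flow

# t-component of the constraint above Eq. (13) for a perfect fluid:
constraint = (rho + p)*dt(hT) + hT*dt(p) + sp.Rational(1, 2)*dt(h)
print(sp.simplify(sp.expand(constraint/T**alpha)))  # -> 0, for any omega
\end{verbatim}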
So to obtain an unbounded cosmological solution one must consider $h_0>0$, and then $\ddot{a}\gtrless0$ for different choices of $h_0$, $\rho_0$ and $\omega$. For acceleration or deceleration of the model one must have the following conditions (for an unbounded universe, i.e. $h_0>0$): \begin{align} \dfrac{1}{9}\leq\omega<\dfrac{1}{3} ~~:&~~\mbox{decelerating model~} (\ddot{a}<0)\nonumber \\ -\dfrac{1}{3}<\omega<\dfrac{1}{9}~~:&~~ \mbox{no definite conclusion, depends on~} \dfrac{\rho_0}{h_0}\nonumber\\ \omega\leq-\dfrac{1}{3}~~:&~~ \mbox{accelerating model~} (\ddot{a}>0)\nonumber \end{align} Further, the modified Friedmann equations (\ref{eq14}) and (\ref{eq15}) in $f(R,T)$ gravity theory are equivalent to the evolution equations in Einstein gravity for a non-interacting two-fluid system, both components being perfect fluids. The first one is the usual matter field with constant equation of state $\omega$, while the second, effective perfect fluid has energy density and thermodynamic pressure $\rho_e=h_0(1-3\omega)^{\alpha-1}\rho^\alpha$ and $p_e=-\dfrac{1}{2}h_0(1-3\omega)^\alpha\rho^\alpha$. So the effective perfect fluid has the barotropic equation of state $\omega_e=-\dfrac{1}{2}(1-3\omega)$. Further, the modified field equations can be written as the Einstein field equations with a single perfect fluid as \begin{equation} 3H^2=\rho_T,\mbox{~and~}2\dot{H}=-\rho_T(1+\omega_T) \end{equation} where the variable equation of state parameter is given by \begin{equation} \omega_T=\frac{\omega-\dfrac{1}{2}h_0(1-3\omega)^{\frac{3\omega+1}{2(\omega+1)}}\rho_0^{\frac{\omega-1}{2(\omega+1)}}(1+z)^{\frac{3}{2}(\omega-1)}}{1+h_0(1-3\omega)^{\frac{\omega-1}{2 (\omega+1)}}\rho_0^{\frac{\omega-1}{2(\omega+1)}}(1+z)^{\frac{3}{2}(\omega-1)}}. \end{equation} The variation of $\omega_T$ with $\omega$ and the redshift parameter $z$ is shown in FIG. \ref{f4}. \begin{figure}[h] \begin{minipage}{0.38\textwidth} \includegraphics[height=.25\textheight]{wCn.eps} \end{minipage} \begin{minipage}{0.05\textwidth} \includegraphics[height=.2\textheight]{wC1n.eps} \end{minipage} \caption{Variation of $\omega_T$ with respect to $\omega$ and $z$ with $\rho_0=2$ and $h_0=0.9$. } \label{f4} \end{figure} The present $f(R,T)$ gravity model, or equivalently the two-fluid Einstein gravity model, cannot describe a warm inflationary scenario for the following two reasons: (i) there is no interaction between the two fluids, and as a result a dissipative or friction term is absent in the matter evolution equations; (ii) neither the matter fluid nor the effective fluid can have a radiation equation of state. Moreover, the present cosmological model admits an equilibrium thermodynamical prescription due to the non-existence of any dissipative pressure in either fluid. Further, in the present effective Einstein gravity with two fluids, the effective fluid will always be exotic in nature (i.e. dark energy) provided $\omega<\dfrac{1}{9}$, while it will be a normal fluid for $\dfrac{1}{9}<\omega<\dfrac{1}{3}$.\begin{figure}[h] \includegraphics[scale=0.7]{A00n.eps} \caption{ $\ddot{a}=0$ is plotted in the $\omega-z$ plane for different choices of $\dfrac{\rho_0}{h_0}$.} \label{f1} \end{figure} Also, from FIG. \ref{f1} one may conclude that the present $f(R,T)$ model can describe the evolution of the universe from the decelerating phase to the present accelerating era for suitable choices of $\omega$ and $\dfrac{\rho_0}{h_0}$. Lastly, the cosmological parameters, namely the scale factor, the Hubble parameter and the acceleration, are shown graphically in FIG.
\ref{f2} for various choices of $\omega$. \begin{figure}[h] \begin{minipage}{0.3\textwidth} \includegraphics[height=.15\textheight]{an.eps} \end{minipage} \begin{minipage}{0.3\textwidth} \includegraphics[height=.15\textheight]{Hn.eps} \end{minipage} \begin{minipage}{.3\textwidth} \includegraphics[height=.15\textheight]{accn.eps} \end{minipage} \caption{Scale factor, Hubble parameter and acceleration of the Universe, plotted for three different choices of $\omega$: $\omega=.25$ (dashed), $\omega=-.1$ (dotdashed) and $\omega=-.5$ (dotted), with $\rho_0=2$ and $h_0=0.9$. } \label{f2} \end{figure} \section{Field theoretic description}\label{s4} To describe the present $f(R,T)$ cosmological model from a field theoretic point of view, a scalar field $\phi$ having self-interacting potential $V(\phi)$ is introduced to describe the effective fluid. So the energy density $\rho_\phi$ and pressure $p_\phi$ of the scalar field are given by \begin{eqnarray} \rho_\phi&=&\frac{\dot{\phi}^2}{2}+V(\phi)=h_0(1-3\omega)^{\alpha-1}\rho^\alpha\\ p_\phi&=&\frac{\dot{\phi}^2}{2}-V(\phi)=-\dfrac{1}{2}h_0(1-3\omega)^\alpha\rho^\alpha \end{eqnarray} i.e. \begin{equation} \dot{\phi}^2=\dfrac{1}{2}h_0(1-3\omega)^{\alpha-1}(1+3\omega)\rho^\alpha~~;~~~ V(\phi)=\dfrac{3}{4}h_0(1-3\omega)^{\alpha-1}(1-\omega)\rho^\alpha\label{eq17}. \end{equation} Using the solution for $\rho$ from equation (\ref{eq17a}) in equation (\ref{eq17}) and integrating, one finds the explicit solution \begin{equation} \phi=\phi_0\sinh^{-1}\left(\mu a^r\right) \end{equation} and the potential function takes the form \begin{equation} V(\phi)=V_0a^y \end{equation} or, in terms of the scalar field, \begin{equation} V(\phi)=V_1\left[\sinh\left(\delta \phi\right)\right]^s \end{equation} where $\phi_0=\dfrac{2\sqrt{\frac{2(1+3\omega)}{3}}}{1-\omega}$, $\mu=\sqrt{h_0}\left[(1-3\omega)\rho_0\right]^{\frac{\alpha-1}{2}}$, $r=\dfrac{3}{4}(1-\omega)$, $V_0=\dfrac{3}{4}h_0(1-3\omega)^{\alpha-1}(1-\omega)\rho_0^\alpha$, $y=-\dfrac{3}{2}(1+3\omega)$, $V_1=\dfrac{3(1-\omega)h_0^{\frac{2(1+\omega)}{1-\omega}}}{4 (1-3\omega)}$, $\delta=\dfrac{1-\omega}{2}\sqrt{\dfrac{3}{2 (1+3 \omega)}}$, $s=\dfrac{2(1+3\omega)}{\omega-1}$. The variation of the potential with the scalar field is presented in FIG. \ref{f3}. \begin{figure}[h] \includegraphics[height=.2\textheight,width=.2\textheight]{Vn.eps} \caption{$V-\phi$ plot: $h_0=0.9,$ $\rho_0=2$, $\omega=-0.1$}\label{f3} \end{figure} Further, eliminating $\rho$ between the equations in (\ref{eq17}) one obtains \begin{eqnarray} \frac{\dot{\phi}^2}{V(\phi)}&=&\frac{2}{3}\frac{(1+3\omega)}{(1-\omega)}\nonumber\\ \mbox{i.e.,~}\int\frac{d\phi}{\sqrt{V(\phi)}}&=&\sqrt{\frac{2(1+3\omega)}{3(1-\omega)}}(t-t_0) \end{eqnarray} So the above field theoretic description is possible for $-\dfrac{1}{3}<\omega<1$. In particular, for some known potentials we have the explicit form of the scalar field as \begin{eqnarray} \mathrm{(i)}~ V(\phi)=V_0\phi^{-2(n-1)}&,& \phi^n=n\sqrt{\frac{2V_0(1+3\omega)}{3(1-\omega)}}(t-t_0)\nonumber\\ \mathrm{(ii)}~ V(\phi)=V_0 ~\mathrm{sech}^2\phi~&,& \phi=\sinh^{-1}\left[\sqrt{\frac{2V_0(1+3\omega)}{3(1-\omega)}}(t-t_0)\right]\nonumber \end{eqnarray} From the expression for $\dot{\phi}^2$ in equation (\ref{eq17}) it is easy to see that the equivalent scalar field in the field theoretic description will be a real scalar field provided either (i) $h_0>0$ and $\omega>-\dfrac{1}{3}$, i.e. a normal fluid in an unbounded universe, or (ii) $h_0<0$ and $\omega<-\dfrac{1}{3}$, i.e. an exotic fluid in a bounded universe.
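As a numerical sanity check of these closed-form expressions (our illustration; the parameter values $\omega=-0.1$, $h_0=0.9$, $\rho_0=2$ follow the caption of FIG. \ref{f3}), one can verify that $V_1[\sinh(\delta\phi)]^s$ evaluated along $\phi=\phi_0\sinh^{-1}(\mu a^r)$ reproduces $V_0a^y$:
\begin{verbatim}
import numpy as np

w, h0, rho0 = -0.1, 0.9, 2.0             # values of FIG. 3 (assumed)
alpha = (1 + 3*w)/(2*(1 + w))

phi0 = 2*np.sqrt(2*(1 + 3*w)/3)/(1 - w)
mu = np.sqrt(h0)*((1 - 3*w)*rho0)**((alpha - 1)/2)
r = 0.75*(1 - w)
V0 = 0.75*h0*(1 - 3*w)**(alpha - 1)*(1 - w)*rho0**alpha
y = -1.5*(1 + 3*w)
V1 = 3*(1 - w)*h0**(2*(1 + w)/(1 - w))/(4*(1 - 3*w))
delta = 0.5*(1 - w)*np.sqrt(3/(2*(1 + 3*w)))
s = 2*(1 + 3*w)/(w - 1)

a = np.linspace(0.1, 2.0, 50)            # grid of scale-factor values
phi = phi0*np.arcsinh(mu*a**r)           # phi(a) as given above
print(np.allclose(V1*np.sinh(delta*phi)**s, V0*a**y))   # -> True
\end{verbatim}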
Thus an unbounded model of the universe will be ever accelerating if the equivalent scalar field is of ghost nature, while the unbounded universe will be ever decelerating or will undergo a transition from deceleration to acceleration if the equivalent scalar field is real. \section{An equivalent notion of modified Chaplygin gas and perfect fluid in $f(R,T)$ gravity theory}\label{s5} This section presents a neat equivalence between $f(R,T)$ gravity theory, with the particular choice of the $f(R,T)$ function made in the present work, and Einstein gravity. The modified Einstein field equations in $f(R,T)$ gravity theory for a perfect fluid in FLRW space-time geometry are given by equations (\ref{eq14}) and (\ref{eq15}) with $\omega=\dfrac{p}{\rho}$ as the equation of state parameter. Now suppose there exists a modified Chaplygin gas (MCG) having equation of state \begin{equation}\label{eq27} p=\gamma\rho-\Gamma\rho^\alpha \end{equation} where $\gamma$, $\Gamma$ and $\alpha$ are constant parameters of the Chaplygin gas given by $$\gamma=\dfrac{1}{2}(1-3\omega),~\Gamma=\dfrac{1}{2}|h_0|(1-3\omega)^\alpha.$$ If the above modified field equations are considered as equivalent Einstein field equations then one may write \begin{eqnarray} 3H^2&=~\rho_e &=\frac{p}{\gamma}\nonumber\\ 2\dot{H}+3H^2&=-p_e &=-\gamma\rho \end{eqnarray} Thus the effective single fluid in Einstein gravity has the equation of state $p_e=\omega_e\rho_e$, where $\omega_e=\dfrac{\gamma^2}{\omega}$ is a constant. So we may conclude that an MCG fluid in $f(R,T)$ gravity theory is equivalent to a perfect fluid in Einstein gravity with constant equation of state $\dfrac{\gamma^2}{\omega}$. Therefore, for the present choice of the $f(R,T)$ function, the equivalence between the modified gravity theory and Einstein gravity is just an exchange of a perfect fluid with constant equation of state for the well known modified Chaplygin gas. \section{Numerical Analysis and Observational Constraint}\label{s6} Our aim here is to constrain the cosmological parameters by analyzing the observational data sets. In order to do so, the pressures of the matter component and of the effective fluid can be rewritten as \begin{eqnarray} p_m\equiv p=\omega\rho&\equiv&w0_{b}\rho_m\label{eq31}\\ p_{fld}\equiv p_e=-\frac{1}{2}(1-3\omega)\rho_e&\equiv&-\frac{1}{2}(1-3w0_{b})\rho_{fld} \end{eqnarray} where the corresponding energy densities of the two components can be rewritten as \begin{eqnarray} \rho_m\equiv \rho=\rho_0 (1+z)^{3(1+\omega)}&\equiv&\Omega_{m}H0^2 (1+z)^{3(1+w0_{b})}\\ \rho_{fld}\equiv \rho_e=h_0(1-3\omega)^{\alpha-1}\rho^\alpha&\equiv&\Omega0_{fld}H0^2 (1+z)^{\frac{3}{2}(1+3w0_{b})}\label{eq34} \end{eqnarray} The public version of the CLASS Boltzmann code has been modified to include the dark energy sector as the effective fluid, together with baryonic matter and the corresponding cold dark matter (Eqs. (\ref{eq31})--(\ref{eq34})). The MCMC code Montepython3.5 \cite{Brinckmann:2018cvx} has been used to estimate the relevant cosmological parameters. In order to analyze and compare, we use the cosmological data sets as dataset I (Pantheon \cite{Pan-STARRS1:2017jku}, BAO (BOSS DR12 \cite{BOSS:2016wmc}, $SMALLZ-2014$ \cite{Ross:2014qpa}) and HST \cite{Riess:2011}) and dataset II (Pantheon \cite{Pan-STARRS1:2017jku}, HST \cite{Riess:2011}). In both cases a PLANCK18 prior has been imposed.
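Before quoting the results, we give a minimal stand-alone sketch of the background expansion implied by Eqs. (\ref{eq31})--(\ref{eq34}) (our simplified illustration, not the actual CLASS modification; the function names and the rounded parameter values are ours). It evaluates the deceleration parameter $q(z)=-1+(1+z)\,\mathrm{d}\ln H/\mathrm{d}z$ for illustrative values close to the dataset II best fits reported below:
\begin{verbatim}
import numpy as np

def E2(z, Om, Ofld, w0b):
    """H^2/H0^2 for the two-component background of Eqs. (31)-(34)."""
    return (Om*(1 + z)**(3*(1 + w0b))
            + Ofld*(1 + z)**(1.5*(1 + 3*w0b)))

def q(z, Om, Ofld, w0b, dz=1e-5):
    """Deceleration parameter q = -1 + (1+z) dlnE/dz (central difference)."""
    lnE = lambda zz: 0.5*np.log(E2(zz, Om, Ofld, w0b))
    return -1 + (1 + z)*(lnE(z + dz) - lnE(z - dz))/(2*dz)

# illustrative values, close to the dataset II best fits (rounded)
Om, Ofld, w0b = 0.263, 0.737, -0.17
for z in (0.0, 1.0, 2.0, 3.0):
    print(z, round(q(z, Om, Ofld, w0b), 3))
# q < 0 today and q > 0 at high z; the sign change (near z ~ 2 here)
# is the deceleration-to-acceleration transition of the model
\end{verbatim}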
We have made the choice of flat priors on the base cosmological parameters as follows: baryon density $100\omega_b = [1.9, 2.5]$; cold dark matter density $\omega_{cdm} = [0.095, 0.145]$; Hubble parameter $H0 = [60, 80]~\mathrm{km\,s^{-1}\,Mpc^{-1}}$; and a wide flat prior $w0_b=[-1, 1]$. In Table \ref{table1} we list the constraints on the various cosmological parameters, and in Fig. \ref{f5} we show the posterior distributions of those parameters. \begin{table}[h] \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Dataset I} & \multicolumn{2}{c|}{Dataset II} \\ \cline{1-5} Param & best-fit & mean$\pm\sigma$ & best-fit & mean$\pm\sigma$ \\ \hline $100~\omega_{b}$ &$2.244$ & $2.242_{-0.046}^{+0.045}$ & $2.252$ & $2.249_{-0.046}^{+0.045}$ \\ \hline $\omega_{cdm}$ &$0.1158$ & $0.1162_{-0.0023}^{+0.0023}$ & $0.118$ & $0.1177_{-0.0023}^{+0.0023}$ \\ \hline $w0_{b}$ &$0.02035$ & $0.02012_{-0.002}^{+0.0022}$ &$-0.1722$ & $-0.1706_{-0.021}^{+0.021}$\\ \hline $H0$ &$75.77$ & $75.71_{-1.6}^{+1.7}$ &$73.11$ & $73.18_{-1.8}^{+1.7}$ \\ \hline $M$ &$-19.06$ & $-19.07_{-0.044}^{+0.049}$ &$-19.24$ & $-19.24_{-0.054}^{+0.053}$ \\ \hline $\Omega0_{fld}$ &$0.7591$ & $0.7577_{-0.01}^{+0.012}$ & $0.737$ & $0.7376_{-0.013}^{+0.014}$\\ \hline $\Omega_{m}$ &$0.2407$ & $0.2422_{-0.012}^{+0.01}$ &$0.2629$ & $0.2623_{-0.014}^{+0.013}$\\ \hline $\chi^2_{\min}$&\multicolumn{2}{c|}{1130} & \multicolumn{2}{c|}{1029}\\ \hline \end{tabular} \caption{Best-fit values and marginalized constraints (mean$\pm\sigma$) on the cosmological parameters for dataset I and dataset II.}\label{table1} \end{table} \begin{figure} \centering \includegraphics[scale=0.35]{trianglen.eps} \caption{Posterior distributions of the cosmological parameters for dataset I and dataset II. In both cases a PLANCK18 prior has been considered.}\label{f5} \end{figure} From the above numerical analysis of the observed data we find that the equation of state parameter turns out to be $w0_b=0.02035$ (dataset I) and $w0_b=-0.1722$ (dataset II), which is consistent with the theoretical window $\left(-\dfrac{1}{3}<\omega<\dfrac{1}{9}\right)$ obtained in the present work for the $f(R,T)$ modified gravity model. Moreover, the above observational data analysis shows a transition of the model from the decelerated era of expansion to the present accelerated era, as anticipated by the theoretical prediction. Now, using the best-fit values of the parameters from TABLE \ref{table1}, $\rho_0$ and $h_0$ can be estimated as \begin{table}[h!] \begin{tabular}{|c|c|c|}\hline Best fit&$\rho_0$&$h_0$\\\hline Dataset I&1381.88&98.47\\\hline Dataset II&1405.22&637.392\\\hline \end{tabular} \end{table} and hence the choice of $f(R,T)$ becomes $f(R,T)=R+98.47T^{0.52}$ (dataset I) and $f(R,T)=R+637.39T^{0.29}$ (dataset II). So one may note that after adding the BAO data (dataset I) the parameter $w0_b$ is estimated at a higher value compared to dataset II; consequently the value of $H0$ increases for dataset I, which is consistent with FIG. 3. Further, it can be concluded that the trace of the energy-momentum tensor is more significant in the $f(R,T)$ choice when the BAO data is absent. \section{Summary}\label{s7} An extensive study of FLRW cosmology in $f(R,T)$ gravity has been presented in this work with a suitable choice of the function $f(R,T)$. The matter field is chosen as a perfect fluid with constant equation of state. The modified field equations are equivalent to those of a non-interacting two-fluid system in Einstein gravity.
The effective fluid is also a perfect fluid with constant equation of state (depending on the state parameter of the given physical fluid). A possible cosmological solution for the present model has been obtained, the corresponding Hubble parameter and acceleration have been determined, and their graphical representation is shown in FIG. \ref{f2}. Also, the equation of state parameter of the combined single fluid has been shown graphically in a 3D plot against the redshift parameter and the equation of state parameter of the physical fluid. Depending on the sign of the parameter $h_0$, the present model may describe a bounded or an unbounded model of the universe. Due to the non-interacting nature of the two-fluid system, the model cannot describe a warm inflationary scenario, and the present model is in an equilibrium thermodynamical configuration. Depending on the choices of the equation of state parameter of the usual fluid and the ratio $\left(\dfrac{\rho_0}{h_0}\right)$, the model may describe the evolution from the decelerating phase to the present accelerating era. This result is in accord with the work of M. J. S. Houndjo \cite{Houndjo:2011tu}. There is a field theoretic description of the model with the effective fluid described by a scalar field. From this field theoretic description it is found that an unbounded model of the Universe will be ever accelerating if the equivalent scalar field is of ghost type, while for a real scalar field the universe either experiences an ever-decelerating phase or makes a transition from deceleration to acceleration. Also, it is interesting to see that an MCG in $f(R,T)$ gravity is equivalent to a perfect fluid in Einstein gravity. Thus, with the transition from one gravity theory to the other, the physical fluid also makes a transition to another form of fluid. Moreover, from the observational point of view, the parameters involved in the present model have been estimated from the Pantheon and BAO data. It is found that the contribution of the matter part (through the trace term $T$) in the model function $f(R,T)$ is dominant when the BAO data is not taken into account. Finally, both from the theoretical prediction and from the observational data analysis, one may infer that the present $f(R,T)$ gravity model describes the cosmic scenario from the matter dominated era to the present late-time accelerating phase; it does not describe the early era of the universe. \section*{Acknowledgement} The authors would like to thank Nandan Roy and Supriya Pan for helping to run Montepython. The author A.B. acknowledges UGC-JRF (ID:1207/CSIRNETJUNE2019). G.S. acknowledges UGC for Dr. D.S. Kothari Postdoctoral Fellowship (No.F.4- 2/2006 (BSR)/PH/19-20/0104) and S.C. thanks Science and Engineering Research Board (SERB), India for awarding MATRICS Research Grant support (File No.MTR/2017/000407).
\section{Introduction} Let $J=\{j_1, j_2, \ldots, j_r\}_<$ be a set of integers arranged increasingly and let $\mathfrak{S}_J$ denote the set of all permutations on $J$. For each permutation $\sigma=\sigma(j_1)\sigma(j_2)\cdots \sigma(j_r) \in \mathfrak{S}_J$ define the \emph{number of fixed points}, the \emph{number of excedances} and the \emph{descent set} of $\sigma$ to be \begin{align} \mathop{\rm fix}\nolimits \sigma&=|\{i: 1\leq i\leq r, \sigma(j_i)=j_i\}|, \nonumber \\ \mathop{\rm exc}\nolimits \sigma&=|\{i: 1\leq i\leq r, \sigma(j_i)>j_i\}|, \nonumber \\ \mathop{\rm DES}\nolimits \sigma&=\{i: 1\leq i\leq r-1, \sigma(j_i)>\sigma(j_{i+1}) \}, \label{e-descent} \end{align} respectively. A permutation without fixed points is called a {\it derangement}. When $J=[n]:=\{1, 2, \dots , n\}$, we recover the classical definitions. The set $\mathfrak{S}_{[n]}$ is abbreviated $\mathfrak{S}_n$, and for $\sigma\in \mathfrak{S}_n$ we write $\sigma_i$ for $\sigma(i)$. Our main results are the following Theorems~\ref{t-1-gen} and \ref{exc-gen}. \begin{thm}\label{t-1-gen} Let $J$ be a subset of $[n-1]$. \text{\rm (i)} If $\sigma\in\mathfrak{S}_n$ and $\mathop{\rm DES}\nolimits\sigma=J$, then $$ \mathop{\rm fix}\nolimits\sigma\leq n-|J|. $$ \text{\rm (ii)} Let $F_n(J)$ be the set of all permutations $\sigma$ of order $n$ such that $\mathop{\rm DES}\nolimits\sigma=J$ and $\mathop{\rm fix}\nolimits\sigma=n-|J|$. Furthermore, let $G(J)$ be the set of all derangements $\tau$ on $J$ such that $\tau(i)>\tau(i+1) $ whenever $i$ and $i+1$ belong to $J$. Then $$ \sum_{\sigma\in F_n(J)} s^{\mathop{\rm exc}\nolimits\sigma} =\sum_{\tau\in G(J)} s^{\mathop{\rm exc}\nolimits\tau}. $$ \end{thm} \medskip \noindent {\bf Example.} For $n=8$ and $J=\{1,2,3,6\}$, there are two permutations in $F_n(J)$, both having two excedances: 74315628 and 74325618. On the other hand, there are two derangements in $G(J)$, both having two excedances: 6321 and 6312. \begin{thm}\label{exc-gen} Let ${D}_0^J(n)$ be the set of all derangements $\sigma$ on $[n]$ such that $\mathop{\rm DES}\nolimits\sigma=J$, and let ${D}_1^J(n)$ be the set of all permutations $\sigma'$ on $[n]$ such that $\mathop{\rm DES}\nolimits \sigma' =J$ with exactly one fixed point. If $J$ is a proper subset of $[n-1]$, then there is a polynomial $Q_n^J(s)$ with positive integral coefficients such that $$ \sum_{\sigma\in{D}_0^J(n)} s^{\mathop{\rm exc}\nolimits\sigma} -\sum_{\sigma'\in{D}_1^J(n)} s^{\mathop{\rm exc}\nolimits\sigma'} =(s-1) Q_n^J(s). $$ \end{thm} \medskip \noindent {\bf Example.} For $n=6$ and $J=\{1,3,4,5\}$ there are six derangements in ${D}_0^J(n)$: $$ 216543, 316542, 416532, 436521, 546321, 645321; $$ and there are six permutations in ${D}_1^J(n)$: $$ 326541, 426531, 516432, 536421, 615432, 635421. $$ The numbers of excedances are respectively 3, 3, 3, 4, 3, 3 and 3, 3, 2, 3, 2, 3, so that $$ \sum_{\sigma\in{D}_0^J(n)} s^{\mathop{\rm exc}\nolimits\sigma} -\sum_{\sigma'\in{D}_1^J(n)} s^{\mathop{\rm exc}\nolimits\sigma'} =(5s^3+s^4) - (4s^3+2s^2)= (s-1)(s^3+2s^2). $$ Theorem \ref{t-1-gen} extends Stanley's work on alternating permutations (that we explain next) with maximal number of fixed points, and Theorem \ref{exc-gen} extends the corresponding minimal case. The extensions are in two directions: first, alternating permutations are replaced by permutations with a prescribed descent set; second, instead of simply counting permutations we study their generating polynomials by number of excedances.
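Both examples can be verified by brute force. The following Python sketch (ours, added for illustration) enumerates $\mathfrak{S}_8$ and $\mathfrak{S}_6$ and recovers the permutations and excedance polynomials displayed above:
\begin{verbatim}
from itertools import permutations
from collections import Counter

def des(p): return {i + 1 for i in range(len(p) - 1) if p[i] > p[i + 1]}
def fix(p): return sum(1 for i, v in enumerate(p, 1) if v == i)
def exc(p): return sum(1 for i, v in enumerate(p, 1) if v > i)

# Theorem 1, Example: n=8, J={1,2,3,6}
J = {1, 2, 3, 6}
F = [p for p in permutations(range(1, 9))
     if des(p) == J and fix(p) == 8 - len(J)]
print(F, [exc(p) for p in F])   # 74315628 and 74325618, each with exc = 2

# Theorem 2, Example: n=6, J={1,3,4,5}; excedance distributions of D_0, D_1
J = {1, 3, 4, 5}
S6 = list(permutations(range(1, 7)))
D0 = Counter(exc(p) for p in S6 if des(p) == J and fix(p) == 0)
D1 = Counter(exc(p) for p in S6 if des(p) == J and fix(p) == 1)
print(D0, D1)   # {3: 5, 4: 1} and {3: 4, 2: 2}: difference (s-1)(s^3+2s^2)
\end{verbatim}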
A permutation $\pi=\pi_1\pi_2\cdots \pi_n\in \mathfrak{S}_n$ is said to be {\it alternating} (resp. {\it reverse alternating}) if $\pi_1 > \pi_2 < \pi_3 > \pi_4 < \dots $ (resp. if $\pi_1 < \pi_2 > \pi_3 < \pi_4 > \dots $); or equivalently, if $\mathop{\rm DES}\nolimits \pi$ is $\{1,3,5,\dots\} \cap [n-1]$ (resp. $\{2,4,6,\dots\} \cap [n-1] $). Therefore, results on permutations with a prescribed descent set apply to alternating permutations. Let ${D}_k(n)$ be the set of permutations in $\mathfrak{S}_n$ with exactly $k$ fixed points. Then ${D}_0(n)$ is the set of derangements of order~$n$. Write $d_k(n)$ (resp. $d^*_k(n)$) for the number of alternating (resp. reverse alternating) permutations in ${D}_k(n)$. The next two corollaries are immediate consequences of Theorems 1 and 2. \begin{cor}[\cite{St}, Conjecture 6.3]\label{t-1} Let $D_n$ denote the number of derangements in $\mathfrak{S}_n$. Then, for $n\geq 2$ we have $$ d_n(2n) =d_{n+1}(2n+1) =d^*_{n+1}(2n+1) =d^*_{n+2}(2n+2)=D_{n}. $$ \end{cor} \begin{cor}[\cite{St}, Corollary 6.2] \label{t-2} For $n\geq 2$ we have $d_0(n)=d_1(n)$ and $d_0^*(n)=d_1^*(n)$. \end{cor} Stanley enumerated $D_k(n)$ and came up with Corollaries \ref{t-1} and \ref{t-2} on alternating permutations with extremal number of fixed points. He then asked for combinatorial proofs of them. This is the motivation of the present paper. The results in Corollary \ref{t-1}, conjectured by Stanley, were recently proved by Chapman and Williams \cite{Wi} in two ways, one directly and the other using the newly developed concept of permutation tableaux \cite{SW}. In Section \ref{sec-cor3} we give a direct proof of a generalized form of Corollary \ref{t-1}. Corollary \ref{t-2} is actually a special case of a more general result due to Gessel and Reutenauer, which itself can be derived from Theorem 2 by setting $s=1$, as stated in the next corollary. \begin{cor}[\cite{gessel-reutenauer}, Theorem 8.3]\label{t-2-gen} Let $J$ be a proper subset of $[n-1]$. Then, the number of derangements in $\mathfrak{S}_n$ with descent set $J$ is equal to the number of permutations in $\mathfrak{S}_n$ with exactly one fixed point and descent set $J$. \end{cor} The paper is organized as follows. In Section 2 we give the proof of Theorem \ref{t-1-gen}, which contains the results for the maximal case. Section 3 includes a direct proof of an extension of Corollary \ref{t-1}. Section 4 introduces the necessary part of Gessel and Reutenauer's work for enumerating the maximal case. Section 5 is devoted to the proof of Theorem \ref{exc-gen}, dealing with the minimal case. We conclude the paper by making several remarks of an analytic nature (see Section 6). In particular, Corollary \ref{exc-sum-n-q1}, proved combinatorially, should deserve an analytic proof. Several techniques are used: D\'esarm\'enien's desarrangement combinatorics \cite{De84}, Gessel's hook-factorization \cite{Ge91} and the analytical properties of two new permutation statistics ``DEZ" and ``lec" \cite{FH06, FH07}. \section{Permutations with maximal number of fixed points}\label{sec-th1} Our task in this section is to prove Theorem 1. The proof relies on the properties of the new statistic ``$\mathop{\rm DEZ}\nolimits$" introduced by Foata and Han \cite{FH06}. For a permutation $\sigma=\sigma_1\sigma_2\cdots \sigma_n\in \mathfrak{S}_n$ let~$\sigma^0=\sigma^0_1 \sigma^0_2 \cdots\sigma^0_n$ be the word derived from $\sigma$ by replacing each fixed point $\sigma_i=i$ by~$0$.
The set-valued statistic ``$\mathop{\rm DEZ}\nolimits$" is defined by $$ \mathop{\rm DEZ}\nolimits \sigma =\mathop{\rm DES}\nolimits \sigma^0:=\{i: 1\leq i\leq n-1, \sigma^0_i>\sigma^0_{i+1} \}. $$ For example, if $\sigma= 8\,2\,1\,3\,5\,6\,4\,9\,7$, then $\mathop{\rm DES}\nolimits\sigma=\{1,2,6,8\}$, $\sigma^0=8\,0\,1\,3\,0\,0\,4\,9\,7$ and $\mathop{\rm DEZ}\nolimits\sigma=\mathop{\rm DES}\nolimits\sigma^0=\{1,4,8\}$. The basic property of the statistic ``DEZ" is given in the following proposition. \begin{prop}[\cite{FH06}, Theorem 1.4]\label{DEZequi} The two three-variable statistics $(\mathop{\rm fix}\nolimits, \mathop{\rm exc}\nolimits, \mathop{\rm DEZ}\nolimits)$ and $(\mathop{\rm fix}\nolimits, \mathop{\rm exc}\nolimits, \mathop{\rm DES}\nolimits)$ are equi-distributed on the symmetric group $\mathfrak{S}_n$. \end{prop} More precisely, Proposition \ref{DEZequi} asserts that there is a bijection $\Phi: \mathfrak{S}_n \mapsto \mathfrak{S}_n$ such that $$\mathop{\rm fix}\nolimits \pi =\mathop{\rm fix}\nolimits \Phi(\pi), \quad \mathop{\rm exc}\nolimits \pi=\mathop{\rm exc}\nolimits \Phi(\pi), \quad \mathop{\rm DES}\nolimits \pi =\mathop{\rm DEZ}\nolimits \Phi(\pi), \text{ for all } \pi\in \mathfrak{S}_n. $$ By Proposition \ref{DEZequi} Theorem \ref{t-1-gen} is equivalent to the following Theorem \ref{t-1-gen'}, where the statistic ``DES" has been replaced by ``DEZ". \addtocounter{thm}{-1} \renewcommand\thethm{\ref{t-1-gen}$'$} \begin{thm}\label{t-1-gen'} Let $J$ be a subset of $[n-1]$. \text{\rm (i)} If $\sigma\in\mathfrak{S}_n$ and $\mathop{\rm DEZ}\nolimits\sigma=J$, then $$ \mathop{\rm fix}\nolimits\sigma\leq n-|J|. $$ \text{\rm (ii)} Let $F'_n(J)$ be the set of all permutations $\sigma$ of order $n$ such that $\mathop{\rm DEZ}\nolimits\sigma=J$ and $\mathop{\rm fix}\nolimits\sigma=n-|J|$. Furthermore, let $G(J)$ be the set of all derangements $\tau$ on $J$ such that $\tau(i)>\tau(i+1) $ whenever $i$ and $i+1$ belong to $J$. Then $$ \sum_{\sigma\in F'_n(J)} s^{\mathop{\rm exc}\nolimits\sigma} =\sum_{\tau\in G(J)} s^{\mathop{\rm exc}\nolimits\tau}. $$ \end{thm} \renewcommand\thethm{\arabic{thm}} \begin{proof}[Proof of Theorem \ref{t-1-gen'}] Let $\sigma$ be a permutation such that $\mathop{\rm DEZ}\nolimits\sigma=J$ and let $i\in J$. Then $\sigma^0_i>\sigma^0_{i+1}\ge 0$, so that $i$ is not a fixed point of $\sigma$. It follows that $\sigma$ has at least $|J|$ non-fixed points. This proves (i). Now, consider the case where $\sigma$ has exactly $n-|J|$ fixed points. Then $J$ is the set of all the non-fixed points of $\sigma$. By removing the fixed points from $\sigma$ we obtain a derangement~$\tau$ on $J$. If $i,i+1\in J$, then $\tau(i)=\sigma(i)>\sigma(i+1)=\tau(i+1)$. It follows that $\tau\in G(J)$. On the other hand, take any derangement $\tau \in G(J)$ and let $\sigma$ be the permutation defined by $$\sigma(i)= \begin{cases} \tau(i), &\text{if $i\in J$,}\cr i, &\text{if $i\not\in J$.}\cr \end{cases} $$ Then $\mathop{\rm DEZ}\nolimits\sigma=J$. It is easy to see that $\sigma\in F'_n(J)$ and $\mathop{\rm exc}\nolimits\sigma=\mathop{\rm exc}\nolimits\tau$. This proves the second part of Theorem \ref{t-1-gen'}. \end{proof} \medskip \noindent {\bf Example.} Suppose $n=8$ and $J=\{1,2,3,6\}$. Let us search for the permutations $\sigma\in \mathfrak{S}_8$ such that $\mathop{\rm fix}\nolimits\sigma=8-|J|=4$ and $\mathop{\rm DEZ}\nolimits\sigma=J$. 
There are two derangements $\tau$ in $G(J)$, namely $6321$ and $6312$, both having two excedances, so that the two corresponding elements $\sigma$ in $F'_n(J)$ are $63245178$ and $63145278$, both having two excedances. \medskip \noindent\textbf{Remarks.} (i) For permutations with descent set $J$ it is easy to show that the maximum number of fixed points is $n-|J|$, except when $J$ consists of an odd number of consecutive integers. In the latter exceptional case the unique decreasing arrangement on $J$ has exactly one fixed point (its middle element) and therefore is not a derangement, so the maximum is not attained. (ii) The first part of Theorem 1 can also be proved directly by using the fact that in any consecutive decreasing subsequence of $\pi$, say $\pi_i>\pi_{i+1}>\cdots >\pi_{i+k}$, there is at most one fixed point in $\{i,i+1,\dots,i+k\}$. However the ``DEZ" statistic is an essential tool in the proof of the second part. \section{An extension of Corollary 3}\label{sec-cor3} Stanley's conjectured result in Corollary \ref{t-1} was first proved by Williams \cite{Wi} using the newly developed concept of permutation tableaux. A direct proof without using permutation tableaux was later included in her updated version with Chapman. Our direct proof was independently derived just after Williams' first proof. It has the advantage of automatically showing the following extension (Proposition \ref{p-conj-ext}). We only give the generalized form for $d_n(2n)=D_n$, since the other cases are similar. All three proofs are bijective, and the bijections are all equivalent. Note that Proposition \ref{p-conj-ext} is still a corollary of Theorem \ref{t-1-gen}. \begin{prop}\label{p-conj-ext} The number of alternating permutations in $\mathfrak{S}_{2n}$ with $n$ fixed points and $k$ excedances is equal to the number of derangements in $\mathfrak{S}_n$ with $k$ excedances. \end{prop} Let $\pi$ be an alternating permutation. Then each doubleton of positions $\{2i-1,\, 2i\}$ contains at most one fixed point of $\pi$. This proves the following lemma. \begin{lem}\label{l-group} Each alternating permutation $\pi \in \mathfrak{S}_n$ has at most $\lceil n/2 \rceil$ fixed points. When this maximum is reached, either $2i-1$ or $2i$ is a fixed point of $\pi$ $(2\le 2i \le n+1)$. \end{lem} When the underlying set of the permutation $\pi$ is not necessarily $[n]$, we use $\pi(i)$ instead of $\pi_i$ for convenience. An integer $i$ is called an \emph{excedance}, a \emph{fixed point}, or a \emph{subcedance} of $\pi$ if $\pi(i)>i$, $\pi(i)=i$, or $\pi(i)<i$, respectively. \begin{proof}[Proof of Proposition \ref{p-conj-ext}] Let $\pi \in \mathfrak{S}_{2n}$ be alternating and have exactly $n$ fixed points. It follows from Lemma \ref{l-group} that for each $i$ we have the following property: either $2i-1$ is a fixed point and $2i$ a subcedance, or $2i-1$ is an excedance and $2i$ a fixed point. Conversely, if the property holds, the permutation $\pi$ is necessarily alternating, because $\pi(2i)\le 2i <2i+1\le \pi(2i+1)$ and $\pi(2i-1)\ge 2i-1\ge \pi(2i)-1$. These inequalities imply that $\pi(2i-1)>\pi(2i)$, since $2i-1$ and $2i$ cannot both be fixed points. By removing all fixed points of $\pi$ we obtain a derangement $\sigma$ on an $n$-subset of $[2n]$. The standardization of $\sigma$, which consists of replacing the $i$-th smallest element of $\sigma$ by $i$, yields a derangement $\tau$ on $[n]$. We claim that the map $\varphi:\pi \mapsto \tau$ is the desired bijection.
Since the standardization preserves excedances, subcedances and fixed points, it maps one element of $\{\pi(2i-1), \pi(2i)\}$ to $\tau(i)$. It follows that $\tau(i)>i$ if and only if $2i-1$ is an excedance and $2i$ is a fixed point of $\pi$, and that $\tau(i)<i$ if and only if $2i-1$ is a fixed point and $2i$ is a subcedance of $\pi$. Thus, the set of all fixed points of $\pi$ can be constructed from $\tau$. The map $\varphi$ is then reversible. The proposition then follows since the bijection preserves the number of excedances. \end{proof} \medskip \noindent {\bf Example.} Let $\pi=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ 3 & \bar{2} & 6 & \bar{4} & \bar{5} & 1 & 10 & \bar{8} & \bar{9} & 7 \\ \end{pmatrix}$. Removing all the fixed points gives $\sigma=\begin{pmatrix} 1 & 3 & 6 & 7 & 10 \\ 3 & 6 & 1 & 10 & 7 \\ \end{pmatrix},$ standardized to $\tau=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 1 & 5 & 4 \\ \end{pmatrix}.$ Conversely, $\tau$ has excedances at positions $1,2,4$ and subcedances at positions $3,5$. This implies that $2,4,8 $ and $5,9$ are fixed points of $\pi$ and hence we can construct $\pi$. Furthermore, we have $\mathop{\rm exc}\nolimits\pi=\mathop{\rm exc}\nolimits\sigma=\mathop{\rm exc}\nolimits\tau=3$. \section{Enumeration for the maximal case} In this section we will use Theorem \ref{t-1-gen} to enumerate the number of permutations with a prescribed descent set and having the maximal number of fixed points. Every descent set $J\subseteq [n-1]$ can be partitioned into blocks of consecutive integers, such that numbers from different blocks differ by at least $2$. Let $J^b=(a_1,a_2,\dots,a_k)$ denote the sequence of the size of the blocks. For instance, if $J=\{1,2,3,6\}$, then $1,2,3$ form a block and $6$ itself forms another block. Hence $J^b=(3,1)$. Let $M_J$ denote the number of derangements in $\mathfrak{S}_n$ with descent set $J$ having $n-|J|$ fixed points. By Theorem~\ref{t-1-gen} the number $M_J$ depends only on $J^b$. Thus, we can denote $M_J$ by $M(a_1,a_2,\dots, a_k)$. \begin{thm} \label{t-max-fix} The number $M(a_1,\dots,a_k)$ is the coefficient of $x_1^{a_1}\cdots x_k^{a_k}$ in the expansion~of $$ \frac{1}{(1+x_1)(1+x_2)\cdots (1+x_k)(1-x_1-x_2-\cdots -x_k)}. $$ \end{thm} An immediate consequence of Theorem \ref{t-max-fix} is the following Corollary \ref{c-d-symmetry}, which says that $M(a_1,a_2,\dots,a_k)$ is symmetric in the $a_i$'s. \begin{cor}\label{c-d-symmetry} For each permutation $\tau \in \mathfrak{S}_k$ we have $$M(a_1,a_2,\dots,a_k)=M(a_{\tau_1},a_{\tau_2},\dots, a_{\tau_k}). $$ \end{cor} For example, $M(3,1)$ counts two derangements $4312$ and $4321$; $M(1,3)$ counts two derangements $3421$ and $4321$. This symmetry seems not easy to prove directly. Using Theorem \ref{t-max-fix} an explicit formula for $M(a_1, a_2,\ldots, a_k)$ can be obtained when $k=1, 2$. We have $M(a)=1$ if $a$ is even, and $M(a)=0$ if $a$ is odd; also $$ M(a,b)=\sum_{j=2}^{a+b} \sum_{i=0}^j (-1)^j \binom{a+b-j}{a-i}.$$ \medskip To prove Theorem \ref{t-max-fix} we need some notions from \cite{gessel-reutenauer}, where Gessel and Reutenauer represented the number of permutations with given cycle structure and descent set by the scalar product of two special characters of the symmetric group introduced by Foulkes \cite{foulkes-up-down,foulkes-eulerian}. Their results were also key ingredients in \cite{St} for the enumeration of alternating permutations by number of fixed points. 
In what follows, we assume the basic knowledge of symmetric functions (see, e.g., \cite{Lasc,Macd,EC2}). The scalar product $\langle \ , \ \rangle$ of two symmetric functions is a bilinear form defined for all partitions $\lambda$ and $\mu$ by \begin{equation} \label{e-scalar} \langle m_\lambda, h_\mu \rangle =\langle h_\mu, m_\lambda \rangle= \delta_{\lambda\mu}, \end{equation} where $m_\lambda$ is the monomial symmetric function, $h_\mu$ is the complete symmetric function, and $\delta$ is the usual Kronecker symbol. Moreover, if $\omega $ is the homomorphism defined by $\omega e_i=h_i$ and $\omega h_i=e_i$, where $e_i$ is the elementary symmetric function, then for any symmetric functions $f$ and $g$ we have \begin{equation} \label{e-omega} \langle f,g\rangle =\langle \omega f, \omega g \rangle. \end{equation} Associate the function \begin{equation}\label{e-SJ} S_J=\sum_{\mathop{\rm DES}\nolimits w =J} x_{w_1}x_{w_2}\cdots x_{w_n} \end{equation} with each subset $J\subseteq [n-1]$, where the sum ranges over all words on positive integers with descent set $J$. We claim that $S_J$ is a symmetric function, namely the skew Schur function whose shape is a border strip (see \cite[p. 345]{EC2}). In particular, $S_{[n-1]}$ is equal to $e_n$, the elementary symmetric function of order~$n$. On the other hand, every partition $\lambda$ of $n$ has an associated symmetric function $L_\lambda$ related to a Lie representation. The definition of $L_\lambda$ is omitted here (see \cite{gessel-reutenauer}); just remember that the symmetric function corresponding to derangements of order $n$ is given by \begin{align} \mathcal{D}_n&=\sum_{\lambda} L_\lambda =\sum_{j=0}^n (-1)^j e_j h_1^{n-j}, \label{e-D-0} \end{align} where the sum ranges over all partitions $\lambda$ having no part equal to 1 \cite[Theorem 8.1]{gessel-reutenauer}. We need the following result from~\cite{gessel-reutenauer} for our enumeration. \begin{prop}[Gessel-Reutenauer] \label{t-gessel-reutenauer} The number of permutations having descent set~$J$ and cycle structure $\lambda$ is equal to the scalar product of the symmetric functions $S_J$ and~$L_\lambda$. \end{prop} \begin{proof}[Proof of Theorem \ref{t-max-fix}] For each fixed integer sequence $(a_1,a_2,\dots,a_k)$ let $s_i=a_1+a_2+\cdots+a_i$ for $i=1,\dots,k$ and $\ell=s_k$. Then $M(a_1,a_2,\dots,a_k)$ is the number of {\it derangements} $\pi\in\mathfrak{S}_\ell$ such that $s_i$ with $i=1,2,\dots,k-1$ may or may not be a descent of $\pi$, and such that all the other numbers in $[\ell-1]$ are descents of $\pi$. There is then a set $T$ of $2^{k-1}$ descent sets $J$ to consider, depending on whether each $s_i$ is a descent or not (for $i=1,\dots,k-1$). By Proposition \ref{t-gessel-reutenauer} and linearity we have \begin{equation} \label{e-D-scalar} M(a_1,a_2,\dots,a_k)= \langle \sum_{J\in T} S_J, \mathcal{D}_{\ell} \rangle. \end{equation} From \eqref{e-SJ} it follows that $$ \sum_{J\in T} S_J =\sum_{\mathop{\rm DES}\nolimits w\in T }x_{w_1}x_{w_2}\cdots x_{w_n} =\sum_{[\ell-1]\setminus\{s_1, s_2, \ldots, s_{k-1}\} \subseteq \mathop{\rm DES}\nolimits w }x_{w_1}x_{w_2}\cdots x_{w_n}. $$ Each word $w$ occurring in the latter sum is the juxtaposition product $w=u^{(1)} u^{(2)} \cdots u^{(k)}$, where each $u^{(i)}$ is a decreasing word of length $a_i$ ($i=1,2,\ldots, k$). Hence $ \sum_{J\in T} S_J =e_{a_1}e_{a_2}\cdots e_{a_k}$. In \eqref{e-D-scalar} replace $\sum_{J\in T} S_J$ by $e_{a_1}e_{a_2}\cdots e_{a_k}$ and $\mathcal{D}_{\ell}$ by the second expression in \eqref{e-D-0}.
We obtain $$ M(a_1,a_2,\dots,a_k)= \langle e_{a_1}e_{a_2}\cdots e_{a_k}, \sum_{j=0}^{\ell} (-1)^j e_j h_1^{\ell-j} \rangle. $$ The image under $\omega$ yields \begin{align*} M(a_1,a_2,\dots,a_k) &=\langle \omega e_{a_1}\cdots e_{a_k}, \omega \sum_{j=0}^{\ell} (-1)^j e_j h_1^{\ell-j} \rangle \\ &= \langle h_{a_1}\cdots h_{a_k}, \sum_{j=0}^{\ell} (-1)^j h_j e_1^{\ell-j} \rangle. \end{align*} Notice that $\sum_{j=0}^{\ell} (-1)^j h_j e_1^{\ell-j} $ is the coefficient of $u^\ell$ in $$ \big(\sum_{j}h_j(-u)^j \big)\big(\sum_i e_1^{i}u^i \big) = \frac{1}{(1+x_1u)(1+x_2u)\cdots (1+x_ku)(1-(x_1+x_2+\cdots +x_k)u)}. $$ It follows from \eqref{e-scalar} that $M(a_1,\dots,a_k)$ is the coefficient of $x_1^{a_1}\cdots x_k^{a_k}u^\ell$ in the expansion of the above fraction. \end{proof} \section{Permutations with 0 or 1 fixed points}\label{sec-th2} Our objective in this section is to prove Theorem \ref{exc-gen}. We will establish a chain of equivalent or stronger statements, leading to the final easy one. Further notation is needed. Let $w=w_1w_2\cdots w_n$ be a word on the letters $1,2,\dots,m$, each letter appearing at least once. The set-statistic $\mathop{\rm IDES}\nolimits w$ is defined to be the set of all $i$ such that the rightmost $i$ appears to the right of the rightmost $i+1$ in $w$. Note that if $\pi$ is a permutation on $[n]$, then $\mathop{\rm IDES}\nolimits \pi = \mathop{\rm DES}\nolimits \pi^{-1}$. For every proper subset $J$ of $[n-1]$ let $\mathfrak{S}_n^J$ be the set of permutations $\sigma \in \mathfrak{S}_n$ with $\mathop{\rm IDES}\nolimits\sigma=J$. Note the difference with the notation $D^J_k(n)$ for $k=0,1$. We will see that it is easier to deal with $\mathop{\rm IDES}\nolimits$ than with $\mathop{\rm DES}\nolimits$ directly. A word $w=w_1w_2\cdots w_n$ is said to be a {\it desarrangement} if $w_1>w_2>\cdots >w_{2k}$ and $w_{2k}\leq w_{2k+1}$ for some~$k\ge 1$. By convention, $w_{n+1}=\infty$. We may also say that the {\it leftmost trough} of~$w$ occurs at an {\it even} position \cite{FH07}. This notion was introduced, for permutations, by D\'esarm\'enien \cite{De84} and elegantly used in a subsequent paper \cite{DW88}. A further refinement is due to Gessel \cite{Ge91}. A desarrangement $w=w_1w_2\cdots w_n$ is called a {\it hook} if $n\ge 2$ and $w_{1}>w_{2}\leq w_{3}\leq \cdots \leq w_{n}$. Every nonempty word~$w$ on the letters $1,2,3,\dots$ can be written uniquely as a product $uh_1h_2\cdots h_k$, where $u$ is a {\it weakly increasing} word (possibly empty) and each~$h_i$ is a hook. This factorization is called the {\it hook-factorization} of~$w$ \cite{FH07}. For permutations it was already introduced by Gessel \cite{Ge91}. For instance, the hook-factorization of the following word is indicated by vertical bars: $$ w=\mid 1\,2\,4\,5\mid6\,4\,5\,6\mid 4\,1\,3\mid 6\,5\mid 5\,4\mid 6\,1\,1\,4\mid5\,1\,1\mid. $$ Let $uh_1h_2\cdots h_k$ be the hook-factorization of the word $w$. The statistic $\mathop{\rm pix}\nolimits w$ is defined to be the length of $u$, and the statistic $\mathop{\rm lec}\nolimits w$ is defined, in terms of the inversion statistic ``$\mathop{\rm inv}\nolimits$", by the sum \cite{FH07} $$ \mathop{\rm lec}\nolimits w:=\sum_{i=1}^k \mathop{\rm inv}\nolimits(h_i). $$ In the previous example, $\mathop{\rm pix}\nolimits w=|1245|=4$ and $\mathop{\rm lec}\nolimits w=\mathop{\rm inv}\nolimits(6456)+\mathop{\rm inv}\nolimits(413)+\mathop{\rm inv}\nolimits(65)+\mathop{\rm inv}\nolimits(54)+\mathop{\rm inv}\nolimits(6114)+\mathop{\rm inv}\nolimits(511)=2+2+1+1+3+2=11$.
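The hook-factorization and the statistics ``lec" and ``pix" are straightforward to implement. The following sketch (ours, added for illustration) repeatedly strips the shortest hook suffix and recovers the factorization and the values $\mathop{\rm pix}\nolimits w=4$, $\mathop{\rm lec}\nolimits w=11$ of the example above:
\begin{verbatim}
def hook_factorization(w):
    """Return (u, [h_1, ..., h_k]) by stripping the shortest hook suffix."""
    hooks = []
    i = len(w)
    while i > 0:
        j = i - 1
        while j > 0 and w[j - 1] <= w[j]:  # weakly increasing tail, leftwards
            j -= 1
        if j == 0:                         # no descent left: prefix is u
            return w[:i], hooks[::-1]
        hooks.append(w[j - 1:i])           # w[j-1] > w[j] opens a hook
        i = j - 1
    return [], hooks[::-1]

def inv(h):
    return sum(1 for a in range(len(h)) for b in range(a + 1, len(h))
               if h[a] > h[b])

w = [1,2,4,5,6,4,5,6,4,1,3,6,5,5,4,6,1,1,4,5,1,1]
u, hooks = hook_factorization(w)
print(u, hooks)                            # [1,2,4,5] and the six hooks above
print(len(u), sum(inv(h) for h in hooks))  # pix w = 4, lec w = 11
\end{verbatim}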
\medskip For each permutation $\sigma$ let $\mathop{\rm iexc}\nolimits\sigma=\mathop{\rm exc}\nolimits \sigma^{-1}$. The next proposition was proved by Foata and Han \cite{FH07}. \begin{prop}\label{FHequi} The two three-variable statistics $(\mathop{\rm iexc}\nolimits, \mathop{\rm fix}\nolimits, \mathop{\rm IDES}\nolimits)$ and $(\mathop{\rm lec}\nolimits, \mathop{\rm pix}\nolimits, \mathop{\rm IDES}\nolimits)$ are equi-distributed on the symmetric group $\mathfrak{S}_n$. \end{prop} Let ${K}_0^J(n)$ denote the set of all desarrangements in $\mathfrak{S}_n^J$, and ${K}_1^J(n)$ the set of all permutations in $\mathfrak{S}_n^J$ with exactly one pixed point. Since the map $\sigma\to \sigma^{-1}$ preserves the number of fixed points, Theorem \ref{exc-gen} is equivalent to asserting that $$\sum_{\genfrac{}{}{0pt}{3}{\sigma\in D_0(n)}{\mathop{\rm IDES}\nolimits(\sigma)=J}} s^{\mathop{\rm iexc}\nolimits\sigma}- \sum_{\genfrac{}{}{0pt}{3}{\sigma\in D_1(n)}{\mathop{\rm IDES}\nolimits(\sigma)=J}} s^{\mathop{\rm iexc}\nolimits\sigma}=(s-1)Q_n^J(s). $$ Then by Proposition \ref{FHequi} this is equivalent to the following Theorem \ref{exc-gen'}. \addtocounter{thm}{-1} \renewcommand\thethm{\ref{exc-gen}$^a$} \begin{thm}\label{exc-gen'} We have $$ \sum_{\sigma\in{K}_0^J(n)} s^{\mathop{\rm lec}\nolimits\sigma} -\sum_{\sigma\in{K}_1^J(n)} s^{\mathop{\rm lec}\nolimits\sigma} =(s-1) Q_n^J(s), $$ where $Q_n^J(s)$ is a polynomial with positive integral coefficients. \end{thm} \renewcommand\thethm{\arabic{thm}} The following lemma enables us to prove a stronger result. \begin{lem}\label{l-cases} Let $w=w_1w_2\cdots w_n$ be a desarrangement such that $\mathop{\rm IDES}\nolimits w\not=\{1,2,\ldots, n-1\}$ and let $w'=w_nw_1w_2\cdots w_{n-1}$. Then, either $\mathop{\rm lec}\nolimits w'=\mathop{\rm lec}\nolimits w$, or $\mathop{\rm lec}\nolimits w'=\mathop{\rm lec}\nolimits w -1$. \end{lem} \begin{proof} Several cases are to be considered. Say that $w$ belongs to type $A$ if $\mathop{\rm lec}\nolimits(w')=\mathop{\rm lec}\nolimits(w)$, and say that $w$ belongs to type $B$ if $\mathop{\rm lec}\nolimits(w')=\mathop{\rm lec}\nolimits(w)-1$. Since $w$ is a desarrangement, we may assume $w_1>w_2>\cdots >w_{2k}\le w_{2k+1}$ for some $k$. It follows that $w'$ has exactly one pixed point. Let $h_1\cdots h_k$ be the hook-factorization of $w$. Then the hook-factorization of $w'$ must have the form $w_n| h_1'\cdots h_\ell'$. Thus, when computing $\mathop{\rm lec}\nolimits(w')$, we can simply omit $w_n$. This fact will be used when checking the various cases. The reader is invited to look at Figures 1--3, where the letters $b,c,x,y,z$ play a critical role. \begin{enumerate} \item If the rightmost hook $h_k$ has at least three elements, as shown in Figure \ref{f-case1}, then $b\le c$ belongs to type $A$ and $b> c$ belongs to type $B$. This is because the only possible change in ``lec" must come from an inversion containing $c$. Furthermore, $(b,c)$ forms an inversion for type $B$ and does not form an inversion for type $A$.
{ \def\dysize{2.8mm} \vskip -\dysize \def\putdot(#1,#2){\put(#1,#2){\circle*{.2}}} \def\R(#1,#2){\put(#1,#2){\line(1,1){1}}\putdot(#1,#2)} \def\D(#1,#2){\put(#1,#2){\line(1,-1){1}}\putdot(#1,#2)} \begin{figure}[hbt] \begin{center} \setlength{\unitlength}{4mm} \begin{picture}(6,5) \D(0,1) \R(1,0) \R(2,1) \R(3,2) \putdot(4,3) \put(0.1,1.1){$b$} \put(4.1,3.1){$c$} \put(-0.7,-0.7){\dashbox{.2}(6,5){}} \end{picture} \begin{picture}(3,5) \put(0.5,1.5){\vector(1,0){1}} \end{picture} \begin{picture}(6,5) \D(0,1)\R(1,0)\R(2,1) \putdot(3,2) \put(0.1,1.1){$b$} \put(-0.7,-0.7){\dashbox{.2}(6,5){}} \end{picture} \end{center} \vskip \dysize \caption{\label{f-case1}Transformation for case $1$.} \end{figure} } \item Suppose the rightmost hook $h_k$ has two elements $b>c$. \begin{enumerate} \item If there is a hook $xy$ followed by several decreasing hooks of length $2$ with $y\le z$, as shown in Figure \ref{f-case2a}, then $w$ is of type $B$ if $x\le z$ and of type $A$ if $x>z$. { \def\dysize{0mm} \vskip -\dysize \def\putdot(#1,#2){\put(#1,#2){\circle*{.2}}} \def\R(#1,#2){\put(#1,#2){\line(1,1){1}}\putdot(#1,#2)} \def\D(#1,#2){\put(#1,#2){\line(1,-1){1}}\putdot(#1,#2)} \begin{figure}[hbt] \begin{center} \setlength{\unitlength}{4mm} \begin{picture}(8,7) \D(0,4)\putdot(1,3) \put(-0.5,3.3){$x$}\put(0.9,2.3){$y$} \D(2,6)\putdot(3,5) \put(2.2,6.1){$z$} \D(4,4)\putdot(5,3) \D(6,2)\putdot(7,1) \put(6.1,2.1){$b$}\put(7.1,1.1){$c$} \put(-0.8,0){\dashbox{.2}(9,7){}} \end{picture} \begin{picture}(4,7) \put(1.5,3.0){\vector(1,0){1}} \end{picture} \begin{picture}(8,7) \D(0,4)\putdot(1,3) \put(-0.5,3.3){$x$}\put(0.9,2.3){$y$} \put(1,3){\line(1,3){1}} \putdot(2,6) \D(3,5)\putdot(4,4) \D(5,3)\putdot(6,2) \put(2.2,6.1){$z$} \put(6.1,2.1){$b$} \put(-0.8,0){\dashbox{.2}(9,7){}} \end{picture} \end{center} \vskip \dysize \caption{\label{f-case2a}Transformation for case $2a$.} \end{figure} } \item If there is a hook of length at least $3$, followed by several decreasing hooks of length $2$, then (see Figure \ref{f-case2b}) \begin{enumerate} \item in case $y>z$: $w$ is of type $B$ if $x>y$ and of type $A$ if $x\le y$; \item in case $y\le z$: $w$ is of type $B$ if $x\le z$ and of type $A$ if $x>z$.
\end{enumerate} { \def\putdot(#1,#2){\put(#1,#2){\circle*{.2}}} \def\R(#1,#2){\put(#1,#2){\line(1,1){1}}\putdot(#1,#2)} \def\D(#1,#2){\put(#1,#2){\line(1,-1){1}}\putdot(#1,#2)} \begin{figure}[hbt] \begin{center} \setlength{\unitlength}{4mm} \hbox{ \hskip 5mm $\vcenter{ \hsize=7cm \begin{picture}(11,7) \D(0,4)\R(1,3)\R(2,4)\R(3,5)\putdot(4,6) \D(5,6)\putdot(6,5) \D(7,4)\putdot(8,3) \D(9,2)\putdot(10,1) \put(-.4,3.2){$x$} \put(3.8,6.5){$y$} \put(4.8,6.4){$z$} \put(8.5,1.3){$b$} \put(9.7,0.3){$c$} \put(-0.9,-0.7){\dashbox{.2}(12,8){}} \end{picture} } $ \hskip -13mm $\vcenter{ \hsize=2cm \begin{picture}(4,7) \put(1.5,4.0){\vector(1,1){1.5}} \put(1.5,2.0){\vector(1,-1){1.5}} \end{picture} }$ \hskip 5mm $\vcenter{ \hsize=7cm \hbox{ \begin{picture}(11,10) \D(0,4)\R(1,3)\R(2,4)\putdot(3,5) \D(4,6)\putdot(5,5) \D(6,4)\putdot(7,3) \D(8,2)\putdot(9,1) \put(-.4,3.2){$x$} \put(3.8,6.5){$y$} \put(4.9,5.4){$z$} \put(8.6,0.2){$b$} \put(-0.9,-0.2){\dashbox{.2}(12,7.5){}} \end{picture} } \hbox{ \begin{picture}(11,10) \D(0,4)\R(1,3)\R(2,4)\R(3,5)\R(4,6)\putdot(5,7) \D(6,6)\putdot(7,5) \D(8,4)\putdot(9,3) \put(-.4,3.2){$x$} \put(3.6,6.5){$y$} \put(4.8,7.4){$z$} \put(8.6,2.2){$b$} \put(-0.9,1){\dashbox{.2}(12,7.5){}} \end{picture} } }$ } \end{center} \caption{\label{f-case2b}Transformations for case $2b$.} \end{figure} } \end{enumerate} \end{enumerate} This completes the proof of the lemma. \end{proof} With the notation of Lemma \ref{l-cases} we say that a desarrangement $w$ is in class $A_0$ if $\mathop{\rm lec}\nolimits w'=\mathop{\rm lec}\nolimits w$ and in class $B_0$ if $\mathop{\rm lec}\nolimits w'=\mathop{\rm lec}\nolimits w -1$. A word $w=w_1w_2w_3\cdots w_n$ is said to be in class $A_1$ (resp. in class $B_1$) if the word $w_2w_3\cdots w_n w_1$ is in class $A_0$ (resp. in class $B_0$). Notice that a word in class $A_1$ or $B_1$ has exactly one pixed point. Then, Theorem \ref{exc-gen'} is a consequence of the following theorem. \goodbreak \addtocounter{thm}{-1} \renewcommand\thethm{\ref{exc-gen}$^b$} \begin{thm}\label{exc-gen-AB} We have $$ \sum_{\sigma\in\mathfrak{S}_n^J\cap A_0} s^{\mathop{\rm lec}\nolimits\sigma} =\sum_{\sigma\in\mathfrak{S}_n^J\cap A_1} s^{\mathop{\rm lec}\nolimits\sigma} \hbox{\quad and\quad} \sum_{\sigma\in\mathfrak{S}_n^J\cap B_0} s^{\mathop{\rm lec}\nolimits\sigma} =s\sum_{\sigma\in\mathfrak{S}_n^J\cap B_1} s^{\mathop{\rm lec}\nolimits\sigma}. $$ \end{thm} \renewcommand\thethm{\arabic{thm}} \goodbreak Let $\mathfrak{S}_n^{\subseteq J}$ be the set of all permutations $\sigma$ of $[n]$ such that $\mathop{\rm IDES}\nolimits\sigma\subseteq J$. By the inclusion-exclusion principle, Theorem \ref{exc-gen-AB} is equivalent to the following theorem. \addtocounter{thm}{-1} \renewcommand\thethm{\ref{exc-gen}$^c$} \begin{thm}\label{exc-gen-AB'} We have $$ \sum_{\sigma\in\mathfrak{S}_n^{\subseteq J}\cap A_0} s^{\mathop{\rm lec}\nolimits\sigma} =\sum_{\sigma\in\mathfrak{S}_n^{\subseteq J}\cap A_1} s^{\mathop{\rm lec}\nolimits\sigma} \hbox{\quad and\quad} \sum_{\sigma\in\mathfrak{S}_n^{\subseteq J}\cap B_0} s^{\mathop{\rm lec}\nolimits\sigma} =s\sum_{\sigma\in\mathfrak{S}_n^{\subseteq J}\cap B_1} s^{\mathop{\rm lec}\nolimits\sigma}. $$ \end{thm} \renewcommand\thethm{\arabic{thm}} If $J=\{j_1, j_2, \ldots, j_{r-1}\}\subseteq [n-1]$, define a composition ${\bf m}=(m_1, m_2, \ldots, m_r)$ by $m_1=j_1, m_2=j_2-j_1, \ldots, m_{r-1}= j_{r-1}-j_{r-2}, m_r=n-j_{r-1}$. Let $R({\bf m})$ be the set of all rearrangements of $1^{m_1} 2^{m_2} \cdots r^{m_r}$.
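For instance, if $n=4$ and $J=\{1,3\}$, then ${\bf m}=(1,2,1)$, and $R({\bf m})$ consists of the $4!/(1!\,2!\,1!)=12$ rearrangements of the word $1223$.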
We construct a bijection $\phi$ from $R({\bf m})$ to $\mathfrak{S}_n^{\subseteq J}$ by means of the classical {\it standardization} of words. Let $w\in R({\bf m})$ be a word. From left to right label the letters $1$ in $w$ by $1, 2, \dots,m_1$, then label the letters $2$ in $w$ by $m_1+1,m_1+2,\dots,m_1+m_2$, and so on. Then the standardization of $w$, denoted by $\sigma=\phi(w)$, is the permutation obtained by reading those labels from left to right. It is easy to see that $\phi$ is invertible and that $\mathop{\rm IDES}\nolimits\sigma\subseteq J$ if and only if $w\in R(\mathbf{m})$ (see \cite{DW93, FH07}). Moreover, the permutation $\sigma$ and the word $w$ have the same hook-factorization {\it type}. This means that if $ah_1h_2\ldots h_s$ (resp. $bp_1p_2\ldots p_k$) is the hook-factorization of $\sigma$ (resp. the hook-factorization of $w$), then $k=s$ and $|a|=|b|$. For each $1\leq i \leq k$ we have $|h_i|=|p_i|$ and $\mathop{\rm inv}\nolimits(h_i)=\mathop{\rm inv}\nolimits(p_i)$. Hence $\mathop{\rm lec}\nolimits w=\mathop{\rm lec}\nolimits\sigma$ and $\mathop{\rm pix}\nolimits w=\mathop{\rm pix}\nolimits\sigma$. Furthermore, $\sigma$ is in class $A_0$, $A_1$, $B_0$ or $B_1$ if and only if $w$ is in the same class. Theorem \ref{exc-gen-AB'} is equivalent to the next theorem, whose proof follows from the definition of the classes $A_0$, $A_1$, $B_0$, $B_1$ and Lemma \ref{l-cases}. \addtocounter{thm}{-1} \renewcommand\thethm{\ref{exc-gen}$^d$} \begin{thm}\label{exc-gen-AB''} We have $$ \sum_{\sigma\in R({\bf m})\cap A_0} s^{\mathop{\rm lec}\nolimits\sigma} =\sum_{\sigma\in R({\bf m})\cap A_1} s^{\mathop{\rm lec}\nolimits\sigma} \hbox{\quad and\quad} \sum_{\sigma\in R({\bf m})\cap B_0} s^{\mathop{\rm lec}\nolimits\sigma} =s\sum_{\sigma\in R({\bf m})\cap B_1} s^{\mathop{\rm lec}\nolimits\sigma}. $$ \end{thm} \renewcommand\thethm{\arabic{thm}} The following variation of Theorem \ref{exc-gen} follows from Theorem \ref{exc-gen-AB}, but cannot be derived from Theorem \ref{exc-gen} directly. \begin{thm}\label{exc-gen-var} We have $$ \sum_{\sigma\in{D}_0^J(n)} s^{\mathop{\rm iexc}\nolimits\sigma} -\sum_{\sigma\in{D}_1^J(n)} s^{\mathop{\rm iexc}\nolimits\sigma} =(s-1) Q_n^J(s) $$ for some polynomial $Q_n^J(s)$ with positive integral coefficients. \end{thm} \section{Further remarks} A combinatorial proof of Corollary \ref{t-2-gen} can be made by using the methods developed in the preceding section. However, this proof does not need the concept of ``hook'' and the statistic ``$\mathop{\rm lec}\nolimits$''. We only list the equivalent statements, leaving the details to the reader. \begin{thm}\label{t-S2} Let $J$ be a proper subset of $[n-1]$. The following statements are equivalent to Corollary \ref{t-2-gen}: \begin{enumerate} \item The number of derangements in $\mathfrak{S}_n^J$ is equal to the number of permutations in $\mathfrak{S}_n^J$ with exactly one fixed point. \item The number of desarrangements in $\mathfrak{S}_n^J$ is equal to the number of permutations in $\mathfrak{S}_n^J$ with exactly one pixed point. \item The number of desarrangements in $\mathfrak{S}_n^{\subseteq J}$ is equal to the number of permutations in $\mathfrak{S}_n^{\subseteq J}$ with exactly one pixed point. \item The number of desarrangements in $R({\bf m})$ is equal to the number of words in $R({\bf m})$ with exactly one pixed point.
\end{enumerate} \end{thm} We remark that the equivalence of $(1)$ and $(2)$ also follows from a result of D\'esarm\'enien and Wachs \cite{DW88, DW93}: the two bi-variable statistics $(\mathop{\rm fix}\nolimits, \mathop{\rm IDES}\nolimits)$ and $(\mathop{\rm pix}\nolimits, \mathop{\rm IDES}\nolimits)$ are equi-distributed on the symmetric group $\mathfrak{S}_n$. \medskip The statistics ``des'' and ``maj'' are determined by ``DES'': $\mathop{\rm des}\nolimits \pi =\# \mathop{\rm DES}\nolimits \pi$ and $ \mathop{\rm maj}\nolimits \pi = \sum_{i\in \mathop{\rm DES}\nolimits \pi} i$ for $\pi \in \mathfrak{S}_n$. By using Theorem \ref{exc-gen} for each proper subset $J$ of $[n-1]$ and by checking the case $J=[n-1]$ directly, we have the following result. \begin{thm}\label{exc-sum} There is a polynomial $Q_n(s,t,q)$ with positive integral coefficients such that $$ \sum_{\sigma\in{D}_0(n)} s^{\mathop{\rm exc}\nolimits\sigma} t^{\mathop{\rm des}\nolimits\sigma} q^{\mathop{\rm maj}\nolimits\sigma} -\sum_{\sigma\in{D}_1(n)} s^{\mathop{\rm exc}\nolimits\sigma} t^{\mathop{\rm des}\nolimits\sigma} q^{\mathop{\rm maj}\nolimits\sigma} =(s-1) Q_n(s,t,q) + r_n(s,t,q) $$ where $r_{2k}(s,t,q)=s^k t^{2k-1} q^{k(2k-1)}$ for $k\geq 1$ and $r_{2k+1}(s,t,q)=-s^k t^{2k} q^{k(2k+1)}$ for $k\geq 0$. \end{thm} A related result is the following, where we use the standard notation for $q$-series: $$(z;q)_m= (1-z)(1-zq)\cdots (1-zq^{m-1}).$$ \begin{prop}[\cite{FH07}, Theorem 1.1] \label{FH-gf} Let $(A_{n}(s,t,q,Y))_{n\ge 0}$ be the sequence of polynomials in four variables, whose factorial generating function is given by $$ \sum_{r\ge 0}t^r\frac{(1-sq)\,(u;q)_r\,(usq;q)_r} {((u;q)_r-sq(usq;q)_r)(uY;q)_{r+1}}\!=\!\sum_{n\ge 0} A_n(s,t,q,Y) \frac{u^n}{ (t;q)_{n+1}}. $$ Then $A_{n}(s,t,q,Y)$ is the generating polynomial for~$\mathfrak{S}_n$ according to the four-variable statistic $(\mathop{\rm exc}\nolimits,\mathop{\rm des}\nolimits,\mathop{\rm maj}\nolimits,\mathop{\rm fix}\nolimits)$. In other words, $$A_n(s,t,q,Y)=\sum_{\sigma\in\mathfrak{S}_n} s^{\mathop{\rm exc}\nolimits\sigma}t^{\mathop{\rm des}\nolimits\sigma}q^{\mathop{\rm maj}\nolimits\sigma} Y^{\mathop{\rm fix}\nolimits\sigma}. $$ \end{prop} Since $\sum_{\sigma\in{D}_0(n)} s^{\mathop{\rm exc}\nolimits\sigma} t^{\mathop{\rm des}\nolimits\sigma} q^{\mathop{\rm maj}\nolimits\sigma}$ is simply $A_n(s,t,q,0)$ and $\sum_{\sigma\in{D}_1(n)} s^{\mathop{\rm exc}\nolimits\sigma} t^{\mathop{\rm des}\nolimits\sigma} q^{\mathop{\rm maj}\nolimits\sigma}$ is equal to the coefficient of $Y$ in $A_n(s,t,q,Y)$, Theorem \ref{exc-sum} and Proposition \ref{FH-gf} imply the following theorem. \begin{thm}\label{exc-sum-n} There is a sequence of polynomials $(Q_n(s,t,q))_{n\ge 0}$ with positive integral coefficients such that \begin{multline*} \sum_{r\ge 0}t^r\left(1-u\frac{1-q^{r+1}}{1-q}\right)\frac{(1-sq)\,(u;q)_r\,(usq;q)_r} {((u;q)_r-sq(usq;q)_r) } -\frac{1}{1-t}\\ = (s-1) \sum_{n\geq 1}Q_n(s,t,q) \frac{u^n}{(t;q)_{n+1}} + r(s,t,q), \end{multline*} where $$ r(s,t,q)=\sum_{k\geq 1} s^k t^{2k-1} q^{k(2k-1)} \frac{u^{2k}}{(t;q)_{2k+1}} -\sum_{k\geq 0} s^k t^{2k} q^{k(2k+1)} \frac{u^{2k+1}}{(t;q)_{2k+2}}. $$ \end{thm} In the case $t=1$ and $q=1$, the above theorem yields the following corollary.
\begin{cor}\label{exc-sum-n-q1} For each $n\geq 0$ let $Q_n(s)$ be the coefficient of $u^n/n!$ in the Taylor expansion of $$ H(s)=\frac{u-1}{se^{us}-s^2e^u} - \frac{1}{2s\sqrt{s}}\Bigl( \frac{e^{u\sqrt{s}}}{\sqrt{s}+1} +\frac{e^{-u\sqrt{s}}}{\sqrt{s}-1} \Bigr), $$ that is $$ H(s) =\frac{u^3}{3!} +(s+3)\frac{u^4}{4!} +(s^2+17s+4)\frac{u^5}{5!} +(s^3+46s^2+80s+5)\frac{u^6}{6!}+\cdots+Q_n(s)\frac{u^n}{n!}+\cdots $$ Then, the coefficients $Q_n(s)$ are polynomials in $s$ with positive integral coefficients. \end{cor} It is easy to show that $Q_{2n-1}(1)=D_{2n-1}/2$ and $Q_{2n}(1)=(D_{2n}-1)/2$ for $n\ge 2$. By Formula (6.19) in \cite{FH06IV} we have $$ Q_n(1)=\sum_{2\leq 2k\leq n-1} k \times n(n-1)(n-2)\cdots (2k+2). $$ Since $Q_{n}(1)$ counts the number of desarrangements of type $B$, Corollary \ref{exc-sum-n-q1} implies that the number of desarrangements of type $A$ equals the number of desarrangements of type $B$, when excluding the decreasing desarrangement of even length. It would be interesting to have a direct (analytic) proof of Corollary \ref{exc-sum-n-q1} which would not use the combinatorial set-up of this paper. \goodbreak \vspace{.2cm} \noindent{\bf Acknowledgments.} The authors would like to thank Dominique Foata for helpful remarks and suggestions. This work was supported by the 973 Project, the PCSIRT project of the Ministry of Education, the Ministry of Science and Technology and the National Science Foundation of China. \vskip 2cm
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Prediction Models for $\mathbf{C}$} \label{sec:motivation} \textsc{mcal}\xspace must determine the optimal value of $\set{B}$ and $S^\star(\clD,B)$ that minimizes $\mathbf{C}$ (\secref{sec:Intro}). In order to make optimal choices for these parameters, \textsc{mcal}\xspace must be able to predict $\mathbf{C}$ as a function of the choices. $\mathbf{C}$ in turn depends on $\cset{S^\star(\clD,B)}$ and $\trc{\clDa{B}}$ (\eqnref{eqn:Total_Cost}). Thus, \textsc{mcal}\xspace constructs two predictors, one each for $\cset{S^\star(\clD,B)}$ and $\trc{\clDa{B}}$. \subsection{Predicting $\cset{S^\star(\clD,B)}$ as a function of $\cset{B}$} \label{subsec:SMaxPrediction} After training the classifier $\mathcal{D}$ on the human-labeled data $\set{B}$ generated thus far, \textsc{mcal}\xspace sorts each unlabeled data item in $\setm{X}{B}$ using $M(.)$, a measure of the item's informativeness. Let the set $S^\theta(\clDa{B})$ contain the $\theta\cdot\cset{\setm{X}{B}}$ \emph{least} informative data items ($\theta \in (0,1)$); these are the items that $\mathcal{D}$ is most confident about. Thus, $S^\star(\clD,B)$ corresponds to an $S^\theta(\clDa{B})$ for a maximal value $\theta^\star$ that does not violate the overall groundtruth accuracy constraint $\sfrac{\cset{S^\star(\clD,B)}}{\cset{X}}\erra{S(\clD,B)} < \boldsymbol\epsilon$. \textsc{mcal}\xspace constructs a predictor for $\erra{S^\theta(\clDa{B})}$ and uses it to predict $\cset{S^\star(\clD,B)}$ by searching for $\theta^\star$. To predict $\erra{S^\theta(\clDa{B})}$, we leverage recent empirical work (\secref{sec:related}) which observes that, for many tasks and many models, generalization error \textit{vs.} training set size is well-modeled by a power-law~\cite{BaiduLCP,pilot,powerlaw,BMC,DatasizeNeeded} of the form $\erra{S^\theta(\clDa{B})} = \alpha_{\theta}\cset{B}^{-\gamma_\theta}$. However, it is well known that most power-laws experience a fall-off~\cite{utpl} at high values of the independent variable. To model this, we use an \textit{\textbf{upper-truncated}} power-law~\cite{utpl}: \begin{equation} \label{eq:truncatedPowerLaw} \erra{S^\theta(\clDa{B})} = \alpha_{\theta}\cset{B}^{-\gamma_\theta}e^{-\frac{\cset{B}}{k_\theta}} \end{equation} This model better captures the relationship between generalization error and
training set size than a pure power-law, as \figref{fig:truncated} illustrates (Appendix A demonstrates this for other datasets). In this figure, we use active learning on the CIFAR-10~\cite{CIFAR} data set, trained using RESNET18~\cite{resnet}. We fit both a power-law and a truncated power-law. As the figure shows, the truncated power-law better predicts the generalization error at larger values of $\cset{\set{B}}$. $\erra{S^\theta(\clDa{B})}$ is expected to increase monotonically with $\theta$, since increasing $\theta$ has the effect of adding data that $\mathcal{D}$ is progressively less confident about. Lacking a parametric model for this dependence, to find $\theta^\star$ we fit power-law models of $\erra{S^\theta(\clDa{B})}$ for various discrete values of $\theta\in(0,1)$ as described in \secref{sec:taal}. $\theta^\star$ for a given $\set{B}$ is then predicted by searching across the predicted $\erra{S^\theta(\clDa{B})}$ corresponding to the discrete values of $\theta$. \begin{figure*} \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Models/PowerLawModelFit.pdf} \caption{Power law and Truncated Power law fits on CIFAR-10 using RESNET18 for various $\erra{S^\theta(\clDa{B})}$} \label{fig:truncated} \vspace{3mm} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Models/PowerLawModelFitImprovement.pdf} \caption{Power law fit for $\erra{S^\theta(\clDa{B})}$ improves with increasing number of error estimates for CIFAR-10 using RESNET18.} \label{fig:truncated2} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Models/ErrorDependencyOnDelta.pdf} \caption{Dependence of $\erra{S^\theta(\clDa{B})}$ on $\delta$ is ``small'', especially towards the end of active learning. Here, $\cset{B}=16,000$ for CIFAR-10 using RESNET18.} \label{fig:ErrorOnDelta} \end{minipage} \end{figure*} \subsection{Modeling Active Learning Training Costs} \label{subsec:TrainingCosts} Active learning~\cite{settles} iteratively obtains human labels for the $\delta$ most informative items ranked using $M(.)$ and adds them to $\set{B}$. It then retrains the classifier $\mathcal{D}$ using the entire set $\set{B}$. A smaller $\delta$ typically makes active learning more effective: through more frequent sampling, it can achieve a lower error with a potentially smaller $\set{B}$; but it also significantly increases the training cost due to frequent re-training of $\mathcal{D}$. Choosing an appropriate $\delta$ is thus an important aspect of minimizing overall cost. The training cost (in \$) depends on the training time, which in turn is proportional to the data size ($\cset{B}$) and the number of epochs used to train the model (each epoch running over the entire $\set{B}$). A common strategy in active learning approaches is to use a fixed number of epochs per iteration, so the training cost in each iteration is proportional to $\cset{B}$. Since in each iteration $\delta$ new data samples are added to $\set{B}$, the total training cost accumulated over all the previous and current iterations is: \begin{equation} \label{eqn:trainingcost} \trc{\clDa{B}} = k\cset{B}\left(\frac{\cset{B}}{\delta} + 1 \right) \end{equation} \figref{fig:motivation_training_cost}, generated for CIFAR-10 on RESNET18, shows this quadratic dependence on $\cset{B}$ for different $\delta$.
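To make \eqnref{eq:truncatedPowerLaw} concrete, the following minimal Python sketch fits the truncated power-law to a handful of error estimates using SciPy. The error measurements here are made-up illustrative numbers, not values from our experiments:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def truncated_power_law(b, alpha, gamma, k):
    # err(|B|) = alpha * |B|^(-gamma) * exp(-|B|/k)
    return alpha * b ** (-gamma) * np.exp(-b / k)

# hypothetical error estimates gathered over active-learning iterations
b_sizes = np.array([500., 1000., 2000., 4000., 8000., 16000.])
errors = np.array([0.30, 0.24, 0.19, 0.15, 0.12, 0.10])

(alpha, gamma, k), _ = curve_fit(truncated_power_law, b_sizes, errors,
                                 p0=(1.0, 0.3, 1e5), maxfev=10000)

# extrapolate to a larger, not-yet-collected training set size
print(truncated_power_law(32000., alpha, gamma, k))
\end{verbatim}
One such fit is maintained per discrete value of $\theta$; the search for $\theta^\star$ then reduces to evaluating these fitted curves.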
\textsc{mcal}\xspace does not depend on the specific form of the training cost, so it can accommodate other cost models (\textit{e.g.,}\xspace if the number of epochs is proportional to $\cset{B}$, $\trc{\clDa{B}}$ can have a cubic dependence on $\cset{B}$). While $\erra{S^\theta(\clDa{B})}$ also depends on $\delta$ in theory, in practice this dependence is insignificant relative to that of $\trc{\clDa{B}}$. To illustrate this, \figref{fig:ErrorOnDelta} depicts values of $\erra{S^\theta(\clDa{B})}$ for various $\theta$ for CIFAR-10 using RESNET18, across a range of values of $\delta$. The variation is less than 1\%, especially for smaller values of $\theta$.
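Given the per-$\theta$ fitted error models, the search for $\theta^\star$ at a given $\cset{B}$ is a simple sweep over the discrete $\theta$ grid. A minimal sketch follows; the function and variable names are ours, for illustration, and are not taken from the implementation:
\begin{verbatim}
def predict_theta_star(err_models, b_size, n_total, eps, thetas):
    # err_models: dict mapping theta -> fitted predictor err(|B|)
    # Return the largest theta whose classifier-labeled share keeps
    # the predicted overall labeling error below eps.
    best = 0.0
    for theta in sorted(thetas):
        s_size = theta * (n_total - b_size)     # |S^theta(D(B))|
        if (s_size / n_total) * err_models[theta](b_size) < eps:
            best = theta
    return best
\end{verbatim}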
\section{The \textsc{mcal}\xspace Algorithm} \label{sec:taal} \textsc{mcal}\xspace is described in \algoref{alg:ActiveLearning}. It takes as input an active learning metric $M(.)$, the specific classifier $\mathcal{D}$ (\textit{e.g.,}\xspace RESNET18), and the parametric models for training cost (\textit{e.g.,}\xspace Eqn~\ref{eqn:trainingcost}) and for error rate as a function of training set size (\textit{e.g.,}\xspace the truncated power law in Eqn~\ref{eq:truncatedPowerLaw}). At a high level, the algorithm operates in two phases. In the first phase, it uses error estimates obtained during active learning to learn the parameters of the truncated power-law model for various $\theta$, and uses cost measurements to learn the parameters of the training cost model. In the second phase, having these models, it can estimate and refine the $S^\star(\clD,B)$ and $\set{B}$ that produce the optimal cost $\mathbf{C}^\star$; it can also estimate the optimal batch size $\delta_{opt}$ for this cost. It terminates when adding more samples to $\set{B}$ becomes counterproductive, at which point it trains a classifier to label $S^\star(\clD,B)$ and generates human labels for the remaining unlabeled samples. The first few steps perform initialization. Line 1 randomly selects a test set $\set{T}$ ($\cset{T}$ = 5\% of $\cset{X}$) and obtains human labels for it, so that it can be used to measure the performance of $\mathcal{D}$. Line 2 initializes $\set{B}=\set{B}_0$ by randomly selecting $\delta_0$ (1\% of $\set{X}$ in our implementation) samples from $\set{X}$ and obtaining human labels for these. Line 3 trains $\mathcal{D}$ using $\set{B}_0$, and uses $\set{T}$ and $M(.)$ to estimate the generalization errors $\epsilon_T\left(\sdbtbi{0}\right)$ for various values of $\theta\in(0,1)$ (we chose $\{0.05, 0.1,\cdots,1\}$, i.e., increments of 0.05). After these initial steps, the main loop of min-cost active labeling begins (Line 9). In each step, as with standard active learning, \textsc{mcal}\xspace selects the $\delta$ most informative samples according to $M(.)$, obtains their labels and adds them to $\set{B}$ (Line 11), then trains $\mathcal{D}$ on them (Line 12). The primary difference from active learning is that \textsc{mcal}\xspace, in every iteration, estimates the model parameters for $\erra{S^\theta(\clDa{B})}$ and $S^\theta(\clDa{B})$ (Lines 13, 16), then uses these to estimate $\mathbf{C}^\star$ and $\set{B}_{opt}$ (Line 18). At the end of this step, \textsc{mcal}\xspace can answer the question: ``How many human-generated labels must be obtained into $\set{B}$ to train $\mathcal{D}$, in order to minimize $\mathbf{C}$?'' (\secref{sec:motivation}). The estimated model parameters for $\trc{\clDa{B}}$ and $\erra{S^\theta(\clDa{B})}$ may not be stable in the first few iterations (in our experience, 3 when using the truncated power law), given the limited data for the fit.
To determine whether the model parameters are stable, \textsc{mcal}\xspace compares the estimated $\mathbf{C}^\star$ (in dollars) obtained from the previous iteration to the current one. If the difference is small (within 5\%, in our implementation), the model is considered stable enough for use (Line 19). After the predictive models have stabilized, we can rely on the estimates of $\set{B}_{opt}$, the final number of labels to be obtained into $\set{B}$, and consequently the number of samples still needed, $\setm{\set{B}_{opt}}{\set{B}_i}$. At this point \textsc{mcal}\xspace adjusts $\delta$ (Line 21) to reduce the training cost when it is possible to do so. \textsc{mcal}\xspace can do this because it targets relatively high accuracy for $\clDa{\set{B}}$. For these high targets, it is important to continue to improve the model parameter estimates (\textit{e.g.,}\xspace the parameters of the truncated power law), and active learning can help achieve this. \figref{fig:truncated2} shows how the fit to the truncated power law improves as more points are added. Finally, unlike active learning, \textsc{mcal}\xspace adapts $\delta$ to achieve lower training cost: while the choice of active learning batch size does not significantly affect the final classifier accuracy (\figref{fig:ErrorOnDelta} shows this for most values of $\theta$), it can significantly impact training cost (\secref{sec:motivation}). The loop terminates when the total cost obtained in a step is higher than that obtained in the previous step. At this point, \textsc{mcal}\xspace simply trains the classifier using the last value of $\set{B}_{opt}$, then obtains human labels for any remaining unlabeled samples (Lines 26, 27). \parab{Extending \textsc{mcal}\xspace to selecting the cheapest DNN architecture.} In what we have described so far, we have assumed that min-cost labeling is given a candidate DNN architecture for $\mathcal{D}$. However, it is trivial to extend \textsc{mcal}\xspace to the case when the data set curator supplies a small number $m$ (typically 2--4) of candidate classifier architectures $\{\mathcal{D}_1,\mathcal{D}_2,\cdots\}$. In this case, \textsc{mcal}\xspace can generate separate prediction models for each of the classifiers and pick the one that minimizes $\mathbf{C}$ once the model parameters have stabilized. This does not inflate the cost significantly, since the training runs up to this point use small $\set{B}$ and are therefore cheap. \section{Appendix} \label{sec:appendix} \renewcommand{\thesubsection}{\Alph{subsection}} \setcounter{subsection}{0} \subsection{Power-law and Truncated Power-law Fit} In this section, we show power-law and truncated power-law fitting results for all combinations of datasets and models. \figref{fig:CIFAR10-CNN18}, \figref{fig:CIFAR10-Res18}, and \figref{fig:CIFAR10-Res50} show fitting results on CIFAR-10; \figref{fig:CIFAR100-CNN18}, \figref{fig:CIFAR100-Res18}, and \figref{fig:CIFAR100-Res50} on CIFAR-100; and \figref{fig:Fashion-CNN18}, \figref{fig:Fashion-Res18}, and \figref{fig:Fashion-Res50} on Fashion. As an example, all figures in this section show the fit to the error profile for $\theta=50\%$. For all combinations of datasets and models, while using more points improves the fit, both the power-law and the truncated power-law yield stable and precise predictions from a limited number of measurements at small training-set sizes.
\begin{figure*}[!htb] \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar10_CNN18.png} \caption{Power-law and Truncated Power-law fits on CIFAR-10 using CNN18} \label{fig:CIFAR10-CNN18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar10_Res18.png} \caption{Power-law and Truncated Power-law fits on CIFAR-10 using RESNET18} \label{fig:CIFAR10-Res18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar10_Res50.png} \caption{Power-law and Truncated Power-law fits on CIFAR-10 using RESNET50} \label{fig:CIFAR10-Res50} \end{minipage} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar100_CNN18.png} \caption{Power-law and Truncated Power-law fits on CIFAR-100 using CNN18} \label{fig:CIFAR100-CNN18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar100_Res18.png} \caption{Power-law and Truncated Power-law fits on CIFAR-100 using RESNET18} \label{fig:CIFAR100-Res18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_cifar100_Res50.png} \caption{Power-law and Truncated Power-law fits on CIFAR-100 using RESNET50} \label{fig:CIFAR100-Res50} \end{minipage} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_fashion_CNN18.png} \caption{Power-law and Truncated Power-law fits on Fashion using CNN18} \label{fig:Fashion-CNN18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_fashion_Res18.png} \caption{Power-law and Truncated Power-law fits on Fashion using RESNET18} \label{fig:Fashion-Res18} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Appendix/Fitting_on_0.5_fashion_Res50.png} \caption{Power-law and Truncated Power-law fits on Fashion using RESNET50} \label{fig:Fashion-Res50} \end{minipage} \end{figure*} \label{sec:truncated} \section*{Broader Impact} In this paper, we introduce Min-Cost Active Labeling (\textsc{mcal}\xspace), a hybrid human-machine labeling system that leverages active learning to minimize the overall cost of acquiring groundtruth data. Dataset labeling is often prohibitively expensive, which can limit the generalization of machine learning algorithms to a wider range of new settings and applications. While most machine learning research uses publicly available benchmarks, it is more difficult to evaluate research in more specialized settings for which there is no accurate groundtruth. By reducing the overall cost of building datasets, \textsc{mcal}\xspace democratizes groundtruth acquisition. The potential risks of a cheaper approach to larger-scale dataset acquisition have not received enough attention. These include: (i) environmental damage due to more data collected and storage consumed, as well as more training cycles incurred both in the process of data collection and consumption; (ii) privacy implications due to the commoditization of data without adequate consent or proper laws and regulations in place; and (iii) the ethical treatment of human workers used for data labeling.
We see research opportunities in applying \textsc{mcal}\xspace to production systems and to novel capabilities not previously demonstrated. To mitigate the associated risks, we encourage research to understand the implicit impacts, which are often not measured in dollar values, and to follow best practices in data privacy and pricing. Further, we encourage research into other side effects, and into approaches that reduce the potential negative impact of collecting datasets for a particular custom application. \section{Conclusions} \label{sec:concl} Motivated by the emergence of labeling platforms such as Amazon Sagemaker~\cite{sagemaker} and Google Labeling Services~\cite{googlelabeling}, where human labeling costs can be prohibitive, this paper asks: ``How can a data set be labeled at minimum cost?'' To do this, \textsc{mcal}\xspace trains a classifier $\mathcal{D}$ using a subset $\set{B}$ of the data set, uses $\mathcal{D}$ to label $\set{S}$ samples, and uses humans to label the rest. The key challenge is to find the optimal $\set{B}$ that minimizes total cost. \textsc{mcal}\xspace first estimates $\cset{B}$ by modeling error vs. training set size as a truncated power-law, then uses active learning to find samples to include in $\set{B}$ to minimize overall cost, of which training cost can be a significant component. The evaluation shows that it can achieve up to 6$\times$ lower cost than using humans to label the data set, and is always cheaper than using active learning with the lowest-cost batch size. \section{Evaluation} \label{sec:eval} In this section, we evaluate the performance of \textsc{mcal}\xspace on three popular classification data sets: Fashion-MNIST \cite{Fashion-MNIST}, CIFAR-10 \cite{CIFAR} and CIFAR-100 \cite{CIFAR}. We chose these three data sets to demonstrate that \textsc{mcal}\xspace can work effectively across classification tasks with different difficulty levels, Fashion-MNIST being the ``easiest'' and CIFAR-100 the ``hardest''. We use three popular DNN architectures: RESNET50, RESNET18~\cite{resnet}, and CNN18 (RESNET18 without the skip connections). These architectures span a range of architectural complexity, with differing training costs and achievable accuracy: CNN18 has a very low training cost but yields lower accuracy, while RESNET50 has a very high training cost and typically yields the highest accuracy. This allows us to demonstrate how \textsc{mcal}\xspace can effectively select the most cost-efficient architecture among available choices. We also use two different human labeling services: Amazon labeling services~\cite{sagemaker} at 0.04 USD/image and Satyam~\cite{qiu2018satyam} at 0.003 USD/image. Satyam labels images 10$\times$ more cheaply by leveraging untrained, inexpensive workers. This allows us to demonstrate how \textsc{mcal}\xspace adapts to changing human labeling costs. Finally, our evaluation ignores model fitting and inference costs, since they are negligible compared to training costs. \textsc{mcal}\xspace uses a popular active learning metric (margin~\cite{settles}) to rank and select samples for all the results in this section. At each active learning iteration, it trains the model over 200 epochs with a $10\times$ learning rate reduction at epochs 80, 120, 160, and 180, and a mini-batch size of 256 samples~\cite{keras_defaults}. We have left incorporating hyper-parameter search into the optimization to future work.
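For concreteness, this training schedule is a standard step decay of the learning rate. A minimal Keras sketch follows; the initial learning rate \texttt{BASE\_LR}, the optimizer, and the use of RESNET50 from \texttt{keras.applications} (RESNET18 is not bundled with Keras) are illustrative assumptions, not our exact setup:
\begin{verbatim}
import tensorflow as tf

BASE_LR = 1e-3  # assumed initial learning rate (not specified above)

def lr_schedule(epoch, lr):
    # 10x learning-rate reduction at epochs 80, 120, 160 and 180
    drops = sum(epoch >= e for e in (80, 120, 160, 180))
    return BASE_LR / (10 ** drops)

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
model = tf.keras.applications.ResNet50(weights=None, classes=10,
                                       input_shape=(32, 32, 3))
model.compile(optimizer=tf.keras.optimizers.SGD(BASE_LR),
              loss="sparse_categorical_crossentropy")
model.fit(x_train / 255.0, y_train, epochs=200, batch_size=256,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
\end{verbatim}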
\begin{table}[h] \begin{small} \centering \begin{tabular}{|c|c|c|c|} \hline DNN & CNN18~\footnote{CNN18 is a plain 18-layer convolutional neural network obtained by removing all the skip connections from RESNET18.} & RESNET18 & RESNET50 \\ \hline Cost & 0.00007 & 0.0003 & 0.0009 \\ \hline \end{tabular} \caption{Training costs in USD per image for various DNN architectures} \label{tab:ComputeCosts} \end{small} \vspace{-3mm} \end{table} \tabref{tab:ComputeCosts} depicts the training cost per image for three different DNNs trained on CIFAR-10. We use a virtual machine with 4 NVIDIA K80 GPUs at 3.6 USD/hr and maintain over 90\% utilization on each GPU during the training process. In all experiments, unless otherwise specified, the overall labeling accuracy requirement $\boldsymbol\epsilon$ was set at 5\%. \begin{figure*} \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Results/SummaryResult.pdf} \caption{Total cost of labeling for various data sets, for i) human labeling, ii) \textsc{mcal}\xspace and iii) oracle-assisted AL, for various DNN architectures.} \vspace{-3mm} \label{fig:summary_total_cost} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/Fashion-TotalALCost.pdf} \caption{Dependence of overall cost of using naive active learning on $\delta$ for Fashion using Amazon labeling service.} \label{fig:Fashion-Amazon-Result} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/CIFAR10-TotalALCost.pdf} \caption{Dependence of overall cost of using naive active learning on $\delta$ for CIFAR-10 using Amazon labeling service.} \label{fig:CIFAR10-Amazon-Result} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/CIFAR100-TotalALCost.pdf} \caption{Dependence of overall cost of using naive active learning on $\delta$ for CIFAR-100 using Amazon labeling service.} \label{fig:CIFAR100-Amazon-Result} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/AccuracyGainsVsBatchSize-RESNET18.pdf} \caption{Dependence of $\frac{\left|S^*(D(B))\right|}{\left|X\right|}$ on the batch size $\delta$, using RESNET18.} \label{fig:motivation_autoLabeled} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/TotalALTrainingCost-RENET18.pdf} \caption{Active learning training cost (in \$) as a function of batch size ($\delta$), using RESNET18.} \label{fig:motivation_training_cost} \vspace{2mm} \end{minipage} \end{figure*} \begin{table}[h] \begin{small} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Data Set & Labeling &$\frac{\cset{B}}{\cset{X}}$ & $\frac{\cset{S}}{\cset{X}}$ & DNN Selected &Labeling & Human & \textsc{mcal}\xspace\\ & Service & & & & Error & Cost (\$) & Cost (\$)\\ \hline Fashion & Amazon & 6.1\%& 85.0\%& RESNET18& 4.0\%&2800&400 \\ \cline{2-8} & Satyam & 8.4\%& 85.0\%& RESNET18& 4.0\%&210&29 \\ \hline CIFAR-10 & Amazon & 22.2\%& 65.0\%& RESNET18& 2.4\%&2400&792\\ \cline{2-8} & Satyam & 27.0\%& 65.0\%& RESNET18& 2.4\%&180&63\\ \hline CIFAR-100 & Amazon & 32.0\% & 10.0\%& RESNET18 &0.4\%&2400&1698 \\\cline{2-8} & Satyam & 57.6\% & 20.0\%& RESNET18 &1\%&180&139\\\hline \end{tabular} \caption{Summary of results} \label{tab:TAALChoices} \end{small} \vspace{-3mm} \end{table} \subsection{Reduction in Labeling Costs} \textsc{mcal}\xspace automatically makes three key decisions to minimize overall labeling cost.
It a) selects the subset of images that the classifier should be trained on ($\cset{B}_{opt}$), b) adapts $\delta$ across active learning iterations to keep training costs in check, and c) selects the best DNN architecture from among a set of candidates (CNN18, RESNET18 and RESNET50). In this section, we demonstrate that \textsc{mcal}\xspace provides significant overall cost benefits at the expense of at most $\boldsymbol\epsilon$ (5\%) degradation in label quality. Further, it outperforms active learning even when an oracle is used to choose the optimal $\delta$. \figref{fig:summary_total_cost} depicts the total labeling costs incurred for three different schemes: i) humans label the entire data set using Amazon labeling services, ii) \textsc{mcal}\xspace with $\boldsymbol\epsilon=5\%$, and iii) active learning with an oracle to choose $\delta$, for the DNN architectures CNN18, RESNET18 and RESNET50. \tabref{tab:TAALChoices} lists the numerical values of the costs (in \$) for human labeling and \textsc{mcal}\xspace. \parab{Cost Saving Compared to Human Labeling.} From \figref{fig:summary_total_cost} and \tabref{tab:TAALChoices}, \textsc{mcal}\xspace provides an overall cost saving of 86\%, 67\% and 30\% for Fashion, CIFAR-10 and CIFAR-100 respectively. As expected, the savings depend on the ``hardness'' of the classification task. \tabref{tab:TAALChoices} also shows the number of samples in $\set{B}$ used to train $\mathcal{D}$, as well as the number of samples $\cset{S}$ labeled using $\mathcal{D}$. For Fashion, \textsc{mcal}\xspace labels only 6.1\% of the data to train the classifier and uses it to label 85\% of the data set. For CIFAR-10, it trains using 22\% of the data set and labels about 65\% of the data using the classifier. CIFAR-100 requires more data to train the classifier to a high accuracy, so \textsc{mcal}\xspace is able to label only 10\% of the data using the classifier. \tabref{tab:TAALChoices} shows that \textsc{mcal}\xspace, for each data set, is able to satisfy the overall accuracy constraint ($\boldsymbol\epsilon=5\%$). \parab{Cost Savings Compared to Oracle-assisted Active Learning.} We experimentally determined the optimal $\delta$ for active learning for each data set and architecture combination; we call this \textit{oracle-assisted active learning}. To do this, for various values of $\delta$ between 1\% and 20\% of $\left|\mathbf{X}\right|$, we ran active learning to label each dataset until the desired labeling error constraint was met, and then chose the $\delta$ with the lowest overall cost. The dependence of overall cost on $\delta$ for each of the DNN and data set combinations can be seen in \figref{fig:Fashion-Amazon-Result}, \figref{fig:CIFAR10-Amazon-Result}, and \figref{fig:CIFAR100-Amazon-Result}. \figref{fig:summary_total_cost} shows that \textsc{mcal}\xspace is cheaper than even oracle-assisted active learning for all three DNN choices. It achieves this by determining when to stop active learning for each data set, adapting $\delta$, and choosing the cheapest DNN architecture. It is interesting to note that oracle-assisted active learning is more expensive than human labeling on CIFAR-100 when using CNN18 and RESNET50. This is because oracle-assisted active learning does not take training costs into account.
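The tradeoff that oracle-assisted active learning sweeps over can be illustrated with a toy computation. This is not \eqnref{eqn:Total_Cost} itself: it fixes $\cset{B}$ and ignores the effect of $\delta$ on the classifier-labeled fraction; the prices are the Amazon rate above and the RESNET18 entry of \tabref{tab:ComputeCosts}:
\begin{verbatim}
LABEL_COST = 0.04    # $/image, Amazon labeling services
TRAIN_COST = 0.0003  # $/image per training pass, RESNET18 (Table 1)

def toy_total_cost(n_labeled, delta):
    human = LABEL_COST * n_labeled
    training = TRAIN_COST * n_labeled * (n_labeled / delta + 1)  # Eqn. 2
    return human + training

for delta in (500, 1000, 2000, 5000, 10000):
    print(delta, round(toy_total_cost(20000, delta), 2))
# a smaller delta inflates the quadratic training-cost term
\end{verbatim}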
\parab{Understanding the effect of $\delta$.} \figref{fig:motivation_autoLabeled} depicts the number of samples labeled using the classifier, for various choices of $\delta$, for each of the data sets using RESNET18. As seen from \figref{fig:motivation_autoLabeled}, while the number of classifier-labeled samples decreases with increasing $\delta$, the reduction is not significant until a certain point (\textit{e.g.,}\xspace 7\% for CIFAR-10 and Fashion). Training costs, on the other hand, decrease as $\frac{1}{\delta}$, as depicted in Figure~\ref{fig:motivation_training_cost}. Thus, choosing too small a $\delta$ can be sub-optimal for overall cost. This is also seen in \figref{fig:Fashion-Amazon-Result}, \figref{fig:CIFAR10-Amazon-Result}, and \figref{fig:CIFAR100-Amazon-Result}, which show (using a circle) the $\delta$ with minimum overall cost. \parab{Summary.} \tabref{tab:TAALChoices} and Figures~\ref{fig:summary_total_cost}-\ref{fig:motivation_training_cost} show how \textsc{mcal}\xspace is able to adapt to different kinds of data sets, classification tasks and DNN architectures, and consistently minimizes overall cost while providing the required labeling quality guarantees. \begin{figure*} \centering \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/Fashion-TotalALCost-Satyam.pdf} \caption{Performance of \textsc{mcal}\xspace compared to naive active learning on Fashion using Satyam labeling} \label{fig:Fashion-Satyam-Result} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/CIFAR10-TotalALCost-Satyam.pdf} \caption{Performance of \textsc{mcal}\xspace compared to naive active learning on CIFAR-10 using Satyam labeling} \label{fig:CIFAR10-Satyam-Result} \end{minipage} \hfill \begin{minipage}{0.32\columnwidth} \centering \includegraphics{fig/Motivation/CIFAR100-TotalALCost-Satyam.pdf} \caption{Performance of \textsc{mcal}\xspace compared to naive active learning on CIFAR-100 using Satyam labeling} \label{fig:CIFAR100-Satyam-Result} \end{minipage} \end{figure*} \subsection{Effect of cheaper labeling costs} \label{sec:CheapLabeling} Intuitively, with cheaper labeling costs \textsc{mcal}\xspace should use more human labeling to train the classifier. This in turn should enable a larger fraction of the data to be labeled by the classifier. To validate this, we used the Satyam~\cite{qiu2018satyam} labeling service, which incurs a $10\times$ lower labeling cost than Amazon's labeling service. The effect of this reduction is most evident for CIFAR-100 in \tabref{tab:TAALChoices}: \textsc{mcal}\xspace chooses to train the classifier using 57.6\% of the data (instead of 32\% with the Amazon labeling service). This increases the classifier's accuracy, allowing it to label 20\% of the dataset (instead of 10\% with the Amazon labeling service). For the other datasets, the differences are less dramatic (they use 2.5--5\% more data to train the classifier). For these datasets, there is no change in $\sfrac{\cset{S}}{\cset{X}}$ because our resolution in this dimension is limited to 5\%, since we change $\theta$ in increments of 5\%. Figures~\ref{fig:Fashion-Satyam-Result},~\ref{fig:CIFAR10-Satyam-Result}, and~\ref{fig:CIFAR100-Satyam-Result} depict the effect of various choices of $\delta$ on overall cost for various classifiers and data sets, using Satyam as the labeling service. As seen in these figures, the lower labeling cost alters the tradeoff curves.
The figures also depict the corresponding \textsc{mcal}\xspace cost as well as the human labeling cost for reference. As seen from these figures, \textsc{mcal}\xspace achieves a lower overall cost compared to all these possible choices in this case as well.
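As noted above, the classifier-labeled fraction $\sfrac{\cset{S}}{\cset{X}}$ is quantized because $\theta$ is swept in 5\% increments. A simplified sketch of such a scan is given below (illustrative only, not \textsc{mcal}\xspace's exact procedure; \texttt{est\_err} is a hypothetical estimator of the classifier's error rate over the most-confident fraction of the unlabeled pool):
\begin{verbatim}
def largest_safe_fraction(n_pool, n_total, est_err, eps, step=0.05):
    # Keep the largest most-confident fraction of the pool whose
    # estimated contribution to the overall error stays below eps.
    best = 0.0
    for i in range(1, int(round(1.0 / step)) + 1):
        frac = i * step
        if (frac * n_pool / n_total) * est_err(frac) < eps:
            best = frac
    return best
\end{verbatim}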
\section{Introduction} \label{sec:Intro} \newcommand{\clD}{\mathcal{D}} \newcommand{\clDa}[1]{\mathcal{D}(#1)} \newcommand{\set}[1]{#1} \newcommand{\cset}[1]{\left|#1\right|} \newcommand{\setm}[2]{\set{#1} \mathbin{\mathchoice{\mysetminusD}{\mysetminusT}{\mysetminusS}{\mysetminusSS}} \set{#2}} \newcommand{\setsub}[2]{\set{#1} \subset \set{#2}} \newcommand{\erra}[1]{\boldsymbol\epsilon(#1)} \newcommand{\sdbmbi}[1]{S^\star(\mathcal{D},B_{#1})} \newcommand{\trc}[1]{C_t(#1)} \newcommand{\sfrac}[2]{(#1)/(#2)} \newcommand{\tuple}[2]{\langle #1, #2 \rangle} \newcommand{\sdbti}[1]{S^\theta(\clDa{B}_{#1})} \newcommand{\sdbtbi}[1]{S^\theta(\clDa{B_{#1}})} \newcommand{\errm}[3]{\boldsymbol\epsilon(#1,#2,#3)} Groundtruth data is crucial for training and testing ML models. Today, labeling services~\cite{sagemaker,googlelabeling,figureeight} typically employ humans to generate groundtruth, which can incur significant cost.
A cheaper alternative is to train a classifier using human-generated labels for a subset of the data, then generate labels using this classifier at almost negligible cost for the rest of the data. There are two key challenges with this approach. First, the accuracy of the labels generated by the classifier may be less than that of human labeling. Second, the cost (in \$) of training the classifier itself can become significant and potentially offset the gains obtained from avoiding human labeling. In this paper we ask the question: ``How can we minimize the overall cost of generating groundtruth (in \$, including training and human labeling costs) for an unlabeled data set $\set{X}$, while ensuring that the overall error rate of the generated groundtruth is less than $\boldsymbol\epsilon$ (\textit{e.g.,}\xspace 5\%), assuming human-generated labels as the gold standard?'' Suppose that a classifier $\clDa{B}$ is trained using human-generated labels for $\set{B}\subset\set{X}$. Let the error rate of $\clDa{B}$ over the remaining unlabeled data be $\erra{\setm{X}{B}}$. If $\clDa{B}$ were used to generate labels for this remaining data, the overall groundtruth error rate for $\set{X}$ would be $\erra{\set{X}} = \left(1 - {\cset{B}}/{\cset{X}}\right)\erra{\setm{X}{B}}$ (0\% for $\set{B}$, because we have assumed human labeling is perfect). If $\erra{\set{X}}\ge\boldsymbol\epsilon$, this would violate the maximum error rate requirement. However, $\clDa{B}$ might still be able to generate accurate labels for a carefully chosen subset $S(\clD,B)\subset\setm{X}{B}$ (\textit{e.g.,}\xspace comprising only those samples that $\clDa{B}$ is very confident about). After generating labels for $S(\clD,B)$ using $\clDa{B}$, labels for the remaining $\setm{X}{B}\setminus S(\clD,B)$ can once again be generated by humans. The \textit{overall error rate} of the generated groundtruth would then be $\sfrac{\cset{S(\clD,B)}}{\cset{X}}\erra{S(\clD,B)}$, where $\erra{S(\clD,B)}$ is the error rate of generating labels over $S(\clD,B)$ using $\clDa{B}$ and is, in general, higher for larger $\cset{S(\clD,B)}$. Let $S^\star(\clD,B)$ be the largest possible $S(\clD,B)$ that ensures that the overall error rate is less than $\boldsymbol\epsilon$. Then, the overall cost of generating labels in this manner is: \begin{equation} \mathbf{C} = (\cset{\setm{X}{S^\star(\clD,B)}})\cdot C_h + \trc{\clDa{B}} \label{eqn:Total_Cost} \end{equation} where $C_h$ is the cost of human labeling for a single data item and $\trc{\clDa{B}}$ is the total cost of generating $\clDa{B}$, including the cost of finding $\set{B}$ and training $\clDa{B}$ but not including the human labeling cost $\cset{B}C_h$. The key contribution in this paper is \textbf{\textit{min-cost active labeling}} (\textsc{mcal}\xspace), an active learning~\cite{settles} based approach that minimizes $\mathbf{C}$ as follows: \begin{align*} \mathbf{C}^\star = \argminB_{S^\star(\clD,B),\set{B}} & \ \ \mathbf{C} \\ \text{subject to} & \ \ \sfrac{\cset{S^\star(\clD,B)}}{\cset{X}}\erra{S^\star(\clD,B)} < \boldsymbol\epsilon \end{align*} \textsc{mcal}\xspace follows a similar iterative process to active learning when generating $\set{B}$. In each iteration, it ranks the data in $\setm{X}{B}$ using a function $M(.)$ that measures their ``informativeness'', based on $\clDa{B}$'s classification uncertainty (\textit{e.g.,}\xspace entropy~\cite{ME} or margin~\cite{settles}).
\textsc{mcal}\xspace then obtains human labels for the $\delta$ (batch size) most informative ones, adds them to $\set{B}$, and (repeatedly) re-trains $\mathcal{D}$ using $\set{B}$. However, \textsc{mcal}\xspace differs from active learning in one crucial aspect. Unlike standard active learning, which aims to minimize $\cset{B}$ while achieving a high accuracy for $\mathcal{D}$, \textsc{mcal}\xspace optimizes $\mathbf{C}$ by \textbf{\textit{jointly selecting}} $S^\star(\clD,B)$ and $\set{B}$. As $\cset{B}$ increases, $\clDa{B}$'s prediction accuracy improves. However, this accuracy has a \textit{concave} relationship with $\cset{B}$: more and more data must be human labeled for incremental improvements in $\mathcal{D}$'s accuracy. On the other hand, the cost of training $\mathcal{D}$ using active learning, $\trc{\clDa{B}}$, typically has a \textit{convex} dependency on $\cset{B}$, making it increasingly expensive to train over larger $\set{B}$; this cost also accumulates across active learning iterations (\secref{sec:motivation}). Both these dependencies make it harder to improve $\mathcal{D}$'s accuracy per unit cost (in \$) as $\cset{B}$ increases. Thus, continuing to perform active learning becomes counter-productive beyond a certain point (a minimal sketch of this loop appears after the list of contributions below). This dependency of $\mathbf{C}$ on $\cset{B}$ is data set specific. Thus, a key challenge for \textsc{mcal}\xspace is that it must model and predict $\mathbf{C}$ as a function of $\cset{B}$ during active learning. Using this model, it must then jointly: i) decide when to terminate active learning, ii) adapt $\delta$ to limit training costs while not degrading active learning efficiency, and iii) select an appropriate $S^\star(\clD,B)$. To this end, this paper makes the following contributions: \squishlist \item It develops a method to construct, and iteratively refine, a model for predicting $\mathbf{C}$ as a function of $\cset{B}$, and a procedure to find $S^\star(\clD,B)$ (\secref{sec:motivation}). \item It presents the \textsc{mcal}\xspace algorithm (\secref{sec:taal}) that terminates active learning when it is counter-productive to add human-generated labels to $\set{B}$. In effect, this is the point at which the total cost (\eqnref{eqn:Total_Cost}) is minimized. \textsc{mcal}\xspace can also select, from a small set of network architectures for $\mathcal{D}$, the one that achieves lowest-cost labeling. \item Evaluations (\secref{sec:eval}) of \textsc{mcal}\xspace on various popular benchmark datasets of different levels of difficulty (Fashion-MNIST~\cite{Fashion-MNIST}, CIFAR-10~\cite{CIFAR} and CIFAR-100~\cite{CIFAR}) show that its cost is lower than the lowest-cost labeling achieved by an active learning strategy that trains a classifier to label the dataset. It selects a strategy that matches the complexity of the data set and the classification task. For example, it labels the Fashion dataset, the easiest to classify, using a trained classifier. At the other end, it chooses to label CIFAR-100 using humans almost completely; for this data set, it estimates training costs to be prohibitive. Finally, it labels a little over half of CIFAR-10 using a classifier. \textsc{mcal}\xspace is up to 6$\times$ cheaper for some data sets compared to human labeling all images. It is able to achieve these savings, in part, by carefully determining active learning batch sizes while accounting for training costs; cost savings due to active learning range from 20\% to 32\% for Fashion and CIFAR-10. \squishend
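To fix ideas, a minimal Python sketch of one such iteration is given below. It is illustrative only: \texttt{margin\_scores} implements margin-based uncertainty sampling, while the human-labeling, retraining, cost-accounting and stopping steps, which depend on the cost model of \secref{sec:motivation}, are indicated as comments.
\begin{verbatim}
import numpy as np

def margin_scores(probs):
    # Margin sampling: a small gap between the two largest class
    # probabilities marks an informative (uncertain) sample.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def select_batch(pool_idx, probs, delta):
    # Rank the unlabeled pool by uncertainty and return the delta
    # most informative indices for human labeling.
    order = np.argsort(margin_scores(probs))
    return [pool_idx[i] for i in order[:delta]]

# After human-labeling the returned batch: add it to B, retrain
# D(B), add delta * C_h plus the retraining cost to the running
# total, and stop once the predicted total cost stops decreasing.
\end{verbatim}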
\section{Related Work} \label{sec:related} To label large data sets, annotation platforms such as Amazon SageMaker \cite{sagemaker}, Figure Eight \cite{figureeight}, and Google Labeling Services \cite{googlelabeling} rely on humans. Annotation by human experts can be expensive, and Amazon SageMaker allows users to trade off annotation quality for a lower cost. \textsc{mcal}\xspace is motivated by this, and attempts to lower cost without compromising quality. Active learning~\cite{settles} aims to reduce the labeling cost of training a model by iteratively selecting the most informative samples for labeling. Early work focused on designing metrics for sample selection based on margin sampling~\cite{settles}, max entropy~\cite{ME} and least confidence~\cite{LC}. Recent work has focused on developing metrics tailored to specific tasks, such as classification~\cite{coreset}, detection~\cite{AL-detect,loc-aware-accv}, and segmentation~\cite{GeometricAL,bio-seg-AL}, or for specialized settings such as when costs depend upon the label~\cite{al-cost-sensitive}, or for a hierarchy of labels~\cite{partialfeedback}. Other work in this area has explored variants of the problem of sample selection: leveraging model structure~\cite{CostEffectiveAL}, using model ensembles to improve sampling efficacy~\cite{ensemble}, or using self-supervised mining of samples for active learning to avoid data set skew~\cite{SSM}. \textsc{mcal}\xspace uses active learning for training the classifier $\mathcal{D}$ (\secref{sec:Intro}) and can accommodate multiple sample selection metrics $M(.)$. More recent work has explored techniques to learn active learning strategies, using reinforcement learning~\cite{RL-AL} or one-shot meta-learning~\cite{meta_AL}. However, with the exception of~\cite{named_entity_AL}, which designs a lightweight model to reduce the iterative training cost incurred in active learning, we have not found any work that takes training cost into account when developing an active learning strategy. Because active learning can incur significant training cost, \textsc{mcal}\xspace includes training costs in its formulation. Training cost figures prominently in the literature on hyper-parameter tuning, especially for architecture search. Prior work has attempted to predict learning curves to prune hyper-parameter search~\cite{BayesLCP}, develop techniques to find the most effective search strategy within a given budget~\cite{NASBudget}, or build a model to characterize the maximum achievable accuracy on a given dataset to enable fast triage during architecture search~\cite{TAPAS}, all with the goal of reducing training cost. Also relevant is the long line of work that has empirically observed a power-law relationship between generalization error and training set size~\cite{Beery_2018_ECCV,BaiduLCP,pilot,powerlaw,BMC,DatasizeNeeded} across a wide variety of tasks and models. \textsc{mcal}\xspace builds upon this observation, and learns the parameters of a truncated power-law model with as few samples as possible using active learning, which it then uses to choose $\set{B}$.
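As an illustration of the kind of model involved, the sketch below fits a truncated power law $a\,n^{-b} + c$ to hypothetical (training-set size, error) pairs using \texttt{scipy}; the exact parameterization and fitting procedure used by \textsc{mcal}\xspace may differ.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    # Truncated power law: error decays with training-set size n
    # and saturates at an irreducible floor c.
    return a * np.power(n, -b) + c

# Placeholder (|B|, validation error) observations for illustration.
sizes = np.array([500.0, 1000.0, 2000.0, 4000.0, 8000.0])
errs = np.array([0.32, 0.24, 0.18, 0.14, 0.11])
(a, b, c), _ = curve_fit(power_law, sizes, errs, p0=[1.0, 0.5, 0.05])
pred = power_law(16000.0, a, b, c)  # extrapolate before labeling more
\end{verbatim}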
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent decades, the polar regions have experienced major transformations due to global warming. For example, the rapid decline of summer ice extent in the Arctic Ocean has caught a lot of attention. While there is no doubt that warmer temperatures have been a major factor in transforming the polar seascape, evidence has also shown that ocean waves and their increased activity play an aggravating role, and in turn the presence of sea ice affects the wave dynamics. By breaking up the sea ice, waves cause it to become more fragmented, which in turn increases their capacity to further penetrate and damage the ice cover. A typical setting in the ocean where wave-ice interactions prevail is the marginal ice zone (MIZ) which is the fragmented part of the ice cover closest to the open ocean. It is a highly heterogeneous region comprising various types of sea ice that result from the incessant assault of incoming waves. Of particular interest to oceanographers is the modeling of wave attenuation in sea ice, a process that has been poorly represented in large-scale wave forecasting models for the polar regions. There are two principal mechanisms for the attenuation of wave energy propagating into an ice field: (i) scattering by ice floes or other inhomogeneities of the ice cover, which is a conservative process that redistributes energy in all directions, and (ii) dissipative processes which are related to various sources, e.g. friction due to the presence of sea ice, inelastic collisions and breakup of ice floes. The relative importance of scattering and dissipation is still unclear, and uncertainties still exist about the actual mechanisms for wave dissipation in sea ice. This has led to a surge of research activity on this topic in recent years. Studies have suggested that dissipative processes are dominant in frazil and pancake ice fields \cite{ddmbw15,nm99}, while scattering seems to be the main mechanism for wave attenuation in broken floe fields \cite{km08,wsgcm88}. Even for a denser ice field, the problem remains complex: e.g. Ardhuin et al. \cite{asdw16} found that dissipation dominates over scattering for long swells in the Arctic ice pack. With a view to describing wave attenuation in the MIZ, two different approaches have been pursued based on linear theory: (i) discrete-floe models where individual floes with possibly distinct characteristics are resolved assuming an idealized geometry \cite{bs12,msb16,wbsdb13}, and (ii) continuum models where the heterogeneous ice field is viewed as a uniform material with effective rheological properties including viscosity or viscoelasticity \cite{dd02,k98,ws10}. In case (i), the analysis focuses on wave scattering and typically requires solving a boundary value problem with multiple regions in the horizontal hyperplane. By contrast, case (ii) enables the derivation of an exact algebraic expression for the dispersion relation governing traveling plane waves in the effective medium. Wave attenuation (possibly from scattering and dissipation combined) is encoded in the complex roots of this dispersion relation, and various physical effects are controlled by constant parameters. Recent reviews on this theoretical work can be found in \cite{s19,s20}. Models of type (i) have been applied to various floe configurations and have reached a high degree of sophistication. 
There is now a consensus that the process of wave scattering in sea ice is well understood, and associated parameterizations have been tested for operational wave forecasting \cite{db13,ph96,wbsdb13b}. This is, however, not the case for dissipative processes and, partly for this reason, there has been an increasing effort in recent years to develop and calibrate models of type (ii) \cite{cheng17,desanti18,ddmbw15,rogers16}. In this framework, details of the attenuation processes are not accounted for, and it falls to the calibration to ensure that the effective parameters are assigned suitable empirical values for practical applications. While continuum models have been employed for some time now, based mostly on thin-plate theory, to describe wave propagation in pack ice \cite{fs94,lm88}, their extension to the setting of a more compliant or fragmented ice cover for application to the MIZ is more recent \cite{zsc15}. Earlier versions include the two-layer viscous model of Keller \cite{k98} which treats the ice cover as a viscous layer lying on top of an ideal fluid (the ocean). The viscous layer is meant to represent a suspension of ice particles in water. Interaction among these particles and the associated friction leads to wave energy dissipation, which is modeled as a viscous effect. Good agreement has been found in comparison to laboratory data on wave attenuation in grease ice \cite{nm97,nm99}. Keller's model was extended by de Carolis and Desiderio \cite{dd02} to allow for a viscous fluid in the lower layer as well. Validation was provided to some extent against laboratory and field measurements. Wang and Shen \cite{ws10} refined Keller's model by adding elasticity to the upper layer, as this property may be of relevance to broken floe fields. Their viscoelastic model has been tested against laboratory experiments under various ice conditions, and has been calibrated and used in parameterization of wave hindcasts for the Arctic and Antarctic. Building upon this idea, Zhao and Shen \cite{zs18} developed a three-layer version which features a turbulent boundary layer between the viscoelastic ice cover and the inviscid ocean. Dissipation due to turbulence in the middle layer is associated with some eddy viscosity. In the spirit of this continuum approach, Chen et al. \cite{cgg19} recently proposed a more elaborate two-layer model where the ice cover is viewed as a homogeneous isotropic poroelastic material according to Biot's theory \cite{cgg18}. More specifically, the heterogeneous ice field is described as a mixed layer with a solid phase and a fluid phase as the two limiting configurations. Each phase is assumed to be slightly compressible. Dissipative effects are included via two different mechanisms: viscosity within each phase of the ice layer, and friction caused by the relative motion between its fluid and solid constituents. A parameter of interest in this model is the ice porosity, which may serve to provide a measure of ice concentration. Despite the complicated nature of this formulation, an exact linear dispersion relation can be derived and numerical estimates of physically relevant solutions can be found using relatively simple selection criteria. Preliminary tests were conducted in \cite{cgg19} to verify consistency with predictions from simpler models (e.g. open water, mass loading, purely elastic) in their respective limits \cite{crl17,xg09}.
A more detailed review of this dispersion relation together with those from other viscoelastic representations will be presented in the next section. The main goal of this paper is to further assess the porous viscoelastic model of Chen et al. \cite{cgg19} by testing it against both laboratory experiments and field observations of wave attenuation in sea ice. These are taken from the literature, and allow us to probe a wide range of ice conditions and wave frequencies. This is accomplished by fitting the theoretical predictions to data on attenuation rate via error minimization over a set of rheological parameters. As a result, numerical estimates for both the attenuation rate and the set of effective parameters are obtained from the fitting process. This model's performance is also checked by comparing it to other existing viscoelastic formulations under the same conditions. The purpose of such a comparison is two-fold. First, it helps examine in detail the parametric dependence in viscoelastic theories, and the extent to which common rheological parameters may differ in their range of values. Indeed, this difference may be of several orders of magnitude for such effective parameters. Second, it helps validate our data fitting method, since we can check against previous independent calibration results from the literature. Given the relatively large parameter space in this poroelastic setting, a sensitivity analysis is also performed to gauge the individual contributions of rheological parameters to the fitting process. A notable finding from our study is that Chen et al.'s model can reproduce to some degree the roll-over of attenuation rate as observed in field measurements from the Arctic MIZ. This intriguing phenomenon has generally eluded linear scattering and viscoelastic models and, while various possible causes have been suggested, it is still not well understood \cite{lkdwgs17,thmkk21}. The remainder of this paper is organized as follows. Section 2 recalls the linear dispersion relation obtained from the porous viscoelastic model of Chen et al. \cite{cgg19} and describes the data fitting procedure. Other existing viscoelastic formulations are also briefly reviewed. Section 3 presents the corresponding fits to data on attenuation rate from a selection of laboratory experiments and field observations. Section 4 discusses the estimation of shear modulus and kinematic viscosity, and compares results among three different viscoelastic models. Section 5 shows sensitivity tests on a set of rheological parameters that are relevant to the poroelastic system. Finally, concluding remarks are provided in Section 6. \section{Theoretical models} The dispersion relations reviewed in this section are derived from continuum models for linear traveling waves in the two-dimensional case (one horizontal direction and one vertical direction). The dispersion relation associated with the porous viscoelastic model proposed in \cite{cgg19} (hereafter referred to as CGG) is given by \begin{equation} \label{poro-disp} \omega^{2} = \left( \frac{T_1 + g \, T_2}{T_3} \right) D_{4} \tanh(D_{4} H) \,, \end{equation} with \[ D_4 = \sqrt{\kappa^2 - \frac{\omega^2}{c_f^2}} \,, \] where $g$ is the acceleration due to gravity, $H$ is the water depth and $c_f$ is the speed of sound in water. The reader is directed to \cite{cgg19} for a detailed derivation of this model and to the Appendix where the expressions of coefficients $T_1$, $T_2$, $T_3$ are recalled for convenience.
These coefficients are functions of various wave parameters and rheological parameters. Wave parameters include the angular frequency $\omega$ and complex mode $\kappa = k + {\rm i} \, q$ where $k$ is the wavenumber and $q$ is the attenuation rate. Rheological parameters include the water density $\rho_f$ as well as the ice density $\rho_s$, porosity $\beta$, shear modulus $\mu$, Poisson's ratio $\nu$, kinematic viscosity $\eta$ and thickness $h$. Ice porosity is represented by a dimensionless parameter whose range is $0 \le \beta \le 1$, with the limiting values $\beta = 0$ (solid phase) and $\beta = 1$ (fluid phase) corresponding to pack ice and near-open water, respectively. This parameter may be related to ice concentration $C$ via the relation $\beta = 1 - C$, namely $\beta$ is the complement of $C$. It should be pointed out that, in this continuum framework, the elasticity, porosity and viscosity parameters do not necessarily correspond to intrinsic properties of sea ice but rather they are meant to represent effective properties of the heterogeneous ice field under consideration, similar to e.g. homogenization modeling of wave propagation in complex media \cite{cgs09,dcdgs08}. These parameters may thus vary over a wider range than the typical values for sea ice. In view are potential applications to large-scale wave forecasting in the MIZ where various types of sea ice coexist. Whenever the information is available, $\beta$ and $h$ will be assigned values corresponding to mean ice concentration and mean thickness of the ice cover (e.g. field studies often report an estimate of the fraction of ice-covered surface that may be used for $C$). Aside from bulk viscosity which is typically regulated by $\eta$, this model also describes friction due to the relative motion between fluid and solid parts of the ice field. In the equations, the coefficient controlling this mechanism is defined by \begin{equation} \label{friction} b = \frac{8 \rho_s \eta \beta}{a^2} \,, \end{equation} where $a$ denotes the fluid pore size in the porous medium. From the viewpoint of effective medium theory for wave propagation in the MIZ, this parameter $a$ may be related to some characteristic horizontal size of open-water areas in the fragmented ice cover. In the next section, predictions from \eqref{poro-disp} will be tested against a selection of laboratory experiments and field observations. For each set of experimental data, comparison with other existing models will be provided as well. These include recent viscoelastic models by Wang and Shen \cite{ws10} and Mosig et al. \cite{mms15}, which we find convenient to present in detail below because they share common rheological parameters, and this will be of relevance to the subsequent discussion. These two models are simpler than the present one in the sense that they do not take into account ice porosity, accordingly their dispersion relations are simpler. 
The dispersion relation resulting from Wang and Shen's model \cite{ws10} (hereafter referred to as WS) can be written as \begin{equation} \label{WS} \omega^2 = \left( 1 + \frac{\rho_s N_3}{\rho_f N_4} \right) g {\kappa} \tanh({\kappa} H) \,, \end{equation} where \begin{eqnarray*} N_4 & = & g {\kappa} \big[ 4 {\kappa}^3 N_1 \eta_c^2 \sinh({\kappa} h) \cosh(N_1 h) + N_2^2 \cosh({\kappa} h) \cosh(N_1 h) \\ & & - g {\kappa} \sinh({\kappa} h) \sinh(N_1 h) \big] \,, \\ N_3 & = & (g^2 {\kappa}^2 - N_2^4 - 16 {\kappa}^6 N_1^2 \eta_c^4) \sinh({\kappa} h) \sinh(N_1 h) \\ & & - 8 {\kappa}^3 N_1 \eta_c^2 N_2^2 \big[ \cosh({\kappa} h) \cosh(N_1 h) - 1 \big] \,, \end{eqnarray*} and \[ N_2 = \omega + 2 {\rm i} \, \eta_c {\kappa}^2 \,, \quad N_1 = \sqrt{{\kappa}^2 - {\rm i} \frac{\omega}{\eta_c}} \,, \quad \eta_c = \eta + {\rm i} \frac{\mu}{\rho_{s} \omega} \,. \] On the other hand, the dispersion relation produced by Mosig et al.'s model \cite{mms15} (hereafter referred to as EFS) takes the form \begin{equation} \label{EFS} \frac{\omega^{2} }{g - \frac{\rho_s \omega^{2} h}{\rho_f} + \frac{\mu_c h^3 \kappa^{4}}{6 (1 - \nu) \rho_f}} = \kappa \tanh(\kappa H) \,, \end{equation} where \begin{equation} \label{mu} \mu_c = -{\rm i} \, \rho_s \omega \eta_c = \mu -{\rm i} \, \omega \rho_{s} \eta \,. \end{equation} Note that the EFS model also makes use of the thin-plate approximation and thus is significantly simpler than both CGG and WS models which instead consider the ice cover as a distinct layer with an actual thickness. Preliminary comparison between these three models can be found in \cite{cgg19}. For a given value of $\omega$ and other parameter values, the dispersion relation is solved numerically for $\kappa$ using the root-finding routine \textit{fsolve} in Matlab. More specifically, because $\kappa$ is complex, Eq. \eqref{poro-disp} is split up into its real and imaginary parts. This leads to a system of two independent equations that are solved simultaneously for the two unknowns $k$ and $q$. The \textit{fsolve} algorithm is essentially a quasi-Newton method with a numerical approximation of the Jacobian matrix. We have successfully used this Matlab routine in previous work \cite{g06,gp14} to compute traveling wave solutions of nonlinear partial differential equations. Considering that multiple roots for $k$ and $q$ may exist here \cite{mms15,zcs17}, we apply the selection criteria proposed in \cite{ws10} to find a dominant pair $(k,q)$ associated with a physically relevant solution. We choose the converged values $(k,q) \in \mathbb{R}_+^2$ such that $k$ is closest to the open-water wavenumber $k_0$ which solves \[ \omega^2 = g k \tanh(k H) \,, \] and $q$ is the lowest attenuation rate possible. Accordingly, we run the root finder \textit{fsolve} for a range of initial guesses around $(k,q) = (k_0,0)$ and select the converged values for which the error $| \kappa - (k_0 + {\rm i} \, 0)|$ is minimum among all the roots found. In doing so, we were able to get acceptable solutions in all the cases we considered. For the following tests, we prescribe a number of physical parameters such as $g = 9.81$, $\rho_s = 917$, $\rho_f = 1025$ and $c_f = 1449$ (in SI units), and fit the model predictions to the experimental data by optimizing with respect to other parameters. The water depth $H$ is also specified, since this is either known information from laboratory experiments or a representative value may be used for a specific oceanic region when comparing to field observations. 
Furthermore, because the range of Poisson's ratio is typically small ($0 < \nu < 1/2$), we set it to be $\nu = 0.4$, after checking that it indeed does not play a major role (see Section 5). This reduction in the parameter space helps simplify the analysis, which is especially relevant for the CGG model considering that it involves many physical parameters. We will focus our attention on the subset $(\beta, \mu, \eta)$ when fitting the model predictions to the experimental data. Throughout this study, we will only consider data on the attenuation rate, one reason being that data on the wavenumber were not reported in the field observations and our focus here is on the attenuation process as in \cite{km08,mms15,ph96,srcj19}. Our fitting procedure is basically a direct search approach. For a given triplet $(\beta, \mu, \eta)$ and a range of values of $\omega$, we apply the above-mentioned root-finding scheme to find a set of pairs $(k,q)$. We repeatedly run this procedure over a specified region of parameter space $(\beta, \mu, \eta)$, and select the set $\{ q_j \}$ for which the $L^2$ error \begin{equation} \label{error} E = \sum_{j=1}^n (q_j - \widehat q_j)^2 \,, \end{equation} between the numerical estimates $\{ q_j \}$ and the experimental data $\{ \widehat q_j \}$ is minimum among all the solutions calculated. The best fit so obtained returns a set $\{ q_j \}$ for the attenuation rate, as well as a triplet $(\beta, \mu, \eta)$ for these rheological parameters. For the problem at hand, we use a straightforward definition \eqref{error} of the error as in \cite{desanti18}, which is readily applicable to all the cases explored and which allows for a direct comparison among the various models involved. It turns out that information on ice concentration was also reported in some of these studies (which we used to determine the ice porosity); thus only the pair of parameters $(\mu,\eta)$ remains to be determined from the data fitting. In this process, the ranges of values for $(\mu,\eta)$ and their resolutions are chosen in a heuristic manner based on extensive trials, considering previous work \cite{mms15,nm99,zs15} and our own experience \cite{cgg19,cgg18}. The larger the region of parameter space to be covered, the higher the computational cost. Typically, we conduct a preliminary search over an extended rough region of parameter space and then refine the search over smaller better-resolved sectors. Aside from testing the performance of the CGG model, such an analysis also helps calibrate it by estimating rheological parameters for potential applications in realistic conditions. As discussed below, while the CGG, EFS and WS models share common physical parameters such as $\mu$ and $\eta$, their respective numerical values according to the data fitting may differ significantly. Note that we use the same procedure to obtain fitting curves from the EFS and WS models.
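To make this procedure concrete, the following Python sketch mirrors the workflow described above (it is an illustrative translation, not the code used to produce our results). It applies \texttt{scipy.optimize.fsolve}, the analogue of Matlab's \textit{fsolve}, to the EFS relation \eqref{EFS} since all of its coefficients are given in closed form above; the CGG relation \eqref{poro-disp} is handled identically once $T_1$, $T_2$, $T_3$ are coded from the Appendix. For brevity, a single root is continued from the open-water initial guess $(k_0,0)$ instead of scanning a range of initial guesses, and the grids \texttt{mus} and \texttt{etas} stand for the heuristically chosen search ranges:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

g, rho_s, rho_f = 9.81, 917.0, 1025.0   # SI units, as in the text

def efs_residual(kq, omega, mu, eta, h, H, nu=0.4):
    # Split kappa = k + i q and return the real and imaginary
    # parts of the EFS dispersion relation as two real equations.
    k, q = kq
    kappa = k + 1j * q
    mu_c = mu - 1j * omega * rho_s * eta
    denom = (g - rho_s * omega**2 * h / rho_f
             + mu_c * h**3 * kappa**4 / (6.0 * (1.0 - nu) * rho_f))
    r = kappa * np.tanh(kappa * H) * denom - omega**2
    return [r.real, r.imag]

def solve_mode(omega, mu, eta, h, H):
    # Initial guess (k, q) = (k0, ~0), with k0 the open-water root.
    k0 = fsolve(lambda k: omega**2 - g * k * np.tanh(k * H),
                omega**2 / g)[0]
    return fsolve(efs_residual, [k0, 1e-4],
                  args=(omega, mu, eta, h, H))

def fit_mu_eta(omegas, q_data, h, H, mus, etas):
    # Direct search: minimize the L2 error E over a (mu, eta) grid.
    best = None
    for mu in mus:
        for eta in etas:
            q_mod = [solve_mode(w, mu, eta, h, H)[1] for w in omegas]
            err = sum((qm - qd)**2 for qm, qd in zip(q_mod, q_data))
            if best is None or err < best[0]:
                best = (err, mu, eta)
    return best   # (minimal error, best mu, best eta)
\end{verbatim}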
Although it is somewhat different in character from the CGG, EFS and WS models, we will also include a comparison with the two-layer viscous model recently proposed by Sutherland et al. \cite{srcj19} (hereafter referred to as SRCJ), which estimates wave attenuation in sea ice by \begin{equation} \label{viscous} q = \frac{1}{2} \Delta_0 \epsilon h k_0^2 \,. \end{equation} This formula is partly heuristic because it was derived based on scaling arguments and dimensional analysis. It is nonetheless appealing due to its striking simplicity and has been shown to produce satisfactory results in comparison with experimental data. Considering that viscous models have been successful at describing wave attenuation in such ice covers as grease ice, predictions from \eqref{viscous} may serve as a suitable independent reference, especially for the tests involving laboratory experiments. The coefficients $\Delta_0$ and $\epsilon$ are dimensionless empirical parameters whose range is $0 < \Delta_0 < 1$, $0 < \epsilon < 1$. The same $L^2$ error \eqref{error} is used to optimize \eqref{viscous} with respect to the pair $(\Delta_0,\epsilon)$ when fitting to measurements. The parameter $\Delta_0$ is a measure of the relative motion between ice and water at the bottom boundary of the ice layer. To a first approximation, $\Delta_0 \simeq 1$, corresponding to a no-slip boundary condition. The parameter $\epsilon$ is thought to be a function of ice porosity, exhibiting a similar behavior. In particular, the limit $\epsilon \rightarrow 0$ is analogous to $\beta \rightarrow 0$ (pack ice) for which, according to \eqref{viscous}, there should be no wave dissipation within the ice layer. Sutherland et al. \cite{srcj19} pointed out that such a parameterization is consistent with observations of wave attenuation through the MIZ, being several orders of magnitude greater in frazil and pancake ice than in a broken floe field \cite{ddmbw15}. This is somewhat counterintuitive considering that the latter represents a more rigid ice cover than the former. Note however that the dominant mechanism for wave attenuation in broken floe fields is believed to be scattering and is of a different nature from the viscous-type dissipation taking place in pancake ice fields. Preliminary tests of the CGG model in \cite{cgg19} are also consistent with these observations and indicate a tendency for $q$ to decrease as $\beta \rightarrow 0$ over all frequencies. \section{Comparison with experimental data} In this section, we test the CGG model against three different sets of laboratory experiments and two different sets of field observations. Altogether, these span a wide range of wave frequencies and ice-cover types. In each case, the CGG model is tested by best fitting its predictions to experimental data on the attenuation rate, based on the numerical scheme described earlier. Predictions by other existing models are also shown for comparison, and numerical values of their rheological parameters, as determined by the data fitting, are discussed. \subsection{Laboratory experiments of Newyear and Martin (1997)} Newyear and Martin \cite{nm97} conducted laboratory experiments of wave propagation and attenuation in grease ice. This study was among the first to measure wave attenuation by floating ice in a controlled laboratory environment. The wave tank was a flat-bottomed rectangular box, $3.5$ m long, $1$ m wide and $1$ m deep. They reported two sets of measurements for two different ice thicknesses $h = 11.3$ cm (Test 1) and $h = 14.6$ cm (Test 2). In both cases, the water depth was set to $H = 0.5$ m and the ice concentration was estimated to be $C = 0.53$. Figure \ref{fig:Newyear} shows best fits of the CGG, EFS and WS models to Newyear and Martin's measurements of attenuation rate $q$ (from their Tables 1 and 2) as functions of wave frequency $f = \omega/(2\pi)$. The value $\beta = 1 - C = 0.47$ for ice porosity is used in the CGG model. Newyear and Martin \cite{nm99} provided a comparison of their laboratory data with Keller's two-layer viscous model \cite{k98}, whose predictions are also shown in Fig.
\ref{fig:Newyear} (these were extracted from figures in their paper). Keller's model is basically a counterpart of the WS model without elasticity. The agreement between the CGG model and the experiments is fairly good in both cases. We see that the CGG and EFS curves are particularly close together. They appear to be concave while Keller's curve appears to be convex, which is characteristic of a purely viscous (i.e. diffusive-type) mechanism \cite{lhv91,srcj19}. The CGG concavity is especially pronounced for $h = 14.6$ cm (Fig. \ref{fig:Newyear}b), which leads to a close fit at high frequencies where the increase of $q$ seems to slow down. As for the WS curve, it behaves more linearly with respect to $f$ and lies between these two opposite trends. Note that it is not clear whether the actual trend is concave or convex due to measurement errors and the limited number of data points. \subsection{Laboratory experiments of Wang and Shen (2010)} This study was conducted as part of the RECARO (REduced ice Cover in the ARctic Ocean) project in the Arctic Environmental Test Basin at the Hamburg Ship Model Basin (HSVA), Germany. The wave basin was roughly $19$ m long, $6$ m wide and $1.5$ m deep, and was separated equally lengthwise into two $3$ m wide flumes (referred to as Tank 2 and Tank 3). Experiments were performed in these two flumes to measure wave propagation and attenuation in a grease-pancake ice mixture \cite{ws10b}. The ice thickness was not uniform in these experiments, so we use the mean values $h = 9.0$ cm and $8.9$ cm for Tank 2 and 3 respectively. The water depth was $H = 0.85$ m but no information was provided on the ice concentration. Comparison of these experimental data (from Tables 1 and 2 in \cite{ws10b}) with the CGG, EFS, SRCJ and WS predictions is given in Fig. \ref{fig:WangShen}. The general trend appears to be convex for most of these models. The EFS fit falls off very quickly as $f \rightarrow 0$ but seems to develop an inflection (from convex to concave) while rising up at high frequencies. The CGG fit looks satisfactory overall. It does not quite capture the high convexity around $f = 0.9$ Hz (in particular for Tank 3) but it does not fall off as quickly as the other curves at lower frequencies, which is consistent with the asymptotic behavior suggested by the experimental results in that limit. A similar observation was made by Wang and Shen \cite{ws10b} who found that a grease-pancake ice layer appears to be more dissipative (producing a higher attenuation rate) in the low-frequency range than predicted by Keller's viscous model, and concluded that such a mixed ice layer may be rheologically quite different from a pure grease ice layer (for which a viscous model usually works well). The CGG fit estimates the ice porosity to be $\beta = 0.16$ and $0.15$ for Tank 2 and 3 respectively. These low values of $\beta$ mimic a configuration where the ice cover is relatively compact, which is compatible with the presence of pancake ice, and as expected they are lower than the value $\beta = 0.47$ deduced from Newyear and Martin's measurements for grease ice. The associated pore sizes are given by $a = 1.2$ cm and $2.4$ cm for Tank 2 and 3. Recall that the pores represent the fluid part of the porous ice cover in the continuum formulation of the CGG model.
For low $\beta$, we may thus assume that constitutive elements of the solid part would have a typical size on the same order of magnitude as or larger than $a$, which is consistent with the pancake diameter ranging from about $\ell = 1$ cm to $40$ cm as observed in Wang and Shen's experiments.

\subsection{Laboratory experiments of Zhao and Shen (2015)}

Zhao and Shen \cite{zs15} followed up with additional experiments at the HSVA in 2013. Three sets of measurements were performed in Tank 3 (as defined in the previous section) for three different types of ice cover: a frazil/pancake ice mixture (with thickness $h = 2.5$ cm), pancake ice ($h = 4.0$ cm) and a broken floe field ($h = 7.0$ cm). These three cases are referred to as Test 1, 2 and 3, respectively. The water depth was about $H = 0.94$ m and again, although values for the diameter of a typical pancake/floe were reported in \cite{zs15}, no information was given on the mean ice concentration for the generated ice fields. Overall, the models compare well with these experiments (from Table 3 in \cite{zs15}), as indicated in Fig. \ref{fig:ZhaoShen}. The fits are especially good for Tests 1 and 2, and are reminiscent of the previous results (Fig. \ref{fig:WangShen}) for Wang and Shen's experiments on a grease-pancake ice mixture. The SRCJ model is found to perform quite well over the entire range of frequencies considered, even at low frequencies where, despite tending to zero, its fitting curve is closest to the data points. This contrasts with a previous observation regarding the comparison to Wang and Shen's experiments, and may be explained by the fact that the ice thicknesses for Tests 1 and 2 are significantly smaller than those specified in \cite{ws10b}. The agreement is less convincing for Test 3, partly because there are fewer data points available. These data suggest a convex dependence of $q$ on $f$, which is captured to some extent by the CGG, SRCJ and WS models. The laboratory measurements, however, yield much higher attenuation rates at low frequencies than these models predict, indicating a tendency for $q$ to saturate or even increase as $f$ decreases. The EFS curve looks quite different from the other curves, exhibiting a slightly concave profile. It is worth pointing out that all these models underestimate the attenuation rate at low frequencies for all three experiments. From the CGG fit, we find $\beta = 0.01$, $0.07$ and $0.06$ for Test 1, 2 and 3, respectively. The corresponding pore sizes are $a = 2.0$ cm, $4.8$ cm and $8.5$ cm. Note the particularly low level of ice porosity obtained for Test 1. An interpretation for this case is that the ice thickness is so small, and consequently the wave attenuation so weak (as indicated by the low attenuation rates in Fig. \ref{fig:ZhaoShen}a), that the CGG model views it as equivalent to a configuration with pack ice (corresponding to the limit $\beta \rightarrow 0$). This is consistent with the parameter values $\epsilon = 0.34$, $0.94$ and $0.85$ estimated from the SRCJ fits for Test 1, 2 and 3 as reported by Sutherland et al. \cite{srcj19}. These authors also tested their viscous model against the laboratory experiments of Newyear and Martin \cite{nm97}, Wang and Shen \cite{ws10b} and Zhao and Shen \cite{zs15}, and found the parameter $\epsilon$ to be smallest for Test 1 of Zhao and Shen among all the cases considered. We obtain a similar result here, with $\beta$ being smallest for that particular experiment.
Moreover, the set of $\beta = \{ 0.01, 0.07, 0.06 \}$ seems to follow a pattern of variation similar to that for the set of $\epsilon = \{ 0.34, 0.94, 0.85 \}$. Again, for such low levels of ice porosity as given by the CGG fit, the associated pore sizes $a = \{ 2.0, 4.8, 8.5 \}$ cm may be deemed compatible with the typical ice diameters $\ell = \{ 3, 5, 20 \}$ cm observed in \cite{zs15} for Test 1, 2, 3. We point out in passing that Sutherland et al. \cite{srcj19} set $\Delta_0 = 1$ and determined $\epsilon$ by a linear least-squares fitting method. In the present study, we fit \eqref{viscous} to the experimental data by minimizing \eqref{error} with respect to both $\Delta_0$ and $\epsilon$, based on the approach described in Section 2. This produces values of $\Delta_0$ near $1$ and values of $\epsilon$ that are very close to those reported in \cite{srcj19}, which may serve as evidence for the effectiveness of our fitting method. From the SRCJ fit shown in Fig. \ref{fig:ZhaoShen}, we find $(\Delta_0,\epsilon) = (0.96,0.34)$, $(0.97,0.97)$ and $(0.94,0.82)$ for Test 1, 2 and 3. Zhao and Shen \cite{zs15} also used their laboratory data to test the WS model and estimate such parameters as the shear modulus and kinematic viscosity. We will refer to their results as part of the discussion in Section 4. \subsection{Field observations of Wadhams et al. (1988)} During field operations in the Greenland and Bering Seas in the late 1970s and early 1980s, the Scott Polar Research Institute \cite{wsgcm88} carried out a series of experiments where wave attenuation was measured along a line of stations running from the open sea deep into an ice field. Large broken floes are a prominent feature of the ice field in this case. At each station, a wave buoy was inserted between floes to measure the local wave spectrum. A mean ice thickness was determined by coring at each of the experimental floes along the major axis of the incoming wave spectrum. Floe size distributions were derived from overlapping vertical photography from a helicopter. Among the measurements reported in \cite{wsgcm88} (see their Table 2), we will use those from the Greenland Sea in 1979 and from the Bering Sea in 1983. Other data sets (e.g. 1978 Greenland Sea and 1979 Bering Sea) were deemed not suitable due to possibly larger experimental error or unwanted physical effects such as wave reflection/absorption from the fjords, as mentioned in \cite{km08}. We will take this opportunity to compare with results of Kohout and Meylan \cite{km08} (hereafter referred to as KM) who also tested their scattering model against these field observations. An intriguing feature of the 1979 Greenland Sea and 1983 Bering Sea measurements is that they show a roll-over of attenuation rate as a function of wave period (or wave frequency), in lieu of a monotonic behavior. This roll-over occurs at short periods (or high frequencies) in the range considered. Continuum viscoelastic models or discrete scattering models have usually been unable to predict this phenomenon. Possible explanations that have been suggested include wind forcing, nonlinear wave interactions or instrument noise \cite{lkdwgs17,ph96,thmkk21,wsgcm88}. An exception that we are aware of in the context of linear theory is the three-layer viscoelastic model with eddy viscosity as recently proposed by Zhao and Shen \cite{zs18}. 
Their numerical results show a roll-over that accentuates as the thickness of the turbulent boundary layer (located between the viscoelastic ice layer and the inviscid water layer) increases. However, no comparison with field data featuring the roll-over was presented in that study. A similar phenomenon was observed by Liu et al. \cite{lhv91} based on a linear model for a thin elastic plate with eddy viscosity \cite{lm88}. These authors derived a temporal rate of wave attenuation and converted it to a spatial rate by dividing it by the group velocity. As noted in \cite{lkdwgs17}, this temporal rate is a monotonic function of frequency and so the fact that the spatial rate exhibits roll-over is likely due to the group velocity being non-monotonic and reaching a minimum at some frequency \cite{gp12}. Therefore, it is not clear from this result whether the roll-over effect is an intrinsic feature of the thin-plate viscoelastic model or is simply an artifact of the observation procedure. \subsubsection{Greenland Sea, 10 September 1979} During this experiment, the ice cover was sparse and the floes were generally large. The ice concentration was estimated to be $C = 0.17$ from photograph analysis. Because ice thicknesses could not be determined on that day and were not reported, we choose $h = 3.1$ m following Kohout and Meylan \cite{km08} who suggest using the floe thickness from the 1978 data, which was based on 14 measurements through smooth areas. We set $H = 1500$ m (average depth of the Greenland Sea) and, for the CGG model, we assign the value $\beta = 1 - C = 0.83$ to ice porosity. Note that the field measurements under consideration are on the wave spectrum, which is proportional to the square of the wave amplitude. Accordingly, we halve the corresponding decay rates when comparing to theoretical predictions for the wave amplitude. We see in Fig. \ref{fig:Greenland} that the CGG model fits the field observations well, despite the small number of data points available. Among all the models at play, it is the only one that is able to reproduce some roll-over of $q$, with a peak near $ f = 0.13$ Hz. In fairness, we should mention that experimental errors are more appreciable at this end of the spectrum. The CGG model also captures the stronger decay of attenuation rate at lower frequencies, although its predictions of $q$ tend to be even lower than the measured values in the limit $f \rightarrow 0$. By contrast, the EFS and WS curves are monotonically increasing with $f$, almost linearly over the range of frequencies considered. The SRCJ fit is also found to be monotonically increasing with frequency and is not plotted in this figure. Instead, we show the KM fit which is extracted from Fig. 8 in \cite{km08} (with appropriate rescaling to convert the dimensionless energy rates per floe number in that figure to dimensional rates of spatial attenuation for the wave amplitude). We point out that the KM model is a scattering model and is of different nature from the continuum formulation that is highlighted in the present study. It is thus not further discussed here and the reader is directed to \cite{km08} for more detail. Because scattering is believed to be the dominant mechanism for wave attenuation in broken floe fields, the KM model serves as a suitable independent reference for the comparison with field observations from \cite{wsgcm88}. 
The KM curve appears to be rougher than the other theoretical curves, as it represents the average of 100 simulations with different random realizations of the floe size distribution. We can nonetheless discern a general trend that is monotonically increasing with frequency, and is approximately linear with a slope close to that of the WS curve. The CGG fit presented in Fig. \ref{fig:Greenland} returns a pore size $a = 14.6$ m. While it is difficult to give a physical interpretation for this parameter from the viewpoint of effective medium theory, we may associate it with a characteristic horizontal size of open-water areas in the context of an extensive broken floe field. It is reassuring that we find a value of $a$ which is significantly larger than those obtained for the (smaller-scale) laboratory experiments of Wang and Shen \cite{ws10b} and Zhao and Shen \cite{zs15}. Moreover, although pore size in the CGG model does not signify floe size as mentioned earlier (and these two parameters are not necessarily correlated), we deem it consistent that the estimated value $a = 14.6$ m is somewhat comparable in order of magnitude to the typical floe size ($\ell = 50$--$80$ m) observed on this expedition, as reported in \cite{km08}.

\subsubsection{Bering Sea, 7 February 1983}

This experiment was carried out as part of the MIZEX West study in 1983. Following Perrie and Hu \cite{ph96} and Kohout and Meylan \cite{km08}, we take representative values for the ice concentration and thickness in this case to be $C = 0.72$ and $h = 1.5$ m, respectively. The ice cover was thus less fragmented than in the previous environment. We select $H = 1500$ m for the average depth of the Bering Sea and prescribe $\beta = 1 - C = 0.28$ in the CGG model. As shown in Fig. \ref{fig:Bering}, the roll-over is even more apparent here than in the previous observations, due to the larger number of data points and smaller experimental errors. Overall, the same comments as in the previous section can be made on the comparison between the measurements and predictions. The CGG model can somewhat reproduce the roll-over of $q$ near $f = 0.13$ Hz, despite the fact that the corresponding fit appears smoother in this region. It underestimates the peak amplitude and slightly overshoots the peak frequency. Interestingly, these relative features of the roll-over from the field observations and numerical estimates are reminiscent of the comparison given in \cite{lhv91} (see their Fig. 13) between their viscoelastic theory and the same Bering Sea data. Note that the attenuation rate is plotted as a function of wave period rather than frequency in their Fig. 13, where the roll-over takes place at short periods. At the opposite end of the spectrum, the CGG fit is also found to provide a good approximation for the low-frequency tail. We see again in Fig. \ref{fig:Bering} that none of the other models produce a roll-over. The associated curves are all monotonically increasing with frequency, and look similar to those in Fig. \ref{fig:Greenland}, although they seem to display a more convex shape here. This convexity is especially pronounced for the KM and WS curves (the former is extracted from Fig. 11 in \cite{km08}). Notice again that the KM curve is rougher than the other curves, for the same reason as mentioned earlier. For these field data, the pore size deduced from the CGG fit turns out to be $a = 22.0$ m, which is not so different from the previous prediction ($a = 14.6$ m) for the Greenland Sea experiment.
Given the denser floe field here, we might have expected a smaller pore size; nevertheless, this value $a = 22.0$ m is still distinctly larger than those found for the frazil/pancake ice covers generated in the laboratory experiments of Wang and Shen \cite{ws10b} and Zhao and Shen \cite{zs15}. It is striking how close this estimated pore size is to the floe diameter $\ell = 14.5$ m that was assumed by Perrie and Hu \cite{ph96} in their simulations of the Bering Sea observations. Although it is difficult to single out, within the CGG formulation, a specific physical mechanism responsible for the observed roll-over, we recall preliminary results from \cite{cgg19} suggesting that the relative motion between different constituents of the ice cover induces friction that may interfere with other (bulk) dissipative effects to help produce this non-monotonic behavior of the attenuation rate. As can be seen from the definition \eqref{friction} of its controlling parameter $b$, in the CGG view this phenomenon is directly linked to the porous (hence heterogeneous) nature of the ice cover. Details of the frictional process are unclear in this effective-medium approach, so we do not attempt to interpret it further at this point. We have confirmed the previous presumption by fitting the CGG model to the Bering and Greenland Seas data in the absence of frictional effects (i.e. with $b$ set to zero). No roll-over emerged from these computations (not shown here for brevity); the CGG curve is then monotonically increasing with frequency and looks similar to the WS curve.

\subsection{Field observations of Kohout et al. (2014)}

We turn our attention to a more recent data set that was collected in the Antarctic MIZ as part of the Australian Antarctic Division's second Sea Ice Physics and Ecosystem Experiment in 2012 \cite{kw13}. Wave measurements were made simultaneously using contemporary sensors at up to five locations on a transect spanning up to $250$ km. Kohout et al. \cite{kwdm14} provided a preliminary report on these measurements to support the claim that wave activity and ice extent are correlated. A spectral analysis of the data was performed by Meylan et al. \cite{mbk14}, who examined in particular the dependence of attenuation rates on wave periods. Following Mosig et al. \cite{mms15}, we assume $h = 1$ m and $H = 4300$ m for our computations in this setting. Four estimates of the mean ice concentration, $C = 0.210$, $0.481$, $0.498$ and $0.576$, in areas of the Antarctic MIZ where the wave sensors drifted are given in \cite{mbk14}. These estimates were calculated using Nimbus-7 Scanning Multichannel Microwave Radiometer and Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager Sounder (SSMIS) passive microwave data. As a representative value, we specify the average $\beta = 0.56$ of the corresponding ice porosities in the CGG model. A camera installed on the upper deck of the ship monitored the floe size distribution during this expedition. Photographs of the broken floe field taken by this camera can be seen in \cite{mbk14}. Attenuation rates extracted from Fig. 8 in \cite{mms15} (see also Fig. 4 in \cite{mbk14}) are now shown in Fig. \ref{fig:Antarctic} and compared to theoretical predictions. Again, we take into account the difference between data on wave energy decay from \cite{mbk14} and estimates on wave amplitude decay from the various models by halving the former decay rates.
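This halving is simply the statement that an amplitude decay rate is half the corresponding energy decay rate: if the wave amplitude decays as $A(x) \propto e^{-q x}$, then the spectral energy scales as
\begin{equation*}
E(x) \propto A^2(x) \propto e^{-2 q x}, \qquad \mbox{so that} \quad q = \frac{1}{2}\, q_E \,,
\end{equation*}
where $q_E$ denotes the attenuation rate inferred from the measured wave spectrum.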
For this particular data set, we plot $q$ as a function of $T$ (wave period) rather than $f$ (wave frequency), as originally presented in \cite{mbk14,mms15}, to retain a uniform resolution over the range of periods considered. Unlike the field observations discussed in the previous section, no roll-over is discernible from the data points in Fig. \ref{fig:Antarctic}. Accordingly, none of the models involved in this comparison (including the CGG model) predicts such a phenomenon; their fitting curves are all monotonically decreasing with increasing $T$. Here, the EFS model provides the closest fit, as indicated in Fig. \ref{fig:Antarctic}. The agreement is especially good at long periods while, as $T \rightarrow 0$, this model tends to underestimate the attenuation rate. Note that our version of the EFS curve bears a resemblance to the original one shown in \cite{mms15} (see their Fig. 8), which may be viewed as further evidence for the effectiveness of our fitting procedure. By contrast, the CGG and WS curves are steeper, falling off more quickly as $T$ increases. These two models produce negligible values of $q$ at long periods, which are distinctly lower than the field data over most of the range of periods probed. On the other hand, they tend to overestimate the decay rates at short periods. Despite these discrepancies, the CGG fit is seen to lie within or near experimental error, while the WS fit tends to lie further below. We remark in passing that the decay rates observed in this case and in the Arctic MIZ \cite{wsgcm88} are significantly lower than those measured in the laboratory experiments as discussed previously. This supports a previous statement from Section 2 that the decay rates in frazil/pancake ice can be several orders of magnitude greater than in a broken floe field (e.g. compare values of $q$ between Figs. \ref{fig:WangShen} and \ref{fig:Antarctic}). The pore size returned by the CGG fit to these field measurements is $a = 72.0$ m, which is larger than the predictions for the two previous data sets from the Arctic MIZ. While the photographs in \cite{mbk14} might suggest a lower value of $a$ for this broken floe field, we point out that these were taken immediately after deployment of the sensors. Over the duration of their operation, these sensors tended to drift into the open ocean, as mentioned in \cite{mbk14}. Furthermore, considering that the MIZ explored was overall more on the sparse side ($\beta = 0.56$), with dominant floe sizes $\ell$ ranging from a few meters to greater than $100$ m in the various areas visited by the sensors, we deem the pore size $a = 72.0$ m estimated from the CGG model to be reasonable here as well.

\section{Discussion on shear modulus and kinematic viscosity}

We further check the performance of the CGG model by comparing its estimates of shear modulus $\mu$ and kinematic viscosity $\eta$ with predictions by the EFS and WS models. These two parameters are important measures of effective viscoelastic properties of the ice cover and are common to all three continuum formulations. Such an assessment is a useful part of the calibration of these models in view of potential applications to large-scale wave forecasting in the polar regions. Similar parametric calibration of the EFS and WS models has been conducted in \cite{cheng17,mms15,zs15}, although these studies used different methods to fit the theoretical predictions to experimental data.
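Before turning to the numbers, it may help to make the fitting step explicit. The following is a minimal Python sketch of the error minimization used to calibrate a pair of rheological parameters. It is an illustration only: the data arrays are placeholders, and \texttt{q\_model} is a runnable stand-in for the actual dispersion-relation solver, with the discrete misfit standing in for the error functional \eqref{error}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Measured wave frequencies (Hz) and attenuation rates (1/m);
# placeholder values for illustration only.
f_data = np.array([0.6, 0.7, 0.8, 0.9, 1.0])
q_data = np.array([0.05, 0.09, 0.16, 0.27, 0.40])

def q_model(f, mu, eta):
    # Stand-in for the model attenuation rate: in practice, solve
    # the dispersion relation for the complex wavenumber k at
    # angular frequency 2*pi*f and return Im(k).
    return eta * (2.0 * np.pi * f) ** 2 / (1.0 + mu)

def l2_error(params):
    # Discrete L2 misfit between model and data.
    mu, eta = params
    return np.sum((q_model(f_data, mu, eta) - q_data) ** 2)

# Minimize over the two parameters from an initial guess; bounds
# keep them physically non-negative.
res = minimize(l2_error, x0=[1.0, 0.05],
               bounds=[(0.0, None), (0.0, None)])
mu_best, eta_best = res.x
\end{verbatim}
In the actual computations, the same minimization is carried out over whichever parameters a given model exposes, e.g. $(\mu,\eta)$ for the viscoelastic models or $(\Delta_0,\epsilon)$ for the SRCJ parameterization.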
Table 1 lists values of $\mu$ and $\eta$ as determined from the CGG, EFS and WS fits to the laboratory experiments and field observations that we discussed in the previous sections. To highlight the effective character of these models as applied to wave propagation in various ice-cover types, these estimates are normalized relative to typical values $\mu \simeq \mu_0 = 10^9$ Pa for pack ice \cite{mms15,wf92} and $\eta \simeq \eta_0 = 10^{-2}$ m$^2$ s$^{-1}$ for grease or pancake ice \cite{ddmbw15,nm99}. As alluded to in Section 2, Table 1 confirms that both parameters can take a wide range of values depending on the particular model and ice conditions. Among all the cases considered, the highest values of $\mu$ and $\eta$ are both achieved by the EFS fit to the Greenland Sea observations \cite{wsgcm88}. The lowest value of $\mu$ is given by the WS fit to the laboratory Test 1 of Newyear and Martin \cite{nm97}. The lowest value of $\eta$ is returned by the CGG fit to the Tank 2 experiment of Wang and Shen \cite{ws10b}. These extreme values are highlighted in blue (highest) and red (lowest) in Table 1. As expected, for all three models, the shear modulus $\mu$ (which is a measure of the ice-cover's elasticity) is found to be smallest for grease ice (Test 1 in \cite{nm97}) and largest for a broken floe field (Arctic MIZ \cite{wsgcm88}). Their estimates of $\mu$ for both the Antarctic and Arctic MIZ remain overall within two orders of magnitude of the typical value $\mu_0$ for pack ice. By contrast, the kinematic viscosity $\eta$ (which may represent a combination of various attenuating effects in this continuum framework) exhibits a more complicated behavior depending on the particular model and ice conditions. We point out, however, that all three models predict $\eta$ to be on the order of $\eta_0$ for the grease-ice experiments \cite{nm97}, which is consistent with the values $\eta = 2$--$3 \, \eta_0$ inferred by Newyear and Martin \cite{nm99}, who fitted data from \cite{nm97} to Keller's two-layer viscous model \cite{k98}. For a broken floe field, the viscosity estimates from both the EFS and WS models are found to be larger by several orders of magnitude than their counterparts for grease ice. On the other hand, the corresponding predictions from the CGG model remain comparable between these two types of ice cover. On a related note, we see that $\mu$ and $\eta$ as determined by the CGG fit only vary over $6$ and $3$ orders of magnitude respectively, among all the cases considered. By contrast, $\mu$ and $\eta$ as predicted by the WS fit vary over $10$ and $6$ orders of magnitude respectively, while both parameters in the EFS model vary over $10$ orders of magnitude. This suggests that both $\mu$ and $\eta$ in the CGG model may only require moderate tuning in view of potential applications to operational wave forecasting. Our estimates of $\mu$ ($4.2 \times 10^{11}$ Pa) and $\eta$ ($4.2 \times 10^6$ m$^2$ s$^{-1}$) from the EFS fit to the Antarctic MIZ data are broadly consistent with those reported in \cite{mms15} for the same model ($\mu = 4.9 \times 10^{12}$ Pa, $\eta = 5.0 \times 10^7$ m$^2$ s$^{-1}$). Both are several orders of magnitude larger than the reference values $\mu_0$ and $\eta_0$ (especially for $\eta$). Regardless of how close the fit is, the EFS model tends to require very large values of these parameters in order to reproduce wave attenuation in broken floe fields of the Antarctic and Arctic MIZ.
This may be interpreted as a way to make up for the thin-plate approximation so that elastic properties of the ice cover would be sufficiently well captured in these situations. Our estimates of $\mu$ ($\{ 4.2, 2.5 \times 10^5, 8.3 \times 10^5 \}$ Pa) and $\eta$ ($\{ 1.5 \times 10^{-2}, 45.0, 131.6 \}$ m$^2$ s$^{-1}$) from the WS fit to the laboratory measurements of Zhao and Shen \cite{zs15} are in good agreement with their own findings ($\mu = \{ 21, 5 \times 10^5, 1 \times 10^6 \}$ Pa, $\eta = \{ 1.4 \times 10^{-2}, 61, 140 \}$ m$^2$ s$^{-1}$) for Test 1, 2, 3 respectively (see their Table 2). These authors also fitted the WS model to a data set from \cite{ws10b} (it was not clearly stated which experiment was considered) and obtained $\mu = 48$ Pa, $\eta = 4 \times 10^{-2}$ m$^2$ s$^{-1}$ which again are fairly close in terms of order of magnitude to our own results ($\mu = \{ 1.1 \times 10^3, 4.0 \times 10^2 \}$ Pa, $\eta = \{ 9.6 \times 10^{-2}, 5.1 \times 10^{-2} \}$ m$^2$ s$^{-1}$) for Tank 2 and 3 respectively. When examining the CGG and WS models against the field observations, we see that their predictions of $\mu$ are comparable to each other on the order of $10^7$ Pa, which contrasts with the much higher values from the EFS fit, as noted above. This similarity however does not extend to $\eta$ since the WS model yields values that are higher than the CGG predictions by several orders of magnitude. Again, the range of estimated $\eta$ from the CGG fit is strikingly narrow among all the cases considered, in comparison to the other two models. It is also worth mentioning that the estimates $\mu = 3.3 \times 10^7$ Pa and $\eta = 1.2 \times 10^{-2}$ m$^2$ s$^{-1}$ from the CGG fit to the Bering Sea measurements are consistent with those ($\mu = 2.3 \times 10^9$ Pa, $\eta = 1.5 \times 10^{-2}$ m$^2$ s$^{-1}$) reported in \cite{lhv91} for the same data set. As stated earlier, these authors used a thin-plate viscoelastic model and were able to emulate the roll-over phenomenon to some extent (see their Fig. 13). In that study, $\mu$ was assigned a typical value $\sim \mu_0$ for pack ice while $\eta$ was deduced from the data fitting. Interestingly, the fitting curve shown in Fig. 13 of \cite{lhv91} bears a resemblance to the CGG curve in our Fig. \ref{fig:Bering} (modulo the switch between wave frequency and period for the horizontal axis). Lastly, we remark that the CGG predictions of $\eta \sim 10^{-2}$--$10^0 \, \eta_0$ for the data sets from the Antarctic and Arctic MIZ are encouraging in view of earlier measurements that reported values of eddy viscosity under large ice floes, ranging from $2.4 \times 10^{-3}$ m$^2$ s$^{-1}$ in the central Arctic Ocean \cite{h66} to $2.1 \times 10^{-2}$ m$^2$ s$^{-1}$ in the Weddell Sea (Antarctic MIZ) \cite{mm94}. \section{Sensitivity tests} Given the rather large number of rheological parameters associated with the ice cover in the CGG formulation, it is of interest to check their individual relevance to this problem and test the sensitivity of attenuation rate predictions with respect to these parameters. For this purpose, we take the Bering Sea observations as a representative discriminating case because it exhibits unusual features such as the roll-over phenomenon and contains a fair number of data points. We focus our attention on the following parameters: $h$ (thickness), $\beta$ (porosity), $a$ (pore size), $\eta$ (kinematic viscosity), $\mu$ (shear modulus) and $\nu$ (Poisson's ratio). 
As alluded to in previous sections, some of these parameters, which are related to geometrical features of the ice cover (e.g. thickness, porosity, pore size), may be estimated by in-situ measurement or remote sensing, while others, which are related to material properties (e.g. kinematic viscosity, shear modulus), would be more difficult to determine or guess. With this in mind, a sensitivity analysis may help assign predefined values to some of these parameters (as opposed to other parameters that may require more tuning), in order to reduce the parameter space for the CGG model. For each of these parameters, Fig. \ref{fig:parameters} displays a set of curves for the attenuation rate as predicted by the CGG model. The reference set of parameters is given by the corresponding best fit to the Bering Sea data, as discussed in the previous section (see Fig. \ref{fig:Bering}). This set of curves is obtained by varying the parameter under consideration while freezing the other parameters at their original best-fitting values. The objective of such an analysis is to examine how perturbations in individual parameters would affect the original best fit. The range of perturbations for each parameter is chosen to be an interval around its best-fitting value. Ice thickness is a distinctive feature of the ice cover in the CGG formulation, as opposed to the viewpoint in the thin-plate approximation. Fig. \ref{fig:parameters}(a) reveals that the roll-over tends to shift upward and to higher frequencies as $h$ is decreased. This tendency is quite pronounced and suggests strong sensitivity of $q$ with respect to $h$. A decrease in $h$ by a factor of $3$ shifts the peak outside the frequency range of the experimental data, and moves it out of sight in this figure. The fact that shorter waves (i.e. at higher frequencies) experience more attenuation in thinner ice (i.e. for smaller $h$), which is rather counter-intuitive, is reminiscent of a common feature in models for water waves over a seabed composed of a viscous mud layer, where dissipation has a non-monotonic dependence on mud-layer thickness, with thicker layers being less dissipative \cite{cgg19,crl17,dl78}. Ice porosity and pore size are rheological parameters that are characteristic of the present model. As illustrated in Figs. \ref{fig:parameters}(b) and (c), increasing $\beta$ or decreasing $a$ has the basic effect of raising the attenuation rate and accentuating the roll-over. In either case, the peak remains around $f = 0.13$ Hz as $\beta$ or $a$ is varied. Note that the dichotomy in variation between $\beta$ and $a$ is attributed to their contrasting roles in the friction process. Because the parameter $b$ depends linearly on $\beta$ while it is inversely proportional to $a^2$ according to \eqref{friction}, friction is enhanced (and so is the roll-over) as $\beta$ is increased or $a$ is decreased. A similar behavior occurs as $\eta$ is increased (see Fig. \ref{fig:parameters}d), which is anticipated considering the linear dependence of $b$ on $\eta$. A slightly more complicated picture is observed for the variation with respect to $\mu$. Inspecting Fig. \ref{fig:parameters}(e), we see that the roll-over tends to shift upward and to lower frequencies as $\mu$ is increased. The sensitivity of $q$ to $\nu$ is relatively weak and is confined to the high-frequency region, as suggested by Fig. \ref{fig:parameters}(f).
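The dependences just described can be summarized in a single proportionality (the prefactor of \eqref{friction} is not reproduced here):
\begin{equation*}
b \;\propto\; \frac{\beta\, \eta}{a^{2}} \,,
\end{equation*}
so that friction, and with it the roll-over, is enhanced when $\beta$ or $\eta$ increases or when $a$ decreases. For completeness, a minimal sketch of the one-parameter-at-a-time sweep underlying Fig. \ref{fig:parameters} is given below; \texttt{q\_model} is again a runnable placeholder for the CGG dispersion-relation solver, and the reference values are only indicative of the Bering Sea best fit.
\begin{verbatim}
import numpy as np

# Reference (best-fitting) parameter set; placeholder values only.
ref = {"h": 1.5, "beta": 0.28, "a": 22.0, "eta": 0.01,
       "mu": 3.3e7, "nu": 0.4}

def q_model(f, p):
    # Stand-in attenuation rate; in practice, solve the CGG
    # dispersion relation for the parameter set p.
    b = p["beta"] * p["eta"] / p["a"] ** 2  # assumed friction scaling
    return b * (2.0 * np.pi * f) ** 2 * p["h"]

f_grid = np.linspace(0.05, 0.5, 100)

def sweep(name, factors=(0.5, 1.0, 2.0)):
    # Vary one parameter around its reference value while freezing
    # the others; return one attenuation curve per factor.
    curves = {}
    for c in factors:
        p = dict(ref)
        p[name] = c * ref[name]
        curves[c] = q_model(f_grid, p)
    return curves

curves_h = sweep("h")  # sensitivity with respect to ice thickness
\end{verbatim}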
The weak sensitivity to $\nu$ explains why, for convenience and given that $0 < \nu < 1/2$, we set $\nu = 0.4$ in the previous computations (which is close to the typical value $\nu = 1/3$ for pack ice \cite{wf92}). Our sensitivity tests indicate that all these parameters have some influence on the roll-over, affecting its amplitude and/or position. Sensitivity of $q$ with respect to $h$ and $\mu$ seems to be most nontrivial, and is particularly strong for $h$. Liu et al. \cite{lhv91} also concluded from their model-data comparison that the frequency at which the roll-over occurs depends on ice conditions, especially ice thickness. In light of this sensitivity analysis and the results from Section 3, to help reduce the parameter space, it would also be reasonable to fix the pore size at some predefined value of order $10$ m for potential applications to wave forecasting in the MIZ. This is even more relevant considering that this parameter only appears in the expression \eqref{friction} of the friction coefficient for the CGG model.

\section{Conclusions}

To assess the recently proposed CGG model, we test it against a selection of laboratory experiments and field observations taken from the literature on wave attenuation in sea ice. Altogether, these measurements span a wide range of ice conditions and wave frequencies. We fit the theoretical predictions to data on attenuation rate via error minimization, which in turn yields estimates for the effective rheological parameters. Whenever the information is available, the porosity parameter is assigned a value that is the complement of the mean ice concentration. To further check this model's performance, we also compare it to other existing viscoelastic theories under the same conditions. Numerical solutions of the dispersion relations can be found using relatively simple selection criteria. As a byproduct, we independently recover (via a different fitting procedure) a number of results that are similar to those reported in previous studies. Special attention is paid to the EFS and WS formulations, which share some common features with the CGG system. For such parameters as $\mu$ (shear modulus) and $\eta$ (kinematic viscosity), which control the viscoelastic properties, we find that the range of estimated values (over all the situations considered) may differ significantly from one model to another. Among these three representations, the CGG (resp. EFS) model turns out to be the one for which the predicted range of both $\mu$ and $\eta$ is the narrowest (resp. widest) in orders of magnitude. Even for individual situations, this difference in parameter recovery may be considerable. As expected, for grease ice, all three models predict $\eta$ on the order of $\eta_0 = 10^{-2}$ m$^2$ s$^{-1}$ and $\mu$ to be essentially negligible compared to the typical value $\mu_0 = 10^9$ Pa for pack ice. On the other hand, for broken floe fields, there is more variability in the determination of these parameters, especially for $\eta$. We obtain in this case values of $\mu$ that may be lower (for CGG and WS) or higher (for EFS) than $\mu_0$ by a few orders of magnitude. Estimates of $\eta$ from the EFS and WS fits tend to be larger than $\eta_0$ by several orders of magnitude, while those from the CGG fit remain around this reference value. Overall, the CGG model provides good fits to the data on attenuation rate for the various cases under consideration.
For the Antarctic MIZ data, the EFS model appears to be a clear favorite, but the corresponding fit is achieved for values of $\mu$ and $\eta$ that are both excessively high, a fact that has also been pointed out in \cite{mms15}. By contrast, the CGG fit returns markedly lower values for these parameters, with $\mu$ being lower than $\mu_0$ by two orders of magnitude and $\eta$ being essentially equal to $\eta_0$. For the Arctic MIZ data (from both the Bering and Greenland Seas), the CGG model is able to reasonably reproduce the roll-over of the attenuation rate, unlike the EFS and WS counterparts, which only predict a monotonic growth with frequency. According to the poroelastic formulation, this intriguing phenomenon is attributed to friction caused by the relative motion between the fluid and solid components of the ice cover, which highlights the role of porosity in the present description of wave-ice interactions, as such friction is directly connected to the porous nature of the ice cover. This dissipative mechanism could possibly be a contributing factor in the roll-over effect as reported in field observations. In the future, various extensions of this work may be envisioned. Further calibration of the CGG model would be suitable using more data (and larger data sets), which may include e.g. recent data from the Arctic MIZ as in \cite{cheng17}. Using such data requires substantial processing and analysis. It would also be of interest to perform more tests against other data sets exhibiting a roll-over, in order to further assess the possible role of friction in this phenomenon. While the CGG system features a larger number of physical parameters than other existing viscoelastic formulations, this study suggests that some of these parameters may be assigned predefined values, or may be estimated by in-situ measurement or remote sensing. Moreover, it is conceivable that the parametric dependence in the CGG model may be mathematically simplified by examining asymptotic regimes (in the limit of vanishing parameters) according to the specific type of ice cover under consideration. Finally, it would be appropriate to extend these results to the three-dimensional setting (for wave propagation in two horizontal directions) as well as to the nonlinear case. Discrepancies that we have observed in comparison to experimental data may partly be attributed to such effects. The nonlinear theory of wave-ice interactions has drawn increasing attention in recent years \cite{dkp19,gp12,gp14,gp17}.
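As an illustration of the root-selection step mentioned in the conclusions, the sketch below locates complex roots of a dispersion relation by Newton iteration started from a set of initial guesses, and keeps the least-attenuated decaying mode. Both the toy dispersion function \texttt{D} and the selection criterion (smallest positive imaginary part) are assumptions made for illustration; they stand in for the actual relations and criteria used in our computations.
\begin{verbatim}
import numpy as np

def D(k, omega):
    # Toy dispersion function D(k; omega) = 0; a simple quadratic
    # stands in for the actual (transcendental) relation.
    return k ** 2 - omega ** 2 * (1.0 + 0.1j)

def newton_root(k0, omega, tol=1e-12, itmax=50):
    # Newton iteration in the complex plane, with a centered
    # finite-difference derivative of D.
    k, dk = k0, 1e-6
    for _ in range(itmax):
        dD = (D(k + dk, omega) - D(k - dk, omega)) / (2.0 * dk)
        step = D(k, omega) / dD
        k -= step
        if abs(step) < tol:
            break
    return k

def physical_root(omega, guesses):
    # Keep the least-attenuated decaying mode travelling in +x:
    # Im(k) > 0 with the smallest imaginary part.
    roots = [newton_root(g, omega) for g in guesses]
    ok = [k for k in roots if k.imag > 1e-10]
    return min(ok, key=lambda k: k.imag) if ok else None

k = physical_root(2.0 * np.pi * 0.2, guesses=[0.1 + 0.01j, 1.0 + 0.1j])
\end{verbatim}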
\clearpage \begin{table}[!t] \hspace{-4cm} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{Experiment} & \multirow{3}{*}{Data set} & \multicolumn{2}{c|}{CGG model} & \multicolumn{2}{c|}{EFS model} & \multicolumn{2}{c|}{WS model} \\ \cline{3-8} & & $\mu$ ($\times 10^9$) & $\eta$ ($\times 10^{-2}$) & $\mu$ ($\times 10^9$) & $\eta$ ($\times 10^{-2}$) & $\mu$ ($\times 10^9$) & $\eta$ ($\times 10^{-2}$) \\ & & (Pa) & (m$^2$ s$^{-1}$) & (Pa) & (m$^2$ s$^{-1}$) & (Pa) & (m$^2$ s$^{-1}$) \\ \hline \multirow{2}{*}{Newyear \& Martin} & Test 1 & $5.00 \times 10^{-7}$ & $7.52$ & $1.17 \times 10^{-7}$ & $2.50$ & \color{red}{$6.40 \times 10^{-11}$} & $2.80$ \\ & Test 2 & $6.00 \times 10^{-7}$ & $9.00$ & $1.26 \times 10^{-7}$ & $2.64$ & $1.20 \times 10^{-10}$ & $3.76$ \\ \hline \multirow{2}{*}{Wang \& Shen} & Tank 2 & $1.82 \times 10^{-4}$ & \color{red}{$1.22 \times 10^{-2}$} & $4.80 \times 10^{-4}$ & $2.25 \times 10^4$ & $1.15 \times 10^{-6}$ & $9.61$ \\ & Tank 3 & $1.73 \times 10^{-4}$ & $4.00 \times 10^{-2}$ & $2.80 \times 10^{-5}$ & $1.20 \times 10^3$ & $4.00 \times 10^{-7}$ & $5.10$ \\ \hline \multirow{3}{*}{Zhao \& Shen} & Test 1 & $7.34 \times 10^{-6}$ & $8.32 \times 10^{-2}$ & $7.20 \times 10^{-6}$ & $8.00 \times 10^2$ & $4.20 \times 10^{-9}$ & $1.46$ \\ & Test 2 & $4.21 \times 10^{-5}$ & $4.68 \times 10^{-2}$ & $9.40 \times 10^{-4}$ & $1.62 \times 10^4$ & $2.47 \times 10^{-4}$ & $4.50 \times 10^3$ \\ & Test 3 & $1.44 \times 10^{-4}$ & $9.22 \times 10^{-2}$ & $7.20 \times 10^{-2}$ & $1.44 \times 10^6$ & $8.32 \times 10^{-4}$ & $1.32 \times 10^4$ \\ \hline \multirow{2}{*}{Wadhams et al.} & Greenland Sea & $1.38 \times 10^{-2}$ & $7.00 \times 10^{-2}$ & \color{blue}{$6.50 \times 10^2$} & \color{blue}{$4.62 \times 10^9$} & $6.75 \times 10^{-2}$ & $1.14 \times 10^5$ \\ & Bering Sea & $3.30 \times 10^{-2}$ & $1.16$ & $1.54$ & $5.28 \times 10^6$ & $4.00 \times 10^{-2}$ & $2.00 \times 10^5$ \\ \hline Kohout et al. & Antarctic MIZ & $1.58 \times 10^{-2}$ & $1.00$ & $4.20 \times 10^2$ & $4.20 \times 10^8$ & $6.00 \times 10^{-3}$ & $1.20 \times 10^4$ \\ \hline \end{tabular} \caption{Estimates of shear modulus and kinematic viscosity from the CGG, EFS and WS fits to data on attenuation rate for the various cases under consideration. Values of shear modulus are normalized relative to $\mu_0 = 10^9$ Pa, while values of kinematic viscosity are normalized relative to $\eta_0 = 10^{-2}$ m$^2$ s$^{-1}$. Lowest estimates are highlighted in red while highest estimates are highlighted in blue.} \end{table} \clearpage \begin{figure}[t!] \centering \begin{subfigure}[h]{0.52\textwidth} \includegraphics[width=\textwidth]{Newyear_test1.jpg} \caption{$h=11.3$ cm} \label{fig:Newyear_table_1} \end{subfigure}\hspace*{\fill} \begin{subfigure}[h]{0.52\textwidth} \includegraphics[width=\textwidth]{Newyear_test2.jpg} \caption{$h=14.6$ cm} \label{fig:Newyear_table_2} \end{subfigure} \caption{Comparison of attenuation rate vs. frequency between model predictions and laboratory data for grease ice from Newyear and Martin \cite{nm97}. Predictions from the CGG, EFS, WS and Keller's models are shown. Laboratory data for (a) $h = 11.3$ cm (test 1), (b) $h = 14.6$ cm (test 2) are presented.} \label{fig:Newyear} \end{figure} \begin{figure}[t!] 
\centering \begin{subfigure}[h]{0.52\textwidth} \includegraphics[width=\textwidth]{WangShen_tank2.jpg} \caption{$h=9.0$ cm} \label{fig:WangShen tank 2} \end{subfigure}\hspace*{\fill} \begin{subfigure}[h]{0.52\textwidth} \includegraphics[width=\textwidth]{WangShen_tank3.jpg} \caption{$h=8.9$ cm} \label{fig:WangShen tank 3} \end{subfigure} \caption{Comparison of attenuation rate vs. frequency between model predictions and laboratory data for a grease-pancake ice mixture from Wang and Shen \cite{ws10b}. Predictions from the CGG, EFS, SRCJ and WS models are shown. Laboratory data for (a) $h = 9.0$ cm (tank 2), (b) $h = 8.9$ cm (tank 3) are presented.} \label{fig:WangShen} \end{figure}

\begin{figure}[t!] \centering \begin{subfigure}[b]{0.55\textwidth} \includegraphics[width=1\linewidth]{ZhaoShen_test1.jpg} \caption{$h=2.5$ cm} \label{fig:test1} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \includegraphics[width=1\linewidth]{ZhaoShen_test2.jpg} \caption{$h=4.0$ cm} \label{fig:test2} \end{subfigure} \begin{subfigure}[b]{0.55\textwidth} \includegraphics[width=1\linewidth]{ZhaoShen_test3.jpg} \caption{$h=7.0$ cm} \label{fig:test3} \end{subfigure} \caption{Comparison of attenuation rate vs. frequency between model predictions and laboratory data from Zhao and Shen \cite{zs15}. Predictions from the CGG, EFS, SRCJ and WS models are shown. Laboratory data for (a) $h = 2.5$ cm (test 1, frazil/pancake ice), (b) $h = 4.0$ cm (test 2, pancake ice), (c) $h = 7.0$ cm (test 3, fragmented ice) are presented.} \label{fig:ZhaoShen} \end{figure}

\begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Greenland_Sep_10_1979.jpg} \caption{Comparison of attenuation rate vs. frequency between model predictions and data for a broken floe field from the Greenland Sea 10 September 1979 experiment in Wadhams et al. \cite{wsgcm88}. Predictions from the CGG, EFS, KM and WS models are shown. Results for $h = 3.1$ m and $H = 1500$ m are presented.} \label{fig:Greenland} \end{figure}

\begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Bering_Feb_7_1983.jpg} \caption{Comparison of attenuation rate vs. frequency between model predictions and data for a broken floe field from the Bering Sea 7 February 1983 experiment in Wadhams et al. \cite{wsgcm88}. Predictions from the CGG, EFS, KM and WS models are shown. Results for $h = 1.5$ m and $H = 1500$ m are presented.} \label{fig:Bering} \end{figure}

\begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Antarctic.jpg} \caption{Comparison of attenuation rate vs. period between model predictions and data for a broken floe field from the Antarctic MIZ \cite{kwdm14,mbk14}. Predictions from the CGG, EFS and WS models are shown. Results for $h = 1$ m and $H = 4300$ m are presented.} \label{fig:Antarctic} \end{figure}

\begin{figure}[t!]
\centering \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{h.jpg} \caption{varying thickness $h$} \label{fig:h} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{beta.jpg} \caption{varying porosity $\beta$} \label{fig:beta} \end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{a.jpg} \caption{varying pore size $a$} \label{fig:a} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{eta.jpg} \caption{varying kinematic viscosity $\eta$} \label{fig:eta} \end{subfigure} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{mu.jpg} \caption{varying shear modulus $\mu$} \label{fig:mu} \end{subfigure}\hspace*{\fill} \begin{subfigure}{0.52\textwidth} \includegraphics[width=\linewidth]{nu.jpg} \caption{varying Poisson's ratio $\nu$} \label{fig:nu} \end{subfigure} \caption{Sensitivity of attenuation rate vs. frequency to varying parameters as predicted by the CGG model. Data for a broken floe field from the Bering Sea 7 February 1983 experiment \cite{wsgcm88} are considered. Results for varying (a) thickness $h$ (m), (b) porosity $\beta$, (c) pore size $a$ (m), (d) kinematic viscosity $\eta$ (m$^2$ s$^{-1}$), (e) shear modulus $\mu$ (Pa), (f) Poisson's ratio $\nu$ are presented.} \label{fig:parameters} \end{figure} \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:1}

Once the Higgs boson is discovered at the LHC, it will be crucial to test its properties and check how well they fit within the standard model (SM) framework. Higgs boson couplings to vector bosons, heavy quarks and heavy leptons can in principle be measured by combining information on different production and decay channels \cite{Reina:2005ae}. A measurement of the Higgs boson coupling to $b$ quarks presently seems quite challenging. On the one hand, the SM Higgs production channel $b\bar{b}\to H$ is overwhelmed by the main production process $gg\to H$ at the LHC \cite{maltoni}. On the other hand, processes involving the $Hb\bar{b}$ coupling via the Higgs decay $H\to b\bar{b}$ (for $m_H\lesssim 140$ GeV) seem at the moment hard to manage, due to the large $b$ (and, more generally, jet) background expected from pure QCD processes. The $H\to b\bar{b}$ decay in the Higgs production via vector-boson fusion (VBF) has been studied in~\cite{higgsplb}. It gives rise to four-jet final states, out of which two jets should be $b$-tagged. Although the VBF final states have quite distinctive kinematical features (i.e., two forward jets with a typical transverse momentum of order $M_W$ plus a resonant $b$-jet pair produced centrally), different sources of QCD backgrounds and hadronic effects presently make the relevance of this channel for a $Hb\bar{b}$ coupling determination difficult to assess. For instance, triggering on $bbjj$ final states must contend with the correspondingly large QCD four-jet trigger rate. The $Ht\bar{t}$ associated production, where the Higgs boson is radiated by a top-quark pair, with subsequent $H\to b\bar{b}$ decay, could also provide a $Hb\bar{b}$ coupling measurement. Nevertheless, the recent inclusion of more reliable QCD background estimates and detector simulation in the corresponding signal analysis~\cite{CMS-TDR} has lowered the expectations for the potential of this channel. Here we report on a further process that could help in determining the $Hb\bar{b}$ coupling, which was recently studied in \cite{nnoi} (where more details can be found). We consider the Higgs boson production in VBF in association with a large transverse-momentum photon (i.e., $p_{\rm T} \gtrsim 20$ GeV) emitted centrally (i.e., with pseudorapidity $|\eta_{\gamma}|<2.5$)
\begin{equation}
pp\to H\, \gamma\, jj + X\,\to b \bar b \,\gamma \,jj\, + X\, ,
\label{process}
\end{equation}
where $H$ decays to $b\bar{b}$, and, at the parton level, the final QCD partons are identified with the corresponding jets $j$. Disregarding the resonant contribution to the process coming from $WH\gamma,\, ZH\gamma$ production, the dominant Feynman diagrams are the ones involving VBF (as shown in Figure~1, where the Higgs decay to $b\bar{b}$ is not shown). Final states $b \bar b\, \gamma\, jj\,$ arising from photon radiation off one of the two $b$-quarks produced in the Higgs boson decay [via $pp\to H(\to b \bar b\,\gamma )\, jj$] fall outside the experimental $m_{b\bar b}$ resolution window around $m_H$, due to the requirement of a large $p_{\rm T}$ photon.

\begin{figure} \centering \includegraphics[height=7cm]{Hg.eps} \caption{Tree-level $t$-channel Feynman diagrams for $H$ production via $pp\to H\,\gamma\, jj$.} \label{fig:1} \end{figure}

\section{Benefits from the central photon} \label{sec:2}

Adding a central photon to the $pp\to H(\to b\bar b) \; jj$ final state, despite an extra power of the electromagnetic
fine-structure constant $\alpha$ that depletes production rates, gives a number of benefits \cite{nnoi}:
\begin{itemize}
\item the large (critical) rate for QCD multi-jet final states that characterizes the background for $pp\to H(\to b\bar b) \; jj$ is depleted, too, by the electromagnetic coupling when requiring a further photon in the final state; this is expected to improve the triggering efficiency of the detector;
\item the large gluonic component entering the QCD background to the plain $b \bar b \,j j$ final state at parton level does not take part in the radiation of a large $p_{\rm T}$ photon, thus making part of the potential background to $H \,\gamma \,jj$ {\it inactive};
\item further dynamical {\it coherence} effects dramatically suppress the radiation of a photon in the irreducible QCD background to $b \bar b \,\gamma \,jj$, when the photon is central (i.e. emitted outside the typical radiation cone around the initial/final quark legs, for quarks scattered in the $t$-channel);
\item a similar {\it coherence} effect depletes the $HZZ$ amplitudes (involving neutral currents) with respect to the $HWW$ ones (involving charged currents) in Figure~1, increasing the relative sensitivity to the $HWW$ coupling in the radiative channel; then, a measurement of the $b \bar b \,\gamma \,jj$ rate could lead to a combined determination of the Higgs boson couplings to $b$ quarks and $W$ vector bosons, with less contamination from the $HZZ$ coupling uncertainties;
\item the requirement of a central photon strongly reduces the background arising from alternative Higgs boson production processes, such as the one coming from the virtual gluon fusion $g^{\ast} g^{\ast} \to H$ diagrams, with a photon radiated off any external quark leg.
\end{itemize}
In the following, we will elaborate on a few of the previous items.

\section{Production rates: signal versus background} \label{sec:3}

In Table~\ref{tab:1}, the cross sections for the signal and the irreducible background for the process in Eq.~(\ref{process}) are shown for three values of the Higgs boson mass, as independently obtained by the Monte Carlo event generators ALPGEN~\cite{alpgen} and MadEvent~\cite{madevent}, with the choice of parameters described in \cite{nnoi}. The following event selection, which optimizes the significance $S/\sqrt{B}$, has been applied:
\begin{eqnarray}
&&p_{\rm T}^{j1,b1} \geq 60\, {\rm GeV}, \, \, \, \,\, p_{\rm T}^{j2,b2} \geq 30\, {\rm GeV}, \, \, \, \, \, p_{\rm T}^\gamma \geq 20\, {\rm GeV}, \, \,\, \Delta R_{ik} \geq 0.7,\, \nonumber \\
&&|\eta_\gamma|\leq 2.5, \, \, \,\,\, |\eta_b|\leq 2.5, \, \, \,\,\, |\eta_j|\leq 5, \nonumber \\
&&m_{jj} > 800\, {\rm GeV}, \, \, \, \,\,\,\, m_H(1-10\%) \leq m_{b \bar b} \leq m_H(1+10\%), \nonumber \\
&&|\Delta \eta_{jj}| > 4, \, \, \, \, \, m_{\gamma H} \geq 160\, {\rm GeV}, \, \, \, \, \, \Delta R_{\gamma b/\gamma j} \geq 1.2\, ,
\label{selec}
\end{eqnarray}
where $ik$ is any pair of partons in the final state, and $\Delta R_{ik} =\sqrt{\Delta^2\eta_{ik}+\Delta^2\phi_{ik}}$, with $\eta$ the pseudorapidity and $\phi$ the azimuthal angle. For comparison, cross sections and the irreducible background for the plain VBF process are also shown.

\begin{table} \centering \caption{Cross sections for the signal and the irreducible background for the {\it optimized} event selection, as defined in Eq.~(\ref{selec}). The signal and irreducible background production rates for the plain VBF process are also shown, with the same event selection.
} \label{tab:1} \begin{tabular}{llll} \hline\noalign{\smallskip} $m_H $ & 120~GeV & $\;\; 130$~GeV & $\;\;140$~GeV \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\sigma[H(\to b \bar b) \gamma jj] \;\;\;\;\;\;\;\;$ &3.6~fb &$\;\;\;$2.9~fb &$\;\;\;$2.0~fb \\ $\sigma[{b \bar b} \gamma jj] $ &$\;$33~fb &$\;\;\;\;$38~fb &$\;\;\;\;$40~fb \\ $\sigma[H(\to b \bar b) jj] $ & 320~fb & $\;\;\;$255~fb & $\;\;\;$168~fb \\ $\sigma[{b \bar b} jj] $ & 103~pb & $\;\;\;$102~pb &$\;\;\;$ 98~pb \\ \noalign{\smallskip}\hline \end{tabular} \end{table}

If the usual pattern of QED corrections held, requiring a further hard photon would keep the relative weight of signal and background unchanged with respect to the $pp\to H\, jj$ case. Indeed, the rates for $pp\to H\, \gamma\, jj$ and its background would be given by an ${\cal O}(\alpha)$ rescaling of the rates for the $H\, jj$ signal and its background, respectively, keeping the $S/B$ ratio approximately stable. On the other hand, both the $H\, \gamma\, jj$ signal and its background statistics would decrease according to the rescaling factor ${\cal O}(\alpha)$. Consequently, since $S \to {\cal O}(\alpha)\, S$ and $B \to {\cal O}(\alpha)\, B$ imply $S/\sqrt{B} \to \sqrt{{\cal O}(\alpha)}\; S/\sqrt{B}$, if $(S/\sqrt{B})|_{H(\gamma)\,jj}$ denotes the signal significance for the VBF process with (without) a central photon, the signal significance for $pp\to H\, \gamma\, jj$ would drop as $(S/\sqrt{B})|_{H\gamma \,jj} \sim \sqrt{\alpha}\, (S/\sqrt{B})|_{H\,jj}\lesssim 1/10\,(S/\sqrt{B})|_{H\,jj}$ with respect to the basic VBF process. This would call into question the usefulness of considering the $H\, \gamma \, jj$ variant of the $H\, jj$ process, apart from the expected improvement in the triggering efficiency of the detectors due to the lower background rates. In Table~\ref{tab:1}, one can see that the naive QED expectations do not necessarily apply when restricted regions of phase space are considered (as discussed in detail in \cite{nnoi}). We see that the naive QED rescaling fails for the main background process $pp\to b \bar b\,( \gamma)\, jj\,$, whose rate drops by about a factor of 3000 after requiring a central photon, due to the destructive interference ({\it coherence}) effects discussed in \cite{nnoi}. Since, on the other hand, the signal cross section roughly follows the naive QED rescaling $\sigma_{\gamma}\sim\sigma/100$, the requirement of a central photon gives rise to a dramatic increase (by more than one order of magnitude) in the $S/B$ ratio. Indeed, in Table~\ref{tab:2}, comparable statistical significances for the signal with and without a photon are obtained, for an integrated luminosity of $100$~fb$^{-1}$. The impact of including a few main reducible backgrounds for $pp\to b \bar b\, \gamma\, jj\,$ has also been studied in \cite{nnoi}, and found to be moderate.

\begin{table} \centering \caption{Statistical significances with the optimized event selection as defined in Eq.~(\ref{selec}), for an integrated luminosity of $100$~fb$^{-1}$. The value $\epsilon_b = 60$\% for the $b$-tagging efficiency and a Higgs boson event reduction by $\epsilon_{b \bar b}\simeq$ 70\%, due to the finite ($\pm$10\%) $b \bar b$ mass resolution, are assumed. Jet-tagging efficiency and photon-identification efficiency are set to 100\%.
Only the irreducible background is included in $B$.} \label{tab:2} \begin{tabular}{llll} \hline\noalign{\smallskip} $m_H $ & 120~GeV & $\;\; 130$~GeV & $\;\;140$~GeV \\ \noalign{\smallskip}\hline\noalign{\smallskip} $S / \sqrt{B}|_{H\gamma\,jj}$ & $\;\;$ 2.6 &$\;\;$ 2.0 &$\;\;$ 1.3 \\ $S / \sqrt{B}|_{H\,jj}$ &$\;\;$ 3.5 &$\;\;$ 2.8 & $\;\;\;$1.9 \\ \noalign{\smallskip}\hline \end{tabular} \end{table}

Apart from enhancing the $S/B$ ratio, coherence effects in $pp\to H(\to b\bar b)\gamma\,jj$ remarkably curb the relative contribution of the $ZZ\to H$ boson fusion diagrams with respect to the $WW\to H$ ones (see \cite{nnoi} for further details). Then, $H(\to b\bar b)\gamma\,jj$ production at the LHC can play a role not only in the determination of the $Hb\bar{b}$ coupling, but also in a cleaner determination of the $HWW$ coupling. The analysis presented above does not include parton-shower effects. The latter are expected to further differentiate the signal and background final-state topology and composition. A preliminary analysis of showering and central-jet veto effects points to an improvement of $S / \sqrt{B}$ by about a factor of two \cite{nnoi}. The inclusion of complete showering, hadronization, and detector simulations will be needed to establish the actual potential of the process $pp\to H(\to b\bar b)\gamma\,jj$.

\section*{Acknowledgements}

I wish to thank my collaborators Emidio Gabrielli, Fabio Maltoni, Mauro Moretti, Fulvio Piccinini, and Roberto Pittau for the enjoyable time I had in working out with them the results discussed above. This research was partially supported by the RTN European Programmes MRTN-CT-2006-035505 (HEPTOOLS, Tools and Precision Calculations for Physics Discoveries at Colliders), and MRTN-CT-2004-503369 (Quest for Unification).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Abstract} The spatial modulation of the Fermi velocity for gapless Dirac electrons in quantum materials is mathematically equivalent to the problem of massless fermions on a certain class of curved spacetime manifolds. We study null geodesic lensing through these manifolds, which are dominated by curvature singularities, such as \emph{nematic} singularity walls (where the Dirac cone flattens along one direction). Null geodesics lens across these walls, but do so by perfectly collimating to a local transit angle. Nevertheless, nematic walls can trap null geodesics into stable or metastable orbits characterized by repeated transits. We speculate about the role of induced one-dimensionality for such bound orbits in 2D dirty $d$-wave superconductivity. \vspace{10pt} \noindent\rule{\textwidth}{1pt} \setcounter{tocdepth}{2} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \newpage \section{Introduction} \label{IntroductionSection} Many of the most important quantum materials discovered in the past several decades feature electrons, confined to two spatial dimensions, with effective ultrarelativistic band structures. Massless 2D Dirac electrons arise as quasiparticles in the $d$-wave cuprates \cite{AltlandSimonsZirnbauer02}, in monolayer graphene \cite{GrapheneRev}, as surface states of bulk topological insulators \cite{KaneHasan}, and in twisted bilayer graphene \cite{TBLGRev}. Massless Dirac or Majorana quasiparticles are also predicted to form at the surface of topological superfluids and superconductors \cite{TSCRev1,TSCRev2,3HeRev}. Recently, the focus has begun to shift from discovering Dirac materials to precisely manipulating them. In twisted bilayer graphene, for example, the moir\'e potential flattens the Dirac cones near the magic angle, facilitating Mott insulating and superconducting phases \cite{TBLGRev}. Since massless Dirac carriers are a fermionic analogue of photons, an interesting question is whether gravitational effects like lensing or trapping behind an event horizon can occur with suitable modifications. \emph{Artificial} quenched gravity (QG) can arise whenever a static source couples to components of the Dirac-electron stress tensor \cite{Volovik}. Coupling to the off-diagonal time-space components breaks time-reversal symmetry ($T$) (as in the Kerr metric \cite{Carroll}) and induces a ``tilt'' in the Dirac cone \cite{Jafari19A,Jafari19B}, an effect that is realized in type-II Weyl semimetals \cite{WeylRev}. By contrast, we focus here on a static coupling to the spatial-spatial components of the stress tensor that preserves $T$, but modulates the components of the Dirac velocity (the effective ``velocity of light''), see Fig.~\ref{QGDIntroductionFigure}. \begin{figure}[b!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=1.0\textwidth]{DiracConesCartoon.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.95\textwidth]{DisorderPotentialsHeatmap.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{GeneralModelRandomMap.PNG} \end{minipage}\hfill \caption{ Introduction to the gravitational view of Dirac-carrier velocity modulation. A: Cartoon depicting a random spatial modulation of a 2D Dirac cone (QGD---see text). 
B-E: Heat maps on a spatial $[-5,5]\times[-5,5]$ grid, depicting four randomly-generated (Gaussian) disorder potentials, corresponding to the velocity components $v^b_a(\vex{x})$; these couple to the spatial components of the Dirac-electron stress tensor in Eq.~(\ref{GravitationalDisorderHamiltonian}). F: Heat map depicting the gravitational time-dilation factor, relative to the flat case [a proxy for curvature, see Eq.~(\ref{TimeDilationFactorDefinition}) and surrounding discussion], for the manifold corresponding to the disorder potentials in B-E. Note the visible domain walls corresponding to 1D nematic curvature singularities. } \label{QGDIntroductionFigure} \end{figure} While it is possible to deform the velocity in normal Dirac materials like graphene using strain \cite{Juan12,DiazFernandez17}, it is particularly natural in Dirac superconductors (SCs). For example, a charged impurity placed on the surface of a class DIII topological SC is predicted to isotropically steepen or flatten the Dirac cone of the surface Majorana fluid \cite{NumericalQGD}. In 2D $d$-wave SCs such as the cuprates, a modulation of the pairing amplitude translates into a \emph{nematic} deformation of the Dirac cone (along the Fermi surface). Nematicity and emergent one-dimensionality have been argued to play a key role in the physics of high-temperature superconductivity \cite{Davis2019,NematicReview,DagottoRice96,Tsvelik17}. \emph{Random} quenched gravity arises when the Dirac velocity modulation occurs due to disorder (``quenched gravitational disorder,'' QGD). Nanometer-scale inhomogeneity observed by tunneling into BSCCO \cite{DavisReview} could imply that QGD plays a role in high-temperature superconductivity; it has recently been demonstrated that increasing disorder can raise the critical temperature in these materials \cite{Welp18}. QGD might also arise due to twist disorder in bilayer graphene \cite{Pixley20,Ryu20}. The physics of QGD has only recently been investigated theoretically. Exact diagonalization studies of a 2D Dirac cone subject to different varieties of QGD revealed a surprisingly robust incarnation of quantum criticality. In Ref.~\cite{NumericalQGD}, nematic QGD was shown to produce an entire spectrum of quantum-critical single-particle wave functions, with universal critical spatial fluctuations analogous to those found at an Anderson metal-insulator transition. Spectrum-wide quantum criticality has also been observed in non-gravitational models of topological superconductor surface states, where it was linked to quantum Hall plateau transitions \cite{Ghorashi18,Sbierski,JonasReview}. \begin{figure}[t!] \centering \centerline{ \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicGenericGeodesics.PNG} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{NematicGenericGeodesics.PNG} \end{minipage} } \caption{Generic null geodesic trajectories in the spatial plane for purely isotropic or purely nematic QGD. As discussed in the text, we consider quenched random spacetimes with a special \emph{temporal flatness} condition, which means that 2+1-D Dirac carriers are modulated by perturbations that artificially mimic gravity, but in samples defined in physically flat spacetime. This means that physical distances in a solid-state realization correspond to the Euclidean measure in the plane, rather than to the geodesic one. A: Null geodesics for a purely isotropic QGD realization.
Trajectories with different initial conditions (initial position and launch angle) appear with different colors. Note the qualitative resemblance to uncorrelated 2D random walks (diffusion). B: Null geodesics for a purely nematic QGD realization. Nearby geodesics are highly correlated in their direction, and tend to exhibit near-retracing orbits that bounce back and forth along nematic curvature singularity contours. Note that only in case B do singularities arise along 1D curves (domain walls).} \label{NematicVsIsotropicFigure} \end{figure} \begin{figure}[b!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{MetastableOrbitsExampleA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{MetastableOrbitsExampleB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{MetastableOrbitsExampleC.PNG} \end{minipage}\hfill \caption{Example depicting metastable null geodesic bound states along nematic singularities. The spacetime manifold is defined by $v_D = \cos[0.2 \pi r]x$, $v_N = \cos[0.2 \pi r]y$ in the purely nematic model of Eqs.~(\ref{NematicSubmodelDefinition1})--(\ref{NematicSubmodelDefinition2}). A: Time dilation heat map of the manifold [Eq.~(\ref{TimeDilationFactorDefinition})], depicting concentric circular contours of curvature singularity. B: Null geodesics shortly after release from the origin. We see that the geodesics tend to travel along the singular contours. C: Null geodesics in the long-time limit, showing that the geodesic dynamics are dominated by metastable orbits along these singularities.} \label{MetastableOrbitsFigure} \end{figure} Motivated by the prospects for inducing gravitational effects in 2D Dirac materials, and by the numerical observation of robust quantum criticality induced by QGD \cite{NumericalQGD}, this paper investigates the geometry of 2+1-D Lorentzian manifolds with quenched gravitational singularities, corresponding to gravitationally modulated Dirac materials. We study the semiclassical limit of massless Dirac carriers lensing through static gravitational landscapes by computing null geodesic trajectories. Null geodesics play a crucial role in informing the solution to the wave equation on curved manifolds \cite{Friedlander}. Moreover, geodesics can form the basis for a sensible semiclassical expansion of the Dirac equation, which governs a consistent single-particle relativistic quantum mechanics (unlike the Klein-Gordon equation). These points are formalized by the fact that null geodesics are exactly the bicharacteristics for the Dirac equation on a Lorentzian manifold. Bicharacteristics determine the propagation of discontinuities of partial differential equations and very generally correspond to a ``geometric optics'' viewpoint of a generic field equation \cite{LucaVitagliano}. To be precise, we consider only \emph{artificial} gravitational potentials, i.e.\ perturbations that mimic gravity by coupling to the stress tensor, but for Dirac electrons propagating through physically flat spacetime. Technically, this translates into a special \emph{temporal flatness} condition, which simplifies the metric and the analysis of null geodesics. It also means that there is a preferred coordinate system that measures physical Euclidean distances in the plane; geodesic distances are ``experienced'' by the Dirac carriers, but would not be easily extracted from an experiment. 
Examples of the static gravity studied here include nematic deformations of the Dirac cone, as can arise from spatial inhomogeneity in the pairing potential of a $d$-wave SC, or isotropic flattening of the Majorana cone due to impurities at the surface of a topological SC \cite{NumericalQGD}. By contrast, our results are not applicable to a macroscopically curved sheet of graphene. In the gravitational language, curvature singularities arise whenever one or both components of the Dirac carrier velocity vanish. We study two different types of singular loci. Nematic singularities occur when only one component of the Dirac velocity passes through zero, and arise along 1D curves in a $d$-wave superconductor whenever the pairing amplitude changes sign (i.e., separates pairing domains with a $\pi$ phase shift). We also study isotropic curvature singularities, where the entire Dirac cone is flattened at a point. We find that there are strong qualitative differences between the geodesics corresponding to nematic and isotropic QGD, as shown in Fig.~\ref{NematicVsIsotropicFigure}. Moreover, we show that null geodesics are profoundly influenced by isotropic and nematic curvature singularities, though these give rise to different effects. Isotropic singularities are strongly attractive and asymptotically capture geodesics that pass sufficiently close. Conversely, nematic singularities do not capture geodesics; instead, every impinging geodesic is driven to a unique transit velocity, determined solely by the disorder potentials at the singularity, at which it is allowed to pass through. We call this effect \emph{geodesic collimation}. The collimation angle is simply determined by the local dispersive direction of the Dirac cone at the singularity wall. Although the null geodesics do not ``stick'' to singular nematic contours, there is a horizon effect due to the latter. In particular, nematic singularity walls can produce stable and metastable gravitationally bound orbits, wherein null geodesics repeatedly lens back and forth across the wall (at the local geodesic collimation angle). This is illustrated for the ``nematic circular billiard'' spacetime shown in Fig.~\ref{MetastableOrbitsFigure}. \subsection{Outline} Our manuscript is organized as follows. Sec.~\ref{ModelAndGravitySection} introduces the 2D Dirac Hamiltonian for artificial quenched gravity. Mapping this to the covariant formulation of massless fermions on curved spacetime, we extract an associated metric. We discuss symmetries and describe the nature of curvature singularities that can arise in this spacetime. In Sec.~\ref{GeodesicEquationSection}, we develop the geodesic equation and reformulate it several times to gain insight into the geometric properties of geodesics, especially in the vicinity of curvature singularities. The most useful formulation employs the projection of the tangent vector onto the local dreibein. Sec.~\ref{SubmodelsSection} introduces the purely isotropic and purely nematic submodels, which allow separate study of nematic and isotropic Dirac cone modulation. Finally, in Sec.~\ref{ToyModelsGeodesicsSection} we present an array of simple, highly symmetric but singular spacetime geometries that admit analytical solutions and give a window into the properties of null geodesic interactions with singularities. Sec.~\ref{ConclusionSection} summarizes our results and discusses future directions. Many supporting details and calculations are relegated to appendices.
Appendix~\ref{GeodesicEquationReformulationAppendixSection} summarizes various useful reformulations of the geodesic equation. Appendix~\ref{LengthScaleDependeneAppendixSection} explains that the geodesic dynamics do not depend qualitatively on the length scale of a QGD potential, and Appendix~\ref{DiagonalOffDiagonalAppendixSection} introduces two other submodels that aid connection with previous work \cite{NumericalQGD}. In addition to the analytical calculations described in Sec.~\ref{ToyModelsGeodesicsSection}, we compute numerical results for regular and random quenched gravitational potentials, exhibited in figures throughout the paper. \section{Model and analogy to gravitation} \label{ModelAndGravitySection} \subsection{Hamiltonian} Our focus is a 2+1-dimensional massless Dirac model in the presence of \emph{artificial} quenched gravity: perturbations that mimic gravity by coupling to the spatial-spatial components of the stress tensor, and which preserve time-reversal symmetry. These flatten, steepen, or rotate the components of the Dirac velocity. Despite the spatial modulation of the Dirac cone, we assume that the Dirac electrons propagate through physically flat spacetime; in practice, this means that we exclude transport across a curved 2D sample (such as a corrugated graphene sheet). As explained in the Introduction (Sec.~\ref{IntroductionSection}), this is hardly a restriction in the context of 2D Dirac materials. As an example, for Dirac or Majorana quasiparticles that arise at the boundary of 3D topological superconductors or in 2D $d$-wave superconductors, inhomogeneity due to charged impurities or modulation of the pairing gap can both manifest as quenched gravitational coupling to the stress tensor \cite{NumericalQGD}. Our model is defined by the Hamiltonian \begin{align} \label{GravitationalDisorderHamiltonian} \mathcal{H} &= - \frac{i}{2} \sum_{a,b = 1,2} \int d^2 \vex{x}\ v_a^b(\vex{x}) \left[ \bar{\psi}(\vex{x}) \, \hat{\sigma}^a \, \overleftrightarrow{\partial_b} \, \psi(\vex{x}) \right], \end{align} where $\vex{x} = \{x,y\}$ are Cartesian coordinates that measure physical Euclidean distances in the sample plane, $\psi = \psi_\sigma$ is a two-component spinor, $\hat{\sigma}^{1,2}$ are Pauli matrices, and the double-directed derivative is defined by $(f\overleftrightarrow{\partial}g) \equiv f (\partial g) - (\partial f)g$. The functions $\{v_a^b(\vex{x})\}$ couple to the spatial components of the Dirac-electron stress tensor $T^a{}_b$. In the unperturbed limit, we can take $\{v_1^1 = v_2^2 = 1$, $v_1^2 = v_2^1 = 0\}$.
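The symmetrized (double-directed) derivative makes Hermiticity manifest even for a spatially varying $v_a^b(\vex{x})$. As a concrete illustration (our own sketch, not a construction used in this paper), a bond-centered 1D lattice discretization of a single velocity component yields a manifestly Hermitian hopping matrix whose continuum limit reproduces $-i v \hat{\sigma}^1 \partial_x - \frac{i}{2}(\partial_x v)\hat{\sigma}^1$:
\begin{verbatim}
import numpy as np

# Minimal 1D sketch (our own illustration): discretize
#   H = -(i/2) \int dx v(x) [psi^dag s1 (d_x psi) - (d_x psi^dag) s1 psi]
# with the velocity sampled on bonds, (v_n + v_{n+1})/2. The resulting
# hopping matrix is Hermitian for *any* velocity profile v(x).
N = 200
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]
v = 1.0 + 0.3 * np.cos(2.0 * np.pi * x / 5.0)  # assumed smooth profile
sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

H = np.zeros((2 * N, 2 * N), dtype=complex)
for n in range(N - 1):
    v_bond = 0.5 * (v[n] + v[n + 1])           # bond-centered velocity
    hop = -0.5j * v_bond * sigma1 / dx
    H[2 * n:2 * n + 2, 2 * n + 2:2 * n + 4] = hop
    H[2 * n + 2:2 * n + 4, 2 * n:2 * n + 2] = hop.conj().T

assert np.allclose(H, H.conj().T)              # Hermitian for any v(x)
\end{verbatim}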
We will find it useful to define the ``disorder vectors'': \begin{align} \label{DisorderVectorsDefinition} \vex{v_j}(\vex{x}) &= \begin{bmatrix} v_1^j(\vex{x})\\ v_2^j(\vex{x}) \end{bmatrix}, \end{align} which will let us write the action of our fermion theory as \begin{align} \label{GravitationalDisorderAction1} \mathcal{S} &= i\int dt \int d^2\vex{x} \left\{ \bar{\psi}(t,\vex{x}) \partial_t \psi(t,\vex{x}) + \frac{1}{2} \sum_{a,b = 1,2} v_a^b(\vex{x}) \left[ \bar{\psi}(t,\vex{x}) \, \hat{\sigma}^a \, \overleftrightarrow{\partial_b} \, \psi(t,\vex{x})\right] \right\}, \\ \label{GravitationalDisorderAction2} &= i\int dt \int d^2\vex{x}\ \bar{\psi}(t,\vex{x}) \left\lgroup \begin{aligned} \partial_t &+ \big[\vex{v_1}(\vex{x})\cdot\vex{\hat{\sigma}}\big]\partial_1 + \frac{1}{2}\big[\partial_1\vex{v_1}(\vex{x})\cdot\vex{\hat{\sigma}}\big] \\ &+ \big[\vex{v_2}(\vex{x})\cdot\vex{\hat{\sigma}}\big]\partial_2 + \frac{1}{2}\big[\partial_2\vex{v_2}(\vex{x})\cdot\vex{\hat{\sigma}}\big] \end{aligned} \right\rgroup \psi(t,\vex{x}), \end{align} where $\vex{\hat{\sigma}} = [\hat{\sigma}^1, \hat{\sigma}^2]^T$. In going from Eq.~(\ref{GravitationalDisorderAction1}) to Eq.~(\ref{GravitationalDisorderAction2}), we integrate by parts to remove the double-directed derivative in Eq.~(\ref{GravitationalDisorderAction1}), which we see gives rise to a spatially-dependent imaginary vector potential in the Lagrangian. Hermiticity requires that such terms exist to balance the spatially modulated Dirac velocity; preserving Hermiticity is crucial, since non-Hermitian versions of the spatial stress tensor arise instead for reparameterization ghosts in 2+0-D \cite{GSW}. The Hermitian counterterms in Eq.~(\ref{GravitationalDisorderAction2}) form the \textit{spin connection} of the covariant Lagrangian for a theory of massless fermions on a curved spacetime manifold. It will also be helpful to define $[\vex{u_1},\vex{u_2}] = [\vex{v_1},\vex{v_2}]^T$, which allows us to write the Hamiltonian as \begin{align} \label{uVecHamiltonian} \mathcal{H} = \int d^2 \vex{x}\ \bar{\psi}\left\lgroup -i\sum_j\hat{\sigma}^j\left[(\vex{u_j}\cdot\vex{\nabla}) + \frac{1}{2}(\vex{\nabla}\cdot\vex{u_j})\right]\right\rgroup\psi. \end{align} In momentum space, the Dirac cone is spanned by the vectors $\{\vex{u_j}\}$. \subsection{Mapping to gravity} We pursue a gravitational analogy to shed light on the Dirac Hamiltonian in Eq.~(\ref{GravitationalDisorderHamiltonian}). Using the vielbein formalism \cite{Carroll}, the action for a system of massless Dirac fermions on a 2+1-dimensional spacetime is given by \cite{Ramond} \begin{align} \label{GeneralCurvedSpaceAction} \mathcal{S} = \int \sqrt{|g|} \, d^3x\ \bar{\psi}(x) E_A^\mu(x) \hat{\gamma}^A \left[ i\partial_{\mu} - \frac{1}{2}\omega_{\mu}^{BC}(x)\hat{S}_{BC} \right] \psi(x), \end{align} where $x = \{t,x,y\}$, $\mu \in \{0,1,2\}$ is a coordinate index, $A,B,C \in \{0,1,2\}$ are local Lorentz indices, the 2$\times$2 Dirac matrices satisfy the Clifford algebra \[ \hat{\gamma}^A \, \hat{\gamma}^B + \hat{\gamma}^B \, \hat{\gamma}^A = - 2 \eta^{A B} \, \hat{1}, \] $\eta^{AB} \rightarrow \diag(-1,1,1)$ is the flat Minkowski metric, $g_{\mu\nu}$ is the spacetime metric, and $g$ is its determinant. Further, $E^\mu_A$ is the \textit{dreibein}, central to the tetrad (here: ``triad'') formalism, defined by the relation \begin{align} \label{VerbeinDefinitionEquation} \eta^{AB} E^\mu_A(x) E^\nu_B(x) = g^{\mu\nu}(x).
\end{align} Finally, $\hat{S}_{BC}$ generates local Lorentz transformations on the spinor index of $\psi$, and $\omega^{BC}_\mu$ is the \textit{spin connection}, defined by \begin{align} \label{SpinConnectionDefinition} \omega^A_{\mu B} = E^A_\nu \Gamma^\nu_{\mu\lambda}E^{\lambda}_B - (\partial_\mu E^A_{\lambda})E^{\lambda}_B. \end{align} Our goal is to identify a spacetime metric [$g^{\mu\nu}(\vex{x})$] such that Eqs.~(\ref{GravitationalDisorderAction2}) and (\ref{GeneralCurvedSpaceAction}) are identical. After shifting $\bar{\psi} \rightarrow \bar{\psi} \hat{\gamma}^0$ in Eq.~(\ref{GeneralCurvedSpaceAction}) [such that $\bar{\psi} \leftrightarrow \psi^\dagger$, implicitly assumed in the ``non-relativistic'' notation of Eq.~(\ref{GravitationalDisorderAction2})], we can identify $\hat{\gamma}^0 \hat{\gamma}^{a} = \hat{\sigma}^{a}$. Consistency between Eqs.~(\ref{GravitationalDisorderAction2}) and (\ref{GeneralCurvedSpaceAction}) requires setting $E^0_{A\neq 0} = 0$, due to the absence of time-space mixing terms. For static $\{\vex{v}_j(\vex{x})\}$, this is equivalent to enforcing time-reversal symmetry. Further, as explained above and in the Introduction, the Dirac electrons are assumed to propagate through physically flat 2+1-D spacetime, with effective gravitation arising solely due to the spatial variation in $v_a^b(\vex{x})$. Then, the coefficient of the time-derivative term in Eq.~(\ref{GeneralCurvedSpaceAction}) can be chosen equal to one, a condition that we call \textit{temporal flatness}, \begin{align}\label{TempFlat} \sqrt{|g|} \, E_0^0 \equiv 1. \end{align} With this choice, the Cartesian coordinates $\vex{x}$ in Eqs.~(\ref{GravitationalDisorderAction2}) and (\ref{GeneralCurvedSpaceAction}) measure physical Euclidean distances in the plane. Temporal flatness allows the identification of the disorder potentials directly in terms of the dreibein, \begin{align} \label{DisorderInTermsOfDreibein} v_a^b = \frac{E^b_a}{E^0_0}. \end{align} We may then construct the metric in terms of $v_a^b$ and $E_0^0$ via Eq.~(\ref{VerbeinDefinitionEquation}). If we bring the dreibein in line with the potentials in Eq.~(\ref{DisorderInTermsOfDreibein}), then the spin connection will match the imaginary vector potential terms in Eq.~(\ref{GravitationalDisorderAction2}). To determine $E_0^0$, we take the determinant of the metric and again invoke temporal flatness to compute \begin{align} \label{TemporalFlatnessComputation} 1 = \frac{1}{(E^0_0)^2|g|} = [E^1_2 E^2_1-E^1_1E^2_2]^2 = (E_0^0)^4 (\vex{v_1}\times\vex{v_2})^2. \end{align} We thus find that Eq.~(\ref{GeneralCurvedSpaceAction}) is equivalent to the Hamiltonian system in Eq.~(\ref{GravitationalDisorderHamiltonian}) if the dreibein and spacetime metric are given by the mapping dictionary \begin{align} E_0^0 &= \frac{1}{\sqrt{|\vex{v_1}\times\vex{v_2}|}}, \label{DisorderVerbeinDefinition00} \\ \label{DisorderVerbeinDefinition} E_A^\mu &\rightarrow \frac{1}{\sqrt{|\vex{v_1}\times\vex{v_2}|}} \begin{bmatrix} 1 & 0 & 0 \\ 0 & v_1^1 & v_2^1\\ 0 & v_1^2 & v_2^2 \end{bmatrix}, \\ \label{DisorderMetricDefinition} g_{\mu\nu} &\rightarrow \frac{1}{|\vex{v_1}\times\vex{v_2}|} \begin{bmatrix} -(\vex{v_1}\times\vex{v_2})^2 & 0 & 0 \\ 0 & |\vex{v_2}|^2 & -\vex{v_1}\cdot\vex{v_2}\\ 0 & -\vex{v_1}\cdot\vex{v_2} & |\vex{v_1}|^2\\ \end{bmatrix}. \end{align} The result in Eq.~(\ref{DisorderMetricDefinition}) defines the quenched gravitational metric.
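As a quick consistency check of this mapping dictionary (our own numerical sketch; the disorder values are arbitrary test inputs), one can verify that the dreibein of Eq.~(\ref{DisorderVerbeinDefinition}) inverts the metric of Eq.~(\ref{DisorderMetricDefinition}) through Eq.~(\ref{VerbeinDefinitionEquation}), and that temporal flatness [Eq.~(\ref{TempFlat})] holds:
\begin{verbatim}
import numpy as np

# Numerical check (our own sketch) of the mapping dictionary: the dreibein
# must satisfy eta^{AB} E^mu_A E^nu_B = g^{mu nu} (the inverse of the quoted
# g_{mu nu}) and sqrt(|g|) E^0_0 = 1. Disorder vectors are arbitrary inputs.
v1 = np.array([1.1, 0.3])
v2 = np.array([-0.2, 0.8])
cross = v1[0] * v2[1] - v1[1] * v2[0]

E = (1.0 / np.sqrt(abs(cross))) * np.array([
    [1.0, 0.0,   0.0  ],
    [0.0, v1[0], v1[1]],
    [0.0, v2[0], v2[1]],
])                                       # rows: coordinate mu; columns: A
eta = np.diag([-1.0, 1.0, 1.0])
g_inv = E @ eta @ E.T                    # g^{mu nu}

g = (1.0 / abs(cross)) * np.array([
    [-cross**2, 0.0,        0.0       ],
    [0.0,       v2 @ v2,   -(v1 @ v2) ],
    [0.0,      -(v1 @ v2),  v1 @ v1   ],
])                                       # Eq. (DisorderMetricDefinition)

assert np.allclose(g_inv @ g, np.eye(3))                          # inverse pair
assert np.isclose(np.sqrt(abs(np.linalg.det(g))) * E[0, 0], 1.0)  # temporal flatness
print("time-dilation factor gamma =", 1.0 / abs(cross))
\end{verbatim}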
This metric is quite general, although it has three important structural properties that constrain the geometry: (1) it is everywhere block-diagonal in time and space [a consequence of time-reversal symmetry for static potentials $\{\vex{v}_j(\vex{x})\}$], (2) the temporal flatness condition [Eq.~(\ref{TempFlat})] fixes $g_{00}$ in terms of the determinant of the spatial-spatial subblock of the metric, and (3) it is time-independent [$\partial_t g_{\mu\nu} = 0$]. We note that if one wanted to consider time-dependent gravitational disorder by allowing for explicit time-dependence of the disorder vectors $\{\vex{v}_j\}$, only the last of these conditions is removed [allowing for the generalization presented in Eq.~(\ref{TimeDependentPotentialsGeodesicEquationPhi})]. We also note that the metric is expressed entirely in terms of the relative geometry of the disorder vectors, a fact that will be important for establishing the invariance of the geodesic dynamics under pseudospin rotations in Sec.~\ref{PseudospinRotationsSubsection}. Our theory can thus be studied in two different settings. On one hand, it is an effective low-energy Dirac theory due to perturbations that couple to spatial-spatial stress tensor components in a condensed matter system. On the other hand, we can study it as a theory of free massless fermions on a corresponding curved spacetime. \subsection{Curvature and singularities} \label{CurvatureAndSingularitiesSubsection} The metric in Eq.~(\ref{DisorderMetricDefinition}) becomes ill-defined at points where the cross-product $\vex{v_1}\times\vex{v_2}$ vanishes. This condition corresponds to a failure of the temporal flatness condition [Eq.~(\ref{TemporalFlatnessComputation})], divergence of the dreibein [Eq.~(\ref{DisorderVerbeinDefinition})], and degeneracy of the metric [$g_{00} \rightarrow 0$]. The Ricci scalar curvature takes the form \begin{align} R = \frac{ N\left(v_a^b,\partial v_c^d\right) }{ |\vex{v_1}\times\vex{v_2}|^3 }, \end{align} where $N\left(v_a^b,\partial v_c^d\right)$ is a (complicated) homogeneous quadratic polynomial in spatial derivatives of the disorder-vector components. While it is possible for the numerator to vanish so as to give finite curvature at a point where $\vex{v_1}\times\vex{v_2} = 0$, we will generically find curvature singularities along the sets defined by this condition. We can characterize singularities in terms of Dirac cone geometry: at a point where $\vex{v_1}\times\vex{v_2} = 0$, we have \begin{align}\label{NemSingDef} \begin{aligned} \vex{v_1} &= \cos\theta^* \, \vex{v},\\ \vex{v_2} &= \sin\theta^* \, \vex{v}, \end{aligned} \end{align} for some $\vex{v} \equiv [v_1, v_2]^T$ and an angle $\theta^* = \arctan(|\vex{v_2}|/|\vex{v_1}|).$ In the notation of Eq.~(\ref{uVecHamiltonian}), $\vex{u_1} = v_1 \hat{\vex{\theta}}^*$ and $\vex{u_2} = v_2 \hat{\vex{\theta}}^*$, where $\hat{\vex{\theta}}^* = [\cos\theta^*,\sin\theta^*]^T.$ It follows that \begin{align} \label{SingularDiracConeEquation} \mathcal{H} = \int d^2 \vex{x}\ \bar{\psi} \left[ -i (\vex{v}\cdot\vex{\hat{\sigma}})\partial_{\theta^*} + \text{S.C.} \right] \psi, \end{align} where S.C.\ is the spin connection term. At the singularity, the energy only depends on the derivative of the field in the direction of $\vex{\hat{\theta}^*}$: there is a flat band in the perpendicular direction, forming a ``Dirac canyon.'' A singularity can thus be partially characterized in terms of the angle $\theta^*$ and the vector $\vex{v}$, as defined above.
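This characterization is simple to extract numerically. The helper below (a hypothetical utility of our own, not code from this paper) recovers $\theta^*$ and $\vex{v}$ from the parametrization of Eq.~(\ref{NemSingDef}) at a point where $\vex{v_1}\times\vex{v_2} = 0$:
\begin{verbatim}
import numpy as np

# Sketch (our own helper): classify a singular point where v1 x v2 = 0 using
# Eq. (NemSingDef), v1 = cos(theta*) v, v2 = sin(theta*) v. We take theta*
# in [0, pi/2] from the norms and recover v from the better-conditioned
# projection; overall signs are convention-dependent.
def classify_singularity(v1, v2, tol=1e-10):
    theta_star = np.arctan2(np.linalg.norm(v2), np.linalg.norm(v1))
    if np.cos(theta_star) >= np.sin(theta_star):
        v = v1 / np.cos(theta_star)
    else:
        v = v2 / np.sin(theta_star)
    kind = "isotropic" if np.linalg.norm(v) < tol else "nematic"
    return theta_star, v, kind

# Example: parallel disorder vectors, i.e., a nematic singular point.
theta_star, v, kind = classify_singularity(np.array([0.6, 0.0]),
                                           np.array([0.8, 0.0]))
print(kind, theta_star, v)   # nematic, theta* = arctan(0.8/0.6), v = [1, 0]
\end{verbatim}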
At a singular point, we have a flattening of \textit{at least one} axis of the Dirac cone, in the direction perpendicular to $\vex{\hat{\theta}^*}$. We see that there are two types of possible curvature singularities: \textit{nematic} singularities correspond to nonzero $\vex{v}$ [Eq.~(\ref{NemSingDef})] and give rise to a local Dirac canyon, while \textit{isotropic} singularities correspond to $\vex{v} = 0$, and locally flatten the entire Dirac cone. We note that isotropic deformations of the Dirac cone can only produce isotropic singularities. On the other hand, nematic singularities can only be formed by the breaking of rotational symmetry of the electron band structure. We can gain more insight with some topological reasoning. The quantity $\vex{v_1}\times\vex{v_2}$ can vary continuously with $\vex{x}$, taking on both negative and positive values. Thus, the generic singularities will form 1-manifolds that act as domain walls, partitioning the plane into regions of $\vex{v_1}\times\vex{v_2} > 0$ and $\vex{v_1}\times\vex{v_2} < 0$. Even at a singular point one generically has $|\vex{v}| > 0$; the additional condition $\vex{v} = 0$ is non-generic, and so isotropic singularities will generally arise only at isolated points. We will see in later sections that both flavors of singularity strongly impact geodesic behavior. Geodesics that collide with an isotropic singularity are arrested and remain captured for the rest of time. These isotropic singularities also turn out to exert a strong pull on nearby geodesics. Conversely, geodesics that collide with a nematic singularity pass through in finite time; they are all driven to pass through the singularity in the direction $\vex{\hat{\theta}^*}$ and at the speed $|\vex{v}|$, a singularity-induced \emph{geodesic collimation} effect. We stress that, counterintuitively, this fixing of both the geodesics' direction and speed does \textit{not} uniquely define the geodesic. \subsection{Pseudospin rotations} \label{PseudospinRotationsSubsection} Before moving on to a study of the spacetime manifolds defined by the metric in Eq.~(\ref{DisorderMetricDefinition}), we pause to consider the properties of the quantum theory [Eq.~(\ref{GravitationalDisorderAction2})] under a local pseudospin rotation. We claim that the \textit{dynamics} of the theory are invariant under a local $U(1)$ pseudospin symmetry. Specifically, let the unitary transformation \begin{align} \label{U1PseudospinRotation} \mathcal{U}(\vex{x}) = \exp\left[ \frac{i}{2}\theta(\vex{x})\hat{\sigma}^3 \right] \end{align} encode the in-plane rotation $\vex{v}\rightarrow \hat{R}(\vex{v})$ via the canonical SU(2) $\rightarrow$ SO(3) double cover. That is, \begin{align} \label{SU2SO3DoubleCover} \mathcal{U}^{\dagger} \big[ \vex{v}\cdot\vex{\hat{\sigma}} \big] \mathcal{U} = \hat{R}(\vex{v})\cdot\vex{\hat{\sigma}}, \end{align} where the rotation operator $\hat{R}$ is given by \begin{align} \hat{R}(\vex{v}) = \cos\theta \, \vex{v} - \sin\theta \, \vex{v}^{\perp}. \end{align} (Our convention is that $\vex{v}^{\perp} = [v_2, -v_1]^T$.)
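The double-cover identity is easy to verify numerically. A minimal check (our own sketch; $\theta$ and $\vex{v}$ are arbitrary test values) confirming Eq.~(\ref{SU2SO3DoubleCover}) in the convention above:
\begin{verbatim}
import numpy as np

# Check (our own sketch) of Eq. (SU2SO3DoubleCover):
#   U^dag (v . sigma) U = R(v) . sigma,
# with U = exp(i theta sigma^3/2), R(v) = cos(theta) v - sin(theta) v_perp,
# and v_perp = [v2, -v1]. theta and v are arbitrary test inputs.
s1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
s2 = np.array([[0.0, -1.0j], [1.0j, 0.0]], dtype=complex)
s3 = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

theta = 0.7
v = np.array([0.4, -1.2])
U = np.cos(theta / 2) * np.eye(2) + 1j * np.sin(theta / 2) * s3

lhs = U.conj().T @ (v[0] * s1 + v[1] * s2) @ U
Rv = np.cos(theta) * v - np.sin(theta) * np.array([v[1], -v[0]])
rhs = Rv[0] * s1 + Rv[1] * s2

assert np.allclose(lhs, rhs)
\end{verbatim}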
The unitary fermion field transformation $\psi \rightarrow \mathcal{U}\psi$ sends the action [Eq.~(\ref{GravitationalDisorderAction2})] to \begin{align} \mathcal{S} &= \int dt \int d^2\vex{x}\ \bar{\psi}(t,\vex{x}) \left\lgroup \begin{aligned} i\partial_t &+ \big[\hat{R}(\vex{v_1})\cdot\vex{\hat{\sigma}}\big]i\partial_x + \frac{i}{2}\big[\partial_x\hat{R}(\vex{v_1})\cdot\vex{\hat{\sigma}}\big] \\ &+ \big[\hat{R}(\vex{v_2})\cdot\vex{\hat{\sigma}}\big]i\partial_y + \frac{i}{2}\big[\partial_y\hat{R}(\vex{v_2})\cdot\vex{\hat{\sigma}}\big] \end{aligned} \right\rgroup \psi(t,\vex{x}). \end{align} While the action is not invariant, the new theory is not qualitatively different from the old. The disorder vectors have been rotated through the same angle and their relative geometry is preserved. Since the metric for the corresponding spacetime manifold [Eq.~(\ref{DisorderMetricDefinition})] depends only on the lengths and relative angles of the disorder vectors, it is explicitly invariant under the transformation. It follows that the geodesic dynamics are invariant as well. The quantum dynamics are also invariant under the transformation. To see this, note that the quantum states of the original theory can be recovered from knowledge of the quantum states of the pseudospin-rotated theory by enacting the inverse pseudospin rotation [$\mathcal{U}^\dagger(\vex{x})$] on the eigenstates. The same can be said of the time-dependent wave function. Since the time-dependent wave functions are related by a unitary pseudospin rotation, the corresponding time-dependent density functions are identical. \section{Geodesics} \label{GeodesicEquationSection} The main focus of this paper is the study of the geodesics on the manifolds defined by the quenched gravitational metric in Eq.~(\ref{DisorderMetricDefinition}). In this section, we introduce the geodesic equation and reformulate it into a more manageable form that allows an analytical understanding of the effects of curvature singularities, and also facilitates efficient numerical evaluation. \subsection{Geodesic equation} The geodesic equation is given by the second-order ODE \cite{Carroll} \begin{align} \label{AbstractGeodesicEquation} \frac{d^2 x^{\mu}}{d s^2} = -\Gamma^{\mu}_{\alpha\beta}[\vex{x}(s)] \, \frac{d x^{\alpha}}{d s} \frac{d x^{\beta}}{d s}, \end{align} where $s$ is an affine parameter for the curve and $\{\Gamma^{\mu}_{\alpha\beta}\}$ are the Christoffel symbols derived from the metric [Eq.~(\ref{DisorderMetricDefinition})]. In our case, these take the form \begin{align} \label{ChristoffelStructureEquation} \Gamma^0_{\mu\nu}(\vex{x}) &= \begin{bmatrix} 0 & \partial_1 & \partial_2 \\ \partial_1 & 0 & 0\\ \partial_2 & 0 & 0 \end{bmatrix} \frac{1}{2} \log|\vex{v_1}\times\vex{v_2}|, \\ \Gamma^1_{\mu\nu}(\vex{x}) &= \begin{bmatrix} \Gamma^1_{00}(\vex{x}) & 0 & 0\\ 0 & \Gamma^1_{11}(\vex{x}) & \Gamma^1_{12}(\vex{x})\\ 0 & \Gamma^1_{12}(\vex{x}) & \Gamma^1_{22}(\vex{x}) \end{bmatrix}, \\ \Gamma^2_{\mu\nu}(\vex{x}) &= \begin{bmatrix} \Gamma^2_{00}(\vex{x}) & 0 & 0\\ 0 & \Gamma^2_{11}(\vex{x}) & \Gamma^2_{12}(\vex{x})\\ 0 & \Gamma^2_{12}(\vex{x}) & \Gamma^2_{22}(\vex{x}) \end{bmatrix}. \end{align} The Christoffel symbols $\{\Gamma^\rho_{\mu\nu}\}$ left undefined above are complicated functions of $\vex{v}_{1,2}$ and derivatives thereof. \subsection{Temporal first integral} \label{TemporalFirstIntegralSubsection} The structure of the quenched spacetime in Eq.~(\ref{DisorderMetricDefinition}) yields a general first integral for the time coordinate along a geodesic.
Inserting Eq.~(\ref{ChristoffelStructureEquation}) into the geodesic Eq.~(\ref{AbstractGeodesicEquation}) gives \begin{align} \frac{d^2 t}{d s^2} &= - \left(\partial_1 \log|\vex{v_1}\times\vex{v_2}|\right) \frac{d x}{d s} \frac{d t}{d s} - \left(\partial_2 \log|\vex{v_1}\times\vex{v_2}|\right) \frac{d y}{d s} \frac{d t}{d s}. \end{align} This is integrable and the first integral for time follows: \begin{align} \label{TemporalFirstIntegralEquation} \frac{d t}{d s} = \frac{(E/m)}{\big|\vex{v_1}[\vex{x}(s)]\times\vex{v_2}[\vex{x}(s)]\big|}. \end{align} Above, $E/m$ is the constant of integration, which we identify as the energy of the geodesic according to an observer at rest at the same location. To see this, note that the standard expression for this energy is $E/m = -g_{00}[\vex{x}(s)](d t/d s)$. Energy is conserved along a generic geodesic because the ``at rest'' three-vector $\hat{t} \equiv [1,0,0]^T$ gives a global timelike Killing field for the quenched gravitational manifold. For null geodesics, the constant $E/m$ formally diverges, but it can be scaled arbitrarily without affecting the geodesic. \subsection{Reparametrization by global time} In light of Eq.~(\ref{TemporalFirstIntegralEquation}), it will be useful to define \begin{align} \label{TimeDilationFactorDefinition} \gamma(\vex{x}) \equiv \frac{1}{\big|\vex{v_1}(\vex{x})\times\vex{v_2}(\vex{x})\big|}, \end{align} which is the \textit{gravitational time-dilation factor}. Since $d t / d s = \gamma[\vex{x}(s)] > 0$ [setting $E/m = 1$ in Eq.~(\ref{TemporalFirstIntegralEquation})], the mapping between the affine parameter $s$ and global time $t$ along a geodesic is invertible. There is thus a well-defined reparametrization of the geodesic in terms of $t$ [$\equiv \vex{x}(t)$]. Using Eq.~(\ref{TemporalFirstIntegralEquation}), we have ($j \geq 1$) \begin{align} \label{TimeReparamScaling} \frac{dx^j}{dt} = \frac{dx^j}{ds}\frac{ds}{dt} = \frac{1}{\gamma[\vex{x}(s)]}\frac{dx^j}{ds}, \end{align} so that tangent vectors of geodesics with respect to the global time coordinate are just spatially-dependent dilations of the original tangent vectors with respect to the affine parameter. The geodesic equation in terms of the global time coordinate is [with $\dot{x}_j \equiv dx_j/dt$] \begin{subequations}\label{CartesianGlobal} \begin{align} \label{CartesianGlobal1} \ddot{x}(t) &= -\left[\Gamma^1_{00} + (\Gamma^1_{11} + \partial_1\log[\gamma]) \dot{x}^2 + (2\Gamma^1_{12} + \partial_2\log[\gamma])\dot{x}\dot{y} + \Gamma^1_{22}\dot{y}^2\right],\\ \label{CartesianGlobal2} \ddot{y}(t) &= -\left[\Gamma^2_{00} + \Gamma^2_{11}\dot{x}^2 + (2\Gamma^2_{12} + \partial_1\log[\gamma])\dot{x}\dot{y}+ (\Gamma^2_{22} + \partial_2\log[\gamma])\dot{y}^2\right]. \end{align} \end{subequations} These equations offer some interesting interpretations. The $\Gamma_{00}$ terms appear as potentials in what is effectively a Hamiltonian dynamics problem, while the global time reparametrization combines several new terms with the Christoffel symbols; these act as friction-like dissipative terms. The dissipative terms play a key role in the geodesic capture by isotropic curvature singularities, as we discuss in Sec.~\ref{PurelyIsotropicModelSubsection}.
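The time-dilation factor $\gamma(\vex{x})$ is the diagnostic plotted as heat maps in our figures. A minimal sketch of how such a map can be generated (our own construction; the Gaussian-filtered noise, amplitude, and correlation length are illustrative choices, not the parameters behind the published figures):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch (our own construction): smooth random disorder vectors on a grid,
# then the time-dilation factor gamma = 1/|v1 x v2| of
# Eq. (TimeDilationFactorDefinition). gamma diverges on the singular
# domain walls where the cross product changes sign.
rng = np.random.default_rng(seed=0)
N, amp, corr = 256, 0.8, 8.0  # grid size, disorder strength, corr. length

def smooth_noise():
    return amp * gaussian_filter(rng.standard_normal((N, N)), sigma=corr)

v1x, v1y = 1.0 + smooth_noise(), smooth_noise()  # v1 ~ [1, 0] + disorder
v2x, v2y = smooth_noise(), 1.0 + smooth_noise()  # v2 ~ [0, 1] + disorder

cross = v1x * v2y - v1y * v2x
gamma = 1.0 / np.abs(cross)
print("singular walls present:",
      bool((cross < 0).any() and (cross > 0).any()))
\end{verbatim}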
\subsection{Reformulation of the geodesic equation} \label{ReformulationOfGeodesicEquationSubsection} Along a geodesic (expressed in terms of the proper time or an affine parametrization), the spacetime interval is constant, $g_{\mu\nu} (d x^\mu / d s) (d x^\nu / d s) \equiv \Delta$. Introducing notation for the speed [$\sigma \equiv |\vex{\dot{x}}|$] and velocity angle [$\theta(t) \equiv \arctan[\dot{y}(t)/\dot{x}(t)]$], we can use this to relate a geodesic's speed, position, energy, and mass. Eqs.~(\ref{TemporalFirstIntegralEquation}) and (\ref{TimeDilationFactorDefinition}) imply that \begin{align} \left(\frac{E}{m}\right)^2 \left\{ 1 - \gamma^2(\vex{x}) \sigma^2(\vex{x}) \left[ (\vex{u_1}\times\vex{\hat{\theta}})^2 + (\vex{u_2}\times\vex{\hat{\theta}})^2 \right] \right\} = - \frac{\Delta}{\gamma(\vex{x})}, \end{align} where again, $\vex{\hat{\theta}} \equiv [\cos\theta, \sin\theta]^T$ and $[\vex{u_1},\vex{u_2}] = [\vex{v_1},\vex{v_2}]^T$. Solving instead for the squared-speed of the geodesic, we find \begin{align} \label{ConstantIntervalConditionEquation} \sigma_{\Delta}^2[\theta,\vex{x}] &= \frac{1}{\gamma(\vex{x})^2} \frac{1}{(\vex{u_1}\times\vex{\hat{\theta}})^2+(\vex{u_2}\times\vex{\hat{\theta}})^2} \left[ 1 + \frac{\Delta}{\gamma(\vex{x})} \left(\frac{m}{E}\right)^2 \right]. \end{align} In the flat-space limit, these reduce to the familiar equations of special relativity: $E^2(1-\sigma^2) = m^2$ for a timelike geodesic. Note also that if $m=0$, then $E$ plays no role, reflecting the fact that null geodesics are unaffected by a scaling of the affine parameter. Eq.~(\ref{ConstantIntervalConditionEquation}) can be used to rewrite the geodesic equation in an angular formulation; this is presented in Appendix~\ref{GeodesicEquationReformulationAppendixSection}. While our focus is on null geodesics, we see that timelike and spacelike geodesics have speed-position relations derived from those of null geodesics by a simple multiplicative factor. From Eq.~(\ref{ConstantIntervalConditionEquation}), we can see that while null [$\Delta = 0$] and tachyonic [$\Delta> 0$] geodesics have a well-defined speed at every point of the manifold, massive geodesics [$\Delta < 0$] are excluded from the regions of the manifold with $- \Delta > (E/m)^2 \, \gamma(\vex{x})$, where Eq.~(\ref{ConstantIntervalConditionEquation}) would give $\sigma_{\Delta}^2 < 0$. Equation~(\ref{ConstantIntervalConditionEquation}) with $\Delta = 0$ also offers insight into the null geodesic collimation effect alluded to in Sec.~\ref{CurvatureAndSingularitiesSubsection}. At a singularity, the factor $\gamma^{-2}$ necessarily vanishes, pushing the speed of the geodesic towards zero. The geodesic may only pass through the singularity if the denominator $(\vex{u_1}\times\vex{\hat{\theta}})^2+(\vex{u_2}\times\vex{\hat{\theta}})^2$ in Eq.~(\ref{ConstantIntervalConditionEquation}) vanishes simultaneously. This can happen only if $\vex{u_1}$ and $\vex{u_2}$ are parallel (automatic for the singularity), \textit{and} if the velocity vector of the geodesic is driven to point in their common direction, $\vex{\hat{\theta}}\rightarrow\vex{\hat{\theta}^*}$ [see the discussion around Eq.~(\ref{SingularDiracConeEquation})]. Though it is not obvious from Eq.~(\ref{ConstantIntervalConditionEquation}), we will see that all geodesics impinging on a nematic singularity are in fact always driven to the correct direction, $\vex{\hat{\theta}^*}$. While Eq.~(\ref{ConstantIntervalConditionEquation}) offers some physical insight into the dynamics, a significantly more useful reformulation is possible. The geodesic equation may be expressed directly in terms of the dreibein [Eq.~(\ref{DisorderVerbeinDefinition})].
This is natural in this setting, since the dreibein (and not the metric) is fundamental to the formulation of the Dirac field on curved spacetime, Eq.~(\ref{GeneralCurvedSpaceAction}). The structure of the quenched gravitational spacetime [Eq.~(\ref{DisorderMetricDefinition})] allows even further simplification. Relegating the details to Appendix~\ref{GeodesicEquationReformulationAppendixSection}, we find that the equation \textit{for null geodesics} can be expressed as \begin{align} \label{SimplifiedGeodesicEquationX} \dot{x}(t) &= \vex{\hat{\phi}}\cdot\vex{v_1},\\ \label{SimplifiedGeodesicEquationY} \dot{y}(t) &= \vex{\hat{\phi}}\cdot\vex{v_2},\\ \label{SimplifiedGeodesicEquationPhi} \dot{\phi}(t) &= \vex{\hat{\phi}} \times \left\lgroup \bigg[ \partial_1\vex{v_1} + \partial_2\vex{v_2} \bigg] - \frac{1}{\vex{v_1}\times\vex{v_2}} \bigg[ [\vex{v_1}\partial_1 + \vex{v_2}\partial_2] [\vex{v_1}\times\vex{v_2}] \bigg] \right\rgroup, \end{align} where $\vex{\hat{\phi}}\equiv[\cos\phi, \sin\phi]^T$ is an auxiliary unit vector that rotates along the geodesic trajectory. In the zero-disorder limit, $\vex{\hat{\phi}}$ reduces to the velocity unit vector $\vex{\hat{\theta}}$. The angle $\phi$ expresses the alignment of the tangent vector relative to the spatial components of the dreibein triad. We note that the implementation of the null geodesic constraint reduces our two second-order geodesic equations [Eqs.~(\ref{AbstractGeodesicEquation}), (\ref{CartesianGlobal1}), and (\ref{CartesianGlobal2})] to three first-order equations. The form of the geodesic equation in Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) is useful for numerical simulation; while its right-hand side is not free of divergences at a curvature singularity, it avoids the singularities in the Christoffel symbols and in the denominator of Eq.~(\ref{ConstantIntervalConditionEquation}). Further, the nullity condition [$g_{\mu\nu} (d x^\mu / d s)(d x^\nu / d s) = 0$] is implemented automatically by the use of the unit vector $\vex{\hat{\phi}}$, providing numerical stability. Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) also allow easy insight into the geodesic collimation effects of singularities mentioned above; we discuss these next. \subsection{Geodesic collimation at nematic singularities} \label{CollimationSubsection} \begin{figure}[t!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{CollimationExampleA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{CollimationExampleB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{CollimationExampleC.PNG} \end{minipage}\hfill \caption{Example of null geodesic collimation. We use a spacetime defined by $v_D = x$ and $v_N = y$ in the purely nematic model of Eqs.~(\ref{NematicSubmodelDefinition1}) and (\ref{NematicSubmodelDefinition2}). A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{TimeDilationFactorDefinition})], depicting curvature singularities along the unit circle. B: Heat map annotated to mark the collimation angles of the singularities. C: Heat map with null geodesic trajectories superimposed. Note that the geodesics pass through the singular manifold at the correct collimation angles. We can also see some geodesics arc back into metastable orbits along the singular manifold.} \label{CircularCollimationFigure} \end{figure} \begin{figure}[b!]
\centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{RandomCollimationExampleA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{RandomCollimationExampleB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{RandomCollimationExampleC.PNG} \end{minipage}\hfill \caption{Example of null geodesic collimation in a random quenched gravitational disorder realization. We use the same manifold depicted in Fig.~\ref{QGDIntroductionFigure}. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{TimeDilationFactorDefinition})], depicting domain walls of singularities. B: Heat map annotated to mark the collimation angles of the singularities. C: Heat map with null geodesic trajectories superimposed. We see both that the geodesics pass through the singular manifold at the correct collimation angles, and that geodesics have a tendency to cross singular manifolds multiple times.} \label{RandomCollimationFigure} \end{figure} From Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}), we can now better understand the geometric features displayed by geodesics in the vicinity of a nematic curvature singularity, namely what we have been calling \textit{geodesic collimation}. As discussed in Secs.~\ref{CurvatureAndSingularitiesSubsection} and \ref{ReformulationOfGeodesicEquationSubsection}, all geodesics impinging on a nematic singularity are driven to pass through in the direction $\vex{\hat{\theta}^*}$ and at the speed $|\vex{v}|$. This is depicted for a simple circular geometry in Fig.~\ref{CircularCollimationFigure}, and for a random (quenched disorder) geometry in Fig.~\ref{RandomCollimationFigure}. We have seen that at a singularity, the vectors $\vex{v_1}$ and $\vex{v_2}$ are parallel, and can be parametrized as in Eq.~(\ref{NemSingDef}). Plugging this into Eqs.~(\ref{SimplifiedGeodesicEquationX}) and (\ref{SimplifiedGeodesicEquationY}) then gives $d y / d x = \tan\theta^*$, fixing the direction of the geodesic at the singularity. This is the \textit{collimation angle} $\theta^*$ of the curvature singularity. Via Eqs.~(\ref{NemSingDef}), (\ref{SimplifiedGeodesicEquationX}), and (\ref{SimplifiedGeodesicEquationY}), we also have that, at a singularity, the geodesic speed is given by $|\dot{\vex{x}}| = \vex{v}\cdot\vex{\hat{\phi}}$. From Eq.~(\ref{SimplifiedGeodesicEquationPhi}), we see that for $\phi$ to have a finite derivative at the singularity, we must have strong driving of $\vex{\hat{\phi}} \rightarrow \vex{\hat{v}}$, so that at the singularity $\vex{\hat{\phi}}$ is \textit{parallel} to the vectors $\vex{v_1},\vex{v_2}$. The geodesic equation thus locks the speed of the geodesic through the singularity to $|\vex{v}|$. \subsection{Geodesic coincidence at singular points} \label{GeodesicCoincidenceSubsection} A geodesic on a Riemannian differentiable manifold is uniquely specified if its position and velocity (tangent vector) at a point are given. Since geodesic collimation at a nematic curvature singularity dictates the velocity of a geodesic at a specific singular point, it would appear that only a single geodesic may pass through each nematic singular point. This turns out \textit{not} to be the case. A continuum of distinct geodesics may pass through the same singular point at the same time, coinciding in both position and velocity, without contradiction, as shown in Fig.~\ref{GeodesicCoincidenceFigure}.
This is possible because our space is only \emph{piece-wise} a Riemannian differentiable manifold, i.e.\ when restricted to the connected, open sets that are non-singular. As we approach a singularity, the geodesic equation avoids specifying the value of $\dot{\phi}$ at the singularity, despite the fact that $\{x,y,\phi,\dot{x},\dot{y}\}$ are completely determined, allowing for distinct geodesics to have the same instantaneous position and velocity (but different values of $\dot{\phi}$). In turn, the value of $\dot{\phi}$ at the singularity uniquely characterizes the geodesic; all higher derivatives of $x$, $y$, and $\phi$ at the singularity can be computed in terms of $\dot{\phi}$ and the values (and derivatives) of the disorder potentials at the singularity. To see how geodesic collimation avoids uniquely specifying the geodesics, we linearize Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) about a singular point to first order in $t$. This linearization requires the use of a convective derivative; we evaluate potentials along the geodesic and take a total derivative with respect to $t$. We let $\vex{v}$ and $\theta^*$ correspond to the singularity as defined by Eq.~(\ref{NemSingDef}), and (without loss of generality) we take the singularity to be at the origin and the collision to occur at time $t=0$. We have \begin{align} \begin{aligned} \dot{x}(t) &= \cos\theta^*|\vex{v}| + \mathcal{O}(t),\\ \dot{y}(t) &= \sin\theta^*|\vex{v}| + \mathcal{O}(t),\\ \vex{v_1}[\vex{x}(t)] &= \cos\theta^*\vex{v} + t|\vex{v}|(\vex{\hat{\theta}^*}\cdot\vex{\partial})\vex{v_1}|_{\vex{x}=0} + \mathcal{O}(t^2),\\ \vex{v_2}[\vex{x}(t)] &= \sin\theta^*\vex{v} + t|\vex{v}|(\vex{\hat{\theta}^*}\cdot\vex{\partial})\vex{v_2}|_{\vex{x}=0} + \mathcal{O}(t^2),\\ (\vex{v_1}\times\vex{v_2})[\vex{x}(t)] &= t|\vex{v}|(\vex{\hat{\theta}^*}\cdot\vex{\partial})(\vex{v_1}\times\vex{v_2})|_{\vex{x}=0} + \mathcal{O}(t^2). \end{aligned} \end{align} We also expand the unit vector $\vex{\hat{\phi}}$ about a singularity with collimation angle $\theta^*$: \begin{align} \label{PhiLinearization} \vex{\hat{\phi}}(t) = \vex{\hat{v}} - t\dot{\phi}(0)\vex{\hat{v}}^\perp + \mathcal{O}(t^2). \end{align} Plugging these expansions into Eq.~(\ref{SimplifiedGeodesicEquationPhi}), we obtain \begin{align} \label{GeodesicLinearization} \dot{\phi}(t) = \left[ \vex{\hat{v}} - t\dot{\phi}(0)\vex{\hat{v}}^\perp + \mathcal{O}(t^2) \right] \times \left[ \vex{\hat{v}}\left(\frac{-1}{t} + \mathcal{O}(1)\right) + \vex{D} + \mathcal{O}(t) \right], \end{align} where \begin{align} \vex{D} = \bigg(\partial_1\vex{v_1} + \partial_2\vex{v_2} - \frac{ [(\vex{\hat{\theta}^*}\cdot\vex{\partial}\vex{v_1})\partial_1 + (\vex{\hat{\theta}^*}\cdot\vex{\partial}\vex{v_2})\partial_2](\vex{v_1}\times\vex{v_2}) } {(\vex{\hat{\theta}^*}\cdot\vex{\partial})(\vex{v_1}\times\vex{v_2})}\bigg)\bigg|_{\vex{x}=0}. \end{align} Carrying out the cross products, we find that $\vex{\hat{v}}\times\vex{D} = 0$ and that Eq.~(\ref{SimplifiedGeodesicEquationPhi}) simply reduces to \begin{align} \dot{\phi}(t) = \dot{\phi}(0) + \mathcal{O}(t). \end{align} The $t \rightarrow 0$ limit leaves $\dot{\phi}(0)$ \textit{completely undetermined}. The fact that $\dot{\phi}(0)$ is left undetermined at the singularity opens up the \textit{possibility} that distinct geodesics can share an instantaneous position and velocity.
To see that this actually happens, we construct explicit examples from an exactly solvable model---this is done in Sec.~\ref{ToyModelsGeodesicsSection}, but the results are plotted in Fig.~\ref{GeodesicCoincidenceFigure}. \begin{figure}[t!] \centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{GeodesicCoincidenceA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{GeodesicCoincidenceB.PNG} \end{minipage} } \caption{Example demonstrating that \emph{distinct} geodesics can agree in both position and velocity (orientation of the tangent vector) at a singular point. The manifold is the ``linear dreibein wall'' treated in Sec.~\ref{ToyModelsGeodesicsSection}, and the null geodesics have the closed-form solution given by Eqs.~(\ref{LinearWallExactSolutionX}) and (\ref{LinearWallExactSolutionY}). A: Several geodesics are launched at $t=0$ (launch points marked with circles) in the vicinity of a nematic singularity wall along the line $x = 0$, with a horizontal collimation direction (marked with arrows). B: At $t = 1$, all geodesics simultaneously pass through the origin at the correct collimation velocity, as dictated by the singularity. All 25 distinct geodesics have the same instantaneous position and velocity at $t=1$. After traversing the singularity, these mutually diverge (remain distinct) in their subsequent evolution along the manifold.} \label{GeodesicCoincidenceFigure} \end{figure} \subsection{Time-dependent potentials} The form of the geodesic equation in Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) provides such a simplification that it is worth checking how this approach fares for time-dependent gravitational potentials. Surprisingly, we find that the geodesic equations are left \textit{almost} unaltered. With $\vex{v_j} \rightarrow \vex{v_j}[t,\vex{x}(t)],$ we still have $\dot{x}^j(t) = \vex{\hat{\phi}}\cdot\vex{v_j}.$ The equation for $\phi$ is slightly modified, \begin{align} \label{TimeDependentPotentialsGeodesicEquationPhi} \dot{\phi}(t) &= \vex{\hat{\phi}} \times \left\lgroup \begin{aligned} \bigg[ \partial_1\vex{v_1} + \partial_2\vex{v_2} \bigg] &- \frac{1}{\vex{v_1}\times\vex{v_2}} \bigg[ [\vex{v_1}\partial_1 + \vex{v_2}\partial_2] [\vex{v_1}\times\vex{v_2}] \bigg] \\ &- \frac{1}{\vex{v_1}\times\vex{v_2}} \bigg[ \vex{v_1}(\hat{\phi}\times\partial_0\vex{v_2}) - \vex{v_2}(\hat{\phi}\times\partial_0\vex{v_1}) \bigg] \end{aligned} \right\rgroup. \end{align} We see that the time-dependent generalization of the geodesic equation is relatively simple as well, including only a single correction term with a quadratic $\vex{\hat{\phi}}$ dependence. Further, this form makes it apparent that the singularity-collimation effect survives in the time-dependent generalization. At a singular point, we still have strong driving of $\vex{\hat{\phi}}$ to $\vex{\hat{v}}$. While our focus in this paper is on quenched (static) potentials, this result applies generally to any time-dependent gravitational spacetime expressible in the form given by Eq.~(\ref{DisorderMetricDefinition}), and could have potentially useful applications in future work. We will revisit this in our concluding discussion, Sec.~\ref{ConclusionSection}.
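The first-order system Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) is also straightforward to integrate numerically. The sketch below is our own transcription (the integrator and tolerances are illustrative choices, not those used for our figures), written for the example disorder vectors $\vex{v_1} = [1+x, y]^T$ and $\vex{v_2} = [y, 1-x]^T$, whose singularities lie on the unit circle; these potentials will reappear as the pure nematic model of Sec.~\ref{SubmodelsSection}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch (our own transcription) of the null geodesic system
# Eqs. (SimplifiedGeodesicEquationX)-(SimplifiedGeodesicEquationPhi) for
# v1 = [1+x, y], v2 = [y, 1-x]; derivatives are computed analytically.
def rhs(t, state):
    x, y, phi = state
    ph = np.array([np.cos(phi), np.sin(phi)])
    v1 = np.array([1.0 + x, y])
    v2 = np.array([y, 1.0 - x])
    cross = v1[0] * v2[1] - v1[1] * v2[0]        # = 1 - x^2 - y^2
    div_term = np.array([2.0, 0.0])              # d1 v1 + d2 v2 = [1,0]+[1,0]
    grad_cross = np.array([-2.0 * x, -2.0 * y])  # grad(1 - x^2 - y^2)
    advect = v1 * grad_cross[0] + v2 * grad_cross[1]
    W = div_term - advect / cross
    return [ph @ v1, ph @ v2, ph[0] * W[1] - ph[1] * W[0]]  # phihat x W

# Release a null geodesic inside the singular unit circle. Near the circle
# the equation becomes stiff; production runs would need event detection.
sol = solve_ivp(rhs, (0.0, 2.5), [0.2, 0.1, 0.3], rtol=1e-9, atol=1e-12)
print(sol.y[:2, -1])                             # final geodesic position
\end{verbatim}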
\section{Isotropic and nematic submodels} \label{SubmodelsSection} In order to qualitatively differentiate the effects of isotropic and nematic fluctuations on the geodesics, we identify two subclasses of quenched gravitation that we will study alongside the general metric in Eq.~(\ref{DisorderMetricDefinition}). \subsection{Pure isotropic model} \label{PurelyIsotropicModelSubsection} The pure \textit{isotropic} model will be defined by \begin{align} \label{IsotropicSubmodelDefinition1} \vex{v_1}(\vex{x}) &= \begin{bmatrix} 1 + v_D(\vex{x})\\ v_N(\vex{x}) \end{bmatrix}, \\ \label{IsotropicSubmodelDefinition2} \vex{v_2}(\vex{x}) &= \begin{bmatrix} -v_N(\vex{x})\\ 1 + v_D(\vex{x}) \end{bmatrix}, \end{align} where $v_D,v_N$ are the \textit{diagonal} and \textit{off-diagonal} potentials, respectively. This model has been designed so that $\vex{v_1}\cdot\vex{v_2} = 0$ and $|\vex{v_1}| = |\vex{v_2}|$ at every point; it encodes isotropic fluctuations and pseudospin rotations, but does not allow for nematic compression of the Dirac cone. In particular, there will be no nematic singularities---all singular points will host a fully flat local Dirac cone. In light of the pseudospin invariance of the theory, the dynamics of this model are fully determined by the related model with $\vex{v_j} = v(\vex{x}) \vex{\hat{e}_j},$ where $\vex{\hat{e}_j}$ is a coordinate unit vector and $v(\vex{x})^2 = |\vex{v_j}|^2 = \vex{v_1}\times\vex{v_2} = (1+v_D)^2 + v_N^2 = 1/\gamma \geq 0.$ Singularities occur only at points where $\{v_D = -1, v_N = 0\}$, as depicted in Fig.~\ref{IsotropicSubmodelCurvatureFigure}. Plugging the disorder vectors of Eqs.~(\ref{IsotropicSubmodelDefinition1}) and (\ref{IsotropicSubmodelDefinition2}) into the constant-interval speed condition [Eq.~(\ref{ConstantIntervalConditionEquation})], we have $\sigma(\theta,\vex{x}) = v(\vex{x})$ for null geodesics. For the isotropic model, there is no angular dependence of $\sigma$; all geodesics that hit a singularity are stopped, in line with the remarks about isotropic singularities in Sec.~\ref{CollimationSubsection}. \begin{figure}[b!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicModelExampleA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicModelExampleB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicModelExampleC.PNG} \end{minipage}\hfill \caption{Curvature in the pure isotropic model. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{TimeDilationFactorDefinition})] for the ``disorder space'' isotropic manifold, with $v_D = x$ and $v_N = y$ in Eqs.~(\ref{IsotropicSubmodelDefinition1}) and (\ref{IsotropicSubmodelDefinition2}). We note that there is a single isolated curvature singularity. B: A heat map of the time-dilation factor for a random realization of pure isotropic quenched gravity. 
C: The heat map in B annotated to mark the locations of isotropic singularities.} \label{IsotropicSubmodelCurvatureFigure} \end{figure} In the case of the isotropic model, we find a dramatically simpler form of the metric and Christoffel symbols, \begin{align} g_{\mu \nu}(\vex{x}) &\rightarrow \begin{bmatrix} -v(\vex{x})^2 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}, \\ \Gamma^0_{\mu\nu}(\vex{x}) &\rightarrow \begin{bmatrix} 0 & \partial_1 & \partial_2 \\ \partial_1 & 0 & 0\\ \partial_2 & 0 & 0 \end{bmatrix} \log[v(\vex{x})], \\ \Gamma^1_{\mu\nu}(\vex{x}) &\rightarrow \begin{bmatrix} v(\vex{x})\partial_1v(\vex{x}) & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, \\ \Gamma^2_{\mu\nu}(\vex{x}) &\rightarrow \begin{bmatrix} v(\vex{x})\partial_2v(\vex{x}) & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}. \end{align} From Eq.~(\ref{TemporalFirstIntegralEquation}), we have $d t / d s = v(\vex{x})^{-2}$ (setting the constant $E/m = 1$). The spatial geodesic equations are \begin{align} \label{IsotropicGeodesicHamiltonianSystem} \frac{d^2 \vex{x}}{d s^2} = \vex{\nabla}\left[\frac{1}{2v(\vex{x})^2}\right] = \vex{\nabla}\left[\frac{\gamma(\vex{x})}{2}\right]. \end{align} With respect to the affine parameter $s$, these equations represent a non-relativistic classical-Hamiltonian system with the effective potential \begin{equation} \label{IsotropicGeodesicPotentialEnergy} U(\vex{x}) = - \frac{1}{2} \gamma(\vex{x}). \end{equation} Conservation of energy for the Hamiltonian system takes the form \begin{align} \frac{1}{2}\left|\frac{d \vex{x}}{d s}\right|^2 &= \mathcal{E} + \frac{1}{2}\gamma(\vex{x}). \end{align} Above, $\mathcal{E}$ determines both the Hamiltonian energy of the classical system and the length of the spacetime interval: \begin{align} \Delta(s) = g_{\mu\nu}[\vex{x}(s)] \frac{d x^\mu}{d s} \frac{d x^\nu}{d s} = 2\mathcal{E}. \end{align} A null geodesic is a trajectory with $\mathcal{E}=0$. In this picture, the factor $\gamma = 1/v^2(\vex{x})$ [Eq.~(\ref{TimeDilationFactorDefinition})] sets the potential energy $U(\vex{x})$, and singularities are infinitely deep potential wells. Conservation of the energy $\mathcal{E}$ would seem to prevent geodesics from terminating at an isotropic singularity, but this is true only in terms of the affine parameterization. Because $U \rightarrow - \infty$ corresponds to infinitely strong time dilation, when reparametrized in terms of time, the spatial coordinates $\vex{x}(t)$ instead slow upon approaching the singular point, and collide with it only as $t \rightarrow \infty$. This can be attributed to the additional (non-Christoffel) ``friction terms'' appearing on the right-hand side of Eqs.~(\ref{CartesianGlobal}) that arise due to the time reparametrization. In Sec.~\ref{ToyModelsGeodesicsIsotropicPowerLawSubsection}, we consider a highly symmetric geometry with an isotropic singularity at the origin that can be solved exactly. In that case, we will see explicitly how geodesic capture occurs. The analogue of Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) for the isotropic model is also much simpler, \begin{align} \dot{\vex{x}} &= v(\vex{x}) \vex{\hat{\phi}}, \\ \dot{\phi} &= [\vex{\nabla} v(\vex{x})]\times \vex{\hat{\phi}}. \end{align} In this case, we see that $\vex{\hat{\phi}}$ simply gives the velocity direction of the geodesic, and $v(\vex{x})$ is the speed. The velocity vector rotates when it is not aligned with the gradient of $v(\vex{x})$.
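As a worked example (our own sketch, anticipating the power-law geometry of Sec.~\ref{ToyModelsGeodesicsIsotropicPowerLawSubsection}), these equations can be integrated for the profile $v = r$. One can check that the angle $\psi$ between $\vex{\hat{\phi}}$ and the radial direction is then conserved, giving $r(t) = r_0 e^{t\cos\psi}$: inward launches ($\cos\psi < 0$) spiral toward the singularity but reach it only as $t \rightarrow \infty$, illustrating capture:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch (our own example) of the isotropic null geodesic system
#   xdot = v(x) phihat,  phidot = grad(v) x phihat,
# for the power-law profile v = r, which hosts an isotropic curvature
# singularity at the origin. Launch point and angle are arbitrary; the
# chosen angle gives cos(psi) < 0, i.e., an inward spiral.
def rhs(t, state):
    x, y, phi = state
    r = np.hypot(x, y)
    ph_x, ph_y = np.cos(phi), np.sin(phi)
    gx, gy = x / r, y / r                 # grad(r): unit radial vector
    return [r * ph_x, r * ph_y, gx * ph_y - gy * ph_x]

sol = solve_ivp(rhs, (0.0, 20.0), [3.0, 3.0, 4.0], rtol=1e-10, atol=1e-12)
r_t = np.hypot(sol.y[0], sol.y[1])
print(r_t[0], "->", r_t[-1])   # exponential decay toward r = 0
\end{verbatim}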
Finally, in the case of the isotropic model the Ricci scalar curvature is simple enough to state: \begin{align} R(\vex{x}) = - \frac{2 \nabla^2 v}{v}. \end{align} The fully quantum-mechanical formulation of the pure isotropic model can be re-written so that its time-dependent wave function is determined by the solution of an auxiliary Hermitian differential equation. To see this, note that pseudospin invariance asserts that the dynamics of the general isotropic model can be studied via the action \begin{align} \label{RotatedGeneralIsotropicAction} \mathcal{S} &= i\int dt \int d^2\vex{x}\ \bar{\psi}(t,\vex{x}) \left\lgroup \begin{aligned} \partial_t &+ v(\vex{x})\hat{\sigma}^1\partial_1 + v(\vex{x})\hat{\sigma}^2\partial_2\\ &+ \frac{1}{2}[\partial_1v(\vex{x})]\hat{\sigma}^1 + \frac{1}{2}[\partial_2v(\vex{x})]\hat{\sigma}^2 \end{aligned} \right\rgroup \psi(t,\vex{x}), \end{align} from which we can extract the Schr\"odinger equation. We can deal with the spin connection terms [the second line in Eq.~(\ref{RotatedGeneralIsotropicAction})] by introducing $\tilde{\psi}(t,\vex{x}) = \sqrt{v(\vex{x})} \, \psi(t,\vex{x}),$ which satisfies \begin{align} \label{IsotropicAuxilliaryShrodingerEquation} \left\{ i\partial_t + v(\vex{x}) [ \hat{\sigma}^1 i\partial_1 + \hat{\sigma}^2 i\partial_2 ] \right\} \tilde{\psi}(t,\vex{x}) = 0. \end{align} Dividing by $v(\vex{x})$, we see that the energy eigenstates are determined by the equation \begin{align} \label{RandomEnergyDiracEquation} - i \bigg[\hat{\sigma}^1\partial_1 + \hat{\sigma}^2\partial_2\bigg] \tilde{\psi}_E(\vex{x}) = \frac{E}{v(\vex{x})} \tilde{\psi}_E(\vex{x}). \end{align} \subsection{Pure nematic model} \label{PurelyNematicModelSubsection} The pure \textit{nematic} model will be defined by \begin{align} \label{NematicSubmodelDefinition1} \vex{v_1}(\vex{x}) &= \begin{bmatrix} 1 + v_D(\vex{x})\\ v_N(\vex{x}) \end{bmatrix}, \\ \label{NematicSubmodelDefinition2} \vex{v_2}(\vex{x}) &= \begin{bmatrix} v_N(\vex{x})\\ 1 - v_D(\vex{x}) \end{bmatrix}, \end{align} where $v_D,v_N$ are the \textit{diagonal} and \textit{off-diagonal} potentials, respectively. \begin{figure}[t!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{NematicModelExampleA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{NematicModelExampleB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{NematicModelExampleC.PNG} \end{minipage}\hfill \caption{Curvature in the pure nematic model. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{TimeDilationFactorDefinition})] for the ``disorder space'' nematic manifold, with $v_D = x$ and $v_N = y$ in Eqs.~(\ref{NematicSubmodelDefinition1}) and (\ref{NematicSubmodelDefinition2}). We note the singularities lie along the unit circle, with the collimation angle marked in red. B: A heat map of the time-dilation factor for a random realization of pure nematic QGD. C: The heat map in B annotated to mark the collimation angles of the nematic singularities. Numerically computed null geodesics are displayed in Fig.~\ref{CircularCollimationFigure} for this geometry. } \label{NematicSubmodelCurvatureFigure} \end{figure} This model has been designed to encode nematic fluctuations, but does not allow for isotropic compression of the Dirac lightcone. In particular, there can be no isotropic singularities.
We note that $\vex{v_1}\times\vex{v_2} = 1-(v_D^2 + v_N^2) = 1/\gamma$, so that the curvature singularities fall along the unit circle in $\{v_D,v_N\}$-space, as depicted in Fig.~\ref{NematicSubmodelCurvatureFigure}. Plugging the disorder vectors of Eqs.~(\ref{NematicSubmodelDefinition1})--(\ref{NematicSubmodelDefinition2}) into the constant-interval speed condition [Eq.~(\ref{ConstantIntervalConditionEquation})], for null geodesics we have \begin{align} \label{NematicSpeedCondition} \sigma(\theta,\vex{x}) = \frac{|1-(v_D^2 + v_N^2)|}{\sqrt{[v_D-\cos(2\theta)]^2+[v_N-\sin(2\theta)]^2}}. \end{align} We see that for the nematic model, there is an angular dependence of $\sigma$, and the collimation angle of a singularity is closely tied to the geometry of the unit circle in $\{v_D,v_N\}$-space. Unlike the pure isotropic model, the pure nematic model yields neither a significantly simplified form of the geodesic equation nor a partial solution of the quantum problem analogous to Eq.~(\ref{RandomEnergyDiracEquation}). It is related to the $T\bar{T}$ deformation of 2D quantum field theories \cite{NumericalQGD,TTBar}. The Ricci scalar curvature is extremely unwieldy and not particularly illuminating, so we do not state it here. \section{Solvable manifolds with curvature singularities} \label{ToyModelsGeodesicsSection} In this section we present several ``toy models'' of quenched gravitation, that is, highly symmetric realizations of the velocity potentials $\vex{v}_{1,2}(\vex{x})$ in Eq.~(\ref{DisorderMetricDefinition}) that allow a (full or partial) analytical solution of the geodesic equation. We have several motivations here. Firstly, we observe in numerical solutions that geodesics are often captured by isotropic singularities or drawn into metastable gravitationally bound orbits along nematic singularity walls. We would like to understand these phenomena through the lens of some exactly solvable models. In particular, we want closed-form solutions that shed light on the nature of bound-state orbits and on the asymptotic approach to a singular point. Further, we can use analytical solutions to benchmark our numerical solver. \subsection{Isotropic power-law model} \label{ToyModelsGeodesicsIsotropicPowerLawSubsection} \begin{figure}[b] \centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicPowerLawLinearA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicPowerLawLinearB.PNG} \end{minipage} } \caption{Null geodesic trajectories for the isotropic power-law model with $\alpha = 1$. A: Geodesics released at $(3,3)$ with various angles. B: Long-term geodesic dynamics, in agreement with Eqs.~(\ref{IsotropicAlpha1SolutionR}) and (\ref{IsotropicAlpha1SolutionTheta}). } \label{IsotropicPowerLaw1Figure} \end{figure} \begin{figure}[t] \centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicPowerLawQuadraticA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{IsotropicPowerLawQuadraticB.PNG} \end{minipage} } \caption{Geodesic trajectories for the isotropic power-law model with $\alpha = 2$. A: Geodesics released at $(3,3)$ with various angles. B: Long-term geodesic dynamics, in agreement with Eqs.~(\ref{IsotropicPowerlawGeodesicClosedFormBeta}) and (\ref{IsotropicPowerlawGeodesicClosedFormR}).
} \label{IsotropicPowerLaw2Figure} \end{figure} We observe in numerical results that isotropic singularities tend to be highly attractive, and that geodesics can be captured by these. One may ask whether this is an artifact of the numerical solver or whether the geodesics truly asymptote to the singularities. Here we study a family of integrable examples of the purely isotropic model to see how the geodesics approach such a singularity. We will consider the pure isotropic model with $v(r) = r^\alpha$ for $\alpha > 0$ (in the notation of Sec.~\ref{PurelyIsotropicModelSubsection}). Adopting polar coordinates $(r,\theta)$ in the plane, one finds that the geodesic equations read \begin{subequations} \label{IsotropicPowerLawGeodesicEquationsPolar} \begin{align} \dot{r}(t) &= r^\alpha \cos\big[\phi(t)-\theta(t)\big],\\ \dot{\theta}(t) &= r^{\alpha - 1} \sin\big[\phi(t)-\theta(t)\big],\\ \dot{\phi}(t) &= \alpha r^{\alpha - 1}\sin\big[\phi(t)-\theta(t)\big]. \end{align} \end{subequations} We stress that here $\theta$ denotes the positional angle in the plane, \emph{not} the orientation of the tangent (velocity) vector, as employed elsewhere in this paper. Eqs.~(\ref{IsotropicPowerLawGeodesicEquationsPolar}) are easy to solve; integrating the last two equations gives $\phi(t) = \alpha \theta(t) + c_0$ for some initial-value constant $c_0$. In the \emph{marginal} $\alpha = 1$ case (see below), the null geodesics are given by \begin{subequations}\label{IsotropicAlpha1Solution} \begin{align} \label{IsotropicAlpha1SolutionR} r(t) &= r_0 e^{\cos(c_0)t},\\ \label{IsotropicAlpha1SolutionTheta} \theta(t) &= \sin(c_0)t + \theta_0. \end{align} \end{subequations} These geodesics rotate with a constant angular velocity, and they either decay towards the origin or explode outwards exponentially. Geodesics that start out heading towards the singularity are always captured, and those that start out heading away always escape. This is depicted in Fig.~\ref{IsotropicPowerLaw1Figure}. For a generic speed profile that can be Taylor-expanded in $r$ about an isotropic singularity, the $\alpha = 1$ case considered above captures the lowest-order term in the expansion. This result provides intuition that a geodesic that enters a sufficiently small neighborhood of an isotropic singularity heading towards it will be asymptotically captured. We can also treat the general case. For $\alpha \neq 1$, define the function $\beta(t) \equiv \phi(t)-\theta(t) = (\alpha - 1)\theta(t) + c_0$. With this, the geodesic equations have a first integral of the form \begin{align} \frac{\sin[\beta(t)]}{\sin\beta_0} = \left[\frac{r(t)}{r_0}\right]^{\alpha-1}. \end{align} We can see from this formula that for $\alpha>1$, all geodesics with $\sin\beta_0 \neq 0$ are bound for these manifolds. This allows us to solve for $\beta(t)$ and $r(t)$: \begin{subequations}\label{IsotropicPowerlawGeodesicClosedForm} \begin{align} \label{IsotropicPowerlawGeodesicClosedFormBeta} \cot[\beta(t)] &= (1-\alpha)\frac{r_0^{\alpha-1}}{\sin\beta_0}t + \cot\beta_0, \\ \label{IsotropicPowerlawGeodesicClosedFormR} r(t) &= \left\{ \frac{\sin\beta_0}{r_0^{\alpha-1}} \sqrt{ 1 + \left[ (1-\alpha)\frac{r_0^{\alpha-1}}{\sin\beta_0}t + \cot\beta_0 \right]^2 } + c_1 \right\}^{-1/(\alpha-1)}, \end{align} \end{subequations} where $c_1$ is a constant determined by the $t \rightarrow 0$ limit.
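The closed-form results above are easy to check against a direct numerical integration of Eqs.~(\ref{IsotropicPowerLawGeodesicEquationsPolar}). The following minimal sketch (our own check, assuming SciPy; the launch data are arbitrary) verifies the first integral $\sin\beta/\sin\beta_0 = (r/r_0)^{\alpha-1}$ along the numerical flow:
\begin{verbatim}
# Sketch: integrate the polar geodesic equations for v(r) = r^alpha and
# verify the first integral sin(beta)/sin(beta_0) = (r/r_0)^(alpha-1).
import numpy as np
from scipy.integrate import solve_ivp

alpha = 2.0

def rhs(t, u):                                   # u = (r, theta, phi)
    r, theta, phi = u
    s, c = np.sin(phi - theta), np.cos(phi - theta)
    return [r**alpha * c, r**(alpha - 1) * s, alpha * r**(alpha - 1) * s]

r0, theta0, phi0 = 3.0, 0.0, 2.0                 # arbitrary; sin(beta_0) != 0
beta0 = phi0 - theta0
sol = solve_ivp(rhs, (0.0, 5.0), [r0, theta0, phi0], rtol=1e-10, atol=1e-12)

r, theta, phi = sol.y
residual = np.sin(phi - theta) / np.sin(beta0) - (r / r0)**(alpha - 1)
print(np.max(np.abs(residual)))                  # ~ integrator tolerance
\end{verbatim}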
We see from Eq.~(\ref{IsotropicPowerlawGeodesicClosedForm}) that $r(t)$ approaches zero in an $\alpha$-dependent power law for $\alpha > 1$, and that the asymptotic angle of approach to the singularity depends only on $\theta_0$ and $\phi_0$. This is depicted in Fig.~\ref{IsotropicPowerLaw2Figure}. This model allows us to understand how isotropic singularities can capture geodesics. The pure isotropic model is described by a Hamiltonian system in terms of the affine parameter [Eqs.~(\ref{IsotropicGeodesicHamiltonianSystem}) and (\ref{IsotropicGeodesicPotentialEnergy})]. For the isotropic power-law model $\gamma(r) = 1/r^{2\alpha}$, we have the conservation law \begin{align} \label{IsotropicPowerLawCentrifugalBarrier} \frac{1}{2}\left(\frac{d r}{d s}\right)^2 + \frac{l^2}{2r^2} - \frac{1}{2r^{2\alpha}} = \mathcal{E}, \end{align} where $l = r^2 (d \theta / d s)$ is the angular momentum. For $\alpha < 1$, the effective radial potential diverges to $+\infty$ as $r \rightarrow 0$, and no trajectories with $l \neq 0$ cross the singularity. For $\alpha > 1$, the effective potential diverges to $-\infty$, and all trajectories cross the singularity. This all agrees with the closed-form solution for null geodesics, Eqs.~(\ref{IsotropicPowerlawGeodesicClosedFormBeta}) and (\ref{IsotropicPowerlawGeodesicClosedFormR}). While Eq.~(\ref{IsotropicPowerLawCentrifugalBarrier}) implies that the kinetic term diverges when a geodesic crosses a singularity, this is only with respect to the affine parametrization; time dilation effects overwhelm that divergence, and when parametrized in terms of global coordinate time $t$, the geodesics slow and asymptote to the singularity. The case $\alpha = 1$ is marginal, and only trajectories with $l^2 > 1$ are blocked from the singularity by the centrifugal barrier. We can see how this works from the solution in Eq.~(\ref{IsotropicAlpha1Solution}). Naively calculating $l^2$ from these would give a non-constant angular momentum, because these solutions are given in terms of the global coordinate time. Re-expressing Eq.~(\ref{IsotropicAlpha1Solution}) in terms of the affine parametrization, we have \begin{align} \begin{aligned} \frac{d r}{d s} &= \frac{1}{r}\cos(c_0), \\ \frac{d \theta}{d s} &= \frac{1}{r^2}\sin(c_0). \end{aligned} \end{align} We see that $l^2 = \sin(c_0)^2 \leq 1$, so that in this case, none of our geodesics are centrifugally prevented from crossing the singularity. As a result, the solution in Eq.~(\ref{IsotropicAlpha1Solution}) asymptotes to $r = 0$ in either the infinite future or past. The exception is the $l^2 = 1$ orbit, with $\cos(c_0) = 0$, which orbits at fixed radius. \subsection{xy-factored model} \label{SubsectionXYFactoredModel} In this section, we present a class of 2D toy models that is solvable due to a ``factorization'' into independent 1D structures. It provides a class of example manifolds on which both nematic and isotropic singularities attract geodesics; all geodesics asymptote towards nematic singularity walls, and run along these walls until finally being captured by an isotropic singularity. We are interested in the class of models with $\vex{v_1}\cdot\vex{v_2} = 0$ and $\partial_2|\vex{v_1}| = \partial_1|\vex{v_2}| = 0$ everywhere. In light of the pseudospin invariance of the geodesic dynamics, we may reduce to the model defined by the disorder vectors $\vex{v_1} = v_1^1(x)\vex{\hat{e}_1}$ and $\vex{v_2} = v_2^2(y)\vex{\hat{e}_2}$.
We then have $|\vex{v_1}\times\vex{v_2}| = v_1^1(x) v_2^2(y)$, so that singularities fall along the vertical and horizontal lines defined by the zeros of $\{v_1^1,v_2^2\}$. Let $\{x_j\}$ and $\{y_j\}$ denote the zeros of $v_1^1$ and $v_2^2$, respectively; we note that they partition the plane into rectangular boxes, $B_{ij} = (x_i,x_{i+1})\times(y_j,y_{j+1})$ (see Fig.~\ref{XYFactoredModelGeodesicsFigure}), separated by walls of nematic singularities and with isotropic singularities at the corners. \begin{figure}[t!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{XYFactoredModelA.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{XYFactoredModelB.PNG} \end{minipage}\hfill \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{XYFactoredModelC.PNG} \end{minipage}\hfill \caption{Geodesic trajectories in the xy-factored model. We use a manifold with $v_1^1(x) = p(x)$, $v_2^2(y) = p(y)$ with $p(x) = x^3 + x^2 - 6x$, creating curvature singularities along the lines $\{x = -3,\ x = 0,\ x = 2,\ y = -3,\ y = 0,\ y = 2\}$. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{TimeDilationFactorDefinition})], depicting curvature singularities along vertical and horizontal walls. B: Heat map annotated to mark the collimation angles of the nematic singularities and the locations of isotropic singularities. C: Heat map with geodesic trajectories superimposed. Note that the geodesics are trapped in the box in which they start and asymptote towards isotropic singularities at the corners.} \label{XYFactoredModelGeodesicsFigure} \end{figure} We solve for the geodesic dynamics. In this setting, the geodesic equations (\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) reduce to \begin{subequations} \begin{align} \dot{x}(t) &= v_1^1(x)\cos\phi, \\ \dot{y}(t) &= v_2^2(y)\sin\phi, \\ \dot{\phi}(t) &=0. \end{align} \end{subequations} That $\phi$ is constant along all geodesics allows the geodesics to be written in closed form. We define the functions \begin{subequations}\label{FGMapping} \begin{align} \label{FMapping} F_j(x) &= \int_{(x_j + x_{j+1})/2}^x \frac{dz}{v_1^1(z)}, \\ \label{GMapping} G_j(y) &= \int_{(y_j + y_{j+1})/2}^y \frac{dz}{v_2^2(z)}. \end{align} \end{subequations} The mapping $(x,y)\leftrightarrow \left(F_i(x),G_j(y)\right)$ provides a diffeomorphism between the box $B_{ij}$ and the plane. Geodesics in the box $B_{ij}$ are described by \begin{subequations} \label{XYFactoredModelGeodesicSolution} \begin{align} \label{XYFactoredModelGeodesicSolutionX} x(t) &= F_i^{-1}[t\cos\phi_0 + F_i(x_0)], \\ \label{XYFactoredModelGeodesicSolutionY} y(t) &= G_j^{-1}[t\sin\phi_0 + G_j(y_0)], \end{align} \end{subequations} which implies that geodesics never escape the $B_{ij}$ regions in which they originate; they asymptotically approach the isotropic singularities in the corners of the $B_{ij}$ regions, riding along the nematic singularity walls. We plot an example in Fig.~\ref{XYFactoredModelGeodesicsFigure}. Here we have a concrete example of null geodesics asymptotically approaching both nematic singularity walls and isotropic singularities, with capture by isotropic singularities, and a new perspective on the interactions between nematic and isotropic singularities in models where both are allowed.
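The rectifying property of the mapping in Eqs.~(\ref{FMapping}) and (\ref{GMapping}) can be confirmed numerically: along any geodesic, $F_i[x(t)]$ and $G_j[y(t)]$ grow linearly in $t$. A minimal sketch (our own check, assuming SciPy), using the cubic profile of Fig.~\ref{XYFactoredModelGeodesicsFigure} inside the box $(0,2)\times(0,2)$:
\begin{verbatim}
# Sketch: check that d/dt F(x(t)) = cos(phi_0) for the xy-factored model,
# i.e. that (F, G) straighten the geodesic flow inside a box B_ij.
import numpy as np
from scipy.integrate import quad, solve_ivp

p = lambda u: u**3 + u**2 - 6.0 * u                   # zeros at -3, 0, 2
F = lambda x: quad(lambda z: 1.0 / p(z), 1.0, x)[0]   # midpoint of (0,2) is 1

phi0 = 0.8
x0, y0 = 1.3, 0.7                                     # launch inside (0,2) x (0,2)
rhs = lambda t, u: [p(u[0]) * np.cos(phi0), p(u[1]) * np.sin(phi0)]
sol = solve_ivp(rhs, (0.0, 2.0), [x0, y0], rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.0, 2.0, 9))

for t, x in zip(sol.t, sol.y[0]):
    print(t, F(x) - F(x0) - t * np.cos(phi0))         # residual ~ quadrature error
\end{verbatim}
The trajectory never leaves the box: as $F[x(t)]$ runs off to infinity linearly in $t$, $x(t)$ only asymptotes to a wall (here the corner at the origin), in line with Eqs.~(\ref{XYFactoredModelGeodesicSolutionX}) and (\ref{XYFactoredModelGeodesicSolutionY}).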
The xy-factored model also offers perspective on what happens when the collimation angle of a nematic singularity is fixed to be parallel to the singularity manifold: such singularities seem to be impossible for geodesics to cross. \subsection{Dreibein wall model} \label{DreibienWallsGeodesicsSubsection} We next consider a model with a dreibein wall. The goal is to understand gravitationally bound orbits of geodesics that cross a nematic singularity wall many times, as often observed in numerical solutions (e.g., Fig.~\ref{MetastableOrbitsFigure}). We define the model by the disorder vectors $\vex{v_1} = \vex{\hat{e}_1}$ and $\vex{v_2} = m(x) \, \vex{\hat{e}_2}.$ We choose $m(x)$ such that $m(0)=0$ to place the nematic singularity wall along the $y$-axis. The model has geodesic collimation angle $\theta^* = 0$ (perpendicular to the dreibein wall); walls with other collimation angles are easily constructed, but exhibit qualitatively similar physics. \begin{figure}[b!] \centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{LinearWallA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{LinearWallB.PNG} \end{minipage} } \caption{Null geodesic trajectories for the linear dreibein wall, with $m(x) = x$. A: Geodesics launched near the wall of singularities. The singular manifold has been labeled with collimation arrows. B: The long-time geodesic dynamics. Note that all geodesics are in permanent bound states along the singular wall; see Eq.~(\ref{LinearWallExactSolution}). All crossings of the singular wall occur at the collimation angle $\theta^* = 0$ (or $\pi$).} \label{LinearWallGeodesicsFigure} \end{figure} \begin{figure}[t!] \centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{TanhWallA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{TanhWallB.PNG} \end{minipage} } \caption{Null geodesic trajectories for the dreibein wall, with $m(x) = \tanh(x)$. A: Geodesics launched near the wall of singularities. The singular manifold has been labeled with collimation arrows. B: The long-time geodesic dynamics. Note that some geodesics are in permanent bound states along the singular wall, while others escape to infinity. The separation between bound and free orbits is determined by Eq.~(\ref{BoundStateCondTanhWall}). All crossings of the singular wall occur at the collimation angle $\theta^* = 0$ (or $\pi$).} \label{TanhWallGeodesicsFigure} \end{figure} The null geodesic equations (\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) reduce to \begin{subequations} \begin{align} \label{GeodesicEquationXVB} \dot{x}(t) &= \cos[\phi(t)], \\ \label{GeodesicEquationYVB} \dot{y}(t) &= m[x(t)]\sin[\phi(t)], \\ \label{GeodesicEquationPhiVB} \dot{\phi}(t) &= \frac{m'[x(t)]}{m[x(t)]}\sin[\phi(t)], \end{align} \end{subequations} which admit a general first integral of the form \begin{align} \label{FirstIntegralPhiVB} \frac{\sin[\phi(t)]}{\sin\phi_0} = \frac{m[x(t)]}{m_0}. \end{align} We note that along a trajectory we must have $|m[x(t)]| \leq |m_0/\sin\phi_0|.$ This condition will provide a very simple way to understand and compute trapping horizons for gravitationally bound orbits. In particular, we can see that for unbounded $m(x)$, \textit{all} geodesic trajectories with $\sin\phi_0 \neq 0$ are bound.
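The first integral Eq.~(\ref{FirstIntegralPhiVB}) is likewise easy to verify numerically. A minimal sketch (our own check, assuming SciPy) integrates Eqs.~(\ref{GeodesicEquationXVB})--(\ref{GeodesicEquationPhiVB}) for the bounded profile $m(x) = \tanh(x)$, on an escaping orbit chosen so that the trajectory never reaches the wall at $x=0$ (avoiding the $m \to 0$ denominator in the raw equations):
\begin{verbatim}
# Sketch: verify sin(phi)/sin(phi_0) = m(x)/m(x_0) along a free orbit of
# the dreibein wall model with m(x) = tanh(x).
import numpy as np
from scipy.integrate import solve_ivp

m  = np.tanh
mp = lambda x: 1.0 / np.cosh(x)**2               # m'(x)

def rhs(t, u):                                   # u = (x, y, phi)
    x, y, phi = u
    return [np.cos(phi), m(x) * np.sin(phi), (mp(x) / m(x)) * np.sin(phi)]

x0, y0, phi0 = 0.5, 0.0, 0.3                     # |m(x0)/sin(phi0)| > 1: free
sol = solve_ivp(rhs, (0.0, 20.0), [x0, y0, phi0], rtol=1e-10, atol=1e-12)

residual = np.sin(sol.y[2]) / np.sin(phi0) - m(sol.y[0]) / m(x0)
print(np.max(np.abs(residual)))                  # ~ integrator tolerance
\end{verbatim}
For a bound orbit the same relation holds between wall crossings, where $\sin\phi$ and $m(x)$ pass through zero together.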
We may use Eq.~(\ref{FirstIntegralPhiVB}) to reformulate the geodesic equations as a first-order system in the position variables: \begin{subequations}\label{FirstIntegralVB} \begin{align} \label{FirstIntegralXVB} \dot{x}(t) &= \pm\sqrt{1-\left[\sin\phi_0\frac{m(x)}{m_0}\right]^2}, \\ \label{FirstIntegralYVB} \dot{y}(t) &= \sin\phi_0\frac{m[x(t)]^2}{m_0}. \end{align} \end{subequations} We can extract the behavior of null geodesics near a trapping horizon by linearizing Eq.~(\ref{FirstIntegralVB}) around a point $x_*$ such that $m(x_*) = |m_0/\sin\phi_0|.$ Assuming (without loss of generality) that $m'(x_*)>0$ gives \begin{align} x(t_*) = x_* - \left|\frac{\sin\phi_0}{2m_0}\right|m'(x_*)t_*^2, \end{align} where $t_*$ is a shifted time coordinate defined so that the impact with the horizon occurs at $t_* = 0.$ Importantly, we see that the trapping horizon is \textit{not} an asymptote, but a turning point that sends the geodesic back the other way in finite time. This gives an example by which we can understand gravitational bound-state orbits of geodesics along singularity walls. We consider the case of a linear dreibein wall, with $m(x) = x$. In this case, we can directly integrate Eq.~(\ref{FirstIntegralVB}) to obtain \begin{subequations}\label{LinearWallExactSolution} \begin{align} \label{LinearWallExactSolutionX} x(t) &= \frac{x_0}{\sin\phi_0}\sin\left[\phi_0 + \frac{\sin\phi_0}{x_0}t\right], \\ \label{LinearWallExactSolutionY} y(t) &= y_0 + \frac{x_0}{2\sin\phi_0}\left[t + \frac{x_0}{2\sin\phi_0}\left(\sin[2\phi_0] - \sin\left[2\phi_0 + \frac{2\sin\phi_0}{x_0}t\right]\right)\right]. \end{align} \end{subequations} All geodesics (with $\sin\phi_0 \neq 0$) are bound states, oscillating back and forth across the singularity wall while drifting along it, as shown in Fig.~\ref{LinearWallGeodesicsFigure}. This is a particularly nice example for constructing distinct geodesics that have equal instantaneous positions and velocities at a singularity crossing, and is used to generate the example given in Fig.~\ref{GeodesicCoincidenceFigure}. We also consider the case $m(x) = \tanh(x)$. This model still has a nematic singularity wall along the $y$-axis, but since $m(x)$ is bounded, not all trajectories will be bound states. In fact, the condition for a bound-state null trajectory is \begin{align}\label{BoundStateCondTanhWall} \left|\frac{\tan(\theta_0)}{\tanh(x_0)}\right|\frac{1}{\sqrt{\tanh(x_0)^2 + \tan(\theta_0)^2}} \geq 1, \end{align} where $\theta_0$ is the launch angle of a geodesic initially located at $x_0$. We see that the initial launch angle and initial distance from the singularity wall together determine whether a geodesic is asymptotically bound or free. We plot geodesic trajectories in Fig.~\ref{TanhWallGeodesicsFigure}. Again, these toy model solutions add perspective to the results of numerical simulation. They give an analytical understanding of the ability of nematic singularity walls to trap geodesics into oscillatory, gravitationally bound orbits. We expect that the linear profile represents the lowest-order approximation to the curvature profile in the vicinity of a generic nematic singularity wall. These models are designed so that the collimation angle of the geodesics is orthogonal to the singularity manifold at all points, and we see that these orbits are fully stable. \subsection{Circular nematic model} \begin{figure}[b!]
\centering \centerline{ \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{CircularNematicA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{CircularNematicB.PNG} \end{minipage} } \caption{Null geodesic trajectories for the rotationally symmetric nematic model defined by Eq.~(\ref{CircNemDef}), with $\rho(r) = r$. A: The singular manifold $r^* = 1$, labeled with collimation arrows. B: Geodesic dynamics. Note that many geodesics are in bound states along the singular wall, and all wall crossings occur at the correct collimation angle. The condition for bound-state null geodesics is $e < 1$, where the eccentricity $e$ is defined in Eq.~(\ref{eccentricity}).} \label{CirculatNematicGeodesicsFigure} \end{figure} The previous subsection on dreibein wall geometries shows that nematic singularity walls can host states of permanently bound geodesics. An interesting question is whether this is a feature unique to infinite singular walls. In this section, we construct a class of geometries hosting gravitationally bound geodesics along a finite, closed contour. Our model will be a rotationally symmetric version of the purely nematic model defined by Eqs.~(\ref{NematicSubmodelDefinition1}) and (\ref{NematicSubmodelDefinition2}), with \begin{subequations}\label{CircNemDef} \begin{align} v_D &= \rho(r)\cos(2\theta),\\ v_N &= \rho(r)\sin(2\theta). \end{align} \end{subequations} Here $\theta$ denotes the positional polar angle in the plane, \emph{not} the orientation of the tangent (velocity) vector, as employed elsewhere in this paper. The metric [Eq.~(\ref{DisorderMetricDefinition})], converted to polar spacetime coordinates $(t,r,\theta)$, takes the form \begin{align} g_{\mu\nu} = \frac{1}{1 - \rho^2(r)} \begin{bmatrix} - \left[1 - \rho^2(r)\right]^2 & 0 & 0 \\ 0 & \left[1 - \rho(r)\right]^2 & 0 \\ 0 & 0 & r^2 \left[1 + \rho(r)\right]^2 \end{bmatrix}, \end{align} which becomes singular at $\rho(r) = 1$. The metric is invariant under rotations $\theta \rightarrow \theta + \theta_0$; the angular $\theta$-direction is flattened along each radial contour with $\rho(r) = 1$, corresponding to a nematic curvature singularity. The Ricci scalar curvature is given by \begin{align} R(r) = - \frac{2}{\left[1 - \rho(r)\right]^3} \left\{ \frac{ \left[3 - \rho(r)\right] \left[1 - \rho(r)\right] \rho'(r) }{ r } + \frac{ \left[\rho'(r)\right]^2 }{ 1 + \rho(r) } + \left[1 - \rho (r)\right]^2 \rho''(r) \right\}, \end{align} where $\rho'(r) = d \rho / d r$. Converting the geodesic equations (\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}) to polar coordinates, we find \begin{subequations} \begin{align} \dot{r} &= \left[1+\rho(r)\right] \cos[\theta-\phi], \\ \dot{\theta} &= - \left[ \frac{1-\rho(r)}{r} \right] \sin[\theta-\phi], \\ \label{phidot} \dot{\phi} &= \left[ \frac{1+\rho(r)}{1-\rho(r)} \, \rho'(r) + 2\frac{\rho(r)}{r} \right] \sin[\theta-\phi]. \end{align} \end{subequations} Defining $\beta \equiv \theta - \phi$ (note the sign convention is opposite to that of Sec.~\ref{ToyModelsGeodesicsIsotropicPowerLawSubsection}), these equations are separable. The orbit equation relating $r$ to $\beta$ can be directly integrated, yielding \begin{align} \left(\frac{\sin \beta}{\sin \beta_0}\right) = \frac{r_0}{r} \left[ \frac{1 - \rho(r)}{1 - \rho(r_0)} \right]. \end{align} For the case $\rho(r) = r$, which hosts a singularity wall at $r^* = 1$, the orbit equation becomes \begin{align} r(\beta) = \left[ 1 + \frac{1 - r_0}{r_0} \left(\frac{\sin \beta}{\sin \beta_0}\right) \right]^{-1}.
\end{align} This is a conic-section orbit in the $(r,\beta)$ plane with unit semilatus rectum and eccentricity \begin{align}\label{eccentricity} e = \frac{\left|1 - r_0\right|}{r_0 \left|\sin (\theta_0 - \phi_0)\right|}. \end{align} The condition for a bound orbit is $e < 1$. This model shows that closed, finite singular manifolds can host permanent bound states. \section{Conclusion} \label{ConclusionSection} The effects of ``artificial'' quenched gravity (as defined in the Introduction) on 2D massless Dirac carriers could have important consequences for understanding and manipulating low-dimensional Dirac materials. The action in Eq.~(\ref{GravitationalDisorderAction2}) is equivalent to a theory of massless electrons on a certain class of static, curved spacetime manifolds, described by the metric in Eq.~(\ref{DisorderMetricDefinition}). The geometry of null geodesic trajectories is heavily affected by the presence of both isotropic and nematic curvature singularities that can arise in these spacetimes. Isotropic singularities can asymptotically capture geodesics that pass sufficiently close. On the other hand, null geodesics can traverse nematic singularity domain walls, but experience a \textit{geodesic collimation} effect that fixes their transit velocity. These domain walls can exhibit a horizon effect, trapping null geodesics as bound states that perpetually lens back and forth across the nematic singularity line. In a semiclassical picture of the quantum dynamics, the influence of nematic singularity walls on null geodesics presents a compelling potential mechanism for \emph{pairing enhancement} in Dirac superconductors along these singular manifolds. On one hand, states gravitationally bound to domain wall horizons could provide a link to quasi-1D physics. The latter has long been suspected to play a role in enhancing strong correlations in quantum materials, and possibly in the mechanism for high-$T_c$ superconductivity in particular \cite{Davis2019,NematicReview,DagottoRice96,Tsvelik17}. States gravitationally bound to the singular manifolds will feel the effects of an interaction-enhancing flat band dispersion. On the other hand, the collimation phenomenon drives particles at the same spatial location to equal or opposite momenta, a \emph{geometric effect} reminiscent of the dynamics induced by kinematical constraints and attractive interactions in BCS theory. If singularity walls in the spacetime manifold could enhance or even induce superconducting pairing, then quenched artificial gravity could underlie a simple, universal mechanism for gap enhancement in 2D Dirac superconductors (SCs). In the scenario where quenched gravitational disorder arises from gap fluctuations in a $d$-wave SC, the prevalence of nematic singularities is dictated by the \textit{ratio} of gap fluctuations to the size of the gap. Therefore, singularities could be \textit{more} common in a weak-pairing SC state given a fixed degree of fluctuation, possibly induced by a low-temperature pairing mechanism. Pairing enhancement due to nematic singularity walls could then create a negative feedback loop, terminating when the gap has hardened sufficiently so as to suppress singularities and their concomitant 1D bound states. This paradigm allows disorder to play a constructive role, which could offer insight into the puzzling indifference of the cuprates to dopant-induced disorder \cite{DavisReview}.
Finally, while static gravity is the focus of this paper, Eq.~(\ref{TimeDependentPotentialsGeodesicEquationPhi}) shows that the geodesic collimation effect of nematic singularities survives a generalization to time-dependent fluctuations of the Dirac cone. Since the collimation effect can provide a mechanism for pairing enhancement, Eq.~(\ref{TimeDependentPotentialsGeodesicEquationPhi}) could serve as a foundation for relating this physics to time-dependent fluctuations of a superconducting gap. This could connect with popular theories of strong correlation physics based on fluctuation-driven competing orders and proximate quantum critical points \cite{Lee06}. An approach that passes all sources of ``fluctuation'' (both ordered and disordered) through the intermediary step of (spatial and temporal) gap modulation has the potential to unify several competing frameworks into a single mechanism for superconductivity. Exploring these possibilities is a goal for future work. \section*{Acknowledgements} We thank Mustafa Amin and Ilya Gruzberg for useful conversations. This work was supported by the Welch Foundation Grant No.~C-1809 and by NSF CAREER Grant No.~DMR-1552327. \begin{appendix} \section{Reformulation of the geodesic equation} \label{GeodesicEquationReformulationAppendixSection} This appendix outlines useful reformulations of the geodesic equation. First we give an angular, first-order formulation based on Eq.~(\ref{ConstantIntervalConditionEquation}). We then explain the derivation of Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi}). \subsection{Angular reformulation} We can use the nullity condition, Eq.~(\ref{ConstantIntervalConditionEquation}) with $\Delta = 0$, to reformulate the geodesic equation in terms of the spatial velocity-vector angle $\theta$: \begin{subequations} \label{AngularGlobal1} \begin{align} \dot{x}(t) &= \sigma(\theta,\vex{x})\cos\theta,\\ \dot{y}(t) &= \sigma(\theta,\vex{x})\sin\theta,\\ \dot{\theta}(t) &= \frac{1}{\sigma(\theta,\vex{x})}\left[a_2(\vex{x})\cos\theta - a_1(\vex{x})\sin\theta\right], \end{align} \end{subequations} so that $d y / d x = \tan(\theta)$, and where the $\{a_j\}$ are taken from the right-hand side of Eqs.~(\ref{CartesianGlobal1})--(\ref{CartesianGlobal2}) so that $\ddot{x}_j(t) = a_j[\vex{x}(t)].$ The angular formulation has built-in error resistance for numerical solution. By using the nullity condition to reduce the equations to first order, we guarantee that the particle moves along a null geodesic. As the solver inevitably accumulates errors in the angular update, the particle may drift onto a different geodesic from the one it started on, but it will still lie on a null geodesic. \subsection{Tangent vector projected onto the dreibein} We can express the geodesic equation concisely in terms of the dreibein. We project the tangent (3-velocity) vector onto the dreibein, \begin{align} \label{UVectorDefinition} u^A(s) \equiv E^A_{\mu}[\vex{x}(s)] \, \frac{d x^\mu}{d s}. \end{align} We can then re-express the geodesic equation as the time-evolution equation for $u$, which can be written simply in terms of $E^A_\mu$ as \begin{align} \label{UDotGeodesicEquation} \frac{d u^J}{d s} &= \eta^{JM} \eta_{AP} \left[ E_B^{\mu} E_M^{\nu} - E_B^{\nu} E_M^{\mu} \right] \big(\partial_{\nu}E_{\mu}^P\big) u^A u^B, \nonumber\\ &= \frac{1}{2} \eta^{JM} \eta_{AP} \left( E^{\mu}\wedge E^{\nu} \right)_{BM} \big( d E^P \big)_{\nu\mu} u^A u^B.
\end{align} For null geodesics, we have \begin{align} \eta_{AB} u^A u^B = g_{\mu\nu} (d x^\mu / d s)(d x^\nu / d s) = 0, \end{align} so that we may parametrize the vector $u$ as \begin{align}\label{FirstUFormula} u^A(s) \rightarrow u^0(s) \begin{bmatrix} 1\\ \cos[\phi(s)]\\ \sin[\phi(s)] \end{bmatrix}. \end{align} \subsection{Derivation of Eqs.~(\ref{SimplifiedGeodesicEquationX})--(\ref{SimplifiedGeodesicEquationPhi})} The special structure of the quenched gravitational metric in Eq.~(\ref{DisorderMetricDefinition}) allows us to make further progress. In particular, using the temporal first integral equation (\ref{TemporalFirstIntegralEquation}) and the time-space block diagonality of the dreibein, we have \begin{align} \frac{d t}{d s} = E^0_0[\vex{x}(s)] u^0(s) = \gamma[\vex{x}(s)] = \left\{E^0_0[\vex{x}(s)]\right\}^2, \end{align} [see Eq.~(\ref{TimeDilationFactorDefinition})], and where we have set the constant $E/m = 1$. In this equation, $E^0_0$ is the $A = 0$, $\mu = 0$ component of $E^\mu_A$, which is the inverse of the same component of $E_\mu^A$ [Eq.~(\ref{UVectorDefinition})]. We conclude that $u^0(s) = E^0_0[\vex{x}(s)] = 1/\sqrt{\left|\vex{v_1}\times\vex{v_2}\right|}$ [Eq.~(\ref{DisorderVerbeinDefinition00})]. As before, we implement the global time reparametrization via Eq.~(\ref{TimeReparamScaling}). Combining this with Eq.~(\ref{UVectorDefinition}) and projecting out the spatial components of the geodesic equation, we obtain \begin{align} \dot{\vex{x}}(t) = \begin{bmatrix} v_1^1(\vex{x}) & v_2^1(\vex{x}) \\ v_1^2(\vex{x}) & v_2^2(\vex{x}) \end{bmatrix} \vex{\hat{\phi}}(t) \equiv \hat{V}(\vex{x})\vex{\hat{\phi}}(t), \end{align} where $\vex{\hat{\phi}} \equiv [\cos\phi, \sin\phi]^T$ is a unit vector. This gives Eqs.~(\ref{SimplifiedGeodesicEquationX}) and (\ref{SimplifiedGeodesicEquationY}), but it remains to determine the dynamics of $\phi(t)$. We may use the parametrization of $u^A$ [Eq.~(\ref{FirstUFormula})] and the time evolution equation for $u^A$ [Eq.~(\ref{UDotGeodesicEquation})] together to find that \begin{align} \label{PhiHatDynamicsEquation} \frac{d\vex{\hat{\phi}}_j}{dt} &= \sum_{a,b,k,l \in \{1,2\}} \left[ V_{lb}V_{kj} - V_{kb}V_{lj} \right] \big( \partial_k V^{-1}_{al} \big) \vex{\hat{\phi}}_a \vex{\hat{\phi}}_b, \end{align} where $k$ is a spatial index, distinct from the time $t$. [We note that the term in brackets in Eq.~(\ref{PhiHatDynamicsEquation}) vanishes for most index assignments.] Backing out the implied ODE for $\phi(t)$ finally gives Eq.~(\ref{SimplifiedGeodesicEquationPhi}). \section{Length-scale dependence} \label{LengthScaleDependeneAppendixSection} Let $a$ denote the length scale on which the lightcone modulations fluctuate, for example in a random realization of quenched gravitational disorder (QGD). We will extract the dependence of geodesic trajectories on $a$. Let $g^{(a)}_{\mu\nu}$ be the metric corresponding to disorder potentials fluctuating on length scale $a$, and let $g^{(1)}_{\mu\nu}$ be the metric corresponding to the same potential, but scaled so that $a = 1:$ \begin{equation} g^{(a)}_{\mu\nu}[\vex{x}] = g^{(1)}_{\mu\nu}\left[\frac{\vex{x}}{a}\right]. \end{equation} Next, let $\Gamma^{(a)\rho}_{\mu\nu}$ be the Christoffel symbols corresponding to the metric $g^{(a)}_{\mu\nu}$. Since the Christoffel symbols are related to the metric via spatial derivatives, we find \begin{align} \label{ChristoffelScaling} \Gamma^{(a)\mu}_{\alpha\beta}[\vex{x}] &= \frac{1}{a}\Gamma^{(1)\mu}_{\alpha\beta}\left[\frac{\vex{x}}{a}\right].
\end{align} Now, let $[t_{(a)}(s),\vex{x}_{(a)}(s)]$ denote a solution to the geodesic equation at length scale $a$: \begin{align} \label{GeodesicLengtha} \partial^2_s x^{\rho}(s) + \Gamma^{(a)\rho}_{\mu\nu}[\vex{x}]\left[\partial_sx^{\mu}(s)\right]\left[\partial_sx^{\nu}(s)\right] &= 0. \end{align} The variable transformations $s = a\tilde{s}$, $x^{\mu} = a\tilde{x}^{\mu}$ and Eq.~(\ref{ChristoffelScaling}) map Eq.~(\ref{GeodesicLengtha}) to \begin{align} \label{GeodesicLengtha-1} \frac{1}{a}\partial^2_{\tilde{s}} \tilde{x}^{\rho}(\tilde{s}) + \frac{1}{a}\Gamma^{(1)\rho}_{\mu\nu}[\vex{\tilde{x}}]\left[\partial_{\tilde{s}}\tilde{x}^{\mu}(\tilde{s})\right]\left[\partial_{\tilde{s}}\tilde{x}^{\nu}(\tilde{s})\right] &= 0. \end{align} Thus, if $[t_{(a)},\vex{x}_{(a)}]$ is a geodesic with metric length scale $a$, then $[\tilde{t},\vex{\tilde{x}}] = (1/a)[t_{(a)},\vex{x}_{(a)}] = [t_{(1)},\vex{x}_{(1)}]$ is a solution with length scale $a = 1$, so that geodesics on different length scales are related by a simple inflation transformation. \section{Other submodels: diagonal and off-diagonal} \label{DiagonalOffDiagonalAppendixSection} The pure isotropic and nematic models introduced in Sec.~\ref{SubmodelsSection} are studied alongside the general QGD Hamiltonian in the quantum setting via numerical exact diagonalization in Ref.~\cite{NumericalQGD}. In that paper, these are referred to as models ``c'' and ``d,'' respectively. That work also introduces two other models, ``a'' and ``b.'' While these models are not of primary interest in light of the pseudospin invariance of Sec.~\ref{PseudospinRotationsSubsection}, we introduce them here and present some properties of their geodesics, for comparison with Ref.~\cite{NumericalQGD}. \subsection{Pure diagonal model} \begin{figure}[b!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{DiagonalModelExampleA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{DiagonalModelExampleB.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{DiagonalModelExampleC.PNG} \end{minipage} \caption{Curvature in the pure diagonal model. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{ModelagammaDef})] for the ``disorder space'' diagonal manifold, with $\delta v_1^1 = x$ and $\delta v_2^2 = y$. We note the singularities lie along the lines $\{x = -1,\ y = -1\}$. B: A heat map of the time-dilation factor for a random realization of purely diagonal QGD. C: The heat map in B annotated to mark the collimation angles of the nematic singularities, which we note are all either horizontal or vertical.} \label{DiagonalSubmodelCurvatureFigure} \end{figure} The pure diagonal model is defined by the disorder vectors \begin{align} \label{DiagonalSubmodelDefinition1} \vex{v_1}(\vex{x}) &= \begin{bmatrix} 1 + \delta v_1^1(\vex{x})\\ 0 \end{bmatrix}, \\ \label{DiagonalSubmodelDefinition2} \vex{v_2}(\vex{x}) &= \begin{bmatrix} 0\\ 1 + \delta v_2^2(\vex{x}) \end{bmatrix}. \end{align} This corresponds to ``model a'' in Ref.~\cite{NumericalQGD}. We note that by pseudospin invariance, its properties generalize to all models with $\vex{v_1}\cdot\vex{v_2} = 0$ everywhere.
The time-dilation factor is given by \begin{align}\label{ModelagammaDef} \gamma(\vex{x}) = \frac{1}{|(1+\delta v_1^1(\vex{x}))(1+\delta v_2^2(\vex{x}))|}, \end{align} so that we have a nematic singularity when either $\delta v_1^1 = -1$ or $\delta v_2^2 = -1$, and an isotropic singularity when both $\delta v_1^1 = \delta v_2^2 = -1.$ In this model, all isotropic singularities lie at an intersection of nematic singularity manifolds---see Fig.~\ref{DiagonalSubmodelCurvatureFigure}. The squared speed of light in Eq.~(\ref{ConstantIntervalConditionEquation}) (with $\Delta = 0$) reduces to \begin{align} \label{DModelSigma} \sigma^2(\theta,\vex{x}) = \frac{|(1+\delta v_1^1(\vex{x}))(1+ \delta v_2^2(\vex{x}))|^2}{(1+\delta v_1^1(\vex{x}))^2\sin^2\theta+(1+\delta v_2^2(\vex{x}))^2\cos^2\theta}. \end{align} We see from Eq.~(\ref{DModelSigma}) that the geodesic collimation effect takes a simple form for these models: if the singularity corresponds to $\delta v_1^1 = -1$ ($\delta v_2^2 = -1$), then the geodesic may only pass through vertically (horizontally). \subsection{Pure off-diagonal model} The pure off-diagonal model is defined by \begin{align} \label{OffDiagonalSubmodelDefinition1} \vex{v_1}(\vex{x}) &= \begin{bmatrix} 1 \\ v_2^1(\vex{x}) \end{bmatrix}, \\ \label{OffDiagonalSubmodelDefinition2} \vex{v_2}(\vex{x}) &= \begin{bmatrix} v_1^2(\vex{x})\\ 1 \end{bmatrix}. \end{align} This corresponds to ``model b'' in Ref.~\cite{NumericalQGD}, and we note that it is equivalent to the pure nematic model by pseudospin invariance. The time-dilation factor is given by \begin{align}\label{ModelbgammaDef} \gamma(\vex{x}) = \frac{1}{|1 - v_1^2(\vex{x})v_2^1(\vex{x})|}, \end{align} so that we have singularities whenever $v_1^2v_2^1 = 1$. The constant-interval speed condition [Eq.~(\ref{ConstantIntervalConditionEquation})] reduces to (for null geodesics with $\Delta = 0$) \begin{align} \label{OModelSigma} \sigma^2(\theta,\vex{x}) = \frac{|v_1^2(\vex{x})v_2^1(\vex{x})-1|^2}{[\sin\theta-v_2^1(\vex{x})\cos\theta]^2+[\cos\theta - v_1^2(\vex{x})\sin\theta]^2}. \end{align} We can parametrize a point on the singularity manifold by $(v_1^2,v_2^1) = (\cot\phi,\tan\phi)$ [Fig.~\ref{OffDiagonalSubmodelCurvatureFigure}]. Eq.~(\ref{OModelSigma}) then shows that a geodesic can pass through a curvature singularity if and only if its velocity angle is $\theta = \phi$ (or $\phi + \pi$), relating the collimation angles to the geometry of the singularity manifold in $\{v_1^2, v_2^1\}$-space. \begin{figure}[t!] \centering \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{OffDiagonalModelA.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{OffDiagonalModelB.PNG} \end{minipage} \begin{minipage}{0.32\textwidth} \centering \includegraphics[width=0.9\textwidth]{OffDiagonalModelC.PNG} \end{minipage} \caption{Curvature in the pure off-diagonal model. A: Heat map of the time-dilation factor $\gamma$ [Eq.~(\ref{ModelbgammaDef})] for the ``disorder space'' off-diagonal manifold, with $v_1^2 = x$ and $v_2^1 = y$. We note the singularities lie along the hyperbola $\{xy = 1\}$. B: A heat map of the time-dilation factor for a random realization of purely off-diagonal QGD. C: The heat map in B annotated to mark the collimation angles of the nematic singularities, which we note are clustered around $\pm \pi/4$.} \label{OffDiagonalSubmodelCurvatureFigure} \end{figure} \end{appendix}
\section{Introduction}\label{sec:Intro} Mathisson \cite{Mathisson37} and Papapetrou \cite{Papapetrou51} provided the equations of motion for a spinning particle in a curved spacetime. The equations of motion of a spinning test particle are interesting from the astrophysical point of view, because they approximate the motion of a stellar compact object in the spacetime background of a supermassive black hole. Such a binary system is called an extreme mass ratio inspiral (EMRI). EMRIs are among the most promising sources of gravitational waves expected to be detected by space interferometer antennas like LISA (see, e.g., \cite{LISA}). However, in this work we focus on the dynamics of a spinning particle rather than on astrophysical aspects. The number of Mathisson-Papapetrou (MP) equations is smaller than the number of variables that the MP equations are meant to evolve. The above fact can be interpreted as a freedom in choosing different worldlines for evolving the equations of motion of the same extended object described by the pole-dipole approximation \cite{Moeller49}. To choose a worldline we use a supplementary condition that is known in the literature as the spin supplementary condition (SSC). There is a variety of SSCs (for a review, see, e.g., \cite{Semerak99,Kyrian07,Semerak15}), but all are physically acceptable. The most renowned are the Pirani (P) \cite{Pirani56} and the Tulczyjew (T) \cite{Tulczyjew59} SSCs. For many years the P SSC was considered unphysical, because the test particle exhibits helical motion in the flat spacetime limit. However, in \cite{Costa12} it has been shown that this helical motion results from a hidden momentum, and that the P SSC is physically valid as well. The aspect of the spinning particle dynamics we are interested in is the issue of the integrability of the corresponding system. It has been shown that for the Schwarzschild background the MP equations with the T SSC give chaotic orbits \cite{Suzuki97}, and the same holds for the Kerr background, see, e.g., \cite{Hartl03a,Hartl03b,Han08}. Hence, one can claim that the MP equations with the T SSC correspond to a non-integrable system. However, in the linear in spin approximation of the MP equations, it has been proved that for the T SSC a Killing-Yano tensor provides a Carter-like constant of motion for the Kerr background \cite{Rudiger}. The existence of a Carter-like constant and the fact that the T and P SSCs are the same in the linear regime led to the impression that in the linear in spin approximation the spinning particle dynamics corresponds to an integrable system (see, e.g., \cite{Hinderer13}). For geodesic orbits in a Kerr spacetime the existence of the Carter constant ensures integrability, since it is the fourth constant of motion (the others being the energy, the angular momentum along the symmetry axis and the contraction of the four-momentum) in a Hamiltonian system of four degrees of freedom. Nevertheless, when the particle is spinning we have extra degrees of freedom, and it is questionable whether even the existence of a Carter-like constant can ensure the integrability of the system. When examining whether a system is integrable or not, it is useful to have a canonical Hamiltonian formalism, which provides symplecticity. In a non-symplectic system we need as many constants of motion as the dimension of the phase space. On the other hand, in a canonical Hamiltonian system two dimensions of the phase space correspond to one degree of freedom.
Therefore, we need half the number of constants of motion in order to have integrability, compared with a non-canonical system of the same phase space dimensionality. Moreover, by having a canonical Hamiltonian system, tools like Poincar\'{e} sections can be properly used. When a system is not symplectic, a surface of section is ambiguous. A canonical Hamiltonian formalism has not yet been found for the MP equations with the T SSC. However, such a canonical Hamiltonian formalism has been provided in \cite{Barausse09} for the Newton-Wigner (NW) SSC \cite{NewtonWigner49} in the linearized in spin approximation. A Hamiltonian for a spinning particle moving in a Kerr spacetime background was first provided in \cite{Barausse09}. However, due to the approximative procedure which leads from the MP equations to the linearized in spin Hamiltonian function, the resulting Hamiltonian equations are not equivalent to the corresponding MP equations; e.g., starting the Hamiltonian equations and the MP equations with the same initial conditions leads to two different orbits \cite{LGSK}. It might even occur that the final linearized Hamiltonian system does not respect some symmetries that the corresponding MP equations respect. For example, the Hamiltonian function provided in \cite{Barausse09} for the Kerr spacetime in Boyer-Lindquist coordinates did not respect the spherical symmetry of the background in the Schwarzschild limit $a=0$. Namely, the total angular momentum was not preserved as it should be. Absence of integrals of motion could lead to the misleading impression that for the Schwarzschild background the Hamiltonian corresponds to a non-integrable system (see, e.g., figure~2 in \cite{KLLS}). The problem with the Hamiltonian in \cite{Barausse09} was the specific tetrad field choice on which the Hamiltonian function was built. Even in \cite{Barausse09} it was found that the resulting Hamiltonian would evolve the spin in the flat spacetime limit. However, since the helical motion in the case of the MP equations with the P SSC could result from a hidden momentum, the same could hold for the Hamiltonian approximation coming from the NW SSC. Thus, more solid reasoning was needed to show the drawbacks of the tetrad field chosen in \cite{Barausse09}. In \cite{KLLS} it was shown that if the resulting Hamiltonian is to respect the symmetries of the Schwarzschild background, then the corresponding tetrad field must obey a certain prescription (equation~(44) in \cite{KLLS}). The tetrad field of \cite{Barausse09} does not comply with this prescription. On the other hand, a different tetrad field choice provided in \cite{Barausse10} led to a revised Hamiltonian for the Kerr background. This tetrad field obeys the prescription given in \cite{KLLS}. In particular, in the Schwarzschild limit the revised Hamiltonian \cite{Barausse10} conserves not only the total angular momentum, as it should, but, as shown in \cite{KLLS}, the magnitude of the orbital angular momentum as well. The latter implies that in the Schwarzschild limit the revised Hamiltonian of \cite{Barausse10} corresponds to an integrable system, since for a five degrees of freedom system we have five constants of motion \cite{KLLS}\footnote{In fact it was shown that a proper Hamiltonian function for the Schwarzschild background corresponds to an integrable system in general.}.
The revised Hamiltonian ceases to be integrable when the spin of the central black hole is nonzero \cite{KLLS}, i.e., in the Kerr spacetime background. A thorough study of the non-integrability of the revised Hamiltonian is the subject of our article. The study of chaotic motion around black holes probably starts with \cite{Contopoulos90}, where a method based on Cantor sets was applied to prove the chaotic nature of the system. Since then many methods have been applied to detect chaos in the vicinity of black holes, but the most common is the 2D Poincar\'{e} section. A 2D Poincar\'{e} section is indeed the standard tool for studying the non-integrability of a two degrees of freedom Hamiltonian system. However, since the Hamiltonian provided in \cite{Barausse10} corresponds to a system with three degrees of freedom, we have to deal with 4D Poincar\'{e} sections \cite{Contopoulos02}. In order to detect order and chaos in the 4D Poincar\'{e} spaces of section, we must first of all have a way to visualize them. In the past, several methods have been proposed for the visualization of the 4D surfaces of section in the 6D phase space of a 3D autonomous Hamiltonian system: ordinary 2D projections \cite{Contopoulos89}, 3D projections \cite{Vrahatis97}, stereoscopic projections \cite{Froeschle70,Martinet81,Contopoulos82}, or 2D slices of 3D subspaces \cite{Froeschle70,Froeschle72}, with a more sophisticated version of the latter developed recently in \cite{Richter14,Lange14} (see Appendix~\ref{sec:V4D}). In the present work we use the method of color and rotation, introduced by Patsis and Zachilas \cite{Patsis94}. This method is extensively described for the case of 3D rotating galactic potentials in a series of papers \cite{Katsanikas11a,Katsanikas11b,Katsanikas11c,Katsanikas13,Patsis14a,Patsis14b}. These papers investigate portraits of the 4D spaces of section in the neighborhood of periodic orbits exhibiting all kinds of instabilities encountered in 3D Hamiltonian systems (see, e.g., \cite{Contopoulos02}). The method has also been applied in the study of the structure of the phase space close to fixed points in a 4D symplectic map \cite{Zachilas13}, and to design spacecraft orbits \cite{Geisel13}. The method consists in plotting the points (the consequents) of an orbit in a 3D subspace as they cross the space of section in a given direction, rotating them by means of standard 3D graphic tools to gain good insight into their distribution in the 3D subspace, and finally coloring them according to their value in the fourth dimension (the one not used in the 3D spatial representation of the orbit). Color allows the estimation of the smoothness in the 4th dimension of geometrical structures appearing in the 3D projections and the distinction of pseudo- from true intersections in the 4D space. Thus, one can establish criteria for the regular, weakly chaotic or strongly chaotic character of a given orbit \cite{Patsis94,Katsanikas11a,Katsanikas11b,Katsanikas11c,Katsanikas13,Patsis14a,Patsis14b,Zachilas13}. In the latter papers, specific patterns in phase space are associated with the various kinds of instability or with stability. In our paper this method is used in the study of the dynamics of a spinning particle in the Hamiltonian approach, in an effort to trace regular and chaotic motion in the phase space of our system. The paper is organized as follows. Sec.~\ref{sec:HamSP} introduces the Hamiltonian function of \cite{Barausse10}, which we use for our study.
Sec.~\ref{sec:2D4DPs} discusses the non-integrability of the Hamiltonian, briefly describes the setting up of the numerics, and provides a detailed account of our numerical findings. Sec.~\ref{sec:ConDis} sums up our findings, and discusses the possible astrophysical implications. Appendix~\ref{sec:V4D} lists techniques used for visualizing 4D spaces of section. We use geometric units, i.e., $G=c=1$, and the signature of the metric is (-,+,+,+). Greek letters denote the indices corresponding to spacetime (running from 0 to 3), while Latin letters denote indices corresponding only to space (running from 1 to 3). \section{The Hamiltonian of a spinning particle} \label{sec:HamSP} The canonical Hamiltonian formalism of a spinning particle in \cite{Barausse09} was achieved by linearizing the MP equations of motion for the NW SSC. In this formalism the mass of the test particle $m$ is considered a constant of motion \cite{Barausse09}, and the spin of the particle is given by a three-vector $S^I$. The corresponding Hamiltonian function $H$ splits into two main parts: the non-spinning part $H_{NS}$, which basically describes the geodesic motion, and the spinning part $H_S$, which incorporates the spin of the particle, i.e., \begin{equation} H=H_{NS}+H_{S}~~. \label{eq:HamSP} \end{equation} The non-spinning part of the Hamiltonian $H_{NS}$ reads \begin{equation} H_{NS}=\beta^{i}P_i+\alpha~\sqrt{m^2+\gamma^{ij}P_i P_j}~~, \label{eq:HamNSP} \end{equation} where $P_i$ are the canonical momenta conjugate to the coordinates $x^{i}$ of the Hamiltonian~\eqref{eq:HamSP} \cite{Barausse09}, and \begin{eqnarray} \alpha &=& \frac{1}{\sqrt{-g^{00}}}~~, \nonumber \\ \beta^i &=& \frac{g^{0i}}{g^{00}}~~, \nonumber \\ \gamma^{ij} &=& g^{ij}- \frac{g^{0i}g^{0j}}{g^{00}}~~. \label{eq:abg} \end{eqnarray} $g^{\kappa\lambda}$ is the contravariant form of the metric tensor of the background spacetime in which the test particle moves. We are interested in the Kerr spacetime background describing the spacetime around a black hole of mass $M$ with spin parameter $a$. In Boyer-Lindquist coordinates, where $t$ is the coordinate time, $\phi$ is the azimuthal angle, $\theta$ is the polar angle, and $r$ is the radial distance, the Kerr metric reads \begin{eqnarray} g_{tt} &=&-1+\frac{2 M r}{\Sigma}~~,\nonumber\\ g_{t\phi} &=& -\frac{2 a M r \sin^2{\theta}}{\Sigma}~~,\nonumber\\ g_{\phi\phi} &=& \frac{\Lambda \sin^2{\theta}}{\Sigma}~~,\nonumber \\ g_{rr} &=& \frac{\Sigma}{\Delta}~~,\nonumber\\ g_{\theta\theta} &=& \Sigma~~,\label{eq:KerrMetric} \end{eqnarray} where \begin{eqnarray} \Sigma &=& r^2+ a^2 \cos^2{\theta}~~,\nonumber\\ \Delta &=& \varpi^2-2 M r~~,\nonumber \\ \varpi^2 &=& r^2+a^2~~, \nonumber \\ \Lambda &=& \varpi^4-a^2\Delta \sin^2\theta~~.
\label{eq:Kerrfunc} \end{eqnarray} The spinning part of the Hamiltonian $H_S$ for the Kerr spacetime in Boyer-Lindquist coordinates, as given in \cite{Barausse10}, can also be split into two parts, i.e., \begin{equation} \label{eq:RHamBL} H_S=H_{SO}+H_{SS}~~, \end{equation} where the Hamiltonian providing the spin-orbit coupling reads \begin{widetext} \begin{eqnarray}\label{eq:RHamSOBL} H_{SO} &=& \frac{\sqrt{\Delta~\Sigma}~P_\phi~S_z}{m \Lambda \sqrt{Q} \sin^2{\theta}} (\frac{\Sigma}{\sqrt{\Lambda}}-1)+\frac{1}{\sqrt{\Delta~\Sigma~\Lambda~Q} (1+\sqrt{Q})\sin^2{\theta}} \Bigg{\{} \sin^2{\theta}(S_y \cos{\phi}-S_x \sin{\phi}) \Delta^{3/2} \bigg{[}-\frac{\partial \mu}{\partial r} (\sqrt{Q}+1)\frac{P_\theta}{m} \nonumber \\ &-& \frac{\partial \mu}{\partial \cos\theta}\frac{P_r}{m}\sin{\theta}+\sqrt{Q} \bigg{(}\frac{\partial \nu}{\partial r}\frac{P_\theta}{m}+\sin{\theta} (\frac{\partial \nu} {\partial \cos\theta}-\frac{\partial \mu} {\partial \cos\theta}) \frac{P_r}{m} \bigg{)} \bigg{]} \nonumber \\ &+& \frac{\Delta~\Sigma(2 \sqrt{Q}+1)\sin{\theta}~P_\phi} {m\sqrt{\Lambda}}\bigg{[}\sqrt{\Delta}~\frac{\partial \nu}{\partial r} \Big{(}-\cos{\theta}(S_x \cos{\phi}+S_y \sin{\phi})+S_z \sin{\theta} \Big{)} \nonumber \\ &-& \frac{\partial \nu} {\partial \cos\theta}(S_x \sin\theta~\cos\phi+S_y \sin\theta~\sin\phi+S_z \cos\theta)\sin\theta \bigg{]} \nonumber \\ &+& \Sigma \sqrt{\frac{\Delta}{\Lambda}}(r-M-\sqrt{\Delta})(\sqrt{Q}+1)\sin\theta\frac{P_\phi}{m} \bigg{[}\cos\theta(S_x\cos\phi+S_y\sin\phi)-S_z\sin\theta \bigg{]} \Bigg{\}}~~, \end{eqnarray} and the Hamiltonian providing the spin-spin coupling reads \end{widetext} \begin{widetext} \begin{eqnarray}\label{eq:RHamSSBL} H_{SS} &=& \omega S_z+\sqrt{\frac{\Lambda}{\Delta}}\frac{\partial \omega}{\partial r} \frac{1}{2\Sigma^2\sqrt{Q}(1+\sqrt{Q})\sin^2\theta}\Bigg{\{} \frac{\Sigma~\Delta}{\sqrt{\Lambda}}\sin^2\theta(S_y\cos\phi-S_x\sin\phi)\frac{P_\phi P_\theta}{m^2} \nonumber \\ &+& \frac{\Delta~\Sigma^2}{\Lambda}\sin\theta \left[-\cos\theta(S_x \cos\phi+S_y \sin\phi) +S_z\sin\theta\right]\frac{{P_\phi}^2}{m^2} \nonumber \\ &+& \Sigma~\Delta \sqrt{Q}(1+\sqrt{Q})\sin^3\theta\left[-\cos\theta(S_x \cos\phi+S_y \sin\phi) +S_z\sin\theta\right] \nonumber \\ &+& \Delta^{3/2} \sin^3\theta \frac{P_r}{m^2}\Big{\{}\sqrt{\Delta} \big{[}\cos\theta(S_x\cos\phi+S_y\sin\phi)-S_z\sin\theta \big{]}P_r -(S_x \sin\theta~\cos\phi+S_y \sin\theta~\sin\phi+S_z \cos\theta)P_\theta\Big{\}} \Bigg{\}} \nonumber \\ &+& \frac{\sqrt{\Lambda}}{2\Sigma^2 \Delta \sqrt{Q}(1+\sqrt{Q})}\frac{\partial\omega}{\partial \cos\theta} \Bigg{\{} -\frac{\Delta~\Sigma^2}{\Lambda}\frac{{P_\phi}^2}{m^2}(S_x \sin\theta~\cos\phi+S_y \sin\theta~\sin\phi+S_z \cos\theta) \nonumber \\ &+& \frac{\Sigma~\Delta^{3/2}}{\sqrt{\Lambda}}\frac{P_r P_\phi}{m^2}\sin\theta (S_y\cos\phi-S_x\sin\phi) + \sin^2\theta \Delta\Big{\{}(S_x \sin\theta~\cos\phi+S_y \sin\theta~\sin\phi+S_z \cos\theta)\nonumber \\ &\times&\left(\frac{P_\theta^2}{m^2}-\Sigma\sqrt{Q}(1+\sqrt{Q})\right) + \sqrt{\Delta}\frac{P_\theta P_r}{m^2} [-\cos\theta(S_x\cos\phi+S_y\sin\phi)+S_z\sin\theta] \Big{\}} \Bigg{\}}~~, \end{eqnarray} \end{widetext} where the spin components $S_I$ are written in the corresponding Cartesian coordinates, i.e., \begin{eqnarray} \label{eq:CtoBL} x &=& r \sin{\theta} \cos{\phi}~~, \nonumber \\ y &=& r \sin{\theta} \sin{\phi}~~, \nonumber \\ z &=& r \cos{\theta} ~~, \end{eqnarray} and $\omega,~\mu,~\nu,~Q$ are the following functions: \begin{eqnarray} \omega &=& \frac{2 a M r}{\Lambda}~~, \nonumber \\ e^{2\nu} &=&
\frac{\Delta\Sigma}{\Lambda}~~, \nonumber \\ e^{2\mu} &=& \frac{4 \Sigma}{(r-M+\sqrt{\Delta})^2}~~, \nonumber \\ Q &=& 1+\frac{\gamma^{ij}}{m^2} P_i P_j~~. \end{eqnarray} For more about the canonical Hamiltonian formalism and the derivation of the above Hamiltonian function see \cite{Barausse09} and \cite{Barausse10}, respectively. The equations of motion for the canonical variables as functions of the coordinate time $t$ read \begin{align} \frac{d x^i}{dt} &=\pd H{P_i}~~,\nonumber \\ \frac{d P_i}{dt} &=-\pd H{x^i}~~,\nonumber\\ \frac{d S_I}{dt} &=\epsilon_{IJK}\pd H{S_J}S^K~~\label{eq:EqMHam}~~, \end{align} where $\epsilon_{IJK}$ is the Levi-Civita symbol. \section{2D and 4D Poincar\'{e} sections} \label{sec:2D4DPs} \subsection{The issue of integrability} \label{sec:IntCh} The canonical Hamiltonian approximation provided in \cite{Barausse09} has five degrees of freedom. Three degrees of freedom come from the coordinates, and two from the spin vector \cite{KLLS}. In \cite{KLLS} it has been shown that for the Schwarzschild spacetime background the Hamiltonian approximation possesses five integrals of motion. The spherical symmetry of the background corresponds to the preservation of the total angular momentum; thus, two independent integrals in involution come from the spherical symmetry. Since the Hamiltonian is autonomous, the Hamiltonian function is a constant of motion, representing the energy. Moreover, the magnitude of the particle's spin is conserved, and the magnitude of the orbital angular momentum is a constant as well. Hence, since we have five independent integrals in involution for five degrees of freedom, the Hamiltonian of a spinning particle in a Schwarzschild background is integrable \cite{KLLS}. For nonzero spin of the central black hole, however, chaotic motion appears (see figure~3 of \cite{KLLS}). This means that for the Kerr background the revised Hamiltonian of \cite{Barausse10} is non-integrable. Actually, figure~3 of \cite{KLLS} is a projection of a 4D Poincar\'{e} map on a 2D surface of section. The 2D projection of a 4D Poincar\'{e} map is an old technique to visualize the dynamics of a chaotic system (method \ref{item:2Dpr} in appendix~\ref{sec:V4D}). Similar techniques were employed in previous studies \cite{Suzuki97,Hartl03a,Hartl03b,Han08} when the question of chaos was examined for spinning particles using the MP equations. However, since the MP equations are not symplectic, the use of surfaces of section for studying their dynamics is ambiguous. On the other hand, the canonical Hamiltonian formalism of \cite{Barausse09} is symplectic (see, e.g., appendix A in \cite{KLLS}), and, hence, the subsequent study of Poincar\'{e} sections stands on solid ground from this point of view. \subsection{Setting up the numerics} \label{sec:SetNum} To evolve the Hamiltonian equations of motion \eqref{eq:EqMHam} we need to set up the initial conditions of our system. We have nine variables, i.e., three variables for the position, three for the momentum, and three for the spin. In the case of the Kerr background we have two integrals of motion apart from the Hamiltonian function $H$ \eqref{eq:HamSP}. Namely, the azimuthal component of the total angular momentum \cite{Barausse10,KLLS} \begin{equation} \label{eq:Jz} J_z=P_\phi+S_z~~, \end{equation} and the magnitude of the particle's spin \cite{Barausse09} \begin{equation} \label{eq:SpinM} S=\sqrt{S_x^2+S_y^2+S_z^2}~~, \end{equation} are preserved.
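To make the preceding formulas concrete, the following minimal Python sketch (our own illustration, not the authors' code; all names are our choices) evaluates the Kerr functions \eqref{eq:Kerrfunc}, the quantities \eqref{eq:abg}, and the non-spinning Hamiltonian \eqref{eq:HamNSP}; the conserved quantities \eqref{eq:Jz} and \eqref{eq:SpinM} can then be monitored along a numerically integrated orbit as a consistency check.

\begin{verbatim}
import numpy as np

def kerr_inverse_metric(r, theta, a, M=1.0):
    """Kerr functions of Eq. (eq:Kerrfunc) and the contravariant
    metric components entering Eq. (eq:abg)."""
    Sigma = r**2 + a**2 * np.cos(theta)**2
    varpi2 = r**2 + a**2
    Delta = varpi2 - 2.0 * M * r
    Lam = varpi2**2 - a**2 * Delta * np.sin(theta)**2
    # covariant Boyer-Lindquist components, Eq. (eq:KerrMetric)
    g_tt = -1.0 + 2.0 * M * r / Sigma
    g_tp = -2.0 * a * M * r * np.sin(theta)**2 / Sigma
    g_pp = Lam * np.sin(theta)**2 / Sigma
    # invert the (t, phi) block; the r and theta blocks are diagonal
    det = g_tt * g_pp - g_tp**2
    return (g_pp / det, -g_tp / det, g_tt / det,   # g^tt, g^tphi, g^phiphi
            Delta / Sigma, 1.0 / Sigma)            # g^rr, g^thetatheta

def H_NS(r, theta, P, a, m=1.0, M=1.0):
    """Non-spinning Hamiltonian, Eq. (eq:HamNSP); P = (P_r, P_theta, P_phi)."""
    gtt, gtp, gpp, grr, gthth = kerr_inverse_metric(r, theta, a, M)
    alpha = 1.0 / np.sqrt(-gtt)
    beta_phi = gtp / gtt            # the only nonvanishing beta^i component
    # gamma^ij P_i P_j with gamma^ij = g^ij - g^0i g^0j / g^00
    gamma_pp = grr * P[0]**2 + gthth * P[1]**2 + (gpp - gtp**2 / gtt) * P[2]**2
    return beta_phi * P[2] + alpha * np.sqrt(m**2 + gamma_pp)

# conserved quantities of Eqs. (eq:Jz) and (eq:SpinM)
J_z = lambda P_phi, S_z: P_phi + S_z
S_mag = lambda S: np.sqrt(S[0]**2 + S[1]**2 + S[2]**2)
\end{verbatim}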
For a group of orbits to belong to the same surface of section they have to share the same values of $J_z$, $S$ and $H$. Thus, we are going to use the above three constants to fix the initial conditions. Since the Kerr background is axisymmetric, the initial value of the azimuthal angle $\phi$ can be set to $0$ without loss of generality. The equatorial plane $\theta=\pi/2$ defines the appropriate surface of section, due to the reflection symmetry of the Kerr spacetime with respect to the equatorial plane. The equatorial plane was also chosen as the surface of section by previous studies of the spinning particle dynamics \cite{Suzuki97,Hartl03a,Hartl03b,Han08}. On the equatorial plane we choose initial conditions along the radial direction $r$, and for each orbit we set the initial radial momentum to $P_r=0$. The spin components $S_x,~S_y$ are initially set to $0$; thus, the absolute value of the component $S_z$ equals the spin's magnitude. The sign of $S_z$ shows whether the particle's spin is initially aligned with the spin of the central object (positive sign) or anti-aligned (negative sign). From \eqref{eq:Jz}, with given $S_z$, we can get $P_\phi$, while $P_\theta$ is found through a Newton iteration for a given value of the Hamiltonian function $H$ (a schematic implementation is sketched below)\footnote{With all the other phase space variables fixed as explained in the text, the Hamiltonian function can be rewritten as an effective function of $P_\theta$ alone, which drastically reduces the complexity of the Newton iteration.}. Obviously the above initial condition setting is not unique, but we found it convenient for our investigation.

The equations of motion~\eqref{eq:EqMHam} are evolved by a Gauss Runge--Kutta integration scheme, which has very good conservation properties for symplectic systems (see, e.g.,~Appendix A in~\cite{LGSK}). On the surface of section we record crossings with $P_\theta>0$. In order to calculate the phase space points on the sections very precisely, we make use of the integration scheme's interpolation property, as described in Appendix A of~\cite{KLLS}. In our visualization we are going to use only the variables $r,~P_r,~P_\theta,~P_\phi$, since by using the constants of motion~\eqref{eq:Jz}-\eqref{eq:SpinM} we can reduce our phase space to the positions and the momenta. Above we have chosen $\theta=\pi/2$ for our surface of section due to the reflection symmetry. Moreover, even though $\phi$ evolves in time, we do not use it for the 4D~Poincar\'{e} sections, because the Kerr spacetime is axially symmetric and, therefore, the variable $\phi$ should not carry any useful information. Thus, in our 4D~Poincar\'{e} sections we are using $r,~P_r,~P_\theta$ for the 3D projection, while $P_\phi$ is represented by the color. However, note that due to the constant~$J_z$~\eqref{eq:Jz}, the use of $P_\phi$ to color the consequents is equivalent to the use of $S_z$, i.e., the maxima of one quantity correspond to the minima of the other.

The spin is measured in units of $m\,M$, namely $S/(m M)$ is dimensionless. By setting $m=M=1$ the spin becomes dimensionless, and all the other quantities as well. In some of our numerical examples we use unrealistically high values for the particle's spin magnitude, e.g., $S\approx 1$. However, these values are dynamically valid even for the linearized in spin Hamiltonian formulation we are using, because once the Hamiltonian function is explicitly written the Hamiltonian system is self-consistent. Namely, the Hamiltonian function itself depends only linearly on the spin components, and the Hamiltonian equations~\eqref{eq:EqMHam} depend only linearly on the spin as well.
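As an illustration of the initial-condition setup described above, here is a minimal, hedged sketch of the root-finding step for $P_\theta$. It assumes a routine \texttt{hamiltonian(P\_theta)} that evaluates $H$ of Eq.~\eqref{eq:HamSP} with $\theta=\pi/2$, $\phi=0$, $P_r=0$, $S_x=S_y=0$ held fixed and $P_\phi=J_z-S_z$ from Eq.~\eqref{eq:Jz}; the routine name and the finite-difference derivative are our choices, not the authors'.

\begin{verbatim}
def solve_P_theta(H0, hamiltonian, P0=3.0, tol=1e-12, itmax=50):
    """Newton iteration for P_theta such that H(P_theta) = H0."""
    P = P0
    for _ in range(itmax):
        f = hamiltonian(P) - H0
        if abs(f) < tol:
            return P
        h = 1e-6 * max(1.0, abs(P))   # step for the numerical derivative
        dH = (hamiltonian(P + h) - hamiltonian(P - h)) / (2.0 * h)
        P -= f / dH                   # Newton update
    raise RuntimeError("Newton iteration for P_theta did not converge")
\end{verbatim}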
The only limitation on the spin magnitude is astrophysical. The dimensionless spin value becomes astrophysically relevant for EMRIs when $S<10^{-4}$ (for more details see section II.B in \cite{Hartl03a}). However, one has to keep in mind that the aim of this work is basically a dynamical investigation of the system, not an astrophysical one.

As far as the Kerr parameter is concerned, we have chosen the value $a=0.9$ in our study, for the following reason. Integrability is recovered either in the geodesic limit $(S=0)$ or in the Schwarzschild limit $(a=0)$; thus, in order to obtain the most pronounced non-integrability effects, we have to stay away from both limits, which is the case for $a=0.9$. This does not mean that for smaller Kerr parameters we cannot find signs of chaos. Actually, the non-integrability of the linear in spin Hamiltonian approximation for the Kerr background was detected already for $a=0.1$ (figure~3 in \cite{KLLS}). \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{2DS1.eps}} \caption{A 2D projection of a Poincar\'{e} section on the $r,~P_r$ plane for Kerr parameter $a=0.9$, particle spin $S=1$, and $H=0.95,~J_z=2$. } \label{Fig:2DS1} \end{figure} \subsection{Examples for $S=1$} \label{sec:ExS1} In order to find chaos we use the extreme case $S=1$ in our first example. Fig.~\ref{Fig:2DS1} shows a two dimensional projection of a Poincar\'{e} section. We can observe a chaotic region (scattered points) encircling an island of stability. The chaotic sea is confined between two surfaces. The inner one, which defines the limit of the island of stability, is a KAM torus, while the outer one is the boundary of the allowed motion. The outer boundary is traced by the outer limit of the chaotic orbit. The boundary of the allowed motion has an opening around $r=2,~P_r=0$, through which the chaotic orbits plunge towards the central black hole $(r=0)$. However, our observations are not unambiguous, since we do not see a Poincar\'{e} section in Fig.~\ref{Fig:2DS1}, but a projection. A 2D Poincar\'{e} section is accurate only for a Hamiltonian system of two degrees of freedom. In a system with two degrees of freedom the KAM curves have zero width, and chaotic regions are represented by scattered dots covering a region of nonzero width. In Fig.~\ref{Fig:2DS1} we see KAM tori projected on a 2D plane, so the width of the KAMs is nonzero. Thus, a 2D Poincar\'{e} projection does not offer an unambiguous criterion to distinguish chaos from order. In order to draw safe conclusions we have to use 4D Poincar\'{e} sections. \begin{figure}[htp] \centerline{ \includegraphics[width=0.49\textwidth]{4DS1reg.eps}} \caption{ A regular torus from Fig.~\ref{Fig:2DS1} with initial $r=7.5$ on a 4D Poincar\'{e} section. } \label{Fig:4DS1reg} \end{figure} Using the technique of color and rotation on a 4D Poincar\'{e} section, the regularity of an orbit in the neighborhood of a stable periodic orbit is indicated in the topology of the 3-dimensional projection by the presence of a torus with a smooth color variation on its surface, the color being determined by the distribution of the consequents in the 4th dimension \cite{Patsis94}. We use the orbit starting from $r=7.5$ in Fig.~\ref{Fig:2DS1} to give our first example of a regular orbit on a 4D Poincar\'{e} section (Fig.~\ref{Fig:4DS1reg}).
In Fig.~\ref{Fig:4DS1reg} we observe that as the orbit evolves on the rotational torus projected on the $r,~P_r,~P_\theta$ subspace, the colors representing $P_\phi$ vary smoothly \cite{Katsanikas11a}. This means that the orbit is regular. We use the software package ``gnuplot'' to visualize our results. We give the viewing angles of the 3D projections for Fig.~\ref{Fig:4DS1reg} and all the subsequent similar figures of our paper in Table~\ref{tab:ViewAngles}. \begin{figure*}[htp] \centerline{ \includegraphics[width=0.49\textwidth]{4DS1chIn.eps} \includegraphics[width=0.49\textwidth]{4DS1chF.eps}} \caption{ A chaotic orbit with initial $r=3,~P_r=0$ in Fig.~\ref{Fig:2DS1} depicted in a 4D Poincar\'{e} section. The left plot shows the initial $300$ crossings of the orbit through the Poincar\'{e} section, while the right shows a detail from the Poincar\'{e} section when $1500$ crossings have been reached. The arrows indicate consequents lying almost on the surface that separates the allowed from the non-allowed space for the motion of the particle. For further explanations see text. } \label{Fig:4DS1ch} \end{figure*} On a 4D Poincar\'{e} section the chaotic nature of an orbit is demonstrated by its irregular behavior in the 3-dimensional projection and/or by the mixing of the colors representing the 4th dimension. In Fig.~\ref{Fig:4DS1ch} we consider the chaotic orbit starting from $r=3$ on the 2D projection in Fig.~\ref{Fig:2DS1}. Initially the orbit sticks around a KAM torus lying on the border of the island of stability (left plot of Fig.~\ref{Fig:4DS1ch}). By sticking around the torus it mimics a regular orbit (the color variation is smooth), but as the orbit evolves it departs from the KAM torus and sticks to the surface that defines the space of allowed motion. The consequents of the orbit exhibit a smooth color variation. This is typical of the phenomenon of stickiness and is quite common for weakly chaotic orbits, which are called sticky; see, e.g., \cite{Katsanikas11a}. The arrows in the right plot of Fig.~\ref{Fig:4DS1ch} show points that stick, in this case, to the outer boundary. The chaotic nature of the orbit is revealed by its irregular behavior, and not by color mixing. The orbit, after 1500 consequents, does not form a torus with small color variation on it as in Fig.~\ref{Fig:4DS1reg}, but has a double loop structure. The fact that we do not have color mixing indicates stickiness \cite{Katsanikas13}. This behavior is similar to that of a weakly chaotic orbit trapped between two invariant curves in a 2D Hamiltonian system. \subsection{Examples for $S=\sqrt{0.1}$} \label{sec:ExSroot0p1} \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{2DSroot0p1.eps}} \caption{A detail from a 2D projection of a Poincar\'{e} section on the $r,~P_r$ plane for Kerr parameter $a=0.9$, particle spin $S=\sqrt{0.1}$, and $H=0.95,~J_z=2$.} \label{Fig:2DSroot0p1} \end{figure} We keep the same energy and angular momentum as in Sec.~\ref{sec:ExS1}, but we reduce the spin magnitude to $S=\sqrt{0.1}$. In this case a 2D projection of the whole phase space like the one in Fig.~\ref{Fig:2DS1} is hardly discernible from a proper Poincar\'{e} map coming from a system with 2 degrees of freedom. One has to focus on a small region of the phase space to see the real structure (Fig.~\ref{Fig:2DSroot0p1}).
In Fig.~\ref{Fig:2DSroot0p1} we observe that there is still a chaotic region surrounding the main island of stability (scattered points on the left side of the plot), and that the KAM tori have nonzero width. It is worth mentioning that in a system of 3 degrees of freedom the chaotic regions communicate even if we see KAMs between them in the 3D projections of the 4D space of section. On the contrary, in systems with 2 degrees of freedom, a KAM curve lying between two chaotic regions in the 2D surface of section does not allow them to communicate\footnote{By ``communicate'' we mean that a chaotic orbit can go from the one region to the other.}. A case where two chaotic regions communicate is given in Fig.~\ref{Fig:2DSroot0p1}. Apart from the outer chaotic region, there is a chaotic region lying in the interval $2.35 \lesssim r \lesssim 2.37$. This region is inside the KAM tori that are lying in the interval $2.24 \lesssim r \lesssim 2.25$ on the $P_r=0$ line in Fig.~\ref{Fig:2DSroot0p1}. By starting the integration of an orbit in the inner chaotic region we soon end up in the outer one, since the two regions communicate. \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{4DSroot0p1reg.eps}} \caption{A regular torus from Fig.~\ref{Fig:2DSroot0p1} with initial $r=2.4$ on a 4D Poincar\'{e} section. } \label{Fig:4DSroot0p1reg} \end{figure} \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{4DSroot0p1fil.eps}} \caption{Filament corresponding to a chaotic orbit from Fig.~\ref{Fig:2DSroot0p1} with initial $r=2.32$ on a 4D Poincar\'{e} section. After long integration times the consequents diffuse in phase space. } \label{Fig:4DSroot0p1fil} \end{figure} Actually the structure of the phase space is far more complicated. An example is a regular orbit starting from $r=2.4$, which is represented by a structure that looks like a row of nooses in the 2D subspace $(r,P_r)$ of the 4D~Poincar\'{e} section (Fig.~\ref{Fig:2DSroot0p1}). This regular orbit is represented by a warped rotational torus on the 4D~Poincar\'{e} section (see, e.g., \cite{Vrahatis97}, \cite{Katsanikas11a}). In Fig.~\ref{Fig:4DSroot0p1reg} we see the real structure of the warped rotational torus. The regular orbit follows the warping of the torus while the color varies smoothly during the time of integration. On the other hand, weakly chaotic orbits lie in the region which is apparently dominated by KAM tori ($2.24 \lesssim r \lesssim 2.25$ in Fig.~\ref{Fig:2DSroot0p1}). In Fig.~\ref{Fig:4DSroot0p1fil} we plot such a weakly chaotic orbit. It is represented by a 3D filamentary structure with self-intersections in the 3D subspace $(r,P_r,P_{\theta})$ of the 4D space of section. We observe that this structure has smooth color variation and that we have the same color (the same value in the 4th dimension) at the regions of the self-intersections. We underline the fact that in Fig.~\ref{Fig:4DSroot0p1fil} we also observe two apparent self-intersections that do not have the same color. If we rotate the figure and inspect these self-intersections from different viewing angles, we can easily observe that they do not exist in the 3D subspace; they are artifacts of the chosen viewing angles. The smooth color variation of the 3D~filamentary structure shows that the 4th dimension supports the geometry of this structure in the 4D space of section. This also gives us the dynamical information that the equal-color self-intersections occur in the 4D space.
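For readers wishing to reproduce this kind of inspection, here is a minimal Python/matplotlib sketch of the color-and-rotation plot (the authors use gnuplot; this function is our own illustration and all names are ours). The consequents are shown in the 3D subspace $(r,P_r,P_\theta)$ and colored by $P_\phi$; a smooth color gradient along the structure then suggests regularity or stickiness, while color mixing indicates strong chaos.

\begin{verbatim}
import matplotlib.pyplot as plt

def color_rotation_plot(r, P_r, P_theta, P_phi):
    """Consequents of one orbit on the section theta = pi/2
    (crossings with P_theta > 0), plotted in the 3D subspace
    (r, P_r, P_theta) and colored by the 4th coordinate P_phi.
    Interactive rotation of the axes plays the role of the
    'rotation' part of the method."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    sc = ax.scatter(r, P_r, P_theta, c=P_phi, s=4, cmap="viridis")
    ax.set_xlabel("$r$")
    ax.set_ylabel("$P_r$")
    ax.set_zlabel(r"$P_\theta$")
    fig.colorbar(sc, label=r"$P_\phi$")
    plt.show()
\end{verbatim}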
Such 4D~filamentary structures were encountered for the first time in a 3D~galactic Hamiltonian system in \cite{Katsanikas11c}, and they are found in the neighborhood of unstable periodic orbits with high multiplicity \cite{Katsanikas11c}. The orbits that are represented by these structures are sticky chaotic orbits. Such weakly chaotic orbits have as a 2D counterpart the chaotic orbits that can be found in chains of elliptic and hyperbolic points in resonance zones. These chaotic orbits connect the hyperbolic points and surround the islands of stability of the elliptic ones. In the case we study here, these weakly chaotic orbits extend into the 3D space of the projection, while they have a smooth color variation along the filament they form. However, if we continue the integration for long times, the orbit will diffuse in the 4D space, something that will be demonstrated clearly in the next example. \subsection{Examples for $S\le 0.1$} \label{sec:ExSle0p1} \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{4DS0p1fil.eps}} \caption{A detail from a 4D Poincar\'{e} section of a filamentary chaotic orbit with $S=0.1$ starting from $r=2.225$. } \label{Fig:4DS0p1fil} \end{figure} If we reduce the spin magnitude to $S=10^{-1}$ and $S=10^{-2}$, we again encounter 4D tori and 4D filamentary structures in the 4D space of section, corresponding to regular orbits and sticky chaotic orbits respectively. In Fig.~\ref{Fig:4DS0p1fil} (for $S=0.1$) we observe a 4D filamentary structure. Despite the fact that we have smooth color variation in the 4th dimension $P_{\phi}$, the consequents depart from this filamentary structure through the 3D subspace $(r,P_r,P_{\theta})$, and they occupy larger volumes in the phase space (before the final plunge towards the black hole). These consequents can be observed on the left side of Fig.~\ref{Fig:4DS0p1fil}. The departure of these points from the filamentary structure happens earlier than in the case described in Fig.~\ref{Fig:4DSroot0p1fil}. However, in both cases we observe, for the first time in a relativistic system, stickiness on 4D Poincar\'{e} sections in structures that correspond to chaotic zones around unstable periodic orbits with high multiplicity. \begin{figure}[htp] \centerline{ \includegraphics[width=0.45\textwidth]{2DS0p001.eps}} \caption{A detail from a 2D projection of a Poincar\'{e} section on the $r,~P_r$ plane for Kerr parameter $a=0.9$, particle spin $S=0.001$, and $H=0.95,~J_z=2$. } \label{Fig:2DS0p001} \end{figure} The last significant imprints of chaos are found for $S=10^{-3}$. For such a low value of the spin, the 2D projection of a Poincar\'{e} section shown in Fig.~\ref{Fig:2DS0p001} is very close to what one would expect to see in the case of a system with 2 degrees of freedom. In Fig.~\ref{Fig:2DS0p001} we see a chaotic zone (scattered points on the left side) and a KAM torus (orbit on the right side of the plot). We have to zoom in significantly on the surface of section in order to make apparent the chaotic zone and the width of the torus. For spins $S\leq 10^{-4}$ the presence of chaos appears to be negligible, and if this is the case it can be practically ignored. Even non-integrability effects like the existence of islands of stability near resonances can be neglected for all practical purposes. In a few words, the system is nearly integrable, in agreement with the recent findings of \cite{Ruangsri15}, where no traces of resonant orbits were found in a study of the linearized in spin MP equations.
It is worth recalling that $S\leq 10^{-4}$ is the upper limit for EMRIs, and it is interesting to notice that this value is also the upper limit below which the orbits produced by the Hamiltonian approximation start to match the orbits produced by the MP equations with the NW SSC \cite{LGSK}. \begin{table} \begin{tabular}[t]{c | c c } Fig. & $\theta$ & $\phi$ \\ \hline \ref{Fig:4DS1reg} & $47^{\circ}$ &$349^{\circ}$ \\ \ref{Fig:4DS1ch}~(left panel) & $36^{\circ}$ &$144^{\circ}$ \\ \ref{Fig:4DS1ch}~(right panel)& $136^{\circ}$& $84^{\circ}$ \\ \ref{Fig:4DSroot0p1reg} & $46^{\circ}$ & $66^{\circ}$ \\ \ref{Fig:4DSroot0p1fil} & $44^{\circ}$ & $148^{\circ}$ \\ \ref{Fig:4DS0p1fil} & $60^{\circ}$ & $88^{\circ}$ \\ \end{tabular} \caption{The viewing angles of the figures depicting 4D~Poincar\'{e} sections, given in spherical coordinates $(\theta,\phi)$ as defined in the gnuplot software package.} \label{tab:ViewAngles} \end{table} \section{Discussion and conclusions} \label{sec:ConDis} The method of color and rotation \cite{Patsis94} is used for the first time in a relativistic system. Until now this method had been used in 3D galactic Hamiltonian systems \cite{Katsanikas11a,Katsanikas11b,Katsanikas11c,Patsis14a,Patsis14b}, in the 3D circular restricted three-body problem \cite{Geisel13}, and in a 4D symplectic map \cite{Zachilas13}. We encountered three types of orbits in our study, which, though studied in detail in a 3D galactic system \cite{Katsanikas11a,Katsanikas11c}, had never been investigated in other 3D systems in the framework of general relativity. These three types of orbits are: \begin{enumerate} \item The first type of orbits are the regular orbits. These orbits are represented on the 4D Poincar\'{e} spaces of section by 4D rotational tori \cite{Katsanikas11a,Vrahatis97}. These tori have the topology of a regular torus in the 3D projections of the 4D Poincar\'{e} space of section. Some of them are smooth regular tori and a few of them are warped. Nevertheless, all of them manifest a smooth color variation on their surface. \item The second type of orbits are chaotic orbits that initially stick on 4D rotational tori (on the 4D Poincar\'{e} section) before they diffuse in the phase space. \item The third type of orbits are a special case of chaotic orbits. They are represented by 4D filamentary structures on the 4D Poincar\'{e} sections, as in \cite{Katsanikas11c}. These structures lie in the neighborhood of unstable periodic orbits with high multiplicity. Such orbits are sticky chaotic orbits, since their consequents leave the 4D filamentary structures after a longer time of integration. \end{enumerate} In general we did not encounter strong chaos in the system, which would be manifested by color mixing on the 4D Poincar\'{e} sections. We encountered only weakly chaotic and sticky orbits. Moreover, we observe that chaotic motion seems to be insignificant, and its contribution to the overall dynamics can probably be neglected, when the dimensionless spin becomes smaller than $S= 10^{-4}$, i.e., when the value of the spin is in the astrophysically relevant interval for extreme mass ratio inspirals. However, from a dynamical point of view the inclusion of the particle's spin in the motion of a small compact object is just one way to go from the integrable case of geodesic motion in a Kerr black hole background to a non-integrable system. For example, it is well known that rings and halos around black holes can induce chaotic motion (see, e.g., \cite{SemSuk}).
The same effect takes place when the spacetime around the central supermassive object is described by a non-Kerr black hole (see, e.g., \cite{nonKerr}). Non-integrability can also originate from the self-force or from the inclusion of the quadrupole moment in the Mathisson-Papapetrou equations. In a few words, there are many reasons for an extreme mass ratio binary to be described by a non-integrable system. However, it is unclear to what extent the effects coming from the non-integrability can affect the motion of the small body. \begin{acknowledgments} G.L-G is supported by UNCE-204020a and by GACR-14-10625S. This work was partially supported by the Research Committee of the Academy of Athens (project 200/854). We would like to thank Prof. George Contopoulos for carefully reading the manuscript and for his useful suggestions. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} \begin{figure*} \centering \includegraphics[scale=0.57]{fd.pdf} \caption{\small The Freebase description of \emph{Jean Metellus} can be used to infer that the entity has the type \emph{/book/author}. This missing fact is found by our algorithm and was still missing in the latest version of Freebase at the time of writing.} \label{fd} \end{figure*} There is now increasing interest in the construction of knowledge bases (KBs) like \emph{Freebase} \cite{freebase} and \emph{NELL} \cite{NELL} in the natural language processing community. KBs contain facts such as \emph{Tiger Woods} is an \emph{athlete} and \emph{Barack Obama} is the \emph{president of} the \emph{USA}. However, one of the main drawbacks of existing KBs is that they are incomplete and are missing important facts \cite{WGMSGL14}, jeopardizing their usefulness in downstream tasks such as question answering. This makes the task of completing knowledge base entries, or Knowledge Base Completion (KBC), extremely important. In this paper, we address an important subproblem of knowledge base completion---inferring missing entity type instances. Most previous work in KB completion has focused only on the problem of relation extraction \cite{distant_supervision,rescal,transe,limin}. Entity type information is crucial in KBs and is widely used in many NLP tasks such as relation extraction \cite{trescal}, coreference resolution \cite{ratinov,coref}, entity linking \cite{entity_linking}, semantic parsing \cite{ccg,dcs} and question answering \cite{subgraph,ie}. For example, adding entity type information improves relation extraction by 3\% \cite{trescal} and entity linking by 4.2 F1 points \cite{elnew}. Despite its importance, there is surprisingly little previous work on this problem, and there are no datasets publicly available for evaluation.

We construct a large-scale dataset for the task of inferring missing entity type instances in a KB. Most previous KBC datasets \cite{distant_supervision,limin} are constructed using a single snapshot of the KB, and methods are evaluated on a subset of facts that are hidden during training. Hence, the methods could potentially be evaluated on their ability to predict \emph{easy} facts that the KB already contains. Moreover, the methods are not directly evaluated on their ability to predict missing facts. To overcome these drawbacks we construct the train and test data using two snapshots of the KB and evaluate the methods on predicting facts that are added to the more recent snapshot, enabling a more realistic and challenging evaluation.

Standard evaluation metrics for KBC methods are generally {\em type-based} \cite{distant_supervision,limin}, measuring the quality of the predictions by aggregating scores computed within a type. This is not ideal because: (1) it treats every entity type equally, not considering the distribution of types, and (2) it does not measure the ability of the methods to rank predictions across types. Therefore, we additionally use a global evaluation metric, in which the quality of predictions is measured both within and across types, and which also accounts for the high variance in the type distribution. In our experiments, we show that models trained with negative examples from the entity side perform better on type-based metrics, while models trained with negative examples from the type side perform better on the global metric.
In order to design methods that can rank predictions {\em both} within and across entity (or relation) types, we propose a {\em global objective} to train the models. Our proposed method combines the advantages of previous approaches by using negative examples from both the entity and the type side. When considering the same number of negative examples, we find that the linear classifiers and the low-dimensional embedding models trained with the global objective produce better quality rankings within and across entity types when compared to training with negative examples only from the entity or the type side. Additionally, compared to prior methods, the model trained on the proposed global objective can more reliably suggest confident entity-type pair candidates that could be added to the given knowledge base.

Our contributions are summarized as follows: \begin{itemize} \item We develop an evaluation framework comprising methods for dataset construction and evaluation metrics to evaluate KBC approaches for missing entity type instances. The dataset and evaluation scripts are publicly available at {\footnotesize \url{http://research.microsoft.com/en-US/downloads/df481862-65cc-4b05-886c-acc181ad07bb/default.aspx}}. \item We propose a global training objective for KBC methods. The experimental results show that both linear classifiers and low-dimensional embedding models achieve the best overall performance when trained with the global objective function. \item We conduct extensive studies on models for inferring missing type instances, studying the impact of various features and models. \end{itemize} \section{Inferring Entity Types} \label{sec:information-sources} We consider a KB $\Lambda$ containing entity type information of the form $(e, t)$, where $e \in E$ ($E$ is the set of all entities) is an entity in the KB with type $t \in T$ ($T$ is the set of all types). For example, $e$ could be \emph{Tiger Woods} and $t$ could be \emph{sports athlete}. As a single entity can have multiple types, entities in Freebase often miss some of their types. The aim of this work is to infer missing entity type instances in the KB. Given an entity-type pair $(e, t) \not\in \Lambda$ unobserved in the training data, where entity $e \in E$ and type $t \in T$, the task is to infer whether the KB currently misses the fact, i.e., to infer whether $(e, t)$ should belong to $\Lambda$. We consider entities in the intersection of Freebase and Wikipedia in our experiments. \subsection{Information Resources} We now describe the information sources used to construct the feature representation of an entity to infer its types. We use information in Freebase and external information from Wikipedia to complete the KB. \begin{itemize} \item{{\bf Entity Type Features}: The entity types observed in the training data can be a useful signal to infer missing entity type instances. For example, in our snapshot of \emph{Freebase}, it is not uncommon to find an entity with the type \emph{/people/deceased\_person} but missing the type \emph{/people/person}. } \item{{\bf Freebase Description}: Almost all entities in \emph{Freebase} have a short one-paragraph description of the entity. Figure \ref{fd} shows the \emph{Freebase} description of \emph{Jean Metellus} that can be used to infer the type \emph{/book/author}, which \emph{Freebase} does not contain as of the date of writing this article.
\begin{figure*} \centering \includegraphics[scale=0.58]{wiki} \caption{\small A section of the \emph{Wikipedia} article of \emph{Claire Martin} which gives clues that the entity has the type \emph{/award/award\_winner}. This currently missing fact is also found by our algorithm. } \label{wiki} \end{figure*} } \item{{\bf Wikipedia}: As external information, we include the \emph{Wikipedia} full-text article of an entity in its feature representation. We consider entities in Freebase that have a link to their Wikipedia article. The \emph{Wikipedia} full text of an entity gives several clues to predict its entity types. For example, Figure \ref{wiki} shows a section of the \emph{Wikipedia} article of \emph{Claire Martin} which gives clues to infer the type \emph{/award/award\_winner} that \emph{Freebase} misses.} \end{itemize} \section{Evaluation Framework} \label{sec:eval-meth} In this section, we propose an evaluation methodology for the task of inferring missing entity type instances in a KB. While we focus on recovering entity types, the proposed framework can be easily adapted to relation extraction as well. First, we discuss our two-snapshot dataset construction strategy. Then we motivate the importance of evaluating KBC algorithms globally and describe the evaluation metrics we employ. \subsection{Two Snapshots Construction} In most previous work on KB completion to predict missing relation facts \cite{distant_supervision,limin}, the methods are evaluated on a subset of facts from a {\em single} KB snapshot that are hidden during training. However, given that the missing entries are usually selected randomly, the distribution of the selected unknown entries could be very different from the distribution of the actually missing facts. Also, since any fact could potentially be used for evaluation, the methods could be evaluated on their ability to predict easy facts that are already present in the KB. To overcome this drawback, we construct our train and test sets by considering {\em two} snapshots of the knowledge base. The {\em train} snapshot is taken from an earlier time without special treatment. The {\em test} snapshot is taken from a later period, and a KBC algorithm is evaluated by its ability to recover newly added knowledge in the test snapshot. This enables the methods to be directly evaluated on facts that are missing in a KB snapshot. Note that the facts that are added to the test snapshot are, in general, more subtle than the facts it already contains, and predicting the newly added facts could be harder. Hence, our approach enables a more realistic and challenging evaluation setting than previous work. We use the manually constructed \emph{Freebase} as the KB in our experiments. Notably, \newcite{trescal} use a two-snapshot strategy for constructing a dataset for relation extraction using the automatically constructed \emph{NELL} as their KB. The new facts that are added to a KB by an automatic method may not have all the characteristics that make the two-snapshot strategy more advantageous. We construct our train snapshot $\Lambda_0$ by taking the \emph{Freebase} snapshot on $3^{rd}$ September, 2013 and consider entities that have a link to their Wikipedia page. KBC algorithms are evaluated by their ability to predict facts that were added to the $1^{st}$ June, 2014 snapshot of \emph{Freebase} $\Lambda$. To get negative data, we make a closed-world assumption, treating any unobserved instance in \emph{Freebase} as a negative example. Unobserved instances in the \emph{Freebase} snapshots of $3^{rd}$ September, 2013 and $1^{st}$ June, 2014 are used as negative examples in training and testing, respectively.\footnote{Note that some of the negative instances used in training could be positive instances in test, but we do not remove them during training.} The positive instances in the test data ($\Lambda - \Lambda_0$) are facts that are newly added to the test snapshot $\Lambda$.
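The following minimal Python sketch (our own illustration; representing each snapshot as a set of (entity, type) pairs and all names are our assumptions) summarizes the two-snapshot construction; the filtering of the test negatives is motivated in the paragraph that follows.

\begin{verbatim}
def build_dataset(snapshot_train, snapshot_test, all_types):
    """Two-snapshot construction: snapshot_train is Lambda_0
    (Sept. 3, 2013) and snapshot_test is Lambda (June 1, 2014),
    each a set of (entity, type) pairs."""
    train_pos = snapshot_train
    # test positives: facts newly added to the later snapshot
    test_pos = snapshot_test - snapshot_train
    # closed-world negatives: unobserved types of entities that
    # gained at least one new fact in the test snapshot (a portion
    # of negatives for other entities is added analogously)
    test_entities = {e for (e, _) in test_pos}
    test_neg = {(e, t) for e in test_entities for t in all_types
                if (e, t) not in snapshot_test}
    return train_pos, test_pos, test_neg
\end{verbatim}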
Using the entire set of negative examples in the test data is impractical due to the large number of negative examples. To avoid this we only add the negative types of entities that have at least one new fact in the test data. Additionally, we add a portion of the negative examples for entities which do not have a new fact in the test data and that were unused during training. This makes our dataset quite challenging, since the number of negative instances is much larger than the number of positive instances in the test data. It is important to note that the goal of this work is not to predict facts that emerged during the time period between the train and test snapshots\footnote{In this work, we also do not aim to correct existing false positive errors in \emph{Freebase}.}. For example, we do not aim to predict the type \emph{/award/award\_winner} for an entity that won an award after $3^{rd}$ September, 2013. Hence, we use the \emph{Freebase} description in the training data snapshot and the \emph{Wikipedia} snapshot of $3^{rd}$ September, 2013 to get the features for entities. One might worry that the new snapshot contains a significant amount of emerging facts, so that it would not be an effective way to evaluate the KBC algorithms. We therefore examined the difference between the training and test snapshots manually and found that this is likely not the case. For example, we randomly selected $25$ \emph{/award/award\_winner} instances that were added to the test snapshot and found that all of them had won at least one award before $3^{rd}$ September, 2013. Note that while this automatic evaluation is closer to the real-world scenario, it is still not perfect, as the new KB snapshot is still incomplete. Therefore, we also perform human evaluation on a small dataset to verify the effectiveness of our approach. \subsection{Global Evaluation Metric} \emph{Mean average precision} (MAP) \cite{map} is now commonly used to evaluate KB completion methods \cite{distant_supervision,limin}. MAP is defined as the mean of \emph{average precision} over all entity (or relation) types. MAP treats each entity type equally (not explicitly accounting for their distribution). However, some types occur much more frequently than others. For example, in our large-scale experiment with $500$ entity types, there are many entity types with only $5$ instances in the test set, while the most frequent entity type has tens of thousands of missing instances. Moreover, MAP only measures the ability of the methods to correctly rank predictions within a type. To account for the high variance in the distribution of entity types and to measure the ability of the methods to correctly rank predictions across types, we use global average precision (GAP) (similarly to micro-F1) as an additional evaluation metric for KB completion. We convert the multi-label classification problem to a binary classification problem where the label of an entity-type pair is true if the entity has that type in \emph{Freebase} and false otherwise. GAP is the average precision of this transformed problem, which measures the ability of the methods to rank predictions both within and across entity types. Prior to us, \newcite{transe} used mean reciprocal rank as a global evaluation metric for a KBC task. We use average precision instead of mean reciprocal rank since MRR could be biased towards the top predictions of the method \cite{WGMSGL14}. While GAP captures the global ordering, it would be beneficial to measure the quality of the top $k$ predictions of the model for bootstrapping and active learning scenarios \cite{active_learning,boot}. We report G@k, GAP measured on the top $k$ predictions (similarly to \emph{Precision@k} and \emph{Hits@k}). This metric can be reliably used to measure the overall quality of the top $k$ predictions.
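To make the metric concrete, here is a short Python sketch of GAP and G@k (our own illustration, not the authors' evaluation script; it assumes at least one positive label among the scored predictions).

\begin{verbatim}
import numpy as np

def gap(scores, labels, k=None):
    """Global average precision: average precision over the single
    ranked list of all (entity, type) predictions pooled across
    types.  `scores` are model scores, `labels` are 0/1 gold labels;
    if k is given, only the top-k predictions are kept (G@k)."""
    order = np.argsort(-np.asarray(scores))   # rank by decreasing score
    if k is not None:
        order = order[:k]
    y = np.asarray(labels)[order]
    precision_at_i = np.cumsum(y) / (np.arange(len(y)) + 1.0)
    # mean precision at the rank positions of the positives
    return precision_at_i[y == 1].mean()
\end{verbatim}

MAP, in contrast, would compute this average precision separately over the ranked list of each type and then average the per-type values.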
\section{Global Objective for Knowledge Base Completion} \label{sec:approach} In this section we describe our approach for predicting missing entity types in a KB. While we focus on recovering entity types in this paper, the methods we develop can be easily extended to other KB completion tasks. \subsection{Global Objective Framework} During training, only positive examples are observed in KB completion tasks. Similarly to previous work \cite{distant_supervision,transe,limin}, we get negative training examples by treating the unobserved data in the KB as negative examples. Because the number of unobserved examples is much larger than the number of facts in the KB, we follow previous methods and sample a few unobserved negative examples for every positive example. Previous methods have largely neglected the way negative examples are sampled from the unobserved data. The proposed global objective framework allows us to systematically study the effect of the different sampling methods for negative data, as the performance of the model on different evaluation metrics does depend on the sampling method. We consider a training snapshot of the KB $\Lambda_0$, containing facts of the form $(e, t)$, where $e$ is an entity in the KB with type $t$. Given a fact $(e, t)$ in the KB, we consider two types of negative examples constructed from the following two sets: $\mathcal{N}_E (e,t)$ is the ``negative entity set'', and $\mathcal{N}_T (e,t)$ is the ``negative type set''. More precisely, \begin{equation*} \mathcal{N}_E (e,t) \subset \{e'| e' \in E, e' \neq e, (e',t) \notin \Lambda_0\}, \end{equation*} and \begin{equation*} \mathcal{N}_T (e,t) \subset \{t'| t'\in T, t' \neq t, (e,t') \notin \Lambda_0\}. \end{equation*} Let $\theta$ be the model parameters, and let $m = |\mathcal{N}_E (e,t)|$ and $n = |\mathcal{N}_T (e,t)|$ be the numbers of negative entities and negative types considered for training, respectively. For each entity-type pair $(e,t)$, we define the scoring function of our model as $s(e,t | \theta)$.\footnote{We often use $s(e,t)$ as an abbreviation of $s(e,t|\theta)$ in order to save space.} We define two loss functions, one using negative entities and the other using negative types: \begin{equation*} L_E (\Lambda_0, \theta) = \!\!\!\!\sum_{(e,t)\in\Lambda_0, e' \in \mathcal{N}_E(e,t)} \!\!\!\! [s(e',t) - s(e,t) +1]_+^k, \end{equation*} and \begin{equation*} L_T (\Lambda_0, \theta) = \!\!\!\!\sum_{(e,t)\in\Lambda_0, t' \in \mathcal{N}_T(e,t)} \!\!\!\! [s(e,t') - s(e,t) +1]_+^k, \end{equation*} where $k$ is the power of the loss function ($k$ can be 1 or 2), and $[x]_+ = \max(0,x)$ is the hinge function.
The global objective function is defined as \begin{equation} \label{eq:general_obj} \min_{\theta} Reg(\theta) + CL_T(\Lambda_0, \theta) + CL_E(\Lambda_0, \theta), \end{equation} where $Reg(\theta)$ is the regularization term of the model, and $C$ is the regularization parameter. Intuitively, the parameters $\theta$ are estimated to rank the observed facts above the negative examples with a margin. The total number of negative examples is controlled by the sizes of the sets $\mathcal{N}_E$ and $\mathcal{N}_T$. In Section~\ref{sec:experiments} we experiment with sampling only entities, only types, or both, keeping the total number of negative examples fixed. The rest of the section is organized as follows: we propose three algorithms based on the global objective in Section~\ref{sec:algorithms}, and in Section~\ref{sec:relat-exist-meth} we discuss the relationship between the proposed algorithms and existing approaches. Let $\Phi : E \rightarrow R^{d_{e}}$ be the feature function that maps an entity to its feature representation, and let $\Psi : T \rightarrow R^{d_{t}}$ be the feature function that maps an entity type to its feature representation.\footnote{This gives the possibility of defining features for the labels in the output space, but we use a simple one-hot representation for types right now, since richer features did not give performance gains in our initial experiments.} $d_e$ and $d_t$ represent the feature dimensionality of the entity features and the type features, respectively. The feature representation of the entity types ($\Psi$) is only used in the embedding model. \subsection{Algorithms} \label{sec:algorithms} We propose three different algorithms based on the global objective framework for predicting missing entity types. Two algorithms use the linear model and the other one uses the embedding model. \paragraph{Linear Model} The scoring function in this model is given by $s(e, t|\theta\!=\!\{\mathbf{w}_{t}\}) = \mathbf{w}_{t}^T\Phi(e)$, where $\mathbf{w}_{t} \in R^{d_{e}} $ is the parameter vector for target type $t$. The regularization term in Eq.~\eqref{eq:general_obj} is defined as $Reg(\theta) = \frac{1}{2} \sum_{t=1}^{|T|} \mathbf{w}_t^T\mathbf{w}_t$. We use $k=2$ in our experiments. Our first algorithm is obtained by using the dual coordinate descent algorithm~\cite{dcd} to optimize Eq.~\eqref{eq:general_obj}, where we modified the original algorithm to handle multiple weight vectors. We refer to this algorithm as {\bf Linear.DCD}. While the DCD algorithm ensures convergence to the globally optimal solution, its convergence can be slow in certain cases. Therefore, we adopt an online algorithm, Adagrad~\cite{adagrad}. We use the hinge loss function ($k=1$) with no regularization ($Reg(\theta) = 0$), since this gave the best results in our initial experiments. We refer to this algorithm as {\bf Linear.Adagrad}, which is described in Algorithm~\ref{algo_adagrad}. Note that $\text{AdaGradUpdate}(x,g)$ is a procedure which updates the vector $x$ with respect to the gradient $g$. \newfloat{algorithm}{t}{lop} \begin{algorithm} \footnotesize \caption{ The training algorithm for Linear.Adagrad.
} \label{algo_adagrad} \begin{algorithmic}[1] \State Initialize $\mathbf{w}_t=0, \forall t=1\ldots |T|$ \For{$(e, t) \in \Lambda_0$ } \For{$e' \in \mathcal{N}_E(e,t)$ } \If{$\mathbf{w}_{t}^T\Phi(e) - \mathbf{w}_{t}^T\Phi(e') -1 < 0$ } \State $\text{AdaGradUpdate}(w_t, \Phi(e') - \Phi(e))$ \EndIf \EndFor \For{$t' \in \mathcal{N}_T(e,t)$ } \If{$\mathbf{w}_{t}^T\Phi(e) - \mathbf{w}_{t'}^T\Phi(e) -1 < 0$ } \State $\text{AdaGradUpdate}(w_t, -\Phi(e))$ \State $\text{AdaGradUpdate}(w_{t'},\Phi(e))$ \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} \paragraph{Embedding Model} \newfloat{algorithm}{t}{lop} \begin{algorithm} \footnotesize \caption{The training algorithm for the embedding model.} \label{algo:embedding} \begin{algorithmic}[1] \State Initialize $\mathbf{V},\mathbf{U}$ randomly. \For{$(e, t) \in \Lambda_0$ } \For{$e' \in \mathcal{N}_E(e,t)$ } \If{$s(e,t) - s(e',t) -1 < 0$ } \State $\mu \leftarrow \mathbf{V}^T \Psi(t)$ \State $\eta \leftarrow \mathbf{U}^T (\Phi(e') - \Phi(e))$ \For{ $i \in 1\ldots d$} \State $\text{AdaGradUpdate}(\mathbf{U}_i, \mu[i] (\Phi(e')- \Phi(e)))$ \State $\text{AdaGradUpdate}(\mathbf{V}_i, \eta[i] \Psi(t))$ \EndFor \EndIf \EndFor \For{$t' \in \mathcal{N}_T(e,t)$ } \If{$s(e,t) - s(e,t') -1 < 0$ } \State $\mu \leftarrow \mathbf{V}^T (\Psi(t') - \Psi(t))$ \State $\eta \leftarrow \mathbf{U}^T\Phi(e)$ \For{ $i \in 1\ldots d$} \State $\text{AdaGradUpdate}(\mathbf{U}_i, \mu[i] \Phi(e))$ \State $\text{AdaGradUpdate}(\mathbf{V}_i, \eta[i] (\Psi(t') - \Psi(t)))$ \EndFor \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} In this model, vector representations are constructed for entities and types using linear projection matrices. Recall that $\Psi : T \rightarrow R^{d_{t}}$ is the feature function that maps a type to its feature representation. The scoring function is given by \begin{center} $ s(e, t|\theta\! =(\mathbf{U},\mathbf{V})) = \Psi(t)^T\mathbf{V}\mathbf{U}^T \Phi(e), $ \end{center} where $\mathbf{U} \in R^{d_e \times d}$ and $\mathbf{V} \in R^{d_{t}\times d}$ are projection matrices that embed the entities and types in a $d$-dimensional space. Similarly to the linear classifier model, we use the $\ell_1$-hinge loss function ($k=1$) with no regularization ($Reg(\theta) = 0$). $\mathbf{U}_i$ and $\mathbf{V}_i$ denote the $i$-th column vectors of the matrices $\mathbf{U}$ and $\mathbf{V}$, respectively. The algorithm is described in detail in Algorithm~\ref{algo:embedding}. The embedding model has more expressive power than the linear model, but its training, unlike that of the linear model, converges only to a locally optimal solution, since the objective function is non-convex. \subsection{Relationship to Existing Methods} \label{sec:relat-exist-meth} Many existing methods for relation extraction and entity type prediction can be cast as special cases of the global objective framework. For example, we can consider the work in relation extraction~\cite{distant_supervision,transe,limin} as models trained with $\mathcal{N}_T(e,t) = \emptyset$. These models are trained using only negative entities, which we refer to as the Negative Entity (NE) objective. The entity type prediction model in \newcite{fine-grained} is a linear model with $\mathcal{N}_E(e,t) = \emptyset$, which we refer to as the Negative Type (NT) objective. The embedding model described in \newcite{wsabie}, developed for image retrieval, is also a special case of our model trained with the NT objective.
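To make Algorithm~\ref{algo_adagrad} concrete, the following is a minimal Python sketch of the global-objective training loop (per-coordinate AdaGrad with the hinge loss and dense features; the class layout, learning rate, and all names are our assumptions, not the authors' code).

\begin{verbatim}
import numpy as np

class LinearAdagrad:
    """Linear model trained with the global objective: negative
    entities and negative types, hinge loss (k=1), per-coordinate
    AdaGrad updates, as in Algorithm 1."""

    def __init__(self, n_types, d_e, lr=0.1, eps=1e-8):
        self.W = np.zeros((n_types, d_e))      # one weight vector per type
        self.G = np.full((n_types, d_e), eps)  # squared-gradient accumulator
        self.lr = lr

    def _adagrad(self, t, grad):
        self.G[t] += grad ** 2
        self.W[t] -= self.lr * grad / np.sqrt(self.G[t])

    def train_step(self, phi_e, t, neg_entity_feats, neg_types):
        # negative entities: rank (e, t) above (e', t) with margin 1
        for phi_neg in neg_entity_feats:
            if self.W[t] @ phi_e - self.W[t] @ phi_neg < 1.0:
                self._adagrad(t, phi_neg - phi_e)
        # negative types: rank (e, t) above (e, t') with margin 1
        for t_neg in neg_types:
            if self.W[t] @ phi_e - self.W[t_neg] @ phi_e < 1.0:
                self._adagrad(t, -phi_e)
                self._adagrad(t_neg, phi_e)
\end{verbatim}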
While the $NE$ or $NT$ objective functions could be suitable for some classification tasks \cite{wsabie}, the choice of objective function for KBC tasks has not been well motivated. Often the choice is made neither with theoretical foundation nor with empirical support. To the best of our knowledge, the global objective function, which includes both $\mathcal{N}_E(e,t)$ and $\mathcal{N}_T(e,t)$, has not been considered previously by KBC methods. \section{Experiments} \label{sec:experiments} \begin{table}[t!] \centering \small \begin{tabular}{ l | c |c } \hline & 70 types & 500 types \\ \hline \hline Entities & 2.2M & 2.2M \\\hline \multicolumn{3}{c}{Training Data Statistics ($\Lambda_0$)}\\ \hline \hline positive examples & 4.5M & 6.2M \\ max \#ent for a type & 1.1M & 1.1M \\ min \#ent for a type & 6732 & 32 \\\hline \multicolumn{3}{c}{Test Data Statistics ($\Lambda - \Lambda_0$)}\\ \hline \hline positive examples & 163K & 240K \\ negative examples & 17.1M & 132M \\ negative/positive ratio & 105.22 & 554.44 \\\hline \end{tabular} \caption{\small Statistics of our dataset. $\Lambda_0$ is our training snapshot and $\Lambda$ is our test snapshot. An example is an entity-type pair.} \label{table:stats} \end{table} In this section, we give details about our dataset and discuss our experimental results. Finally, we perform manual evaluation on a small subset of the data. \subsection{Data} First, we evaluate our methods on the $70$ entity types with the most observed facts in the training data.\footnote{We removed a few entity types that were trivial to predict in the test data.} We also perform a large-scale evaluation by testing the methods on the $500$ types with the most observed facts in the training data. Table \ref{table:stats} shows statistics of our dataset. The number of positive examples is much larger in the training data than in the test data, since the test set contains only facts that were added to the more recent snapshot. An additional effect of this is that most of the facts in the test data are about entities that are not very \emph{well-known or famous}. The high ratio of negative to positive examples in the test data makes this dataset very challenging. \begin{table*}[t!] \small \begin{subtable}{.47\linewidth} \centering \begin{tabular}{ l | l | l | l } \hline Features & Algorithm & MAP & GAP \\ \hline \multirow{2}{*} {Description} & Linear.Adagrad & 29.17 & 28.17 \\ & Linear.DCD & 28.40 & 27.76 \\\hline \multirow{2}{*} {\begin{minipage}{1in} Description + Wikipedia \end{minipage}} & Linear.Adagrad & {\bf 33.28} & {\bf 31.97} \\ & Linear.DCD & 31.92 & 31.36 \\\hline \end{tabular} \caption{Adagrad vs. dual coordinate descent (DCD). Results are obtained using linear models trained with the global training objective (m=1, n=1) on 70 types. }\label{table:train} \end{subtable} \begin{subtable}{.47\linewidth} \centering \begin{tabular}{ l | l | l } \hline Features & MAP & GAP \\ \hline Type (T) & 12.33 & 13.58 \\\hline Description (D) & 29.17 & 28.17 \\\hline Wikipedia (W) & 30.81 & 30.56 \\\hline D + W & 33.28 & {\bf 31.97} \\\hline T + D + W & {\bf 36.13} & 31.13 \\\hline \end{tabular} \caption{Feature comparison.
Results are obtained using Linear.Adagrad with the global training objective (m=1, n=1) on 70 types.}\label{table:feature} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{ l l l l } & & & \\ & & & \\ \end{tabular} \end{subtable} \begin{subtable}{0.47\linewidth} \centering \begin{tabular}{ l | l | l | l } \hline Features & Objective & MAP & GAP \\ \hline \multirow{3}{*} { D + W } & NE (m = 2) & 33.01 & 23.97 \\ & NT (n = 2) & 31.61 & 29.09 \\ & Global (m = 1, n = 1) & 33.28 & {\bf 31.97} \\\hline \multirow{3}{*} {T + D + W} & NE (m = 2) & 34.56 & 21.79 \\ & NT (n = 2) & 34.45 & 31.42 \\ & Global (m = 1, n = 1) & {\bf 36.13} & 31.13 \\\hline \end{tabular} \caption{Global objective vs. NE and NT. Results are obtained using Linear.Adagrad on 70 types.}\label{table:objective} \end{subtable} \begin{subtable}{0.47\linewidth} \centering \begin{tabular}{ l | l | l | l } \hline Features & Objective & MAP & GAP \\ \hline \multirow{3}{*} { D + W } & NE (m = 2) & 30.92 & 22.38 \\ & NT (n = 2) & 25.77 & 23.40 \\ & Global (m = 1, n = 1) & {\bf 31.60} & {\bf 30.13} \\\hline \multirow{3}{*} {T + D + W} & NE (m = 2) & 28.70 & 19.34 \\ & NT (n = 2) & 28.06 & 25.42 \\ & Global (m = 1, n = 1) & 30.35 & 28.71 \\\hline \end{tabular} \caption{Global objective vs. NE and NT. Results are obtained using the embedding model on 70 types.} \label{table:embeddingobjective} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{ l l l l } & & & \\ & & & \\ \end{tabular} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{ l | l | l | l | l | l } \hline Features & Model & MAP & GAP & G@1000 & G@10000 \\ \hline \multirow{2}{*} {D + W} & Linear.Adagrad & 33.28 & {\bf 31.97} & {\bf 79.63} & {\bf 68.08}\\ & Embedding & 31.60 & 30.13 & 73.40 & 64.69 \\\hline \multirow{2}{*} {T + D + W} & Linear.Adagrad & {\bf 36.13} & 31.13 & 70.02 & 65.09 \\ & Embedding & 30.35 & 28.71 & 62.61 & 64.30 \\\hline \end{tabular} \caption{Model comparison. The models were trained with the global training objective (m=1, n=1) on 70 types.}\label{table:model} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{ l l l l } & & & \\ & & & \\ \end{tabular} \end{subtable} \begin{subtable}{\linewidth} \centering \begin{tabular}{ l | l | l | l |l } \hline Model & MAP & GAP & G@1000 & G@10000 \\ \hline Linear.Adagrad & {\bf 13.28} & {\bf 20.49} & {\bf 69.23} & {\bf 60.14} \\ \hline Embedding & 9.82 & 17.67 & 55.31 & 51.29 \\ \hline \end{tabular} \caption{Results on 500 types using Freebase description features. We train the models with the global training objective (m=1, n=1).}\label{table:full} \end{subtable} \caption{ \small Automatic evaluation results. Note that $m = |\mathcal{N}_E (e,t)|$ and $n = |\mathcal{N}_T (e,t)|$.}\label{table:small} \end{table*} \subsection{Automatic Evaluation Results} Table \ref{table:small} shows the automatic evaluation results for $70$ and $500$ types. We empirically compare different aspects of the system on $70$ types. \paragraph{Adagrad vs. DCD} We first study the linear models by comparing Linear.DCD and Linear.AdaGrad. Table \ref{table:train} shows that Linear.AdaGrad consistently performs better for our task. \paragraph{Impact of Features} We compare the effect of different features on the final performance using Linear.AdaGrad in Table \ref{table:feature}. Types are represented by Boolean features, while the Freebase description and the Wikipedia full text are represented using \emph{tf-idf} weighting.
The best MAP results are obtained by using all the information (T+D+W), while the best GAP results are obtained by using the Freebase description and the Wikipedia article of the entity. Note that the features are simply concatenated when multiple resources are used. We tried \emph{idf} weighting on the type features and on all features, but it did not yield improvements. \paragraph{The Importance of the Global Objective} Tables \ref{table:objective} and \ref{table:embeddingobjective} compare the global training objective with the NE and NT training objectives. Note that all three methods use the same number of negative examples: more precisely, for each $(e,t) \in \Lambda_0$, $|\mathcal{N}_E(e,t)| + |\mathcal{N}_T(e,t)| = m + n = 2$. The results show that the global training objective achieves the best scores on both MAP and GAP, for both classifiers and low-dimensional embedding models. Among NE and NT, NE performs better on the type-based metric while NT performs better on the global metric. \paragraph{Linear Model vs. Embedding Model} Finally, we compare the linear classifier model with the embedding model in Table \ref{table:model}. The linear classifier model performs better than the embedding model in both MAP and GAP. We perform a large-scale evaluation on $500$ types with the description features only (as these experiments are expensive); the results are shown in Table \ref{table:full}. One might expect that, with the increased number of types, the embedding model would perform better than the classifier since it shares parameters across types. However, despite the recent popularity of embedding models in NLP, the linear model still performs better in our task. \subsection{Human Evaluation} To verify the effectiveness of our KBC algorithms, and the correctness of our automatic evaluation method, we perform a manual evaluation on the top $100$ predictions obtained from two different experimental settings; the results are shown in Table \ref{table:manual}. Even though the automatic evaluation gives pessimistic results, since the test KB is also incomplete\footnote{This is true even with existing automatic evaluation methods.}, the results indicate that the automatic evaluation is correlated with the manual evaluation. More encouragingly, among the 179 unique instances we manually evaluated, 17 are still\footnote{At submission time.} missing in Freebase, which emphasizes the effectiveness of our approach. \subsection{Error Analysis} \begin{itemize} \item{ {\bf Effect of training data}: We find that the performance of the models on a type is highly dependent on the number of training instances for that type. For example, when evaluated on $70$ types, the linear classifier model performs 24.86\% better on the most frequent $35$ types than on the least frequent $35$ types. This indicates that bootstrapping or active learning techniques can be profitably used to provide more supervision for the methods. In this case, G@k would be a useful metric to compare the effectiveness of the different methods. } \item{ {\bf Shallow linguistic features}: We found that some of the false positive predictions are caused by the use of shallow linguistic features. For example, an entity who has acted in a movie and composes music only for television shows is wrongly tagged with the type \emph{/film/composer}, since words like ``movie'', ``composer'' and ``music'' occur frequently in the Wikipedia article of the entity (\url{http://en.wikipedia.org/wiki/J._J._Abrams}). } \end{itemize} \begin{table}[t!]
\small \begin{tabular}{ l | l | l | l } \hline Features & G@100 & G@100-M & Accuracy-M \\\hline D + W & {\bf 87.68} & {\bf 97.31} & {\bf 97} \\ \hline T + D + W & 84.91 & 91.47 & 88 \\ \hline \end{tabular} \caption{\small Manual vs. automatic evaluation of the top 100 predictions on 70 types. Predictions are obtained by training a linear classifier using Adagrad with the global training objective (m=1, n=1). G@100-M and Accuracy-M are computed by manual evaluation.} \label{table:manual} \end{table} \section{Related Work} \label{sec:related-work} \paragraph{Entity Type Prediction and Wikipedia Features} Much previous work \cite{pantel,fine-grained} on entity type prediction has focused on predicting entity types at the sentence level. \newcite{uschema} develop a method based on matrix factorization for entity type prediction in a KB using information within the KB and New York Times articles; however, the method was still evaluated only at the sentence level. \newcite{toral} and \newcite{wiki} use the first line of an entity's Wikipedia article to perform named entity recognition for three entity types. \paragraph{Knowledge Base Completion} Much of the previous work on KB completion has focused on the problem of relation extraction. The majority of the methods infer missing relation facts using information within the KB \cite{rescal,pra,rntn,transe}, while methods such as \newcite{distant_supervision} use information in text documents. \newcite{limin} use information both within and outside the KB to complete the KB. \paragraph{Linear Embedding Model} \newcite{wsabie} is one of the first works to develop a supervised linear embedding model, applied there to image retrieval. We apply this model to entity type prediction, but we train it using a different objective function that is better suited to our task. \section{Conclusion and Future Work} \label{sec:concl-future-work} We propose an evaluation framework, comprising methods for dataset construction and evaluation metrics, to evaluate KBC approaches for inferring missing entity type instances. We verified that our automatic evaluation is correlated with human evaluation, and our dataset and evaluation scripts are publicly available.\footnote{ \url{http://research.microsoft.com/en-US/downloads/df481862-65cc-4b05-886c-acc181ad07bb/default.aspx}} Experimental results show that models trained with our proposed global training objective produce higher-quality rankings both within and across types when compared to baseline methods. In future work, we plan to use information from entity-linked documents to improve performance, and to explore active learning and other human-in-the-loop methods to obtain more training data. \bibliographystyle{naaclhlt2015}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} The perspective to realize atomtronic devices \cite{MicO04PRL,SeaO07PRA,PepO09PRL} as well as the exploration of transport features that are known from electronic mesoscopic systems \cite{BraO12S} have strongly stimulated the research on the dynamical properties of ultracold atoms in open systems. While fermionic atoms provide direct analogies with the electronic case \cite{BraO12S,BruBel12PRA,KriO13PRL}, the use of a bosonic atomic species brings along new aspects and challenges for the atomic transport problem \cite{GutGefMir12PRB,Iva13EPJB}. It is in this context particularly relevant to quantitatively understand the conceptual link between a mesoscopic Bose-Einstein condensate, which may serve as a reservoir for a bosonic transport setting, and the microscopic dynamics of an ensemble of few interacting atoms that encounter each other \textit{e.g.} within a transistor-like device. A particularly promising configuration for the experimental study of these latter aspects is provided by the guided atom laser \cite{Guerin2006,Couvert2008a,Gattobigio2011} in which atoms are coherently outcoupled from a trapped Bose-Einstein condensate into an optical waveguide. A coherent atomic beam can thereby be created and injected onto engineered optical scattering geometries, which would allow one to study bosonic many-body scattering at well-defined incident energy. A theoretical modelling of such waveguide scattering processes within guided atom lasers faces the problem of dealing with an open system in a many-body context. Within the framework of the mean-field approximation described by the nonlinear Gross-Pitaevskii equation, this problem can be solved to a satisfactory degree by imposing absorbing boundary conditions \cite{Shi91PRB} at the two open ends of the numerical grid representing the waveguide, which are suitably defined in order to match the dispersion relation of the expected outgoing waves \cite{PauRicSch05PRL,PauO07PRA}. While this approach provides a reasonably efficient absorption of outgoing Gross-Pitaevskii waves even in the presence of dynamical instabilities \cite{PauO05PRA}, it ultimately breaks down if quantum fluctuations beyond the Gross-Pitaevskii approximation are taken into account in the theoretical description of the bosonic scattering process \cite{ErnPauSch10PRA}. Complex absorbing potentials (CAPs) \cite{Kosloff1986,Riss1998} that exhibit a nonvanishing imaginary part can still be introduced in that case in order to damp the outgoing flux. However, their numerical implementation requires great care in order to suppress unwanted backreflections of outgoing waves at the onset of the artificially introduced imaginary potential (see Refs.~\cite{MoiO04JPB,RapO10PRA} for successful applications of CAPs in the context of the Gross-Pitaevskii equation). The method of \emph{Complex Scaling} (CS) \cite{Balslev1971,BarrySimon1973,Reinhardt1982,Junker1982,Ho1983,Lowdin1988,Moiseyev1998} provides a more satisfactory alternative from a conceptual point of view. This method essentially consists (in 1D) in the rotation $x \mapsto z = x \exp(i\theta)$ of the spatial coordinate in the complex plane by a suitably chosen angle $\theta>0$. Decaying quasi-bound states that exhibit outgoing boundary conditions become square integrable by this transformation and can thereby be computed in an open system. The complex scaling approach can formally be generalized to the nonlinear Gross-Pitaevskii equation \cite{MoiCed05PRA,SchPau06PRA}. 
However, its practical implementation in this latter context poses substantial numerical difficulties concerning the proper evaluation of the nonlinear term in the complex rotated frame \cite{SchPau06PRA,WimSchMan06JPB,SchWim07APB}. In this paper, we focus on the method of \emph{Exterior Complex Scaling} (ECS) \cite{Simon1979} which is particularly suited for open systems in which potential scattering and (mean-field) particle-particle interaction effects are restricted to a finite spatial region. This method consists in a complex rotation of the position coordinate applied only to the asymptotic spatial domain of freely outgoing and noninteracting particles. ECS has been applied in a wide range of problems such as computing the probability distribution of excitations to the electronic continuum of HeT$^{+}$ following the $\beta$ decay of the T$_2$ molecule \cite{Froelich1993}, molecular photoionization \cite{Saenz2003}, electron-hydrogen collisions \cite{Rescigno1999,McCurdy2004a} and also strong-field infrared photo-ionization of atoms \cite{He2007, Tao2009,Tao2012,Scrinzi2012}. While this approach exactly reproduces the true decay behaviour in the open quantum system from a formal point of view (in contrast to the introduction of CAPs), numerical imprecisions are necessarily introduced through the discretization of space in the finite-difference approximation \cite{McCurdy2004a} of the Gross-Pitaevskii approximation. This problem can be overcome by using high rank finite elements \cite{Scrinzi1993,Scrinzi2010} or a B-spline basis \cite{McCurdy2004} instead of a finite-difference representation. Alternatively, an analytic transition function can be used to interpolate from the scaled ``outer'' domain to the unscaled ``inner'' domain which may contain all sorts of nontrivial scattering and interaction phenomena. This latter method is named \emph{Smooth Exterior Complex Scaling} (SECS) \cite{Rom1990}. It has been used, for example, to compute doubly excited states of the helium atom \cite{Elander2003} and to investigate the dynamical stability of stationary scattering states of a Bose-Einstein condensate in two-dimensional billiard geometries \cite{HarO12AP}. The main aim of this study is to assess the applicability of smooth exterior complex scaling to the mean-field transport of Bose-Einstein condensates in one-dimensional waveguides using the finite-difference approximation. We therefore represent, as described in Sec.~\ref{sec:DIPBH}, the waveguide by means of a discrete one-di\-mensional chain which is at some point connected to a separate site representing the reservoir trap of the atom laser. Scattering and interaction phenomena are assumed to be restricted to a finite spatial domain within this chain. As is shown in Sec.~\ref{sec:OBH}, this crucial assumption allows us to formally separate this central domain from the two attached semi-infinite ``leads'' featuring free non-interacting motion. This gives rise to perfectly transparent boundary conditions which render, however, the numerical propagation of the system rather time-consuming. In Sec.~\ref{sec:SECS}, smooth exterior complex scaling is then introduced to this open system as a feasible alternative. Finally, numerical results comparing the use of smooth exterior complex scaling, of complex absorbing potentials, as well as of the transparent boundary conditions derived in Sec.~\ref{sec:OBH} are presented in Sec.~\ref{sec:Comp}. 
\section{1D Bose-Hubbard chain with a source} \label{sec:DIPBH} We consider an infinite one-dimensional (1D) Bose-Hub\-bard (BH) system representing the transverse ground mode of a 1D waveguide in a finite-difference representation. This BH chain is connected at one of its sites to one additional site representing a reservoir of Bose-Einstein condensed atoms with the chemical potential $\mu$, as illustrated in Fig.~\ref{fig:BHproj}. The many-body Hamiltonian of this system reads \begin{eqnarray} \label{eq:BHham} \hat{\mathcal{H}} &= \displaystyle\sum_{\ell=-\infty}^{+\infty}& \Bigg[ -J(\hat{a}^\dagger_{\ell+1}\hat{a}_{\ell} + \hat{a}^\dagger_{\ell}\hat{a}_{\ell+1}) \nonumber \\ &&+ \frac{g_\ell}{2} \hat{n}_\ell(\hat{n}_\ell-1) + V_\ell\hat{n}_\ell \Bigg] \nonumber \\ &&+ \kappa^*(t)\hat{b}^\dagger\hat{a}_{\ell_S} + \kappa(t)\hat{a}^\dagger_{\ell_S}\hat{b} + \mu \hat{b}^\dagger \hat{b}, \end{eqnarray} where we denote by $\hat{a}_\ell$ and $\hat{a}_\ell^\dagger$ the annihilation and creation operators, respectively, on the site $\ell$ of the chain, with $\hat{n}_\ell = \hat{a}^\dagger_\ell\hat{a}_{\ell}$ the corresponding number operator, and by $\hat{b}$ and $\hat{b}^\dagger$ the annihilation and creation operators of the reservoir to which the chain is connected at the site $\ell_S$. The hopping strength $J$, the on-site interaction strength $g_\ell$, as well as the on-site potential $V_\ell$ can be determined from the Hamiltonian of the underlying continuous system through the discretization of the spatial coordinate along the waveguide. The coupling strength $\kappa(t)$, on the other hand, is related to the outcoupling process of atoms from the reservoir and can be controlled in a time-dependent manner (\textit{e.g.} through the variation of the intensity of a radiofrequency field in the case of Refs.~\cite{Guerin2006,RioO08PRA}). We should mention, however, that the framework developed here does not exclusively apply to guided atom lasers, but could also be used in the context of analogous transport processes taking place within optical lattices. In the Heisenberg representation, the time evolution of the annihilation operators $\hat{a}\equiv\hat{a}_\ell(t)$ and $\hat{b}\equiv\hat{b}(t)$ is given by the Heisenberg equations (we set $\hbar=1$ in the following) \begin{subequations} \label{eq:inffieldopevol} \begin{align} i \frac{\partial \hat{a}_\ell(t)}{\partial t} =& V_\ell\hat{a}_\ell(t) - J\left[\hat{a}_{\ell-1}(t) + \hat{a}_{\ell+1}(t)\right] \nonumber\\ & + g_\ell \hat{a}_\ell^\dagger(t)\hat{a}_\ell(t)\hat{a}_\ell(t) + \kappa(t)\delta_{\ell,\ell_S}\hat{b}(t)\\ i \frac{\partial\hat{b}(t)}{\partial t} =& \mu\hat{b}(t) + \kappa^*(t)\hat{a}_{\ell_S}(t). \end{align} \end{subequations} In accordance with the working principle of an atom laser, we consider an initial state at time $t_0$ in which the source is populated with a very large number $\mathcal{N}\to\infty$ of atoms and the chain is empty. Moreover, we consider a very weak coupling strength, $\kappa(t)\to 0$, such that $\mathcal{N}|\kappa(t)|^2$ remains finite. This combined limit gives rise to a finite population within the chain, which is (at time-independent $\kappa$) sustained by a steady flux of atoms from the reservoir. In the following, we consider the classical mean-field regime of large on-site densities and weak interaction strengths within the Bose-Hubbard chain, in which (at finite evolution time $t$) the system can be described using c-numbers instead of operators.
The dynamics of the system in this regime is described by the nonlinear Gross-Pitaevskii (GP) equation \begin{subequations} \label{eq:BHsystem} \begin{align} \label{eq:chain} i\frac{\partial \psi_\ell(t)}{\partial t} &= (V_\ell-\mu)\psi_\ell(t) -J\left(\psi_{\ell+1}(t) + \psi_{\ell-1}(t)\right) \nonumber\\ & + g_\ell|\psi_\ell(t)|^2 \psi_\ell(t)+ \delta_{\ell,\ell_S}\kappa(t)\chi(t) \\ \label{eq:Source} i\frac{\partial \chi(t)}{\partial t} &= \kappa^*(t)\psi_{\ell_S}(t) \end{align} \end{subequations} where we define the amplitudes $\psi_\ell(t)=\langle\hat{a}_\ell(t)\rangle e^{i\mu t}$ and $\chi(t)=\langle\hat{b}(t)\rangle e^{i\mu t}$ with the initial conditions $\psi_\ell(t_0)=0$ and $\chi(t_0)=\sqrt{\mathcal{N}}$. From Eqs.~\eqref{eq:chain} and \eqref{eq:Source} we can infer that $\chi(t) = \sqrt{\mathcal{N}}\,(1+\mathcal{O}(|\kappa|^2))$ at finite $t-t_0$, and as a consequence we can neglect the time dependence of $\chi$ in the limit $\kappa\to 0$. We then obtain a discrete nonlinear Schr\"{o}dinger equation with a source term: \begin{eqnarray} \label{eq:chain2} i\frac{\partial \psi_\ell(t)}{\partial t} &=& (V_\ell-\mu)\psi_\ell(t) \nonumber\\ &&-J\left(\psi_{\ell+1}(t) + \psi_{\ell-1}(t)\right) \nonumber\\ && + g_\ell|\psi_\ell(t)|^2 \psi_\ell(t)+\delta_{\ell,\ell_S} \kappa(t)\sqrt{\mathcal{N}}\,. \end{eqnarray} \section{Transparent boundary conditions} \label{sec:OBH} The standard procedure for a numerical study of the time-dependent dynamics within an infinite chain consists in defining a sufficiently large ``simulation box'' containing a finite number of sites. The choice of the boundary conditions at the edges of the box is, in general, irrelevant for wave packet evolution processes that evolve within a finite time; it does, however, matter for the type of scattering processes that we consider here: choosing hard-wall or periodic boundary conditions would rather quickly lead to unwanted reflections of the matter wave at the artificially introduced boundaries of the box. \begin{figure}[t] \centering \begin{psfrags} \psfrag{S}{{\small Source}} \psfrag{0}{{\small $0$}} \psfrag{1}{{\small $1$}} \psfrag{N}{{\small $L$}} \psfrag{N+1}{{\small $L+1$}} \psfrag{nS}{{\small $\ell_S$}} \psfrag{L}{{\small $\mathcal{L}$}} \psfrag{Q}{{\small $\mathcal{Q}$}} \psfrag{R}{{\small $\mathcal{R}$}} \includegraphics[width=0.8\linewidth]{img/grid_proj.eps} \end{psfrags} \caption{(color online) One-dimensional infinite BH system with an additional site for the source. The zone $\mathcal{Q}$ is defined between the dot-dashed lines. \label{fig:BHproj}} \end{figure} To avoid such artificial backreflections, we can introduce \emph{transparent boundary conditions} (TBC), making use of the fact, as explained in the introduction, that the scattering potential and the interaction strength are non-zero only in a finite region of space. To this end, we formally divide the system into three parts, namely the semi-infinite left and right parts $\mathcal{L}$ and $\mathcal{R}$ where neither interaction nor scattering takes place, and the finite central part $\mathcal{Q}$ consisting of $L$ sites numbered from $1$ to $L$, which contains potential scattering, atom-atom interaction, as well as the link to the source (see Fig.~\ref{fig:BHproj}).
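For orientation, Eq.~\eqref{eq:chain2} is straightforward to propagate once the chain is truncated; a minimal sketch of its right-hand side (Python; illustrative parameter names, naive hard-wall truncation, i.e. precisely the situation that the boundary conditions constructed below are meant to improve upon) reads:

\begin{verbatim}
# Right-hand side of the discrete GP equation with a source term,
# Eq. (eq:chain2), on a naively truncated chain (hard walls).
import numpy as np

def gp_rhs(t, psi, V, g, J, mu, kappa, sqrtN, ell_S):
    lap = np.zeros_like(psi)
    lap[1:]  += psi[:-1]               # psi_{ell-1}
    lap[:-1] += psi[1:]                # psi_{ell+1}
    rhs = (V - mu)*psi - J*lap + g*np.abs(psi)**2*psi
    rhs[ell_S] += kappa(t)*sqrtN       # source at site ell_S
    return -1j*rhs                     # d(psi)/dt = -i*(H psi + source)
\end{verbatim}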
For the sake of compactness of the formalism, we regroup all the amplitudes $\psi_\ell$ into a state $\ket{\psi}$ defined through \begin{equation} \ket{\psi} = \sum_{\ell=-\infty}^{+\infty} \psi_\ell \ket{\ell}, \end{equation} where the on-site states $\ket{\ell}$ form an orthonormal basis $\braket{\ell}{\ell'}=\delta_{\ell,\ell'}$. Formally Eq.~\eqref{eq:chain2} can then be expressed as \begin{equation} i \frac{\partial \ket{\psi}}{\partial t} = \mathcal{H}\ket{\psi} = [\mathcal{H}_f+\mathcal{V}+\mathcal{U}(\psi)]\ket{\psi} + \ket{S} \end{equation} with $\ket{S} = \kappa(t)\sqrt{\mathcal{N}} \ket{\ell_S}$, where we decompose the Gross-Pitaevskii Hamiltonian $\mathcal{H}$ into the free motion on the chain described by $\mathcal{H}_f$, the scattering potential included in $\mathcal{V}$, and the nonlinear interaction term $\mathcal{U}(\psi)$. The corresponding matrix elements in the local basis are \begin{subequations} \begin{align} \label{eq:kinelements} \elemm{\ell}{\mathcal{H}_f}{\ell'} &= -\mu \delta_{\ell,\ell'} - J (\delta_{\ell,\ell'-1} +\delta_{\ell,\ell'+1}),\\ \elemm{\ell}{\mathcal{V}}{\ell'} &= \delta_{\ell,\ell'} V_{\ell},\\ \elemm{\ell}{\mathcal{U}(\psi)}{\ell'} &= \delta_{\ell,\ell'} g_\ell |\psi_\ell|^2. \end{align} \end{subequations} We can now define the division of the system described above using the Feshbach projection formalism with the three projectors \begin{subequations} \begin{align} P_\mathcal{L} &= \sum_{\ell=-\infty}^0 \ket{\ell}\bra{\ell}, \\ P_\mathcal{Q} &= \sum_{\ell=1}^L \ket{\ell}\bra{\ell}, \\ P_\mathcal{R} &= \sum_{\ell=L+1}^\infty \ket{\ell}\bra{\ell}. \end{align} \end{subequations} This gives rise to the three coupled evolution equations \begin{subequations} \begin{align} i\frac{\partial \ket{\psi^{(\mathcal{L})}}}{\partial t} &= \mathcal{H}_\mathcal{L}\ket{\psi^{(\mathcal{L})}}+ \mathcal{W}_{\mathcal{LQ}}\ket{\psi^{(\mathcal{Q})}} \label{eq:left} ,\\ i\frac{\partial \ket{\psi^{(\mathcal{Q})}}}{\partial t} &=(\mathcal{H}_\mathcal{Q} + \mathcal{V}_{\mathcal{Q}} + \mathcal{U_{\mathcal{Q}}}(\psi))\ket{\psi^{(\mathcal{Q})}} + \ket{S} \nonumber\\ & +\mathcal{W}_{\mathcal{QL}}\ket{\psi^{(\mathcal{L})}} +\mathcal{W}_{\mathcal{QR}}\ket{\psi^{(\mathcal{R})}}, \label{eq:center} \\ i\frac{\partial \ket{\psi^{(\mathcal{R})}}}{\partial t} &= \mathcal{H}_\mathcal{R}\ket{\psi^{(\mathcal{R})}}+ \mathcal{W}_{\mathcal{RQ}}\ket{\psi^{(\mathcal{Q})}} \label{eq:right} \end{align} \end{subequations} where we define $\ket{\psi^{(\mathcal{X})}} = P_\mathcal{X}\ket{\psi}$, $\mathcal{H}_\mathcal{X} = P_\mathcal{X}\mathcal{H}_fP_\mathcal{X}$, with $\mathcal{X}$ being equal to $\mathcal{Q}$, $\mathcal{L}$ or $\mathcal{R}$, as well as \begin{eqnarray} \mathcal{W}_{\mathcal{LQ}} &=& P_\mathcal{L}\mathcal{H}_fP_\mathcal{Q} = -J \ket{0}\bra{1} = \mathcal{W}_{\mathcal{QL}}^\dagger,\\ \mathcal{W}_{\mathcal{RQ}} &=& P_\mathcal{R}\mathcal{H}_fP_\mathcal{Q} = -J \ket{L+1}\bra{L} = \mathcal{W}_{\mathcal{QR}}^\dagger. \end{eqnarray} The evolution equations \eqref{eq:left}, \eqref{eq:right} for the left and the right part are linear and describe a free propagation in a semi-infinite lead. As a consequence, we can formally integrate them and plug the result into the evolution equation \eqref{eq:center} for the central part.
This yields \begin{eqnarray} \label{eq:evol} i\frac{\partial \ket{\psi^{(\mathcal{Q})}}}{\partial t} &=&[\mathcal{H}_\mathcal{Q}+ \mathcal{V}_\mathcal{Q} + \mathcal{U}_\mathcal{Q}(\psi)]\ket{\psi^{(\mathcal{Q})}} \nonumber\\ & & -i\int_{t_0}^t dt'\mathcal{W}_{\mathcal{QL}}e^{-i(t-t')\mathcal{H}_\mathcal{L}}\mathcal{W}_{\mathcal{LQ}}\ket{\psi^{(\mathcal{Q})}(t')}\nonumber\\ & & -i\int_{t_0}^t dt'\mathcal{W}_{\mathcal{QR}}e^{-i(t-t')\mathcal{H}_\mathcal{R}}\mathcal{W}_{\mathcal{RQ}}\ket{\psi^{(\mathcal{Q})}(t')}\nonumber\\ & & + \mathcal{W}_{\mathcal{QR}}e^{-i(t-t_0)\mathcal{H}_\mathcal{R}}\ket{\psi^{(\mathcal{R})}(t_0)}\nonumber\\ & & + \mathcal{W}_{\mathcal{QL}}e^{-i(t-t_0)\mathcal{H}_\mathcal{L}}\ket{\psi^{(\mathcal{L})}(t_0)}, \end{eqnarray} where the second and third lines describe the decay into the leads and the fourth and fifth lines describe the propagation of the initial conditions within the leads into the scattering region $\mathcal{Q}$. The integrals in Eq.~\eqref{eq:evol} are calculated using the normalized continuum eigenstates $\ket{k^{\mathcal{(L/R)}}}$ of the leads, which in the local basis $\ket{\ell}$ can be written as \begin{equation} \braket{\ell}{k^\mathcal{(L)}} = \sqrt{\frac{2}{\pi}}\sin[(\ell-1)k] \quad \mbox{with}\; \ell< 1 \end{equation} for the left lead and \begin{equation} \braket{\ell}{k^\mathcal{(R)}} = \sqrt{\frac{2}{\pi}}\sin[(\ell-L)k] \quad \mbox{with}\; \ell> L \end{equation} for the right lead, with $0\le k \le \pi$, $\braket{k^{\mathcal{(L/R)}}}{{\tilde{k}^{\mathcal{(L/R)}}}} = \delta(k-\tilde{k})$ and the associated eigenvalues \begin{equation} E_k = -2J\cos(k)-\mu. \end{equation} For the term $\mathcal{W}_{\mathcal{QL}}e^{-i\tau\mathcal{H}_\mathcal{L}} \mathcal{W}_{\mathcal{LQ}}$ for instance, we obtain the expression \begin{equation*} \mathcal{W}_{\mathcal{QL}}e^{-i\tau\mathcal{H}_\mathcal{L}}\mathcal{W}_{\mathcal{LQ}} = J^2\int_0^\pi dk \,|\braket{0}{k^{(\mathcal{L})}}|^2 e^{-i\tau E_k}\ket{1}\bra{1} \end{equation*} which is related to Bessel integrals. This finally yields a finite set of $L$ integro-differential equations \begin{eqnarray} \label{eq:integro_evol} i \frac{\partial \psi_\ell}{\partial t} &=& (V_\ell-\mu)\psi_\ell - J(\psi_{\ell-1}\theta_{\ell-1,1}+ \psi_{\ell+1}\theta_{L,\ell+1}) \nonumber\\ & & + g_\ell |\psi_\ell|^2\psi_\ell + \kappa(t)\sqrt{\mathcal{N}}\delta_{\ell,\ell_S} \nonumber\\ & & - 2i(\delta_{\ell,1} + \delta_{\ell,L})J^2 \int_{t_0}^t dt'\, \mathcal{M}_{1}(t-t')\psi_\ell(t') \nonumber\\ & & +2J \delta_{\ell,1}\sum_{\ell'=-\infty}^0 \mathcal{M}_{\ell'-1}(t-t_0)\psi_{\ell'}(t_0) \nonumber\\ & & -2J \delta_{\ell,L}\sum_{\ell'=L+1}^\infty \mathcal{M}_{\ell'-L}(t-t_0)\psi_{\ell'}(t_0) \end{eqnarray} with \begin{equation} \theta_{\ell,\ell'}=\left\{\begin{matrix} 1 & & \mathrm{if}\; \ell\ge\ell' \\ 0 & & \mathrm{otherwise} \end{matrix}\right. \end{equation} and \begin{equation} \mathcal{M}_\ell(\tau) = \frac{i^\ell}{2} \left[ J_{\ell-1} \left(2J\tau\right)+J_{\ell+1}\left(2J\tau\right)\right]e^{i\mu\tau} \end{equation} where $J_\ell(x)$ is the Bessel function of the first kind. As no approximation has been made in this section, Eq.~\eqref{eq:integro_evol} reproduces the true evolution of the infinite system under consideration, described by Eq.~\eqref{eq:chain2}. The integral term in the third line of Eq.~\eqref{eq:integro_evol} describing the decay into the left and right leads therefore yields a perfectly transparent boundary condition that is defined on the first and last site of the central region.
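For numerical implementations, the memory kernel $\mathcal{M}_\ell(\tau)$ and the decay term of Eq.~\eqref{eq:integro_evol} translate directly into code. A minimal sketch (Python, using the Bessel function \texttt{scipy.special.jv}; the simple trapezoidal quadrature is illustrative only):

\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import jv   # Bessel function of the first kind

def M(ell, tau, J=1.0, mu=0.0):
    # Memory kernel M_ell(tau) of Eq. (eq:integro_evol).
    return (1j**ell/2.0)*(jv(ell - 1, 2*J*tau)
                          + jv(ell + 1, 2*J*tau))*np.exp(1j*mu*tau)

def decay_term(psi_hist, t_hist, J=1.0, mu=0.0):
    # -2i J^2 * int dt' M_1(t - t') psi(t') on an edge site,
    # evaluated over the stored history of psi_1 or psi_L.
    t_hist = np.asarray(t_hist)
    tau = t_hist[-1] - t_hist
    return -2j*J**2*trapezoid(M(1, tau, J, mu)*np.asarray(psi_hist),
                              t_hist)
\end{verbatim}

The need to store the full edge-site history and redo this quadrature at every time step is exactly what makes the TBC propagation expensive, as quantified in Sec.~\ref{sec:Comp}.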
\section{Smooth Exterior Complex Scaling} \label{sec:SECS} Within a continuous 1D system, the method of complex scaling consists in the transformation $x \mapsto z = x e^{i\theta}$ ($x\in\mathbb{R}$) of the position coordinate with $\theta > 0$. With this transformation, a stationary wave $\psi(x) \sim e^{ikx}$ becomes $\psi(z) \sim e^{ikz}=e^{ikx\cos \theta}e^{-kx\sin \theta}$ where $k$ is the wavenumber. Waves traveling from the left to the right ($k>0$) are therefore subject to damping for positive $x$, while waves traveling from the right to the left ($k<0$) are damped for negative $x$ (and would be enhanced for positive $x$ \cite{Scrinzi2010}). Thus complex scaling allows one to describe in a numerically efficient manner the outgoing waves that arise in our 1D scattering problem in which the source is part of the scattering system (see Fig.~\ref{fig:BHproj}). In our case, we want to apply the complex scaling transformation to the leads $\mathcal{L}$ and $\mathcal{R}$, while the finite scattering region $\mathcal{Q}$ is supposed to remain unscaled. In order to properly introduce the method of smooth exterior complex scaling (SECS) for this case, we first consider a continuous system with a wavefunction $\psi(x,t)$ that is subject to the Schr\"odinger equation \begin{equation} i\frac{\partial}{\partial t} \psi(x,t) = -J\frac{\partial^2}{\partial x^2} \psi(x,t) \,. \end{equation} We now define a complex analytical function $q(x)$ on the 1D space and introduce an (in general non-unitary) transformation $\mathcal{U}$ through \begin{equation} \mathcal{U} \psi(x,t) = \psi(z(x), t) \end{equation} where $z(x)$ is defined as \begin{equation} z(x) = \int_{0}^x q(x')dx' \end{equation} (assuming, without loss of generality, that the spatial origin $x=0$ is part of $\mathcal{Q}$). The evolution of the transformed wavefunction is then given by \begin{equation} i\frac{\partial \mathcal{U} \psi}{\partial t}(x,t) = -\frac{J}{q^2(x)} \left( \frac{\partial^2}{\partial x^2} - \frac{q'(x)}{q(x)} \frac{\partial}{\partial x} \right) \mathcal{U} \psi(x,t) \label{eq:kincompscal} \end{equation} where $q'(x)$ is the first derivative of $q$ with respect to $x$. The goal is to choose $q(x)$ such that the Hamiltonian remains unscaled in the $\mathcal{Q}$ region and scaled in the other two regions. For this purpose, we choose $q(x)~\to~1$ within $\mathcal{Q}$ and smoothly ramp $q(x)$ to $e^{i \theta}$ within the scaled regions. The function $q(x)$ we used in this study reads \cite{Kalita2011} \begin{equation} q(x) = 1+(e^{i\theta}-1)\left(1+\frac{f_{+}(x)-f_{-}(x)}{2}\right) \label{eq:qx} \end{equation} with \begin{equation} f_{\pm}(x) = \tanh(\lambda(x-x_\pm)\mp 2\pi) \end{equation} where $\lambda$ is the smoothing parameter and the interval $[x_{-},x_{+}]$ corresponds to the $\mathcal{Q}$ region, so that $q(x)\to 1$ inside $[x_-,x_+]$ and $q(x)\to e^{i\theta}$ deep inside the leads. In order to apply SECS to our BH chain, which can be seen as a discretization of space, we need to define the matrix elements of the spatial derivatives appearing in Eq.~(\ref{eq:kincompscal}) within the discrete basis of on-site states $\ket{\ell}$.
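A quick numerical sanity check of the profile \eqref{eq:qx} may be useful here (a sketch with the sign convention written above; $\theta=1.5$ and $\lambda=0.1$ are the values used in Sec.~\ref{sec:Comp}, while the interval $[x_-,x_+]$ is illustrative):

\begin{verbatim}
# Verify q -> 1 inside [x_-, x_+] and q -> exp(i*theta) in the leads.
import numpy as np

def q(x, theta=1.5, lam=0.1, xm=0.0, xp=100.0):
    fp = np.tanh(lam*(x - xp) - 2*np.pi)
    fm = np.tanh(lam*(x - xm) + 2*np.pi)
    return 1.0 + (np.exp(1j*theta) - 1.0)*(1.0 + (fp - fm)/2.0)

assert abs(q(50.0)   - 1.0)          < 1e-6   # unscaled region Q
assert abs(q(300.0)  - np.exp(1.5j)) < 1e-6   # right lead
assert abs(q(-300.0) - np.exp(1.5j)) < 1e-6   # left lead
\end{verbatim}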
Within the framework of the finite-difference approximation, we find \begin{subequations} \begin{align} \label{eq:scaledkinelements} \elemm{\ell}{\frac{\partial}{\partial x }}{\ell'} &= \frac{1}{2} (\delta_{\ell,\ell'+1} -\delta_{\ell,\ell'-1}),\\ \elemm{\ell}{\frac{\partial^2}{\partial x^2 }}{\ell'} &= \delta_{\ell,\ell'+1} + \delta_{\ell,\ell'-1} - 2 \delta_{\ell,\ell'}, \end{align} \end{subequations} which yields, using Eq.~\eqref{eq:kinelements}, the relation \begin{equation} -J \elemm{\ell}{\frac{\partial^2}{\partial x^2 }}{\ell'} = \elemm{\ell}{\hat{\mathcal{H}}_f}{\ell'} + (\mu-2J)\delta_{\ell,\ell'} \end{equation} between the free 1D kinetic energy and the hopping term of the BH model. Defining $\bra{\ell'}q\ket{\ell}=q_\ell\delta_{\ell,\ell'}$, we can discretize Eq~\eqref{eq:kincompscal}. Provided that the transition between the scaled and unscaled regions is sufficiently smooth (\textit{i.e.} $\lambda\ll 1$), we can set $q_{\ell-1}\simeq q_\ell\simeq q_{\ell+1}$ and the evolution equation \eqref{eq:integro_evol} now reads \begin{eqnarray} \label{eq:SECS} i\frac{\partial \psi_\ell}{\partial t} &=& \left(V_\ell- \mu q_\ell\right)\psi_\ell + g_\ell|\psi_\ell|^2 \psi_\ell + \kappa(t)\sqrt{\mathcal{N}}\delta_{\ell,\ell_S} \nonumber \\ & & +2J( q_\ell + q_\ell^{-1}) \psi_\ell \nonumber \\ & & -J\left[\frac{1}{q_{\ell+1}} +\frac{1}{2}\frac{q'_{\ell+1}}{ q^2_{\ell+1}}\right] \psi_{\ell+1} \nonumber \\ & & -J \left[\frac{1}{q_{\ell-1}} -\frac{1}{2}\frac{q'_{\ell-1}}{ q^2_{\ell-1}}\right]\psi_{\ell-1} \nonumber \\ & & +2J \delta_{\ell,1}\sum_{\ell'=-\infty}^0 \mathcal{M}_{\ell'-1}(t-t_0)\psi_{\ell'}(t_0) \nonumber\\ & & -2J \delta_{\ell,L}\sum_{\ell'=L+1}^\infty \mathcal{M}_{\ell'-L}(t-t_0)\psi_{\ell'}(t_0). \end{eqnarray} The index $\ell$ can now take values in $\mathbb{Z}$ and the unscaled region goes from $\ell_{-}=1$ to $\ell_{+}=L$. In the practical numerical implementation, the BH chain has to be sufficiently long in order to absorb the outgoing flux. The last two lines of Eq.~\eqref{eq:SECS} still contain the propagation of the initial population of the leads into the scattering region, which is unaffected by the SECS transformation. \section{Results} \label{sec:Comp} We now compare the SECS and TBC methods with each other and with the well established method of complex absorbing potentials (CAPs). The imaginary part of such a complex potential renders the Hamiltonian non-Hermitian and thus the evolution non-unitary. For the sake of comparability with the SECS method, we choose the absorbing potential \begin{equation} V^\textrm{CAP}_\ell = -i\,\, \textrm{Im}( q_\ell ), \end{equation} with $q_\ell$ being defined, as in Sec.~\ref{sec:SECS}, by the discretization of Eq.~\eqref{eq:qx}. The TBC, SECS, and CAP methods are applied to the case of free kinetic propagation along a homogeneous and noninteracting BH chain, as well as to the case of propagation across a symmetric double-barrier configuration in the presence of interaction, which can be seen as an atomic quantum dot. At the initial time $t_0$, the BH chain is considered to be either completely empty (\textit{i.e.} $\psi_\ell(t_0) = 0$ for all $\ell\in\mathbb{Z}$) or populated with some randomly selected complex amplitudes on each site. The coupling to the source is ramped according to \begin{equation} \kappa(t) = \frac{1}{1+ e^{-(J(t-t_0) - 50) / 5 }}. \end{equation} To solve the differential equations, we use a Runge-Kutta Fehlberg (RKF) method. 
This method is of order $\mathcal{O}(\delta t^4)$ with an error estimator of order $\mathcal{O}(\delta t^5)$, which allows one to adapt the numerical time step $\delta t$ in order to keep the numerical solution as close as possible to the mathematically true solution of the equations. \subsection{Free case with empty leads} \label{subsec:free} We first consider the case of a free kinetic propagation, \textit{i.e.} $V_\ell=g_\ell=0$ for all $\ell\in\mathbb{Z}$, and compare the density profiles obtained by the TBC, SECS, and CAP methods against the expected value for the stationary density $n^\varnothing$. The latter can be analytically calculated with Eq.~\eqref{eq:integro_evol} by restricting the central region $\mathcal{Q}$ to a single site. We obtain \begin{equation} \label{eq:freen} n^{\varnothing} = \lim_{t\to\infty} |\psi(t)|^2 = \lim_{t\to\infty}\frac{\mathcal{N}|\kappa(t)|^2}{4J^2-\mu^2}. \end{equation} \begin{figure}[t] \begin{center} \input{img/free2.tex} \caption{(color online) Density profiles for the case of free propagation (\textit{i.e.} $g_\ell=V_\ell=0$) in a homogeneous BH chain for three different propagation times $t$. The source of atoms is connected at site $\ell_S=1$. The TBC method (red solid line), the SECS method (green dashed line), as well as the CAP method (blue dotted line) yield practically identical densities within the region $\mathcal{Q}$ defined from site 1 to site 100, which agree there with the analytical expression \eqref{eq:freen} for the stationary density. Differences between the numerical methods naturally appear within the leads (in which the results obtained by the TBC method are not displayed). While SECS is found to absorb the outgoing flux most efficiently in terms of computation time, TBC is the slowest method because of the integral in the decay term in Eq.~\eqref{eq:integro_evol}.} \label{fig:free} \end{center} \end{figure} In Fig.~\ref{fig:free}, we represent the propagation of free particles in a BH chain consisting of 100 sites within the $\mathcal{Q}$ region. The source is located at the first site in this region. The $\mathcal{L}$ and $\mathcal{R}$ regions are treated according to TBC, CAP or SECS. We chose the smoothing parameter $\lambda=0.1$ and the maximal scaling angle $\theta=1.5$. We can see that the three methods agree with each other and reproduce the analytical value \eqref{eq:freen} of the on-site density within the $\mathcal{Q}$ region. Moreover, the three methods seem to be stable for long propagation times, which allows us to study the steady state of this scattering process with confidence. For a total propagation time of $Jt=250$, the SECS method is found to be slightly faster than the CAP method. We believe that this is due to CAP absorbing the outgoing flux less efficiently than SECS; hence, more sites in the leads contribute to the error computed by the adaptive RKF method and consequently more time steps have to be taken in order to reach the final time. On the other hand, the TBC method, while being exact from the formal point of view, is substantially slower (about 1000 times for this particular comparison) than the other two methods. This can be explained by the fact that the numerical evaluation of the integral in the decay term is very costly and has to be redone at every time step, since this integral is a convolution of the memory kernel with the local history of the wavefunction.
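The free-propagation benchmark is easy to reproduce qualitatively. A self-contained sketch (Python/SciPy; \texttt{RK45} as a stand-in for the RKF integrator, a crude linear-ramp CAP instead of the profile used above, and illustrative parameters rather than those of Fig.~\ref{fig:free}):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

J, mu, sqrtN, L, ell_S = 1.0, 0.5, 1.0, 600, 300
V = np.zeros(L, complex)
V[:100]  += -1j*np.linspace(1.0, 0.0, 100)   # left absorber
V[-100:] += -1j*np.linspace(0.0, 1.0, 100)   # right absorber

def kappa(t):                                # smooth ramp of kappa(t)
    return 1.0/(1.0 + np.exp(-(J*t - 50.0)/5.0))

def rhs(t, psi):
    lap = np.zeros_like(psi)
    lap[1:] += psi[:-1]; lap[:-1] += psi[1:]
    out = (V - mu)*psi - J*lap
    out[ell_S] += kappa(t)*sqrtN
    return -1j*out

sol = solve_ivp(rhs, (0, 250), np.zeros(L, complex),
                method='RK45', rtol=1e-8)
n_num = abs(sol.y[ell_S + 50, -1])**2        # density near the source
n_an  = sqrtN**2/(4*J**2 - mu**2)            # Eq. (eq:freen)
print(n_num, n_an)                           # comparable values
\end{verbatim}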
\subsection{Free case with populated leads} \label{subsec:TW} Let us now study the influence of nonvanishing initial populations in the leads. We generate the initial condition on the site $\ell$ according to \begin{equation} \label{eq:InitCond} \psi_\ell(t=t_0) = \frac{1}{2}\left(\mathcal{A}_\ell + i \mathcal{B}_\ell\right), \end{equation} where $\mathcal{A}_\ell$ and $\mathcal{B}_\ell$ are real, independent Gaussian random variables with unit variance and zero mean, such that \begin{subequations} \begin{align} \langle \mathcal{A}_\ell \rangle &= \langle \mathcal{B}_\ell \rangle = 0,\\ \langle \mathcal{A}_{\ell'}\mathcal{A}_\ell \rangle &= \langle \mathcal{B}_{\ell'}\mathcal{B}_\ell \rangle = \delta_{\ell',\ell}, \\ \langle \mathcal{A}_{\ell'}\mathcal{B}_\ell \rangle &= 0 \end{align} \end{subequations} for all $\ell,\ell'\in \mathbb{Z}$. As we shall point out in a forthcoming paper \cite{DujArgSch}, such initial conditions arise when applying the truncated Wigner method to the transport of Bose-Einstein condensates in the context of atom lasers. \begin{figure}[t] \begin{center} \input{img/noise2.tex} \caption{(color online) Density profiles for free propagation (\textit{i.e.} $g_\ell=V_\ell=0$) with random initial populations in the leads of the BH chain. There is no source of atoms in this calculation. As is clearly visible in the inset, all three methods compare very well in the central region $\mathcal{Q}$ which is again defined from site $1$ to site $100$. SECS absorbs the outgoing flux more effectively in the leads than CAP and appears to be the most efficient method in terms of computation time.} \label{fig:noise} \end{center} \end{figure} We consider again a homogeneous non-interacting BH chain and used the same parameters as in Sec.~\ref{subsec:free} for SECS and CAP. Owing to the linearity of the time evolution in the case $g_\ell=0$ for all $\ell\in\mathbb{Z}$, we can, without loss of generality, set the coupling to the source to zero, $\kappa(t)=0$, for all times $t$ and study the evolution of the random initial populations within an isolated waveguide (since the effect of the source was already investigated in Sec.~\ref{subsec:free}). The results of this simulation are shown in Fig.~\ref{fig:noise}. We arrive at the same conclusions as in Sec.~\ref{subsec:free}: All three methods agree with each other and yield nearly identical on-site densities. In particular, there is no artificial accumulation of the total population within the central $\mathcal{Q}$ region due to an inefficient absorption of the outgoing flux at the boundaries. Again, the computational effort for SECS is appreciably lower than for CAP and substantially lower than for the integro-differential TBC method. \subsection{Quantum dot} \label{subsec:QD} In this section we study the effects of a nonvanishing potential and a finite on-site interaction on the scattering process. As displayed in Fig.~\ref{fig:qdot_geom}, we specifically consider a double-barrier configuration defined by \begin{equation} V_\ell = V(\delta_{\ell,\ell_0} + \delta_{\ell,\ell_0+6}), \label{eq:qdotV} \end{equation} which can be seen as an atomic quantum dot. Interaction is present only within the dot, \textit{i.e.} we define \begin{equation} g_\ell = g \sum_{j=1}^{5}\delta_{\ell,\ell_0+j} \label{eq:qdotg} \end{equation} and choose $g = 0.1J$. 
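In code, the dot of Eqs.~\eqref{eq:qdotV} and \eqref{eq:qdotg} amounts to two short arrays (a sketch; the barrier height and the dot position $\ell_0$ are illustrative assumptions, only $g=0.1J$ being fixed above):

\begin{verbatim}
# Double-barrier ("quantum dot") on-site potential and interaction.
import numpy as np

L, ell0, J = 20, 7, 1.0
V_b, g = 2.0*J, 0.1*J                # V_b (barrier height): assumed

V = np.zeros(L + 1)
V[ell0] = V[ell0 + 6] = V_b          # the two barriers, Eq. (eq:qdotV)
gl = np.zeros(L + 1)
gl[ell0 + 1: ell0 + 6] = g           # interacting sites, Eq. (eq:qdotg)
\end{verbatim}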
\begin{figure}[t] \begin{center} \begin{psfrags} \psfrag{i}{{\small $\ell_0$}} \psfrag{i+6}{\hspace{-0.2cm}{\small $\ell_0+6$}} \psfrag{lS}{{\small $\ell_S$}} \psfrag{Vl}{{\small $V_\ell$}} \psfrag{V}{{\small $V$}} \includegraphics[width=0.9\linewidth]{img/qdot.eps} \end{psfrags} \end{center} \caption{(color online) One dimensional chain for the quantum dot model (see Eq.~\ref{eq:qdotV}). Plotted is the on-site potential $V_\ell$ as a function of the site index $\ell$. The red circles represent sites where the atoms can interact. } \label{fig:qdot_geom} \end{figure} \begin{figure}[t] \begin{center} \input{img/qdot2.tex} \caption{(color online) Density profiles for a resonant propagation across a quantum dot in the BH chain, defined in Eqs.~\eqref{eq:qdotV} and \eqref{eq:qdotg}, with $\mu~=~-0.242J$ and $g~=~0.1J$ before (upper panel) and after (middle and lower panels) reaching the stationary state. The source of atoms is located at site 1. As shown in the inset, all three methods compare very well in the central region $\mathcal{Q}$ defined from site $1$ to site $20$.} \label{fig:qdot} \end{center} \end{figure} \begin{figure}[t] \begin{center} \input{img/qdotlow.tex} \caption{(color online) Same as Fig.~\ref{fig:qdot} for $Jt = 250$ and the chemical potential $\mu~=~-0.8J$ which at $g~=~0.1J$ gives rise to non-resonant transport with finite reflection. Again, very good agreement between all three methods is found.} \label{fig:qdotlow} \end{center} \end{figure} Figure \ref{fig:qdot} shows the density profiles obtained by the TBC, SECS, and CAP methods for the chemical potential $\mu=-0.242J$ which corresponds to a resonance of the double-barrier configuration at the interaction strength $g=0.1J$. Again, the three methods yield nearly identical results, which confirms their validity. This does not change if we add, as in Sec.~\ref{subsec:TW}, nonvanishing initial populations in the leads. Finally, Fig.~\ref{fig:qdotlow} shows the case of imperfect transmission at the chemical potential $\mu=-0.8J$. As in the noninteracting cases studied in the previous subsections, we find that the SECS method is more efficient (\textit{i.e.} less time consuming) than the CAP and TBC methods. \section{Conclusions} \label{sec:Conclusion} In summary, we introduced in this work the method of smooth exterior complex scaling (SECS) to the mean-field description of the one-dimensional transport of Bose-Einstein condensates within a guided atom laser configuration. While this method is formally exact in a continuous system, imprecisions necessarily arise if the space is discretized in the framework of a finite-difference representation of the Gross-Pitaevskii equation. We showed how to avoid this problem by choosing a sufficiently large smoothing parameter in the implementation of SECS. A comparison with the (numerically inefficient) introduction of perfectly transparent boundary conditions, which are obtained from an analytical elimination of the semi-infinite leads of the waveguide (assuming that spatial inhomogeneities and nonvanishing interactions are restricted to a finite scattering region within the waveguide), yields very good quantitative agreement. This was specifically tested for the case of resonant and non-resonant transport through an atomic quantum dot configuration consisting of a sequence of two symmetric barriers within which the interaction was assumed to be finite. 
We furthermore showed that the SECS method is appreciably more efficient than the method of complex absorbing potentials that are defined with a comparable smoothing parameter. In contrast to the method of absorbing boundary conditions proposed in Ref.~\cite{Shi91PRB}, which are adapted to outgoing waves with relatively well-defined wave numbers, the SECS method can also account for the presence of density fluctuations that arise from finite random initial populations within the waveguide of the atom laser. This implies that SECS can be applied in the framework of the Truncated Wigner method \cite{Steel1998} which approximately accounts for the effect of quantum fluctuations beyond the mean-field description of the propagating condensate. This specific application, as well as the use of SECS within the many-body matrix product state (MPS) algorithm \cite{Verstraete2004,Vidal2003,Vid04PRL}, shall be discussed in a forthcoming publication \cite{DujArgSch}. \section*{Acknowledgements} We acknowledge financial support from Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) and the DFG Forschergruppe FOR760 "Scattering Systems with Complex Dynamics". The computational resources have been provided by the Consortium des \'{E}quipements de Calcul Intensif (C\'{E}CI), funded by the F.R.S.-FNRS under Grant No. 2.5020.11. \input techgpn.bbl \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Terahertz pulses with an ultrabroad spectral bandwidth \cite{liao20,gopal13,vicario,Shalaby,Tzortzakis} attract a lot of attention due to the large number of applications in various fields of science and technology~\cite{salen,Weightman,Kampfrath,Minty}. The maximum energy yield of a THz pulse can be achieved in the interaction of a short intense laser pulse with an overdense plasma \cite{liao20}, which pushes the boundaries of possible applications of THz radiation to new areas requiring high energies of the generated pulses~\cite{salen}. The broadest spectral bandwidth is achieved when a half-cycle (unipolar) THz pulse is generated; it has long been known that charged particles can radiate in the form of unipolar pulses (see the review \cite{Rosanov} and Refs. therein). Emission in such a form is still often considered a ``strange electromagnetic wave'' \cite{Vinogradov}, since it looks like a DC field simply propagating with the speed of light, yet it satisfies Maxwell's equations in vacuum as a standard electromagnetic wave. To the best of our knowledge, such half-cycle-type (quasi-half-cycle) pulses generated in laser-target interactions have been discussed only in numerical simulations or proposed as an {\it ad hoc} guess \cite{Ding, liao19}, while quasi-half-cycle pulses have already been observed in experiments with electron beams from conventional accelerators \cite{Wu13}. Standardly, the theory of transition radiation from an electron bunch is presented in the far-field approximation \cite{tilborg04,root07,tr1,tr2} as an extension of the classical theory for a single electron \cite{Ginzburg1946}. There is no corresponding near-field theory, which could serve as a basis for experimental measurements of THz fields near the target surface, where these fields have maximum strength. The problem is that special detectors are needed to respond to a fast static field traveling in free space. This is one of the reasons for the low motivation for targeted experiments on high-intensity unipolar THz pulses, whose field-profile measurement still faces technical difficulties. At the same time, it has been predicted that such powerful THz pulses could provide effective control over molecules adsorbed on surfaces, ferroelectric polarization, and molecular orientation \cite{salen}. Possible applications and the development of appropriate diagnostics, e.g., advanced electro-optic detection \cite{eod} and electron/proton radiography \cite{Quinn,Inoue}, together with adequate theoretical support, could form a good background for pursuing work on a powerful laser-produced half-cycle pulse source. For example, highly beneficial novel THz light sources could be those which reach the spectral range from a few to 15 THz, where present laser THz sources based on optical rectification \cite{vicario} usually do not operate because of strong absorption in the crystals. Conventional designs of accelerator-based sources face difficulties in obtaining the very short electron bunches necessary to reach these frequencies. However, this is easily overcome by using electron bunches generated in the interaction of femtosecond laser pulses with a plasma \cite{Inoue}. The highest electron current density can be obtained in the interaction with a dense solid target, and here we focus on such a laser-target design for the generation of unipolar terahertz pulses in an extreme frequency range (up to 15 THz) with femtosecond lasers.
Note that laser production of extremely intense THz pulses could be a state-of-the-art contribution in the context of terahertz-driven electron acceleration \cite{Nanni}. Mechanisms of THz generation in the interaction of short intense laser pulses with an overdense plasma are associated with effective heating and ejection of target electrons and are widely discussed \cite{consoli20,liao20}. Most of the laser-heated electrons are trapped in a sheath field layer at the target-vacuum interface. A sheath plasma expansion model has been developed to describe THz radiation of $\sim 1$ THz frequency in the direction perpendicular to the laser pulse propagation direction \cite{liao20,gopal13,gopalol,gopal}. Transition radiation of laser-heated electrons leaving the target is considered another typical mechanism of THz generation \cite{tilborg04,root07,tr1,tr2}. For a highly relativistic electron beam this mechanism generates well-collimated THz pulses along the electron propagation direction. The electrons trapped in the sheath experience both deceleration/acceleration and reversals at the spatial scale of the hot-electron Debye length, producing bremsstrahlung and synchrotron radiation, respectively. These radiation mechanisms have efficiencies comparable to transition radiation, but belong to the optical range, with a typical frequency of the same order of magnitude as the laser frequency, i.e. one may regard them as re-emission of the laser light by laser-accelerated electrons. Focusing on highly beneficial novel light sources of an extreme frequency range (up to 15 THz) based on femtosecond lasers, here we consider the transition radiation mechanism. Until now, all measurements of THz pulses have been performed in the far-field zone and are standardly based on a purely theoretical description of the asymptotic characteristics of the wave energy and angular-spectral distributions. However, the nature of the generated THz pulse can be comprehended only through a rigorous analytical wave theory without the far-field approximation. Neither such experimental measurements nor the far-field theory is yet able to reconstruct the THz pulse shape. The theory presented here fills this gap. We present an analytical solution of Maxwell's equations in the near-field zone for an ideally conducting target and compare it with the result of a numerical model for the case of a finite target conductivity. Our analytical model (albeit with some simplifications for the sake of analytical calculations) clearly demonstrates that the coherent transition radiation of an electron bunch at the target-vacuum interface has the form of a half-cycle terahertz pulse. Certainly, a theory has a great advantage over numerical simulations (see, e.g. \cite{Ding}) owing to its ability to explicitly scale the THz field characteristics with the laser parameters. \section{Analytical solution}\label{sec:examples} Starting with Maxwell's equations, consider the process of electromagnetic field generation into a vacuum ($z>0$) by a laser-induced electron current, ${\bf j} = (0,0,j_z)$, injected from an ideal conductor ($z<0$), e.g. from the back side of a high-conductivity plasma. This is a given source in Maxwell's equations in the form of the electrical current of the most energetic electrons, which are accelerated by the laser pulse in the forward direction, along the $z$ axis, and have enough energy to overcome the sheath potential barrier. Let, for definiteness, this current appear at $t=0$, and let the plasma-vacuum boundary be an ideally sharp interface (semi-bounded plasma).
In fact, the latter assumption is valid as long as the size of a possible preplasma is less than the wavelength of the generated electromagnetic wave of interest (THz range). We consider an infinite boundary between the target and the vacuum, which is a good approximation as long as $b=\pi L/\lambda \gamma \gg 1$, where $L$ is the transverse size of the target boundary, $\lambda$ is the characteristic wavelength of the radiated wave, and $\gamma$ is the electron beam gamma factor. The latter makes it possible to neglect the contribution of diffraction radiation, which, of course, produces wings of opposite polarity in the profile of the generated field, but introduces an error that is exponentially small $\sim \exp{(-b)}$ \cite{tilborg04}. Given a single nonzero component of the electric current ($j_z \neq 0$), the electromagnetic field emitted into the vacuum is characterized by the following electric, ${\bf E} = (E_{\rho},0,E_z)$, and magnetic, ${\bf H} = (0,H_{\varphi},0)$, components. By introducing the vector (${\bf A} = (0,0,A_z)$) and scalar ($\phi$) potentials, ${\bf H} = {\rm rot} {\bf A}$ and ${\bf E} = - \nabla \phi - (1/c) \partial_t {\bf A}$, which obey the Lorenz gauge condition, $\partial_z A_z + \partial_t \phi /c =0$, Maxwell's equations reduce to a single equation for $A_z$ \begin{equation} \partial_{tt} A_z = c^2 \triangle A_z + 4 \pi c j_z \,. \label{vecpoteqn} \end{equation} The solution of (\ref{vecpoteqn}) in free space reads: $A_z (t, {\bf r}) = \int d^3 {\bf r}' j_z (t-\eta/c, {\bf r}')/(\eta c)$, where $\eta = |{\bf r} -{\bf r}'|$ and the integration is over the domain $\eta < c t$. We use cylindrical symmetry and a factorized form of the electrical current, $j_z = Q v\, n_z(t, z)\, n_\perp(\rho)$, where $Q$ is the total charge of the electron bunch moving with the velocity $v=const$, $n_z$ is the linear electron density distribution versus $z$ and $t$, and $n_\perp$ is the areal electron density distribution versus $\rho =\sqrt{x^2+y^2}$. Here, a simplifying assumption of a given constant electron velocity is used in order to explicitly obtain an analytical solution for the generated transition radiation electromagnetic pulse. Then, the solution of (\ref{vecpoteqn}) can be written as follows \begin{equation} A_z = \frac{Q v}{c} \!\!\!\!\!\!\!\!\!\!\!\! \int \limits_{\sqrt{\rho'^2+z'^2} < c t} \!\!\!\!\!\! \!\! \!\! \!\! {\rm d} z' {\rm d} \rho' \rho' N_\perp(\rho,\rho') \frac{n_z (t-\sqrt{\rho'^2+z'^2}/c, z' + z) } {\sqrt{\rho'^2+z'^2}} \,, \label{az0} \end{equation} where \begin{equation} N_{\perp}(\rho,\rho') = \int\limits_{0}^{2\pi} {\rm d}\chi \, n_{\perp} \left( \sqrt{\rho^2 + \rho'^2 + 2\rho' \rho \cos \chi} \right) \,. \label{dens_transverse} \end{equation} The desired solution should meet the boundary condition of a vanishing tangential electric field component at the vacuum-target interface, $E_{\rho}\mid_{z=0} = 0$. This is achieved using the so-called image method, whereby the desired electromagnetic field can be represented as a superposition of two free-space-type fields given by (\ref{az0}). They are generated by two currents, $j_z^+$ and $j_z^-$, carrying charges of opposite signs and moving from the vacuum-target interface in two opposite directions.
Correspondingly, $j_z^+\equiv j_z$ with $n_z^+\equiv n_z(t,z)$, and $j_z^-(t,\rho,z) \equiv j_z(t,\rho,-z)$ with $n_z^- \equiv n_z(t,-z)$; the substitution of $n_z^{\pm}$ into (\ref{az0}) makes it possible to obtain in explicit form the $z$-components of the vector potential, $A_z^+$ and $A_z^-$, and hence the desired solution for the electromagnetic field in the half-space $z>0$ \begin{eqnarray} \label{eb} && E_z = - (1/c) \, \partial_{t} (A_z^{+}+A_z^{-}) - (c/v) \partial_{z} (A_z^{+}-A_z^{-}) \,, \\ \nonumber && E_\rho = - (c/v) \partial_{\rho} (A_z^{+}-A_z^{-}) \,, \quad H_\varphi = - \partial_{\rho} (A_z^{+}+A_z^{-}) \,, \end{eqnarray} where it was used that $\phi^+ + \phi^- = (c/v) (A_z^+ - A_z^-)$. To concretize the solution (\ref{az0}), we specify the spatial-temporal form of the electron bunch by introducing $n_z^\pm = \theta(\pm z)(\theta(v t\mp z)-\theta(v \tau \mp z))/(v t_0)$, where $\tau = t-t_0$, and $n_\perp = \exp \left(- \rho^2/r_0^2 \right)/(\pi r_0^2)$. Here the Heaviside step function, $\theta(t)$, corresponds to a simplified rectangular temporal shape of the electron bunch with duration $t_0$, and the transverse Gaussian distribution has the characteristic radius $r_0$. The latter can be attributed to a laser pulse with approximately the same duration and a spot-size radius $\lesssim r_0$. Given this specification one can write $N_\perp$ in the following explicit form \begin{equation} N_{\perp}^{gs}(\rho,\rho') = \frac{2}{r_0^2} \, I_0 \left( \frac{2 \rho' \rho}{r_0^2}\right) \exp \left( - \frac{\rho^2+\rho'^2}{r_0^2} \right) \,, \label{dens_gauss} \end{equation} where $I_0$ is the modified Bessel function, and reduce $A_z^\pm$ to a simple integral form \begin{eqnarray} & A_z^{\pm} = \frac{Q}{c t_0} \, \int\limits_{0}^{\infty} \rho' {\rm d}\rho' N_{\perp}^{gs}(\rho,\rho') [F^{\pm}(t)-F^{\pm}(\tau)] \,, \label{az} \\ & F^{\pm} = \theta \left( \sqrt{c^2 t^2-z^2}-\rho'\right) \ln \left( \cfrac{v t \mp z + R^\pm (\rho') }{(1+\beta)(\sqrt{z^2+\rho'^2}\mp z)}\right) \, , \nonumber \end{eqnarray} where $\beta = v/c$ and $R^\pm (\rho') =\sqrt{(v t \mp z)^2 + (1-\beta^2)\rho'^2}$. Using \eqref{az} in \eqref{eb} makes it possible to analyze in detail the structure of the generated electromagnetic field. \subsection{The case of a sausage-type electron bunch} The most intense femtosecond laser pulse, which is able to produce the highest electron current density and, hence, the most powerful THz pulse, should be focused into a few-micron focal spot, which typically corresponds to $c t_0 \gg r_0$. In this case, the transverse Gaussian electron beam profile can be replaced by the delta-functional distribution, $N_{\perp}^p(\rho,\rho') = \delta(\rho-\rho')/\rho$, in Eq.~\eqref{az}. Correspondingly, from Eq.
\subsection{The case of a sausage-type electron bunch} The most intense femtosecond laser pulses, which are able to produce the highest electron current density and, hence, the most powerful THz pulse, are focused into a few-micron focal spot, which typically corresponds to $c t_0 \gg r_0$. In this case, the transverse Gaussian electron beam profile can be replaced by the delta-functional distribution, $N_{\perp}^p(\rho,\rho') = \delta(\rho-\rho')/\rho $, in Eq. \eqref{az}. Correspondingly, from Eq. \eqref{az} we arrive at the following easy-to-analyze analytical expressions for the electromagnetic field components \begin{eqnarray} \label{line} \nonumber && E_z = \frac{Q}{v t_0} \left[ \left(\frac{1-\beta^2}{R^+(t)} + \frac{1-\beta^2}{R^-(t)} - \frac{2}{r} \right) \theta(c t - r) - \left(\frac{1-\beta^2}{R^+(\tau)} + \frac{1-\beta^2}{R^-(\tau)} - \frac{2}{r} \right) \theta(c \tau - r) \right] \,, \\ && E_\rho = \frac{Q} {v t_0 \rho} \left[ \left(\frac{v t -z}{R^+(t)} - \frac{vt +z}{R^-(t)} + \frac{2 z}{r} \right) \theta(c t - r) - \left(\frac{v \tau -z}{R^+(\tau)} - \frac{v \tau+z}{R^-(\tau)} +\frac{2 z}{r} \right) \theta(c \tau - r) \right] \,, \\ \nonumber && H_\varphi = \frac{Q}{c t_0 \rho} \left[ \left(\frac{v t -z}{R^+(t)} + \frac{vt +z}{R^-(t)} \right) \theta(c t - r) - \left(\frac{v \tau -z}{R^+(\tau)} + \frac{v \tau+z}{R^-(\tau)} \right) \theta(c \tau - r) \right] \,, \end{eqnarray} where $r = \sqrt{z^2 + \rho^2}$. In the limit $t_0 \rightarrow 0$, Eqs. \eqref{line} recover the known result for a point charge \cite{bolotovskii}. \par In the general case, Eqs. \eqref{line} do not admit a simple separation of the fields into the intrinsic bunch field and the radiation field. In the far-field zone, $ c t > r \gg c t_0$, the electromagnetic field components (\ref{line}) can be rewritten in the following form \begin{equation} \label{fzone} \begin{aligned} E_z & = \frac{Q}{\beta } \frac{\theta(c t - r) - \theta(c \tau - r)}{c t_0} \left(\frac{1-\beta^2}{R^+(t)} + \frac{1-\beta^2}{R^-(t)} - \frac{2}{r} \right) \\ & - Q (1-\beta^2) \theta(c \tau - r) \left(\frac{v t -z}{(R^+(\tau))^3} + \frac{v t +z}{(R^-(\tau))^3} \right) \, , \\ E_\rho & = \frac{Q} {\beta \rho} \frac{\theta(c t - r) - \theta(c \tau - r)}{c t_0} \left(\frac{v t -z}{R^+(t)} - \frac{vt +z}{R^-(t)} + \frac{2 z}{r} \right) \\ & + Q \rho (1-\beta^2) \theta(c \tau - r) \left(\frac{1}{(R^+(\tau))^3} - \frac{1}{(R^-(\tau))^3} \right) \, , \\ H_\varphi & = \frac{Q} {\rho} \frac{\theta(c t - r) - \theta(c \tau - r)}{c t_0} \left(\frac{v t -z}{R^+(t)} + \frac{vt +z}{R^-(t)} \right) \\ & + Q \rho \beta (1-\beta^2) \theta(c \tau - r) \left(\frac{1}{(R^+(\tau))^3} + \frac{1}{(R^-(\tau))^3} \right) \, , \end{aligned} \end{equation} where we have neglected all the terms decreasing faster than $1/r^2$, and denoted $\rho = r \sin \vartheta$ and $z =r \cos \vartheta$. Each of the electromagnetic field components in (\ref{fzone}) has two distinct contributions: $\textbf{E} = \textbf{E}^{rad} + \textbf{E}^{int}$ and $\textbf{H} = \textbf{H}^{rad} + \textbf{H}^{int}$. The contributions $\textbf{E}^{rad}$ and $\textbf{H}^{rad}$, proportional to the difference of $\theta$-functions, decrease as $1/r$ for large $r$ and define the radiation field, while the remaining contributions $\textbf{E}^{int}$ and $\textbf{H}^{int}$, decreasing as $1/r^2$ for large $r$, describe the intrinsic field of the moving electron bunch. The propagating radiation field reaches a given point at distance $r$ in the far-field zone at the instant $t=r/c$ and lasts until $t = r/c + t_0$. Then, as time goes by, the radiation field is replaced by a weak incoming intrinsic field (see, for example, Fig.~\ref{figH0} below).
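The half-cycle temporal profile described by Eqs. (\ref{line}) is straightforward to evaluate. A minimal Python sketch (illustrative parameters, not the exact settings of Fig.~\ref{figH0}) computes $H_\varphi(t)$ at a fixed observation point and locates the pulse arrival near $t = r/c$.

\begin{verbatim}
import numpy as np

c, t0, Q = 1.0, 1.0, 1.0           # units with c = t0 = Q = 1
beta = 0.95
v = beta * c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

r = 10.0                           # observation distance r = 10 c t0
theta = 1.0 / gamma                # direction of maximum radiated field
rho, z = r * np.sin(theta), r * np.cos(theta)

def R(s, sign):
    # R^{+/-}(s) = sqrt((v s -/+ z)^2 + (1 - beta^2) rho^2); sign=+1 gives R^+.
    return np.sqrt((v * s - sign * z)**2 + (1.0 - beta**2) * rho**2)

def H_phi(t):
    # Magnetic field of Eqs. (line) for the pencil-like transverse profile.
    tau = t - t0
    g = lambda s: ((v * s - z) / R(s, 1) + (v * s + z) / R(s, -1)) \
        * np.heaviside(c * s - r, 0.0)
    return Q / (c * t0 * rho) * (g(t) - g(tau))

t = np.linspace(0.0, 20.0, 2001)
H = np.array([H_phi(ti) for ti in t])
print(t[np.argmax(np.abs(H))])     # pulse arrives near t = r/c = 10
\end{verbatim}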
For the instant corresponding to the onset of the radiation field, $ct \simeq r$, one arrives at a simple form for the radiation components, $E_r = E_\rho \sin \vartheta + E_z \cos \vartheta = 0$ and $E_\vartheta = E_\rho \cos \vartheta - E_z \sin \vartheta = H_\phi$, which can be rewritten more clearly in spherical coordinates as follows \begin{equation} \label{far_zone_sphere} E_{\vartheta}^{rad} = H_{\varphi}^{rad} \simeq \frac{2 Q }{r} \frac{\beta \sin\vartheta}{ 1 - \beta^2\cos^2 \vartheta} \frac{\theta(c t - r) - \theta(c \tau - r)}{c t_0} \,, \quad E_r^{rad} \simeq 0 \,. \end{equation} As expected, these results show that the far-zone radiation field is a spherical transverse electromagnetic wave with an amplitude decreasing $\propto 1/r$. Again for explicitness, with extreme particle bunch shortening, $ c t_0 \to 0 $, the difference between the two $\theta$-functions in (\ref{far_zone_sphere}) can be replaced by the $\delta$-function, $(\theta(c t - r) - \theta(c \tau - r))/(c t_0) \to \delta (ct-r) $, and we arrive at the formula for the transition radiation field generated by a point charge \cite{bolotovskii}. \par \begin{figure} [!ht] \centering \includegraphics[width=0.4 \linewidth]{fig1a} \includegraphics[width=0.4 \linewidth]{fig1b} \caption{Magnetic field temporal profile, $H_{\varphi}(t)$, for $v=0.95 \,c$ ($\gamma \simeq 3.2$) at $r= 10 \,c t_0$ (left) and for $v=0.999 \,c$ ($\gamma \simeq 22$) at $r= 500 \,c t_0$ (right) along the direction, $\vartheta = 1/\gamma$, of maximum radiated field (red curves) and along the target surface, $\vartheta = \pi/2$, (blue curves). The dashed lines correspond to the far-field approximation. The insets show magnetic field temporal profiles at $ r= 500\, c t_0$ (left) and $ r= 5000\, c t_0$ (right).} \label{figH0} \end{figure} The electromagnetic field temporal profile has the form of a half-cycle pulse (see Fig. \ref{figH0}) with the width defined by the electron pulse duration, $ t_0$. For a given total electron charge, the field amplitude increases with the energy of the electron beam. For ultrarelativistic electrons with $\gamma = 1/\sqrt{1-\beta^2}\gg 1$, the direction of the radiated field maximum corresponds to $\vartheta \simeq 1/\gamma $. In this direction, separation of the generated electromagnetic field into the intrinsic field of the moving charge and the radiated field is not possible in the near-field zone. However, this is possible at large distances, $r\ggg ct_0$, where the far-field approximation works and the radiation field amplitude drops significantly (see insets in Fig. \ref{figH0}). Selection of the radiation component from the total electromagnetic field in the direction of its maximum intensity is complicated for very energetic electrons: the higher their energy, the longer the distance required to measure the true radiation field. For example, the optimal radiation angle for 22 MeV electrons ($v\simeq 0.999 c$) is only $\sim 2.5^\circ$, and therefore the intrinsic field has a negligible contribution only at distances longer than 5000 $c t_0$, i.e. $\sim 15$ cm for $t_0 = 100$ fs (see Fig. \ref{figH0}). On the other hand, in the transverse direction such a selection is possible at much shorter distances, in accordance with Fig. \ref{figH0}, where there is no visible difference between the blue dashed and solid curves.
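The direction of maximum emission follows from the angular factor in (\ref{far_zone_sphere}): maximizing $\sin\vartheta/(1-\beta^2\cos^2\vartheta)$ gives $\sin\vartheta_{\max} = 1/(\beta\gamma)$, i.e. $\vartheta_{\max} \simeq 1/\gamma$ for $\gamma \gg 1$. A short numerical sketch (illustrative $\beta$ values) reproduces this, including the $\sim 2.5^\circ$ angle quoted above for $v \simeq 0.999\,c$.

\begin{verbatim}
import numpy as np

for beta in (0.95, 0.999):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    theta = np.linspace(1e-4, np.pi / 2.0, 200001)
    # Angular factor of the radiated field amplitude, Eq. (far_zone_sphere).
    f = beta * np.sin(theta) / (1.0 - beta**2 * np.cos(theta)**2)
    th_max = theta[np.argmax(f)]
    print(beta, th_max, 1.0 / gamma, np.degrees(th_max))
# For beta = 0.999 the maximum lies at ~2.5 degrees, close to 1/gamma.
\end{verbatim}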
Thus, finding the true radiation field energy may require recalculating it from the measured total field energy, taking into account the theoretical spatial-angular results presented above. \begin{figure} [!ht] \centering \includegraphics[width=0.4 \linewidth]{fig2a} \includegraphics[width=0.4 \linewidth]{fig2b} \caption{Angular distribution of radiated energy (left) for $v=0.99 c$ (red), $v=0.95 c$ (blue), $v=0.8 c$ (black) and $v=0.999 c$ (gray in inset). Spectra of radiated energy (right) for $v=0.99 c$ (red) and $v=0.9 c$ (blue).} \label{figW} \end{figure} The spectral-angular distribution of radiated energy in the far-field zone is \begin{equation} \label{eqW} \frac{d W}{d o d \omega} = \frac{c r^2}{4 \pi^2} |H_\omega|^2 = \frac{4Q^2}{\pi^2 c} \frac{\beta^2 \sin^2 \vartheta}{(1-\beta^2 \cos^2 \vartheta)^2} \left | \frac{\sin (\omega t_0/2)}{\omega t_0} \right|^2\,. \end{equation} It demonstrates the classical angular distribution \cite{ginzburg} with a maximum at the angle $\vartheta \sim 1/\gamma $ in the relativistic limit (see Fig. \ref{figW}), a decrease of radiated energy with frequency, and a spectrum width at half maximum of $\Delta\omega_c \simeq 2.8/t_0 $ (see Fig. \ref{figW}). The latter naturally corresponds to the range where coherent transition radiation occurs, $\omega < t_0^{-1}$, with a small incoherent contribution at higher frequencies. Oscillations in the high-frequency tail of the spectrum reflect only the model rectangular shape of the laser pulse, adopted for simplicity in order to achieve maximum clarity of the analytical description, and may not appear in the case of a naturally smooth pulse. For ultrarelativistic electrons with $\gamma \gg 1$ the total radiated energy is well approximated by the following simple expression $W = Q^2 (4 \ln 2 \gamma -2 )/(c t_0)$. The simplest reproduction of the above analytical result in the far-field zone and the nonrelativistic case, $\beta \ll 1$, can be easily done in the dipole approximation. In these limits, by using $n_\perp = \delta(\rho-\rho')/(2 \pi \rho)$ in the electron current density and ${\bf H} = [\dot{\bf A} \times {\bf n}]/c $, where ${\bf n}$ is the unit vector along the radiation propagation direction and ${\bf A}$ is evaluated at the retarded time $t -r/c$, one gets \begin{equation}\label{eqdip} {\bf A} = \frac{1}{c r} \int {\bf j} d V = \frac{Q {\bf v}}{c r} \left(\int \limits_0^\infty n_z^+ d z - \int \limits_0^{-\infty} n_z^- d z \right ) \,, \quad \dot{\bf A} = \frac{2 Q {\bf v}}{c r t_0}(\theta(t') - \theta(t'-t_0))\,, \end{equation} where $t' = t - r/c$. Then, Eqs. \eqref{eqdip} lead to the field components coinciding with Eq. \eqref{far_zone_sphere} at $\beta \ll 1$. The key point is that the time variation of the dipole moment occurs not because of a particle velocity change (nonzero acceleration) but because of the change of the bunch charge, which increases as the bunch exits into the vacuum while being zero (screened) inside the highly conducting target. \subsection{The case of arbitrary longitudinal and transverse widths of the bunch}\label{secB} Let us now turn to the general case described by Eqs. \eqref{eb}, \eqref{dens_gauss}, \eqref{az}. As usual, the generated electromagnetic field (\ref{eb}) has two contributions, (1) the intrinsic field of the moving charge and of its image and (2) the radiation field. Such a field structure is illustrated by Fig. \ref{figB}, where the intrinsic field of the moving charge is shown in blue and the radiation field in red.
The formation of the spherical wave is clearly seen, as well as the unipolar pulse of the radiation field (see insets in Fig. \ref{figB}). \begin{figure} [!ht] \centering \includegraphics[width=0.4 \linewidth]{fig3a} \includegraphics[width=0.4 \linewidth]{fig3b} \caption{Magnetic component $H_{\phi}$ of the generated electromagnetic field propagating in a vacuum for $v=0.5\, c$ (a) and for $v=0.95\,c$ (b) at the instant $t=10 t_0$. The electron bunch with the sizes $c t_0 = r_0$ is shown in black. The blue contours, corresponding to the levels 0.05 $Q/r_0^2$ (a) and 0.1 $Q/r_0^2$ (b), show the intrinsic field, while the red ones, at the levels 0.02 $Q/r_0^2$ (a) and 0.04 $Q/r_0^2$ (b), illustrate the radiated field. The insets show the magnetic pulse time shape (black curve -- total field, blue dashed curve -- intrinsic field and red curve -- radiation field) at the distance of $10 r_0$ along the direction of maximum radiated field. } \label{figB} \end{figure} The wave temporal profiles in the insets of Fig. \ref{figB}a and Fig. \ref{figB}b are presented for different propagation directions, which results in a different relation between the intrinsic and radiation fields in these insets. For the case $v=0.5\, c$ the temporal pulse was detected in the direction along the target-vacuum interface, far enough from the electron bunch, while for the case $v=0.95\, c$ the field pulse was registered at a small ($20^{\rm o}$) angular deviation from the bunch propagation direction. It is clearly seen that the theoretically derived field structures presented in Fig. \ref{figB} qualitatively correspond to the numerical simulation results on THz emission in the forward direction from an irradiated foil \cite{Ding}. \begin{figure} [!ht] \centering \includegraphics[width=0.4 \linewidth]{fig4a} \includegraphics[width=0.4 \linewidth]{fig4b} \caption{The radiated magnetic field maximum vs the ratio $r_0/ c t_0$ and spectra of radiated energy for $r_0 = 0.5 c t_0$ (black curves), $r_0 = c t_0$ (red curves) and $r_0 = 2 c t_0$ (blue curves) for $\beta = 0.95$. The dashed black curves correspond to the limiting case $c t_0 \gg r_0$ (Eq. \eqref{eqW}). The insets show the magnetic pulse time shape at the distance of $2000 c t_0$ along the direction of maximum radiated field.} \label{figB1} \end{figure} With the broadening of the electron beam diameter, the profile of the generated pulse is smoothed out as soon as the transverse size of the beam approaches the longitudinal one. For $r_0 \sim ct_0$ the temporal field pulse profile takes on a Gaussian temporal shape, reflecting the spatial Gaussian distribution of the electron beam (see inset in Fig.~\ref{figB1}). For a given electron bunch charge (given laser power, see below), the higher the ratio $c t_0/r_0$, the higher the electromagnetic pulse amplitude (see Fig.~\ref{figB1}). If $r_0 \ll ct_0$, the radiated pulse temporal shape follows the electron bunch temporal profile, e.g., the rectangular one discussed above (see insets in Fig. \ref{figH0}). For a given $t_0$, the maximum field decreases as $\sim 1/r_0$, as illustrated in Fig.~\ref{figB1}a. For a relatively low bunch velocity, $v \sim 0.5 c$, the radiation propagates predominantly along the target surface, while for ultra-relativistic electrons the radiation pulse is collimated along the electron bunch propagation direction, slightly shifted from it in accordance with the classical theory of transition radiation \cite{ginzburg, QE16}. The higher the velocity, the smaller this shift. The radiation spectrum is defined by the electron bunch spatio-temporal shape.
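The half-maximum spectral width quoted after Eq. (\ref{eqW}), $\Delta\omega_c \simeq 2.8/t_0$, follows from the form factor $|\sin(\omega t_0/2)/(\omega t_0)|^2$, whose $\omega \to 0$ limit equals $1/4$. A short root-finding sketch confirms the numerical coefficient.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def form_factor(x):
    # |sin(x/2)/x|^2 with x = omega * t0; tends to 1/4 as x -> 0.
    return (np.sin(0.5 * x) / x)**2

# Half of the x -> 0 value 1/4 is reached at x = Delta omega_c * t0.
x_half = brentq(lambda x: form_factor(x) - 0.125, 1e-6, np.pi)
print(x_half)  # ~2.78, i.e. Delta omega_c ~ 2.8 / t0
\end{verbatim}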
The half-cycle THz field profile is clearly reflected in the well-pronounced low-frequency spectral domain, where a spectral cutoff may appear in the case $b\sim 1$ due to the diffraction radiation contribution. Super-broadband emission (Fig. \ref{figW}b) is characterized by the spectral bandwidth $\Delta\omega \simeq c/(c t_0 + r_0)$, in agreement with the far-field approach \cite{QE16}. We note the dependence of the spectral width on the focal spot size (electron bunch radius): the spectrum shrinks with increasing spot size, and correspondingly the total emitted energy decreases. In the case $\gamma c t_0/r_0 \gg 1$ the total radiated energy can be estimated as $W_R = Q^2/(\pi c t_0) (3 \ln({\cal{E}}_e t_0/(m c r_0))-1 )$ \cite{QE16}, where ${\cal{E}}_e = m c^2\gamma $ is the energy of the laser-heated electron. \section{The FDTD simulations} To study the effect of high but finite target conductivity on terahertz pulse generation, we performed 3D simulations with the FDTD (finite-difference time-domain) method, based on the numerical solution of Maxwell's equations in a medium with a given dielectric permittivity. We applied this simulation to a metal target, where the dielectric permittivity is a complex function ($\epsilon' + i\epsilon''$) but still $|\epsilon|\gg1$. To describe the dielectric permittivity we used the standard Drude model, $\epsilon = 1 + 4 \pi i \sigma (\omega)/\omega$, with the conductivity $\sigma = \sigma_0/(1 - i \omega/\nu)$, where $\sigma_0 = 10^{18}$~s$^{-1}$ and $\nu = 10^{13}$~s$^{-1}$. The target occupied the half-space $z<0\,\mu$m in the simulation box $-300\,\mu$m$<x<300\,\mu$m, $-300\,\mu$m$<y<300\,\mu$m, and $-200\,\mu$m$<z<400\,\mu$m. The grid cell size was $1\,\mu$m and the time step was 1\,fs. The distributed charge has Gaussian profiles in both the longitudinal and transverse directions, $z$ and $\rho = \sqrt{x^2+y^2}$, with the same spot size as above, $c t_0=r_0 = 20 \,\mu$m, to allow comparison with the developed theory. The bunch starts to move from the target surface along the normal (the Z-axis) with a given velocity $ v = 0.5 c $ or $ v = 0.95 c $. \begin{figure} [!ht] \centering \includegraphics[width=0.35 \linewidth]{fig5a} \includegraphics[width=0.35 \linewidth]{fig5b} \caption{Magnetic field $H_\phi$ distribution (in the plane passing through the Z-axis) from the FDTD simulation (a) in comparison with the theoretical result (b) for $v=0.5 c$ at the instant $t = 10 t_0$. } \label{fig5} \end{figure} \begin{figure} [!ht] \centering \includegraphics[width=0.43 \linewidth]{fig6a} \includegraphics[width=0.32 \linewidth]{fig6b} \caption{The same as in Fig. \ref{fig5}, but for $v=0.95\, c$ at $t = 2.7 t_0$ (bottom) and $t = 9.1 t_0 $ (top). The inset shows the magnetic field temporal profile at the distance of $10 r_0$ along the direction in which the radiated field is maximum.} \label{fig6} \end{figure} \begin{figure} [!ht] \centering \includegraphics[width=0.43 \linewidth]{fig7a} \includegraphics[width=0.43 \linewidth]{fig7b} \caption{(a) Magnetic field ($H_\phi$) distribution from the FDTD simulation at $t=9 t_0$ for the case of the electrons distributed over velocities and escaping in the normal direction. (b) Magnetic field distribution from the FDTD simulation at the instant $t=5.6 t_0$ for the case of the electrons escaping with different velocities inside a cone with an opening angle of $15^\circ$. The insets show the magnetic pulse spatial profiles along the direction corresponding to the angle of 45$^\circ$.} \label{fig7} \end{figure}
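For reference, the Drude permittivity used in the simulations is easily tabulated. The sketch below adopts the convention $\epsilon = 1 + 4\pi i \sigma(\omega)/\omega$ written above (the frequency points are chosen for illustration) and shows that $|\epsilon| \gg 1$ holds throughout the THz range.

\begin{verbatim}
import numpy as np

sigma0 = 1e18  # s^-1, static conductivity used in the simulations
nu = 1e13      # s^-1, collision frequency

def eps_drude(omega):
    # epsilon = 1 + 4*pi*i*sigma(omega)/omega, sigma = sigma0/(1 - i*omega/nu).
    sigma = sigma0 / (1.0 - 1j * omega / nu)
    return 1.0 + 4.0 * np.pi * 1j * sigma / omega

for f_THz in (0.1, 1.0, 10.0):
    omega = 2.0 * np.pi * f_THz * 1e12    # angular frequency, s^-1
    print(f_THz, eps_drude(omega))        # |eps| >> 1 at THz frequencies
\end{verbatim}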
Like the analytical theory, the performed simulations demonstrate the formation and propagation of a half-cycle terahertz pulse. The field distribution parameters are in good agreement with the theoretical model of Sec. \ref{secB}, as can be clearly seen from the comparison of the density plots for $H_\varphi$ presented in Fig. \ref{fig5} and Fig. \ref{fig6}. As expected, the somewhat smoother magnetic field distribution along the $z$-direction in the simulation results is due to the use of a Gaussian temporal charge profile instead of the rectangular one used in the theoretical model. We also verified that the unipolarity of the THz pulse is conserved if the electrons of the radiating bunch have a given energy distribution, so that the faster electrons can overtake the slower ones (see Fig. \ref{fig7}a). We have simplistically chosen groups of escaping electrons with $v=0.7\,c$ and $v=0.99\, c$ distributed in accordance with the Boltzmann distribution, $\propto e^{-{\cal E}_e/\Delta T_f}$, with $\Delta T_f =4$ MeV, which corresponds to an effective velocity spread $\Delta \hat{v} \simeq 0.76\, c$. Since generation of THz radiation by electrons moving within a certain angular range looks more realistic, we also performed a corresponding simulation. The results are presented in Fig. \ref{fig7}b for three groups of electrons escaping inside a cone with an opening angle of $15^\circ$. The electrons with the slowest velocity, $ v = 0.95 \,c$, had a Gaussian spatial distribution with characteristic scale $r_0$ and a uniform distribution over the angles $\theta$ ranging from $\theta=0^\circ$ to $\theta=15^\circ$. The other, more energetic electrons ($ v = 0.99 c$ and $v = 0.999 c$) had the same spatial distribution but escaped at zero angle. All three groups of electrons were distributed according to the Boltzmann distribution with $\Delta T_f=4$ MeV. The simulation has been performed in the box $-100\,\mu$m$<x<100\,\mu$m, $-100\,\mu$m$<y<100\,\mu$m, $-67\,\mu$m$<z<133\,\mu$m, where the target is placed at $z<0$. The electron bunch had the sizes $c t_0 = 5\, \mu$m and $r_0 = 10\, \mu$m. As for the previously considered models of the electron source, generation of the half-cycle terahertz pulse is also clearly seen in Fig. \ref{fig7}b. \section{Discussion and summary} The THz wave field amplitude is proportional to the total charge, $Q$, of escaping high-energy electrons, which make up only a small fraction of all laser-heated electrons. These electrons must have enough energy to overcome the potential barrier, $ \Phi_m $, to leave the target. The characteristic value of this potential barrier at the target-vacuum interface is $e \Phi_m = -2 T_h \ln[r_0/(\lambda_{De} \sqrt{2})]$, where $T_h$ is the temperature of the hot electrons with density $n_h$ and $\lambda_{De}$ is their Debye length. Correspondingly, the escaping electron density, $n_f$, can be estimated as $n_f \simeq n_h \exp{(e \Phi_m/T_h)}$, i.e. the total charge reads $Q =e n_f c t_0 \pi r_0^2 = T_h c t_0 /(2 e)$. The hot electron temperature typically follows the ponderomotive scaling, $T_h \simeq m c^2 (\sqrt{1 + a_0^2/2} -1)$, which leads to $T_h\approx 0.7\, m c^2 a_0$ for a relativistically intense laser pulse, where $a_0$ is the dimensionless laser field amplitude, $a_0= 0.85\, \lambda_0[\mu\mbox{m}] \sqrt{I\, [10^{18}\,\mbox{W/cm}^2]}$ ($I$ is the laser pulse intensity and $\lambda_0$ is the laser wavelength). Finally, the total charge of the escaping electrons depends only on the amplitude and duration of the laser pulse, $Q = 0.35\, e a_0 c t_0 /r_e$, where $r_e = e^2/(m c^2)$ is the classical electron radius. The total bunch charge gives a rough estimate of the total radiated THz energy, ${\cal E}_R$, as ${\cal E}_R \sim Q^2/(c t_0) \sim 0.1\, m c^2 a_0^2 c t_0 /r_e$, as well as of the conversion efficiency, $\eta$, of the laser pulse energy ${\cal E}_L = m c^2 a_0^2 c t_0 R_0^2 \pi/(2\lambda^2 r_e)$ into the radiation energy, $\eta = {\cal E}_R/{\cal E}_L \sim 0.08\, \lambda^2/R_0^2$. Here $R_0$ is the laser focal spot radius, which may differ from the electron emitting spot radius, $R_0 < r_0$. For a given laser energy, tight focusing is favorable for THz radiation production. For example, when focusing a laser beam into a $4 \lambda$ spot, the conversion efficiency reaches 2\%. A ten-joule laser pulse of 100 fs duration (100 TW) focused into a (2--3)$\lambda_0$ focal spot produces a broadband (up to 10 THz) $\sim100$ mJ unipolar THz pulse with a field amplitude $\sim 10^{10}$ V/m at a distance of 1 mm from the target, which is close to the record value published to date \cite{liao20,Tzortzakis}. A significant increase in the intensity of the terahertz pulse can be expected when the femtosecond laser pulse is focused to the diffraction limit.
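These order-of-magnitude estimates are easy to reproduce. The sketch below works in Gaussian (cgs) units; the wavelength $\lambda_0 = 0.8\,\mu$m and the focal radius $R_0 = 2.5\lambda_0$ are illustrative assumptions, not values fixed by the text, and the output agrees with the quoted figures to within order of magnitude.

\begin{verbatim}
import numpy as np

mc2 = 8.187e-7    # electron rest energy, erg
r_e = 2.818e-13   # classical electron radius, cm
c = 2.998e10      # speed of light, cm/s

P_W = 1e14                 # assumed laser power: 100 TW
t0 = 100e-15               # assumed pulse duration: 100 fs
lam0_um = 0.8              # assumed wavelength, um
R0_um = 2.5 * lam0_um      # assumed focal radius ~ (2-3) lambda0

I = P_W / (np.pi * (R0_um * 1e-4)**2)    # intensity, W/cm^2
a0 = 0.85 * lam0_um * np.sqrt(I / 1e18)  # dimensionless amplitude
E_R = 0.1 * mc2 * a0**2 * c * t0 / r_e   # radiated THz energy, erg
eta = 0.08 * lam0_um**2 / R0_um**2       # conversion efficiency

print("a0 ~ %.0f, E_R ~ %.0f mJ, eta ~ %.1f %%"
      % (a0, E_R * 1e-4, eta * 100.0))   # tens of mJ, percent-level efficiency
\end{verbatim}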
The presented theory analytically describes the production of unique half-cycle THz pulses from the back side of a laser-irradiated foil target. This requires the target to be thin, of micron-scale thickness, and to have a large transverse size, $\gtrsim 1$ cm, to suppress the contribution of diffraction radiation \cite{tilborg04,root07}. A controlled preplasma on the irradiated side of the target could make it possible to maximize the current of electrons emitted from the rear side and, hence, the yield of THz radiation. Unlike previous approaches, the developed theory describes the structure of the generated THz fields in the entire vacuum region, from the near zone to the far zone. As has been demonstrated, for ultrarelativistic electrons the far-zone approximation is applicable only at very long distances, where the emitted pulse is already weakened. The analytical theory and the large-scale FDTD simulations open the way to planning experiments to detect superstrong terahertz fields near the target surface, e.g., using laser-produced charged particles as an invaluable tool for probing the electric and magnetic fields \cite{Quinn}. After this paper had been written, we became aware of experiments on THz pulses generated when a femtosecond laser pulse irradiates a thin foil with a specially designed preplasma \cite{savelev}. The results presented there indicate the quasi-half-cycle nature of the measured pulses. In summary, the reported results clearly demonstrate that the strong THz emission generated through transition radiation by laser-produced high-energy electrons escaping a solid target occurs in the form of unique half-cycle pulses. Taking into account the developed theory and the performed simulations, it is highly probable that the terahertz radiation observed in a number of experiments, e.g. Refs. \cite{liao20}, should be interpreted as the generation of unipolar THz pulses. Direct experimental confirmation of this novel view of the nature of laser-triggered terahertz emission would be of great interest. A possible approach could be the use of electron or proton radiography.
As a final note, we emphasize that the proposed theory could also be applied to the quantitative description of transient surface fields, advancing the previously considered 2D approach \cite{pre20}. This research was supported by the Ministry of Science and Higher Education of the Russian Federation (Agreement No. 075-15-2021-1361).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Massive MIMO has recently emerged as a key technology for 5G communications \cite{Marzetta2010a,Larsson2014a,Andrews2014b}, since it can bring significant improvements to the spectral and energy efficiency of cellular networks \cite{Ngo2013a}. By equipping the base stations (BSs) with a large number of antennas, mutual interference, thermal noise, and small-scale fading can be almost eliminated by virtue of the channel hardening and favorable propagation phenomena \cite{Bjornson2016b,Ngo2014a}. The BS utilizes estimated channel state information (CSI) to achieve these gains, which is generally acquired using uplink (UL) pilot signals. To achieve the maximum CSI quality, mutually orthogonal pilot sequences are desirable, but this is impractical since the pilot overhead would be proportional to the total number of users in the entire system. The size of the channel coherence block limits the number of orthogonal pilots, and at most half the block should be used for pilots \cite{Bjornson2016a}. The consequence is that the pilots need to be reused across cells, which creates so-called pilot contamination \cite{Marzetta2010a,Jose2011b}, where users with the same pilot cause large interference to each other. A pilot reuse factor can be applied to avoid using the same pilots in neighboring cells, which reduces the pilot contamination at the cost of extra pilot overhead \cite{Yang2015a,Bjornson2016a}. However, some pilot contamination still remains, and its impact strongly depends on which users have the same pilot. The pilot assignment is a combinatorial problem, thus finding the optimal assignment is generally NP-hard \cite{Jin2015a}. This has motivated the design of suboptimal greedy pilot assignment algorithms, which utilize statistical information such as the large-scale fading \cite{Jin2015a, Xu2015a}. Another way to improve the channel estimation quality is to optimize the transmit powers used for the pilots \cite{ Victor2015b, Guo2015a}. In this paper, we consider joint optimization of the pilot assignment and pilot powers in Massive MIMO, in contrast to previous works that focused on only one of these components. In particular, we obtain ergodic achievable spectral efficiency (SE) expressions for maximum ratio (MR) detection and arbitrary pilot sequences. The pilot sequences are treated as optimization variables and we formulate a max-min SE optimization problem, which becomes a signomial program. Due to the NP-hardness of signomial programs, we propose a suboptimal approach that finds a local optimum in polynomial time. Numerical results show that this solution is close to the global optimum and can provide great performance improvements over prior works. \textit{Notations}: Lower-case bold letters are used for vectors and upper-case bold letters for matrices. $(\cdot)^T$ and $(\cdot)^H$ stand for the transpose and conjugate transpose, respectively. $\mathbf{I}_{n}$ is the $n \times n$ identity matrix. $\mathbb{E} \{\cdot \}$ denotes expectation, $\| \cdot \| $ is the Euclidean norm, and $\mathcal{CN} (\cdot, \cdot)$ is the circularly symmetric complex Gaussian distribution. \section{Pilot Designs for Massive MIMO Systems} \label{Section: System Model} We consider the UL of a multi-cell Massive MIMO system with $L$ cells. Each cell consists of a BS equipped with $M$ antennas which serves $K$ single-antenna users. All tuples of cell and user indices belong to the set \begin{equation} \mathcal{S} = \left\{ (i,t): \; i \in \{ 1, \ldots, L\}, \; t \in \{ 1, \ldots, K \} \right \}.
\end{equation} The radio channels vary over time and frequency. We divide the time-frequency plane into coherence blocks, each containing $\tau_c$ samples, such that the channel between each user and each BS is static and frequency flat. In each coherence block the users transmit pilot sequences of length $\tau_p$ symbols, while the remaining $\tau_c-\tau_p$ symbols are used for data transmission. In this paper, we focus on the UL, so the fraction $(1 - \tau_p / \tau_c)$ of the coherence block is dedicated to UL data. We assume $\tau_p \geq 1$ to keep the estimation process feasible and stress that the practical case $\tau_p < KL$ is of key importance since it gives rise to pilot contamination. \subsection{Proposed Pilot Design} \label{subsection: ProposedPilot} We aim at optimizing the pilot sequence collection $ \{ \pmb{\psi}_{1,1}, \ldots, \pmb{\psi}_{L,K}\}$, where $\pmb{\psi}_{l,k} \in \mathbb{C}^{\tau_p}$ is the pilot sequence assigned to user~$k$ in cell~$l$. To this end, let us define the $\tau_p$ mutually orthonormal basis vectors $\{\pmb{\phi}_1, \ldots, \pmb{\phi}_{\tau_p} \}$, where $\pmb{\phi}_b \in \mathbb{C}^{\tau_p},$ $\forall b= 1, \ldots, \tau_p$. The corresponding basis matrix is \begin{equation} \pmb{\Phi} = [\pmb{\phi}_1, \ldots, \pmb{\phi}_{\tau_p}], \end{equation} and it satisfies $\pmb{\Phi}^H \pmb{\Phi} = \mathbf{I}_{\tau_p}$. We assume that each pilot sequence is spanned by these basis vectors. In particular, the pilot sequence of user~$k$ in cell~$l$ is \begin{equation} \label{eq: ProposedPilotSequence} \pmb{\psi}_ {l,k}= \sum_{b =1}^{\tau_p} \sqrt{ \hat{p}_{l,k}^b } \pmb{ \phi}_{b}, \quad \forall l,k, \end{equation} where $\hat{p}_{l,k}^b \geq 0$ is the power assigned to the $b$th basis vector. We stress that the pilot construction in \eqref{eq: ProposedPilotSequence} can create arbitrarily many different orthogonal or non-orthogonal pilots, with arbitrary total pilot power $\| \pmb{\psi}_{l,k} \|^2 = \sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b$. We assume that the average pilot power of user~$k$ in cell~$l$ satisfies the power constraint \begin{equation} \label{eq:Max-Power} \frac{1}{\tau_p} \sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b \leq P_{\textrm{max},l,k}, \quad \forall l,k, \end{equation} where $P_{\textrm{max},l,k}$ is the maximum pilot power for user~$k$ in cell~$l$. The inner product of two pilot sequences $\pmb{\psi}_ {l,k}$ and $\pmb{\psi}_ {i,t}$ is \begin{equation} \label{eq: Orthogonal_Property} \begin{split} \pmb{\psi}_ {l,k}^H \pmb{\psi}_ {i,t} = \sum_{b =1}^{\tau_p} \sqrt{ \hat{p}_{l,k}^b \hat{p}_{i,t}^b }. \end{split} \end{equation} These pilot sequences are orthogonal if every term in the sum is zero, which only happens when the two users allocate their pilot power to disjoint subsets of the basis vectors. Otherwise, the sequences are non-orthogonal and the two users will cause pilot contamination to each other. 
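As an illustration of the construction \eqref{eq: ProposedPilotSequence}, the following Python sketch (not part of the paper; a unitary DFT basis and arbitrary nonnegative powers are assumed) verifies the inner-product formula \eqref{eq: Orthogonal_Property} numerically.

\begin{verbatim}
import numpy as np

tau_p = 4
# Orthonormal basis: columns of the unitary DFT matrix (Phi^H Phi = I).
Phi = np.fft.fft(np.eye(tau_p)) / np.sqrt(tau_p)

# Nonnegative powers over the basis vectors for two users (arbitrary).
p1 = np.array([0.5, 0.0, 1.2, 0.3])
p2 = np.array([0.0, 0.8, 0.4, 0.0])

psi1 = Phi @ np.sqrt(p1)    # pilot of Eq. (ProposedPilotSequence)
psi2 = Phi @ np.sqrt(p2)

print(np.vdot(psi1, psi2))        # psi1^H psi2
print(np.sum(np.sqrt(p1 * p2)))   # sum_b sqrt(p1^b p2^b); same value
\end{verbatim}

Choosing power vectors with disjoint supports makes the two pilots exactly orthogonal, as noted above.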
If the square roots of the powers allocated to the $K$~users in cell~$l$ are gathered in matrix form as \begin{equation} \label{eq: ProposedPilotPower} \pmb{P}_l = \begin{bmatrix} \sqrt{\hat{p}_{l,1}^1} & \sqrt{\hat{p}_{l,2}^1} & \cdots & \sqrt{\hat{p}_{l,K}^1} \\ \sqrt{\hat{p}_{l,1}^2} & \sqrt{\hat{p}_{l,2}^2} & \cdots & \sqrt{\hat{p}_{l,K}^2} \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{\hat{p}_{l,1}^{\tau_p}} & \sqrt{\hat{p}_{l,2}^{\tau_p}} & \cdots & \sqrt{\hat{p}_{l,K}^{\tau_p}} \end{bmatrix} \in \mathbb{R}_{+}^{\tau_p \times K}, \end{equation} then the users in cell~$l$ utilize a pilot matrix defined as \begin{equation} \label{eq: PilotStructure1} \mathbf{\Psi}_l = [ \pmb{\psi}_{l,1}, \ldots, \pmb{\psi}_{l,K} ] = \pmb{\Phi} \pmb{P}_l. \end{equation} We now describe the difference between this general pilot structure and prior works, for example \cite{Xu2015a,Zhu2015a, Victor2015b, Guo2015a}. \subsection{Other Pilot Designs} The works \cite{Xu2015a,Zhu2015a} considered the assignment of $\tau_p$ orthogonal pilot sequences using equal pilot power $\hat{p} \leq \tau_p P_{\max,l,k}$ for every user. Using our notation, the pilot matrix in cell $l$ is \begin{equation} \label{eq: fixedPilotPower} \widehat{\pmb{\Psi}}_l = [\hat{\pmb{\psi}}_{l,1}, \ldots, \hat{\pmb{\psi}}_{l,K} ] = \sqrt{\hat{p}} \pmb{\Phi} \pmb{\Pi}_l, \end{equation} where $\pmb{\Pi}_l \in \mathbb{R}_{+}^{\tau_p \times K}$ is a permutation matrix. This matrix is optimized in \cite{Xu2015a,Zhu2015a} to assign the pilots to users so as to minimize a metric of mutual interference. This pilot design is a special case of our proposed design, since \eqref{eq: fixedPilotPower} assumes the use of orthogonal pilot sequences and equal power allocation. These assumptions might be suboptimal in systems with large pathloss differences. The selection of the optimal permutation matrix for cell $l$ is a complicated combinatorial problem, so \cite{Xu2015a,Zhu2015a} only study the special case of $\tau_p = K$. The previous work \cite{Victor2015b} optimized the pilot powers to maximize functions of the SE, but the paper only considers a single cell with orthogonal pilot sequences, i.e., $\tau_p \geq KL$ with $L=1$. In addition, the authors of \cite{Guo2015a} optimized the pilot powers to maximize the energy efficiency of a multi-cell system. That paper assumed $\tau_p = K$ and a fixed pilot assignment. If $\hat{p}_{l,k}$ is the pilot power of user~$k$ in cell~$l$, then the square root of the power matrix allocated to the $K$~users in cell~$l$ is a diagonal matrix defined as \begin{equation} \widetilde{\pmb{P}}_l = \mathrm{diag}\left( \sqrt{\hat{p}_{l,1}}, \ldots, \sqrt{\hat{p}_{l,K}} \right). \end{equation} The pilot matrix for the users in cell~$l$ is then formulated as \begin{equation} \label{eq: PilotStructure2} \widetilde{\mathbf{\Psi}_l} = \pmb{\Phi} \widetilde{\pmb{P}}_l. \end{equation} Similar to \eqref{eq:Max-Power}, the pilot power of user~$k$ in cell~$l$ is limited as \begin{equation} 0 \leq \hat{p}_{l,k} \leq \tau_p P_{\max,l,k}. \end{equation} This is also a special case of our proposed pilot design, since \eqref{eq: PilotStructure2} assumes orthogonal pilots and a fixed pilot assignment.
If we combine the pilot structure in \eqref{eq: PilotStructure2} with the permutation matrix approach from \eqref{eq: fixedPilotPower}, the pilot sequences of the $K$~users in cell~$l$ become \begin{equation} \label{eq: PilotStructure3} \widetilde{\widetilde{\mathbf{\Psi}}}_l = [\tilde{\tilde{\pmb{\psi}}}_{l,1}, \ldots, \tilde{\tilde{\pmb{\psi}}}_{l,K} ] = \pmb{\Phi} \pmb{\Pi}_l \widetilde{\pmb{P}}_l . \end{equation} In principle, we can now consider all possible permutation matrices and optimize the pilot power for each one, based on the algorithms in previous work. This approach is computationally heavy, but will serve as a benchmark in Section~\ref{Section: Experimental Result}. \section{UL Massive MIMO With Arbitrary Pilots} \label{Section: ULTransmission} This section provides ergodic SE expressions with the new pilot sequences in \eqref{eq: PilotStructure1}, which will be used for optimized pilot design and power control in Section~\ref{Section: OptProblem}. \setcounter{eqnback}{\value{equation}} \setcounter{equation}{22} \begin{figure*}[t] \begin{equation} \label{eq: SINR_MRC1} \mathrm{SINR}_{l,k}= \frac{ M (\beta_{l,k}^l)^2 p_{l,k} \left( \sum\limits_{b=1}^{\tau_p} \hat{p}_{l,k}^b \right)^2 }{ \left( \sum\limits_{(i,t) \in \mathcal{S} } \beta_{i,t}^l \left( \sum\limits_{b=1}^{\tau_p} \sqrt{ \hat{p}_{i,t}^{b} \hat{p}_{l,k}^{b}} \right)^2 + \sigma^2 \sum\limits_{b=1}^{\tau_p} \hat{p}_{l,k}^b \right) \left( \sum\limits_{(i,t) \in \mathcal{S} } p_{i,t} \beta_{i,t}^l + \sigma^2 \right) + M \sum\limits_{(i,t) \in \mathcal{S} \setminus (l,k)} p_{i,t} (\beta_{i,t}^l )^2 \left(\sum\limits_{b=1}^{\tau_p} \sqrt{ \hat{p}_{i,t}^{b} \hat{p}_{l,k}^{b}} \right)^2 } \end{equation} \vspace*{-0.25cm} \hrulefill \vspace*{-0.25cm} \end{figure*} \setcounter{eqncnt}{\value{equation}} \setcounter{equation}{\value{eqnback}} \subsection{Channel Estimation} During the UL pilot transmission, the received signal $\mathbf{Y}_l \in \mathbb{C}^{M \times \tau_p}$ at the BS of cell~$l$ is \begin{equation} \label{eq: Received_Pilot} \mathbf{Y}_l = \sum_{(i,t) \in \mathcal{S}} \mathbf{h}_{i,t}^{l} \pmb{\psi}_{i,t}^{H} + \mathbf{N}_l, \end{equation} where $\mathbf{h}_{i,t}^l \in \mathbb{C}^M$ denotes the channel between user~$t$ in cell~$i$ and BS~$l$. $\mathbf{N}_l \in \mathbb{C}^{M \times \tau_p}$ is the additive noise with independent elements distributed as $\mathcal{CN}(0, \sigma^2)$. Correlating $\mathbf{Y}_l$ in \eqref{eq: Received_Pilot} with the pilot sequence $\pmb{\psi}_{l,k}$ of user~$k$ in cell~$l$, we obtain \begin{equation} \mathbf{y}_{l,k} = \mathbf{Y}_{l} \pmb{\psi}_{l,k} = \sum_{(i,t) \in \mathcal{S}} \mathbf{h}_{i,t}^{l} \pmb{\psi}_{i,t}^H \pmb{\psi}_{l,k} + \mathbf{N}_{l} \pmb{\psi}_{l,k}. \end{equation} We consider independent Rayleigh fading where the channel between user~$t$ in cell~$i$ and BS~$l$ is distributed as \begin{equation} \mathbf{h}_{i,t}^l \sim \mathcal{CN} \left( \mathbf{0} , \beta_{i,t}^l \mathbf{I}_M \right), \end{equation} where the variance $\beta_{i,t}^l$ determines the large-scale fading, including geometric attenuation and shadowing. By using minimum mean squared error (MMSE) estimation, the distributions of the channel estimate and estimation error are as follows.
\begin{lemma} \label{lemma: Distribution} If the system uses the pilot structure in \eqref{eq: PilotStructure1}, the channel estimate is distributed as \begin{equation} \hat{ \mathbf{h} }_{l,k}^l \sim \mathcal{CN} \left( \mathbf{0}, \gamma_{l,k}^l \mathbf{I}_M \right), \end{equation} where \begin{equation*} \gamma_{l,k}^l = \frac{ (\beta_{l,k}^l)^2 \left(\sum\limits_{b=1}^{\tau_p} \hat{p}_{l,k}^b \right)^2 }{ \sum\limits_{(i,t) \in \mathcal{S} } \beta_{i,t}^l \left( \sum\limits_{b=1}^{\tau_p} \sqrt{\hat{p}_{i,t}^b \hat{p}_{l,k}^b} \right)^2 + \sigma^2 \sum\limits_{b=1}^{\tau_p} \hat{p}_{l,k}^b }. \end{equation*} The estimation error $\mathbf{e}_{l,k}^l = \mathbf{h}_{l,k}^l - \hat{\mathbf{h}}_{l,k}^l$ is independent of the channel estimate and distributed as \begin{equation} \mathbf{e}_{l,k}^l \sim \mathcal{CN} \left( \mathbf{0} , \left( \beta_{l,k}^l - \gamma_{l,k}^l \right)\mathbf{I}_M \right). \end{equation} \end{lemma} \begin{proof} This result follows directly from standard MMSE estimation in \cite{Kay1993a}. \end{proof} Lemma~\ref{lemma: Distribution} provides the MMSE estimator for the general pilot structure in \eqref{eq: PilotStructure1}. The pilot powers as well as the inner products between pilot sequences appear explicitly in the expressions.
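The estimation quality $\gamma_{l,k}^l$ of Lemma~\ref{lemma: Distribution} is straightforward to implement. In the sketch below (random illustrative parameters), the BS superscript on $\beta_{i,t}^l$ is suppressed by fixing one reference BS ($l=0$).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, K, tau_p = 2, 3, 3
sigma2 = 1.0

beta = rng.uniform(0.1, 1.0, size=(L, K))          # beta_{i,t} to the reference BS
p_hat = rng.uniform(0.0, 1.0, size=(L, K, tau_p))  # pilot powers p_hat_{i,t}^b

def gamma_k(k, l=0):
    # gamma_{l,k} from Lemma 1 for user k in the reference cell l.
    num = beta[l, k]**2 * p_hat[l, k].sum()**2
    den = sigma2 * p_hat[l, k].sum()
    for i in range(L):
        for t in range(K):
            den += beta[i, t] * np.sqrt(p_hat[i, t] * p_hat[l, k]).sum()**2
    return num / den

print([gamma_k(k) for k in range(K)])
\end{verbatim}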
\subsection{UL Data Transmission} In the UL data transmission, user~$t$ in cell~$i$ transmits the signal $x_{i,t} \sim \mathcal{CN}(0,1)$. The $M \times 1$ received signal vector at BS~$l$ is the superposition of the transmitted signals \begin{equation} \mathbf{y}_l = \sum_{(i,t) \in \mathcal{S} } \sqrt{p_{i,t}} \mathbf{h}_{i,t}^l x_{i,t} + \mathbf{n}_l, \end{equation} where $p_{i,t}$ is the transmit power corresponding to the signal $x_{i,t}$ and the additive noise is $\mathbf{n}_l \sim \mathcal{CN} ( \mathbf{0}, \sigma^2 \mathbf{I}_M)$. To detect the transmitted signal, BS~$l$ selects a detection vector $\mathbf{v}_{l,k} \in \mathbb{C}^M$ and applies it to the received signal as \begin{equation} \label{eq: Signal-Detection} \mathbf{v}_{l,k}^H \mathbf{y}_l = \sum_{(i,t) \in \mathcal{S} } \sqrt{p_{i,t}} \mathbf{v}_{l,k}^H \mathbf{h}_{i,t}^l x_{i,t} + \mathbf{v}_{l,k}^H \mathbf{n}_l . \end{equation} A general lower bound on the UL ergodic capacity of user~$k$ in cell~$l$ is computed in \cite{Bjornson2016a} as \begin{equation} \label{eq:RateProposedPilot} R_{l,k} = \left( 1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left(1 + \textrm{SINR}_{l,k} \right), \end{equation} with $\textrm{SINR}_{l,k}$ given by \begin{equation} \label{eq: SINR_k} \frac{ p_{l,k} | \mathbb{E} \{ \mathbf{v}_{l,k}^{H} \mathbf{h}_{l,k}^l \} |^2 }{\sum\limits_{(i,t) \in \mathcal{S} } p_{i,t} \mathbb{E} \{ | \mathbf{v}_{l,k}^{H} \mathbf{h}_{i,t}^{l} |^2 \} - p_{l,k} | \mathbb{E} \{ \mathbf{v}_{l,k}^{H} \mathbf{h}_{l,k}^{l} \} |^2 + \sigma^2 \mathbb{E} \{ \| \mathbf{v}_{l,k} \|^2 \} }. \end{equation} As a contribution of this paper, we compute a closed-form expression for this lower bound in the case of MR detection with $\mathbf{v}_{l,k} = \hat{\mathbf{h}}_{l,k}^l$. This is a highly scalable detection method suitable for practical Massive MIMO systems. \begin{lemma} \label{Lemma: Achievable_Rate} If the system uses the pilot structure in \eqref{eq: PilotStructure1} and MR detection, the SE in \eqref{eq:RateProposedPilot} for user~$k$ in cell~$l$ becomes \begin{equation} R_{l,k} = \left( 1 - \frac{\tau_p}{\tau_c} \right) \log_2 \left(1 + \mathrm{SINR}_{l,k} \right), \end{equation} where $\mathrm{SINR}_{l,k}$ is shown in \eqref{eq: SINR_MRC1} at the top of this page. \end{lemma} \begin{proof} The SINR value in \eqref{eq: SINR_MRC1} is obtained by computing the moments of Gaussian distributions, similar to \cite{Chien2017a}. The detailed proof is omitted due to space limitations. \end{proof} Inspecting \eqref{eq: SINR_MRC1}, we notice that it is always advantageous to add BS antennas, since the numerator grows linearly with $M$. The first term in the denominator represents non-coherent interference from all users in the system, and it is independent of $M$. The second term in the denominator represents coherent interference caused by pilot contamination, and it grows linearly with $M$. We stress that a proper pilot design and power control $\hat{p}_{l,k}^b, \forall l,k,b,$ can improve the SE by enhancing the channel estimation quality and reducing the coherent interference caused by pilot contamination.
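For completeness, \eqref{eq: SINR_MRC1} can be evaluated directly. The sketch below (random illustrative parameters; the superscript on $\beta_{i,t}^l$ is again suppressed by fixing the reference cell $l=0$) computes the SINR of the users in that cell.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, K, tau_p, M = 2, 3, 3, 100
sigma2 = 1.0

beta = rng.uniform(0.1, 1.0, size=(L, K))          # to the reference BS (l = 0)
p = rng.uniform(0.5, 1.0, size=(L, K))             # data powers p_{i,t}
p_hat = rng.uniform(0.0, 1.0, size=(L, K, tau_p))  # pilot powers

def sinr(k, l=0):
    # Eq. (SINR_MRC1) for user k in the reference cell l.
    cross = lambda i, t: np.sqrt(p_hat[i, t] * p_hat[l, k]).sum()**2
    num = M * beta[l, k]**2 * p[l, k] * p_hat[l, k].sum()**2
    pilot = sum(beta[i, t] * cross(i, t) for i in range(L) for t in range(K))
    den1 = (pilot + sigma2 * p_hat[l, k].sum()) \
        * (sum(p[i, t] * beta[i, t] for i in range(L) for t in range(K)) + sigma2)
    den2 = M * sum(p[i, t] * beta[i, t]**2 * cross(i, t)
                   for i in range(L) for t in range(K) if (i, t) != (l, k))
    return num / (den1 + den2)

print([sinr(k) for k in range(K)])
\end{verbatim}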
\section{Max-min Fairness Optimization} \label{Section: OptProblem} In this section, we utilize the SE expression in Lemma~\ref{Lemma: Achievable_Rate} to formulate a max-min SE pilot optimization problem. We further demonstrate that the optimization problem is NP-hard; therefore, instead of seeking the global optimum, a local solution with polynomial complexity is derived. \setcounter{eqnback}{\value{equation}} \setcounter{equation}{30} \begin{figure*}[t] \begin{equation} \label{eq: SINR_MRCApproximation} \widetilde{\textrm{SINR}}_{l,k}= \frac{ M (\beta_{l,k}^l)^2 p_{l,k} \prod\limits_{b=1}^{\tau_p} \left( \hat{p}_{l,k}^b / \alpha_{l,k}^b \right)^{2\alpha_{l,k}^b} }{ \left( \sum\limits_{(i,t) \in \mathcal{S} } \beta_{i,t}^l \left( \sum\limits_{b=1}^{\tau_p} \sqrt{ \hat{p}_{i,t}^{b} \hat{p}_{l,k}^{b}} \right)^2 + \sigma^2 \sum\limits_{b=1}^{\tau_p} \hat{p}_{l,k}^b \right) \left( \sum\limits_{(i,t) \in \mathcal{S} } p_{i,t} \beta_{i,t}^l + \sigma^2 \right) + M \sum\limits_{(i,t) \in \mathcal{S} \setminus (l,k)} p_{i,t} (\beta_{i,t}^l )^2 \left(\sum\limits_{b=1}^{\tau_p} \sqrt{ \hat{p}_{i,t}^{b} \hat{p}_{l,k}^{b}} \right)^2 } \end{equation} \vspace*{-0.25cm} \hrulefill \vspace*{-0.25cm} \end{figure*} \setcounter{eqncnt}{\value{equation}} \setcounter{equation}{\value{eqnback}} \subsection{Problem Formulation} \label{Subsect: Problem} One of the key visions of Massive MIMO is to provide uniformly good service for everyone in the system, which is known as max-min fairness. In this paper, we investigate how to optimize the pilot sequences to achieve this goal. We consider the pilot powers (over the basis vectors) as optimization variables, while the data powers are assumed to be predetermined. The max-min SE optimization problem is formulated for the proposed pilot design as \setcounter{eqnback}{\value{equation}} \setcounter{equation}{23} \begin{equation} \label{eq: Opt_Prob1} \begin{aligned} &\underset{\{ \hat{p}_{l,k}^b \geq 0 \}}{ \mathrm{maximize} } && \underset{(l,k)}{\min} \; \log_2 \left( 1 + \textrm{SINR}_{l,k} \right) \\ & \text{subject to} && \frac{1}{\tau_p}\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b \leq P_{\max, l,k}, \forall l,k. \end{aligned} \end{equation} Note that this optimization problem jointly generates the pilot sequences and performs pilot power control. An equivalent epigraph-form representation of \eqref{eq: Opt_Prob1} is \begin{subequations} \label{eq: Opt_Prob2} \begin{align} & \underset{ \xi, \{ \hat{p}_{l,k}^b \geq 0 \}}{ \mathrm{maximize} } && \xi \\ & \text{subject to} && \mathrm{SINR}_{l,k} \geq \xi, \forall l,k, \label{P1:a} \\ &&& \frac{1}{\tau_p}\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b \leq P_{\max, l,k}, \forall l,k. \label{P1:b} \end{align} \end{subequations} From the expression of the SINR constraints in \eqref{eq: Opt_Prob2}, we realize that the proposed max-min SE optimization problem is a signomial program.\footnote{A function $f(x_1, \ldots, x_{N_1}) = \sum_{n=1}^{N_2} c_n \prod_{m= 1}^{N_1} x_m^{a_{n,m}}$ defined in $\mathbb{R}_{+}^{N_1}$ is signomial with $N_2$ terms $(N_2 \geq 2)$ if the exponents $a_{n,m}$ are real numbers and the coefficients $c_n$ are also real but at least one of them must be negative. If all coefficients $c_n$ are positive, $f(x_1, \ldots, x_{N_1})$ is a posynomial function.} Therefore, the max-min SE optimization problem is NP-hard in general, and seeking the optimal solution has very high complexity in any non-trivial setup \cite{Lange2014a}. However, the power constraints \eqref{P1:b} ensure a compact feasible domain and make the SINRs continuous functions, so that the optimal solution to \eqref{eq: Opt_Prob2} always exists. \subsection{Local Optimality Algorithm} This subsection provides an algorithm to approximate the optimization problem \eqref{eq: Opt_Prob2} as a geometric program. In detail, the signomial SINR constraints are converted to corresponding monomial constraints by using the weighted arithmetic mean-geometric mean inequality \cite{Chiang2007b} as in Lemma~\ref{Lemma: Local_Approximation}.\footnote{ A function $f(x_1, \ldots, x_{N_1}) = c\prod_{m=1}^{N_1} x_m^{a_m}$ defined in $\mathbb{R}_{+}^{N_1}$ is monomial if the coefficient $c >0$ and the exponents $a_m, \forall m,$ are real numbers. } \begin{lemma} \cite[Lemma~1]{Chiang2007b} \label{Lemma: Local_Approximation} Assume that a posynomial function $g(x)$ is defined from the set of $\tau_p$ monomials $\{ u_1 (x), \ldots, u_{\tau_p} (x) \}$ \begin{equation} g(x) = \sum_{b=1}^{\tau_p} u_b (x), \end{equation} then this posynomial function is lower bounded by a monomial function $\tilde{g}(x)$ as \begin{equation} g(x) \geq \tilde{g}(x) = \prod_{b=1}^{\tau_p} \left( u_{b}(x)/ \alpha_b \right)^{\alpha_b}, \end{equation} where $\alpha_b$ is a non-negative weight value corresponding to $u_{b} (x)$. We say that $\tilde{g}(x_0)$ is the best approximation to $g(x_0)$ near the given point $x_0$ in the sense of the first order Taylor expansion, if the weight $\alpha_b $ is defined as \begin{equation} \label{eq: WeightDef} \alpha_b = u_b(x_0) \Big/ \sum_{b=1}^{\tau_p} u_b (x_0) . \end{equation} \end{lemma} Using this lemma, the max-min SE optimization problem \eqref{eq: Opt_Prob2} is converted to a geometric program by bounding the term $\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b$ in the numerators of the SINR constraints: \begin{equation} \label{eq_: Power_Approximation} \sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b \geq \prod_{b=1}^{\tau_p} \left( \hat{p}_{l,k}^b / \alpha_{l,k}^b \right)^{\alpha_{l,k}^b}, \end{equation} where $\alpha_{l,k}^b$ is the weight value corresponding to $\hat{p}_{l,k}^b$. This leads to a lower bound on the SINR value for user $k$ in cell $l$: \begin{equation} \label{eq: SINRBound} \textrm{SINR}_{l,k} \geq \widetilde{\textrm{SINR}}_{l,k}, \end{equation} where $\widetilde{\textrm{SINR}}_{l,k}$ is presented in \eqref{eq: SINR_MRCApproximation} at the top of this page.
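The bound of Lemma~\ref{Lemma: Local_Approximation} with the weights \eqref{eq: WeightDef} is easy to verify numerically: the monomial $\tilde{g}$ coincides with $g$ at the expansion point and lower-bounds it elsewhere. A sketch with hypothetical monomials $u_b(x) = c_b x^{a_b}$:

\begin{verbatim}
import numpy as np

# Hypothetical monomials u_b(x) = c_b * x**a_b on x > 0.
c = np.array([1.0, 2.0, 0.5])
a = np.array([1.0, -0.5, 2.0])

g = lambda x: np.sum(c * x**a)        # posynomial g(x) = sum_b u_b(x)

x0 = 1.5
alpha = c * x0**a / g(x0)             # weights of Eq. (WeightDef), sum to 1

def g_tilde(x):
    # Monomial lower bound prod_b (u_b(x)/alpha_b)**alpha_b.
    return np.prod((c * x**a / alpha)**alpha)

for x in (0.5, 1.0, 1.5, 3.0):
    print(x, g(x), g_tilde(x))        # g_tilde <= g, equality at x0 = 1.5
\end{verbatim}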
The solution to the max-min SE optimization problem \eqref{eq: Opt_Prob2} is lower bounded by the following geometric program \setcounter{eqnback}{\value{equation}} \setcounter{equation}{31} \begin{equation} \label{eq: Opt_Prob3} \begin{aligned} & \underset{ \xi, \{ \hat{p}_{l,k}^b \geq 0 \}}{ \mathrm{maximize} } && \xi \\ & \text{subject to} && \widetilde{\mathrm{SINR}}_{l,k} \geq \xi, \forall l,k, \\ &&& \frac{1}{\tau_p}\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^b \leq P_{\max, l,k}, \forall l,k. \end{aligned} \end{equation} By virtue of the successive approximation technique \cite{Marques1978a}, a local solution to the original optimization problem \eqref{eq: Opt_Prob2} is obtained if we solve \eqref{eq: Opt_Prob3} iteratively as follows. \begin{theorem} \label{Theorem: KKTpoint} Selecting a feasible starting point $\hat{p}_{l,k}^{b, (0)}, \forall l,k,b,$ and solving \eqref{eq: Opt_Prob3} iteratively while consecutively updating the weight values $\alpha_{l,k}^b$, the solution converges to a Karush-Kuhn-Tucker (KKT) point of \eqref{eq: Opt_Prob2}. \end{theorem} \begin{proof} The proof is adapted from the general framework in \cite{Marques1978a}. We first prove that the procedure in Theorem \ref{Theorem: KKTpoint} guarantees that the solution converges to a limit point; this point is then proved to be a KKT point of \eqref{eq: Opt_Prob2}. The detailed proof is omitted due to space limitations. \end{proof} After selecting the initial powers $\hat{p}_{l,k}^{b,(0)}, \forall l,k,b$, we compute the weight values by applying \eqref{eq: WeightDef}. In each iteration, the SINR constraints are converted to monomial constraints by bounding the pilot powers as in \eqref{eq: SINR_MRCApproximation}, noting that the weight values are computed from the optimal powers of the previous iteration using \eqref{eq: WeightDef}. The solution is then obtained by solving the geometric program \eqref{eq: Opt_Prob3}. At the end of each iteration, the weight values are updated for the next iteration. We repeat the procedure until the algorithm converges to a KKT point. Convergence can be declared, for example, when the variation between two consecutive iterations is sufficiently small. The proposed local optimality approach is summarized in Algorithm \ref{Algorithm: Local_Approximation}. \begin{algorithm} \caption{Successive approximation algorithm for \eqref{eq: Opt_Prob2}} \label{Algorithm: Local_Approximation} \textbf{Input}: Set $i=1$; Select the data powers $p_{l,k}$ for $ \forall l= 1, \ldots, L; k =1, \ldots, K$; Select the initial values of the powers $\hat{p}_{l,k}^{b,(0)}$ for $\forall l=1, \ldots, L; k= 1, \ldots, K,$ and $b=1,\ldots,\tau_p$; Compute the weight values: $\alpha_{l,k}^{b,(1)} = \hat{p}_{l,k}^{b,(0)} /\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^{b,(0)}, \forall l,k,b.$ \begin{itemize} \item[1.] \emph{Iteration} $i$: \begin{itemize} \item[1.1.] Solve the geometric program \eqref{eq: Opt_Prob3} with $\alpha_{l,k}^b = \alpha_{l,k}^{b,(i)}$ to get the optimal values $\xi^{(i),\ast}$ and $\hat{p}_{l,k}^{b,(i),\ast}, \forall l,k,b$. \item[1.2.] Update the weight values: $ \alpha_{l,k}^{b,(i+1)} = \hat{p}_{l,k}^{b,(i),\ast} /\sum_{b=1}^{\tau_p} \hat{p}_{l,k}^{b,(i),\ast}, \forall l,k,b.$ \end{itemize} \item[2.] If the stopping criterion is satisfied $\rightarrow$ Stop. Otherwise, go to Step 3. \item[3.] Set $\xi^{\ast} = \xi^{(i),\ast}$ and $\hat{p}_{l,k}^{b,\ast} = \hat{p}_{l,k}^{b,(i),\ast}, \forall l,k,b$; Set $i = i+1$, go to Step 1.
\end{itemize} \textbf{Output}: The solutions $\xi^{\ast}$ and $\hat{p}_{l,k}^{b,\ast}, \forall l,k,b.$ \end{algorithm} \section{Experimental Results} \label{Section: Experimental Result} \vspace{-0.1cm} A Massive MIMO system with a coverage area of $1 \mbox{ km}^2$ comprising $4$ square cells is considered for the simulations. In each cell, a BS is located at the center, while $K$ users are uniformly distributed at distances greater than $35$ m from the BS. To even out the interference, the coverage area is wrapped around, and therefore each BS has eight neighbors. We assume that the coherence block contains $\tau_c =200$ symbols. The system operates over a $20$ MHz bandwidth and the corresponding noise variance is $-96$ dBm, including a noise figure of $5$~dB. The large-scale fading coefficient $\beta_{i,t}^l $ is computed as $\beta_{i,t}^l = -148.1 - 37.6 \log_{10} d_{i,t}^l + z_{i,t}^l$ [dB], where $d_{i,t}^l$ denotes the distance [km] between user $t$ in cell $i$ and BS $l$. The shadow fading $z_{i,t}^l$ is generated from a Gaussian distribution with zero mean and standard deviation $7$ dB.\footnote{ Shadow fading realizations were sometimes regenerated to ensure that the home BS has the largest large-scale fading to its users (i.e., $\beta_{l,k}^l$ is the maximum over all $\beta_{i,k}^l, i =1,\ldots, L$.) } The payload data symbols have equal power, $p_{l,k}=200$ mW, $\forall l,k$, and the maximum pilot power constraint is $P_{\max,l,k} = 200$ mW, $\forall l,k$. For Algorithm \ref{Algorithm: Local_Approximation}, we observed better performance with a randomized initialization of $\hat{p}_{l,k}^{b,(0)}$ than with an all-equal initialization. Consequently, we initialize $\hat{p}_{l,k}^{b,(0)}$ uniformly at random over the range $[0, P_{\max,l,k}]$. Algorithm \ref{Algorithm: Local_Approximation} converges quite fast, so the stopping criterion can simply be a fixed number of iterations (e.g., $15$). The proposed algorithm is compared with related works and a brute-force search: \begin{itemize} \item[$(i)$] \emph{Universal random pilot assignment}, as considered in \cite{Jose2011b,Bjornson2016a}. The same pilots are reused in every cell and assigned randomly to the users within the cell. Equal pilot power $\hat{p} = 200$ mW is used by all users. \item[$(ii)$] \emph{Smart pilot assignment}, as proposed in \cite{Xu2015a}. Orthogonal pilot sequences are assumed in every cell and are assigned to the users based on the mutual interference, determined by the large-scale fading coefficients. Equal pilot power $\hat{p} = 200$ mW is used by all users. \item[$(iii)$] \emph{Pilot power control with brute-force search}, which utilizes the pilot structure in \eqref{eq: PilotStructure3}. A brute-force search over all permutation matrices $\pmb{\Pi}_l$ is performed, and for each matrix the optimum pilot powers are computed. \end{itemize} \begin{figure}[t] \centering \includegraphics[ trim=0.5cm 0cm 1.2cm 0.55cm, clip=true, width=3.2in]{Fig_QoSNPHard} \vspace*{-0.4cm} \caption{ Cumulative distribution function (CDF) of the max-min SE [b/s/Hz] with $K = \tau_p = 2$ and $M = 300$.} \label{Fig-CDF-2K2B} \vspace*{-0.45cm} \end{figure} The SE is measured over different random user locations and shadow fading realizations. The SE values achieved by $(i)$--$(iii)$ and Algorithm \ref{Algorithm: Local_Approximation} are also averaged over different pilot reuse locations and initializations of $\hat{p}_{l,k}^{b,(0)}, \forall l,k,b$, respectively.
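The propagation model of this setup is reproduced in a few lines; the sketch below drops $K$ users uniformly in one $0.5$ km $\times$ $0.5$ km cell (a quarter of the stated $1$ km$^2$ area) with the $35$ m exclusion radius.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def large_scale_fading_dB(d_km):
    # beta [dB] = -148.1 - 37.6 log10(d [km]) + z, z ~ N(0, (7 dB)^2).
    return -148.1 - 37.6 * np.log10(d_km) + 7.0 * rng.standard_normal(d_km.shape)

K = 10
d = np.empty(K)
n = 0
while n < K:
    x, y = rng.uniform(-250.0, 250.0, size=2)  # meters, BS at the cell center
    if np.hypot(x, y) >= 35.0:                 # minimum BS-user distance
        d[n] = np.hypot(x, y) / 1000.0         # convert to km
        n += 1

print(large_scale_fading_dB(d))                # one realization of beta [dB]
\end{verbatim}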
Additionally, the solutions to the optimization problems are obtained using the MOSEK solver \cite{Mosek} with CVX \cite{cvx2015}. Fig.~\ref{Fig-CDF-2K2B} shows the cumulative distribution function (CDF) of the max-min SE [b/s/Hz] for the case $K=\tau_p=2$ and $M =300$. Universal random pilot assignment yields the worst SE because of the pilot contamination and mutual interference. At the $95 \%$-likely SE point, smart pilot assignment brings a significant enhancement: it is about $4.75\times$ better than universal random pilot assignment thanks to its exploitation of the mutual interference between the users \cite{Xu2015a}. Although the performance of smart pilot assignment is very close to optimal pilot assignment with brute-force search for a fixed power level \cite{Xu2015a}, by jointly optimizing the power and pilot assignment, the proposed method outperforms smart pilot assignment by providing a $1.6\times$ gain in average max-min SE. Furthermore, the similar performance of the proposed pilot design and pilot power control with brute-force search confirms the effectiveness of the proposed local optimality algorithm. \begin{figure}[t] \centering \includegraphics[trim=0.5cm 0cm 1.2cm 0.55cm, clip=true, width=3.2in]{Fig_AvgVariedPilotLength} \vspace*{-0.4cm} \caption{ Max-min SE [b/s/Hz] vs. the number of users per cell with $K = \tau_p$ and $M = 300$.} \label{Fig-VariousK} \vspace*{-0.4cm} \end{figure} \begin{figure}[t] \centering \includegraphics[trim=0.5cm 0cm 1.2cm 0.55cm, clip=true, width=3.2in]{Fig_AvgVariedBSAntennas} \vspace*{-0.4cm} \caption{ Max-min SE [b/s/Hz] vs. the number of BS antennas, $ K = \tau_p = 4$.} \label{Fig-VariousM} \vspace*{-0.45cm} \end{figure} Due to its huge computational complexity, the brute-force search is not considered hereafter when we increase the number of users. Fig.~\ref{Fig-VariousK} plots the average max-min SE as a function of the number of users per cell, assuming $\tau_p=K$. The proposed pilot design provides the highest SE over all tested scenarios. Specifically, in comparison to universal random pilot assignment, the improvement varies from $2.73\times$ to $5.22\times$ as $K$ increases from $2$ to $10$. Even though smart pilot assignment performs better than universal random pilot assignment, the proposed method still provides SE improvements of up to $1.88\times $ at $K = 10$. Moreover, we observe a dramatic reduction of the max-min SE when the number of users increases, due to stronger mutual interference. Fig.~\ref{Fig-VariousM} shows the average max-min SE versus the number of BS antennas. Among the three pilot assignment techniques, we again observe the worst SE with universal random pilot reuse: its max-min SE increases from $0.08$ [b/s/Hz] to $0.22$ [b/s/Hz] as $M$ grows from $100$ to $900$. Our proposed pilot design always yields the highest SE, and the gap to smart pilot assignment reaches up to $2.16 \times$ at $M = 900$. \section{Conclusion} \label{Section: Conclusion} \vspace*{-0.1cm} This paper proposed a new methodology for the joint optimization of pilot assignment and pilot power control in Massive MIMO systems. The key difference from prior work is to treat the pilot sequences as continuous optimization variables, instead of predefined vectors that should be assigned combinatorially. A new SE expression was computed for the proposed pilot structure and it was used to formulate a max-min SE optimization problem.
Finding the globally optimal solution is NP-hard, but we obtained an efficient algorithm that attains a local optimum and outperforms the previous state-of-the-art pilot assignment methods. Large gains in max-min SE can be achieved by the proposed pilot design. \bibliographystyle{IEEEtran}
\section{Introduction} The measurement of deep inelastic scattering cross sections $d^2\sigma/dxdQ^2$ at high $Q^2$ provides an incisive test of the Standard Model. Interesting results have already been obtained by the H1 and ZEUS collaborations at HERA --- a summary of these can be found in the accompanying experimental review by Mehta \cite{Andy}. In this talk we will concentrate on some theoretical issues; in particular, how well can we predict neutral current (NC) and charged current (CC) $e^-p$ and $e^+ p$ scattering cross sections at high $Q^2$, and what can we hope to learn from present and future measurements at HERA? Many of the issues presented here were subsequently discussed in detail by the high--$Q^2$ Working Group at the Workshop \cite{WGi}, and as a result some of the questions raised here were clarified. The importance of the high--$Q^2$ region as a probe of standard and new physics was repeatedly emphasised in the discussions. We begin by recalling the form of the DIS cross sections in the Standard Model: \begin{equation} {d^2\sigma_{NC,CC}(e^\pm p) \over d x d Q^2}\ = \ \left\{\begin{array}{c} \mbox{standard LO} \\ \mbox{expressions} \end{array} \right\}\ + \ \delta_{\rm QCD} \ + \ \delta_{\rm EW} \; . \label{eq:basic} \eeq The NC and CC cross sections are obtained from the $ep \to eX$ ($\gamma^*,Z^*$ exchange) and $ep \to \nu X$ ($W^*$ exchange) processes respectively. In (\ref{eq:basic}) the first term on the right-hand side represents the standard leading-order `parton-model' expressions for the deep inelastic structure functions ($F_{2,L,3}$). Note that at the high $Q^2 > \cO(10^4\; \GeV^2)$ values measured at HERA, the $Z^0$--exchange contribution to the NC structure functions, see below, cannot be neglected. The second term represents perturbative QCD next-to-leading order (NLO) corrections, expressions for which can be found in the literature (see for example Ref.~\cite{DICK}). Away from $x=0$ and $x=1$ these do not have any particularly dramatic effect on the leading-order cross sections. Since they are generally automatically included in the various computer codes used to fit data and make predictions, they will not be discussed further here.\footnote{Of course going beyond leading order necessitates a choice of renormalisation and factorisation scheme. All quantities referred to in this talk correspond to the $\msb$ scheme.} The third term on the right-hand side of (\ref{eq:basic}) represents electroweak radiative corrections. These are known to at least $\cO(\alpha)$ and include QED corrections from photon emission off the incoming and outgoing quarks and leptons, and also genuine electroweak corrections from propagator, vertex and box contributions associated with the electroweak gauge boson exchanges. The latter allow a precise theoretical definition of the various electroweak parameters (e.g. vector and axial couplings, gauge boson masses etc.) that appear in the leading-order expressions. For a comprehensive review, see the contribution by Spiesberger in \cite{WGi}. Three essentially separate types of information can therefore be obtained from high-precision measurements of the cross sections (\ref{eq:basic}) at HERA : \begin{itemize} \item parton distributions $f_i(x,Q^2)$ (and $\asQ$) at high $x$ and $Q^2$, \item electroweak parameters, in particular $M_W$ and $G_\mu$, from the space-like $W$ exchange in the CC cross section, \item limits on, or measurements of, new physics effects (quark substructure, leptoquark production, etc.).
\end{itemize} \begin{figure}[htb] \begin{center} \mbox{\epsfig{figure=zeusnccc.ps,height=12cm}} \caption{Charged and neutral current DIS cross sections at high $Q^2$, as measured by the ZEUS collaboration \protect\cite{ZEUS} in $e^+p$ scattering at HERA.} \end{center} \label{fig:herahiq} \end{figure} In this talk we will concentrate mainly on the first of these, i.e. the impact of CC and NC measurements on parton distributions and $\as$. Our approach will be to examine the accuracy with which predictions, based on global fits to DIS data at lower $Q^2$ and NLO QCD evolution to higher $Q^2$, can already be made for the kinematic regime covered by HERA. These predictions can then be used as a benchmark to assess the impact of present and future HERA data. Since the issues are slightly different for CC and NC cross sections, we will discuss each of these in turn. Before doing so, for reference we collect together the leading-order expressions for the relevant scattering cross sections:\footnote{In these expressions the proton mass is set to zero, and $Q^2 = x y s$.} \begin{itemize} \item[$\bullet$]\ \underline{neutral current} \beqn {d^2\sigma_{NC}(e^\pm p) \over d x d Q^2} &=& {2 \pi\alpha^2 \over x Q^4}\; \Big[ [1+ (1-y)^2] F_2(x,Q^2) -y^2 F_L(x,Q^2) \nonumber \\ && \mp y\big(2-y\big)\, xF_3 (x,Q^2) \Big] \label{eq:nc} \eeqn \beqn F_2(x, Q^2) &= & \sum_q [xq(x,Q^2) + x\bar{q}(x,Q^2) ] \; A_q(Q^2) \nonumber \\ x F_3(x, Q^2) &= & \sum_q [xq(x,Q^2) - x\bar{q}(x,Q^2) ] \; B_q(Q^2) \label{eq:F23def} \eeqn \beqn A_q(Q^2) &= & \; e_q^2 - 2 e_q v_e v_q P_Z + (v_e^2 + a_e^2) (v_q^2 + a_q^2) P_Z^2 \nonumber \\ B_q(Q^2) &= & \; \qquad - 2 e_q a_e a_q P_Z + 4 v_e a_e v_q a_q P_Z^2 \nonumber \\ P_Z &= & \; {Q^2 \over Q^2 + M_Z^2}\; {\sqrt{2} G_\mu M_Z^2\over 4 \pi \alpha} \label{eq:ABdef} \eeqn \item[$\bullet$]\ \underline{charged current} \beqn {d^2\sigma_{CC}(e^-p) \over d x d Q^2} & = & [1-{\cal P}_e ]{ G_\mu^2 \over 2\pi} \Big({M_W^2\over Q^2 + M_W^2}\Big)^2 \nonumber \\ & \times & \sum_{i,j}\; \Big[ \vert V_{u_id_j}\vert^2 u_i(x,Q^2) + (1-y)^2 \vert V_{u_j d_i}\vert^2 \bar d_i(x,Q^2) \Big] \nonumber \\ \label{eq:nce} \eeqn \beqn {d^2\sigma_{CC}(e^+p) \over d x d Q^2} & = & [1+{\cal P}_e ]{ G_\mu^2 \over 2\pi} \Big({M_W^2\over Q^2 + M_W^2}\Big)^2 \nonumber \\ & \times & \sum_{i,j}\; \Big[ \vert V_{u_id_j}\vert^2 \bar u_i(x,Q^2) + (1-y)^2 \vert V_{u_j d_i}\vert^2 d_i(x,Q^2) \Big] \nonumber \\ \label{eq:ncp} \eeqn \end{itemize} {}From these expressions we see that (i) the charged current cross section is relatively suppressed by $\cO(Q^4)$ at small $Q^2$ where the neutral current cross section is dominated by photon exchange, and (ii) at very high $Q^2 \gg \cO(M_V^2)$, the charged and neutral cross sections are of the same order. The HERA data confirm this behaviour: Fig.~\ref{fig:herahiq} shows the neutral and charged current cross sections, integrated over $x$, for $e^+p$ scattering at high $Q^2$ measured by ZEUS (see \cite{Andy}), together with the Standard Model predictions.
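As a rough numerical illustration of points (i) and (ii) (a minimal sketch: the numerical constants and the photon-only NC normalisation are our own choices, not taken from the talk), the propagator factors alone reproduce the qualitative behaviour seen in Fig.~\ref{fig:herahiq}:
\begin{verbatim}
import numpy as np

ALPHA = 1.0 / 137.036        # fine-structure constant
G_MU  = 1.16637e-5           # Fermi constant [GeV^-2]
M_Z, M_W = 91.19, 80.38      # gauge boson masses [GeV]

def P_Z(Q2):
    """Z-exchange propagator factor P_Z of the NC expressions."""
    return Q2 / (Q2 + M_Z**2) * np.sqrt(2) * G_MU * M_Z**2 / (4 * np.pi * ALPHA)

def cc_over_nc(Q2):
    """Ratio of CC to photon-only NC propagator weights (couplings ignored)."""
    cc = (G_MU**2 / (2 * np.pi)) * (M_W**2 / (Q2 + M_W**2))**2
    nc = 2 * np.pi * ALPHA**2 / Q2**2
    return cc / nc

for Q2 in (1e2, 1e4, 4e4):   # GeV^2
    print(f"Q2 = {Q2:7.0f}: P_Z = {P_Z(Q2):.2f}, CC/NC ~ {cc_over_nc(Q2):.1e}")
\end{verbatim}
The CC/NC ratio rises from $\sim 10^{-3}$ at $Q^2 = 10^2\; \GeV^2$ to order unity for $Q^2 > \cO(10^4\; \GeV^2)$, while $P_Z$ simultaneously becomes $\cO(1)$, so that $Z^0$ exchange can no longer be neglected.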
\section{Neutral current cross sections} \begin{figure}[tb] \vspace{-0.75cm} \begin{center} \epsfig{figure=abweight.ps,height=12cm} \end{center} \vspace{-0.5cm} \caption{The $Q^2$ dependence of the parton combination functions $A_q$ and $B_q$ which appear in the neutral current cross-section expressions of Eq.~(\protect\ref{eq:F23def}).} \label{fig:ABplot} \end{figure} In QCD, the longitudinal structure function $F_L$ is suppressed by $\cO(\asQ)$ compared to $F_2$ and $F_3$, and so at high $Q^2$ its contribution is numerically small. Ignoring overall factors, we therefore have \beqn \sigma_{NC}(e^- p) + \sigma_{NC}(e^+ p) &\ \sim\ & \ F_2 = x \sum_q A_q (q + \bar q)\; , \nonumber \\ \sigma_{NC}(e^- p) - \sigma_{NC}(e^+ p) &\ \sim\ & x F_3 = x \sum_q B_q (q - \bar q) \; .\nonumber \eeqn The $Q^2$ dependence of these cross section combinations (disregarding the overall $1/Q^4$) comes from two sources: $Z^0$ propagator form-factor effects, as contained in the $A_q$ and $B_q$, and logarithmic DGLAP \cite{DGLAP} evolution of the parton distributions. Both are visible in current HERA data \cite{Andy}. Note that as $Q^2 \to 0$, $A_q \to e_q^2$ and $B_q \to 0$. Thus $F_2$ is the {\it same} structure function as measured in fixed-target experiments at lower $Q^2$. The $Q^2$ dependence of the $A_q$ and $B_q$ for $u$-- and $d$--type quarks is illustrated in Fig.~\ref{fig:ABplot}. The point to note here is that the relative mix of the two quark types does not change radically as $Q^2$ increases --- up quarks still dominate at high $Q^2$. This implies that the uncertainty induced in the extrapolation of, say, $F_2^{\mu p}$ from low to high $Q^2$ at large $x$ by changes in the relative contributions of the valence $u$ and $d$ quarks is very small. \begin{figure}[tb] \begin{center} \epsfig{figure=ecartoon.ps,height=9cm} \end{center} \vspace{-0.5cm} \caption{Illustration of the different contributions to the uncertainty in the prediction of $F_2$ at high $Q^2$, at a fixed (large) value of $x$, given a measurement at lower $Q^2$.} \label{fig:ecartoon} \end{figure} Given a measurement of $F_2$ at lower $Q^2$, how well can we then predict $F_2$ in the high--$Q^2$ region probed by HERA? Figure~\ref{fig:ecartoon} is a schematic (i.e. not-to-scale) illustration of the largest sources of uncertainty.\footnote{We are assuming here that the electroweak parameters associated with the $Z^0$ exchange contribution are already very precisely known from LEP measurements.} First, any measurement error on the low--$Q^2$ data propagates directly through to high $Q^2$. For fixed-target DIS data at medium-large $x$, this uncertainty is of order $\pm 3\%$ (see for example \cite{MRST}). Second, any uncertainty on $\asQ$ affects the evolution of $F_2$ via the large-$x$ DGLAP equation \begin{equation} {\partial F_2 \over \partial \log Q^2} \sim \asQ\; P^{qq} \otimes F_2 \; . \label{eq:DGLAP} \eeq The effect on the evolution of a `world average' value and error, $\asmz = 0.1175 \pm 0.005$, is illustrated in Fig.~\ref{fig:x45plot}, taken from Ref.~\cite{MRST}. Evidently the error on $\alpha_s$ induces an uncertainty of order $\pm 5\%$ in $F_2$ at high $Q^2 \sim 10^5\; \GeV^2$.
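A back-of-the-envelope estimate makes the size of this effect plausible (the large-$x$ slope used below is our own illustrative assumption, not a fitted value). Since the evolution rate in (\ref{eq:DGLAP}) is proportional to $\as$, the relative shift in $F_2$ accumulated between $Q^2_0$ and $Q^2$ is roughly
\[
\frac{\delta F_2}{F_2} \;\approx\; \frac{\delta\as}{\as}\,
\left|\frac{\partial \ln F_2}{\partial \ln Q^2}\right| \ln\frac{Q^2}{Q^2_0}
\;\approx\; \frac{0.005}{0.1175}\times 0.15 \times \ln\left(10^{4}\right)
\;\approx\; 6\% ,
\]
for a typical large-$x$ slope $|\partial \ln F_2/\partial \ln Q^2| \sim 0.15$ and $Q^2/Q^2_0 \sim 10^4$, consistent with the $\pm 5\%$ quoted above.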
\begin{figure}[tb] \vspace{-1.0cm} \begin{center} \epsfig{figure=fig-x45plot.ps,height=14cm} \end{center} \vspace{-0.5cm} \caption{The extrapolation of the fits at $x=0.45$ to high $Q^2$ using the MRST, MRST(${\alpha_s\uparrow\uparrow}$) and MRST(${\alpha_s\downarrow\downarrow}$) sets of partons, from Ref.~\protect\cite{MRST}.} \label{fig:x45plot} \end{figure} An error in the evolution of $F_2$ could also arise if there is a significant higher-twist contribution to the low--$Q^2$ data that is not taken into account in the fitting and subsequent evolution. This is potentially a problem at very large $x$, since the higher-twist contributions are expected to behave as $1/[(1-x)Q^2]$ relative to the leading-twist contribution. It is difficult to pin down the precise size of this effect --- most analyses apply a minimum cut in $W^2 = (1-x)Q^2/x$ to fixed-target data and fit the remaining data using leading-twist NLO DGLAP. A recent study to quantify the impact of a possible higher-twist contribution on extracted parton distributions is reported in \cite{MRSTHT}. Finally, the structure function $F_2$ in the convolution on the right-hand side of the DGLAP equation (\ref{eq:DGLAP}) is sampled at $x' \geq x$. The evolution is therefore susceptible to the `feed-down' of an anomalously large contribution to $F_2$ at $x \approx 1$. Such a contribution could escape detection by the fixed-target measurements while still influencing the evolution of $F_2$ to the HERA region, see for example the study of Ref.~\cite{TUNG}. Again, it is hard to quantify the maximum effect that such an anomaly could have on $F_2$ at high $Q^2$. Certainly in global fits that adopt the physically reasonable assumption that (leading-twist) $F_2$ decreases smoothly to zero as $(1-x)^n$, with $n \simeq 3 - 4$ at low $Q^2$, there is no uncertainty in the evolution of $F_2$ from the large-$x$ `unmeasured' region. In summary, if higher-twist contributions have been correctly estimated and if there is no anomalous contribution to $F_2$ at very high $x$, then we should be able to predict the high--$Q^2$ neutral current cross sections at HERA to within about $\pm 5\%$, with the main uncertainty appearing to come from the error on $\as$. A HERA measurement at this level of precision would therefore provide a powerful check of the theoretical technology based on leading-twist NLO DGLAP evolution. If there is agreement between the low-- and high--$Q^2$ data sets, the latter can be incorporated into global fits to help pin down further the parton distributions and $\as$. Conversely, any gross deviation from the theoretical predictions could signal new physics. \section{Charged current cross sections} \begin{figure}[tb] \begin{center} \epsfig{figure=cc_chart.ps,height=14cm} \end{center} \vspace{-0.5cm} \caption{Parton decomposition of the high--$Q^2$ $e^-p$ and $e^+p$ CC cross sections.} \label{fig:cchart} \end{figure} The normalisation and $Q^2$ dependence of the charged current cross sections (\ref{eq:nce},\ref{eq:ncp}) are, in principle, sensitive to the electroweak parameters $G_\mu$ and $M_W$. The current and projected precision on the extraction of these parameters was discussed at some length in the Working Group, see \cite{WGi}. Notice, however, that there is also potentially useful information on parton distributions, since the flavour decomposition is quite different from that of the neutral current cross sections.
Ignoring overall couplings, we have \beqn \sigma_{CC}(e^+ p) &\ \sim\ & \ \ubar + \bar c + (1-y)^2 (d+s) \longrightarrow (1-y)^2 d \; ,\nonumber \\ \sigma_{CC}(e^- p) &\ \sim\ & \ u + c + (1-y)^2 (\dbar+\bar s) \longrightarrow u \; ,\nonumber \eeqn where the $x \to 1 $ limit is indicated. The quantitative breakdown is illustrated in Fig.~\ref{fig:cchart}, which shows the pdf decomposition of the $e^+p$ and $e^-p$ CC cross sections as a function of $x$ at high $Q^2$. Evidently the $e^-p$ cross section is completely dominated by the $u$--quark distribution and, as such, should be predictable with high precision, assuming of course the validity of DGLAP evolution as discussed in the previous section. More interesting is the $e^+p$ cross section. This is dominated by the $d$--quark distribution at large $x$ (though not to the same extent as the $u$ distribution dominates the $e^-p$ cross section). In Fig.~\ref{fig:cchart}, 74\% and 98\% of the leading-order cross section come from $e^+d$ scattering at $x=0.2$ and $x=0.6$, respectively. The ratio $\sigma_{CC}(e^+p)/\sigma_{CC}(e^-p)$ therefore provides a good measure of the $d/u$ ratio. Current information on $d/u$ at large $x$ comes from fixed-target $F_2^{\mu n}/F_2^{\mu p}$ measurements and the lepton asymmetry in $p \bar p \to W^\pm + X$, see for example \cite{MRST}. In the MRST fit, NMC $n/p$ data are used to constrain the large-$x$ $d$--quark pdf in this way. The corresponding predictions for $\sigma_{CC}(e^+p)$ are compared with the ZEUS data \cite{ZEUS} in Fig.~\ref{fig:zeusplot} \cite{RGR}. Although the agreement is entirely satisfactory, there is some evidence of a slight excess of data over theory in the largest $x$ ($=0.42$) bin. Could this imply that the $d/u$ ratio is being underestimated in the standard global fits? Any attempt to increase $d/u$ at large $x$ in the global fit leads to a direct conflict with the $n/p$ data. However, Bodek and Yang have argued \cite{BODEK} that the latter should be corrected for nuclear binding effects which, at large $x$, lead to a larger $d/u$ ratio, in `better' agreement with the ZEUS data. This is an issue that deserves more attention, and improved precision on the HERA $e^+p$ data would be very valuable. \begin{figure}[htb] \begin{center} \epsfig{figure=e+cc.ps,height=18cm} \end{center} \vspace{-0.5cm} \caption{Comparison of the predictions \protect\cite{RGR} for charged current $e^+p$ cross sections using MRST partons \protect\cite{MRST}, with data from the ZEUS collaboration \protect\cite{ZEUS}.} \label{fig:zeusplot} \end{figure} \section{Summary} In this brief review we have highlighted some of the physics issues relating to neutral and charged current cross sections at high $x$ and $Q^2$ at HERA. Although there is some scope for obtaining information on electroweak parameters, in particular $M_W$ \cite{WGi}, the main impact of future data is likely to be in testing perturbative QCD evolution via the DGLAP equation and in obtaining information on the pdfs. The $d$--quark distribution, for example, is directly probed by the charged current $e^+p$ cross section. Finally, we note that the HERA high $x,Q^2$ DIS kinematic region overlaps with the corresponding region that will be probed by many hard scattering processes at the LHC. \section*{References}
\section{Introduction} Let $dV=dV_n$ be the Lebesgue measure on $\C^n$ normalized so that the measure of the unit ball $\Bn$ is 1. If $n=1$ we write $dA=dV_1$. Let $d\sigma$ be the Lebesgue measure on the unit sphere $\Sn$ normalized so that $\sigma(\Sn)=1$. We denote by $H=H(\C^n)$ the space of entire functions on $\C^n$. Let $\ell>0$. For $1\le p<\infty$, $\alpha>0$ and $\rho\in\R$, the space $L^{p,\ell}_{\alpha, \rho}=L^{p}_{\alpha, \rho}$ consists of all measurable functions $f$ on $\C^n$ such that \[ \|f\|^p_{L^{p}_{\alpha, \rho}}:= \int_{\C^n} \bigl|f(z)(1+|z|)^{\rho } e^{-\frac \alpha 2 |z|^{2\ell}}\bigr|^pdV(z)<\infty, \] that is, $L^{p}_{\alpha, \rho}=L^p(\C^n; (1+|z|)^{\rho p}e^{-\frac{\alpha p}2|z|^{2\ell}}dV(z))$. Moreover, $L^{\infty,\ell}_{\alpha,\rho}=L^{\infty}_{\alpha,\rho}$ consists of all measurable functions $f$ on $\C^n$ such that \[ \|f\|_{L^{\infty}_{\alpha,\rho}}= \operatorname*{ess\,sup}_{z\in\C^n}|f(z)| (1+|z|)^{\rho}e^{-\frac \alpha 2|z|^{2\ell}}<\infty. \] We define the generalized Fock-Sobolev spaces as $F^{p}_{\alpha, \rho}:=H\cap L^{p}_{\alpha, \rho}$. When $\rho=0$, we obtain the generalized Fock spaces $F^{p}_{\alpha}=F^{p}_{\alpha, 0}$. According to this notation we write $L^{p}_{\alpha}=L^{p}_{\alpha, 0}$. The space $L^{2}_\alpha$ is a Hilbert space with the inner product \[ \langle f,g\rangle_\alpha:=\int_{\C^n} f(z)\overline{g(z)}e^{-\alpha|z|^{2\ell}}dV(z), \] and $F^{2}_\alpha$ is a closed linear subspace of $L^{2}_\alpha$. Denote by $P_\alpha$ the orthogonal projection from $L^{2}_\alpha$ to $F^{2}_\alpha$, which is usually called the Bergman projection. In \cite[Theorem 9.1]{janson-peetre-rochberg} the authors showed that $P_\alpha$ is bounded from $L^{p}_{\beta}$ to $F^{p}_{\gamma}$ if and only if $\beta<2\alpha$ and $\beta=\gamma$. In \cite{bommier-englis-youssfi} the authors studied the boundedness of $P_\alpha$ between the spaces $\mathcal{L}^p_b:=L^p(\C^n; e^{-b|z|^{2\ell}}dV(z))$ and $\mathcal{L}^q_d:=L^q(\C^n; e^{-d|z|^{2\ell}}dV(z))$. Observe that $\mathcal{L}^p_a=L^{p}_{2a/p}$. Since $\mathcal{L}^2_a=L^{2}_{a}$ the orthogonal projection $ \mathcal{P}_a$ from $\mathcal{L}^2_a$ onto $\mathcal{F}^2_a:=H\cap\mathcal{L}^2_a$ coincides with $P_a$. One advantage of considering the spaces $L^{p}_{\alpha}$ is that it permits us to include the case $p=\infty$. Their results are given in terms of a parameter $c$ defined by $c:=\frac{4d}{a^2 q}(a-\frac bp)$. Rewriting the parameters as $a=\alpha$, $b=\beta p/2$ and $d=\gamma q/2$, we have that, in our notations, $c=\gamma\frac{2\alpha-\beta}{\alpha^2}$. The main results in \cite{bommier-englis-youssfi} are: \begin{enumerate} \item If $P_\alpha$ is bounded then $c\ge 1$. \item If $c>1$ then $P_\alpha$ is bounded. \item If $c=1$ and $\ell\le 1$ then $P_\alpha$ is bounded if and only if $q\ge p$. \end{enumerate} For $c=1$ and $\ell>1$ the authors only obtain partial results. In particular they prove that if $c=1$ and $\frac{2n}{2n-1}<\ell<2$ then $P_\alpha$ is bounded if and only if $q=p$. The initial motivation of this work was to close the remaining open cases, which will be achieved by proving: \begin{enumerate} \item[(iv)] If $c=1$ and $\ell>1$ then $P_\alpha$ is bounded if and only if $q=p$. \end{enumerate} This result shows that, of the four possible mutually exclusive assertions in \cite[Proposition 17]{bommier-englis-youssfi}, (a) is the valid option. Note that if $c\ge 1$, then $a-\frac bp>0$, which in our notation is equivalent to $\beta<2\alpha$.
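Both the expression for $c$ in our notation and this last observation are immediate once the substitution $a=\alpha$, $b=\beta p/2$, $d=\gamma q/2$ is carried out in the definition of $c$:
\[
c=\frac{4d}{a^2 q}\Bigl(a-\frac bp\Bigr)
=\frac{4(\gamma q/2)}{\alpha^2 q}\Bigl(\alpha-\frac{\beta}{2}\Bigr)
=\frac{2\gamma}{\alpha^2}\cdot\frac{2\alpha-\beta}{2}
=\gamma\,\frac{2\alpha-\beta}{\alpha^2}.
\]
In particular $c\ge 1$ forces $2\alpha-\beta>0$, and the critical case $c=1$ is precisely the relation $\gamma=\alpha^2/(2\alpha-\beta)$ that appears throughout the results below.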
The latter condition, $\beta<2\alpha$, is necessary in order that the ``pointwise evaluation'' of the Bergman projection be bounded on $L^p_{\beta}$ (see Lemma \ref{lem:well-defined} below). Our main result is the following theorem for generalized Fock-Sobolev spaces. \begin{thm}\label{thm:Bprojection} Let $\ell\ge 1$, $\alpha, \beta, \gamma>0$ and $\rho,\eta\in\R$. For $1\le p,q\le \infty$, $P_\alpha$ maps boundedly $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$ if and only if one of the following conditions holds: \begin{enumerate} \item $0<\alpha^2/(2\alpha-\beta)<\gamma$. \item $\alpha^2/(2\alpha-\beta)=\gamma$, $p\le q$ and $\rho-\eta\ge 2n(\ell-1)\left(\frac 1p-\frac 1q \right)$. \item $\alpha^2/(2\alpha-\beta)=\gamma$, $q< p$ and $\rho-\eta> 2n \left(\frac 1q-\frac 1p \right)$. \end{enumerate} \end{thm} In particular for $\rho=\eta$ we obtain the following generalization of (iv). \begin{cor}\label{cor:Bprojection} Let $\ell> 1$, $\alpha, \beta, \gamma>0$ and $\rho\in\R$. For $1\le p,q\le \infty$, $P_\alpha$ maps boundedly $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\rho}$ if and only if either $0<\alpha^2/(2\alpha-\beta)<\gamma$ or $\alpha^2/(2\alpha-\beta)=\gamma$ and $p=q$. \end{cor} Our approach to obtain Theorem \ref{thm:Bprojection} differs from the one in \cite{bommier-englis-youssfi}. Instead of proving the characterizations directly, we deduce the results as a consequence of two ingredients: the first is the identity (see Proposition \ref{prop:Ponto} below) \begin{equation}\label{eqn:PLontoF} P_\alpha(L^{p}_{\beta,\rho}) =F^{p}_{\frac{\alpha^2}{2\alpha-\beta},\rho}\quad (1\le p\le\infty, \ell\ge 1, \beta<2\alpha, \rho>0) \end{equation} and the second one is the following embedding result: \begin{thm}\label{thm:embeddings} Let $\ell\ge 1$, $\beta,\gamma>0$ and $\rho,\eta\in\R$. For $1\le p,q\le\infty$, the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$ holds if and only if one of the following three conditions is satisfied: \begin{enumerate} \item\label{item:embeddings1} $\beta<\gamma$. \item\label{item:embeddings2} $\beta=\gamma$, $q\ge p$ and \,\, $2n(\ell-1)\left(\frac 1p-\frac 1q\right)\le \rho-\eta$. \item\label{item:embeddings3} $\beta=\gamma$, $q< p$ and \,\, $2n\left(\frac 1q-\frac 1p\right)< \rho-\eta$. \end{enumerate} \end{thm} Note that as an immediate consequence of Theorem \ref{thm:embeddings} we obtain: \begin{cor}$ $ \begin{enumerate} \item If $\ell\ge 1$ and the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\beta,\eta}$ holds, then $\rho\ge \eta$. \item For $\ell=1$, the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\beta,\rho}$ holds if and only if $p\le q$. \item\label{item3} For $\ell>1$, the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\beta,\rho}$ holds if and only if $p= q$. \end{enumerate} \end{cor} The proof of Theorem \ref{thm:embeddings} requires some results which may be of independent interest. For instance, assertions \eqref{item:embeddings1} and \eqref{item:embeddings2} follow from precise pointwise and $L^{p}_{\beta,\rho}$-norm estimates of the Bergman kernel. As a consequence, we derive pointwise estimates of the functions in $F^{p}_{\beta,\rho}$ and some properties on the boundedness of the Bergman projection. The most difficult part is the proof of assertion \eqref{item:embeddings3}. In this case, for $1\le q<p<\infty$, we use a technique due to D. Luecking (see \cite{luecking}), based on Khinchine's inequality, which permits the construction of adequate test functions. Then the case $1\le q<p=\infty$ follows by extrapolation.
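Schematically, once \eqref{eqn:PLontoF} and the norm estimates behind it are established, the boundedness of the Bergman projection reduces to an embedding problem: writing $\tilde\beta:=\alpha^2/(2\alpha-\beta)$, the operator $P_\alpha:L^{p}_{\beta,\rho}\to L^{q}_{\gamma,\eta}$ is bounded exactly when $F^{p}_{\tilde\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$, and Theorem \ref{thm:embeddings} applied with $\tilde\beta$ in place of $\beta$ turns conditions \eqref{item:embeddings1}--\eqref{item:embeddings3} into conditions (1)--(3) of Theorem \ref{thm:Bprojection}.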
The paper is organized as follows: In Section \ref{sec:Bergman} we obtain pointwise and $L^p_{\alpha,\rho}$-norm estimates of the Bergman kernel, from which the boundedness of the Bergman projection $P_\alpha$ on $L^p_{\alpha,\rho}$ is deduced. In Sections \ref{sec:embeddings} and \ref{sec:boundedness} we prove Theorems \ref{thm:embeddings} and \ref{thm:Bprojection} respectively. {\bf Notations:} In the next sections we only consider spaces $F^{p,\ell}_{\alpha,\rho}=F^{p}_{\alpha,\rho}$, with $\ell\ge 1$, $\alpha>0$ and $\rho\in\R$. So we omit the conditions on $\ell,\alpha$ and $\rho$ in the statements of the results. We denote by $p'$ the conjugate exponent of $p\in [1,\infty]$. Let $\N$ be the set of non-negative integers. For a multi-index $\nu=(\nu_1,\cdots,\nu_n)\in \N^n$ and $z=(z_1,\cdots,z_n)\in \C^n$, we write, as usual, $z^\nu=z_1^{\nu_1}\cdots z_n^{\nu_n}$, $\nu!=\nu_1!\cdots\nu_n!$ and $|\nu|=\nu_1+\cdots+\nu_n$. For $z,w\in\C^n$, $z\overline{w}=\sum_{j=1}^n z_j\overline{w}_j$. If $z\in\C^n$ and $r>0$ then $B(z,r)$ is the open ball in $\C^n$ with center~{$z$} and radius $r$. When $n=1$, $B(z,r)$ is denoted, as usual, by $D(z,r)$. If $E\subset\C^n$ then $\mathcal{X}_{E}$ is the characteristic function of $E$. If $X, Y$ are normed spaces, the notation $X\hookrightarrow Y$ means that the mapping $f\in X\mapsto f\in Y$ is bounded. For $\lambda\in\C\setminus\{0\}$, we denote by $\arg\lambda$ the principal branch of the argument of $\lambda$, that is, $-\pi<\arg\lambda\le\pi$. Moreover, $\lambda^\beta=|\lambda|^\beta e^{i\beta\arg\lambda}$, for $\beta\in\R$. The letter $C$ will denote a positive constant, which may vary from place to place. The notation $\Phi\lesssim \Psi$ means that there exists a constant $C>0$, which does not depend on the involved variables, such that $\Phi\le C\, \Psi$. We write $\Phi\simeq \Psi$ when $\Phi\lesssim \Psi$ and $\Psi\lesssim \Phi$. \section{The Bergman projection on $L^{p}_{\alpha,\rho}$}\label{sec:Bergman} \subsection{On the two-parameter Mittag-Leffler functions $E_{a,b}$}\quad\par The two-parameter Mittag-Leffler functions are the entire functions \[ E_{a,b}(\lambda) :=\sum_{k=0}^\infty \frac{\lambda^k}{\Gamma(a k+b)} \qquad(\lambda\in\C,\,\, a,b>0). \] A good general reference for the Mittag-Leffler functions is the book~{\cite{gorenflo-kilbas-mainardi-rogosin}}. In this section we recall the asymptotic expansions of the two-parameter Mittag-Leffler functions and their derivatives. Those expansions will be useful to obtain pointwise and norm estimates of the Bergman kernel. \begin{thm}[{\cite[Theorem 1.2.1]{popov-sedletskii}}] Let $a\in (0,1)$ and $b>0$. Then, for $|\lambda|\to\infty$, we have \begin{equation}\label{eqn:Eab} E_{a,b}(\lambda)= \begin{cases} \frac 1a \lambda^{(1-b)/a}e^{\lambda^{1/a}} +O(\lambda^{-1}), & \quad\text{if}\quad|\arg\lambda|\le a\pi,\\ O(\lambda^{-1}), & \quad\text{if}\quad|\arg\lambda|\ge a\frac{2\pi}3. \end{cases} \end{equation} \end{thm} By Cauchy's formula (see~\cite[Theorem 1.4.2]{olver}), the asymptotic expansions of the $m$-th derivatives of $E_{a,b}$ (on ``smaller'' sectors than the ones involved in~{\eqref{eqn:Eab}}) can be obtained by differentiating $m$ times the terms in~{\eqref{eqn:Eab},} that is, \begin{equation}\label{eqn:derEab} E_{a,b}^{(m)}(\lambda)= \begin{cases} \frac 1a \frac{d^m}{d \lambda^m}\left(\lambda^{(1-b)/a} e^{\lambda^{1/a}}\right) +O(\lambda^{-1-m}), & \text{if}\quad|\arg\lambda|\le a\frac{3\pi}4,\\ O(\lambda^{-1-m}), & \text{if}\quad|\arg\lambda|\ge a\frac{3\pi}4.
\end{cases} \end{equation} \subsection{The Bergman kernel}\quad\par The next result, which is obtained in \cite{bommier-englis-youssfi}, gives a description of the Bergman kernel. The main tool to compute the norm of the monomials in $F^{2}_{\alpha}$ is the identity \[ \Gamma(x)=\int_0^\infty t^{x-1}e^{-t }dt= 2\ell \gamma^x \int_0^\infty s^{2\ell x-1} e^{-\gamma s^{2\ell}}ds\quad (x>0,\,\gamma>0). \] \begin{lem} The system $\bigl\{\frac{w^\nu}{\|w^\nu\|_{F^{2}_\alpha}}\bigr\}_{\nu\in \N^n}$ is an orthonormal basis for $F^{2}_\alpha$, so the Bergman projection from $L^2_\alpha$ onto $F^2_\alpha$ is \[ P_\alpha f(z)=\langle f,K_{\alpha,z}\rangle_\alpha =\int_{\C^n}f(w)K_{\alpha}(z,w)e^{-\alpha |w|^{2\ell}}dV(w), \] where \[ K_\alpha(z,w)=\overline{K_{\alpha,z}(w)} =\sum_{\nu\in \N^n}\frac{z^\nu\overline{w}^\nu}{\|w^\nu\|^2_{F^{2}_{\alpha}}} \] is the Bergman kernel. Namely, since $\|w^\nu\|^2_{F^{2}_{\alpha}}= \frac 1\ell\frac{n!\, \nu!\, \Gamma\left(\frac{|\nu|+n}\ell\right)}{(n-1+|\nu|)!\;\alpha^{(|\nu|+n)/\ell}} $, $ K_{\alpha}(z,w)=H_{\alpha}(z\overline{w}) $, where \[ H_{\alpha}(\lambda) :=\frac{\ell\alpha^{n/\ell}}{n!} \sum_{k=0}^\infty \frac{(n-1+k)!}{k!}\frac{\alpha^{k/\ell}\lambda^k} {\Gamma\left(\frac{k+n}\ell\right)} =\frac{\ell\alpha^{n/\ell}}{n!} E_{1/\ell,1/\ell}^{(n-1)}(\alpha^{1/\ell}\lambda). \] In particular, for any $\delta>0$ we have \begin{equation}\label{eqn:Kdelta} K_{\alpha}(z,\delta w) =\delta^{-n}K_{\alpha\delta^\ell }(z,w). \end{equation} \end{lem} \begin{rem}\label{rem:pointwB} In order to obtain norm estimates of the Bergman kernel it is useful to make the following change of variables. Given $z\in\C^n$, there is a unitary mapping $U:\C^n\to\C^n$ such that $U(z)=(|z|,0,\,\dots,0)$. Then $ K_{\alpha}(w,z)=H_{\alpha}(|z|u_1) $, where $U(w)=(u_1,\cdots,u_n)$, so we may assume $z=(|z|,0,\cdots,0)$. \end{rem} The remaining part of this section is devoted to deriving pointwise and norm estimates of the Bergman kernel, which will be the key tools to obtain our main results. The following corollaries are consequences of \eqref{eqn:derEab}. \begin{cor}\label{cor:diffE} Let $n$ be a positive integer. For $|\lambda|\to\infty$, we have that \[ E^{(n-1)}_{1/\ell,1/\ell}(\lambda)= \begin{cases} \ell^n \lambda^{n(\ell-1)}e^{\lambda^\ell} (1+O(\lambda^{-\ell}))+O(\lambda^{-n}), & \text{if}\quad|\arg \lambda|\le \frac{3\pi}{4\ell},\\ O(\lambda^{-n}), & \text{if}\quad|\arg \lambda|\ge \frac{3\pi}{4\ell}. \end{cases} \] \end{cor} \begin{proof} For $\ell=1$, $E_{1/\ell,1/\ell}(\lambda)=e^\lambda$ so $E^{(n-1)}_{1/\ell,1/\ell}(\lambda)=e^\lambda$, and the above asymptotic identity is obvious in this case. Next assume $\ell>1$. By induction on $n$ it is easy to check that \[ \ell\,\frac{d^{n-1}}{d\lambda^{n-1}}\lambda^{\ell-1} e^{\lambda^\ell} =\ell^n \lambda^{n(\ell-1)}e^{\lambda^\ell} (1+O(\lambda^{-\ell})) \quad(|\lambda|\to\infty,\,|\arg\lambda|<\pi/\ell). \] By combining this identity with \eqref{eqn:derEab} we obtain the result. \end{proof} \begin{cor}\label{cor:pointwB} For any $\delta>0$ and $N>2$, let $S^{\delta}_N:=D(0,\delta)\cup S_N$, where \[ S_N:=\{0\}\cup \{\,\lambda\in\C\setminus\{0\} : |\arg\lambda|\le\tfrac{\pi}{N\ell}\,\}. \] Then there exist $\delta>0$ and $N>2$ such that \begin{align} &|H_{\alpha}(\lambda)| \simeq(1+|\lambda|)^{n(\ell-1)}\, \bigl|e^{\alpha\lambda^\ell}\bigr| \qquad (\lambda\in S^{\delta}_N), \label{eqn:simeq:estimate:H}\\ & |H_{\alpha}(\lambda)|\lesssim (1+|\lambda|)^{n(\ell-1)}\,e^{\alpha\cos(\frac{\pi}N)|\lambda|^\ell} \qquad (\lambda\in \C\setminus S^{\delta}_N).
\label{eqn:lesssim:estimate:H} \end{align} In particular, \begin{equation}\label{eqn:rough:estimate:H} \mathcal{X}_{S_N}(\lambda)\lesssim|H_{\alpha}(\lambda)|\lesssim (1+|\lambda|)^{n(\ell-1)}\,e^{\alpha|\lambda|^\ell} \qquad(\lambda\in\C). \end{equation} \end{cor} \begin{proof} Corollary~{\ref{cor:diffE}} shows that there is a large $R>0$ so that \begin{align} &|H_{\alpha}(\lambda)|\simeq (1+|\lambda|)^{n(\ell-1)}\bigl|e^{\alpha\lambda^\ell}\bigr| \qquad (|\lambda|\ge R,\,|\arg\lambda|\le\tfrac{\pi}{3\ell}), \label{eqn:simeq:estimate:H:0}\\ &|H_{\alpha}(\lambda)|\lesssim (1+|\lambda|)^{n(\ell-1)} e^{\frac{\alpha}2|\lambda|^\ell} \qquad (|\lambda|\ge R,\,|\arg\lambda|\ge\tfrac{\pi}{3\ell}). \label{eqn:lesssim:estimate:H:0} \end{align} Since $H_{\alpha}$ is a continuous positive function on the interval $[0,\infty)$, we have that there exist a small $\delta>0$ and a large $N>2$ such that \begin{equation} \label{eqn:simeq:estimate:H:1} |H_{\alpha}(\lambda)|\simeq 1\simeq (1+|\lambda|)^{n(\ell-1)}\bigl|e^{\alpha\lambda^\ell}\bigr| \qquad(\lambda\in S^{\delta}_N,\,|\lambda|\le R). \end{equation} Therefore~{\eqref{eqn:simeq:estimate:H}} directly follows from~{\eqref{eqn:simeq:estimate:H:0}} and~{\eqref{eqn:simeq:estimate:H:1}}. Moreover, \eqref{eqn:lesssim:estimate:H} is deduced from~{\eqref{eqn:simeq:estimate:H:0}}, \eqref{eqn:lesssim:estimate:H:0} and the fact that $H_{\alpha}$ is bounded on $D(0,R)$. \end{proof} As an immediate consequence of the above results we obtain the following pointwise estimate for the Bergman kernel. \begin{prop}\label{prop:pointK} There exist $\delta>0$ and $N>2$ such that \begin{align} &|K_{\alpha}(w,z)| \simeq(1+|z\overline w|)^{n(\ell-1)}\,e^{\alpha\Re((z\overline w)^\ell)} \quad (z\overline w\in S^{\delta}_N), \label{eqn:simeq:estimate:K}\\ & |K_{\alpha}(w,z)|\lesssim (1+|z\overline w|)^{n(\ell-1)}\, e^{\alpha\cos(\frac{\pi}N)|z\overline w|^\ell} \quad (z\overline w\in \C\setminus S^{\delta}_N). \label{eqn:lesssim:estimate:K} \end{align} \end{prop} Now we state norm estimates for the Bergman kernel. \begin{prop}\label{prop:pnormBergman} Let $1\le p\le\infty$. Then \[ \|K_{\alpha}(\cdot,z)\|_{F^{p}_{\alpha,\rho}}\simeq (1+|z|)^{\rho +2n(\ell-1)/p'}e^{\frac{\alpha}2|z|^{2\ell}} \quad(z\in\C^n). \] \end{prop} This estimate for $1\le p<\infty$ and $\rho=0$ is stated without a detailed proof in \cite[Section 8.1]{bommier-englis-youssfi}. Since this norm estimate of the Bergman kernel is essential in order to obtain our main theorems and it is deduced from several non-trivial technical results, we include its proof. The main tool is the pointwise estimate of $H_{\alpha}$ given in Corollary~{\ref{cor:pointwB}}, but we also need the following three technical lemmas. \begin{lem} \label{lem:est:sup} Let $\alpha>0$ and let $\beta\in\R$. Then \[ \sup_{x\ge0}\,(1+x)^{\beta}e^{-\alpha(x-a)^2}\simeq (1+a)^{\beta}\qquad(a\ge0). \] \end{lem} \begin{proof} Since $(1+x)^{\beta}e^{-\alpha(x-a)^2}= ((1+x)^{\beta/\alpha}e^{-(x-a)^2})^{\alpha}$, for any $a,x\ge0$, we may assume that $\alpha=1$. Then it is clear that $\sup_{x\ge0}\,(1+x)^{\beta}e^{-(x-a)^2}\ge (1+a)^{\beta}$, for every $a\ge0$, and so we only have to prove that \[ (1+x)^{\beta}e^{-(x-a)^2}\lesssim (1+a)^{\beta}\qquad(a,x\ge0). \] Let $x\ge0$. If $a-(1+a)/2\le x$ then $-1/2\le(x-a)/(1+a)$ and so, for any $\beta\in\R$, \[ (1+x)^{\beta}e^{-(x-a)^2} \le (1+a)^{\beta}\,\Bigl(1+\frac{x-a}{1+a}\Bigr)^{\beta} e^{-\left(\frac{x-a}{1+a}\right)^2} \le (1+a)^{\beta} \sup_{t\ge-1/2}(1+t)^{\beta}e^{-t^2}. 
\] Next assume $x\le a-(1+a)/2$. If $\beta<0$ then \[ (1+x)^{\beta}e^{-(x-a)^2}\le e^{-(x-a)^2}\le e^{-\frac14(1+a)^2} \le (1+a)^{\beta}\, \sup_{t\ge1} t^{-\beta}e^{-t^2/4}. \] Finally, if $\beta\ge0$ then $(1+x)^{\beta}e^{-(x-a)^2}\le(1+x)^{\beta}\le(1+a)^{\beta}$. \end{proof} \begin{lem} \label{lem:est:Cm} Let $a>0$ and let $b\in\R$. Then \[ \int_{\C^{n-1}} (1+y+|w|)^b e^{-a(y^2+|w|^2)^{\ell}} dV_{n-1}(w) \simeq (1+y)^{b-2(n-1)(\ell-1)} e^{-a\,y^{2\ell}}\quad (y\ge 0). \] \end{lem} \begin{proof} It is clear that the estimate of the statement holds for $0\le y\le 1$. Thus, by integration in polar coordinates, we only have to prove that \[ I(y):= \int_{0}^\infty (y+r)^b e^{-a(y^2+r^2)^{\ell}}\,r^{2n-3}dr \simeq y^{b-2(n-1)(\ell-1)} e^{-a\,y^{2\ell}}\quad (y\ge 1). \] The change of variables $r=yt$ shows that $I(y)\simeq y^{b+2(n-1)}\,e^{-ay^{2\ell}} J(y)$, where \[ J(y):= \int_{0}^\infty (1+t)^b e^{-ay^{2\ell}((1+t^2)^{\ell}-1)}\,t^{2n-3}dt. \] We obtain the lower estimate for $I(y)$ by considering the root $t_y>0$ of the equation $y^{2\ell}((1+t^2)^\ell-1)=1$, that is, \[ t_y=\big((1+y^{-2\ell})^{1/\ell}-1\big)^{1/2} \simeq y^{-\ell}, \] and observing that \[ J(y)\ge \int_{0}^{t_y} (1+t)^b e^{-ay^{2\ell}((1+t^2)^{\ell}-1)}\,t^{2n-3}dt \simeq \int_{0}^{t_y} t^{2n-3}dt\simeq y^{-2(n-1)\ell}. \] In order to get the upper estimate, note that if $\ell\ge 1$ then $(1+t^2)^\ell-1\ge \ell t^2$, and so \[ J(y) \le \int_{0}^{\infty}(1+t)^b e^{-a\ell y^{2\ell}t^2}\,t^{2n-3}dt \le 2^{\max(b,0)}(J_1(y)+J_2(y)), \] where \[ J_1(y):= \int_0^1 e^{-a\ell y^{2\ell}t^2}\,t^{2n-3}dt \quad\text{and}\quad J_2(y):=\int_1^\infty e^{-a\ell y^{2\ell}t^2}\,t^{2n-3+b}dt. \] By making the change of variables $s=y^\ell t$, we have that \begin{gather*} J_1(y)=y^{-2(n-1)\ell}\int_0^{y^{\ell}}e^{-a\ell s^2}s^{2n-3}\,ds \lesssim y^{-2(n-1)\ell}\qquad\mbox{and}\\ J_2(y) = \, y^{-(2n-2+b)\ell} \int_{y^{\ell}}^{\infty} e^{-a\ell s^2} s^{2n-3+b} ds \lesssim y^{-(2n-2+b)\ell}\int_{y^{\ell}}^{\infty} e^{-a\ell s} ds \lesssim y^{-2(n-1)\ell}, \end{gather*} which ends the proof. \end{proof} \begin{lem}\label{lem:estuv} Let $a>0$ and let $b\in\R$. Then \[ I(z)=I_{a,b}(z):=\int_\C\frac{e^{-a|v-z|^2}}{(1+|v|)^b}\,dA(v) \simeq \frac{1}{(1+|z|)^{b}} \qquad(z\in\C) \] and \[ J(z)=J_{a,b}(z):=\int_\C\frac{e^{-a(|v|-|z|)^2}}{(1+|v|)^b}\,dA(v) \simeq \frac{1}{(1+|z|)^{b-1}} \qquad(z\in\C). \] \end{lem} \begin{proof} Since $I_{a,b}(z)\simeq I_{1,b}(za^{1/2})$ and $J_{a,b}(z)\simeq J_{1,b}(za^{1/2})$, we may assume that $a=1$. Moreover, $I(z)\simeq 1\simeq J(z)$, when $|z|\le1$, so we only have to prove the estimates for $|z|\ge 1$. In this case we split each of the above integrals into the corresponding three integrals on the sets~{$S_1=\{v\in\C: |v|<|z|/2\}$,} $S_2=\{v\in\C: |z|/2\le|v|\le 2|z|\}$ and $S_3=\{v\in\C: |v|> 2|z|\}$, that is, $I(z)=I_1(z)+I_2(z)+I_3(z)$ and $J(z)=J_1(z)+J_2(z)+J_3(z)$, where \[ I_k(z):=\int_{S_k}\frac{e^{-|v-z|^2}}{(1+|v|)^b}\,dA(v) \quad\mbox{and}\quad J_k(z):=\int_{S_k}\frac{e^{-(|v|-|z|)^2}}{(1+|v|)^b}\,dA(v). \] If $v\in S_1$ then $|v-z|\ge|z|-|v|>|z|/2$. Thus \[ I_1(z)\le J_1(z)\lesssim e^{-|z|^2/4}\int_0^{|z|/2}\frac{r\,dr}{(1+r)^b} \lesssim e^{-|z|^2/4}(1+|z|)^{|b|+2} \lesssim\frac1{(1+|z|)^b}. \] If $v\in S_2$ then $(1+|z|)/2\le1+|v|\le2(1+|z|)$. Therefore \[ I_2(z)\simeq\frac1{(1+|z|)^b}\int_{S_2}e^{-|v-z|^2}dA(v) \,\mbox{ and }\, J_2(z)\simeq\frac1{(1+|z|)^b}\int_{S_2}e^{-(|v|-|z|)^2}dA(v).
\] Since $D(z,1/2)\subset S_2$, we have \[ 0 < \int_{D(0,1/2)}e^{-|w|^2}dA(w) \le \int_{S_2}e^{-|v-z|^2}dA(v) \le \int_{\C}e^{-|w|^2}dA(w)<\infty, \] and so $I_2(z)\simeq(1+|z|)^{-b}$. On the other hand, $J_2(z)\simeq(1+|z|)^{1-b}$ because \[ \int_{S_2}e^{-(|v|-|z|)^2}dA(v) \simeq\int_{|z|/2}^{2|z|}e^{-(r-|z|)^2}r\,dr \simeq|z|\int_{-|z|/2}^{|z|}e^{-t^2}dt \simeq|z|. \] If $v\in S_3$ then $|v-z|\ge|v|-|z|>|v|/2 $, and hence \[ I_3(z)\le J_3(z) \lesssim \int_{2|z|}^\infty \frac{re^{-r^2/4}}{(1+r)^b}\,dr \le e^{-|z|^2/2}\int_0^\infty \frac{re^{-r^2/8}}{(1+r)^b}\,dr \lesssim \frac{1}{(1+|z|)^{b}}.\qedhere \] \end{proof} \begin{proof}[Proof of Proposition~{\ref{prop:pnormBergman}}] Let $p=\infty$. Then the lower estimate follows from~{\eqref{eqn:simeq:estimate:H}:} \begin{align*} \|K_{\alpha}(\cdot,z)\|_{F^{\infty}_{\alpha,\rho}} \ge &\, K_\alpha(z,z)\,(1+|z|)^\rho\, e^{-\frac\alpha 2|z|^{2\ell}} = H_{\alpha}(|z|^2)\, (1+|z|)^\rho\, e^{-\frac\alpha 2|z|^{2\ell}}\\ \gtrsim &\,(1+|z|^2)^{n(\ell-1)}\, (1+|z|)^\rho\, e^{\frac\alpha 2|z|^{2\ell}} \simeq (1+|z|)^{\rho+2n(\ell-1)}\, e^{\frac\alpha 2|z|^{2\ell}}. \end{align*} In order to obtain the upper estimate, first note that \eqref{eqn:rough:estimate:H} and the Cauchy-Schwarz inequality (that is, $|z\overline{w}|\le|z||w|$, for any $z,w\in\C^n$) show that \begin{align*} |K_{\alpha}(w,z)|=|H_{\alpha}(z\overline{w})| \lesssim &\, (1+|z\overline{w}|)^{n(\ell-1)}\, e^{\alpha|z\overline{w}|^{\ell}} \\ \lesssim &\, (1+|z|)^{n(\ell-1)}(1+|w|)^{n(\ell-1)}\, e^{\alpha|z|^{\ell}|w|^{\ell}}. \end{align*} Therefore $\|K_{\alpha}(\cdot,z)\|_{F^{\infty}_{\alpha,\rho}} \lesssim (1+|z|)^{n(\ell-1)}\,e^{\frac{\alpha}2|z|^{2\ell}} M(|z|)$, where \[ M(|z|) =\sup_{w\in\C}\,(1+|w|)^{\rho+n(\ell-1)} e^{-\frac{\alpha}2(|w|^{\ell}-|z|^\ell)^2} \simeq\sup_{x\ge0}\,(1+x)^{\frac{\rho+n(\ell-1)}{\ell}} e^{-\frac{\alpha}2(x-|z|^\ell)^2}. \] Since, by Lemma~{\ref{lem:est:sup}}, $M(|z|) \simeq (1+|z|^{\ell})^{\frac{\rho+n(\ell-1)}{\ell}} \simeq (1+|z|)^{\rho+n(\ell-1)}$, we have just proved the upper estimate in this case. Now assume that $p<\infty$. By making the change of variables $u=Uw$, where $U:\C^n\to\C^n$ is a unitary mapping such that $U(z)=(|z|,0,\dots,0)$, we get that \[ \|K_{\alpha}(\cdot,z)\|^p_{F^{p}_{\alpha,\rho}} \simeq \int_{\C} |H_{\alpha}(|z|u_1)|^p\,\Psi(u_1)\,dA(u_1), \] where \[ \Psi(u_1):= \int_{\C^{n-1}} (1+|u_1|+|u'|)^{\rho p}\, e^{-\frac{\alpha p}{2}(|u_1|^2+|u'|^2)^{\ell}}\,dV_{n-1}(u'). \] Then Lemma \ref{lem:est:Cm} implies that \begin{equation}\label{eqn:bergman:kernel:norm:estimate}\begin{split} &\|K_{\alpha}(\cdot,z)\|^p_{F^{p}_{\alpha,\rho}}\\ &\quad\simeq \int_{\C} |H_{\alpha}(|z|u_1)|^p\, (1+|u_1|)^{\rho p-2(n-1)(\ell-1)}\, e^{-\frac{\alpha p}2\,|u_1|^{2\ell}}\,dA(u_1). \end{split} \end{equation} Now pick $N>2$ satisfying the statement of Corollary~{\ref{cor:pointwB}}. Then note that~{\eqref{eqn:rough:estimate:H}} implies \[ \mathcal{X}_{S_N}(u_1) \lesssim |H_{\alpha}(|z|u_1)|^p \lesssim (1+|u_1|)^{np(\ell-1)} e^{\alpha p2^{\ell}|u_1|^{\ell}} \quad(|z|\le2,\,u_1\in\C). \] Thus~{\eqref{eqn:bergman:kernel:norm:estimate}} shows that \[ \|K_{\alpha}(\cdot,z)\|^p_{F^{p}_{\alpha,\rho}}\simeq 1 \simeq (1+|z|)^{\rho+2n(\ell-1)/p'}e^{\frac{\alpha}2|z|^{2\ell}} \quad(|z|\le2), \] so we only have to prove the norm estimate for $|z|>2$.
In order to do that, we split the integral in~{\eqref{eqn:bergman:kernel:norm:estimate}} as the sum of the three integrals $\mathcal{I}_1(|z|)$, $\mathcal{I}_2(|z|)$ and $\mathcal{I}_3(|z|)$ on the sets $E_1=\{u_1\in\C:|u_1|>1, |\arg u_1|\le\pi/(N\ell)\}$, $E_2=\{u_1\in\C:|u_1|>1, |\arg u_1|>\pi/(N\ell)\}$ and $E_3=\{u_1\in\C:|u_1|\le 1\}$, respectively. To estimate $\mathcal{I}_1(|z|)$ recall that~{\eqref{eqn:simeq:estimate:H}} gives \[ |H_{\alpha}(|z|u_1)|^p \simeq (|z||u_1|)^{np(\ell-1)} e^{\alpha p|z|^{\ell}\Re u_1^{\ell}} \quad(u_1\in E_1,\,|z|>2), \] so \[ \mathcal{I}_1(|z|)\simeq |z|^{np(\ell-1)}e^{\frac{\alpha p}2|z|^{2\ell}} \int_{E_1}|u_1|^{np(\ell-1)+\rho p-2(n-1)(\ell-1)} e^{-\frac{\alpha p}2|u_1^{\ell}-|z|^{\ell}|^2}dA(u_1). \] By making the change of variables $v=u_1^\ell$ we have that \[ \mathcal{I}_1(|z|) \simeq |z|^{np(\ell-1)}e^{\frac{\alpha p}2|z|^{2\ell}} \int_{\{|v|\ge 1, |\arg v|\le \pi/N\}} |v|^{\beta} e^{-\frac{\alpha p}2|v-|z|^{\ell}|^2} dA(v), \] where $\beta:=(n(\ell-1)(p-2)+\rho p)/\ell$. Since for $|z|>2$ we have the inclusions \begin{align*} D(|z|^{\ell},\sin(\pi/N)) &\subset \{v\in\C: |v|>1\}\cap D(|z|^{\ell},|z|^{\ell}\sin(\pi/N))\\ &\subset\{v\in\C: |v|>1, |\arg v|\le\pi/N\}, \end{align*} the preceding integral $\mathcal{I}'_1(|z|)$ satisfies \[ \mathcal{I}'_1(|z|) \ge\int_{D(|z|^{\ell},\sin(\pi/N))} |v|^{\beta} e^{-\frac{\alpha p}2|v-|z|^{\ell}|^2} dA(v) \simeq|z|^{\beta\ell}. \] Moreover, Lemma~{\ref{lem:estuv}} shows that $\mathcal{I}'_1(|z|)\lesssim I_{\alpha p/2,-\beta}(|z|^{\ell}) \simeq |z|^{\beta\ell}$. It follows that $\mathcal{I}'_1(|z|)\simeq |z|^{\beta\ell} =|z|^{n(\ell-1)(p-2)+\rho p}$, and hence \begin{equation}\label{eqn:estimate1} \mathcal{I}_1(|z|)\simeq |z|^{np(\ell-1)}e^{\frac{\alpha p}2|z|^{2\ell}}\mathcal{I}'_1(|z|) \simeq (1+|z|)^{2n(\ell-1)(p-1)+\rho p}\, e^{\frac{\alpha p}2|z|^{2\ell}}. \end{equation} Now we estimate $\mathcal{I}_2(|z|)$. By~{\eqref{eqn:lesssim:estimate:H}}, \[ |H_{\alpha}(|z|u_1)|^p \lesssim (|z||u_1|)^{np(\ell-1)} e^{\alpha p\cos(\frac{\pi}N)\,|z|^{\ell} |u_1|^{\ell}} \quad(u_1\in E_2,\,|z|>2), \] so $\mathcal{I}_2(|z|)\lesssim |z|^{np(\ell-1)} e^{\frac{\alpha p}2\cos^2(\frac{\pi}N)\,|z|^{2\ell}}\mathcal{I}'_2(|z|)$, where \begin{align*} \mathcal{I}'_2(|z|) := & \int_{E_2}|u_1|^{np(\ell-1)+\rho p-2(n-1)(\ell-1)} e^{-\frac{\alpha p}2 \{|u_1|^{\ell}-|z|^{\ell}\cos(\frac{\pi}N)\}^2} dA(u_1)\\ \simeq & \int_1^{\infty} r^{1+np(\ell-1)+\rho p-2(n-1)(\ell-1)} e^{-\frac{\alpha p}2 \{r^{\ell}-|z|^{\ell}\cos(\frac{\pi}N)\}^2} dr. \end{align*} Then we make the change of variables $t=r^{\ell}$ to get that \[ \mathcal{I}'_2(|z|)\simeq \int_1^{\infty}t^{\beta+1}e^{-\frac{\alpha p}2 \{t-|z|^{\ell}\cos(\frac{\pi}N)\}^2} dt, \] so Lemma~{\ref{lem:estuv}} shows that $\mathcal{I}'_2(|z|)\lesssim J_{\frac{\alpha p}2,-\beta}(|z|^{\ell}\cos(\frac{\pi}N)) \simeq|z|^{\beta\ell+\ell}$. Hence \begin{equation}\label{eqn:estimate2} \mathcal{I}_2(|z|)\lesssim |z|^{np(\ell-1)+\beta\ell+\ell} e^{\frac{\alpha p}2\cos^2(\frac{\pi}N)\,|z|^{2\ell}}\lesssim (1+|z|)^{2n(\ell-1)(p-1)+\rho p}\, e^{\frac{\alpha p}2|z|^{2\ell}}. \end{equation} Finally, since by~{\eqref{eqn:rough:estimate:H}} we have that \[ |H_{\alpha}(|z|u_1)|^p\lesssim (1+|z|)^{np(\ell-1)}e^{\alpha p|z|^{\ell}}\quad(u_1\in E_3, |z|>2), \] we obtain that \begin{equation}\label{eqn:estimate3} \mathcal{I}_3(|z|) \lesssim (1+|z|)^{np(\ell-1)}e^{\alpha p|z|^{\ell}} \lesssim (1+|z|)^{2n(\ell-1)(p-1)+\rho p}e^{\frac{\alpha p}2|z|^{2\ell}}.
\end{equation} Taking into account~{\eqref{eqn:estimate1}}, \eqref{eqn:estimate2} and~{\eqref{eqn:estimate3}}, we conclude that \[ \|K_{\alpha}(\cdot,z)\|^p_{F^{p}_{\alpha,\rho}} \simeq (1+|z|)^{2n(\ell-1)(p-1)+\rho p} e^{\frac{\alpha p}2|z|^{2\ell}} \quad(|z|>2), \] which ends the proof. \end{proof} \begin{cor}\label{cor:qnormBergman} Let $1\le p\le\infty$. Then \[ \|K_{\alpha}(\cdot,z)\|_{F^{p}_{\beta,\rho}} \simeq (1+|z|)^{\rho+2n(\ell-1)/p'} e^{\frac{\alpha^2}{2\beta}|z|^{2\ell}}\quad(z\in\C^n). \] \end{cor} \begin{proof} Since $K_{\alpha}(\delta z, w)=\delta^{-n}K_{\delta^\ell \alpha}(z,w)$, for $\delta=(\beta/\alpha)^{1/\ell}$, we have \begin{align*} \|K_{\alpha}(\cdot,z)\|_{F^{p}_{\beta,\rho}} &\simeq \|K_{\beta}(\cdot,z/\delta)\|_{F^{p}_{\beta,\rho}}\\ &\simeq (1+|z|/\delta)^{\rho+2n(\ell-1)/p'} e^{\frac{\beta}2|z/\delta|^{2\ell}}\\ &\simeq(1+|z|)^{\rho+2n(\ell-1)/p'} e^{\frac{\alpha^2}{2\beta}|z|^{2\ell}}. \end{align*} This ends the proof. \end{proof} \subsection{The Bergman projection}\quad\par The next lemma shows that the Bergman projection $P_{\alpha}$ is pointwise well defined on $L^{p}_{\beta,\rho}$ if and only if $\beta<2\alpha$. \begin{lem}\label{lem:well-defined} Let $\z\in \C^n$ and assume $1\le p\le \infty$. \begin{enumerate} \item If for $\z\ne 0$ the form $U_\z:L^{2}_{\alpha}\to\C$, defined by $U_\z(f)=P_\alpha(f)(\z)$, is bounded on the normed space $(L^{2}_{\alpha}\cap L^{p}_{\beta,\rho}, \,\|\cdot\|_{L^{p}_{\beta,\rho}})$ then $\beta<2\alpha$. \item Conversely, if $\beta<2\alpha$ then $U_\z:L^{p}_{\beta,\rho}\to\C$, defined by \[ U_\z(f)=\int_{\C^n}f(w)K_{\alpha}(\z,w)e^{-\alpha |w|^{2\ell}}dV(w), \] is bounded and \[ \|U_\z\|\lesssim (1+|\z|)^{-\rho+2n(\ell-1)/p} e^{\frac 12\frac{\alpha^2}{2\alpha-\beta}|\z|^{2\ell}}. \] \end{enumerate} \end{lem} \begin{proof} Assume that $U_\z$ is bounded on $(L^{2}_{\alpha}\cap L^{p}_{\beta,\rho},\,\|\cdot\|_{L^{p}_{\beta,\rho}})$. Then, by the Hahn--Banach theorem, $U_\z$ extends to a bounded linear functional on $L^{p}_{\beta,\rho}$, which we also denote by $U_\z$. Let $\nu$ be a multi-index. It is clear that the function \begin{equation}\label{eqn:f1} f(z):=\frac{z^\nu}{(1+|z|)^{|\nu|+\rho+2n+1}} \, e^{\frac{\beta}2|z|^{2\ell}} \end{equation} belongs to $L^{p}_{\beta,\rho}$. Let $\mathcal{X}_R$ be the characteristic function of the open ball $B_R$ centered at $0$ with radius $R$. Then the function $f_R=\mathcal{X}_R\cdot f$ is in $ L^{2}_{\alpha}\cap L^{p}_{\beta,\rho}$ and $\|f_R-f\|_{L^{p}_{\beta,\rho}}\to 0$ as $R\to\infty$. Since \[ K_{\alpha,z}(w) =\sum_{\mu\in \N^n}\frac{w^\mu\overline{z}^\mu}{\|w^\mu\|^2_{F^{2}_{\alpha}}}, \] where the series converges in $L^{2}_\alpha$, \[ P_\alpha(f_R)(z)=\langle f_R,K_{\alpha,z}\rangle_\alpha =\sum_{\mu\in \N^n}\frac{z^\mu}{\|w^\mu\|^2_{F^{2}_{\alpha}} }\,\langle f_R,w^\mu\rangle_\alpha. \] By integration in polar coordinates we have $\langle f_R,w^\mu\rangle_\alpha=\delta_{\mu,\nu} c_{\nu}(R)$, where \[ c_{\nu}(R):= \int_{B_R} \frac{|w^\nu|^2}{(1+|w|)^{|\nu|+\rho+2n+1}} \, e^{(\frac{\beta}2-\alpha)|w|^{2\ell}}\,dV(w). \] Thus $U_\z(f_R)=P_\alpha(f_R)(\z) =c_{\nu}(R)\, \z^\nu/\|w^\nu\|^2_{F^{2}_{\alpha}}$. So, by the hypothesis and the monotone convergence theorem, \[ U_\z(f)=\lim_{R\to\infty}U_\z(f_R)= \frac{\z^\nu}{\|w^\nu\|^2_{F^{2}_{\alpha}}} \int_{\C^n} \frac{|w^\nu|^2}{(1+|w|)^{|\nu|+\rho+2n+1}} \, e^{(\frac{\beta}2-\alpha)|w|^{2\ell}}\,dV(w). \] It follows that for any $\nu$ such that $\z^\nu\ne 0$ we have that the above integral is finite.
Choosing $\nu$ such that $ |\nu|\ge 1+\rho $ we obtain that $\beta<2\alpha$. Next assume $\beta<2\alpha$. Let $F_\z(w):=G(w)H_\z(w)$, where \begin{align*} G(w) &:=|f(w)| (1+|w|)^{\rho}e^{-\frac\beta 2|w|^{2\ell}} \quad\text{and}\quad\\ H_\z(w)&:= |K_{\alpha}(\z,w)| (1+|w|)^{-\rho} e^{-(\alpha-\frac\beta 2)|w|^{2\ell}}. \end{align*} Since $\|G\|_{L^p}= \|f\|_{L^{p}_{\beta,\rho}}$, we obtain \[ |U_\z(f)|\le\|F_\z\|_{L^1}\le\|K_\alpha(\cdot,\z)\|_{L^{p'}_{2\alpha-\beta,-\rho}} \|f\|_{L^{p}_{\beta,\rho}}. \] Hence Corollary \ref{cor:qnormBergman} ends the proof. \end{proof} \begin{rem} From the pointwise estimate of $|K_\alpha(z,w)|$ with $z\overline w\in S^{\delta}_N$, given in Proposition \ref{prop:pointK}, it is easy to check that if $\beta\ge 2\alpha$ and $f$ is the function defined in \eqref{eqn:f1} with $\nu=0$, then $F_\z\notin L^1$. So $U_\z(f)$ is not well defined. \end{rem} \begin{cor}\label{cor:embedFinfty} Let $1\le p< \infty$. Then \[ F^{p}_{\alpha,\rho}\hookrightarrow F^{\infty}_{\alpha,\rho-2n(\ell-1)/p}, \] that is, \[ |f(z)|\lesssim \|f\|_{F^{p}_{\alpha,\rho}}(1+|z|)^{-\rho+(2n(\ell-1))/p} e^{\alpha |z|^{2\ell}/2}\quad(f\in F^{p}_{\alpha,\rho},\,z\in\C^n). \] \end{cor} \begin{cor}\label{cor:representation} Let $1\le p\le\infty$ and let $\beta<2\alpha$. If $f\in F^{p}_{\beta,\rho}$ then $f=P_{\alpha}f$. \end{cor} \begin{proof} If $p<\infty$, the space $F^{2}_\alpha\cap F^{p}_{\beta,\rho}$ is dense in $F^{p}_{\beta,\rho}$ and $P_\alpha$ is the identity on $F^{2}_\alpha$, so $P_\alpha$ is the identity on $F^{p}_{\beta,\rho}$. The case $p=\infty$ follows from the previous one by noting that $F^{\infty}_{\beta,\rho} \subset F^{p}_{\beta',\rho}$, for any $\beta'\in (\beta,2\alpha)$. \end{proof} \begin{prop}\label{prop:PLpontoFp} For $1\le p\le\infty$ the Bergman operator $P_{\alpha}$ is a bounded projection from $L^{p}_{\alpha,\rho}$ onto $F^{p}_{\alpha,\rho}$. \end{prop} \begin{proof} First we consider the case $1< p<\infty$. By Proposition \ref{prop:pnormBergman}, the function \[ \Omega_\alpha(z,w):= e^{-\frac{\alpha}2 |z|^{2\ell}} |K_\alpha(z,w)|e^{-\frac{\alpha}2 |w|^{2\ell}} \] satisfies \begin{equation}\label{eqn:omega} \int_{\C^n}\Omega_\alpha(z,w) (1+|w|)^c dV(w) = e^{-\frac{\alpha}2|z|^{2\ell}}\,\|K_\alpha(\cdot,z)\|_{L^{1}_{\alpha,c}}\simeq (1+|z|)^c. \end{equation} If $\varphi\in L^{p}_{\alpha,\rho}$, then H\"older's inequality and \eqref{eqn:omega} with $c=0$ give \begin{equation}\label{eqn:PLpontoFp} e^{-\frac{p\alpha}2|z|^{2\ell}} |P_{\alpha}(\varphi)(z)|^p \lesssim \int_{\C^n} |\varphi(w)|^p \Omega_\alpha(z,w) e^{-\frac{p\alpha}2|w|^{2\ell}} dV(w). \end{equation} So Fubini's theorem and \eqref{eqn:omega} with $c=\rho\, p$ imply $\|P_{\alpha}(\varphi)\|_{L^{p}_{\alpha,\rho}}\lesssim \|\varphi\|_{L^{p}_{\alpha,\rho}}$. If $p=1$ then \eqref{eqn:PLpontoFp} is obvious and, as in the above case, we obtain the result. If $p=\infty$ then \begin{equation*} (1+|z|)^\rho e^{-\frac{\alpha}2|z|^{2\ell}} |P_{\alpha}(\varphi)(z)| \lesssim \|\varphi\|_{L^{\infty}_{\alpha,\rho}} (1+|z|)^\rho \int_{\C^n} \frac{\Omega_\alpha(z,w) }{(1+|w|)^{\rho}} dV(w). \end{equation*} So \eqref{eqn:omega} shows that $\|P_{\alpha}(\varphi)\|_{L^{\infty}_{\alpha,\rho}}\lesssim \|\varphi\|_{L^{\infty}_{\alpha,\rho}}$. \end{proof} \begin{cor}\label{cor:dualF} Let $1\le p<\infty$. Then the dual of $F^{p}_{\alpha,\rho}$, with respect to the pairing $\langle\cdot,\cdot\rangle_\alpha$, is $F^{p'}_{\alpha,-\rho}$.
\end{cor} \begin{proof} From the classical $L^p$-duality it is easy to check that the dual of $L^{p}_{\alpha,\rho}$, with respect to the pairing $\langle\cdot,\cdot\rangle_\alpha$, is $L^{p'}_{\alpha,-\rho}$. This result together with Proposition \ref{prop:PLpontoFp} prove the corollary. \end{proof} \section{Proof of Theorem \ref{thm:embeddings}} \label{sec:embeddings} The case $\ell=1$ and $\rho=\eta=0$ is well known (see \cite{janson-peetre-rochberg}). For $n=1$, the theorem can be deduced from the characterization of Carleson measures obtained in \cite[Theorem 1]{constantin-pelaez}. \subsection{Necessary conditions for all $p$ and $q$} \begin{lem}\label{lem:necessarypq} If $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$, then either $\beta<\gamma$ or $\beta=\gamma$ and \[ 2n(\ell-1)\bigl(\tfrac 1p-\tfrac 1q\bigr)\le\rho -\eta. \] \end{lem} \begin{proof} By Corollary \ref{cor:qnormBergman} the ratio \[ \frac{\|K_{\alpha}(\cdot,z)\|_{F^{q}_{\gamma,\eta}}}{\|K_{\alpha}(\cdot,z)\|_{F^{p}_{\beta,\rho}}} \simeq\frac{ (1+|z|)^{\eta+2n(\ell-1)/q'} e^{\frac{\alpha^2}{2\gamma}|z|^{2\ell}}} { (1+|z|)^{\rho+2n(\ell-1)/p'}e^{\frac{\alpha^2}{2\beta}|z|^{2\ell}}} \] is bounded if and only if $\beta$, $\gamma$, $\rho$ and $\eta$ satisfy the above conditions. \end{proof} \subsection{Proof of Theorem \ref{thm:embeddings} for $1\le p\le q\le\infty$}\quad\par The next lemma shows that the necessary conditions obtained in the above section are also sufficient, which proves Theorem \ref{thm:embeddings} for $1\le p\le q\le\infty$. \begin{lem}\label{lem:condsuffpleq} If either $\beta<\gamma$ or $\beta=\gamma$ and \[ 2n(\ell-1)\bigl(\tfrac 1p-\tfrac 1q\bigr)\le\rho -\eta, \] then $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$, provided that $1\le p\le q\le\infty$. \end{lem} \begin{proof} If $p=q$ then either $\beta<\gamma$, or $\beta=\gamma$ and $\eta\le\rho$; in both cases $(1+|z|)^\eta e^{-\frac{\gamma}2|z|^{2\ell}}\lesssim (1+|z|)^\rho e^{-\frac{\beta}2|z|^{2\ell}}$, which proves the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{p}_{\gamma,\eta}$. The case $p<q=\infty$ is a consequence of Corollary \ref{cor:embedFinfty} and the case $p=q$. Indeed, $F^{p}_{\beta,\rho}\hookrightarrow F^{\infty}_{\beta,\rho-2n(\ell-1)/p}\hookrightarrow F^{\infty}_{\gamma,\eta}. $ Assume $1\le p<q<\infty$ and let $f\in F^{p}_{\beta,\rho}$. Consider the function $F$ defined by \[ F(z):= |f(z)|(1+|z|)^{\eta }e^{-\frac{\gamma }2|z|^{2\ell}} =G(z)^{p/q}H(z)^{(q-p)/q}, \] where \[ G(z):=|f(z)|(1+|z|)^{\rho }e^{-\frac{\beta }2|z|^{2\ell}} \] and \[ H(z):= |f(z)|(1+|z|)^{\frac{\eta q -\rho p}{q-p}} e^{-\frac{\gamma q-\beta p}{2(q-p)}|z|^{2\ell}}. \] By Corollary \ref{cor:embedFinfty} and the hypotheses on $\rho$ and $\eta$, we have \begin{align*} |H(z)|&\lesssim \|f\|_{F^{p}_{\beta,\rho}} (1+|z|)^{\frac{\eta q -\rho p}{q-p}-\rho+\frac{2n(\ell-1)}{p}} e^{\bigl(-\frac{\gamma q-\beta p}{2(q-p)}+\frac{\beta}2\bigr)|z|^{2\ell}}\\ &= \|f\|_{F^{p}_{\beta,\rho}} (1+|z|)^{(\eta -\rho )\frac{q}{q-p}+\frac{2n(\ell-1)}{p}} e^{-\frac{(\gamma -\beta)q }{2(q-p)}|z|^{2\ell}}\\ &\lesssim \|f\|_{F^{p}_{\beta,\rho}}.
\end{align*} Hence \[ \|f\|_{F^{q}_{\gamma,\eta}}^q=\|F\|_{L^q}^q\lesssim \|f\|_{F^{p}_{\beta,\rho}}^{q-p}\,\|G\|_{L^p}^p=\|f\|_{F^{p}_{\beta,\rho}}^{q}.\qedhere \] \end{proof} Observe that, for $1\le p\le q\le \infty$, by Lemmas \ref{lem:necessarypq} and \ref{lem:condsuffpleq}, the fact that the embedding $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$ holds is only a question of growth, that is, $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$ if and only if $F^{\infty}_{\beta,\rho-2n(\ell-1)/p}\hookrightarrow F^{\infty}_{\gamma,\eta-2n(\ell-1)/q}$. \subsection{Sufficient conditions for $1\le q<p\le\infty$} \begin{lem}\label{lem:condsuffip>q} If either $\beta<\gamma$ or $\beta=\gamma$ and $ 2n\left(\frac 1q-\frac 1p\right)< \rho-\eta $, then we have $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\gamma,\eta}$, provided that $1\le q< p\le\infty$. \end{lem} \begin{proof} Let $f\in F^{p}_{\beta,\rho}$. Assume first $p=\infty$. In this case $q(\rho-\eta)>2n$, so the hypotheses on the parameters give \begin{align*} \|f\|_{F^{q}_{\gamma,\eta}}^q &= \int_{\C^n}|f(z)|^q(1+|z|)^{\eta q} e^{-\frac{\gamma q }{2}|z|^{2\ell}}dV(z)\\ &\lesssim \|f\|_{F^{\infty}_{\beta,\rho}}^q \int_{\C^n}(1+|z|)^{-(\rho-\eta)q} e^{-\frac{(\gamma-\beta)q }{2}|z|^{2\ell}}dV(z) \lesssim \|f\|_{F^{\infty}_{\beta,\rho}}^q. \end{align*} Next assume $p$ finite. In this case $(\rho-\eta)\frac{pq}{p-q}>2n$. Consider the function \[ F(z):= |f(z)|(1+|z|)^{\eta }e^{-\frac{\gamma }2|z|^{2\ell}} =G(z)H(z), \] where \[ G(z):=|f(z)|(1+|z|)^{\rho }e^{-\frac{\beta }2|z|^{2\ell}} \quad\text{and}\quad H(z):= (1+|z|)^{(\eta -\rho )}e^{-\frac{\gamma -\beta }{2}|z|^{2\ell}}. \] By H\"older's inequality with exponent $p/q>1$ we have \begin{align*} \|f\|_{F^{q}_{\gamma,\eta}}&=\|F\|_{L^q}\le \|G\|_{L^p}\|H\|_{L^{pq/(p-q)}}\\ &= \|f\|_{F^{p}_{\beta,\rho}} \left(\int_{\C^n}(1+|z|)^{-(\rho-\eta)\frac{pq}{p-q}} e^{-\frac{\gamma-\beta}2\frac{pq}{p-q}|z|^{2\ell}}dV(z)\right)^{\frac{p-q}{pq}}. \end{align*} Therefore $\|f\|_{F^{q}_{\gamma,\eta}} \lesssim \|f\|_{F^{p}_{\beta,\rho}}$. \end{proof} \subsection{Necessary conditions for $1\le q<p<\infty$ and $\beta=\gamma$}\quad\par \begin{prop}\label{prop:necpgrq} If $1\le q<p<\infty$ and $F^{p}_{\beta,\rho}\hookrightarrow F^{q}_{\beta,\eta}$ then $ 2n\left(\frac 1q-\frac 1p\right)< \rho-\eta. $ \end{prop} The proof of Proposition \ref{prop:necpgrq} follows from the ideas in \cite{luecking}. We need some technical results. For $r>0$, let $\tau_r:\C\to (0,\infty)$ be the function defined by \begin{equation}\label{eqn:tau} \tau_r(z):=r(1+|z|)^{1-\ell} \end{equation} and let $B_r(z):=B(z,\tau_r(z))$. Note that $\tau_r$ is a radius function in the sense of \cite[p.1617-1618]{dallara}, that is, \begin{equation}\label{eqn:radiusf} 1+|z|\simeq 1+|w|\quad (z\in\C^n,\,\,w\in B_r(z)). \end{equation} Then we have: \begin{lem}[{\cite[Proposition 7]{dallara}}]\label{lem:cover} For any $r>0$ there exists a sequence $\{z_k\} $ in $\C^n$ such that the Euclidean balls $B_k:=B_r(z_k)$ satisfy: \begin{enumerate} \item $\cup_k B_k=\C^n$. \item The overlapping of the balls $B_k$ is finite, that is, there exists $N_r\in\N$ such that $\sum_k \mathcal{X}_{B_k}(z)\le N_r$ for any $z\in\C^n$. \end{enumerate} \end{lem} The following lemma states a subharmonic type estimate. 
\begin{lem}\label{lem:propertiescover}\quad\par \begin{enumerate} \item \label{item:propertiescover1} There exists $r>0$ such that \[ |K_\alpha(z,w)|e^{-\frac{\alpha}2|w|^{2\ell}} e^{-\frac{\alpha}2|z|^{2\ell}} \simeq (1+|z|)^{2n(\ell-1)} \quad (w\in B_r(z)). \] \item \label{item:propertiescover2} Let $1\le p<\infty$, $\rho\in\R$ and $r>0$. There exists $C=C_{\alpha,p,\rho,r}>0$ such that \[ |f(z)|^p(1+|z|)^{\rho p-2n(\ell-1)}e^{-\frac{\alpha p}2|z|^{2\ell}} \le C\int_{B_r(z)} |f(w)|^p(1+|w|)^{\rho p}e^{-\frac{\alpha p}2|w|^{2\ell}}\,dV(w), \] for any $z\in\C^n$. \end{enumerate} \end{lem} \begin{proof} We begin by proving \eqref{item:propertiescover1}. By Remark \ref{rem:pointwB}, we may assume that $z=(|z|,0,\cdots,0)$. Then we have to prove that \begin{equation}\label{eqn:propertiescover1} |H_\alpha(|z|w_1)|e^{-\frac{\alpha}2|w|^{2\ell}} e^{-\frac{\alpha}2|z|^{2\ell}} \simeq (1+|z|)^{2n(\ell-1)} \quad (w\in B_r(z)). \end{equation} By Corollary \ref{cor:pointwB}, there exist $\delta>0$ and $N>2$ satisfying \eqref{eqn:simeq:estimate:H}. For $r>0$ small enough we have $ |z|w_1\in S^\delta_N $, for any $z\in\C^n$ and $w\in B_r(z)$. By \eqref{eqn:simeq:estimate:H}, \begin{equation}\label{eqn:propertiescover2} |H_\alpha(|z|w_1)|\simeq (1+|z||w_1|)^{n(\ell-1)}e^{\alpha|z|^\ell\Re w_1^\ell} \quad (w\in B_r(z)). \end{equation} In particular, for $|z|\le 2r$ the terms in \eqref{eqn:propertiescover1} are comparable to a positive constant and there is nothing to prove. Now assume $|z|> 2r$. In this case, $|w_1|\simeq |z|$ for $w\in B_r(z)$. Hence, by \eqref{eqn:propertiescover2}, the equivalence \eqref{eqn:propertiescover1} will be a consequence of \begin{equation}\label{eqn:propertiescover3} e^{\alpha|z|^\ell\Re w_1^\ell}e^{-\frac{\alpha}2|w|^{2\ell}} e^{-\frac{\alpha}2|z|^{2\ell}}\simeq 1 \quad (w\in B_r(z)). \end{equation} First note that \begin{align*} e^{\alpha|z|^\ell\Re w_1^\ell}e^{-\frac{\alpha}2|w|^{2\ell}} e^{-\frac{\alpha}2|z|^{2\ell}} &=e^{\alpha|z|^\ell\Re w_1^\ell} e^{-\frac{\alpha}2(|w_1|^2+|w'|^2)^{\ell}} e^{-\frac{\alpha}2|z|^{2\ell}}\\ &=e^{-\frac{\alpha}2||z|^\ell-w_1^\ell|^2} e^{-\frac{\alpha}2[(|w_1|^2+|w'|^2)^\ell-|w_1|^{2\ell}]}. \end{align*} By the mean value theorem, for $w\in B_r(z)$ we have \[ 0\le ||z|^\ell-w_1^\ell|\lesssim (|z|+r(1+|z|)^{1-\ell})^{\ell-1}(1+|z|)^{1-\ell}\simeq 1 \] and \[ (|w_1|^2+|w'|^2)^\ell-|w_1|^{2\ell}\lesssim (|w_1|^2+|w'|^2)^{\ell-1}|w'|^2\lesssim |z|^{2(\ell-1)}(1+|z|)^{2(1-\ell)}\simeq 1. \] These two estimates give \eqref{eqn:propertiescover3}. In order to prove part \eqref{item:propertiescover2}, note that, by \eqref{eqn:radiusf}, the case $\rho\ne 0$ follows from the result for $\rho=0$. This last case can be deduced using the arguments in the proofs of Proposition 12 and of Lemma 13 in \cite{dallara}. Let $\varphi$ be a real $\mathcal{C}^2$-function on the closed unit ball $\overline{B(0,1)}$ of $\C^n$. It is well known (see for instance \cite{andersson}) that there exists a real $\mathcal{C}^2$-function $\psi$ on $B(0,1)$ such that \[ \partial\overline{\partial}\psi=\partial\overline{\partial}\varphi\quad\text{and}\quad \|\psi\|_{L^\infty(B(0,1))}\le C\|\partial\overline{\partial}\varphi\|_{L^\infty(B(0,1))}.
\] By rescaling, we get that if $\varphi$ is a real $\mathcal{C}^2$-function on the closed ball $\overline{B(z,R)}$, then there is a real $\mathcal{C}^2$-function $\psi$ on $B(z,R)$ such that \[ \partial\overline{\partial}\psi=\partial\overline{\partial}\varphi\quad\text{and}\quad \|\psi\|_{L^\infty(B(z,R))}\le CR^2\|\partial\overline{\partial}\varphi\|_{L^\infty(B(z,R))}. \] Applying this result to the function $\varphi(w)=|w|^{2\ell}$ and to the ball $B_r(z)$, we obtain a real $\mathcal{C}^2$-function $\psi_z$ on $B_r(z)$ such that $\partial\overline{\partial}\psi_z=\partial\overline{\partial}\varphi$ and, by \eqref{eqn:radiusf}, \[ \|\psi_z\|_{L^\infty(B_r(z))} \le C r^2 (1+|z|)^{2(1-\ell)}\sup_{w\in B_r(z)}|w|^{2(\ell-1)}\le C'\,r^2. \] Since $\psi_z-\varphi$ is a pluriharmonic function on $B_r(z)$, it is the real part of a holomorphic function $h_z$ on $B_r(z)$. Thus we have \begin{align*} |f(z)|^pe^{-\frac{\alpha p}2|z|^{2\ell}} &\simeq |f(z)e^{\frac{\alpha}2 h_z(z)}|^p \le\frac 1{|B_r(z)|} \int_{B_r(z)}|f(w)e^{\frac{\alpha}2 h_z(w)}|^p\,dV(w)\\ &\simeq (1+|z|)^{2n(\ell-1)} \int_{B_r(z)}|f(w)|^p e^{-\frac{\alpha p}2|w|^{2\ell}}dV(w).\qedhere \end{align*} \end{proof} \begin{lem}\label{lem:atomic1} Let $\{z_k\}$ be a sequence satisfying the properties in Lemma \ref{lem:cover}. Then, for $1\le p< \infty$ the map \[ \{c_k\}\longmapsto \Phi(\{c_k\})(z):=\sum_k c_k\frac{K_{\beta}(z,z_k)}{\|K_{\beta}(z,z_k)\|_{F^{p}_{\beta,\rho}}} \] is bounded from the sequence space $\ell^p$ to $F^{p}_{\beta,\rho}$. \end{lem} \begin{proof} For $p=1$ the result is clear. Assume $p>1$. By Corollary \ref{cor:dualF}, the dual of the space $F^{p'}_{\beta,-\rho}$ with respect to the pairing $\langle\cdot,\cdot\rangle_\beta$ is $F^{p}_{\beta,\rho }$. Since the overlapping of the balls $B_k$ is finite, Proposition \ref{prop:pnormBergman} and Lemma \ref{lem:propertiescover}\eqref{item:propertiescover2} show that the map \[ g\longmapsto T_{p'}(g):= \bigl\{g(z_k)/ \|K_{\beta}(z,z_k)\|_{F^{p}_{\beta,\rho}} \bigr\} \] is bounded from $F^{p'}_{\beta,-\rho}$ to $\ell^{p'}$. Indeed, \begin{align*} \|T_{p'}(g)\|_{\ell^{p'}}^{p'} &\simeq \sum_k |g(z_k)|^{p'}(1+|z_k|)^{-\rho p'-2n(\ell-1)} e^{-\frac{\beta p'}2|z_k|^{2\ell}}\\ &\lesssim\sum_k \int_{B_r(z_k)} |g(z)|^{p'}(1+|z|)^{-\rho p'} e^{-\frac{\beta p'}2|z|^{2\ell}}dV(z)\simeq \|g\|_{F^{p'}_{\beta,-\rho}}^{p'}. \end{align*} So the adjoint map $T^*_{p'}$ of $T_{p'}$, with respect to the pairing $\langle\cdot,\cdot\rangle_\beta$, is bounded from $\ell^p$ to $F^{p}_{\beta,\rho}$. We are going to show that $T^*_{p'}=\Phi$. For $\{c_k\}\in c_{oo}$ (the space of sequences with a finite number of non-zero terms) and $g\in F^{p'}_{\beta,-\rho}$ we have \begin{align*} \langle T^*_{p'}\{c_k\},g\rangle_\beta &=\langle\{c_k\},g(z_k)/ \|K_{\beta}(z,z_k)\|_{F^{p}_{\beta,\rho}}\rangle_{\ell^2}\\ &=\Big\langle \sum_k c_kK_{\beta}(z,z_k) /\|K_{\beta}(z,z_k)\|_{F^{p}_{\beta,\rho}} ,g\Big\rangle_\beta, \end{align*} since $g(z_k)=\int_{\C^n} g(z)K_{\beta}(z_k,z)e^{-\beta|z|^{2\ell}}dV(z)$. Therefore \[ T^*_{p'}\{c_k\}=\sum_k c_k\frac{K_{\beta}(z,z_k)}{\|K_{\beta}(z,z_k)\|_{F^{p}_{\beta,\rho}}}\quad (\{c_k\}\in c_{oo}). \] Since $c_{oo}$ is dense in $\ell^p$ we conclude that $ T^*_{p'}=\Phi$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:necpgrq}] Pick $r>0$ satisfying Lemma \ref{lem:propertiescover} \eqref{item:propertiescover1}, and let $\{z_k\}$ be a sequence as in Lemma \ref{lem:cover}.
Let $\{c_k\}\in \ell^p$ and consider the function \[ \Phi_t(\{c_k\})(z) :=\sum_k c_kr_k(t)\frac{K_{\beta,n}(z,z_k)} {\|K_{\beta,n}(z,z_k)\|_{F^{p}_{\beta,\rho}}}, \quad 0\le t\le 1, \] where $\{r_k(t)\}$ is a sequence of Rademacher functions (see \cite[p.336]{luecking}). By the hypothesis and Lemma \ref{lem:atomic1}, \[ \|\Phi_t(\{c_k\})\|_{F^{q}_{\beta,\eta}}\lesssim \|\Phi_t(\{c_k\})\|_{F^{p}_{\beta,\rho}}\lesssim\|\{c_kr_k(t)\}\|_{\ell^p}=\|\{c_k\}\|_{\ell^p}. \] So, by Fubini's theorem and Khinchine's inequality (see \cite[p.336]{luecking}) \begin{align*} \int_{\C^n}&\left( \sum_{k}|c_k|^2\frac{|K_{\beta,n}(z,z_k)|^2} {\|K_{\beta,n}(z,z_k)\|^2_{F^{p}_{\beta,\rho}}} (1+|z|)^{2\eta}e^{-\beta|z|^{2\ell}}\right)^{q/2} dV(z)\\ &\simeq \int_0^1\|\Phi_t(\{c_k\})\|_{F^{q}_{\beta,\eta}}^qdt \lesssim\|\{c_k\}\|_{\ell^p}^{q}. \end{align*} By Proposition \ref{prop:pnormBergman} this is equivalent to the fact that $I(\{c_k\})\lesssim\|\{c_k\}\|_{\ell^p}^{q}$, where \[ I(\{c_k\}):=\int_{\C^n}\left( \sum_{k}|c_k|^2\frac{|K_{\beta,n}(z,z_k)|^2 e^{-\beta|z_k|^{2\ell}}e^{-\beta|z|^{2\ell}}}{ (1+|z_k|)^{2(\rho-\eta) +4n(\ell-1)/p'}} \right)^{q/2}dV(z). \] Now \[ I(\{c_k\})\gtrsim \int_{\C^n}\left( \sum_{k}|c_k|^2\frac{|K_{\beta,n}(z,z_k)|^2 e^{-\beta|z_k|^{2\ell}}e^{-\beta|z|^{2\ell}}}{ (1+|z_k|)^{2(\rho-\eta) +4n(\ell-1)/p'}}\mathcal{X}_{B_k}(z) \right)^{q/2}dV(z). \] Since, by Lemma \ref{lem:cover}, any point $z\in\C^n$ lies in at most $N$ balls $B_k$, the equivalence of the $\ell^2$-norm and the $\ell^{q/2}$-norm on $\C^N$ gives \[ I(\{c_k\})\gtrsim \sum_{k} |c_k|^q \int_{B_k} \frac{|K_{\beta,n}(z,z_k)|^q e^{-\frac{\beta q}2|z_k|^{2\ell}}e^{-\frac{\beta q}2|z|^{2\ell}}}{ (1+|z_k|)^{(\rho-\eta) q+2n(\ell-1)q/p'}} dV(z). \] By Lemma \ref{lem:propertiescover}\eqref{item:propertiescover1} \[ |K_{\beta,n}(z,z_k)|^q e^{-\frac{\beta q}2|z_k|^{2\ell}} e^{-\frac{\beta q}2|z|^{2\ell}} \simeq (1+|z_k|)^{2n(\ell-1)q}\quad (z\in B_k). \] Hence \begin{align*} \|\{c_k\}\|_{\ell^p}^{q} &\gtrsim \sum_{k} |c_k|^q (1+|z_k|)^{-(\rho-\eta) q-2n(\ell-1)(q/p'-q+1)}\\ &=\sum_{k} |c_k|^q (1+|z_k|)^{-(\rho-\eta) q-2n(\ell-1)(p-q)/p}, \end{align*} and consequently, for any $\{d_k\}\in\ell^{p/q}$, \[ \sum_{k} |d_k| (1+|z_k|)^{-(\rho-\eta) q-2n(\ell-1)(p-q)/p}\lesssim \|\{d_k\}\|_{\ell^{p/q}}. \] By the duality of the sequence spaces $(\ell^{p/q})^*=\ell^{p/(p-q)}$, we obtain \[ \sum_{k} (1+|z_k|)^{-(\rho-\eta) \frac{pq}{p-q}-2n(\ell-1)}<\infty. \] Since \begin{align*} \infty> \sum_{k} (1+|z_k|)^{-(\rho-\eta) \frac{pq}{p-q}-2n(\ell-1)} &\simeq\sum_{k} \int_{B_k} (1+|z|)^{-(\rho-\eta) \frac{pq}{p-q}}dV(z)\\ &\simeq \int_{\C^n} (1+|z|)^{-(\rho-\eta) \frac{pq}{p-q}} dV(z), \end{align*} we conclude that $-(\rho-\eta) \frac{pq}{p-q}<-2n$. This ends the proof. \end{proof} \subsection{Necessary condition for $1\le q<p=\infty$ and $\beta=\gamma$}\quad\par In this section we extend Proposition \ref{prop:necpgrq} to the case $p=\infty$. \begin{prop}\label{prop:necinf>q} If $1\le q<\infty$ and $F^{\infty}_{\beta,\rho} \hookrightarrow F^{q}_{\beta,\eta}$ then $ \frac {2n}q< \rho-\eta. $ \end{prop} The necessary condition will be obtained from the case $1\le q<p<\infty$ by complex interpolation. In particular, we will use the Riesz-Thorin theorem and the following well-known result (see for instance \cite[Lemma 7.11]{kalton-mayboroda-mitrea}). \begin{lem}\label{lem:retract} Let $(Y_0,Y_1)$ and $(X_0,X_1)$ be admissible pairs of Banach spaces.
Assume that $(Y_0,Y_1)$ is a retract of $(X_0,X_1)$, that is, there exist bounded linear operators $E:Y_j\to X_j$ and $R:X_j\to Y_j$ such that $R\circ E$ is the identity operator on $Y_j$, $j=0,1$. Then $(Y_0,Y_1)_{[\theta]}=R((X_0,X_1)_{[\theta]})$. \end{lem} \begin{lem}\label{lem:interpolation} Let $1\le q<\infty$ and let $\theta\in(0,1)$. If $\frac 1s=\frac{1-\theta}q$ then \[ (F^{q}_{\beta,\rho}, F^{\infty}_{\beta,\rho})_{[\theta]} =F^{s}_{\beta,\rho} \quad\text{and}\quad (F^{q}_{\beta,\rho}, F^{q}_{\beta,\eta})_{[\theta]} =F^{q}_{\beta,(1-\theta)\rho+\theta\eta}. \] \end{lem} \begin{proof} Observe that the map $\Phi(f)(z) := f(z)e^{\frac\beta 2|z|^{2\ell}}(1+|z|)^{-\rho}$ is a linear isometry from $L^r$ onto $L^{r}_{\beta,\rho}$, $1\le r\le\infty$. So, by the Riesz-Thorin theorem, we obtain \[ (L^{q}_{\beta,\rho}, L^{\infty}_{\beta,\rho})_{[\theta]} =\Phi((L^q,L^\infty)_{[\theta]}) =\Phi(L^s)=L^{s}_{\beta,\rho}. \] By Proposition \ref{prop:PLpontoFp}, for $1\le r\le\infty$, $(F^{q}_{\beta,\rho}, F^{\infty}_{\beta,\rho})$ is a retract of $(L^{q}_{\beta,\rho}, L^{\infty}_{\beta,\rho})$, and so, by Lemma \ref{lem:retract}, \[ (F^{q}_{\beta,\rho}, F^{\infty}_{\beta,\rho})_{[\theta]} =P_\beta((L^{q}_{\beta,\rho}, L^{\infty}_{\beta,\rho})_{[\theta]}) =P_\beta(L^{s}_{\beta,\rho})=F^{s}_{\beta,\rho}, \] which proves the first interpolation identity. In order to prove the second identity, note that by \cite[Theorem 5.5.3]{berg-lofstrom} we have \begin{align*} (L^{q}_{\beta,\rho}, L^{q}_{\beta,\eta})_{[\theta]} &=(L^q(e^{-\frac{q\beta} 2|z|^{2\ell}}(1+|z|)^{q\rho}), L^q(e^{-\frac{q\beta} 2|z|^{2\ell}}(1+|z|)^{q\eta}))_{[\theta]}\\ &=L^q(e^{-\frac{q\beta} 2|z|^{2\ell}}(1+|z|)^{q((1-\theta)\rho+\theta\eta)})= L^{q}_{\beta,(1-\theta)\rho+\theta\eta}. \end{align*} Therefore, as above, \[ (F^{q}_{\beta,\rho}, F^{q}_{\beta,\eta})_{[\theta]} =P_\beta(L^{q}_{\beta,(1-\theta)\rho+\theta\eta}) =F^{q}_{\beta,(1-\theta)\rho+\theta\eta}. \] This ends the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:necinf>q}] Assume $ F^{\infty}_{\beta,\rho}\hookrightarrow F^{q}_{\beta,\eta}$. By Lemma \ref{lem:interpolation}, \[ F^{s}_{\beta,\rho}=(F^{q}_{\beta,\rho}, F^{\infty}_{\beta,\rho})_{[\theta]} \hookrightarrow (F^{q}_{\beta,\rho}, F^{q}_{\beta,\eta})_{[\theta]} =F^{q}_{\beta,(1-\theta)\rho+\theta\eta}, \] with $\frac 1s=\frac{1-\theta}q$. Since $q=(1-\theta)s<s<\infty$, Proposition \ref{prop:necpgrq} gives \[ 2n(\tfrac 1q-\tfrac 1s)<\rho-((1-\theta)\rho+\theta\eta)= q(\tfrac 1q-\tfrac 1s)(\rho-\eta), \] and so $\frac{2n}q<\rho-\eta$. \end{proof} \subsection{Proof of Theorem \ref{thm:embeddings} for $1\le q<p\le\infty$} \quad\par The sufficient conditions follow from Lemma \ref{lem:condsuffip>q}. If $\beta\ne \gamma$ the necessary condition $\beta<\gamma$ follows from Lemma \ref{lem:necessarypq}. If $\beta= \gamma$ the necessary condition follows from Propositions \ref{prop:necpgrq} and \ref{prop:necinf>q}. \section{Proof of Theorem \ref{thm:Bprojection}} \label{sec:boundedness} First we prove the necessary condition $\beta<2\alpha$. For the case $\rho=0$, the next lemma corresponds to \cite[Lemma 3]{bommier-englis-youssfi}. \begin{lem}\label{lem:necbeta<2alpha} Let $1\le p,q\le\infty$. If $P_\alpha$ is bounded from $(L^{2}_\alpha\cap L^{p}_{\beta,\rho}, \|\cdot\|_{L^{p}_{\beta,\rho}})$ to $L^{q}_{\gamma,\eta}$ then $\beta<2\alpha$.
\end{lem} \begin{proof} For any $\z\in\C^n$, the linear form $g\mapsto g(\z)$ is bounded on $F^{q}_{\gamma,\eta}$ (see Corollary \ref{cor:embedFinfty}). Then the boundedness of $P_\alpha:(L^{2}_\alpha\cap L^{p}_{\beta,\rho}, \|\cdot\|_{L^{p}_{\beta,\rho}})\to L^q_{\gamma,\eta}$ implies the boundedness of the form $U_\z(f)=P_\alpha(f)(\z)$ on $(L^{2}_\alpha\cap L^{p}_{\beta,\rho}, \|\cdot\|_{L^{p}_{\beta,\rho}})$. Hence Lemma \ref{lem:well-defined} gives $\beta<2\alpha$. \end{proof} Now the proof of Theorem \ref{thm:Bprojection} follows from the next proposition and its corollary. \begin{prop}\label{prop:Ponto} Let $1\le p\le \infty$. If $0<\beta<2\alpha$ then the Bergman projection $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ onto $F^{p}_{\alpha^2/(2\alpha-\beta),\rho}$. \end{prop} \begin{cor}\label{cor:bounded-embed} Let $1\le p,q\le\infty$ and let $0<\beta<2\alpha$. Then the Bergman projection $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$ if and only if $F^{p}_{\alpha^2/(2\alpha-\beta),\rho} \hookrightarrow F^{q}_{\gamma,\eta}$. \end{cor} Taking these results for granted, we finish the proof of Theorem \ref{thm:Bprojection}. \begin{proof}[Proof of Theorem \ref{thm:Bprojection}] By Lemma \ref{lem:necbeta<2alpha} it is clear that $\beta<2\alpha$ is a necessary condition for the boundedness of $P_\alpha$ from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$. If $\beta<2\alpha$, Corollary \ref{cor:bounded-embed} shows that $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$ if and only if $F^{p}_{\alpha^2/(2\alpha-\beta),\rho} \hookrightarrow F^{q}_{\gamma,\eta}$. Thus Theorem \ref{thm:Bprojection} is a consequence of Theorem \ref{thm:embeddings}. \end{proof} We conclude this section with the proofs of Proposition \ref{prop:Ponto} and Corollary \ref{cor:bounded-embed}. To do so, we introduce the following notation, which will be used in the next results. For $\beta<2\alpha$, let \[ \delta:=\bigl(\tfrac{\alpha}{2\alpha-\beta}\bigr)^{1/\ell} \quad\text{and}\quad \kappa:=\alpha\delta^\ell =\tfrac{\alpha^2}{2\alpha-\beta}. \] The next lemma follows from \eqref{eqn:Kdelta}. \begin{lem}\label{lem:PatoPk} If $f\in L^{p}_{\beta,\rho}$, then $P_\alpha(f)=P_\kappa(T_\delta(f))$, where \[ T_\delta(f)(z) =\delta^n f(\delta z)e^{(\alpha-\beta)|\delta z|^{2\ell}}. \] \end{lem} \begin{proof} Using the change of variables $w=\delta u$ and \eqref{eqn:Kdelta}, we obtain \begin{align*} P_\alpha(f)(z) &=\delta^{2n}\int_{\C^n} f(\delta u) K_\alpha(z,\delta u) e^{-\alpha |\delta u|^{2\ell}}dV(u)\\ &=\delta^{n}\int_{\C^n} [f(\delta u) e^{(-\alpha +\kappa\delta^{-2\ell})|\delta u|^{2\ell}}] K_\kappa(z,u) e^{-\kappa | u|^{2\ell}}dV(u). \end{align*} Since $-\alpha+\kappa\delta^{-2\ell} =-\alpha+\alpha\delta^{-\ell} =-\alpha+2\alpha-\beta=\alpha-\beta$ we obtain the result. \end{proof} \begin{lem}\label{lem:operatorT} The operator $T_\delta$ is a topological isomorphism from $L^{p}_{\beta,\rho}$ onto $L^{p}_{\kappa,\rho}$. \end{lem} \begin{proof} Since $\alpha-\beta=-\frac{\beta}2+\frac{2\alpha-\beta}2 =-\frac{\beta}2+\frac \kappa 2 \delta^{-2\ell}$, we have \[ T_\delta(f)(z)=\delta^n f(\delta z) e^{-\frac{\beta}2|\delta z|^{2\ell}} e^{\frac{\kappa}2| z|^{2\ell}}. \] Therefore \begin{align*} \|T_\delta(f)\|_{L^{p}_{\kappa,\rho}} &\simeq \|f(\delta z) e^{-\frac{\beta}2|\delta z|^{2\ell}}(1+|z|)^\rho\|_{L^p}\\ &\simeq \|f(\delta z) e^{-\frac{\beta}2|\delta z|^{2\ell}} (1+|\delta z|)^\rho\|_{L^p} \simeq \|f\|_{L^{p}_{\beta,\rho}}.
\end{align*} So to conclude the proof we only need to show that the operator $T_\delta$ is surjective. This follows from the fact that the unique solution of the equation $T_\delta (f)=g$ is $f(z)=\delta^{-n}g(z/\delta)e^{(\beta-\alpha)|z|^{2\ell}}$ and \[\|f\|_{L^{p}_{\beta,\rho}} \simeq \|T_\delta(f)\|_{L^{p}_{\kappa,\rho}} =\|g\|_{L^{p}_{\kappa,\rho}}.\qedhere \] \end{proof} \begin{proof}[Proof of Proposition \ref{prop:Ponto}] By Proposition \ref{prop:PLpontoFp}, $P_\kappa$ is a bounded operator from $L^{p}_{\kappa,\rho}$ onto $F^{p}_{\kappa,\rho}$. So Lemmas \ref{lem:PatoPk} and \ref{lem:operatorT} give \[ P_{\alpha}(L^{p}_{\beta,\rho}) =P_{\kappa}(T_\delta(L^{p}_{\beta,\rho})) =P_{\kappa}(L^{p}_{\kappa,\rho}) =F^{p}_{\kappa,\rho}.\qedhere \] \end{proof} \begin{proof}[Proof of Corollary \ref{cor:bounded-embed}] By Proposition \ref{prop:Ponto}, it is clear that if $F^{p}_{\alpha^2/(2\alpha-\beta),\rho} \hookrightarrow F^{q}_{\gamma,\eta}$, then $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$. Conversely, if $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$ then, by Proposition \ref{prop:Ponto}, \[ F^{p}_{\alpha^2/(2\alpha-\beta),\rho} =P_\alpha(L^{p}_{\beta,\rho}) \hookrightarrow F^{q}_{\gamma,\eta}.\qedhere \] \end{proof}
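As an illustration of how these results combine (a sketch obtained by merely specializing the statements above), consider the classical Fock space case $\ell=1$, where the factor $2n(\ell-1)$ vanishes. Then, for $0<\beta<2\alpha$ and $\kappa=\alpha^2/(2\alpha-\beta)$, Lemma \ref{lem:necbeta<2alpha}, Corollary \ref{cor:bounded-embed} and the embedding results of Section \ref{sec:embeddings} show that $P_\alpha$ is bounded from $L^{p}_{\beta,\rho}$ to $L^{q}_{\gamma,\eta}$ if and only if either $\kappa<\gamma$, or $\kappa=\gamma$ and $\eta\le\rho$ (for $1\le p\le q\le\infty$), or $\kappa=\gamma$ and $2n\bigl(\tfrac 1q-\tfrac 1p\bigr)<\rho-\eta$ (for $1\le q<p\le\infty$).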
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} The Optical Parametric Oscillator (OPO) has been studied since the 1960's~\cite{kingston62,grahamhaken68}. Already in the 1980's it was recognized as an important tool in quantum optics, for the generation of squeezed states of light~\cite{kimble,heidmann}. It was also recognized as a suitable system for the demonstration of continuous variable (CV) entanglement, by Reid and Drummond, in 1988~\cite{reiddrummprl88}, where above-threshold operation was considered. In the early 1990's, CV entanglement was indeed demonstrated for the first time in an OPO, although operating below threshold~\cite{kimbleepr}. The OPO has since been used in several applications in CV quantum information~\cite{vanloockbraunrmp05,bowen03,schori02,kimbleteleport,entangswap04}. Entanglement in the above-threshold OPO, on the other hand, remained an experimental challenge until 2005, when it was first observed by Villar {\it et al.}~\cite{prlentangtwinopo}, and subsequently by two other groups~\cite{optlettpeng,pfisterentang}. Bipartite continuous variable entanglement can be demonstrated by a violation of the following inequality, obtained independently by Duan {\it et al.}~\cite{dgcz} and Simon~\cite{simon}: \begin{equation} \Delta^2 \hat p_-+\Delta^2 \hat q_+\geq 2 \,, \label{dgczcrit} \end{equation} where $\hat p_-=(\hat p_1-\hat p_2)/\sqrt{2}$ and $\hat q_+=(\hat q_1+\hat q_2)/\sqrt{2}$ are EPR-like operators constructed by combining operators of each subsystem. We choose $\hat p_j$ and $\hat q_j$, $j\in\{0,1,2\}$, as the amplitude and phase quadrature operators of the pump, signal and idler fields, respectively, which obey the commutation relations $[\hat p_j,\hat q_k]=2 i \delta_{jk}$. Any separable system must satisfy Eq.~(\ref{dgczcrit}): violation is an unequivocal signature of entanglement. Entanglement between the intense signal and idler beams generated by an above-threshold OPO can be physically understood as a consequence of energy conservation in the parametric process. On one hand, pump photons are converted into pairs of signal and idler photons, leading to strong intensity correlations; on the other hand, the sum of frequencies of signal and idler photons is fixed to the value of pump frequency, leading to phase anti-correlations. The difficulty of measuring phase fluctuations was largely responsible for the long time between the prediction and the first observation of entanglement in the above-threshold OPO. The technique we used to measure phase fluctuations consists of reflecting each field off an empty optical cavity, as explained in Ref.~\cite{optcomm04}. The value of Eq.~(\ref{dgczcrit}) obtained in the first demonstration of entanglement was 1.41(2), with squeezing observed in both EPR-like operators, $\Delta^2\hat p_-=0.59(1)$ and $\Delta^2\hat q_+=0.82(2)$~\cite{prlentangtwinopo}. Nevertheless, such a result could only be achieved very close to threshold, otherwise the phase sum $\Delta^2\hat q_+$ would present excess noise, increasing with pump power relative to threshold $\sigma=P_0/P_{\mbox{\scriptsize th}}$. This strange behavior, also observed by other groups~\cite{claude10db}, is not predicted by the standard linearized OPO theory for a shot noise limited pump beam. According to this model, entanglement should exist for all values of $\sigma$, although the degree of entanglement should decrease for increasing $\sigma$. This presented an additional complication for the first demonstration of entanglement in the above-threshold OPO.
In this paper, we present new and improved results of entanglement in the above-threshold OPO, together with a theoretical and experimental study of this unexpected excess phase sum noise. The paper is organized as follows. We begin by describing the linearized model for the OPO and its predictions for a shot noise limited pump beam. This model includes losses and also allows for nonvanishing detunings of pump, signal, and idler modes with respect to the OPO cavity. We then present a full quantum treatment, neglecting losses and assuming zero detunings. Even after eliminating the linearization approximation, the theory does not predict the observed excess noise. The experiment is described next, and we present measurements of sum and difference of quadratures' fluctuations, as a function of $\sigma$. The excess noise in the phase sum can be related to pump noise generated inside the OPO cavity, as we will see. We finally present our currently best measurement of two-color squeezed-state entanglement. We conclude by mentioning applications of this entanglement in quantum information. \section{Theoretical description of the OPO} \label{sec:theory} The optical parametric oscillator consists of three modes of the electromagnetic field coupled by a nonlinear crystal, which is held inside an optical cavity. The OPO is driven by an incident pump field at frequency $\omega_{0}$. Following the usual terminology, the downconverted fields are called signal and idler, of frequencies $\omega_{1}$ and $\omega_{2}$, where, by energy conservation, $\omega_{0}=\omega_{1} +\omega_{2}$. We will treat here the case of a cavity which is triply resonant for $\omega_0$, $\omega_{1}$, and $\omega_{2}$. Each field is damped via the cavity output mirror, thereby interacting with reservoir fields. The effective second-order nonlinearity of the crystal is represented by the constant $\chi$. Reid and Drummond investigated the correlations in the nondegenerate OPO (NOPO) both above~\cite{MDacima} and below threshold~\cite{PDbaixo}. In the above threshold case, they studied the effects of phase diffusion in the signal and idler modes, beginning with the positive P-representation equations of motion for the interacting fields~\cite{+P1,+P2}. Changing to intensity and phase variables, they were able to show that output quadratures could be chosen which exhibited fluctuations below the coherent state level and also Einstein-Podolsky-Rosen (EPR) type correlations. In the below threshold case, a standard linearized calculation was sufficient to obtain similar correlations. In the limit of a rapidly decaying pump mode, Kheruntsyan and Petrosyan were able to calculate exactly the steady-state Wigner function for the NOPO, showing clearly the threshold behavior and the phase diffusion above this level of pumping~\cite{barbudos}. We begin by describing the linearized model, and then proceed to calculate noise spectra beyond linearization. \subsection{The linearized model} The equations describing the evolution of signal, idler, and pump amplitudes, $\alpha_j$, inside the triply resonant OPO cavity are given below~\cite{optcomm04}. They are obtained by writing the density operator equation of motion in the Wigner representation, and then searching for a set of equivalent Langevin equations.
\begin{widetext} \begin{eqnarray} \tau\frac{d}{dt} \alpha_0 & = & -\gamma'_0(1-i\Delta_0)\,\alpha_0-2\chi^*\alpha_1\alpha_2+ \sqrt{2\gamma_0}\,\alpha_0^{in}+\sqrt{2\mu_0}\,\delta v_0 \nonumber \\ \tau\frac{d}{dt} \alpha_1 & = & -\gamma'(1-i\Delta)\,\alpha_1+2\chi\,\alpha_0\alpha_2^*+ \sqrt{2\gamma}\,\delta u_1+\sqrt{2\mu}\,\delta v_1 \label{eqopo} \\ \tau\frac{d}{dt} \alpha_2 & = & -\gamma'(1-i\Delta)\,\alpha_2+2\chi\,\alpha_0\alpha_1^*+ \sqrt{2\gamma}\,\delta u_2+\sqrt{2\mu}\,\delta v_2 \;, \nonumber \end{eqnarray} \end{widetext} where $\gamma$ and $\gamma_0$ are half the transmissions of the mirrors, $\gamma'$ and $\gamma'_0$ are the total intracavity losses, $\mu=\gamma'-\gamma$ and $\mu_0=\gamma'_0-\gamma_0$ are the spurious intracavity losses, $\Delta$ and $\Delta_0$ are the detunings of the OPO cavity relative to the fields' central frequencies, and $\tau$ is the cavity roundtrip time. We have considered here that $\gamma_1=\gamma_2=\gamma$ and $\gamma'_1=\gamma'_2=\gamma'$. The parameter $\chi$ is the effective second-order nonlinearity. The terms $\delta u_j$ and $\delta v_j$ are vacuum fluctuations associated with the losses from the mirrors' transmissions and from spurious sources, respectively. In the case of the intracavity pump mode, the fluctuations that come from the mirror transmission are due to the quantum fluctuations of the input pump laser beam, $\delta\alpha_0^{in}=\delta p_0^{in}+i\,\delta q_0^{in}$. Linearization consists in writing $\alpha_j(t)= e^{i\phi_j}(p_j+\delta p_j(t)+ i\delta q_j(t))$ and ignoring terms that involve products of fluctuations in the equations. Here $\ave{\alpha_j}=p_j e^{i\phi_j}$ is each field's mean amplitude, with $p_1=p_2\equiv p$ for equal overall intracavity losses in signal and idler, $\delta p_j(t)$ is the amplitude fluctuation, and $\delta q_j(t)$ is the phase fluctuation. Taking the average of the resulting equations gives information on the mean values of the fields. We may then separate the fluctuating part into real and imaginary contributions in order to obtain the equations of evolution for the quadratures of the fields.
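Before turning to the fluctuations, it is useful to record the mean values explicitly in the simplest situation (a sketch, assuming zero detunings, $\chi$ real, and the input amplitude $\alpha_0^{in}$ real and positive, with phases chosen so that the mean amplitudes are real, which requires $\phi_1+\phi_2=\phi_0$). The stationary solutions of Eqs.~(\ref{eqopo}) above threshold then read \begin{equation*} p_0=\frac{\gamma'}{2\chi}\,,\qquad 2\chi\,p^2=\sqrt{2\gamma_0}\,\alpha_0^{in}-\gamma'_0\,p_0 =\gamma'_0\,p_0\left(\sqrt{\sigma}-1\right), \end{equation*} so that the intracavity pump amplitude is clamped at its threshold value, while the ratio $\beta=p/p_0$ used below satisfies $\beta^2=(\gamma'_0/\gamma')\left(\sqrt{\sigma}-1\right)$. With these mean values at hand, we now turn to the fluctuation equations.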
Defining $\delta q_\pm=(\delta q_1\pm\delta q_2)/\sqrt{2}$ and $\delta p_\pm=(\delta p_1\pm\delta p_2)/\sqrt{2}$ as the normalized sum/subtraction of signal and idler amplitude and phase quadratures, we write the above equations in terms of the EPR variables: \begin{widetext} \begin{eqnarray} \tau\frac{d}{dt} \,\delta p_- & = & -2\gamma'\,\delta p_- + \sqrt{2\gamma}\,\delta u_{p-} + \sqrt{2\mu}\,\delta v_{p-} \nonumber \\ \tau\frac{d}{dt} \,\delta q_- & = & 2\Delta\gamma'\,\delta p_- + \sqrt{2\gamma}\,\delta u_{q-} + \sqrt{2\mu}\,\delta v_{q-} \nonumber \\ \tau\frac{d}{dt} \,\delta p_+ & = & -2\Delta\gamma'\,\delta q_+ + \sqrt{2}\gamma'\beta\,\delta p_0 + \sqrt{2}\Delta\gamma'\beta\,\delta q_0 + \sqrt{2\gamma}\,\delta u_{p+} + \sqrt{2\mu}\,\delta v_{p+} \label{eqopolinear} \\ \tau\frac{d}{dt} \,\delta q_+ & = & -2\gamma'\,\delta q_+ - \sqrt{2}\Delta\gamma'\beta\,\delta p_0 + \sqrt{2}\gamma'\beta\,\delta q_0 + \sqrt{2\gamma}\,\delta u_{q+} + \sqrt{2\mu}\,\delta v_{q+} \nonumber \\ \tau\frac{d}{dt} \,\delta p_0 & = & -\sqrt{2}\gamma'\beta\,\delta p_+ + \sqrt{2}\Delta\gamma'\beta\,\delta q_+ - \gamma'_0\,\delta p_0 - \Delta_0\gamma'_0\,\delta q_0 + \sqrt{2\gamma_0}\,\delta p_0^{in} + \sqrt{2\mu_0}\,\delta v_{p0} \nonumber \\ \tau\frac{d}{dt} \,\delta q_0 & = & -\sqrt{2}\Delta\gamma'\beta\,\delta p_+ - \sqrt{2}\gamma'\beta\,\delta q_+ + \Delta_0\gamma'_0\,\delta p_0 - \gamma'_0\,\delta q_0 + \sqrt{2\gamma_0}\,\delta q_0^{in} + \sqrt{2\mu_0}\,\delta v_{q0} \;, \nonumber \end{eqnarray} \end{widetext} where $\beta=p/p_0$ is the ratio between the intracavity amplitudes of downconverted and pump fields. Noise spectra of the transmitted fields are calculated by solving the above equations in Fourier space. We define $S_{p\pm}$ and $S_{q\pm}$ as the noise spectra of the operators $\hat p_\pm$ and $\hat q_\pm$, respectively. It is clear from Eq.~(\ref{eqopolinear}) that the subtraction-of-quadratures subspace decouples from the others, so that $S_{q-}$ and $S_{p-}$ depend only on the ratio of losses through the output cavity mirror to the total intracavity losses, and on the analysis frequency $\Omega$. These fluctuations do not depend on pump power, and are in a minimum uncertainty state, $S_{p-} \times S_{q-}=1$, if $\gamma=\gamma'$. \begin{figure}[ht] \centering \epsfig{file=fig1villarjosab.eps,scale=0.35} \caption{Prediction of the linearized theory for fluctuations in the sum/subtraction of field quadratures as a function of $\sigma$ for a shot noise limited pump beam. Full line: $S_{q+}$; dashed line: $S_{p+}$; line + crosses: $S_{p-}$; line + circles: $S_{q-}$.} \label{ruidoxdcteor} \end{figure} On the other hand, the sum-of-quadratures and pump subspaces are coupled. This directly implies that excess noise in the pump beam degrades signal-idler entanglement, and can even destroy it~\cite{optcomm04}. The behavior of the twin beams' fluctuations as functions of pump power relative to threshold $\sigma$, for a shot noise limited pump, is presented in Fig.~\ref{ruidoxdcteor}. The maximum squeezing of $S_{q+}$ occurs at threshold, and $S_{q+}$ approaches the shot noise level for higher pump powers. These behaviors change in the presence of excess noise in the pump. In this case, both $S_{q+}$ and $S_{p+}$ increase from their values at threshold. In particular, $S_{q+}$ goes from squeezing to excess noise. The point where it crosses the shot noise value solely depends on the amount of excess phase noise present in the pump beam.
For this reason, it was necessary to filter the pump field in the experiment, in order to observe entanglement. \subsection{Noise spectra beyond the linearized model} We present here a comparison between the linearized approach to the quantum noise in the OPO and the numerical integration of the quantum stochastic equations in the positive P-representation. This will help us to eliminate the linearization procedure as the reason for the discrepancy between the theoretical prediction of squeezing and the experimentally observed excess phase noise for $\sigma > 1.2$. We shall follow the procedure used in Ref.~\cite{nos}. Although exact Heisenberg equations of motion can be found for this system, it is, at the very least, extremely difficult to solve nonlinear operator equations. We therefore develop stochastic equations of motion in the positive P-representation, which in principle give access to any normally-ordered operator expectation values we may wish to calculate. To find the appropriate equations, we proceed via the master and Fokker-Planck equations. Using the standard techniques for elimination of the baths~\cite{hjc}, we find the zero-temperature master equation for the reduced density operator. The master equation may be mapped onto a Fokker-Planck equation~\cite{Crispin} for the positive-P pseudoprobability distribution. The cavity damping rates at each frequency are $\gamma^D_{j}=2\gamma_j/\tau$, with $\gamma_{1}=\gamma_{2}= \gamma$. We further define $\gamma_{r}=\gamma_{0}/\gamma$. In order to apply perturbation theory, we introduce a normalized coupling constant, \begin{equation} g = \frac{\chi}{\gamma^D \sqrt{2\gamma_{r}}}\;, \label{eq:a17} \end{equation} which will be a power expansion parameter. Moreover, it will be useful to work with the scaled quadratures \begin{eqnarray} &&x_{0} = g \sqrt{2 \gamma_{r}}\,p_{0} \quad , \qquad y_{0} = g \sqrt{2 \gamma_{r}} \,q_{0}, \nonumber \\ &&x_{+} = g\,p_{+} \quad , \; \qquad \qquad y_{+} = g\,q_{+}, \nonumber \\ &&x_{-} = g\,p_{-} \quad , \; \qquad \qquad y_{-} = g\,q_{-} \quad , \label{eq:a16} \end{eqnarray} in order to render the stochastic equations amenable to perturbation. The stochastic equations for the scaled EPR variables become \begin{widetext} \begin{eqnarray} \frac{d x_{0}}{d T} &=& -\gamma_{r} \left[ x_{0} - 2\sqrt{\sigma} + \frac{1}{2} \left( x_{+}^{2} - x_{-}^{2} - y_{+}^{2} + y_{-}^{2} \right) \right] \;, \nonumber \\ \frac{d y_{0}}{d T} &=& -\gamma_{r} \left[ y_{0} + x_{+} y_{+} - x_{-}y_{-} \right] \;, \nonumber \\ \frac{d x_{-}}{d T} &=& - x_{-} - \frac{1}{2} \left[ x_{0} x_{-} + y_{0}y_{-} \right] + \frac{g}{\sqrt{2}} \left[ \sqrt{x_{0} + i y_{0}}\;\xi_{-} + \sqrt{x_{0} - i y_{0}}\; \xi_{-}^{+} \right] \;, \label{eq:a18} \\ \frac{d y_{+}}{d T} &=& - y_{+} + \frac{1}{2} \left[ y_{0} x_{+} - x_{0}y_{+} \right] - i \frac{g}{\sqrt{2}} \left[ \sqrt{x_{0} + i y_{0}}\;\xi_{+} - \sqrt{x_{0} - i y_{0}}\; \xi_{+}^{+}\right] \;,\nonumber \\ \frac{d x_{+}}{d T} &=& - x_{+} + \frac{1}{2} \left[ x_{0} x_{+} + y_{0}y_{+} \right] + \frac{g}{\sqrt{2}} \left[ \sqrt{x_{0} + i y_{0}}\; \xi_{+} + \sqrt{x_{0} - i y_{0}}\; \xi_{+}^{+}\right] \;, \nonumber \\ \frac{d y_{-}}{d T} &=& - y_{-} + \frac{1}{2} \left[ x_{0} y_{-} - y_{0}x_{-} \right] - i \frac{g}{\sqrt{2}} \left[ \sqrt{x_{0} + i y_{0}}\;\xi_{-} - \sqrt{x_{0} - i y_{0}}\; \xi_{-}^{+} \right] \nonumber\;, \end{eqnarray} \end{widetext} where $T=\gamma^D\,t$ is time in units of the cavity lifetime for the down-converted fields.
The functions $\xi_{\pm}(T)$ and $\xi_{\pm}^+ (T)$ are independent Langevin forces with the following nonvanishing correlation functions: \begin{eqnarray} \langle \xi_{+}(T) \xi_{+}(T^\prime) \rangle &=&\langle \xi_{+}^{+}(T) \xi_{+}^{+}(T^\prime) \rangle = \delta (T - T^\prime)\;, \nonumber\\ \langle \xi_{-}(T) \xi_{-}(T^\prime) \rangle &=&\langle \xi_{-}^{+}(T) \xi_{-}^{+}(T^\prime) \rangle = -\delta (T - T^\prime)\;. \end{eqnarray} We notice the symmetry properties of the stochastic equations~(\ref{eq:a18}). In fact, it is easy to verify that the equations of motion are unchanged by the transformation $x_-\leftrightarrow y_+\,$ and $x_+\leftrightarrow -\,y_-\,$. Of course, all noise terms appearing in Eqs.~(\ref{eq:a18}) are statistically equivalent. Therefore, these equations should not change the symmetries of the initial values chosen for $x_+$ and $y_-$. In order to provide a comparison between the linearized model and the full stochastic integration, we will use a perturbation expansion of the positive P-representation of the dynamical equations. This allows us to include quantum effects in a systematic fashion~\cite{ndturco}. We first introduce a formal perturbation expansion in powers of the parameter $g$, \begin{eqnarray} && x_{k} = \sum_{n=0}^{\infty} g^{n} x_{k}^{(n)}, \nonumber \\ && y_{k} = \sum_{n=0}^{\infty} g^{n} y_{k}^{(n)}. \label{eq:a20} \end{eqnarray} The series expansion written in this way has the property that the zeroth order term corresponds to the classical field of order $1$ in the scaled quadrature, while the first order term is related to quantum fluctuations of order $g$, and the higher order terms correspond to nonlinear corrections to the quantum fluctuations of order $g^{2}$ and greater. The stochastic equations are then solved by the technique of matching powers of $g$ in the corresponding time evolution equations. The steady state solutions $x_{js}$ of the zeroth order give the operation point of the OPO and describe its macroscopic behavior. For triply resonant operation, the expressions for the steady state are quite simple: \begin{eqnarray} x_{0s}&=&2\;,\nonumber\\ x_{+s}&=&2\,\left(\sqrt{\sigma}-1\right)^{1/2}\;,\nonumber\\ x_{-s}&=&0\;,\label{sstate} \\ y_{0s}&=&y_{+s}=y_{-s}=0\;.\nonumber \end{eqnarray} The first order equations are often used to predict squeezing in a linearized fluctuation analysis. They are non-classical in the sense that they can describe states without a positive-definite Glauber-Sudarshan P-distribution~\cite{Roy,Sudarshan}, but correspond to a simple form of linear fluctuation which has a Gaussian quasi-probability distribution. A full quantum description of the OPO dynamics can be obtained by numerical integration of the stochastic equations~(\ref{eq:a18}), and can be compared to analytical expressions obtained from the linearized approach.
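A minimal sketch of how such an integration can be implemented is given below (an Euler-Maruyama scheme in Python; the parameter values are illustrative assumptions, a single intracavity trajectory is produced, and the input-output transformation and trajectory averaging needed to obtain measured spectra are omitted). The purely imaginary increments realize the negative $\delta$-correlations of $\xi_-$ and $\xi_-^+$.
\begin{verbatim}
import numpy as np

# Euler-Maruyama integration of the scaled positive-P equations (a18).
# All parameter values below are illustrative assumptions.
gamma_r = 0.25      # damping ratio gamma_0/gamma
sigma = 1.5         # pump power relative to threshold
g = 0.01            # scaled coupling constant
dt = 1.0e-3         # time step, in units of the signal cavity lifetime
nsteps = 200000
rng = np.random.default_rng(0)
sq = np.sqrt(dt)

# start at the classical steady state (sstate); positive-P variables
# are complex in general
x0, y0 = 2.0 + 0j, 0.0 + 0j
xp, yp = 2.0 * np.sqrt(np.sqrt(sigma) - 1.0) + 0j, 0.0 + 0j
xm, ym = 0.0 + 0j, 0.0 + 0j

record = np.empty(nsteps, dtype=complex)
for k in range(nsteps):
    # <xi_+ xi_+> = +delta -> real increments;
    # <xi_- xi_-> = -delta -> purely imaginary increments
    xi_p, xi_pc = rng.normal(0.0, sq), rng.normal(0.0, sq)
    xi_m, xi_mc = 1j * rng.normal(0.0, sq), 1j * rng.normal(0.0, sq)
    rp, rm = np.sqrt(x0 + 1j * y0), np.sqrt(x0 - 1j * y0)
    c = g / np.sqrt(2.0)

    dx0 = -gamma_r * (x0 - 2.0 * np.sqrt(sigma)
                      + 0.5 * (xp**2 - xm**2 - yp**2 + ym**2)) * dt
    dy0 = -gamma_r * (y0 + xp * yp - xm * ym) * dt
    dxm = (-xm - 0.5 * (x0 * xm + y0 * ym)) * dt \
          + c * (rp * xi_m + rm * xi_mc)
    dyp = (-yp + 0.5 * (y0 * xp - x0 * yp)) * dt \
          - 1j * c * (rp * xi_p - rm * xi_pc)
    dxp = (-xp + 0.5 * (x0 * xp + y0 * yp)) * dt \
          + c * (rp * xi_p + rm * xi_pc)
    dym = (-ym + 0.5 * (x0 * ym - y0 * xm)) * dt \
          - 1j * c * (rp * xi_m - rm * xi_mc)

    x0, y0, xp, yp = x0 + dx0, y0 + dy0, xp + dxp, yp + dyp
    xm, ym = xm + dxm, ym + dym
    record[k] = yp

# crude intracavity spectrum of y_+ from a single stationary trajectory
spec = np.abs(np.fft.rfft(record - record.mean()))**2 / nsteps
\end{verbatim}
We now derive the linearized spectra against which such an integration is compared.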
Taking the first order terms and using the steady state solutions given by Eqs.~(\ref{sstate}), we can write the following equations for the linear quantum fluctuations, \begin{eqnarray} \frac{d x_{0}^{(1)}}{d T} &=& -\gamma_{r} \left[ x_{0}^{(1)} + 2\,\left(\sqrt{\sigma}-1\right)^{1/2}x_{+}^{(1)} \right] \;, \nonumber \\ \frac{d y_{0}^{(1)}}{d T} &=& -\gamma_{r} \left[ y_{0}^{(1)} + 2\,\left(\sqrt{\sigma}-1\right)^{1/2} y_{+}^{(1)} \right] \;, \nonumber \\ \frac{dx_{+}^{(1)}}{d T} &=& \left(\sqrt{\sigma}-1\right)^{1/2}\,x_{0}^{(1)} +\left( \xi_{+} + \xi_{+}^{+} \right) \;, \label{eq:firstordernoise} \\ \frac{dx_{-}^{(1)}}{d T} &=& - 2\,x_{-}^{(1)} + \left( \xi_{-} + \xi_{-}^{+} \right) \;, \nonumber \\ \frac{dy_{+}^{(1)}}{d T} &=& - 2\,y_{+}^{(1)} + \left(\sqrt{\sigma}-1\right)^{1/2} y_0^{(1)}-i \left( \xi_{+} - \xi_{+}^{+} \right) \;, \nonumber \\ \frac{dy_{-}^{(1)}}{d T} &=& -i \left( \xi_{-} - \xi_{-}^{+} \right) \nonumber\;. \end{eqnarray} The linear coupled stochastic equations obtained agree with Eqs.~(\ref{eqopolinear}), for zero detunings and no spurious losses. From them, we may readily calculate the steady state averages of the first-order corrections and use them to compute the linearized fluctuations. Notice that under the linear approximation $y_-$ becomes a purely diffusive variable (phase diffusion). In an experimental situation, the noise spectra outside the cavity are generally the quantities of interest. We will therefore proceed to analyze the problem in frequency space, via Fourier decomposition of the fields. The first order stochastic equations may be rewritten in the frequency domain so that we may calculate the spectra of the squeezed and anti-squeezed field quadratures. The solutions for the noise of the squeezed operators, $\hat p_{-}$ and $\hat q_{+}$, are: \begin{equation} S_{p-}(\Omega^\prime) = 1 - \frac{1}{\Omega^{\prime 2} + 1} \label{eq:sp-} \end{equation} and \begin{widetext} \begin{equation} S_{q+}(\Omega^\prime) = 1 - \frac{(4\,\Omega^{\prime 2}+\gamma_r^2)^2} {\Omega^{\prime 2}\left[ 4\,\Omega^{\prime 2} + \gamma_r^2 - 2\,\gamma_r\, \left(\sqrt{\sigma}-1\right)\right]^2 + \left[ 4\,\Omega^{\prime 2} + \gamma_r^2\,\sqrt{\sigma}\right]^2}\;, \label{eq:sq+} \end{equation} \end{widetext} where $\Omega^\prime=\Omega/\gamma^D$ is the analysis frequency in units of the cavity bandwidth. Within the linearized approach, the noise spectra are independent of the phase space representation employed. Therefore, these results coincide with the usual ones obtained with the Wigner representation. The spectra given by Eqs.~(\ref{eq:sp-}) and (\ref{eq:sq+}) can now be compared with those found via stochastic integration of the full equations of motion~(\ref{eq:a18}) in the positive P-representation. The nonlinear spectra are calculated by Fourier transform of the stochastic integration, which must be performed numerically. A somewhat subtle point arises here: the nonlinear Eqs.~(\ref{eq:a18}) have more than one possible steady-state solution. Thus, for a fair comparison with the linearized spectra, it is necessary to choose the same steady state. By doing this, we verified that, in the above-threshold OPO, both predictions agree to within good numerical precision. Therefore, we conclude that possible limitations of the linearized model for dealing with the OPO dynamics under phase diffusion do not account for the experimentally observed excess noise of $\hat q_+$.
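As a simple consistency check of Eqs.~(\ref{eq:sp-}) and (\ref{eq:sq+}), note that at zero analysis frequency they reduce to \begin{equation*} S_{p-}(0)=0\,,\qquad S_{q+}(0)=1-\frac{\gamma_r^4}{\bigl(\gamma_r^2\sqrt{\sigma}\bigr)^2}=1-\frac{1}{\sigma}\,, \end{equation*} so the linearized theory predicts perfect squeezing of $\hat q_+$ at threshold and squeezing, $S_{q+}<1$, for every $\sigma>1$, but never excess noise; this is consistent with Fig.~\ref{ruidoxdcteor} and with the conclusion just drawn.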
\section{Experiment} \label{sec:exp} \begin{figure}[ht] \centering \epsfig{file=fig2villarjosab.eps,scale=0.35} \caption{Sketch of the experimental setup.} \label{setup} \end{figure} Our system is a triply resonant type-II OPO operating above threshold. The experimental setup is depicted in Fig.~\ref{setup}. The pump beam is produced by a diode-pumped frequency-doubled Nd:YAG laser (Innolight Diabolo) with 900~mW output power at 532~nm. A secondary output at 1064~nm is used for alignment purposes. Since the pump beam presents excess noise for frequencies as high as 20~MHz, a filter cavity is necessary. Our filter cavity has a bandwidth of 2.4~MHz and ensures that the pump laser is shot noise limited for analysis frequencies higher than 15~MHz (see Fig.~\ref{pumpnoise}). We measured the laser phase noise by reflecting the beam off an empty cavity, in the same way we measure phase noise of the downconverted beams. The phase noise equals the intensity noise, except at a frequency of 12~MHz, where there is very large phase noise, owing to a frequency modulation applied inside the Diabolo laser for stabilization purposes. This excess noise saturates our electronics and prevents measurements for analysis frequencies close to 12~MHz and also to its second harmonic, 24~MHz, as can be seen in Fig.~\ref{pumpnoise}. The OPO cavity is a linear semi-monolithic cavity composed of a flat input mirror, directly deposited on one face of the nonlinear crystal, with 93\% reflectivity at 532~nm and high reflectivity ($>99.8$\%) at 1064~nm, and a spherical output mirror (50~mm curvature radius) with high reflectivity at 532~nm ($>99.8$\%) and 96\% reflectivity at 1064~nm. The nonlinear crystal is a 10~mm-long Potassium Titanyl Phosphate (KTP) from Litton. Threshold power is 12~mW. \begin{figure}[ht] \centering \epsfig{file=fig3villarjosab.eps,scale=0.35} \caption{Measurement of the pump noise, as a function of the analysis frequency. Open circles: unfiltered laser noise; full circles: laser noise at the output of the filter cavity. In view of the large excess noise at 12~MHz and its second harmonic, we suppressed those frequencies from the data.} \label{pumpnoise} \end{figure} Signal and idler beams are separated by a polarizing beam splitter (PBS) and sent to detection, which consists of a ring cavity and a photodetector (Epitaxx ETX 300) for each beam. Overall detection efficiency is $\eta=80(2)\%$. Both analysis cavities have bandwidths of 14~MHz, allowing for a complete conversion of phase to amplitude noise for analysis frequencies higher than 20~MHz. Measurements are taken at an analysis frequency of 27~MHz. In order to access the same quadrature for both beams, the two cavities must be detuned by the same amount at the same time. By scanning the detunings synchronously, we can measure all quadratures of the twin beams. In particular, we can easily select the amplitude (off resonance) or phase (detuning equal to half the bandwidth) quadratures~\cite{galatola}. Data acquisition is carried out by a demodulating chain, which mixes the photocurrents from each detector with a sinusoidal electronic reference at the analysis frequency and filters the resulting low frequency signal. The demodulated photocurrent fluctuations are sampled at a 600~kHz repetition rate by an A/D card connected to a personal computer. The variances of these fluctuations are then computed taking groups of 1000 points, resulting in a quantity proportional to the photocurrents' power spectrum at the analysis frequency.
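The acquisition chain can be emulated digitally as in the following sketch (an illustration under stated assumptions: in the experiment the mixing and filtering are performed by analog electronics and only the demodulated signal is sampled at 600~kHz, whereas here a fast-sampled photocurrent is assumed; function and parameter names are ours).
\begin{verbatim}
import numpy as np

def demodulated_variances(photocurrent, f_analysis, f_fast,
                          f_slow=600e3, group=1000):
    # Mix the (fast-sampled) photocurrent with a sinusoidal reference
    # at the analysis frequency.
    t = np.arange(photocurrent.size) / f_fast
    mixed = 2.0 * photocurrent * np.sin(2.0 * np.pi * f_analysis * t)
    # Low-pass filter and decimate to the 600 kHz acquisition rate by
    # block averaging.
    step = int(f_fast / f_slow)
    m = (mixed.size // step) * step
    demod = mixed[:m].reshape(-1, step).mean(axis=1)
    # Variance of groups of 1000 points: proportional to the power
    # spectrum at the analysis frequency (normalization to the shot
    # noise level is done separately).
    n = demod.size // group
    return demod[:n * group].reshape(n, group).var(axis=1)
\end{verbatim}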
At the end, measured variances are normalized to the shot noise standard quantum level (SQL). \subsection{Fluctuations as a function of $\sigma$} The input pump field is guaranteed to be shot noise limited for frequencies above 15~MHz after being transmitted through the filter cavity. Even before being filtered, the pump field is shot noise limited above 25~MHz, as shown in Fig.~\ref{pumpnoise}. Nevertheless, we observed excess noise in the sum of phases of signal and idler beams, preventing the violation of the inequality given in Eq.~(\ref{dgczcrit}), except for pump powers very close to threshold~\cite{prlentangtwinopo}. \begin{figure}[ht!] \centering \epsfig{file=fig4villarjosab.eps,scale=0.35} \caption{Intensity noise of the reflected pump beam, as a function of the detuning of the OPO cavity. The excess noise observed is peaked for $\Delta_0$ close to half the OPO cavity bandwidth. The asymmetry in the mean field signal is due to thermal bistability. The analysis frequency is 27~MHz. Circles: reflected pump noise; full line: reflected average intensity.} \label{pumpnoiseref} \end{figure} As seen in the theoretical description of the OPO in Section~\ref{sec:theory}, excess noise in the pump beam would generate excess noise in the phase sum of the twin beams. Yet, how could that be the case if we carefully measured the input pump to be shot noise limited? By following this single lead, it is natural to examine the noise properties of the pump beam reflected from the OPO cavity. This was done by scanning the OPO cavity, for crystal temperatures such that there was no parametric oscillation (triple resonance depends sharply on crystal temperature and can be easily avoided). Since the incident beam is shot noise limited, could there be excess noise generated inside the cavity containing the KTP crystal? We did indeed find excess noise in the reflected pump's amplitude (Fig.~\ref{pumpnoiseref}) and phase quadratures. The maximum values, for $\sigma=1$, were $S^{\mathrm{R}}_{p0}= 1.8(1)$ and $S^{\mathrm{R}}_{q0}=4.5(3)$. At present, we still cannot claim to fully understand the origin of this excess noise. We verified, of course, that no such noise is generated in an empty cavity (which would also invalidate the measurements we perform with the analysis cavities for the twin beams). We also checked whether this effect depended on $\chi$ and would thus be directly related to the parametric process. For a polarization of the incident beam orthogonal to the usual polarization, phase matching can not be fulfilled, and no downconversion can occur. The noise in the reflected beam did not show any significant dependence on the incident polarization. It does, however, increase for increasing power of the incident beam. We can speculate that this may be a result of photon absorption by the crystal at 532~nm (which is at the origin of the thermal bistability observed in Fig.~\ref{pumpnoiseref}), with subsequent relaxation by spontaneous emission or non-radiative processes. This may give rise to an intensity-dependent refractive index, yielding phase and amplitude modulation at 532~nm. We are currently investigating these possibilities. As a first approximation, in order to see whether this would account for the behavior of $\Delta^2 \hat p_-$, $\Delta^2 \hat q_-$, $\Delta^2 \hat p_+$, and $\Delta^2 \hat q_+$, as a function of $\sigma$, we simply added excess noise to the input pump beam in the linearized OPO theory.
In Fig.~\ref{ruidoxdcexp}, we compare the results from the model, with incident $S_{p0}=1.5$ and $S_{q0}=5.5$, to the measured data. Signal and idler powers varied from 0.4~mW up to 5.5~mW each during the experiment, corresponding to pump powers between 13~mW and 26~mW, or $1.06<\sigma<2.2$. As expected, the noises corresponding to the subtraction subspace, $\Delta^2 \hat p_-$ and $\Delta^2 \hat q_-$, are independent of pump power, while $\Delta^2 \hat q_+$ is very sensitive to $\sigma$, and $\Delta^2 \hat p_+$ to a lesser degree. The agreement with the theoretical model is surprisingly good. This is a strong indication that the intracavity pump excess noise is mainly responsible for the excess noise in $\Delta^2 \hat q_+$. \begin{figure}[htbp!] \centering \epsfig{file=fig5villarjosab.eps,scale=0.45} \caption{Noise behavior as a function of $\sigma$. In part (a), we present the predictions of the linearized model, for an input pump beam with $S_{p0}=1.5$ and $S_{q0}=5.5$; dashed line: $S_{p+}$; full line + open circles: $S_{q-}$; full line: $S_{q+}$; full line + crosses: $S_{p-}$; SQL = 1.0 is indicated by a dashed line. In part (b), experimental results are shown for $\sigma$ ranging from 1.06 to 2.2. Full circles: $S_{p+}$; triangles: $S_{q-}$; open circles: $S_{q+}$; squares: $S_{p-}$; SQL = 1.0 is indicated by a dashed line.} \label{ruidoxdcexp} \end{figure} \subsection{Two-color entanglement} The sum of phases' noise is squeezed very close to threshold, and squeezing is degraded with increasing pump power. $\Delta^2 \hat q_+$ crosses the shot noise level approximately at $\sigma=1.20$, from squeezing to anti-squeezing, although only below $\sigma=1.15$ can squeezing be observed with certainty. Fig.~\ref{entang} shows the recorded noise in sum and subtraction of photocurrent fluctuations of signal and idler beams as functions of analysis cavities' detuning, for $\sigma=1.06$. Off resonance, quantum correlations are observed in the subtraction of amplitudes, $\Delta^2 \hat p_-=0.50(1)$, or $-3.01(9)$~dB. For analysis cavities' detuning equal to half the bandwidth, squeezing is present in the sum of phases, $\Delta^2 \hat q_+=0.73(1)$, or $-1.37(6)$~dB. The Duan {\it et al.} and Simon criterion, Eq.~(\ref{dgczcrit}), is then clearly violated, \begin{equation} \Delta^2 \hat p_-+\Delta^2 \hat q_+=1.23(2)<2 \;, \end{equation} attesting to the entanglement. This value, together with the one reported by Jing {\it et al.}~\cite{pfisterentang}, is the lowest achieved for twin beams produced by an above-threshold OPO. \begin{figure}[ht!] \centering \epsfig{file=fig6villarjosab.eps,scale=0.35} \caption{Sum (full circles) and difference (open circles) of quadratures' noise, measured as a function of the analysis cavities' detuning. Squeezed-state entanglement can be directly observed, with $\Delta^2 \hat p_-=0.50(1)$, or $-3.01(9)$~dB, and $\Delta^2 \hat q_+=0.73(1)$, or $-1.37(6)$~dB.} \label{entang} \end{figure} We also point out that, in this experiment, the twin beams have very different frequencies (wavelengths differ by $\approx 1$~nm), an unusual situation. Such two-color entanglement can be very interesting for the transfer of quantum information between different parts of the electromagnetic spectrum. \section{Conclusion} We presented a theoretical and experimental investigation of phase noise and entanglement in the above-threshold OPO.
Excess noise in the phase sum of the twin beams was measured as a function of pump power relative to threshold and we found that it decreases as pump power is lowered. We finally discovered that excess pump noise is generated inside the OPO cavity containing the nonlinear crystal, even for a shot noise limited pump beam and without parametric oscillation. The ultimate physical origin of this phenomenon still requires further investigation. Another important question to address is how one can eliminate this effect. Su {\it et al.}~\cite{optlettpeng} were able to observe entanglement for $\sigma$ of the order of two. The difference between their setup and others is a lower cavity finesse for the pump field. If the assumption of an intensity-dependent index of refraction is correct, this makes sense. For a lower finesse, phase shifts accumulated inside the cavity should be smaller, hence the excess noise generated should also be smaller. In spite of these unexpected phenomena, two-color entanglement was measured in the above-threshold OPO. There are interesting avenues to pursue for applications in quantum information. First of all, we should mention that the strongest squeezing measured to date, $-9.7$~dB, was generated in an above-threshold OPO~\cite{claude10db}. Thus, entanglement in the above-threshold OPO may be the strongest ever achieved for continuous variables. The bright twin beams can have very different frequencies, and one can envisage CV quantum teleportation~\cite{kimbleteleport} to transfer quantum information from one frequency to another (in other words, to ``tune'' quantum information). For example, this system could be used to communicate quantum information between quantum memories or quantum computers based on ``hardware'' with different resonance frequencies. Finally, a quantum key distribution protocol proposed by Silberhorn {\it et al.}~\cite{qkdleuchs} can be readily implemented, with the advantage that the measurement with analysis cavities does not require sending a local oscillator together with the quantum channel to the distant receiver. The above-threshold OPO, which was the first system proposed to observe continuous variable entanglement, has finally been added to the optical quantum information toolbox. We expect new and exciting applications to come in the near future. \section*{Acknowledgments} This work was supported by FAPESP, CAPES, and CNPq through {\it Instituto do Mil\^enio de Informa\c c\~ao Qu\^antica}. We thank C. Fabre and T. Coudreau for kindly lending us the nonlinear crystal.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Data augmentation is described as imagination or dreaming by C. Shorten and T.~M. Khoshgoftaar~\cite{Shorten2019ASO}. They survey different methods of image data augmentation for deep learning, including adversarial training and generative adversarial networks (GANs) \cite{Goodfellow2014GAN}. Adversarial training can be used to attack or defend systems, as well as to increase the amount of available training data. The goal of data augmentation is to create new data samples from the existing training set. This new data is obtained according to the purposes of the application \cite{Marks1995boundary}. In this work, data augmentation serves to supplement the kind of out-of-distribution outlier data that resides closest to the distribution manifold. In previous work \cite{Bui_Symbiotic_2021,Bui_Watchdog_2021}, the autoencoder watchdog was introduced to identify outliers in classification neural networks. The autoencoder watchdog measures an error function, such as the root mean square error (RMSE), between the network's input and output. This is a measure of the distance between the autoencoder input and the training data manifold as manifested in the latent space. Data samples distant from the manifold are out-of-distribution. The closer the data gets to the manifold, the more fuzzy the classification becomes. The work presented in this paper specializes in identifying outliers that are close to the manifold of the distribution. This is achieved in two steps. First, generating an augmented training data set that lies close to the boundary of the distribution manifold. Second, creating a binary classifier neural network that specializes in differentiating between in-distribution data and on-the-boundary data. The resulting cascade watchdog is comprised of two layers of defense against outliers. The first layer of defense of the cascade watchdog is the autoencoder layer, and the second layer of defense is a binary classifier layer. \section{Background} GANs are effective tools for data augmentation. Since the first publication, where Goodfellow et al. introduced GANs in 2014 \cite{Goodfellow2014GAN}, a variety of techniques and applications have been developed across diverse fields. Yi et al. \cite{Yi_2019} present a review of adversarial training in medical imaging, one of the most prolific fields of application of GANs to data augmentation. The basic structure of a GAN consists of a generator and a discriminator. The generator is trained to fool the discriminator, while the discriminator is trained to differentiate between real and fake inputs. After a GAN has been trained, it provides a source of freshly generated data samples which resemble real system inputs. Once the generator of the GAN is trained, it produces in-distribution samples from noise. Variations of GANs and their many applications are available in the literature \cite{Frid_Adar_2018,zhang2020gan,Zhu2018EmotionCW}. This paper presents a method of adversarial training inspired by the GAN model. The main contrast between the generative adversarial training method presented here and other widespread applications of GANs resides in the different target. Normally, the goal of a GAN is to generate data in the distribution of the dataset. However, in this paper, the goal is to obtain out-of-distribution samples within a certain distance of the distribution. This work builds on autoencoder watchdog neural networks for outlier identification in classification networks \cite{Bui_Watchdog_2021,Bui_Symbiotic_2021}.
The cascade watchdog improves upon the precision of the outlier identification task. It is capable of both identifying more outliers and reducing misidentification significantly. In a nutshell: \begin{enumerate} \item The autoencoder serves as the discriminator of the GAN module, \item The GAN module generates out-of-distribution data samples close to the distribution manifold, and \item A combination of in-distribution and generated out-of-distribution data is used to train the binary classifier. \end{enumerate} The binary classifier specializes in identifying outliers that are closer to the distribution manifold. Data far within the manifold are easy to classify. Classification of data close to the manifold surface is more difficult. Applying the autoencoder and the binary classifier in sequential order improves outlier identification, while preventing the network from discarding in-distribution elements by mistake. Other approaches are available for outlier identification. Atlas et al. \cite{atlas1990X} and Hwang et al. \cite{hwang1990,hwang1991} identified manifold boundary points using neural network inversion \cite{jensen1997location,jensen1999inversion,thompson2003inversion}. Yu et al. \cite{yu2019} propose a method that identifies only outliers that are far from the distribution. Lee et al. \cite{lee2018training} use a generator component to produce data samples on the boundary of the distribution. They train the classifier to assign less confidence to the classification of inputs on the boundary of the distribution. In order to obtain less confidence at the output of the classifier for 'boundary' inputs, they set the output target as the uniform distribution for 'boundary' inputs during the training process. \section{Methodology} The distribution manifold occupies a small portion of the input space. The autoencoder layer is capable of identifying many outliers, covering a significant portion of the input space. To determine the boundary between in-distribution and out-of-distribution, the autoencoder uses a threshold hyperparameter. When selecting the threshold, there is a trade-off between false negatives (out-of-distribution data samples that are not identified as outliers) and false positives (in-distribution data samples that are classified as outliers). A large threshold reduces the false positive rate but increases the false negative rate, whereas a small threshold reduces the false negative rate but increases the false positive rate. The second layer of the cascade watchdog is a fine-grained binary classifier that complements the autoencoder layer. The binary classifier specializes in identifying outliers that reside close to the distribution. The threshold of the autoencoder can then be increased to reduce the false positive rate, relying on the additional layer of defense provided by the binary classifier to keep the false negative rate low. At the end of this process, both the false positive and false negative rates are reduced. More outliers are identified and fewer in-distribution data are erroneously discarded. \subsection{Adversarial Watchdog} The autoencoder also supports the development of a GAN, which generates out-of-distribution data samples close to the autoencoder threshold. For training the GAN, two goals are necessary to generate a rich dataset: \begin{enumerate} \item Producing data samples where the autoencoder produces an error similar to the threshold. \item Distributing the generated data samples across the boundary of the manifold.
To avoid the collapse of the GAN, each generated output depends on a point on the manifold taken from the in-distribution training set. Each generated data sample comes from one input in the training data. The distance between generated outliers and inputs must be similar to the threshold of the autoencoder. The same error function between the input and output of the autoencoder is used to measure the distance between the generated out-of-distribution data sample and the original in-distribution data sample. \end{enumerate} The combination of these two targets generates a dataset that is spread through the space near the boundary of the distribution manifold, preventing the collapse of the GAN. The sequence of steps to produce the second layer of the cascade watchdog is: \begin{enumerate} \item Train the autoencoder. \item Train the GAN and generate the dataset on the boundary of the distribution. \item Create a fine-grained binary classifier. \end{enumerate} The last step consists of training the binary classifier on two categories: in-distribution and out-of-distribution. The original training dataset is labeled as in-distribution, while the dataset on the boundary of the distribution generated with the GAN is labeled as out-of-distribution. After both the autoencoder and binary classifier are trained, they are combined sequentially to form the cascade watchdog, as seen in Figure \ref{fig:flowchartwd}. First, the input is analyzed using the autoencoder layer, which identifies whether the input is an outlier or not. If the autoencoder does not identify the input as an outlier, the input is then analyzed by the binary classifier. If neither the autoencoder nor the binary classifier identifies the input as an outlier, then the input is considered to be an in-distribution element. \begin{figure}[!t] \centering \includegraphics[width=0.6\linewidth]{flowchartWD} \caption{Flowchart of the cascade watchdog. An input that passes both layers of defense is considered to belong to the distribution.} \label{fig:flowchartwd} \end{figure} The performance of the binary classifier is evaluated with a ten-fold bias-variance analysis. The quality of the binary classifier is measured by observing the false and true negatives. Figure \ref{fig:manifold} shows a Venn diagram of the cascade watchdog formed by the autoencoder and the binary classifier. The domain of the autoencoder is the full input space, where the outliers that are farther from the manifold are detected, while the domain of the binary classifier is only the space that the autoencoder does not filter. Ideally, all of the outliers are detected while all in-distribution data is permitted. Observe that Figure \ref{fig:manifold} does not contain false positive outliers (i.e. no part of the manifold is marked as out-of-distribution), but has false negative outliers (outliers that are very close to the manifold are not detected). \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{manifold} \caption{Venn diagram dividing the input space. The biggest area is the set of outliers detected by the autoencoder, inside are those filtered by the binary classifier, and the bold ``M'' is the manifold with some false negatives around.} \label{fig:manifold} \end{figure} \section{Experimentation} The first step of the experiments is to train the autoencoder with the MNIST dataset% \footnote{The MNIST dataset is available on TensorFlow: \url{https://www.tensorflow.org/datasets/catalog/mnist}}.
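A minimal sketch of this first step is shown below, assuming a dense TensorFlow/Keras architecture. The 16-feature latent space matches the architecture described later; the hidden sizes, optimizer, and epoch count are illustrative assumptions. Since the RMSE threshold of 5 quoted below presumably refers to the original $[0,255]$ pixel scale, the error is rescaled accordingly.
\begin{verbatim}
import tensorflow as tf

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# Encoder/decoder pair with a 16-feature latent space; the hidden
# layer sizes are illustrative assumptions.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(16, activation="relu")])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(784, activation="sigmoid")])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=128)

# Watchdog test: flag an input (a flat vector of 784 pixels in [0,1])
# as an outlier when its reconstruction RMSE, measured on the original
# [0, 255] pixel scale, exceeds the threshold.
def is_outlier(x, threshold=5.0):
    rec = autoencoder(x[None, :]).numpy()[0]
    rmse = ((255.0 * (x - rec)) ** 2).mean() ** 0.5
    return rmse > threshold
\end{verbatim}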
Examples of the MNIST dataset are shown in Figure \ref{fig:indistribution}. The GAN is trained to generate data samples close to the boundary of the distribution (see some examples in Figure \ref{fig:ontheboundary}). The boundary of the distribution is approximated by the threshold error function between the input and the output of the autoencoder. The error function chosen for the threshold of the autoencoder is the RMSE. For the experiments, a value of 5 is selected for the autoencoder threshold. The loss function of the GAN has four components: \begin{equation}\label{eq:lossfn} \text{GAN loss function} = \frac{\text{(a)}+\text{(b)}+\text{(c)}+\text{(d)}}{4} \end{equation} where: \begin{enumerate}[(a)] \item $= \mid $threshold - RMSE(input, GAN output)$\mid$ \item $=$ max\{0, RMSE(input, GAN output) - threshold\} \item $=\mid$threshold - RMSE(GAN output, autoencoder(GAN output))$\mid$ \item $=$ max\{0, RMSE(GAN output, autoencoder(GAN output)) - threshold\} \end{enumerate} and: \begin{itemize}[$\bullet$] \item ``$\mid$·$\mid$'' is the absolute value. \item Each ``input'' is a data sample from the MNIST training dataset. \item The ``GAN output'' is the data sample on the boundary of the distribution generated by the GAN from the current ``input'' that is being used for training. \item ``autoencoder(GAN output)'' is the reconstruction that the previously trained autoencoder produces from the ``GAN output''. \end{itemize} Note that the threshold hyperparameter in the loss function of the GAN is a different variable from the autoencoder threshold, even though they are technically related. The loss function in \eqref{eq:lossfn} penalizes manifold distances that are greater than or smaller than the GAN threshold hyperparameter. The distances to the manifold that are greater than the threshold (included in (a), (b), (c), and (d)) are penalized twice as heavily as those that are smaller than the threshold (only included in (a) and (c)). The experiments reveal that to generate images with a specific RMSE value on the autoencoder, the training of the GAN requires a smaller value for the threshold hyperparameter. For instance, the GAN needs a threshold hyperparameter of approximately 0.2 to obtain results with an RMSE of approximately 5.25; a GAN threshold of 0.05 produces images with autoencoder RMSE values of about 1.3. Setting the GAN threshold hyperparameter to about 0.3 or larger makes the GAN collapse. Using grid search \cite{lavalle2004relationship}, the optimal value for the threshold parameter of the GAN is $0.1375$. With this threshold, the GAN produces images with an average autoencoder RMSE of $4.14$. Even though the average RMSE of $4.14$ is below the autoencoder threshold value, set as~5, GAN-generated images using this threshold as the distance to the manifold produce better results at the end of the experimentation pipeline. After training the GAN, one data sample on the boundary of the distribution is obtained for each input of the original dataset (see Figure \ref{fig:GANsamples}). The training dataset of the fine-grained binary classifier layer consists of both the original in-distribution data and the generated on-the-boundary data.
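The on-the-boundary dataset just described is produced by training the GAN with the loss in Eq. \eqref{eq:lossfn}, whose four components translate almost directly into code. The sketch below continues the previous one (\texttt{encoder} and \texttt{autoencoder} are assumed to be the trained models, with pixels on the unit scale, which is an assumption about where the $0.1375$ threshold lives) and anticipates the transfer-learning choice described in the next section: the encoder is frozen and only a fresh decoder is trained.
\begin{verbatim}
import tensorflow as tf

# GAN generator = frozen trained encoder + fresh trainable decoder.
encoder.trainable = False
gan_decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(784, activation="sigmoid")])

opt = tf.keras.optimizers.Adam(1e-4)
THRESHOLD = 0.1375   # GAN threshold hyperparameter found by grid search

def rmse(a, b):
    return tf.sqrt(tf.reduce_mean(tf.square(a - b), axis=-1))

def gan_train_step(x):                      # x: batch of MNIST inputs
    with tf.GradientTape() as tape:
        out = gan_decoder(encoder(x))       # "GAN output"
        rec = autoencoder(out)              # "autoencoder(GAN output)"
        d_in = rmse(x, out)                 # distance to the seed input
        d_ae = rmse(out, rec)               # autoencoder error of the output
        a = tf.abs(THRESHOLD - d_in)        # term (a)
        b = tf.nn.relu(d_in - THRESHOLD)    # term (b) = max{0, .}
        c = tf.abs(THRESHOLD - d_ae)        # term (c)
        d = tf.nn.relu(d_ae - THRESHOLD)    # term (d) = max{0, .}
        loss = tf.reduce_mean(a + b + c + d) / 4.0
    grads = tape.gradient(loss, gan_decoder.trainable_variables)
    opt.apply_gradients(zip(grads, gan_decoder.trainable_variables))
    return loss
\end{verbatim}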
\begin{figure*}[!t] \centering \begin{subfigure}[t]{0.7\textwidth} \includegraphics[width=\textwidth]{in_distribution} \caption{Original data samples from the MNIST dataset.} \label{fig:indistribution} \end{subfigure} \hfill \begin{subfigure}[t]{0.7\textwidth} \includegraphics[width=\textwidth]{on_the_boundary} \caption{GAN-generated on-the-boundary samples.} \label{fig:ontheboundary} \end{subfigure} \caption{Illustration of the boundary of the distribution. Each column corresponds to an input and the corresponding output of the GAN.} \label{fig:GANsamples} \end{figure*} The training of the autoencoder is excluded from the bias-variance analysis. Once trained, the autoencoder is used for all 10 experimental runs. On each run, the GAN is reinitialized and retrained, generating a fresh on-the-boundary dataset, and the binary classifier is likewise reinitialized and retrained. \num{50000} data samples from the MNIST dataset are used in this process. In order to characterize the performance of the cascade watchdog, \num{10000} samples from the MNIST dataset and \num{10000} outlier samples are used. The outlier samples are taken from the Fashion MNIST dataset% \footnote{The Fashion MNIST dataset is available on TensorFlow: \url{https://www.tensorflow.org/datasets/catalog/fashion_mnist}.}. Samples of the Fashion MNIST dataset can be seen in Figure \ref{fig:examples_pictures}. The in-distribution samples from the MNIST dataset used for testing are separate from the samples used for training. The architecture of the autoencoder (see Figure \ref{fig:AEarchitecture}) has an encoder and a decoder connected sequentially. In between, the latent space has 16 features. The architecture of the GAN is the same as that of the autoencoder. Transfer learning is applied to the encoder block of the GAN by copying the weights from the autoencoder. The encoder of the GAN is then frozen during the training and only the decoder block of the GAN is trained. In essence, the autoencoder latent representation of the MNIST training dataset is used as input for the GAN decoder. In other words, the generative component of the GAN is trained to produce on-the-boundary data samples from the autoencoder latent representation of the in-distribution data samples. \begin{figure}[!t] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=0.6\linewidth]{encoder_structure} \caption{Encoder architecture.} \label{fig:encoderstructure} \end{subfigure} \hfill \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=0.6\linewidth]{decoder} \caption{Decoder architecture.} \label{fig:decoder} \end{subfigure} \caption{Architecture of the autoencoder. The same structure is used for the autoencoder and for the GAN. The GAN uses transfer learning from the autoencoder for the encoder and only trains the decoder.} \label{fig:AEarchitecture} \end{figure} Finally, the architecture of the binary classifier is shown in Figure \ref{fig:cnnbinarystructure}. The binary classifier is a standard convolutional neural network classifier. \begin{figure}[!t] \centering \includegraphics[width=.6\linewidth]{cnnbinary_structure} \caption{Architecture of the binary classifier used as the second layer of defense of the cascade watchdog.} \label{fig:cnnbinarystructure} \end{figure} \section{Analysis} The workflow of the cascade watchdog (see Figure \ref{fig:flowchartwd}) has two steps: \begin{enumerate} \item Autoencoder outlier detection. \item Binary classifier outlier detection.
\end{enumerate} In the first step, the autoencoder detected \num{9347} outliers out of \num{10000} data samples taken from the Fashion MNIST dataset. In the second step, the remaining 653 outliers not detected by the autoencoder are tested on the binary classifier. The results of the ten-fold bias-variance analysis are shown in Table \ref{tab:roc} and in Figure \ref{fig:roc}. In Figure~\ref{fig:roc}, the true positive rate accounts for the fraction of the outliers that the binary classifier detected, while the false positive rate corresponds to the in-distribution samples that the binary classifier layer classifies as outliers. The ROC curve shows very good performance of the binary classifier: the true positive rate increases to about $\frac{3}{4}$ while the false positive rate stays low. The specific values used to create this ROC curve are shown in Table \ref{tab:roc}. The binary classifier has a true positive rate of 39.8\% with a 9.5\% standard deviation while preserving a zero false positive rate. The false positive rate is 0 until the certainty threshold of the binary classifier is raised. Pushing the certainty threshold to its limit (certainty greater than 0\%) produces a small value for the false positive rate (3\%) while the true positive rate improves from 40\% to 77\% on average. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{ROC} \caption{ROC curve for outlier detection by the binary classifier (see Table \ref{tab:roc}). The point $(1,1)$ and the diagonal, which would represent random guessing, are not included because they do not fit into the plot. } \label{fig:roc} \end{figure} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Ten-fold bias-variance analysis for outlier detection of the binary classifier. Note: The first column corresponds to the threshold applied to the output of the softmax function on the binary classifier. } \label{tab:roc} \centering \includegraphics[width=0.47\textwidth]{BinaryClassifROCDataTable.png} \end{table} \begin{figure}[!t] \centering \begin{subfigure}[t]{0.3\textwidth} \includegraphics[width=\textwidth]{excluded_AEW_big} \caption{Outliers detected by the autoencoder layer.} \label{fig:excludedAELbig} \end{subfigure} \vfill \begin{subfigure}[t]{0.3\textwidth} \includegraphics[width=\textwidth]{excluded_BIW_big} \caption{Outliers detected by the binary classifier layer.} \label{fig:excludedBCLbig} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \includegraphics[width=\textwidth]{not_excuded_big} \caption{Outliers not detected.} \label{fig:notexcudedbig} \end{subfigure} \caption{Examples of Fashion MNIST outliers detected by the autoencoder (Subfig. \ref{fig:excludedAELbig}), the binary classifier with a certainty threshold of 0.5 (Subfig. \ref{fig:excludedBCLbig}), and not detected (Subfig. \ref{fig:notexcudedbig}).} \label{fig:examples_pictures} \end{figure} In Figure \ref{fig:examples_pictures}, observe the differences between the true positive outliers filtered by the autoencoder and by the binary classifier. Also compare the detected outliers with the false negatives (unfiltered outliers). The outlier images not detected by the cascade watchdog, such as those seen in Figure \ref{fig:notexcudedbig}, resemble features from the in-distribution MNIST data: angles, ovals, and writing strokes (see Figure \ref{fig:indistribution}). Among the detected outliers in Figures \ref{fig:excludedAELbig} and \ref{fig:excludedBCLbig}, the differences with the MNIST data are more evident.
The outliers detected by the binary classifier (see Figure \ref{fig:excludedBCLbig}) are visually different from the MNIST data, but they can also be perceived as having some subtle features in common. On the other hand, among the outliers detected by the autoencoder (see Figure \ref{fig:excludedAELbig}) it is rare to observe features similar to the in-distribution MNIST dataset. \section{Conclusion} The cascade watchdog improves the trade-off between true and false negatives of the stand-alone autoencoder. Adding the binary classifier improves the true positive rate while reducing the false positive rate. The enhancement of the outlier detection is possible due to the production of an augmented dataset by means of adversarial training. The autoencoder, the GAN, and the binary classifier work in conjunction to produce successful results. The results of the experiments show that the binary classifier constitutes a significant contribution to out-of-distribution identification, encouraging the application of the binary classifier together with the autoencoder in future implementations. The high true positive rate obtained in the experiments, combined with the low false positive rate, confirms that the idea of splitting the out-of-distribution space into different subsets is a good approach for the detection of outliers. While the first layer of defense (the autoencoder) is capable of detecting most of the outliers, the second layer of defense specializes in the out-of-distribution subspace that is closer to the manifold of the distribution. Adversarial training is capable of generating an augmented dataset to train the binary classifier. The cascade watchdog multi-tiered adversarial guard passes the proof-of-concept stage successfully and has the potential to be applied more broadly to real-world classification problems that require the system to identify out-of-distribution inputs. \bibliographystyle{bst/sn-basic}
\section{Introduction} Let $F$ be a finite field, with $q$ elements, and $E $ be its unique quadratic extension. Put $G = \mathrm{GL} (2,F)$ and denote by $K$ the Coxeter torus of $G$, realized as the subgroup of all matrices $m_z$ $(z \in E^\times)$ of the maps $w \mapsto zw$ $(w \in E)$ with respect to a fixed $F$-basis of $E$. Recall that the finite homogeneous space $ {\cal H} = G/K$ may be looked upon as the finite analogue of (the double cover of) the classical Poincar\'e Upper Half Plane (see \cite{arca}). Harmonic analysis on $ \cal H$ amounts to decomposing the induced representation $\mathop{\mathrm{Ind}}\nolimits _K^G {\bf1} $ from the unit character {\bf1} of $K$ to $G$. We are interested here in the ``twisted'' version of this, i.e., the decomposition of the induced representation $\mathop{\mathrm{Ind}}\nolimits ^G_K \Phi$ from a (not necessarily trivial) character $\Phi$ of $K$ to $G$. The real analogue of this case has been considered in \cite{eja}. We prove that this representation is multiplicity-free, taking advantage of the fact that this is so for $\mathop{\mathrm{Ind}}\nolimits _K^G {\bf1}$ (see \cite{arca}) and reducing the computation of the multiplicities in $\mathop{\mathrm{Ind}}\nolimits ^G_K \Phi $ to the ones in $\mathop{\mathrm{Ind}}\nolimits _K^G {\bf1}$. We also give an explicit description of the corresponding (twisted) spherical functions. Finally, we give a version of the Heisenberg Uncertainty Principle. \section{The Multiplicity One Theorem for $\mathop{\mathrm{Ind}}\nolimits ^G_K \Phi$.} \subsection {The case $\Phi = \bf 1$} We consider first the special case $\Phi = \bf 1 $ in which the multiplicity one theorem follows from a geometric argument. In fact, we have $$ \mathop{\mathrm{Ind}}\nolimits ^G_K \bf 1 \simeq (L^2( \cal H), \tau), $$ where $L^2(\cal H) $ stands for the space of all complex functions on $\cal H$ endowed with the usual canonical scalar product, and $\tau$ denotes the natural representation of $G$ in $L^2(\cal H) $, defined by $(\tau_gf)(z) = f(g^{-1}.z) $, where $ z \mapsto g.z $ is the homographic action of $G$ on $\cal H$, given by $$ g.z = \frac {az + b} {cz + d } \hspace{3cm} \mbox{for } g = \left( \matrix{ a &b\cr c &d \cr}\right) \in G, \ z \in \cal H . $$ \begin{defi} For all $z,w \in \cal H $, we put $$ D(z,w) = \frac{N(z - w)}{N(z - \bar w)} $$ with the convention that $D(z,w) = \infty $ if $w = \bar z$. \end{defi} \begin{prop} $D$ is an orbit classifying invariant function for the homographic action of $G$ in $\cal H \times \cal H$. \end{prop} \begin{cor} The commuting algebra of $ (L^2( \cal H), \tau)$ is commutative. \end{cor} This follows from the fact that, the classifying invariant $D$ being symmetric, the $G$-orbits in $\cal H \times \cal H$ are also symmetric. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \subsection{The case of general $\Phi$} Let us denote by $\phi$ the restriction of $\Phi$ to $F^\times$. We will prove that every twisting of an irreducible representation $ \pi^d_{\theta} $ of $G$ (where the superscript $d$ denotes the dimension of $\pi$ and $\theta$ its character parameter) by the character $(\Phi +\Phi^q) $ is isomorphic to a representation of the form $\pi^{d'}_{\theta'}+\pi^{d''}_{\theta''}$, when restricted to $K$. In fact we will work with the characters $\chi^d_\theta$ of the irreducible representations $\pi ^d_{\theta} $ of $G$, for which we keep the notations of \cite{tesis} or \cite{anna}.
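The multiplicity computations below rest on Frobenius reciprocity in its character form, which we record here for reference: for an irreducible character $\chi$ of $G$ and a linear character $\Phi$ of $K$, the multiplicity of the corresponding representation in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi$ is $$ \langle \chi |_K , \Phi \rangle_K = \frac{1}{|K|} \sum_{k \in K} \chi(k) \bar \Phi (k). $$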
\begin{lem} On $K$ we have\\ $(\Phi +\Phi^q) \chi^{q}_{\alpha, \alpha} = \chi^{q-1}_{\Phi (\alpha \circ N)} + \chi^{q+1}_{\phi \alpha, \alpha}$,\\ $(\Phi +\Phi^q) \chi^{1}_{\alpha, \alpha} = \chi^{q+1}_{\phi \alpha, \alpha } - \chi^{q-1}_{\Phi (\alpha \circ N)}$,\\ $(\Phi +\Phi^q) \chi^{q+1}_{\alpha, \beta} = \chi^{q+1}_{\phi \alpha,\beta} + \chi^{q+1}_{\alpha, \phi \beta}$,\\ $(\Phi +\Phi^q) \chi^{q-1}_{\Lambda} = \chi^{q-1}_{\Phi \Lambda} +\chi^{q-1}_{\Phi^q \Lambda}.$ \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \end{lem} Now for a character $\chi$ of $G$, we have $ \chi \circ \mathop{\mathrm{Frob}}\nolimits = \chi$ on $K$, as follows from the character table. Therefore $ \sum_K \bar \Phi (k^q) \chi (k) = \sum_K \bar \Phi (k) \chi (k) $ because $\mathop{\mathrm{Frob}}\nolimits $ is an involutive automorphism. Hence, the multiplicity of $\pi$ in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi $ equals $ \frac{1}{2|K|} \sum_{k \in K} (\bar \Phi+\bar \Phi ^q) (k) \chi (k) $ and so it is just the average of the multiplicities in $\mathop{\mathrm{Ind}}\nolimits ^G_K {\bf 1} $ of two representations of $G$ (one of which may be virtual!) \begin{rem} Put $\pi^{q+1}_{\alpha, \alpha} = \pi^q_{\alpha} + \pi^1_{\alpha}$ and $ \pi^{q-1}_{ \alpha \circ N} = \pi^q_{\alpha} -\pi^1_{\alpha}$ for every $\alpha \in (F^\times)^{\wedge}$. It is easy to check that in the degenerate cases $ \alpha = \beta$ (for $\pi = \pi^{q+1}_{\alpha, \beta}$ ) and $\Lambda = \Lambda^q$ (for $\pi = \pi^{q-1}_\Lambda$) we find for the multiplicities $m_1 (\pi) $ \begin{equation} m_1(\pi^{q+1}_{\alpha, \alpha}) = 1 \hspace{4cm} (\alpha \in (F^\times)^{\wedge}) \label{eq:mdegsp} \end{equation} and \begin{equation} m_1(\pi^{q-1}_{ \alpha \circ N}) = - \delta_{\alpha, 1} \hspace{4cm}(\alpha \in (F^\times)^{\wedge} ) \label{eq:mdegsd} \end{equation} \end{rem} Using the fact that the multiplicities of the irreducible representations of $G$ in $\mathop{\mathrm{Ind}}\nolimits _K^G \bf 1$ are at most one, together with equations (\ref{eq:mdegsp}) and (\ref{eq:mdegsd}), we get that the multiplicities are also at most one in the more general case of $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi $. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \subsection{The multiplicities $ m_{\Theta,d}(\Phi)$ of $\pi^d_\Theta $ in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi $ for general $\Phi \in (E^{\times})^{\wedge} $} In Table~1 below, $ \pi^d_\Theta$ denotes an irreducible representation of $G$, of dimension $d$ and parameter $\Theta$. Then $d \in \{ 1, q, q+1, q-1\}$ and $\Theta$ is of the form $\{\alpha, \beta \} $, with $ \alpha, \beta \in (F^{\times})^{\wedge}$, or $\{\Lambda, \Lambda^q \} $ with $ \Lambda \in (E^{\times})^{\wedge}$. \begin{table}[hb] \caption{The multiplicities $ m_{\Theta,d}(\Phi)$} \begin{center} \begin{tabular}{|l|l|} \hline $\pi^d_\Theta $ & $ m_{ \Theta, d }(\Phi)$ \\ \hline $\pi^1_{\alpha, \alpha} $ & $ \delta_{\alpha^2, \phi} $ \\ \hline $\pi^q_{\alpha, \alpha} $ & $\delta_{\alpha^2,\phi} -\delta_{\alpha \circ N,\Phi} $ \\ \hline $\pi^{q+1}_{\alpha, \beta} $ & $\delta_{\alpha\beta,\phi} $ \\ \hline $\pi^{q - 1}_{ \Lambda, \Lambda^q} $ & $\delta_{\lambda,\phi} - \delta_{\Lambda,\Phi} - \delta_{\Lambda^q,\Phi} $ \\ \hline \end{tabular} \end{center} \end{table} \noindent {\sc NOTATIONS.} Here $ \alpha, \beta \in (F^{\times})^{\wedge}$ with $ \alpha \neq \beta$ and $ \Phi, \Lambda \in (E^{\times})^{\wedge}$ with $ \Lambda \neq \Lambda^q $, and $\lambda$ (resp.
$\phi$) denotes the restriction of the character $ \Lambda $ (resp. $ \Phi$) to $F^{\times}$. \section{The twisted spherical functions} \subsection{The averaging construction} In this section $G$ denotes an arbitrary finite group, $K$ a subgroup of $G$ and $\Phi$ a one-dimensional representation of $K$. We notice that the spherical functions for the representation $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi $ are obtained as weighted averages of the characters of $G$. More precisely: \begin{defi} Let $ L^1(G)$ be the group algebra of $G$, realized as the convolution algebra of all complex functions on $G$, and let $ L^1_\Phi(G,K)$ be the convolution algebra of all complex functions $f$ on $G$ such that $$ f(kgk') = \Phi(k)f(g)\Phi(k') $$ for all $g \in G, k,k' \in K $. For any $f \in L^1(G)$ put $$ (P_ \Phi f)(g) = \frac{1}{|K|} \sum_{k \in K} \Phi^{-1}(k)f(kg) $$ for all $g \in G$. \end{defi} Notice that the operator $P_\Phi$ is just convolution with the idempotent function $\varepsilon^\Phi_K \in L^1G $ which coincides with $|K|^{-1}\Phi$ on $K$ and vanishes elsewhere. Moreover $ L^1_\Phi(G,K) $ may be written as $ \varepsilon^\Phi_K \ast L^1G \ast \varepsilon^\Phi_K $ and its elements $f$ are characterized by the properties $$ \varepsilon^\Phi_K \ast f = f = f \ast \varepsilon^\Phi_K. $$ \begin{lem} Let $\chi$ be the character of an irreducible representation $\pi$ of $G$. Then $P_{\Phi} (\chi) (e) \neq 0$ iff $\pi$ appears in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi$. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \end{lem} \begin{lem} $P_{\Phi} (\chi) $ is a non-zero function iff it does not vanish at $g=e$. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \end{lem} \begin{prop} The mapping $P_ \Phi $ is an algebra epimorphism from the center $Z(L^1G) $ of the convolution algebra $L^1G $ onto the center $Z( L^1_\Phi(G,K))$ of the convolution algebra $ L^1_\Phi(G,K)$. \end{prop} {\em Proof:} We have $$ \begin{array}{lllll} P_ \Phi (f_1 \ast f_2) & = & \varepsilon^\Phi_K \ast (f_1 \ast f_2) & = &( f_1 \ast \varepsilon^\Phi_K )\ast f_2 \\ & = & (f_1 \ast \varepsilon^\Phi_K \ast \varepsilon^\Phi_K )\ast f_2 & = & ( \varepsilon^\Phi_K \ast f_1 )\ast ( \varepsilon^\Phi_K \ast f_2 ) \\ & = & P_ \Phi f_1 \ast P_ \Phi f_2, \end{array} $$ since $f_1$ is central and $\varepsilon^\Phi_K $ is idempotent. Moreover the dimension $d$ of the image of $ Z(L^1G)$ under $P_ \Phi $ is the number of irreducible characters $\chi$ of $G$ such that $P_\Phi(\chi) \neq 0$; but $P_\Phi (\chi) \neq 0$ iff $(P_\Phi \chi)(e) \neq 0$ and, the number $(P_\Phi (\chi))(e)$ being the multiplicity in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi$ of the representation $\pi$ of $G$ whose character is $\chi$, we see that $d$ is just the number of irreducible representations $\pi$ of $G$ appearing in $\mathop{\mathrm{Ind}}\nolimits _K^G \Phi$, i.e., the dimension of the center of $ L^1_\Phi(G,K)$. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \begin{cor} The nonzero functions that satisfy the functional equation $$ h(x)h(y) = \int_K \bar {\Phi }(k) h(xky) \,dk $$ linearly span the center of the algebra $L^1_{\Phi} (G,K) $. \end{cor} {\em Proof:} The functions $h$ that satisfy the above functional equation are exactly the complex multiples of the functions $P_{\Phi} (\chi) $; for a proof, see \cite{4}. Therefore the corollary follows.
\unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$ \subsection{Explicit formulae for the twisted spherical functions} Define $$ S^\Phi_\Lambda (a) = - (q^2 - 1)^{-1} \sum_{(z,w) \in \Gamma_a}\Phi^{-1}(z)\Lambda(w) $$ for $ \Lambda \in (E^\times)^{\wedge} $ and $a \in F^\times$, where $\Gamma_a $ denotes the set of all $(z,w) \in E ^{\times} \times E ^{\times} $ such that $ N(w) = aN(z) $ and $Tr(w) = 2(a + 1)^{-1} Tr(z) $. Then the spherical function $ \zeta ^ \Phi_ \Lambda $ of $G$ associated to the cuspidal character $\chi^{q-1}_\Lambda $ of $G$ is given on the representatives $d(a,1) = \left( \matrix{ a & 0 \cr 0 & 1 \cr} \right) $ \hspace{0.5cm} $(a \in F^\times) $ for the $K$-double cosets in $G$, by \[ \zeta ^ \Phi_ \Lambda (d(a,1)) = S^\Phi_\Lambda (a) + q(q + 1)^{-1} \delta_{a,1} \delta_{\lambda, \phi}, \] where $\lambda$ (resp. $\phi$) denotes the restriction of the character $\Lambda$ (resp. $ \Phi $) of $E^ \times$ to $ F^ \times$. Notice that $a = 1$ corresponds to the origin in $\cal H$ and $a = -1$ corresponds to the antipode of the origin in $\cal H $. It is not difficult to check that these formulae for the spherical functions are equivalent to the ones given in \cite{arca} for the case $\Phi = 1$. \subsection{A new form for the cuspidal spherical functions for $\Phi = {\bf1 }$ ({\bf char $F$} $ \neq 2) $} For $a \neq 1 $, one has the following new expression for the spherical functions estimated in \cite{katz}: $$\zeta^\Phi_\Lambda (a) = (q + 1)^{-1} \sum_{u \in U} \varepsilon (Tr(u) - (a + a^{-1}))( \varepsilon \omega)(u),$$ where $\varepsilon$ denotes the sign character of $ F^ \times$. \section{Heisenberg Uncertainty Principle} For this section, $G$ denotes an arbitrary finite group, $K$ any subgroup of $G$ and $\Phi $ any linear character of $K$. Let $ \hat G^{\Phi}$ be the set of all equivalence classes of irreducible representations of $G$ that contain the character $\Phi$ when restricted to $K$. For each equivalence class we choose, once and for all, a representative $(\pi,V_{\pi})$. As usual, for each $f$ in $L^1(G)$, the Fourier Transform ${\cal F} (f) $, evaluated at the class $(\pi,V_{\pi})$, is the linear operator on $V_{\pi}$ defined by $$ {\cal F} (f) (\pi):=\pi (f):= \int_G f(g) \pi (g^{-1}) dg:= \sum_{ g \in G} f(g) \pi(g^{-1}). $$ We recall the statement of the Plancherel theorem for a function $ f \,\in L^1_{\Phi}(G,K) $: $$ f(g) =\frac{1}{|G|} \sum_{ \pi \in \hat G^{\Phi}} d_{\pi} \mathop{\mathrm{trace}}\nolimits(\pi(f) \pi (g)) ; $$ here $ g\, \in \, G $ is arbitrary and $d_{\pi}:=\dim V_{\pi} $. For any complex valued function $f$ on $G$, let $ |\mathop{\mathrm{supp}}\nolimits (f)| $ denote the number of elements of the support of $f$, that is, the number of points of $G$ where $f$ takes nonzero values. \begin{prop} [\bf Heisenberg Uncertainty Principle] For any nonzero function $f \,\, \in \,\, L^1_{\Phi} (G,K) $ we have $$ |\mathop{\mathrm{supp}}\nolimits (f) |\,( \sum_{\pi \in \mathop{\mathrm{supp}}\nolimits ({\cal F} (f) )} d_{\pi}^{2})\, \geq\, |G|. $$ Here $\mathop{\mathrm{supp}}\nolimits (\cal F (f) ) $ is the subset of $\hat G^{\Phi} $ where $\cal F (f)$ does not vanish.
\end{prop} {\em Proof:} For any function $f$ on $G$ we recall that $$ \Vert f \Vert^2_{2} = \sum_{x \in G} \vert f(x) \vert ^2 ;\,\, \Vert f \Vert_{\infty} = \max_{ x \in G} \vert f(x) \vert ;\,\, \Vert f \Vert_2^2 \leq \Vert f \Vert^2_{\infty} \vert \mathop{\mathrm{supp}}\nolimits (f) \vert \eqno (*) $$ From now on, we fix a $G-$invariant inner product on $V_{\pi}$. Then $T^*$ denotes the adjoint of a linear operator $T$ on $V_{\pi}$ with respect to this inner product. Also $\Vert T \Vert $ denotes the Hilbert-Schmidt norm on $\mathop{\mathrm{End}}\nolimits V_{\pi} $, induced by the inner product $\mathop{\mathrm{trace}}\nolimits(TS^*)$ for $ S, T \in \mathop{\mathrm{End}}\nolimits V_{\pi} $. Since $ f \in L^1_{\Phi}(G,K) $, as we pointed out before, the Plancherel Theorem says that $ \mathop{\mathrm{supp}}\nolimits ({\cal F}(f)) $ is contained in $ \hat G^{\Phi} $ and that $$ f(x) =\frac{1}{|G|} \sum_{ \pi \in \hat G^{\Phi}} d_{\pi} \mathop{\mathrm{trace}}\nolimits(\pi(f) \pi (x)). $$ The Cauchy--Schwarz inequality applied to the Hilbert-Schmidt inner product says that the first of the two following inequalities holds, $$ \vert \mathop{\mathrm{trace}}\nolimits (\pi (f) \pi (x) ) \vert \leq \Vert \pi (f) \Vert \Vert \pi (x) \Vert \leq \sqrt{d_{\pi}}\, \Vert \pi (f) \Vert, $$ while the second follows from the fact that $ \Vert T \Vert =\sqrt{d_{\pi}} $ for a unitary operator $T$ on the $d_{\pi}$-dimensional space $V_{\pi}$. Putting together the last two statements we get $$ \Vert f \Vert_{\infty} \leq \frac{1}{|G|} \sum_{\pi \in \hat G^{\Phi}} d_{\pi}^{3/2} \Vert {\cal F} (f)(\pi) \Vert. $$ The classical Cauchy--Schwarz inequality and the fact that $ d_{\pi}^{3/2} = d_{\pi}^{\frac 12} \cdot d_{\pi}$ imply that $$ \Vert f \Vert^2_{\infty} \leq \frac{1}{\vert G \vert^2} \sum_{\pi \in \hat G^{\Phi} } d_{\pi} \Vert {\cal F} (f)(\pi) \Vert^2 \,\, \sum_{\pi \in \mathop{\mathrm{supp}}\nolimits({\cal F}(f))} d_{\pi}^{2}. $$ Now the $L^2-$version of the Plancherel Theorem says that $$ \Vert f\Vert^2_2 =\frac{1}{\vert G \vert} \sum_{\pi \in \hat G^{\Phi}} d_{\pi} \Vert {\cal F} (f)(\pi) \Vert^2. $$ Therefore, $$ \Vert f\Vert_{\infty}^2 \leq \frac{1}{\vert G \vert} \Vert f \Vert^2_2 \, \sum_{\pi \in \mathop{\mathrm{supp}}\nolimits({\cal F}(f))} d_{\pi}^{2} . $$ Since $f$ is nonzero, we apply (*) to the above inequality and get the desired result. \unskip\nobreak\hfil\penalty50\hskip.5em\hbox{}\nobreak\hfill$\Box$
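As a quick sanity check of the exponent in the Proposition, take $K = \{e\}$ and $\Phi = {\bf 1}$, so that $L^1_{\Phi}(G,K) = L^1(G)$, and let $f$ be the function supported at the identity with $f(e)=1$. Then $\vert \mathop{\mathrm{supp}}\nolimits (f)\vert = 1$ and ${\cal F}(f)(\pi) = \pi(e) = \mathrm{Id}_{V_{\pi}} \neq 0$ for every $\pi \in \hat G$, so that $$ \vert \mathop{\mathrm{supp}}\nolimits (f) \vert \, \sum_{\pi \in \mathop{\mathrm{supp}}\nolimits ({\cal F}(f))} d_{\pi}^{2} = \sum_{\pi \in \hat G} d_{\pi}^{2} = \vert G \vert, $$ and the bound is attained with equality.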
\section{Introduction} The idea that gravity may be an emergent phenomenon \footnote{By emergence we understand that in the systems in question \cite{sil} ``short-distance physics is radically different from long-distance physics''.} described by an effective low-energy theory which is a consequence of averaging over (yet unknown) microphysical degrees of freedom dates back at least to the proposal of Sakharov \cite{sakha} in 1968. Since then, it has been shown that it is not difficult to get a low-energy effective metric (although it is considerably harder to get dynamical equations controlling it, see for instance \cite{matt0}). In fact, the effective metric arises in systems of very different types (see \cite{mattlr} for a review). The basic idea behind this generality has been exposed in \cite{matt0}, where it was shown that given a classical single-field theory described by a Lagrangian that is an arbitrary function of the field and its derivatives, the fluctuations of the linearized theory around a non-trivial background propagate in a curved spacetime. The geometry of this spacetime is encoded by the effective metric, which is unique in the case of a single field, and depends on the background field configuration. This feature of nonlinear theories led to the construction of analog models of gravity, which imitate the kinematical properties of gravitational fields (see \cite{mattlr} for a complete list). In this article we shall show that the relation between the effective metric in nonlinear electromagnetic theories and the corresponding energy-momentum tensor can be used to classify the possible metrics in terms of the Segr\`e types of $T_{\mu\nu}$ (see Sects. \ref{rel}, \ref{algeb} and \ref{eig}). We will see that for a given type of metric, the form of the light cone is uniquely described by a scalar function, which depends on the Lagrangian of the theory and the background field configuration (Sect.\ref{seco}). Following the variation of the light cone along a given path, we can see how the perturbations are diverted from the background geodesic motion due to the nonlinearities of the interaction. Some examples of this application are given in Sect.\ref{simple}. Finally, we will discuss our results and consider future work in Sect.\ref{conc}. \section{The effective geometry for nonlinear electromagnetism} \label{rel} Nonlinear electrodynamics is relevant in several areas of physics. In quantum field theory, the polarization of the vacuum leads naturally to nonlinear corrections to Maxwell's electrodynamics, which are described by Euler-Heisenberg's Lagrangian \cite{dunne}. In material media, such as some dielectrics and crystals, the complex interaction between the molecules and external electromagnetic fields can be described by an effective nonlinear theory, which is typically observed at very high light intensities such as those provided by pulsed lasers \cite{shen}. The result that the high-energy perturbations of a nonlinear electromagnetic theory propagate along geodesics that are not null in the background geometry but in an effective spacetime has been obtained several times in the literature \cite{nlemef}. In the case of ${\cal L}={\cal L}(F)$, where $F\equiv F_{\mu\nu}F^{\mu\nu}$, the equation of motion is given by \begin{equation} ({\cal L}_F F^{\mu\nu})_{,\nu}=0.
\end{equation} By perturbing this equation around a fixed background solution and taking the eikonal limit (see for instance \cite{matt1}), we obtain for the effective metric \begin{equation} g^{\mu\nu} = {\cal L}_{F0} \eta^{\mu\nu}-4{\cal L}_{FF0}F^\mu_{\;\alpha 0} F^{\alpha\nu}_0, \end{equation} where the subindex 0 means that the quantity is evaluated using the background solution. Both this and the inverse metric can be expressed in terms of the energy-momentum tensor, given by \begin{equation} T^{\mu}_{\phantom a\nu}=-4{\cal L}_FF^{\mu}_{\phantom a\alpha}F^\alpha_{\phantom a\nu}-{\cal L} \delta^{\mu}_{\phantom a\nu}. \end{equation} In particular, the inverse effective metric takes the form \begin{equation} g_{\mu\nu}=a_0\eta_{\mu\nu}+b_0T_{\mu\nu 0}, \label{emem} \end{equation} where $a_0$ and $b_0$ are given by \begin{equation} a_0=-b_0\left[ \frac{{\cal L}_F^{2}}{{\cal L}_{FF}}+{\cal L}+\frac{1}{2}T\right]_0, \label{azero} \end{equation} \begin{equation} b_0=\frac{16{\cal L}_{FF0}}{{\cal L}_{F0}}\left[ \kappa^{2}{\cal L}_{FF}^{2}-16({\cal L}_F+F{\cal L}_{FF})^{2}\right]^{-1}_0, \label{bzero} \end{equation} with $\kappa = \sqrt{F^2+G^2}$ and $G\equiv F_{\mu \nu}^{*}F^{\mu\nu}$. We shall discuss next how to use the dependence of the effective metric on $T_{\mu\nu}$ to classify the different possibilities for $g_{\mu\nu}$. \section{Algebraic properties of $T_{\mu\nu}$} \label{algeb} Due to the relation between the effective geometry and the energy-momentum tensor, the algebraic properties of $T_{\mu\nu}$ determine the propagation of the high-energy perturbations of a given nonlinear theory. These properties can be exhibited through a typical eigenvalue problem. Hence it is useful to review some basic results concerning this problem in the context of relativity. At a given point $p$ of a manifold $\mathcal{M}$, the object $T^{\alpha}_{\phantom a\beta}$ can be thought of as a linear map of the tangent space $T_p$ onto itself. The principal directions of this map and their corresponding eigenvalues are determined by \footnote{To be precise, the eigenvalue problem for the energy momentum tensor can be viewed as an immediate consequence of the extremization of a scalar function $\chi={T_{\mu\nu}\xi^\mu\xi^\nu}/{g_{\alpha\beta}\xi^\alpha\xi^\beta}$ with respect to $\xi^\alpha$ (in the same way as Petrov's classification of the Weyl tensor is based on the extremization of the sectional curvature function). In fact, by differentiating $\chi$ with respect to $\xi^\alpha$ and imposing the extremum condition, the eigenvalue equation (5) results.} \begin{equation} T^{\alpha}_{\phantom a\beta}\xi^{\beta}=\lambda \xi^{\alpha}, \end{equation} where $\lambda$ is a scalar and $\xi^{\beta}$ is an eigenvector. A fourth order characteristic polynomial for $\lambda$ is obtained by the condition $p(\lambda)={\rm det}(\lambda \textbf{1}-\textbf{T})=0$ \footnote{We sometimes use the symbols $\textbf{T}$ and $\textbf{1}$ as matricial versions of the mixed tensors $T^{\alpha}_{\:\beta}$ and $\delta^{\alpha}_{\:\beta}$. Successive contractions of $T^{\alpha}_{\:\beta}$ will be denoted as powers of $\textbf{T}$, i.e: $\textbf{T}^{2}\doteq T^{\mu}_{\:\alpha}T^{\alpha}_{\:\nu}$, $\textbf{T}^{3}\doteq T^{\mu}_{\:\alpha}T^{\alpha}_{\:\beta}T^{\beta}_{\:\nu}$ and so on.}. Although the algebraic properties of this equation are well known, it is important to note that with a positive definite metric, a real symmetric matrix can always be diagonalized by a real orthogonal transformation.
However, the hyperbolic character of a Lorentzian metric leads to a more complicated algebraic situation. In particular, the eigenvectors $\xi^{\beta}$ do not necessarily constitute a linearly independent set, implying that $\textbf{T}$ may not have a diagonal representation. Notwithstanding this undesirable property, it is always possible to reduce the matrix $\textbf{T}$ to a typical canonical form, as will be briefly discussed in Sec.\ref{revis}. \subsection{The Segr\`e classification revisited} \label{revis} The Segr\`e classification is a local, invariant and algebraic classification of arbitrary second rank symmetric tensors (see for instance \cite{Hall2}). Tensors of this type play a very important role in several areas of physics, and the coordinate-independent method provided by Segr\`e has been discussed by many authors in different contexts \cite{Bona}. It is possible to show that, depending on the properties of the characteristic polynomial $p(\lambda)$ given by \begin{equation} p(\lambda) = \lambda^{4} -a_3\lambda^{3} +a_2\lambda^{2}-a_1\lambda+a_0=0, \label{char} \end{equation} where the coefficients $a_n$ $(n=0,\dots,3)$ are simple functions of the scalar invariants built with powers of $\textbf{T}$, and on the nature of the eigenvectors, there exist at most four different classes (known as Segr\`e types) of symmetric tensors \cite{Hall2}. Those belonging to Segr\`e types I, II and III have only real eigenvalues and admit, respectively, four, three or two linearly independent eigenvectors. Type IV describes tensors associated to complex conjugate eigenvalues \cite{Bona}. Each type is associated to a canonical form for the corresponding tensors. By construction, type I is the only Segr\`e type in which the given linear map admits a diagonal representation, and it can be shown that this is the only class that admits a timelike eigenvector \cite{Bona, JSantos1}. We list next two important properties that will be useful below \cite{Hall1}:\\[0.3cm] \noindent (i) There always exists a two-dimensional subspace $S_p$ of $T_p$ which is invariant under the action of $\textbf{T}$.\\ \noindent (ii) If $S_p$ is an invariant 2-space under the action of $\textbf{T}$, then so is the 2-space orthogonal to $S_p$.\\ The subspace $S_p$ will be called timelike, null or spacelike if it contains exactly two, one or no null vectors. We shall see next how the Segr\`e classification can be useful in the determination of the possible types of effective metrics, in the particular example of nonlinear electromagnetism. \section{Nonlinear electrodynamics and the eigenvalue problem of the energy-momentum tensor} \label{eig} \subsection{Linear case} In order to study the properties of the energy-momentum tensor in nonlinear electromagnetism, we shall first review those of the linear theory, which furnishes a simple realization of the Segr\`e classification \cite{Synge}. In this case, the energy-momentum tensor is given by \begin{equation} \tau^{\mu}_{\phantom a\nu}=F^{\mu}_{\phantom a\alpha}F^{\alpha}_{\phantom a\nu}+\frac{1}{4}F \delta^{\mu}_{\phantom a\nu}. \end{equation} Beyond its obvious symmetric nature, $\mbox{\boldmath$\tau$}$ satisfies additional algebraic properties, such as tr$(\mbox{\boldmath$\tau$})=0$, and ${\rm tr}({\mbox{\boldmath$\tau$}}^{2}) = \tau^{\mu}_{\phantom a\nu}\tau^{\nu}_{\phantom a\mu}=\frac{1}{4}\kappa^{2}$, where $\kappa\equiv (F^{2}+G^{2})^{1/2}$.
It is easily shown that the characteristic equation (\ref{char}) for Maxwell's energy-momentum tensor becomes \begin{equation} \left(\lambda^{2}-\frac{\kappa^{2}}{16}\right)^{2}=0. \end{equation} From this relation, two important results follow:\\ (i) Since the characteristic equation factors into two identical second order polynomials, the four eigenvalues of the electromagnetic energy-momentum tensor in linear electromagnetism are real and equal in pairs.\\ (ii) The eigenvalues at each point of spacetime are entirely described by a given function of the field invariants $F$ and $G$ at that point: \begin{equation} \lambda_\pm=\pm\frac{1}{4}\kappa. \end{equation} We shall consider next the eigenvector structure, which can be determined by the analysis of the only two possible cases for $\kappa$ ($\kappa\neq 0$ and $\kappa =0$) \cite{Synge, Kramer}. Note that because there exist at most two eigenvalues, the structure of a given eigenspace is degenerate, and there is an infinite number of $\xi^{\alpha}$ associated to a given $\lambda$. \subsubsection{Non-null field ($\kappa\neq0)$: two different eigenvalues.} In this case, there exist two orthogonal invariant eigenspaces $S_p$, each of them in correspondence with a given eigenvalue $\lambda_{\pm}$. It also follows that the timelike two-flat (which admits exactly two null eigenvectors) is associated to the positive eigenvalue $\lambda_{+}$ \cite{Synge}. Because the other two-flat admits two orthogonal spacelike vectors, it is possible to diagonalize $\mbox{\boldmath$\tau$}$, and it follows that non-null Maxwell fields are of Segr\`e type I. \subsubsection{Null field ($\kappa =0$): one single null eigenvalue.} It is possible to show that in this case the eigenvectors of $\mbox{\boldmath$\tau$}$ do not constitute a linearly independent set. Nevertheless, there exists a three-flat in $T_p$ that admits a null direction and two independent spacelike vectors. This three-flat is tangent to the light cone at $p$. According to the Segr\`e classification, this case belongs to type II. Hence $\mbox{\boldmath$\tau$}$ does not admit a diagonal representation but can be reduced to a canonical form (which will be given in Sec.\ref{st2}) \cite{corm}. \subsection{Nonlinear electromagnetism} We now turn to the algebraic analysis of the energy-momentum tensor constructed with nonlinear and real Lagrangians for the electromagnetic field. As will be shown below, such an analysis will be very useful in the description of the light cone structure of an arbitrary nonlinear electromagnetic theory. First we will consider Lagrangians that are arbitrary functions of the electromagnetic invariants $F$ and $G$. Following the standard definition of the energy-momentum tensor \cite{Landau}, we get \begin{equation} T^{\mu}_{\phantom a\nu}=-4{\cal L}_FF^{\mu}_{\phantom a\alpha}F^\alpha_{\phantom a\nu}-({\cal L}-G{\cal L}_G)\delta^{\mu}_{\phantom a\nu}, \end{equation} where ${\cal L}_A\equiv \partial {\cal L}/\partial A$, $A=F,G$. The roots of the characteristic polynomial $p(\lambda)$ (Eqn.(\ref{char})) would give the spectrum of eigenvalues for a given configuration of fields in the context of a given nonlinear theory. However, a simple inspection of the symmetries of $\textbf{T}$ reveals a much more complicated structure than that of Maxwell's.
First of all, the trace of $T^{\mu}_{\phantom a\nu}$ does not vanish, and is given by tr$(\textbf{T})=-4({\cal L}-F{\cal L}_F-G{\cal L}_G)$. Furthermore, a calculation of the powers of \textbf{T} reveals that, for a given integer $n$, \begin{equation} \textbf{T}^n=\alpha\textbf{F}^{2}+\beta\textbf{1}, \end{equation} where $\alpha$ and $\beta$ are functions of the invariants $F$ and $G$, and of ${\cal L}$ and its derivatives. Thus, the nonlinearity of the theory deforms the Rainich algebra \cite{Rainich} $\textbf{\mbox{\boldmath$\tau$}}^{2}\sim\textbf{1}$ valid in the linear theory, making the calculation of the coefficients of the characteristic polynomial hard. Nevertheless, it is possible to bypass this difficulty by decomposing $\textbf{T}$ in terms of its traceless part $\textbf{N}$ and its trace, that is $$ N^{\alpha}_{\;\;\beta}=T^{\alpha}_{\;\; \beta}-\frac{1}{4}T\delta^{\alpha}_{\;\; \beta}. $$ Then, if $\mathbf{\xi}$ is an eigenvector of $\textbf T$ with eigenvalue $\lambda$, it will also be an eigenvector of $\textbf N$ with eigenvalue $\lambda-T/4$. Hence we are led to study the properties of $N^{\alpha}_{\phantom a\beta}$, which for an arbitrary Lagrangian is given by \begin{equation} N^{\alpha}_{\phantom a\beta}=-4{\cal L}_F\left( F^{\alpha}_{\phantom a\lambda}F^{\lambda}_{\phantom a\beta}+\frac{1}{4} F\delta^{\alpha}_{\phantom a\beta}\right). \end{equation} In other words, the traceless part of the nonlinear energy-momentum tensor is just a conformal transformation of Maxwell's $\mbox{\boldmath$\tau$}$, and the eigenvalues of $\textbf{N}$ are just those of $\mbox{\boldmath$\tau$}$ multiplied by $-4{\cal L}_F$. This fact permits us to obtain the eigenvalues of \textbf{T} from the expression \begin{equation} \lambda_{\pm}=F{\cal L}_F+G{\cal L}_G-{\cal L}\mp {\cal L}_F \kappa. \end{equation} Thus, the eigenvalues are given in terms of the field invariants and specific functions of them. Notice that the first three terms are just the trace of the energy-momentum tensor divided by four. Furthermore, the invariant subspaces of Maxwell's theory determine entirely the invariant subspaces of a nonlinear theory. We conclude that even in a nonlinear theory with ${\cal L} = {\cal L} (F,G)$, the algebraic properties of the energy-momentum tensor are such that the only possible Segr\`e types are I and II. Furthermore, type II is only possible if $\kappa =0$. In the next section we show how the Segr\`e types determine the light cone structure in the particular example of nonlinear electromagnetism. \section{Second order surfaces and a classification of nonlinear regimes} \label{seco} The light cone structure of a nonlinear theory is governed by the effective metric $g_{\mu\nu}$ through the condition \begin{equation} g_{\mu\nu}k^\mu k^\nu=0, \label{lc} \end{equation} where the $k^\mu$ are null vectors in the effective geometry but are not null, in general, in the background geometry (which may be flat or curved). We shall study next the relation between the newly-defined light cones and the background light cones (which for definiteness we assume to be those of Minkowski spacetime). It will be shown that there exist many possibilities, which are associated with the algebraic nature of the energy-momentum tensor at a given spacetime point and with the properties of the Lagrangian. Notice that although $g_{\mu\nu}$ can be taken as a new Riemannian metric, it is also licit to think of it as a tensor field defined in Minkowski spacetime.
Because of the expression $$ g_{\mu\nu}=a_0\gamma_{\mu\nu}+b_0T_{\mu\nu 0}, $$ with $a_0$ and $b_0$ given by Eqns.(\ref{azero}) and (\ref{bzero}), the principal directions (eigenvectors) of $g_{\mu\nu}$ and its invariant subspaces are entirely determined by the eigenvectors of $T^{\mu}_{\phantom a\nu}$. The analysis will be divided in two different classes, following the two different algebraic types of the energy-momentum tensor discussed above. \subsubsection*{Class 1: Segr\`e type I background} If, at a given point $p$, the energy-momentum tensor is of Segr\`e type I, the following lemma ensues:\\[0.3cm] \noindent \textit{Lemma:} There exists a non-null intersection between Maxwell's light cone and the nonlinear light cone, which is given by the null eigenvectors of $T^{\mu}_{\phantom a\nu}$. \noindent Proof: As stated before, in Segr\`e type I there are two eigenvectors of $\mathbf T$ (those with eigenvalue $\lambda_+$) that are null in the background metric. Denoting them by $\xi^\mu_{(i)}$, $i=1,2$, we have $$ g_{\mu\nu}\xi^\mu_{(i)}\xi^\nu_{(i)}=a_0\gamma_{\mu\nu} \xi^\mu_{(i)}\xi^\nu_{(i)}+ b_0\xi^\mu_{(i)}T_{\mu\nu}\xi^\nu_{(i)}= (a_0+\lambda_+b_0)\gamma_{\mu\nu} \xi^\mu_{(i)}\xi^\nu_{(i)} =0.$$ Since we have $g_{\mu\nu}\xi_{(i)}^{\mu}\xi_{(i)}^{\nu}=0$ and $\eta_{\mu\nu}\xi_{(i)}^{\mu}\xi_{(i)}^{\nu}=0$, the $\xi_{(i)}^{\mu}$ are contained in both light cones, proving that they are tangent along these vectors. By choosing a specific orthogonal basis at $p$, the energy-momentum tensor admits a diagonal representation, which allows the light cone condition to be written as \begin{equation} f_{+}[(k^0)^{2}-(k^1)^{2}]-f_{-}[(k^2)^{2}+(k^3)^{2}]=0, \label{cono1} \end{equation} where $f_\pm=a_0+b_0\lambda_\pm$ are the eigenvalues of the effective metric tensor given in Eqn.(\ref{emem}). At a given point $p$, this is the equation of a three-dimensional surface representing locally the light cone of a nonlinear theory in the tangent space $T_p$. The coefficients $f_{+}$ and $f_{-}$ depend explicitly on the Lagrangian and its derivatives, and on the background field, but in a sense, for a given theory, the three-surface will depend only on the invariant $F$. In the context of Maxwell's linear theory this equation reduces to that of the light cones of Minkowski spacetime. To develop a geometrical understanding of the causal structure, we will set $k^0=1$ in Eqn.(\ref{cono1}). From a physical point of view, we can interpret the resulting bidimensional surface as the position of the wavefront after a given infinitesimal lapse of time. In the case of linear electromagnetism, this surface is a sphere (called Maxwell's sphere from now on). Because there exist two common null vectors to both surfaces, the two-dimensional surface that follows from Eqn.(\ref{cono1}) (which we will call the photon surface) intersects Maxwell's sphere precisely at two points ($k^1=\pm1$ in this frame). Defining the function $\Upsilon\equiv f_{-}/f_{+}$, we have \begin{equation} (k^1)^{2}+\Upsilon [(k^2)^{2}+(k^3)^{2}]=1. \end{equation} From the theory of quadrics we have the following possibilities for the resulting photon surface. If $\Upsilon>0$, the surface is an ellipsoid of revolution around the $k^1$ axis. The ellipsoid touches Maxwell's sphere from inside if $\Upsilon>1$ (with the major axis along $k^1$) and from outside if $0<\Upsilon<1$ (with minor axis along $k^1$, see Fig.\ref{s1}).
The difference between the two light cones shows that, because of the interaction with the background field, the nonlinear photons (NLP) propagate with different velocities in different directions. In the first case, all the NLP (except those along $k^{1}$) propagate with velocities that are less than $c$, while in the second case all the NLP (leaving aside those along $k^{1}$) propagate with velocity greater than $c$. The limiting case $\Upsilon=1$ coincides with Maxwell's sphere, in such a way that any theory with a background such that $\Upsilon =1$ at a given point will locally reproduce every property of Maxwell's theory from the point of view of photon propagation. Notice that there exist two more exotic allowed situations. When the condition $\Upsilon=0$ is satisfied, the photon surface is determined by planes satisfying the condition $k^1=\pm1$. These planes intersect Maxwell's sphere in two points, in which case the velocity of the NLP coincides with $c$. Thus in this case the nonlinear theory and the background are such that light propagation is forbidden in spacelike directions which, in the chosen frame, are orthogonal to the $k^{1}$ axis. A second exotic possibility is $\Upsilon<0$, since the three-surface will be a two-sheet hyperboloid touching the sphere from outside. In this interesting case the spacelike directions in which the NLP cannot propagate are many more than in the $\Upsilon=0$ case. The following table displays the catalogue of possible light cone structures for a Segr\`e type I background energy-momentum tensor, along with a notation based on the algebraic type and resulting photon surface. \begin{table}[ht!] \centering \begin{tabular}{ccc} \hline\hline $\Upsilon$ & $\mathrm{photon}\: \mathrm{surface}$ & ${\rm symbol}$ \\ \hline $\Upsilon>1$ & ${\rm internal}\: {\rm ellipsoid}$ &$ Ie_{-}$ \\ $\Upsilon=1$ & ${\rm sphere}$ & $Is$\\ $0<\Upsilon<1$ & ${\rm external}\:{\rm ellipsoid}$ & $Ie_{+}$ \\ $\Upsilon=0$ & ${\rm planes}$ & $Ip$ \\ $\Upsilon<0$ & ${\rm two-sheet}\:{\rm hyperboloid}$ & $Ih_{2}$\\ \hline \end{tabular} \caption{The table shows the different photon surfaces for a Segr\`e type I energy-momentum tensor.} \label{t1} \end{table} A straightforward calculation shows that for ${\cal L} = {\cal L} (F)$, \begin{equation} \Upsilon=\frac{{\cal L}_F+{\cal L}_{FF}(F-\kappa)}{{\cal L}_F+{\cal L}_{FF}(F+\kappa)}. \label{upsi} \end{equation} Thus, the local causal structure in any nonlinear theory of electrodynamics is determined by computing a single function of the field invariant. Figures \ref{s1}, \ref{s1h2}, and \ref{s1p} show some properties of the resulting surfaces and their relation with the Minkowski light cone. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{s1} \caption{Minkowski and nonlinear light cones for Segr\`e type I and $\Upsilon >0$.} \label{s1} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{s1h2} \caption{Minkowski and nonlinear light cones for Segr\`e type I and $\Upsilon < 0$.} \label{s1h2} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{s1p} \caption{Minkowski and nonlinear light cones for Segr\`e type I and $\Upsilon \geq 0$.} \label{s1p} \end{center} \end{figure} \subsubsection*{Class 2: Segr\`e type II background} \label{st2} In this case the energy-momentum tensor cannot be diagonalized and, because $F=0$, there exists only one eigenvalue $\lambda=-{\cal L}$.
\subsubsection*{Class 2: Segr\`e type II background} \label{st2} In this case the energy-momentum tensor cannot be diagonalized and, because $F=0$, there exists only one eigenvalue, $\lambda=-{\cal L}$. Nevertheless, it is always possible to reduce $\textbf{T}$ to the following matrix representation by choosing a specific orthonormal frame: \begin{equation} T^{\mu}_{\phantom a\nu}=\left( \begin{array}{llll} \lambda-\mu & -\mu & 0 & 0 \\ \mu & \lambda+\mu & 0 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda \end{array}\right) \end{equation} where $\mu\equiv 2{\cal L}_F(E^2+H^2)$. Following the procedure sketched in the last subsection, \textit{i.e.} imposing the condition $k^0=1$ and introducing $f_0\equiv a_0+b_0\lambda$, Eqn.(\ref{lc}) becomes \begin{equation} (\Phi+1)(k^1)^2+(k^2)^2+(k^3)^2+2\Phi k^1+(\Phi-1)=0 \end{equation} where $\Phi\equiv b_0\mu/f_0$, all quantities being evaluated on the background. Since there exists only one null eigenvector for a Segr\`e type II energy-momentum tensor, it is immediate that the corresponding photon surface intersects Maxwell's sphere at only one point ($k^1=-1$ in the chosen frame). Also, the non-diagonal terms in $\textbf{T}$ imply that the resulting surface is not centered at the origin. As in Segr\`e type I, we have several possibilities. If $-1<\Phi<\infty$, the three-surface is an ellipsoid of revolution around $k^{1}$. When $0<\Phi<\infty$, the surface is entirely contained inside Maxwell's sphere, while in the case $-1<\Phi<0$ Maxwell's sphere is inside the ellipsoid, in such a way that the limiting case $\Phi=0$ is the sphere itself (see Fig.\ref{s2}). There are also some exotic situations. If $\Phi=-1$, the NLP define a paraboloid of revolution around $k^{1}$, which forbids propagation in some directions. Finally, the condition $\Phi<-1$ determines a strange situation in which a one-sheet hyperboloid is tangent to the sphere. The different possibilities are given in the following table, where for ${\cal L} = {\cal L} (F)$, $\Phi$ is given by \begin{equation} \Phi=-\mu \left.\frac{{\cal L}_{FF}}{{\cal L}_{F}}\right|_0. \end{equation} \begin{table}[ht!] \centering \begin{tabular}{ccc} \hline\hline $\Phi$ & $\mathrm{photon}\:\mathrm{surface}$ & $\mathrm{symbol} $\\ \hline $0<\Phi<\infty$ & $\mathrm{internal}\;\mathrm{ellipsoid}$ & $IIe_{-} $\\ $\Phi=0$ & $\mathrm{sphere}$ & $IIs$ \\ $-1<\Phi<0$ &$ \mathrm{external}\:\mathrm{ellipsoid}$ &$ IIe_{+} $\\ $\Phi=-1$ & $\mathrm{paraboloid}$ & $IIp$ \\ $\Phi<-1 $& $\mathrm{one-sheet}\:\mathrm{hyperboloid}$ & $IIh_{1}$\\ \hline \end{tabular} \caption{The table shows the different photon surfaces for a Segr\`e type II energy-momentum tensor.} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=0.65\textwidth]{s2} \caption{Minkowski and nonlinear light cones for a Segr\`e type II energy-momentum tensor.} \label{s2} \end{center} \end{figure}
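These statements can be verified by completing the square in the quadric above (a one-line computation involving nothing beyond the definitions already given): for $\Phi\neq -1$,
$$
(\Phi+1)\left(k^1+\frac{\Phi}{\Phi+1}\right)^{2}+(k^2)^{2}+(k^3)^{2}=\frac{1}{\Phi+1},
$$
so that for $\Phi>-1$ the photon surface is an ellipsoid of revolution centered at $k^1=-\Phi/(\Phi+1)$, while for $\Phi<-1$ it is a one-sheet hyperboloid. In every case the point $k^1=-1$, $k^2=k^3=0$ satisfies the equation, exhibiting explicitly the single point of tangency with Maxwell's sphere.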
\section{Simple realizations of the classification} \label{simple} In this section we will discuss the relation between the orientation of the light cones in a nonlinear ${\cal L} (F)$ theory and a given electromagnetic field. For simplicity, we will concentrate only on Segr\`e type I energy-momentum tensors. In terms of the electromagnetic vectors $E^{\:\mu}$, $H^\mu$ and a given timelike congruence $V^{\mu}$, the electromagnetic field tensor admits the following decomposition: $$ F^{\mu \nu}\equiv E^{\left[\mu\right.}V^{\left.\nu\right]}+\frac{1}{2}\eta^{\mu \nu}_{\phantom a \phantom a \alpha \beta}H^{\alpha}V^{\beta} $$ where the electric and magnetic vectors $E^\mu(x)$ and $H^\mu(x)$ are defined respectively as $E^\mu = F^{\mu}_{\phantom a \nu}V^{\nu}$ and $H^{\mu}\equiv \eta^{\mu}_{\phantom a \varepsilon \alpha \beta}V^{\varepsilon}F^{\alpha \beta}$, and $F=2(H^{2}-E^{2})$, $G=-4E\cdot H$. The energy-momentum tensor for a theory described by ${\cal L} = {\cal L} (F)$ can be similarly decomposed as \begin{eqnarray} T_{\mu\nu} & = & -4{\cal L}_F(E^{2}V_{\mu}V_{\nu}-E_{\mu}E_{\nu}+2q_{(\mu}V_{\nu)})\\ & &-4{\cal L}_F(H^{2}V_{\mu}V_{\nu}-H_{\mu}H_{\nu}-H^{2}g_{\mu\nu})-{\cal L}g_{\mu\nu}, \end{eqnarray} where $E^{2}=-E_{\mu}E^{\mu}$, $H^{2}=-H_{\mu}H^{\mu}$ and $q_{\lambda}=\frac{1}{2}\eta_{\lambda}^{\phantom a\mu\rho\sigma}E_{\mu}V_{\rho}H_{\sigma}$ is the Poynting vector. For Segr\`e type I tensors, it is always possible to find an observer for whom the electric and magnetic fields are parallel, \textit{i.e.} $H_{\mu}=\xi E_{\mu}$, and, consequently, the Poynting vector vanishes \cite{Landau}. It is also possible to show that $V^{\mu}$ and $E^{\mu}$ are eigenvectors of $T^{\alpha}_{\phantom a\beta}$ with eigenvalue $\lambda_{+}$. Thus, the timelike invariant subspace $S_p$ is spanned by $V^{\mu}$ and $E^{\mu}$. The spacelike subspace is determined by means of two orthonormal spacelike vectors $n^{\mu}$ and $m^{\mu}$ satisfying $n^{\mu}E_{\mu}=m^{\mu}E_{\mu}=0$. It is immediate to see that these vectors are eigenvectors of the energy-momentum tensor with eigenvalue $\lambda_{-}$. In the frame where $H_{\mu}=\xi E_{\mu}$, the eigenvalues take the form \begin{eqnarray} &&\lambda_{+}=-4{\cal L}_{F}E^{2}-{\cal L},\\ &&\lambda_{-}=4 {\cal L}_{F}\xi^{2}E^{2}-{\cal L}. \end{eqnarray} In particular, if the observer is such that the magnetic part $H^{\mu}$ vanishes ($\xi =0$), the direction of the electric field completely determines the orientation of the timelike subspace which, by the lemma in Sec.\ref{seco}, singles out the only directions in space along which light propagation is not affected by the nonlinear interaction. We will next analyze the problem of a static electrically charged particle in the context of two different nonlinear theories of electrodynamics, which illustrates what has been presented here. \subsection{Born-Infeld} Following the developments of the previous section, we shall examine here the light cone structure generated by the field of an electric charge placed at the origin in Born-Infeld theory, with Lagrangian given by $$ {\cal L}=b^2\left(1-\sqrt{1+\frac{F}{2b^2}}\right). $$ The equations for the electromagnetic field, $$ (\sqrt{-g}{\cal L}_FF^{\mu\nu})_{;\nu}=0, $$ have been solved in the spherically symmetric case, for instance, in \cite{wh}. The only nonzero component of $F^{\mu\nu}$ is $F^{tr}$, in such a way that $$ \frac{F}{2b^2}=-\frac{16\alpha^2}{r^4b^2+16\alpha^2}, $$ where $\alpha$ is proportional to the electric charge. Defining the new variable $$ y=\left(\frac{b^2r^4}{16\alpha^2}\right)^{1/4}, $$ the function $\Upsilon$ introduced in Eqn.(\ref{upsi}) is in this case given by $$ \Upsilon = 1 - \frac{1}{1+y^4}. $$ $\Upsilon$ tends to 1 for large $r$ (meaning that the linear theory is recovered far from the charge), and it is restricted to the interval $[0,1]$. From the classification in Table \ref{t1}, we see that the light cones go from a sphere (for very large values of $r$) to an external ellipsoid, finally degenerating into two parallel planes at $r=0$. Note that these considerations are valid in the reference frame in which $H_\mu $ is zero. This case has been discussed before in \cite{wh}, from the point of view of the effective potential for the NLP arising from the effective metric, given by $$ V(r) = \frac{L^2r^2}{r^4+\frac{8\alpha^2}{b^2}}, $$ where $L$ is the angular momentum of the photon. We see that for NLP travelling in the radial direction, $V=0$. Hence, these photons move with velocity $c$, as was shown in Sec.\ref{seco}. The motion for $L\neq 0$ can be described in the formalism presented here through the variation of the local light cone, exactly as is done, for instance, in the case of the Schwarzschild black hole. Notice that all the necessary information about the light cones is encoded in $\Upsilon$.
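The two examples treated in this section are in fact related: expanding the Born-Infeld Lagrangian for weak fields, $|F|\ll b^2$ (a simple Taylor expansion, with no further hypotheses), one finds
$$
{\cal L}=b^2\left(1-\sqrt{1+\frac{F}{2b^2}}\right)=-\frac{F}{4}+\frac{F^2}{32\,b^2}+O\!\left(\frac{F^3}{b^4}\right),
$$
which is a Lagrangian of precisely the form considered next, with $\beta=1/(32 b^2)>0$.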
\subsection{Euler-Heisenberg} The second example that will be examined here is that of an Euler-Heisenberg-like Lagrangian (in the weak-field limit), given by $$ {\cal L} (F) = -\frac 1 4 F+\beta F^2, $$ where for the time being we leave $\beta$ unspecified. Taking into account that $\kappa=+\sqrt{F^2+G^2}$, we can distinguish two cases: 1) $F>0$, so that $\kappa = F$. In this case, $$ \Upsilon = \frac{8\beta F-1}{24\beta F-1}. $$ 2) $F<0$, so that $\kappa = - F$, and $$ \Upsilon = \frac{24\beta F-1}{8\beta F-1}. $$ On the other hand, the equations of motion, $$ ({\cal L}_FF^{\mu\nu})_{;\nu}=0, $$ in the case of the EH-like Lagrangian for an electric charge lead to $$ \frac{E(r)}{4}+4\beta E(r)^3 = \frac{Q}{r^2}, $$ where $Q$ is proportional to the electric charge. This expression can be inverted to give $$ r^2=\frac{Q}{\sqrt{\frac E 4 + 4 \beta E^3}}. $$ We see that for $\beta >0$ the electric field must satisfy $0<E^2<\infty$. It follows that in this case $-\infty < F < 0$. From the plot of $\Upsilon$ in terms of $F$ it follows that $\Upsilon$ goes from the value 1 (at $r\rightarrow\infty$, where $F\rightarrow 0$) to the value 3 (at $r\rightarrow 0$, where $F\rightarrow -\infty$). Hence the photon spheres in this case vary qualitatively as shown in Fig.~\ref{erico11}. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{erico11} \caption{Variation of Minkowski and photon spheres for $\beta >0$.} \label{erico11} \end{center} \end{figure} In the same way, for $\beta <0$, we have that $\Upsilon \rightarrow 1$ for $r\rightarrow\infty$, and $\Upsilon \rightarrow -\infty$ for $r\rightarrow 0$. The qualitative variation of the photon spheres in this case is shown in Fig.~\ref{erico22}. \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{erico22} \caption{Variation of Minkowski and photon spheres for $\beta <0$.} \label{erico22} \end{center} \end{figure}
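The limiting values quoted above for $\beta>0$ follow directly from the case 2) formula (a one-line computation):
$$
\lim_{F\to 0^{-}}\frac{24\beta F-1}{8\beta F-1}=1,\qquad \lim_{F\to -\infty}\frac{24\beta F-1}{8\beta F-1}=\frac{24}{8}=3,
$$
so the photon surface interpolates between Maxwell's sphere far from the charge and the limiting surface with $\Upsilon=3$ near it.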
\section{Conclusion} \label{conc} We have shown that the presence of $\mathbf{T}$ in the effective metric, together with the Segr\`e classification of second-rank symmetric tensors, furnishes a classification of the possible forms of the effective metric for nonlinear electromagnetic theories. We developed this classification in the case of Lagrangians given by ${\cal L} = {\cal L} (F)$\footnote{The generalization to ${\cal L} (F,G)$ Lagrangians should be straightforward, taking into account that the effective metric can still be expressed as in Eqn.(\ref{effmet}); see \cite{klo}.}, showing that there are only two possible general forms for the effective metric (associated to Segr\`e types I and II, which are the only types that appear in this case). The explicit form of the effective metric can be used to compare the effective light cone with the Minkowskian light cone. We have presented the different possibilities, each of them associated to a single scalar function of the Lagrangian, its derivatives, and the background field. Finally, we have analyzed two examples, which illustrate the power of the method, in the sense that the whole variation of the light cones is encoded in $\Upsilon$. To close, let us remark that although we have focused on the case of the nonlinear electromagnetic field, the classification of the effective metric in terms of the Segr\`e types is possible for other fields. This follows from the fact that, as shown in \cite{mariofierz} and \cite{mm}, the effective geometries associated to nonlinear scalar field theories and nonlinear spin-two field theories can also be written in the form \begin{equation} g_{\mu\nu}=\Omega^{(1)}_0\gamma_{\mu\nu} +\Omega^{(2)}_0 T_{\mu\nu 0}. \label{effmet} \end{equation} In this expression, $\gamma_{\mu\nu}$ is the background metric, and $\Omega^{(1)}_0$ and $\Omega^{(2)}_0$ are functions of the background field whose detailed form depends on the given theory, with $\Omega^{(2)}_0$ going to zero when the theory is linear. We leave for future work the case of half-integer spin fields, as well as that of multiple fields. \section{Acknowledgements} SEPB would like to acknowledge support from ICRANet (where part of this work was done) and FAPERJ.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Cluster algebras were introduced in \cite{fz_02} as certain commutative algebras whose generators are defined by a combinatorial recursive procedure called ``mutation''. They were originally conceived to study certain problems in Lie theory and quantum groups, but have since found many surprising and deep connections to other areas of mathematics and physics. In \cite{fst_08}, a class of cluster algebras was defined starting from the data of a surface with boundary, together with a collection of punctures and marked points on the boundary. It was shown in \cite{gsv_05} and \cite{fg_06} that these cluster algebras can be interpreted geometrically as functions on decorated Teichm\"{u}ller spaces. Specifically, the cluster variables are coordinate functions known as \emph{$\lambda$-lengths} or \emph{Penner coordinates} \cite{penner_12}. One of the earliest results in the subject is the \emph{Laurent phenomenon}, which says that all cluster variables can be expressed as Laurent polynomials in terms of a fixed set of initial cluster variables. Over the years, several explicit formulas have been given for these Laurent expressions in the case of cluster algebras coming from surfaces, with the Laurent monomials being indexed by various combinatorial objects. Some of these include Schiffler's ``$T$-paths'' \cite{schiffler_08}, perfect matchings of the snake graphs of Musiker, Schiffler, and Williams \cite{msw_11}, and Yurikusa's ``angle matchings'' \cite{yurikusa_19}. Going beyond the commutative case, Berenstein and Zelevinsky defined quantum cluster algebras \cite{berenstein2005quantum}, in which cluster variables (of the same cluster) quasi-commute with one another, meaning that exchanging the order of multiplication in a cluster monomial can alter the expression by an overall power of $q$. More recently, Berenstein and Retakh \cite{berenstein2018noncommutative} defined, in the case of surfaces, a (completely) non-commutative model of cluster variables and obtained non-commutative Laurent expansions analogous to $T$-paths. Such non-commutative expressions can also be defined as quasi-Pl\"ucker coordinates. Recently, Penner and Zeitlin defined the notion of the \emph{decorated super-Teichm\"{u}ller space} associated to a bordered marked surface \cite{pz_19}. This work builds on earlier work on super-Teichm\"uller spaces for super Riemann surfaces \cite{cr_88}. The coordinates on a super-space are divided into two classes: even coordinates and odd coordinates. Even coordinates are ordinary commutative variables, whereas odd coordinates anti-commute with one another. Odd coordinates are also commonly known as Grassmann variables. As in the classical commutative case, the coordinates correspond to arcs in a fixed triangulation of the surface. Penner and Zeitlin also described a super version of the Ptolemy relation, which expresses how the coordinates change when the choice of triangulation is changed. Our main result in this paper is an explicit formula for the super $\lambda$-lengths in the case of marked disks, generalizing the $T$-path formulation of Schiffler. Like Schiffler's formula, the terms in our formula are indexed by objects which closely resemble the $T$-paths from the classical case. \section{Decorated Teichm\"uller theory} First we review some background on decorated Teichm\"{u}ller spaces. For a detailed reference, see~\cite{penner_12}.
Let $S$ be a surface with boundary, and let $p_1,\dots,p_n$ be a collection of marked points on the boundary, such that each boundary component contains at least one marked point. More generally, one can also allow a collection of interior marked points (or \emph{punctures}), but we will not be concerned with that case in this paper. We also equip the surface with a triangulation, whose arcs terminate at the marked boundary points. The \emph{Teichm\"{u}ller space} of~$S$, denoted~$\mathcal{T}(S)$, is the space of (equivalence classes of) hyperbolic metrics on $S$ with constant negative curvature, with cusps at the marked points. Because of the cusps, any geodesic between marked points has infinite hyperbolic length. The \emph{decorated Teichm\"{u}ller space} of $S$, written $\widetilde{\mathcal{T}}(S)$, is a trivial fiber bundle over $\mathcal{T}(S)$, with fiber ${\mathbb R}_{> 0}^n$. The fibers represent a choice of a positive real number associated to each marked point. At each marked point, we draw a horocycle whose size (or \emph{height}) is determined by the corresponding positive number. Truncating the geodesics using these horocycles, it now makes sense to talk about their lengths. If $\ell$ is the truncated length of one of these geodesic segments, then the $\lambda$-length (or \emph{Penner coordinate}) associated to that geodesic arc is defined to be \[ \lambda := \exp(\ell/2). \] Fixing a triangulation of the marked surface, the collection of $\lambda$-lengths corresponding to the arcs in the triangulation (including segments of the boundary) forms a system of coordinates for~$\widetilde{\mathcal{T}}(S)$. Choosing a different triangulation results in a different system of coordinates, but the two systems are related by simple transformations which are a hyperbolic analogue of Ptolemy's theorem from classical Euclidean geometry. If two triangulations differ by the flip of a single arc, as in Figure~\ref{fig:ptolemy}, then the $\lambda$-lengths are related by $ef = ac + bd$. \input{flip.tex} \section[Laurent expression for lambda-lengths]{Laurent expression for $\boldsymbol{\lambda}$-lengths} In this paper, we will only be concerned with the case where the surface $S$ is a disk with marked points on its boundary (which we will picture as a convex polygon), so we restrict to that case now. Fix a triangulation of an $n$-gon, and label the vertices $1$ through $n$. Schiffler~\cite{schiffler_08} defined a~\emph{$T$-path} from $i$ to $j$ to be a sequence \[ \alpha = \big(a_0,\dots,a_{\ell(\alpha)} \,|\, t_1,\dots,t_{\ell(\alpha)}\big) \] such that \begin{itemize}\itemsep=0pt \item[(T1)] $a_0,\dots,a_{\ell(\alpha)}$ are vertices of the polygon, \item[(T2)] $t_k$ is an arc in the triangulation connecting $a_{k-1}$ to $a_k$, \item[(T3)] no arc is used more than once, \item[(T4)] $\ell(\alpha)$ is odd, \item[(T5)] if $k$ is even, then $t_k$ crosses the arc connecting vertices $i$ and $j$, \item[(T6)] if $k < l$ and both $t_k$ and $t_l$ cross the arc from $i$ to $j$, then the point of intersection with $t_k$ is closer to $i$, and the point of intersection with $t_l$ is closer to $j$. \end{itemize} An example of a $T$-path in a hexagon is shown in Figure~\ref{fig:t_path}. The even-numbered edges are colored blue, and the odd-numbered edges red, for emphasis. \begin{figure}[h!]
\centering \begin {tikzpicture}[scale=1.3] \coordinate (A) at (-1,0); \coordinate (B) at (-0.5,-0.866); \coordinate (C) at (0.5,-0.866); \coordinate (D) at (1,0); \coordinate (E) at (0.5,0.866); \coordinate (F) at (-0.5,0.866); \draw (A) -- (B) -- (C) -- (D) -- (E) -- (F) -- cycle; \draw (B) -- (F) -- (C) -- (E); \draw (A) node[left] {$i$}; \draw (D) node[right] {$j$}; \draw[red, line width = 1.2] (A) -- (B); \draw[red, line width = 1.2] (F) -- (E); \draw[red, line width = 1.2] (C) -- (D); \draw[blue, line width = 1.2] (B) -- (F); \draw[blue, line width = 1.2] (E) -- (C); \end {tikzpicture} \caption{A $T$-path from $i$ to $j$.} \label {fig:t_path} \end{figure} We denote the set of all $T$-paths from $i$ to $j$ by $T_{ij}$. Given a $T$-path $\alpha$, we define a Laurent monomial $x_\alpha$ (in terms of the $\lambda$-lengths of a fixed triangulation) as the product of the $\lambda$-lengths used in the $T$-path, with the even-numbered ones inverted. That is, if the ordered sequence of edges in $\alpha$ is $e_1, e_2, \dots, e_{2m+1}$, then \[ x_\alpha := \frac{\prod_{k=0}^m x_{e_{2k+1}}}{\prod_{k=1}^m x_{e_{2k}}}. \] Schiffler proved the following theorem relating $\lambda$-lengths and $T$-paths. \begin{Theorem}[{\cite[Theorem 1.2]{schiffler_08}}] Let $x_{ij}$ be the $\lambda$-length corresponding to the geodesic arc connecting vertices $i$ and $j$ in a triangulated polygon. Then \[ x_{ij} = \sum_{\alpha \in T_{ij}} x_\alpha. \] \end{Theorem} \begin{Remark} \label{rem:subtriang}\rm As a consequence of the definition of $T$-paths, to compute $x_{ij}$ it is sufficient to consider only the sub-polygon consisting of the triangles which the geodesic arc connecting vertices $i$ and $j$ crosses. \end{Remark} \section{Decorated super-Teichm\"{u}ller theory}\label{sec:dec_super} Super-Teichm\"{u}ller spaces have been studied for several years now (see for example~\cite{cr_88}). Recently, Penner and Zeitlin introduced a decorated version of super-Teichm\"{u}ller spaces~\cite{pz_19}. It is parametrized by even variables corresponding to the $\lambda$-lengths of a triangulation, as well as odd variables (called \emph{$\mu$-invariants}) corresponding to the triangles. They give a super version of the Ptolemy relation, which reads as follows (see Figure~\ref{fig:super_ptolemy} for the meanings of the variables): \begin{gather*} ef = (ac+bd)\left( 1+{{\sigma\theta\sqrt{\chi}}\over{1+\chi}}\right),\qquad \sigma' = {{\sigma-\sqrt{\chi}\theta}\over{\sqrt{1+\chi}}},\qquad \theta' = {{\theta+\sqrt{\chi}\sigma}\over{\sqrt{1+\chi}}}, \end{gather*} where $ \chi = \frac{ac}{bd}$. \input{superflip.tex} We will usually find it convenient to re-write these equations without $\chi$, as follows: \begin{gather} ef = ac+bd+\sqrt{abcd} \sigma\theta, \label{eq:mutation}\\ \sigma' = {{\sigma\sqrt{bd}-\theta\sqrt{ac}}\over{\sqrt{ac+bd}}}, \label{eq:mu_mutation_neg} \\ \theta' = {{ \theta\sqrt{bd}+\sigma \sqrt{ac}}\over{\sqrt{ac+bd}}}.\label{eq:mu_mutation_pos} \end{gather} An important corollary of these equations is the identity \begin{equation}\label{eq:mu_cross} \sigma\theta=\sigma'\theta', \end{equation} which will be used frequently in our proofs.
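For completeness, we record the short computation behind equation~\eqref{eq:mu_cross}; it uses only equations~\eqref{eq:mu_mutation_neg} and~\eqref{eq:mu_mutation_pos}, together with the facts that odd elements square to zero and anti-commute:
\[
\sigma'\theta' = \frac{\big(\sigma\sqrt{bd}-\theta\sqrt{ac}\big)\big(\theta\sqrt{bd}+\sigma\sqrt{ac}\big)}{ac+bd} = \frac{\sigma\theta\, bd-\theta\sigma\, ac}{ac+bd} = \frac{\sigma\theta\,(bd+ac)}{ac+bd} = \sigma\theta,
\]
where the terms containing $\sigma^2$ and $\theta^2$ vanish, and we used $\theta\sigma=-\sigma\theta$.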
To understand the oriented arrows in Figure~\ref{fig:super_ptolemy} and the minus sign in equation~\eqref{eq:mu_mutation_neg}, we need the combinatorial data of a \emph{spin structure}. In~\cite{cr_07} and~\cite{cr_08}, an isomorphism was established between the set of equivalence classes of spin structures on a surface and the set of isomorphism classes of Kasteleyn orientations of a~fatgraph spine of the surface. Dual to any fatgraph spine is a~triangulation of the surface, and so an orientation of the fatgraph corresponds to an orientation of a~triangulation (requiring the dual edge to cross from left to right). For our purposes, a~\emph{spin structure} will be a~choice of orientation of the edges of a~triangulation, modulo a~certain equivalence relation, which we now describe. Fix a triangulation $\Delta$ and an orientation $\tau$ of the edges in that triangulation. For any triangle~$t$, consider the transformation which reverses the orientation of the three sides of~$t$. Define an equivalence relation on orientations by declaring that $\tau \sim \tau'$ if they differ by a~sequence of these transformations. In~\cite{pz_19}, a spin structure is represented combinatorially by an equivalence class of orientations. In \cite{pz_19}, the authors did not consider surfaces with boundary, as we do here. Because the orientation of boundary segments plays no role in our formulas for $\lambda$-lengths (though it does affect the $\mu$-invariants), we can often ignore the boundary orientations. Note that the hypothesis of Proposition~\ref{prop:equiv_rel} on the triangulation is sufficient for our purposes because of Remark~\ref{rem:subtriang}. \begin {Proposition} \label{prop:equiv_rel} Fix a triangulation of a polygon in which every triangle has at least one boundary edge. Then there is a unique spin structure after ignoring the boundary edges. In particular, this means that, from any representative orientation of a fixed spin structure, one can obtain all other orientations on the interior diagonals, without leaving this spin structure. \end {Proposition} \begin {proof} Because every triangle has at least one boundary edge, one can naturally order the triangles (and internal diagonals) from left to right. We may picture the polygon as follows, to emphasize this: \begin{center} \begin{tikzpicture} \draw (0,0) -- (0,1) -- (5,1) -- (5,0) -- cycle; \draw (0,1) -- (1,0) -- (1,1); \draw (1,0) -- (2,1); \draw (2,1) -- (2,0); \draw (2,1) -- (3,0); \draw (2,1) -- (4,0); \draw (4,0) -- (4,1); \draw (4,0) -- (5,1); \end{tikzpicture} \end{center} We will demonstrate that we may change the orientation of a single interior edge while remaining in the same equivalence class. Label the triangles from left to right as $t_1,t_2,\dots$ and the internal diagonals as $d_1,d_2,\dots$. We will argue that we can change the orientation of just $d_k$, by induction on $k$. If $k=1$, then $d_1$ is an edge of $t_1$, and the other two edges of $t_1$ are on the boundary. Because we ignore boundary orientations, the equivalence relation allows us to reverse the orientation of just $d_1$. Now, for arbitrary $d_k$, reverse the orientations around triangle~$t_k$. This affects both $d_k$ and~$d_{k-1}$. But by induction, we may change just $d_{k-1}$ while staying in the equivalence class. This proves the claim. \end {proof}
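To illustrate the proposition in the smallest case (this is just the $k=1$ step of the induction above, made explicit): let $P$ be a quadrilateral, whose unique interior diagonal $d_1$ separates the two triangles $t_1$ and $t_2$. Reversing the orientations of the three sides of $t_1$ reverses $d_1$ together with two boundary edges; since boundary orientations are ignored, the two possible orientations of $d_1$ are equivalent, so the quadrilateral indeed carries a unique spin structure in our sense.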
In Figure~\ref{fig:super_ptolemy}, the arrows on the edges labelled $e$ and $f$ represent the choice of orientation. In the figure, the orientations of the edges around the boundary of the quadrilateral are not indicated. Three of the four edges are unchanged by the super Ptolemy transformation, and only the orientation of the edge labelled~$b$ is changed, see Figure~\ref{fig:flip_spin}. The sign in equation~\eqref{eq:mu_mutation_neg} is determined by the relative positions of the triangular faces with respect to the chosen orientation. \input{spin_flip.tex} As illustrated in Figure~\ref{fig:flip_spin}, the super Ptolemy relation is \emph{not} an involution: performing a flip twice results in reversing the orientations around the top triangle. This leads to the observation that $\mu$-invariants are well-defined only up to sign. Consequently, in the main results of our paper regarding $\mu$-invariants (Theorems~\ref{thm:mu_single_fan} and~\ref{thm:proof_zigzag}), we must specify a mutation sequence to obtain a formula for the $\mu$-invariants. \begin{Remark} \label{rem:mu-invariants}\rm The equivalence relation above (as used in Proposition~\ref{prop:equiv_rel}) guarantees that the result after flipping twice represents the same spin structure, but algebraically the double flip has the effect of negating the $\mu$-invariant of that triangle ($\theta \mapsto -\theta$ in the figure). This means that the specific $\mu$-invariants are not a feature of the triangulation and spin structure alone, but also of the choice of representative orientation. Choosing a different orientation corresponds to changing the sign of some of the $\mu$-invariants. \end{Remark} \begin{Remark} \label{rem:lambda}\rm Unlike the $\mu$-invariants, the expressions of $\lambda$-lengths in terms of an initial triangulation (as will be described more fully in Corollary~\ref{cor:laurent}) are independent of the orientation of the arc as part of a spin structure, and of the flip sequence used to obtain a triangulation containing that arc. This is proven in \cite{pz_19} in the case of surfaces without boundary. The fact that $\lambda$-lengths are well-defined can also be seen as a consequence of the well-definedness of super-friezes, as we will describe in Sections~\ref{sec:super-friezes} and~\ref{sec:subfrieze}, based on \cite{M-GOT15,pz_19}. We provide a more direct proof in Appendix~\ref{appendix}. \end{Remark} \section[Super T-paths]{Super $\boldsymbol{T}$-paths} Let $P$ be an $(n+3)$-gon (a disk with $n+3$ marked points on the boundary), and $T = \{t_1,\dots,t_{2n+3}\}$ the set of arcs in some triangulation. We will denote by $V_0$ the set of vertices (marked points). Let~$a$ and~$b$ be two non-adjacent vertices on the boundary and let $(a,b)$ be the arc that connects~$a$ and~$b$. \subsection{Fans of a triangulation and their centers} We call a triangulation a \emph{fan} if all the internal diagonals meet at a common vertex. We will define a canonical way to break any triangulation $T$ of $P$ into smaller polygons with fan triangulations. For this purpose, certain vertices in $P$ will be distinguished as \emph{centers of fans}. \input{shade.tex} Let $P$ be a polygon and $T$ a triangulation. Following Remark~\ref{rem:subtriang}, without loss of generality, we may assume that $(a,b)$ crosses all internal diagonals of $T$. Consider the intersection of $(a,b)$ with triangles in $T$ which do not contain $a$ or $b$. These intersections create small triangles (colored yellow in Figure~\ref{fig:fan_centers}) whose vertices in $P$ we call \emph{fan centers}. We set $a=c_0$ and $b=c_{N+1}$ and as a convention we will name these centers $c_1,\dots,c_N$ such that \begin{enumerate}\itemsep=0pt \item[1)] for $1\leq i\leq N-1$, the edge $(c_i,c_{i+1})$ is an arc in $T$ which crosses $(a,b)$, \item[2)] the intersection $(c_i,c_{i+1}) \cap (a,b)$ is closer to $a$ than $(c_j,c_{j+1}) \cap (a,b)$ if $i<j$.
\end{enumerate} Now the edges $(c_i,c_{i+1})$ naturally break the triangulation $T$ into $N$ smaller polygons, each of which comes with an induced fan triangulation. Let $F_i$ denote the subgraph of $T$ bounded by $c_{i-1}$, $c_{i}$ and $c_{i+1}$; these subgraphs are called the \emph{fan segments} of $T$. We say that $c_i$ is the \emph{center} of $F_i$. See Figure~\ref{fig:bigger_auxiliary_graph} for an illustration, where the fan segments are indicated by different colors. \subsection{The auxiliary graph} We shall now define an auxiliary graph associated to $(T,a,b)$, which will be used to define the super $T$-paths from $a$ to $b$. For a triangulation $T$ and a pair of vertices $a$ and $b$, we define the graph $\Gamma_T^{a,b}$ to be the graph of the triangulation $T$ with some additional vertices and edges. \begin{enumerate}\itemsep=0pt \item For each face of the triangulation $T$, we place an internal vertex, which lies on the arc $(a,b)$. We denote the internal vertices $V_1=\{\theta_1,\dots,\theta_{n+1}\}$, such that~$\theta_i$ is closer to $a$ than~$\theta_j$ if and only if $i<j$. \item For each face of~$T$, we add an edge $\sigma_i := (\theta_i, c_j)$ connecting the internal vertex $\theta_i$ to the center of the fan segment which contains~$\theta_i$. We denote by~$\sigma$ the set of all such edges. \item For each $\theta_i$ and $\theta_j$ with $i<j$, we add an edge connecting $\theta_i$ and $\theta_j$. We denote the collection of these edges as $\tau = \{\tau_{ij}\colon i<j\}$. For simplicity the $\tau$-edges are drawn to be overlapping. \end{enumerate} See Figure~\ref{fig:auxiliary_graph} for an example. \input{graph.tex} \begin{Remark}\rm Note that the arc $(a,b)$ divides the first and last triangle into two triangles, as opposed to into a triangle and a quadrilateral as happens for every other triangle of~$T$. Consequently, the convention of the yellow coloring as in Figure~\ref{fig:fan_centers} can be extended to the first and last triangles of $T$ in multiple ways. Thus we may define the first and last $\sigma$-edges in a~different way: $\sigma_1=(\theta_1,x)$ and $\sigma_4=(\theta_4,y)$ in Figure~\ref{fig:auxiliary_graph}, in which case the (not yet defined) super $T$-paths produce the same weights. We make this choice for the sake of consistency when doing induction. This means that when $P$ is a quadrilateral, we can view its triangulation~$T$ as either a single fan or as having two fans, and define the auxiliary graph in four different ways. \end{Remark} In Figure~\ref{fig:bigger_auxiliary_graph}, we give another example of constructing the auxiliary graph. \begin{figure}[h!] \centering \input{example_shade.tex}\ \ \input{example_auxiliary_graph.tex} \caption{Left: yellow shading indicates the fan centers. Right: the auxiliary graph where different fans are indicated by different colors.} \label{fig:bigger_auxiliary_graph} \end{figure} \subsection[Super T-paths]{Super $\boldsymbol{T}$-paths} Now we define the super $T$-paths from $a$ to $b$ to be paths on edges of the auxiliary graph $\Gamma_T^{a,b}$ satisfying certain axioms.
\begin{Definition}[super $T$-paths] \label{def:super-T} A \emph{super $T$-path} $t$ from $a$ to $b$ is a sequence \[t=(a_0,a_1,\dots,a_{\ell(t)} \,|\, t_1,t_2,\dots,t_{\ell(t)})\] such that \begin{enumerate}\itemsep=0pt \item[\rm (T1)] $a=a_0,a_1,\dots,a_{\ell(t)}=b\in V_0\cup V_1$ are vertices on $\Gamma_T^{a,b}$, \item[\rm (T2)] for each $1\leq i\leq \ell(t)$, $t_i$ is an edge in $\Gamma_T^{a,b}$ connecting $a_{i-1}$ and $a_i$, \item[\rm (T3)] $t_i\neq t_j$ if $i\neq j$, \item[\rm (T4)] $\ell(t)$ is odd, \item[\rm (T5)] $t_i$ crosses $(a,b)$ if $i$ is even (the $\sigma$-edges are considered to cross $(a,b)$), \item[\rm (T6)] $t_i\in\sigma$ only if $i$ is even, $t_i\in\tau$ only if $i$ is odd, \item[\rm (T7)] if $i<j$ and both $t_i$ and $t_j$ cross the arc $(a,b)$, then the intersection $t_i\cap (a,b)$ is closer to the vertex $a$ than the intersection $t_j\cap (a,b)$. \end{enumerate} We let $\mathcal T_{a,b}$ denote the set of super $T$-paths from $a$ to $b$. Furthermore, let $\mathcal T_{a,b}^0$ be the set of super $T$-paths from $a$ to $b$ which do not use $\sigma$ or $\tau$ edges. We naturally identify $\mathcal{T}_{a,b}^0$ with $T_{a,b}$, the set of ordinary $T$-paths from $a$ to $b$. We also define $\mathcal T_{a,b}^1 := \mathcal T_{a,b} - \mathcal T_{a,b}^0$ as the set of super $T$-paths which do not correspond to ordinary $T$-paths. \end{Definition} An immediate observation is that every super $T$-path must have an even number of $\sigma$-edges: internal vertices can only be entered and left through $\sigma$- and $\tau$-edges, and by (T6) these must alternate. More specifically, the $\sigma$-edges always appear as part of a sequence of $\sigma$-edge, $\tau$-edge, and $\sigma$-edge. Here `$\tau$' stands for \emph{teleportation}: instead of following along an edge of the triangulation $T$, a $\tau$-step teleports from one internal vertex to another. We call a subsequence of a super $T$-path of the form $(\dots,\theta_i,\theta_j,\dots|\dots,\sigma_i,\tau_{ij},\sigma_j,\dots)$ a \emph{super step}. In other words, a super $T$-path is a~concatenation of certain ordinary $T$-paths and super steps. \begin{Example} \label{example:14some}\rm Figure~\ref{fig:examplepath} illustrates several examples of super $T$-paths from $1$ to $4$. Odd-numbered edges are colored red and even-numbered edges are colored blue. \input{examplepath.tex} \end{Example} \subsection{Default orientation and positive order} Fix an arc $(a,b)$; as mentioned in the previous section, we assume that this arc crosses all diagonals in the chosen triangulation. We also choose a direction $a \to b$ for this arc. Based on these choices, we will define a \emph{default orientation}, which guarantees an ordering of the $\mu$-invariants for which all terms in the $\lambda$-length expansion appear with positive coefficients. Accordingly, we will call this ordering the \emph{positive ordering}. Notice that only the orientation of interior edges affects our calculation of $\lambda$-lengths; therefore, the orientation of boundary edges will be omitted. Recall the convention for labelling the vertices of a polygon: given two vertices $a$ and $b$ and a chosen direction $a \to b$, we label $c_0 = a$, $c_{N+1} = b$, and the fan centers are labelled $c_1,\dots,c_N$ in such a way that $c_i$ is closer to $a$ than $c_{i+1}$ is. See Figure~\ref{fig:default_spin} for an illustration. \begin{Definition}[default orientation]\rm When the triangulation is a single fan with $c_1$ being the center, every interior edge is oriented away from $c_1$.
When $T$ is a triangulation with $N>1$ fans, with centers $c_1,\dots,c_N$, the interior edges within each fan segment are oriented away from its center. The edges where two fans meet are oriented as \( c_1\rightarrow c_2\rightarrow\cdots\rightarrow c_{N-1}\rightarrow c_N \). See Figures~\ref{fig:default_spin} and \ref{fig:default_spin_more}. \end{Definition} \input{defaultspin.tex} \input{defaultspin_more.tex} \begin{Remark}\rm As mentioned above, the definition of default orientation depends on the choice of direction $a \to b$. In particular, choosing the opposite direction $b \to a$ would change the labelling so that $c_i$ becomes $c_{N-i}$. The effect is that the orientation of the diagonals within a fan are unchanged, but the diagonals connecting two fan centers would have the reverse orientation. \end{Remark} \begin{Definition}[positive ordering]\label{def:pos_order}\rm For $F$ a single fan triangulation with center $c_1$, let $\theta_1,\dots,\theta_k$ be its faces. The \emph{positive ordering} is defined to be \[\theta_1>\theta_2>\cdots>\theta_k, \] where $\theta_1,\dots,\theta_k$ are ordered counterclockwise around $c_1$. For a triangulation $T$ with fans $F_1,\dots,F_N$, we order the fans as follows in two different cases \begin{enumerate}\itemsep=0pt \item If $c_{N-1},c_{N},c_{N+1}$ are oriented {counterclockwise}, then we order the fans as follows \[\begin{cases} F_1>F_3>\cdots>F_{N-1}>F_N>F_{N-2}>\cdots>F_4>F_2&\text{if }N\text{ is even,}\\ F_2>F_4>\cdots>F_{N-1}>F_N>F_{N-2}>\cdots>F_3>F_1&\text{if }N\text{ is odd.} \end{cases} \] \item If $c_{N-1}$, $c_{N}$, $c_{N+1}$ are oriented {clockwise}, then we order the fans as follows \[\begin{cases} F_1>F_3>\cdots>F_{N-2}>F_N>F_{N-1}>\cdots>F_4>F_2&\text{if }N\text{ is odd,}\\ F_2>F_4>\cdots>F_{N-2}>F_N>F_{N-1}>\cdots>F_3>F_1&\text{if }N\text{ is even.} \end{cases} \] \end{enumerate} Then the \emph{positive ordering} on faces of $T$ is induced by the ordering on fans and the positive ordering within each fan. \end{Definition} \begin{Remark} \label{rem:pos_order}\rm The positive ordering may also be described inductively, triangle-by-triangle, as follows: Recall from the definition of the auxiliary graph that the triangles are labelled $\theta_1, \theta_2, \dots, \theta_n$ in order from $a$ to $b$. For each triangle $\theta_k$, look at the edge separating $\theta_k$ and $\theta_{k+1}$. If the edge is oriented so that $\theta_k$ is to the right, then we declare that $\theta_k > \theta_i$ for all $i > k$. On the other hand, if $\theta_k$ is to the left, we declare that $\theta_k < \theta_i$ for all $i > k$. \end{Remark} For example, in Figure~\ref{fig:default_spin}, the positive ordering on the faces is \[\alpha_1>\alpha_2>\alpha_3>\gamma_1>\gamma_2>\gamma_3> \delta_2>\delta_1>\beta_2>\beta_1.\] \subsection{Expansion formula} \begin{Definition}[weight] \label{def:weight}\rm Let $t \in \mathcal{T}_{a,b}$ be a super $T$-path which uses edges $t_1,t_2,\dots$ in the auxiliary graph $\Gamma_T^{a,b}$. We will assign to each edge $t_i$ a weight, which will be an element in the super algebra $\mathbb R\bigl[x_1^{\pm\frac{1}{2}},\dots,x_{2n+3}^{\pm\frac{1}{2}} \,|\, \theta_1,\dots,\theta_{n+1}\bigr]$ (where the $\theta_i$'s are the odd generators) as follows.
For the parity conditions on edges $t_i \in \sigma$ or $\tau$, recall axiom (T6) of Definition~\ref{def:super-T}: \[ \wt(t_i) := \begin{cases} x_j &\text{if } t_i \in T \text{, with }\lambda\text{-length }x_j, \text{ and }i \text{ is odd}, \\ x_j^{-1} &\text{if } t_i \in T \text{, with }\lambda\text{-length }x_j, \text{ and }i \text{ is even}, \\ \sqrt{\frac{x_j}{x_kx_l}} \, \theta_s &\text{if } t_i \in\sigma \text{ and the face containing }t_i \\ & \text{is as pictured below} ~ (i \text{ must be even}), \\ 1&\text{if }t_i\in\tau~ (i \text{ must be odd}). \end{cases} \] In keeping with the intuition mentioned after Definition~\ref{def:super-T}, teleportation is unweighted: \begin{center} \begin{tikzpicture}[scale=1.25] \coordinate (1) at (1,0); \coordinate (2) at (-1,0); \coordinate (3) at (0,1.618); \coordinate (4) at (0, 0.618); \draw (1)--(2)--(3)--cycle; \draw [thick](3)--(4); \node at (-0.2, 0.518) {$\theta_s$}; \node at (0.1,1.118) {$t_i$}; \node at (0.65,0.9) {$x_l$}; \node at (-0.65,0.9) {$x_k$}; \node at (0,-0.15) {$x_j$}; \end{tikzpicture} \end{center} Here, $\theta_s$ is the $\mu$-invariant associated to the face containing $t_i$. Finally, we define the weight of a super $T$-path to be the product of the weights of its edges \[\wt(t)=\prod_{t_i\in t}\wt(t_i),\] where the product of $\mu$-invariants is taken under the positive ordering. \end{Definition} In what follows, we will use $\lambda_{ab}$ to denote the $\lambda$-length of the arc $(a,b)$, and we will write $\boxed{ijk}$ to denote the $\mu$-invariant associated to the triple of ideal points $(i,j,k)$, subject to Remark~\ref{rem:mu-invariants}. The following theorem is the main result of the current paper, giving an explicit expression for arbitrary super $\lambda$-lengths in terms of the $\lambda$-lengths and $\mu$-invariants of a fixed triangulation. \begin{Theorem}\label{thm:main} Under the default orientation, the $\lambda$-length of $(a,b)$ is given by \[\lambda_{a,b} = \sum_{t\in\mathcal T_{a,b}} \wt(t). \] \end{Theorem} When starting with a generic orientation, one can first apply a sequence of equivalence moves (reversing the arrows around a triangle while negating its $\mu$-invariant) to get to the default orientation, with some of the $\mu$-invariants having their signs changed. This is always possible due to Proposition~\ref{prop:equiv_rel}. See Example~\ref{example:main}.
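Before reformulating the theorem for an arbitrary orientation, it is instructive to verify it in the smallest case, a quadrilateral; this is a direct check against equation~\eqref{eq:mutation}, using a labelling compatible with that equation (which may differ from Figure~\ref{fig:super_ptolemy} by a relabelling). Label the vertices $1,2,3,4$ so that the sides are $a=(1,2)$, $b=(2,3)$, $c=(3,4)$, $d=(4,1)$, the diagonal in the triangulation is $f=(2,4)$, and let $\sigma$, $\theta$ be the $\mu$-invariants of the triangles $(1,2,4)$ and $(2,3,4)$. Viewing the triangulation as a single fan with center $4$ (one of the quadrilateral conventions discussed after the definition of the auxiliary graph), there are exactly three super $T$-paths from $1$ to $3$: two ordinary ones, with weights $a f^{-1} c$ and $d f^{-1} b$, and one super step, with weight
\[
d\,\sqrt{\frac{a}{d f}}\,\sigma\,\sqrt{\frac{b}{c f}}\,\theta\,c=\frac{\sqrt{abcd}}{f}\,\sigma\theta .
\]
Summing the three weights (with the positive ordering chosen so that the super step contributes $+\sigma\theta$, as the default orientation guarantees) gives $\lambda_{13}=\big(ac+bd+\sqrt{abcd}\,\sigma\theta\big)/f$, in agreement with the super Ptolemy relation~\eqref{eq:mutation}.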
Equivalently, we can state the main theorem with respect to an arbitrary choice of orientations (not necessarily the default one) as follows. \begin{Corollary}\label{cor:main} With a generic choice of orientation, the $\lambda$-length of $(a,b)$ is given by \[ \lambda_{a,b} = \sum_{t\in\mathcal T_{a,b}}(-1)^{\inv(t)} \wt(t), \] where ${\rm inv}(t)$ is the number of edges in the triangulation which cross a $\tau$-edge of $t$ and are oriented opposite the default orientation. \end{Corollary} \begin{proof} Label the internal diagonals $d_1,d_2,\dots,d_n$ in order of their proximity from $c_0$ to $c_N$, and similarly label the $\mu$-invariants of the triangles $\theta_1,\theta_2,\dots,\theta_{n+1}$. Suppose $d_k$ is the last diagonal whose orientation disagrees with the default orientation. Proposition~\ref{prop:equiv_rel} describes how we can find another orientation, representing the same spin structure, where $d_k$ is the only internal diagonal whose orientation is changed. From Remark~\ref{rem:mu-invariants}, we see that in doing so, $\theta_i$ must be replaced by $-\theta_i$ for all $i \leq k$. This process may then be repeated for all diagonals whose orientation differs from the default one. The end result is as follows: if $d_{i_1},d_{i_2},\dots,d_{i_k}$ are all the internal diagonals whose orientation disagrees with the default orientation (with $i_1 < i_2 < \cdots < i_k$), then the $\mu$-invariants of the triangles between $d_{i_{k-1}}$ and $d_{i_k}$ are negated, those between $d_{i_{k-2}}$ and $d_{i_{k-1}}$ are not, those between $d_{i_{k-3}}$ and $d_{i_{k-2}}$ are negated, those between $d_{i_{k-4}}$ and $d_{i_{k-3}}$ are not, and so on. Any super $T$-path $t$ contains some number of super steps. Suppose $t$ contains a super step $(\dots,\theta_i,\theta_j,\dots\,|\,\dots,\sigma_i,\tau_{ij},\sigma_j,\dots)$. Let $m_{ij}$ be the number of diagonals between $\theta_i$ and $\theta_j$ whose orientation disagrees with the default. If $m_{ij}$ is even, then when we change from the given orientation to the default one, either both $\theta_i$ and $\theta_j$ are negated, or both stay the same. In this case, the product $\theta_i \theta_j$ is unchanged when passing to the default orientation. On the other hand, if $m_{ij}$ is odd, then one of them is negated, and the other stays the same, in which case the product $\theta_i \theta_j$ is negated. The number ${\rm inv}(t)$ is simply the sum of these $m_{ij}$'s over all super steps in the path $t$. \end{proof} It is apparent from the super $T$-path formulation in Theorem~\ref{thm:main} that these $\lambda$-lengths satisfy something analogous to the Laurent phenomenon exhibited by ordinary cluster algebras. This is summarized in the following corollary. \begin{Corollary}\label{cor:laurent} Let $\tilde{\theta}_i := {\rm wt}(\sigma_i) = \sqrt{\frac{x_j}{x_kx_l}} \theta_i$ $($see Definition {\rm \ref{def:weight})}. For any pair of vertices~$a$,~$b$ of the polygon, \begin{itemize}\itemsep=0pt \item[$(a)$] $\lambda_{ab} \in {\mathbb R}\bigl[x_1^{\pm 1}, \dots, x_{2n+3}^{\pm 1} \,|\, \tilde{\theta}_1, \dots, \tilde{\theta}_{n+1}\bigr]$. In other words, each term of $\lambda_{ab}$ is the product of a~Laurent monomial in the $x_i$'s and a monomial in the $\tilde{\theta}_i$'s. \item[$(b)$] $\lambda_{ab} \in {\mathbb R}\bigl[x_{1}^{\pm \frac{1}{2}}, \dots, x_{2n+3}^{\pm \frac{1}{2}} \,|\, \theta_1, \dots, \theta_{n+1}\bigr]$. In other words, each term of~$\lambda_{ab}$ is the product of a~Laurent monomial in the square roots of the~$x_i$'s and a monomial in the~$\theta_i$'s. \end{itemize} \end{Corollary} \input{example_main_theorem.tex} \begin{Example}\label{example:main}\rm An example of the expansion formula given in Theorem~\ref{thm:main} and Corollary~\ref{cor:main} is shown in Figure~\ref{fig:main_example}. Continuing from Example~\ref{example:14some}, this figure shows all super $T$-paths in $\mathcal{T}_{1,4}$. In this example, to obtain the default orientation we would need to flip the arrow on edge $(3,5)$. We can do this by reversing all arrows of the last triangle while negating its $\mu$-invariant to $-\theta_4$. Keeping the same positive ordering $\theta_1>\theta_2>\theta_4>\theta_3$, this makes all terms in Figure~\ref{fig:main_example} positive.
In this example, the Laurent expansion can also be written in terms of the $\tilde \theta$'s (defined in Corollary~\ref{cor:laurent}) as \begin{gather*} \lambda_{14}=\frac{x_1x_4}{x_8}+\frac{x_1 x_3 x_5}{x_7 x_9}+\frac{x_1x_3x_4x_6}{x_7x_8x_9}+\frac{x_2x_5x_8}{x_7x_9} +\frac{x_2x_4x_6}{x_7x_9}+\frac{x_1x_5x_8}{x_9}\tilde\theta_1\tilde\theta_2 +\frac{x_1x_4x_6}{x_9}\tilde\theta_1\tilde\theta_2\\ \hphantom{\lambda_{14}=}{} +x_1x_4\tilde\theta_1\tilde\theta_3-x_1x_4\tilde\theta_1\tilde\theta_4 +x_1x_4\tilde\theta_2\tilde\theta_3 -x_1x_4\tilde\theta_2\tilde\theta_4 -\frac{x_1x_3x_4}{x_7}\tilde\theta_4\tilde\theta_3 -\frac{x_2x_4x_8}{x_7}\tilde\theta_4\tilde\theta_3\\ \hphantom{\lambda_{14}=}{} -x_1x_4x_8\tilde\theta_1\tilde\theta_2\tilde\theta_4\tilde\theta_3. \end{gather*} \end{Example} \subsection[Super T-paths in a single fan triangulation]{Super $\boldsymbol{T}$-paths in a single fan triangulation} Let $P$ be an $n$-gon and $T$ a fan triangulation. Let vertex $1$ be the fan center. \begin{Lemma}[Schiffler] \label{lemma:Schiffler} For $2\leq i\leq n-1$, define \[\alpha_{i-1}:=(2,1,i,i+1,1,n\,|\,\dots).\] Then $\mathcal T^0_{2,n}=\{\alpha_i\colon 1\leq i\leq n-2\}$. See Example~{\rm \ref{exmp:T-Path-fan}}. Note that in the cases $i=2$ and $i=n-1$, $\alpha_{i-1}$ collapses to a shorter $T$-path after removing backtracking. In the special case of $n=3$, there is a unique $T$-path in $\mathcal T^0_{2,n}$, namely $\alpha_1 = \alpha_{n-2}$, which consists of the single edge~$(2,3)$. \end{Lemma} \begin{Example}[ordinary $T$-paths of a fan]\label{exmp:T-Path-fan}\rm The following are the ordinary $T$-paths of a fan triangulation of an octagon. Notice that each of these $T$-paths surrounds (and is in bijection with) one of the triangles of $T$, as illustrated in yellow below: \begin{center} \input{example_T_path_in_fan} \end{center} \end{Example} \begin{Lemma}\label{lm:super-path-single-fan} Every super $T$-path in $\mathcal T^1_{2,n}$ is of the following form \[(2,1,\theta_i,\theta_j,1,n\,|\,x_1,\sigma_i,\tau_{ij},\sigma_j,x_n)\] for $1\leq i<j\leq n-2$. See Example~{\rm \ref{exmp:super-T-path-fan}}. \end{Lemma} We illustrate this lemma with an example before proving it.
\begin{Example}\label{exmp:super-T-path-fan}\rm \allowdisplaybreaks The following are the ${4 \choose 2}=6$ different non-ordinary super $T$-paths (elements of $\mathcal{T}^1_{2,6}$) of a single-fan hexagon: \begin{center} \input{example_super_T_path_in_fan} \end{center} In this example, $\lambda_{26}$ can be expressed as \begin{gather*} \lambda_{26} = \frac{\lambda_{23}\lambda_{16}}{\lambda_{31}} + \frac{\lambda_{21}\lambda_{34}\lambda_{16}}{\lambda_{13}\lambda_{41}} + \frac{\lambda_{21}\lambda_{45}\lambda_{16}}{\lambda_{14}\lambda_{51}} + \frac{\lambda_{21}\lambda_{56}}{\lambda_{15}} \\ \phantom{\lambda_{26}=} + \lambda_{21} \sqrt{\frac{\lambda_{23}}{\lambda_{12}\lambda_{13}}} \, \boxed{123} \, \sqrt{\frac{\lambda_{34}}{\lambda_{13}\lambda_{14}}} \, \boxed{134} \, \lambda_{16} + \lambda_{21} \sqrt{\frac{\lambda_{34}}{\lambda_{13}\lambda_{14}}} \, \boxed{134} \, \sqrt{\frac{\lambda_{45}}{\lambda_{14}\lambda_{15}}} \, \boxed{145} \, \lambda_{16} \\ \phantom{\lambda_{26}=} + \lambda_{21} \sqrt{\frac{\lambda_{45}}{\lambda_{14}\lambda_{15}}} \, \boxed{145} \, \sqrt{\frac{\lambda_{56}}{\lambda_{15}\lambda_{16}}} \, \boxed{156} \, \lambda_{16} + \lambda_{21} \sqrt{\frac{\lambda_{23}}{\lambda_{12}\lambda_{13}}} \, \boxed{123} \, \sqrt{\frac{\lambda_{45}}{\lambda_{14}\lambda_{15}}} \, \boxed{145} \, \lambda_{16} \\ \phantom{\lambda_{26}=} + \lambda_{21} \sqrt{\frac{\lambda_{34}}{\lambda_{13}\lambda_{14}}} \; \boxed{134} \; \sqrt{\frac{\lambda_{56}}{\lambda_{15}\lambda_{16}}} \, \boxed{156} \, \lambda_{16} + \lambda_{21} \sqrt{\frac{\lambda_{23}}{\lambda_{12}\lambda_{13}}} \, \boxed{123} \, \sqrt{\frac{\lambda_{56}}{\lambda_{15}\lambda_{16}}} \, \boxed{156} \, \lambda_{16}. \end{gather*} The first four terms are the weights of all ordinary $T$-paths (as described in Example~\ref{exmp:T-Path-fan}), and the remaining six terms correspond to the $T$-paths pictured above (in the same order). It can also be written as \begin{gather*} \lambda_{26} = \frac{\lambda_{23}\lambda_{16}}{\lambda_{31}} + \frac{\lambda_{21}\lambda_{34}\lambda_{16}}{\lambda_{13}\lambda_{41}} + \frac{\lambda_{21}\lambda_{45}\lambda_{16}}{\lambda_{14}\lambda_{51}} + \frac{\lambda_{21}\lambda_{56}}{\lambda_{15}} + \lambda_{16}\lambda_{21} \, \widetilde{\boxed{123}} \; \widetilde{\boxed{134}} + \lambda_{16}\lambda_{21} \, \widetilde{\boxed{134}} \; \widetilde{\boxed{145} }\\ \hphantom{\lambda_{26} =}{} + \lambda_{21}\lambda_{16}\, \widetilde{\boxed{145}} \; \widetilde{\boxed{156} } + \lambda_{16}\lambda_{21} \, \widetilde{\boxed{123}} \; \widetilde{ \boxed{145}} + \lambda_{16}\lambda_{21} \, \widetilde{\boxed{134}}\;\widetilde{\boxed{156}} + \lambda_{16}\lambda_{21}\, \widetilde{ \boxed{123} }\, \widetilde{ \boxed{156}}\,, \end{gather*} where the $\widetilde{\boxed{ijk}}$ are the weights of the corresponding $\sigma$-edges (following the notation of Corollary~\ref{cor:laurent}). \end{Example} \begin{proof}[Proof of Lemma~\ref{lm:super-path-single-fan}] A super $T$-path in $\mathcal T^1_{2,n}$ must use one of the $\sigma$-edges; therefore the first step in the super $T$-path needs to be $(2,1)$. Then, after traveling through $\sigma_i$, what follows must be a $\tau$-edge $\tau_{ij}$, which leads the $T$-path to another internal vertex $\theta_j$.
The next step is an even step, and hence has to be a $\sigma$-edge, which will take us back to vertex $1$: \[(2,1,\theta_i,\theta_j,1,\dots\,|\,x_1,\sigma_i,\tau_{ij},\sigma_j,\dots).\] Now we are at vertex $1$ and have completed an even number of steps; therefore the rest of this super $T$-path must be a super $T$-path from~$1$ to~$n$~-- clearly there is only one possibility, namely the single edge $(1,n)$. Hence a super $T$-path in $\mathcal T^1_{2,n}$ must have the form \begin{gather*} (2,1,\theta_i,\theta_j,1,\dots\,|\,x_1,\sigma_i,\tau_{ij},\sigma_j,\dots)+(\dots,1,n\,|\,\dots,x_n)\\ \qquad{} =(2,1,\theta_i,\theta_j,1,n\,|\,x_1,\sigma_i,\tau_{ij},\sigma_j,x_n).\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof} \section{Proof of Theorem~\ref{thm:main}} \label{sec:proof} In this section we prove our main theorem. It turns out that the default orientation guarantees a positive sign on all terms of the expansion of $\lambda$-lengths (Theorem~\ref{thm:main}), and is also preserved by induction. Our proof has three parts: we first prove the case of single fan triangulations, and then prove the case of zig-zag triangulations. Finally, we prove Theorem~\ref{thm:main} in full generality by combining the two cases mentioned above. Before proving our theorem, we state the following results that will be used in our proofs. \begin{Proposition} \label{lemma:theta32} Let $A$, $\beta$, and $\Sigma$ be elements of the super algebra $\mathcal A$, which for convenience we assume is written as $\mathcal A = {\mathbb R}\bigl[x_1^{\pm\frac{1}{2}},\dots,x_{2n+3}^{\pm\frac{1}{2}} \,|\, \theta_1,\dots,\theta_{n+1}\bigr]$ as in Definition~{\rm \ref{def:weight}}. Further, we assume that $A$ is an even element with a non-zero body,\footnote{The \emph{body} of an element $A$ of the super algebra $\mathcal A$ is the constant term when $A$ is expanded out in terms of the $\theta_i$'s.} and that both $\beta$ and $\Sigma$ are odd elements of~$\mathcal A$. Then we have \[\sqrt{A+\beta\Sigma}= \sqrt{A}+\frac{\beta}{2\sqrt{A}}\Sigma,\] where the square root of $A$ is taken to be the positive square root. \end{Proposition} \begin{Remark}\rm Since $A$ is an even element, its positive square root is well-defined as the choice such that the body of $\sqrt{A}$ is the positive square root of the body of $A$. \end{Remark} \begin{proof} Squaring the right-hand side: \[\left(\sqrt{A}+\frac{\beta}{2\sqrt{A}}\Sigma\right)^2=A+2\sqrt{A}\cdot \frac{\beta}{2\sqrt{A}}\cdot \Sigma +\left(\frac{\beta}{2\sqrt{A}}\Sigma\right)^2=A+\beta\Sigma,\] where the last term vanishes because $\beta^2=0$. This clearly equals the square of the left-hand side. \end{proof} Using Proposition~\ref{lemma:theta32}, we can rewrite the super Ptolemy relations in a more symmetrical form. \begin{Proposition} The super Ptolemy relations described in Figure~{\rm \ref{fig:super_ptolemy}} can be written as follows \begin{gather} \theta'\sqrt{ef} = \theta\sqrt{bd}+\sigma \sqrt{ac},\label{eq:5}\\ \sigma'\sqrt{ef} =\sigma\sqrt{bd}-\theta\sqrt{ac}.\label{eq:6} \end{gather} \end{Proposition} \begin{proof}We have \begin{gather*} \theta'\sqrt{ef} =\theta'\sqrt{ac+bd+\sqrt{abcd}\sigma\theta} =\theta'\sqrt{ac+bd+\sqrt{abcd}\sigma'\theta'} =\theta'\sqrt{ac+bd}, \end{gather*} where the first equality uses equation~\eqref{eq:mutation}, the second uses equation~\eqref{eq:mu_cross}, and the third follows from Proposition~\ref{lemma:theta32} together with $\theta'\cdot\sigma'\theta'=0$. We also have \begin{gather*} \theta' = {{ \theta\sqrt{bd}+\sigma \sqrt{ac}}\over{\sqrt{ac+bd}}},\qquad \theta'\sqrt{ac+bd} = \theta\sqrt{bd}+\sigma \sqrt{ac}. \end{gather*} Putting these two equations together gives equation~\eqref{eq:5}: \begin{align*} \theta'\sqrt{ef}= \theta\sqrt{bd}+\sigma \sqrt{ac}. \end{align*} Equation~\eqref{eq:6} can be derived in a similar way. \end{proof}
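As a first illustration (we record it here since it is used, for instance, in the proof just given), applying Proposition~\ref{lemma:theta32} with $A=ac+bd$, $\beta=\sqrt{abcd}\,\sigma$ and $\Sigma=\theta$ to the right-hand side of equation~\eqref{eq:mutation} yields
\[
\sqrt{ef}=\sqrt{ac+bd}+\frac{\sqrt{abcd}\,\sigma\theta}{2\sqrt{ac+bd}}.
\]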
\subsection{Proof of Theorem~\ref{thm:main} for a single fan} For the sake of readability, in what follows we use $\boxed{ijk}$ to denote the $\mu$-invariant associated to the triple of ideal points $(i,j,k)$, subject to Remark~\ref{rem:mu-invariants}, while the $\lambda$-length of a pair $(i,j)$ will be denoted $\lambda_{ij}$. We will also sometimes use $\boxed{ijk}$ to denote the internal vertex of the auxiliary graph associated to $(i,j,k)$, when talking about super $T$-paths. First, we will prove the main theorem in the case of a single fan triangulation. \begin{figure}[h] \centering \input{proof_aux_graph} \caption{Proof of Theorem~\ref{thm:mu_single_fan}.} \label{fig:mu_single_fan} \end{figure} \begin{Theorem} \label{thm:mu_single_fan} Consider a single fan triangulation of an $n$-gon as depicted in Figure~{\rm \ref{fig:mu_single_fan}}. We continue our convention of using the default orientation, which for a fan triangulation means that arrows on all internal diagonals point away from the fan center. After performing the super Ptolemy relations for the flips of arcs $(1,3)$, $(1,4)$, $\dots$, $(1,k-1)$, we have \begin{enumerate}\itemsep=0pt \item[$(a)$] $\displaystyle \sqrt{\frac{\lambda_{2k}}{\lambda_{12}\lambda_{1k}}} \; \boxed{12k} = \sum_{i=1}^{k-2}\wt(\sigma_i)$, \item[$(b)$] $\displaystyle \lambda_{2k} = \sum_{t \in \mathcal{T}_{2,k}} {\rm wt}(t)$, \end{enumerate} where ${\rm wt}(\sigma_i)$ and ${\rm wt}(t)$ are defined in Definition~{\rm \ref{def:weight}}, and, as in Definition~{\rm \ref{def:super-T}}, $\mathcal{T}_{2,k}$ denotes the set of super $T$-paths from vertex $2$ to vertex~$k$. \end{Theorem} \begin{proof} We will induct on $k$. We begin with the case of $k=3$, where statements (a) and (b) follow immediately since ${\rm wt}(\sigma_1) = \sqrt{\frac{\lambda_{23}}{\lambda_{12}\lambda_{13}}} \, \boxed{123}$\,, and the unique super $T$-path from vertex $2$ to vertex $3$ is simply the ordinary $T$-path $t = (2,3)$. In general, for $k\geq 4$, after flipping arcs $(1,3),\dots,(1,k-2)$, the next flip of arc $(1,k-1)$ will be inside the following quadrilateral: \begin{center} \begin{tikzpicture} \draw (1,0) -- (0,1) -- (-1,0) -- (0,-1) -- cycle; \draw (-1,0) --node {\tikz \draw[-triangle 90] (0,0) -- +(.1,0);} (1,0); \draw (-1,0) node[left] {$1$}; \draw (1,0) node[right] {$k-1$}; \draw (0,-1) node[below] {$2$}; \draw (0,1) node[above] {$k$}; \end{tikzpicture} \end{center} Note that the $\mu$-invariant $\boxed{1,k-1,k}$ is in the initial triangulation, but $\boxed{1,2,k-1}$ is not\footnote{Except for the special case of $k=4$.}. However, we assume by induction that $\boxed{1,2,k-1}$ is given by part $(a)$. First we prove part (b). The super Ptolemy relation (equation~\eqref{eq:mutation}) says that $\lambda_{2k}$ is given by \[ \lambda_{1,k-1}\lambda_{2k} = \lambda_{12}\lambda_{k-1,k} + \lambda_{1k}\lambda_{2,k-1} + \sqrt{\lambda_{12}\lambda_{2,k-1}\lambda_{k-1,k}\lambda_{1k}} \, \boxed{1,2,k-1} \, \boxed{1,k-1,k}\,. \] After dividing by $\lambda_{1,k-1}$, we get a formula for $\lambda_{2,k}$. On the right-hand side, the first term gives $\lambda_{21}\lambda_{1,k-1}^{-1}\lambda_{k-1,k}$, which is clearly the weight of an ordinary $T$-path. The second term, by induction, is \[ \sum_{t \in \mathcal{T}_{2,k-1}} {\rm wt}(t) \lambda_{k-1,1}^{-1} \lambda_{1k}. \] Appending the arcs $(k-1,1)$ and $(1,k)$ to the ordinary $T$-paths in $\mathcal{T}^0_{2,k-1}$ gives the rest of the ordinary $T$-paths in $\mathcal{T}^0_{2,k}$.
(See Lemma~\ref{lemma:Schiffler} and note that appending arc $(k-1,1)$ as an even step may in fact yield a backtrack that cancels out the final step of an ordinary $T$-path in~$\mathcal{T}^0_{2k}$.) The terms coming from $\mathcal{T}^1_{2,k-1}$, multiplied by~$\frac{\lambda_{1k}}{\lambda_{1,k-1}}$, similarly give the super $T$-paths in~$\mathcal{T}^1_{2k}$ which do not involve the triangle~$(1,k-1,k)$ (see Lemma~\ref{lm:super-path-single-fan}). The last term, using part (a) and induction on $\boxed{1,2,k-1}$\,, is
\begin{align*} \frac{\sqrt{\lambda_{12}\lambda_{2,k-1}\lambda_{k-1,k}\lambda_{1k}}}{\lambda_{1,k-1}} \, \boxed{1,2,k-1} \, \boxed{1,k-1,k} &= \lambda_{21} \sum_{i=1}^{k-3} {\rm wt}(\sigma_i) {\rm wt}(\sigma_{k-2}) \lambda_{1k}. \end{align*}
These are the weights of the super $T$-paths which use the last triangle, namely $(1,k-1,k)$ (again, see Lemma~\ref{lm:super-path-single-fan}). Note that the product ${\rm wt}(\sigma_i) {\rm wt}(\sigma_{k-2})$ is in the positive ordering, since the two $\mu$-invariants appear in the counter-clockwise order around the fan center. Now, we examine part (a). Looking at the same quadrilateral as above, the super Ptolemy relation for $\mu$-invariants (equation~\eqref{eq:5}) says that
\[ \boxed{12k} \, \sqrt{\lambda_{1,k-1}\lambda_{2k}} = \boxed{1,k-1,k} \, \sqrt{\lambda_{12}\lambda_{k-1,k}} + \boxed{1,2,k-1} \, \sqrt{\lambda_{1k}\lambda_{2,k-1}}. \]
Dividing by $\sqrt{\l12\lambda_{1,k-1}\l1k}$ and commuting the $\lambda$-lengths past the $\mu$-invariants, we get
\[\sqrt{\l2k\over \l12\l1k }\, \boxed{12k}=\sqrt{\l k{,k-1} \over \l1{,k-1}\l1k }\,\boxed{1,k-1,k} + \sqrt{\l2{,k-1} \over\l12\l1{,k-1}}\,\boxed{1,2,k-1}\,.\]
The first term on the right-hand side is simply ${\rm wt}(\sigma_{k-2})$. By induction, the second term is $\sum_{i=1}^{k-3} {\rm wt}(\sigma_i)$. Therefore we have
\[\sqrt{\l2k\over \l12\l1k }\, \boxed{12k}=\wt(\sigma_{k-2}) + \left(\sum_{i=1}^{k-3}\wt(\sigma_i)\right)=\sum_{i=1}^{k-2}\wt(\sigma_{i})\]
as desired. \end{proof}
\subsection{Proof of Theorem~\ref{thm:main} for a zig-zag triangulation}
Next, we prove our main theorem for the case of a zig-zag triangulation.
\begin{Theorem} \label{thm:proof_zigzag} Consider a zig-zag triangulation $T$ of an $n$-gon as depicted in Figure~{\rm \ref{fig:proof_zigzag}}. We consider all the vertices except for $1$ and $n$ to be fan centers, so that $c_{i}$ is labelled $i+1$ for $1\leq i\leq n-2$. After flipping the arcs $(n-1,n-2),(n-2,n-3),\dots,(k+2,k+1)$, we have
\begin{enumerate}\itemsep=0pt
\item[$(a)$] $\displaystyle \sqrt{\lambda_{kn}\lambda_{k+1,n}\over\lambda_{k,k+1}} \, \boxed{k,k+1,n} = \sum_{i=k}^{n-2} \sum_{t\in\mathcal T_{n,i+1}} \wt(t) \wt(\sigma_i)$,
\item[$(b)$] $\displaystyle \lambda_{k,n} = \sum_{t \in \mathcal{T}_{kn}} {\rm wt}(t)$.
\end{enumerate}
\end{Theorem}
\begin{figure}[h]\centering \input{figure_proof_zigzag} \caption{Proof of Theorem~\ref{thm:proof_zigzag}. {Left:} zig-zag triangulation with the default orientation. {Right:} the corresponding auxiliary graph.}\label{fig:proof_zigzag} \end{figure}
\begin{proof} We assume that vertices $n$, $n-1$, and $n-2$ are oriented as in Figure~\ref{fig:proof_zigzag}. The case where they are oriented oppositely is similar. We will induct (backwards) on $k$. The base case of the induction is the case of a single triangle, when $k=n-2$. Since the edge $(n-2,n)$ is already in the triangulation, $\lambda_{n-2,n}$ is already one of the generators.
Clearly the only $T$-path in this case is the single edge $(n-2,n)$ (when zero flips have been performed). This establishes part (b) for the base case. For part (a), the left-hand side is \[ \sqrt{\frac{\lambda_{n-2,n}\lambda_{n-1,n}}{\lambda_{n-1,n-2}}} \, \boxed{n-2,n-1,n}\,. \] For the right-hand side, there is only a single term in this sum. This is because $i$ only takes the value $n-2$, and the only $T$-path from $n-1$ to $n$ is the single edge $(n-1,n)$. Thus the right-hand side is \begin{align*} \lambda_{n-1,n} \wt(\sigma_{n-2}) &= \lambda_{n-1,n} \sqrt{\frac{\lambda_{n-2,n}}{\lambda_{n-2,n-1}\lambda_{n-1,n}}} \, \boxed{n-2,n-1,n} \\ &= \sqrt{\frac{\lambda_{n-2,n}\lambda_{n-1,n}}{\lambda_{n-2,n-1}}} \, \boxed{n-2,n-1,n}\,. \end{align*} This establishes part (a) for the base case. We now assume that $1 \leq k \leq n-3$. After flipping the arcs $(n-1,n-2)$, $(n-2,n-3), \dots, (k+3,k+2)$,\footnote{For the case of $k=n-3$, no arcs have yet been flipped.} we will have one of the two following quadrilaterals, depending on the parity of $n-k$: \begin {center} \begin {tikzpicture} \draw (1,0) -- (0,1) -- (-1,0) -- (0,-1) -- cycle; \draw (-1,0) --node {\tikz \draw[-triangle 90] (0,0) -- +(.1,0);} (1,0); \draw (-1,0) node[left] {$k+1$}; \draw (1,0) node[right] {$k+2$}; \draw (0,-1) node[below] {$n$}; \draw (0,1) node[above] {$k$}; \draw (3,0) node {or}; \begin {scope}[shift={(6,0)}] \draw (1,0) -- (0,1) -- (-1,0) -- (0,-1) -- cycle; \draw (1,0) --node {\reflectbox{\tikz \draw[-triangle 90] (0,0) -- +(.1,0);}} (-1,0); \draw (-1,0) node[left] {$k+2$}; \draw (1,0) node[right] {$k+1$}; \draw (0,-1) node[below] {$n$}; \draw (0,1) node[above] {$k$}; \end {scope} \end {tikzpicture} \end {center} Now we flip the edge $(k+2, k+1)$ while applying the Ptolemy relation equation~\eqref{eq:5}. In both cases pictured above, the triangle $(k,k+1,n)$ will play the role of $\theta'$ (it will be on the left, looking in the direction of the arrow after the flip). And because of the opposite orientations, application of equation~\eqref{eq:5} in both pictures gives \begin{gather*} \sqrt{\lambda_{kn} \lambda_{k+1,k+2}} \, \boxed{k,k+1,n} \\ \qquad{} = \sqrt{\lambda_{k+1,n} \lambda_{k,k+2}} \, \boxed{k,k+1,k+2} + \sqrt{\lambda_{k,k+1} \lambda_{k+2,n}} \, \boxed{k+1,k+2,n}\,. \end{gather*} Multiplying both sides by $\sqrt{\frac{\lambda_{k+1,n}}{\lambda_{k,k+1}\lambda_{k+1,k+2}}}$, we get \begin{gather*} \begin{split} & \sqrt{\frac{\lambda_{kn} \lambda_{k+1,n}}{\lambda_{k,k+1}}} \, \boxed{k,k+1,n}\\ & \qquad{} = \lambda_{k+1,n} \sqrt{\frac{\lambda_{k,k+2}}{\lambda_{k,k+1}\lambda_{k+1,k+2}}} \, \boxed{k,k+1,k+2} + \sqrt{\frac{\lambda_{k+1,n} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}} \, \boxed{k+1,k+2,n}\,. \end{split} \end{gather*} First we examine the first term on the right-hand side. By induction, $\lambda_{k+1,n}$ is the weighted sum of super $T$-paths from $k+1$ to $n$. Notice that $\sqrt{\frac{\lambda_{k,k+2}}{\lambda_{k,k+1} \lambda_{k+1,k+2}}} \, \boxed{k,k+1,k+2}$ is the weight of the $\sigma$-step $\sigma_k$, going from vertex $k+1$ into the triangle labelled $\theta_k$. Therefore this first term is equal to \begin{equation} \label{eq:zigzag_1st_term} \sum_{t \in \mathcal{T}_{k+1,n}} \wt(t) \wt(\sigma_k)\,. \end{equation} Next, we focus on the second term of the right-hand side: $\sqrt{\frac{\lambda_{k+1,n} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}} \, \boxed{k+1,k+2,n}\,$. 
By induction, this is equal to
\begin{equation*} \sqrt{\frac{\lambda_{k+1,n} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}} \, \boxed{k+1,k+2,n} = \sum_{i=k+1}^{n-2} \sum_{t \in \mathcal T_{n,i+1}} \wt(t) \wt(\sigma_i). \end{equation*}
Adding the terms from equation~\eqref{eq:zigzag_1st_term} gives the result for part~(a). We should verify that the $\mu$-invariants in each product $\wt(t)\wt(\sigma_k)$ are in the correct positive ordering. We use the same viewpoint as in Remark~\ref{rem:pos_order}. By induction, the $\mu$-invariants in each term of $\wt(t)$ occur in the positive ordering of the smaller polygon which contains triangles $\theta_{k+1},\dots,\theta_{n-2}$. By construction, the positive ordering of the slightly larger polygon (which additionally contains $\theta_k$) is obtained from the smaller by placing $\theta_k$ either at the beginning or end of the previous order. But since $\wt(t)$ is an even element, it is central, and so $\wt(t)\wt(\sigma_k) = \wt(\sigma_k)\wt(t)$, and we may choose whichever one gives the correct positive ordering. Now we will prove part (b). Again after flipping the arcs $(n-1,n-2), (n-2,n-3), \dots$, $(k+3,k+2)$, we are in one of the two pictures from before. For the picture on the left, an application of equation~\eqref{eq:mutation}, followed by division on both sides by $\lambda_{k+1,k+2}$, gives
\begin{gather*} \l kn = \underbrace{ \frac{\lambda_{k,k+2} \lambda_{k+1,n}}{\lambda_{k+1,k+2}}}_{\text{part (1)}} + \underbrace{\frac{\lambda_{k,k+1} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}}_{\text{part (2)}} \\ \hphantom{\l kn =}{}+ \underbrace{\sqrt{\frac{\lambda_{k+1,n} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}}\; \boxed{k+1,k+2,n}}_{\text{part (3)}} \cdot \underbrace{\sqrt{\frac{\lambda_{k,k+1} \lambda_{k,k+2}}{\lambda_{k+1,k+2}}}\; \boxed{k,k+1,k+2}}_{\text{part (4)}}. \end{gather*}
Note that $\boxed{k+1,k+2,n}$ lies on the right of the oriented diagonal, and hence plays the role of~$\sigma$. Analogously, $\boxed{k,k+1,k+2}$ lies on the left and plays the role of $\theta$. Applying equation~\eqref{eq:mutation} to the picture on the right gives the same thing, except that parts~(3) and~(4) appear in the opposite order. First we will explain that the sum of parts~(1) and~(2) is the weighted sum of all super $T$-paths from $k$ to $n$ that do not contain a $\tau$-edge which crosses $(k+1,k+2)$. Suppose a path $t \in \mathcal{T}_{kn}$ does not contain a $\tau$-edge crossing $(k+1,k+2)$ (i.e., does not have a super step starting with $\theta_k$). Then the first step of $t$ is either edge $(k,k+1)$ or $(k,k+2)$, and the remainder of $t$ lies in the smaller polygon below the diagonal $(k+1,k+2)$. There are two cases. The second edge might be $(k+1,k+2)$, in which case the remainder of $t$ (after removing the first two edges) is a super $T$-path in either $\mathcal{T}_{k+1,n}$ or $\mathcal{T}_{k+2,n}$. This is depicted in the left of Figure~\ref{fig:zigzag_cases}. Clearly all of these paths occur as terms in parts~(1) and~(2). However, there are more terms in parts~(1) and~(2), namely those in which the denominator $\lambda_{k+1,k+2}$ cancels a contribution in the numerator. This is the second case, in which the second step in $t$ is \emph{not} the edge $(k+1,k+2)$. This is depicted in the right of Figure~\ref{fig:zigzag_cases}. In this case, replacing the first edge of~$t$ with $(k+1,k+2)$ gives a super $T$-path in either $\mathcal{T}_{k+1,n}$ or $\mathcal{T}_{k+2,n}$, and these are the remaining terms in parts~(1) and~(2) in which the denominator cancels.
\begin{figure}[h]\centering \input{figure_zigzag_cases} \caption{{Left:} Removing the first two edges gives a path in $\mathcal{T}_{k+2,n}$. {Right:} Replacing the first edge $(k,k+2)$ with $(k+1,k+2)$ gives a path in $\mathcal{T}_{k+1,n}$.} \label{fig:zigzag_cases} \end{figure}
Now we examine parts (3) and~(4). By induction, part (3) is given by the formula in part~(a):
\[ \sqrt{\frac{\lambda_{k+1,n} \lambda_{k+2,n}}{\lambda_{k+1,k+2}}}\, \boxed{k+1,k+2,n} = \sum_{i=k+1}^{n-2} \sum_{t \in \mathcal{T}_{i+1,n}} \wt(t) \wt(\sigma_i). \]
Part (4) is equal to $\lambda_{k,k+1} \wt(\sigma_k)$, which is the weight of the first two steps of any super $T$-path $t \in \mathcal{T}_{kn}$ which \emph{does} contain a $\tau$-step crossing $(k+1,k+2)$. The terms of part~(3) are all possible ways to complete such a super $T$-path (after joining them by the appropriate $\tau$-step, which has weight~1). \looseness=-1 Again, as in part~(a), we must check that these expressions are written in the correct positive ordering. As we noted above in the discussion of part~(a), the positive ordering of the smaller polygon (below the diagonal $(k+1,k+2)$) agrees with the positive ordering in the larger polygon. Therefore any factors appearing in the formulas obtained above can be assumed to be in the correct positive ordering. In parts~(1) and~(2), two of the three factors are single edges in the triangulation, and the terms of the third factor (either $\lambda_{k+1,n}$ or $\lambda_{k+2,n}$) appear in positive order by induction. As was discussed in part~(a), the terms of part~(3) can be written as either $\wt(t)\wt(\sigma_i)$ or $\wt(\sigma_i)\wt(t)$, whichever is correct. Part~(4) only contains a single $\mu$-invariant. So all that needs to be checked is that parts~(3) and~(4) occur in the correct order, depending on whether $\theta_k$ comes first or last in the positive ordering. But this is precisely the difference between the left and right pictures above (depending on whether vertex~$k$ or~$n$ is on the left of the oriented edge $k+1 \to k+2$), and parts~(3) and~(4) are positioned differently in the two cases. \end{proof}
\subsection{Proof of Theorem~\ref{thm:main} for generic triangulations}
By a generic triangulation, we mean a triangulation of a polygon in which every triangle has at least one boundary edge. This is the same hypothesis that appears in Proposition~\ref{prop:equiv_rel} because of Remark~\ref{rem:subtriang}. Given a generic triangulation $T$ with $N$ fans, we first apply the flip sequence in Theorem~\ref{thm:mu_single_fan} on each of the fans, which results in a zig-zag (sub)triangulation $T'$ whose vertices are the fan centers of~$T$ (including $c_0$ and $c_{N+1}$). See Figure~\ref{fig:main_flip_sequence}.
\begin{figure}[h] \centering \input{figure_main_flip_sequence.tex} \caption{Flipping edges in each fan, to turn a generic triangulation $T$ into a zig-zag triangulation $T'$.}\label{fig:main_flip_sequence} \end{figure}
We will then derive our ultimate formula for $\lambda_{ab}$ in $T$ via a combination of Theorem~\ref{thm:mu_single_fan} and Theorem~\ref{thm:proof_zigzag}. Using Theorem~\ref{thm:proof_zigzag}, we can express $\lambda_{ab}$ in terms of super $T$-paths on $T'$. This expression uses certain $\lambda$-lengths and $\mu$-invariants that are not in $T$, so we will substitute their expansion in terms of $T$ using Theorem~\ref{thm:mu_single_fan}.
\begin{figure}[h] \input{figure_main_flip_sequence_labelled.tex} \caption{The auxiliary graphs for $T$ and $T'$, with $\tau$-edges omitted.}\label{fig:main_flip_sequence_labelled} \end{figure}
We consider the fans of $T$ as subtriangulations, and denote them as $F_1,\dots,F_N$. We denote the $\mu$-invariants of $T'$ by $\theta_1',\dots,\theta_N'$ and their corresponding $\sigma$-edges by $\sigma_1',\dots,\sigma_N'$. We denote the $\mu$-invariants of the $i$-th fan of $T$ by $\theta_i^1,\theta_i^2,\dots$ (ordered counterclockwise around the fan center), and the corresponding $\sigma$-edges by $\sigma_i^1,\sigma_i^2,\dots$. See Figure~\ref{fig:main_flip_sequence_labelled}. When we substitute super $T$-path expressions for the fans in $T$ into $T'$, we only need to consider two cases: (1) substitute a boundary edge $(c_{i-1},c_{i+1})$ for $1\leqslant i\leqslant N$, and (2) substitute a super step $(\dots,\sigma_i',\tau_{ij},\sigma_j',\dots\,|\,\dots)$, because every super $T$-path is a concatenation of super steps and complete ordinary $T$-paths.
\begin{enumerate}
\itemsep=0pt
\item[(1)] Suppose we were to replace $\lambda_{c_{i-1},c_{i+1}}$ with its super $T$-path expansion in the $i$-th fan. In the super $T$-path of $T'$, the edge $(c_{i-1},c_{i+1})$ must be an odd step because it does not cross the arc $(a,b)$. Therefore when we replace it with a super $T$-path in~$F_i$, the indexing agrees up to parity. Moreover, if an edge crosses the arc $(c_{i-1},c_{i+1})$ in $F_i$, then it must also cross the arc $(a,b)$ in the bigger triangulation~$T$. Therefore axiom (T5) is satisfied, and it is straightforward to verify that all other axioms will follow.
\item[(2)] We observe that the left-hand side of Theorem~\ref{thm:mu_single_fan}(a) equals $\wt(\sigma_i^\prime)$ with respect to $T^\prime$. Using the rest of Theorem~\ref{thm:mu_single_fan}(a), we get the equality \[\wt(\sigma_i^\prime)=\sum_{j}\wt\big(\sigma_i^j\big),\] i.e., we have that the weight of $\sigma_i^\prime$ in $T^\prime$ is equal to the weighted sum of all $\sigma$-edges in the fan~$F_i$. This means that a super step $(\dots,\sigma_i^\prime,\tau_{ij},\sigma_j^\prime,\dots)$ will be expanded into the sum of all super steps from a face of $F_i$ to a face of $F_j$. This clearly preserves all axioms of super $T$-paths.
\end{enumerate}
For the converse, we need to prove that every super $T$-path in $T$ can be obtained by such a~substitution. First, this is clearly true for ordinary $T$-paths, or an ordinary sub-path of a super $T$-path. Therefore we only need to consider the super steps. Suppose we have a super step using two $\sigma$-steps $\sigma^*$ and $\sigma^\bullet$. If $\sigma^*$ and $\sigma^\bullet$ are in the same fan, say $F_i$, then the super step is part of the expansion of the $\lambda$-length of $(c_{i-1},c_{i+1})$. If $\sigma^*\in F_i$ and $\sigma^\bullet\in F_j$ are in different fans, then the super step came from the super step $(\dots,\sigma_i^\prime,\tau_{ij},\sigma_j^\prime,\dots)$ in $T'$.
\section[Super-friezes from super lambda-lengths and mu-invariants]{Super-friezes from super $\boldsymbol{\lambda}$-lengths and $\boldsymbol{\mu}$-invariants}\label{sec:super-friezes}
In this section, we use our formulas for super $\lambda$-lengths and $\mu$-invariants to construct arrays that are variants of the super-friezes appearing in work of Morier-Genoud, Ovsienko, and Tabachnikov~\cite{M-GOT15}.
Morier-Genoud et al.\ defined a \emph{super-frieze} to be an array whose rows alternate so that one row is all even elements, the next is all odd elements, etc. Consider a part of the array, called an ``\emph{elementary diamond}'', of the form
\[\begin{array}{ccccc} & & B & & \\ & \s{\Xi} & & \s{\Psi} & \\ A & & & & D \\ & \s{\Phi} & & \s{\Sigma} & \\ & & C & & \end{array} \]
Here, Roman letters are even elements and Greek letters are odd elements. The super-frieze rules are
\begin{gather}
AD-BC =1+ \s{\Sigma\Xi},\label{eq:frieze1}\\
AD-BC =1 + \s{\Psi}\s{\Phi},\label{eq:frieze1.5}\\
A\s{\Sigma }-C \s{\Xi} =\s{\Phi},\label{eq:frieze2}\\
B\s{\Sigma }-D \s{\Xi} =\s{\Psi},\label{eq:frieze3}\\
B\s{\Phi}-A\s{\Psi}=\s{\Xi},\label{eq:frieze4}\\
D\s{\Phi}-C\s{\Psi}=\s{\Sigma}.\label{eq:frieze5}
\end{gather}
To define a super-frieze, we only need equation~\eqref{eq:frieze1} or equation~\eqref{eq:frieze1.5} and any two of the four equations \eqref{eq:frieze2}--\eqref{eq:frieze5}. In other words, any two of equations \eqref{eq:frieze2}--\eqref{eq:frieze5} imply the other two, and utilizing these two equations, either of equation~\eqref{eq:frieze1} or equation~\eqref{eq:frieze1.5} implies the other. (For example, equations~\eqref{eq:frieze4} and~\eqref{eq:frieze5} give $\s{\Sigma\Xi}=(AD-BC)\s{\Psi}\s{\Phi}$; since $(\s{\Psi}\s{\Phi})^2=0$, either of equations~\eqref{eq:frieze1} or~\eqref{eq:frieze1.5} then forces $\s{\Sigma\Xi}=\s{\Psi}\s{\Phi}$, so the two are equivalent.) We will now observe how the $\lambda$-lengths and $\mu$-invariants satisfy a modified version of these relations. Put the $\lambda$-lengths in an array so that moving left-to-right along a~row rotates a~diagonal of the polygon by shifting indices of both endpoints up by 1, and diagonals of the array going south-east have a common first endpoint. In between these ordinary entries, we put a $\mu$-invariant multiplied by the square root of its two adjacent $\lambda$-lengths, so that $\tilde \mu_{ijk} = \sqrt{\l i j\l j k}\ \boxed{ijk}$ goes in between $\lambda_{ij}$ and $\lambda_{jk}$. With these conventions, an elementary diamond looks as follows:
\[ \begin{array}{ccccc} & & b & & \\ & \tilde\theta & & \tilde\sigma' & \\ e & & & & f \\ & \tilde\sigma & & \tilde\theta' & \\ & & d & & \end{array} = \begin{array}{ccccc} & & \lambda_{i+1,j} & & \\ & \tilde \mu_{i,i+1,j} & & \tilde \mu_{i+1,j,j+1} & \\ \lambda_{ij} & & & & \lambda_{i+1,j+1} \\ &\tilde \mu_{i,j,j+1} & & \tilde\mu_{i,i+1,j+1} & \\ & & \lambda_{i,j+1} & & \end{array} \]
\begin{Proposition}\label{prop:frieze-mutation} Every elementary diamond corresponds to a Ptolemy relation of a quadri\-la\-teral, with two boundary edges having $\lambda$-length $1$.
\end{Proposition}
\begin{proof}Consider the super flip in the following diagram, where $a=c=1$:
\begin{center}
\begin{tikzpicture}[scale=0.7, baseline, thick]
\draw (0,0) -- (3,0) -- (60:3) -- cycle;
\draw (0,0) -- (3,0) -- (-60:3) -- cycle;
\draw (0,0) -- node{\tikz \draw[-triangle 90] (0,0) -- +(.1,0);} (3,0);
\draw node[above] at (70:1.5){$a$};
\draw node[above] at (30:2.8){$b$};
\draw node[below] at (-30:2.8){$c$};
\draw node[below=-0.1] at (-70:1.5){$d$};
\draw node[above] at (1,-0.12){$e$};
\draw node[left] at (0,0) {};
\draw node[above] at (60:3) {};
\draw node[right] at (3,0) {};
\draw node[below] at (-60:3) {};
\draw node at (1.5,1){$\theta$};
\draw node at (1.5,-1){$\sigma$};
\end{tikzpicture}
\begin{tikzpicture}[baseline]
\draw[->, thick](0,0)--(1,0);
\node[above] at (0.5,0) {};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.7, baseline, thick,every node/.style={sloped,allow upside down}]
\draw (0,0)--(60:3)--(-60:3)--cycle;
\draw (3,0)--(60:3)--(-60:3)--cycle;
\draw node[above] at (70:1.5) {$a$};
\draw node[above] at (30:2.8) {$b$};
\draw node[below] at (-30:2.8) {$c$};
\draw node[below=-0.1] at (-70:1.5) {$d$};
\draw node[left] at (1.7,1) {$f$};
\draw (1.5,-2) --node {\tikz \draw[-triangle 90] (0,0) -- +(.1,0);} (1.5,2);
\draw node[left] at (0,0) {};
\draw node[above] at (60:3) {};
\draw node[right] at (3,0) {};
\draw node[below] at (-60:3) {};
\draw node at (0.8,0){$\theta'$};
\draw node at (2.2,0){$\sigma'$};
\end{tikzpicture}
\end{center}
In the super-diamond, we set $\tilde\theta=\theta\sqrt{be}$, $\tilde\sigma=\sigma\sqrt{ed}$, $\tilde\theta'=\theta'\sqrt{df}$, and $\tilde\sigma' = \sigma'\sqrt{bf}$. Now, the super Ptolemy relation equation~\eqref{eq:mutation} is \[ef=1+bd+\sqrt{bd}\sigma\theta.\] Using equation~\eqref{eq:6}, we substitute $\theta$ with $\sigma\sqrt{bd}-\sigma'\sqrt{ef}$: \[ef=1+bd+\sqrt{bd}\sigma\big(\sigma\sqrt{bd}-\sigma'\sqrt{ef}\big).\] Then $\sigma$ squares to zero, so we have \[ef=1+bd+\sqrt{bdef}\sigma'\sigma.\] This is exactly equation~\eqref{eq:frieze1.5} in terms of these super-frieze entries, i.e., \[ef=1+bd+\tilde\sigma'\tilde\sigma.\] The other two Ptolemy relations give rise to the desired super-frieze relations as well. The second Ptolemy relation equation~\eqref{eq:5} is \[\theta'\sqrt{ef}=\theta\sqrt{bd}+\sigma. \] Substituting the $\theta$'s with the $\tilde\theta$'s:
\begin{gather*}\tilde\theta'\frac{1}{\sqrt{df} }\sqrt{ef}=\tilde\theta\frac{1}{\sqrt{be}}\sqrt{bd}+\tilde\sigma\frac{1}{\sqrt{de}},\qquad \tilde\theta'\sqrt{\frac{e}{d}}= \tilde\theta\sqrt{d\over e}+\tilde\sigma\frac{1}{\sqrt{de}}.\end{gather*}
Then multiply by $\sqrt{de}$ to get $\tilde\theta'e= \tilde\theta d+\tilde\sigma$. The third Ptolemy relation equation~\eqref{eq:6} is \[\sigma'\sqrt{ef}=\sigma\sqrt{bd}-\theta.\] A similar calculation shows that this is equivalent to \[\tilde\sigma'e=\tilde\sigma b-\tilde\theta.\] These relations on the `modified' $\mu$-invariants are exactly the super-frieze relations. \end{proof}
\begin{Theorem} Every super-frieze pattern comes from a decorated super-Teichm\"uller space of a~marked disk. \end{Theorem}
\begin{proof} Take a diagonal of even and odd variables from a super-frieze, and declare it to be the $\lambda$-lengths of a fan triangulation with the default orientation (see Figure~\ref{fig:frieze-triangulation}).
\begin{figure}[h]
\centering
\begin{tikzpicture}
\node at (0,0) {$\displaystyle \begin{array}{ccccccccccc} 1\\[3pt] &{\color{red}\xi_1}&&\\[3pt] &&x_1\\[3pt] &&&{\color{red}\xi_2}&&\\[3pt] &&&&x_2&&\\[3pt] &&&&&\ddots&&&&\\[3pt] &&&&&&x_n\\[3pt] &&&&&&&{\color{red}\xi_{n+1}}&&\\[3pt] &&&&&&&&1 \end{array} $};
\end{tikzpicture}
\begin{tikzpicture}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ]
\tikzstyle{every path}=[draw]
\path node[ regular polygon, regular polygon sides=8, draw=none, inner sep=1.6cm, ] (T) {} %
(T.corner 1) node[above] {$2$} (T.corner 2) node[above] {$1$} (T.corner 3) node[left] {$n+3$} (T.corner 4) node[left] {$n+2$} (T.corner 5) node[below] {} (T.corner 6) node[below] {$5$} (T.corner 7) node[right] {$4$} (T.corner 8) node[right] {$3$} ;
\draw [-] (T.corner 4) to (T.corner 3) to node[label={[xshift=-0.2cm, yshift=-1.2cm]$\color{red} \xi_{n+1}$}] {} (T.corner 2) to node[label={[xshift=0.65cm, yshift=-0.74cm]$\color{red} \xi_1$}] {} (T.corner 1) to (T.corner 8) to node[label={[xshift=-0.6cm, yshift=0.2cm]$\color{red} \xi_2$}] {} (T.corner 7) to node[label={[xshift=-0.4cm, yshift=0.4cm]$\color{red} \xi_3$}] {} (T.corner 6);
\draw [dashed] (T.corner 6) to (T.corner 5) to (T.corner 4);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=-0.3cm, yshift=-0.5cm]$x_3$}] {}(T.corner 6);
\draw [postaction={decorate}] (T.corner 2) to node[label= below:$x_2$] {} (T.corner 7);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=0.25cm, yshift=-0.7cm]$x_n$}] {} (T.corner 4);
\draw [dashed] (T.corner 5) to (T.corner 2);
\foreach \x in {1,2,...,7}{ \draw (T.corner \x) node [fill,circle,scale=0.2] {};}
\draw[postaction={decorate}] (T.corner 12)--(T.corner 4);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=0.3cm, yshift=-0.8cm]$x_1$}] {}(T.corner 8);
\draw [-,draw=none] (T.corner 4) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 3) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 2) to node [label={[xshift=0cm, yshift=-0.2cm]$+$}] {} (T.corner 1) to node [label={[xshift=0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 8) to node [label={[xshift=0.23cm, yshift=-0.3cm]$+$}] {} (T.corner 7) to node [label={[xshift=0.2cm, yshift=-0.5cm]$+$}] {} (T.corner 6);
\end{tikzpicture}
\caption{{Left:} Diagonal of a super-frieze. {Right:} Initial fan triangulation; the $+$ signs record the initial orientation of the spin structure.}
\label{fig:frieze-triangulation}
\end{figure}
Using Proposition~\ref{prop:frieze-mutation}, the next diagonal to the right corresponds to the triangulation obtained by flipping the edges $x_1,x_2,\dots,x_n$.
Suppose the next diagonal of the frieze is as follows:\footnote{Using~\cite[Proposition~2.3.1]{M-GOT15}, the top-most non-trivial row of odd variables repeats every other entry, while the bottom-most non-trivial row of odd variables alternates in sign every other entry.}
\begin{tikzpicture}
\node at (0,0) {$\begin{array}{cccccccccccccccccc} {\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}} \\ 1&&&&1\\[8pt] &{\color{red}\xi_1}&&\s{\xi_1}&&\s{\theta_1}\\[8pt] &&x_1&&&& y_1\\[8pt] &&&{\color{red}\xi_2}&&\s{*}&&\s{\theta_2}\\[8pt] &&&&x_2&&&&\ddots\\[8pt] &&&&&\ddots&&&&\s{\theta_n}\\[8pt] &&&&&&x_n&&&&y_n\\[8pt] &&&&&&&{\color{red}\xi_{n+1}}&&\s{\theta_{n+1}}&&\s{-\theta_{n+1}}\\[8pt] &&&&&&&&1&&&&1 \end{array} $};
\draw [color=blue,rounded corners=15] (1.75,0.8) to (-1.8,3.6) to (-2.8,3) to (3.6,-2) to (1.5,-3.5) to (2.5,-4.1) to (3.9,-3.05);
\draw[color=blue,rounded corners=15] (3.9,-3.05) to (5.3,-2) to (1.75,0.8);
\draw [color=red,rounded corners=15] (-3.05,2.95) -- (6.6,-4.5) -- (7.6,-3.5) -- (-1.6,3.8) -- cycle;
\end{tikzpicture}
\looseness=1 Meanwhile, applying super-flips on the edges $x_1,\dots,x_n$ gives us the following triangulation:\footnote{Notice that here we apply the clockwise flip sequence, as opposed to the counterclockwise one in Theorem~\ref{thm:mu_single_fan}. The reason for switching the convention here is to have the `wrong' arrow on the edge $y_n$ match up with the frieze relations of the last quiddity row.}
\begin{center}
\begin{tikzpicture}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ]
\tikzstyle{every path}=[draw]
\path node[ regular polygon, regular polygon sides=8, draw=none, inner sep=1.8cm, ] (T) {} %
(T.corner 1) node[above] {$2$} (T.corner 2) node[above] {$1$} (T.corner 3) node[left] {$n+3$} (T.corner 4) node[left] {$n+2$} (T.corner 5) node[below] {} (T.corner 6) node[below] {$5$} (T.corner 7) node[right] {$4$} (T.corner 8) node[right] {$3$} ;
\draw [-] (T.corner 4) to (T.corner 3) to node[label={[xshift=0cm, yshift=-1.5cm]$\color{red} \theta_{n}$}] {} (T.corner 2) to node[label={[xshift=-0.6 cm, yshift=-0.8cm]$\color{red} \theta_{n+1}$}] {} (T.corner 1) to (T.corner 8) to node[label={[xshift=-0.3cm, yshift=0.2cm]$\color{red} \theta_1$}] {} (T.corner 7) to node[label={[xshift=0cm, yshift=0.6cm]$\color{red} \theta_2$}] {} (T.corner 6);
\draw [dashed] (T.corner 6) to (T.corner 5) to (T.corner 4);
\draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=-0.1cm, yshift=-0.8cm]$y_1$}] {}(T.corner 7);
\draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=-0.3cm, yshift=-0.8cm]$y_2$}] {}(T.corner 6);
\draw [dashed] (T.corner 1) to (T.corner 5);
\draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=0cm, yshift=-1cm]$y_{n-1}$}] {}(T.corner 4);
\draw [postaction={decorate}] (T.corner 3) to node[label={[xshift=-0.2cm, yshift=-0.8cm]$y_{n}$}] {}(T.corner 1);
\draw [-,draw=none] (T.corner 4) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 3) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 2) to node [label={[xshift=0cm, yshift=-0.2cm]$+$}] {} (T.corner 1) to node [label={[xshift=0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 8) to node [label={[xshift=0.23cm, yshift=-0.3cm]$+$}] {} (T.corner 7) to node [label={[xshift=0.2cm, yshift=-0.5cm]$+$}] {} (T.corner 6);
\end{tikzpicture}
\end{center} This corresponds to the blue-circled entries of the above frieze pattern. To obtain the next diagonal, we reverse all arrows on the `last' triangle (the triangle corresponding to $\theta_{n+1}$), and negate the $\mu$-invariant. This gives us the following triangulation, which corresponds to the red-circled entries of the above frieze: \begin{center} \begin{tikzpicture}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ] \tikzstyle{every path}=[draw] \path node[ regular polygon, regular polygon sides=8, draw=none, inner sep=1.8cm, ] (T) {} % (T.corner 1) node[above] {$2$} (T.corner 2) node[above] {$1$} (T.corner 3) node[left] {$n+3$} (T.corner 4) node[left] {$n+2$} (T.corner 5) node[below] {} (T.corner 6) node[below] {$5$} (T.corner 7) node[right] {$4$} (T.corner 8) node[right] {$3$} ; \draw [-] (T.corner 4) to (T.corner 3) to node[label={[xshift=0cm, yshift=-1.5cm]$\color{red} \theta_{n}$}] {} (T.corner 2) to node[label={[xshift=-0.6 cm, yshift=-0.8cm]$\color{red}-\theta_{n+1}$}] {} (T.corner 1) to (T.corner 8) to node[label={[xshift=-0.3cm, yshift=0.2cm]$\color{red} \theta_1$}] {} (T.corner 7) to node[label={[xshift=0cm, yshift=0.6cm]$\color{red} \theta_2$}] {} (T.corner 6); \draw [dashed] (T.corner 6) to (T.corner 5) to (T.corner 4); \draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=-0.1cm, yshift=-0.8cm]$y_1$}] {}(T.corner 7); \draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=-0.3cm, yshift=-0.8cm]$y_2$}] {}(T.corner 6); \draw [dashed] (T.corner 1) to (T.corner 5); \draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=0cm, yshift=-1cm]$y_{n-1}$}] {}(T.corner 4); \draw [postaction={decorate}] (T.corner 1) to node[label={[xshift=-0.2cm, yshift=-0.8cm]$y_{n}$}] {}(T.corner 3); \draw [-,draw=none] (T.corner 4) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$+$}] {} (T.corner 3) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 2) to node [label={[xshift=0cm, yshift=-0.2cm]$-$}] {} (T.corner 1) to node [label={[xshift=0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 8) to node [label={[xshift=0.23cm, yshift=-0.3cm]$+$}] {} (T.corner 7) to node [label={[xshift=0.2cm, yshift=-0.5cm]$+$}] {} (T.corner 6); \end{tikzpicture} \end{center} Now the interior arrows are exactly the `same' as before: all arrows are directed away from the (new) fan center. Therefore, inductively, flipping the edges $y_1,\dots,y_n$ and negating the last triangle will give us the next diagonal. Applying the above operations $n$ times will take us back to the original triangulation, but with different boundary orientations. In particular, all boundary arrows are reversed, which is equivalent to reversing the orientation of all triangles and negating all the $\mu$-invariants. 
\begin{center}
\begin{tikzpicture}
\node at (0,0) {$\displaystyle \begin{array}{ccccccccccc} 1\\[3pt] &{\color{red}-\xi_1}&&\\[3pt] &&x_1\\[3pt] &&&{\color{red}-\xi_2}&&\\[3pt] &&&&x_2&&\\[3pt] &&&&&\ddots&&&&\\[3pt] &&&&&&x_n\\[3pt] &&&&&&&{\color{red}-\xi_{n+1}}&&\\[3pt] &&&&&&&&1 \end{array} $};
\end{tikzpicture}
\begin{tikzpicture}[decoration={ markings, mark=at position 0.5 with {\arrow{>}}} ]
\tikzstyle{every path}=[draw]
\path node[ regular polygon, regular polygon sides=8, draw=none, inner sep=1.6cm, ] (T) {} %
(T.corner 1) node[above] {$2$} (T.corner 2) node[above] {$1$} (T.corner 3) node[left] {$n+3$} (T.corner 4) node[left] {$n+2$} (T.corner 5) node[below] {} (T.corner 6) node[below] {$5$} (T.corner 7) node[right] {$4$} (T.corner 8) node[right] {$3$} ;
\draw [-] (T.corner 4) to (T.corner 3) to node[label={[xshift=-0.2cm, yshift=-1.2cm]$\color{red} -\xi_{n+1}$}] {} (T.corner 2) to node[label={[xshift=0.65cm, yshift=-0.74cm]$\color{red} -\xi_1$}] {} (T.corner 1) to (T.corner 8) to node[label={[xshift=-0.6cm, yshift=0.2cm]$\color{red} -\xi_2$}] {} (T.corner 7) to node[label={[xshift=-0.4cm, yshift=0.4cm]$\color{red} -\xi_3$}] {} (T.corner 6);
\draw [dashed] (T.corner 6) to (T.corner 5) to (T.corner 4);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=-0.3cm, yshift=-0.5cm]$x_3$}] {}(T.corner 6);
\draw [postaction={decorate}] (T.corner 2) to node[label= below:$x_2$] {} (T.corner 7);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=0.25cm, yshift=-0.7cm]$x_n$}] {} (T.corner 4);
\draw [dashed] (T.corner 5) to (T.corner 2);
\foreach \x in {1,2,...,7}{ \draw (T.corner \x) node [fill,circle,scale=0.2] {};}
\draw[postaction={decorate}] (T.corner 12)--(T.corner 4);
\draw [postaction={decorate}] (T.corner 2) to node[label={[xshift=0.3cm, yshift=-0.8cm]$x_1$}] {}(T.corner 8);
\draw [-,draw=none] (T.corner 4) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 3) to node [label={[xshift=-0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 2) to node [label={[xshift=0cm, yshift=-0.2cm]$-$}] {} (T.corner 1) to node [label={[xshift=0.2cm, yshift=-0.3cm]$-$}] {} (T.corner 8) to node [label={[xshift=0.23cm, yshift=-0.3cm]$-$}] {} (T.corner 7) to node [label={[xshift=0.2cm, yshift=-0.5cm]$-$}] {} (T.corner 6);
\end{tikzpicture}
\end{center}
Therefore the $n$-th diagonal will have the same even entries as the first diagonal, and the negatives of its odd entries. This explains the glide symmetry of super-frieze patterns. \end{proof}
\section{Conclusions and future directions}
\subsection[Expansion formulas for mu-invariants]{Expansion formulas for $\boldsymbol{\mu}$-invariants}
In Theorems~\ref{thm:mu_single_fan} and~\ref{thm:proof_zigzag}, we gave formulas for certain types of $\mu$-invariants. However, these apply only to triangles having at least one side that is an arc of the triangulation. The proofs depended on this assumption, and a specific flip sequence, in order to apply the super Ptolemy relations. This raises the following question, subject to the ambiguity of Remark~\ref{rem:mu-invariants}, and with a specific flip sequence in mind.
\begin{Question} What is the correct formula for a $\mu$-invariant of a triangle which has no sides belonging to the triangulation? \end{Question}
Looking more broadly, we can consider a study of $\lambda$-lengths and $\mu$-invariants, subject to super Ptolemy relations, for other surfaces.
\begin{Question} What is the correct formula for a $\lambda$-length $($or a $\mu$-invariant$)$ for an arc $($resp.\ a triangle$)$ on an annulus, torus, or other surfaces with boundary? \end{Question}
Note that for the special case of a once-punctured torus with no boundary, such super structures were studied in~\cite{mcshane}. The cases of a three-punctured sphere and of a~once-punctured torus were also investigated in \cite[Appendix~B]{ip2018n}, but differed from our setup. Therein, they had two odd variables (rather than one) for each triangle, along with an extra family of even variables associated to the edges.
\subsection{Connections to super cluster algebras and super-friezes}\label{sec:subfrieze}
As we have demonstrated, we have been able to use Penner and Zeitlin's development of super $\lambda$-lengths for decorated super-Teichm\"uller space to obtain explicit formulas for super $\lambda$-lengths on marked disks, which involve a construction of super $T$-paths. In analogy with the classical case, as in \cite{schiffler_08} where weighted generating functions of $T$-paths correspond to cluster variables in cluster algebras of type $A_n$, we wish to investigate how our formulas for super $\lambda$-lengths could aid in the development of \emph{super} cluster algebras (of type $A_n$). Steps towards defining super cluster algebras appeared in work of Ovsienko~\cite{o_15} and separately in the work of Li, Mixco, Ransingh, and Srivastava~\cite{lmrs_17}. These initial steps were followed up by related work such as~\cite{OS19,OT18,sv_19}. In particular, in \cite{OS19}, Ovsienko and Shapiro define a type of super cluster algebra, motivated by super-frieze patterns. In their setup, some of the frozen vertices of the quiver correspond to odd variables $\theta_1,\dots,\theta_m$. There can be paths of length 2 connecting the $\theta_i$ passing through the ``ordinary'' vertices: $\theta_i \to x_k \to \theta_j$. The mutation rules are the same for all ordinary arrows, and additionally, when mutating at $x_k$,
\begin{enumerate}\itemsep=0pt
\item[1)] for $\theta_i \to x_k \to \theta_j$, and for $x_k \to x_\ell$, add a 2-path $\theta_i \to x_\ell \to \theta_j$,
\item[2)] reverse all 2-paths through $x_k$,
\item[3)] cancel oppositely oriented 2-paths through $x_k$.
\end{enumerate}
The mutation formula for even variables is given by
\[ \mu_k(x_k) =\frac{1}{x_k}\bigg( \prod_{ x_k\to x_\ell} x_\ell+ \prod_{\theta_i \to x_k \to \theta_j} (1 + \theta_i \theta_j) \prod_{x_\ell \to x_k} x_\ell \bigg).\]
Ovsienko and Shapiro noticed that starting with certain initial data, one could construct a~quiver with odd vertices such that all entries of the super-frieze can be obtained by mutations. Their choice of initial data consists of all even entries along a NW-SE diagonal, along with the odd entries in the neighboring diagonal.
This is pictured below: \begin{center} \(\quad\displaystyle \begin{array}{cccccccccccccccccc} {\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}}&{\color{white}\xi_{n+1}} \\ 1&&&&\\[8pt] &{\color{red}*}&&\s{\theta_1}&&\\[8pt] &&x_1&&&&\\[8pt] &&&{\color{red}*}&&\s{\theta_2}&&\\[8pt] &&&&x_2&&\s{\ddots} &&\\[8pt] &&&&&\ddots &&\s{\theta_m}&&\\[8pt] &&&&&&x_m&&&&\\[8pt] &&&&&&&{\color{red}*}&&\s{\theta_{m+1}}&&\\[8pt] &&&&&&&&1&&&& \end{array} \) \end{center} To this set of initial data, they assign the following quiver: $$ \begin{tikzpicture} \draw (0,0) node {$x_1$}; \draw (2,0) node {$x_2$}; \draw (4,0) node {$x_3$}; \draw (6,0) node {$\cdots$}; \draw (8,0) node {$x_m$}; \draw[-latex] (0.5,0) -- (1.5,0); \draw[-latex] (2.5,0) -- (3.5,0); \draw (-1,1) node {\color{red}$\theta_1$}; \draw (1, 1) node {\color{red}$\theta_2$}; \draw (3, 1) node {\color{red}$\theta_3$}; \draw (5, 1) node {\color{red}$\cdots$}; \draw (7, 1) node {\color{red}$\theta_m$}; \draw (9, 1) node {\color{red}$\theta_{m+1}$}; \draw[-latex] (0.8,0.8) -- (0.2,0.2); \draw[-latex] (-0.2,0.2) -- (-0.8,0.8); \draw[-latex] (2.8,0.8) -- (2.2,0.2); \draw[-latex] (1.8,0.2) -- (1.2,0.8); \draw[-latex] (3.8,0.2) -- (3.2,0.8); \draw[-latex] (8.8,0.8) -- (8.2,0.2); \draw[-latex] (7.8,0.2) -- (7.2,0.8); \draw[-latex] (-0.7,0.9) -- (1.7,0.1); \draw[-latex] (1.3, 0.9) -- (3.7,0.1); \end{tikzpicture} $$ \begin{Remark}\rm In Section \ref{sec:super-friezes}, we described how a super-frieze corresponds to the $\lambda$-lengths and $\mu$-invariants of a triangulated polygon. Following our construction for the case of an initial fan triangulation, we see that the super-frieze that we construct compares with the construction of Ovsienko and Shapiro via a quiver of even and odd variables as follows: The initial data of Ovsienko and Shapiro consists of the following: \begin{itemize}\itemsep=0pt \item for even variables, the $\lambda$-lengths of all the diagonals of a fan triangulation, \item for odd variables, the $($modified$)$ $\mu$-invariants \[ \sqrt{\lambda_{13}} \; \boxed{123}, \sqrt{\lambda_{14}\lambda_{24}} \, \boxed{124}, \sqrt{\lambda_{15}\lambda_{25}} \, \boxed{125}, \dots, \sqrt{\lambda_{2n}} \, \boxed{12n}\,. \] \end{itemize} Note that {unlike our usage of collections of $\mu$-invariants}, these $\mu$-invariants correspond to triangles that \emph{do not} belong to the same triangulation. \end{Remark} The results of \cite{OS19} therefore show that all $\lambda$-lengths can be expressed in terms of this initial data using sequences of mutations. On the other hand, our main theorem (Theorem~\ref{thm:main}) shows that all $\lambda$-lengths can be expressed in terms of initial data, where all $\mu$-invariants come from the same initial triangulation. From the point of view of cluster algebras, it is more natural to have all initial cluster variables coming from the same triangulation. This leads to the following open question: \begin{Question} Does there exist some modification of the extended quiver mutation from {\rm \cite{OS19}} which realizes the super Ptolemy transformations? This would entail $($at least$)$ the following: \begin{enumerate}\itemsep=0pt \item[$1.$] Specifying which $2$-paths to include for a given triangulation. \item[$2.$] Restricting the $2$-path mutation to be compatible with this choice. 
\item[$3.$] Odd variables must change when mutating at an even vertex.
\end{enumerate}
\end{Question}
Section~5 of~\cite{sv_19} provides yet another conjectural connection to super cluster algebras. In particular, they consider the coordinate ring of the super Grassmannians $G_{r}\big(\mathbb{R}^{n|m}\big)$ of $r$-planes in the superspace $\mathbb{R}^{n|m}$. In the case of $r=2$, $n=4$ or $5$, and $m=1$, these yield a collection of even variables $T^{ab}$ and odd variables $\theta^c$ where $a$ and $b$ can be identified with vertices of an $n$-gon, and the $\theta^c$'s can be identified with the possible triangles inside a quadrilateral or pentagon, respectively. This yields super-Pl\"ucker relations relating these even and odd variables to one another.
\begin{Question} Is there an algebraic transformation that relates Shemyakova--Voronov's $T^{ab}$'s and $\theta^c$'s of~{\rm \cite{sv_19}} to our $\lambda_{ab}$'s and $\mu$-invariants so that the super-Pl\"ucker relations are satisfied? \end{Question}
\section{Introduction} Weyl modules for the loop algebra $\g \otimes \mathbb{C}[t^{\pm 1}]$ of a finite-dimensional simple complex Lie algebra $\g$ were introduced by Chari and Pressley more than a decade ago (see \cite{CP01}). Since then, the study of their properties (for instance, their homological behavior, dimension and character) has been a fruitful and successful process. The category of finite-dimensional modules for $\g \otimes \mathbb{C}[t^{\pm 1}]$ is not semisimple. In analogy with the modular representation theory of simple finite-dimensional Lie algebras, for every simple module, there exists a \emph{(local) Weyl module} satisfying certain universal properties. This Weyl module is finite-dimensional and its character and dimension have been studied and computed in a series of papers (see \cite{CP01,CL06,FL07,Nao12}). Local Weyl modules have been identified with certain Demazure modules of affine Kac--Moody algebras and their characters are also known to be characters of the $q \to 1$ limit of simple modules of the quantum affine algebra (see \cite{FL07}). In \cite{CP01}, the class of \emph{global Weyl modules} was defined. These modules are projective objects in the category of those $\g \otimes \mathbb{C}[t^{\pm 1}]$-modules whose weights are bounded by some fixed dominant integral $\g$-weight. Their $\g$-weight spaces are right modules over polynomial rings in finitely many variables. It was conjectured in \cite{CP01}, and it can be deduced from results in the aforementioned series of papers, that the global Weyl module is a free right module of finite rank for this polynomial ring (see Theorem~\ref{theo:global-Weyl-module-projective}). It turns out that the global Weyl module might be the most interesting object to study in the category of bounded $\g \otimes \mathbb{C}[t^{\pm 1}]$-modules. For instance, it is subject to an analog of the Bernstein--Gelfand--Gelfand reciprocity for simple Lie algebras (see \cite{BBCKL12,BCM12}). Furthermore, its character is known to be the $q$-Whittaker function, a solution to the $q$-Toda integrable system (see \cite{BF12}). There are several approaches to generalizing the above objects. In \cite{FL04}, local and global Weyl modules were defined in the setting where $\mathbb{C}[t^{\pm 1}]$ is replaced by the coordinate ring of a complex affine variety. A more general approach was taken in \cite{CFK10}. There, modules for a generalized current algebra $\g \otimes A$, where $A$ is a commutative, associative, complex unital algebra, were studied. The global Weyl module is again a projective object in a suitable category (see Corollary~\ref{cor:global-Weyl-module-free}) and its weight spaces are right modules for a certain commutative algebra. The \emph{Weyl functor} was introduced and local Weyl modules were studied, together with their homological properties. The algebra of the highest weight space was analyzed and in important cases identified with a tensor product of symmetric powers of $A$. A different approach was taken in \cite{CFS08} and \cite{FMS11}, where $\g \otimes \mathbb{C}[t^{\pm 1}]$ was replaced by the twisted loop algebra $(\g \otimes \mathbb{C}[t^{\pm 1}])^\Gamma$. This is the fixed point algebra of $\g \otimes \mathbb{C}[t^{\pm 1}]$ under the action of a group $\Gamma$ of automorphisms of $\g$, generated by a Dynkin diagram automorphism. This group acts on $\mathbb{C}[t^{\pm 1}]$ by scaling $t$ by roots of unity.
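Concretely, if $\sigma$ denotes a generator of $\Gamma$, of order $m$, acting on $\mathbb{C}[t^{\pm 1}]$ by $\sigma(t) = \zeta^{-1} t$ for a primitive $m$-th root of unity $\zeta$, then the twisted loop algebra has the familiar graded description
\[ (\g \otimes \mathbb{C}[t^{\pm 1}])^\Gamma = \bigoplus_{j \in \Z} \g_j \otimes \mathbb{C} t^j, \qquad \g_j = \{x \in \g\ |\ \sigma(x) = \zeta^j x\}, \]
where $\g_j$ depends only on $j$ modulo $m$ (see, e.g., \cite[Ch.~8]{Kac90}).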
It turns out that every local Weyl module of the twisted loop algebra is obtained by restriction from a local Weyl module of $\g \otimes \mathbb{C}[t^{\pm 1}]$. The global Weyl module was defined in \cite{FMS11} and it was shown that it is again a free right module of finite rank for a certain commutative algebra and it can be embedded in a direct sum of global Weyl modules for $\g \otimes \mathbb{C}[t^{\pm 1}]$. In \cite[Remark 1.10]{BF12} it is conjectured that its character solves the $q$-Toda integrable system in the nonsimply laced case. In \cite{FKKS12}, the definition of local Weyl modules was generalized to the setting of \emph{equivariant map algebras}. Let $X =\Spec A$ be an affine scheme and $\Gamma$ be a finite group acting on $\g$ and $X$ by automorphisms. The equivariant map algebra is the Lie algebra of equivariant algebraic maps from $X$ to $\g$ and is denoted $(\g \otimes A)^\Gamma$. Local Weyl modules were defined for these algebras under the assumptions that $X$ is of finite type, $\Gamma$ is an abelian group, and the action on $X$ is free. A key ingredient in this study was the definition of certain \emph{twisting} and \emph{untwisting} functors that relate the representation theory of $\g \otimes A$ and $(\g \otimes A)^\Gamma$. It was also shown that the homological properties of local Weyl modules can be generalized to the setting of equivariant map algebras. In the current paper, we define \emph{global} Weyl modules for equivariant map algebras. The paper can be divided into two parts. The first part comprises Sections~\ref{sec:EMAs} to~\ref{sec:Weyl-functor}. After some preliminaries on equivariant map algebras in~Section~\ref{sec:EMAs}, we define in Section~\ref{sec:Weyl-modules} global Weyl modules for equivariant map algebras satisfying a mild assumption (see Assumption~\ref{assumption}). In particular, this assumption is always satisfied if $\Gamma$ is cyclic or acts on $\g$ by diagram automorphisms. We also give a presentation of the global Weyl modules in terms of generators and relations (Proposition~\ref{weyl:gen-rel}). In Section~\ref{sec:Weyl-functor}, we extend the notion of Weyl functors to the twisted/equivariant setting. In particular, we define a commutative algebra $\bA^\lambda_\Gamma$ which acts naturally on the global Weyl module with highest $\g^\Gamma$-weight $\lambda$. The Weyl functor is then a functor from the category of $\bA^\lambda_\Gamma$-modules to the category of $(\g \otimes A)^\Gamma$-modules. We show that these functors (and the global Weyl modules) possess twisted versions of properties satisfied in the untwisted setting. The second part of the current paper (Sections~\ref{sec:properties-global}--\ref{sec:A-lambda-Gamma}) concerns equivariant map algebras for which $\Gamma$ is abelian, acts on $\g$ by Dynkin diagram automorphisms, and on $\maxSpec A$ freely. (Note that $\Gamma$ is abelian and acts freely on $\maxSpec A$ in the case of (twisted) loop and multiloop algebras.) Under these additional assumptions, our main results are the following. \begin{enumerate} \item (Theorem~\ref{theo:A-fg}) The algebras $\bA^\lambda_\Gamma$ are finitely generated. \item (Theorem~\ref{theo:finite-generated}) The global Weyl module (of highest weight $\lambda$) is finitely generated as an $\bA_\Gamma^\lambda$-module. 
\end{enumerate} After recalling the twisting functors introduced in \cite{FKKS12} and proving some additional properties of these functors in Section~\ref{sec:twisting}, we turn our attention to local Weyl modules in Section~\ref{sec:local-Weyl-modules}. There we define the local Weyl modules as the images of one-dimensional irreducible $\bA^\lambda_\Gamma$-modules under the Weyl functor (as in the untwisted setting). We show (Proposition~\ref{prop:equivalence-of-lWm-defs}) that the modules so defined coincide with those defined directly (i.e.\ without the Weyl functor) in \cite{FKKS12}. Finally, in Section~\ref{sec:A-lambda-Gamma}, we examine the algebra $\bA^\lambda_\Gamma$. In particular, we show (Theorem~\ref{theo:Alambda-isom}) that this algebra is isomorphic to a tensor product of symmetric algebras of fixed point subalgebras of $A$. Under an additional assumption, we also identify (Theorem~\ref{theo:AlamGam-coinvariants}) $\bA^\lambda_\Gamma$ with the algebra of $\Gamma$-coinvariants of the algebra $\bA^{\llift}$ corresponding to the case where $\Gamma$ is trivial (defined in \cite{CFK10}, but denoted by $\bA_{\llift}$ there), where $\llift$ is a $\g$-weight corresponding to the $\g^\Gamma$-weight $\lambda$. \medskip \paragraph{\textbf{Notation}} The set of nonnegative (respectively, positive) integers is denoted by $\N$ (respectively, $\N_+$). Throughout, $\kk$ will denote an algebraically closed field of characteristic zero. All algebras are over $\kk$ unless otherwise indicated, and all associative algebras are assumed to be unital. Whenever a group $\Gamma$ acts on a $\kk$-vector space $Y$, we denote the subspace of fixed points by $Y^\Gamma$. For a ring $B$, $B$-mod will denote the category of left $B$-modules. The notation $S^nB$, $n \in \N$, will denote the subring $(B^{\otimes n})^{S_n}$ of $B^{\otimes n}$ consisting of elements fixed under the natural action of the symmetric group $S_n$ on $B^{\otimes n}$. Since we work over a field of characteristic zero, this is isomorphic to the quotient of $B^{\otimes n}$ by the ideal generated by the elements $u - \tau(u)$, $u \in B^{\otimes n}$, $\tau \in S_n$. For a Lie algebra $L$, we denote its universal enveloping algebra by $\cU(L)$. We have the standard filtration $\cU(L) = \sum_{n \in \N} \cU(L)_n$. When we refer to the nodes of the Dynkin diagram of a simple Lie algebra of rank $n$ by elements of the set $\{1,2,\dotsc,n\}$, we are referring to the standard labeling that can be found, for instance, in \cite[\S11.4]{Hum72}. When we refer to the Dynkin diagram of a reductive Lie algebra, we mean the Dynkin diagram of its semisimple part (i.e.\ its derived algebra). Since we will need to refer to weights of a simple Lie algebra $\g$ and also its subalgebra $\g^\Gamma$ fixed by the action of a group $\Gamma$, we will typically denote weights of $\g$ by $\llift$ and weights of $\g^\Gamma$ by $\lambda$ to avoid confusion. We will denote by $A$ a finitely generated (hence Noetherian) commutative associative algebra over $\kk$. We let $X_\rat$ denote the set of $\kk$-rational points of $X \defeq \Spec A$. Since $A$ is finitely generated, we have $X_\rat = \maxSpec A$. For a point $x \in X_\rat$, we will denote the corresponding maximal ideal of $A$ by $\sm_x$. In some instances, we will identify a point $x$ with its maximal ideal $\sm_x$. For the reader's convenience, we give here an index of important notation used in the paper.
\medskip \begin{center} \begin{tiny} \begin{tabular}{rl!{\hspace{2cm}}rl} \multicolumn{4}{c}{\textbf{Index of Notation}} \\ $\g^\alpha,\ \g^0,\ \g^\pm$ & Page~\pageref{eq:g-plus-minus} & $\Gamma_i$, $e_i$, $f_i$, $h_i$ & Definition~\ref{def:overline} \\ $\Lambda_\Gamma,\ \Lambda_\Gamma^+,\ Q_\Gamma,\ Q_\Gamma^+$ & Page~\pageref{def:twisted-lattices} (see also~\eqref{def:Lambda-Gamma}) & $\overline{e_i \otimes a}$, $\overline{f_i \otimes a}$, $\overline{h_i \otimes a}$ & Definition~\ref{def:overline} \\ $\mathcal U_\Gamma,\ \mathcal U_\Gamma^0,\ \mathcal U_\Gamma^\pm$ & \eqref{eq:u-gamma-def} & $\bV^\Gamma_\lambda$ & Definition~\ref{def:V-Gamma-lambda} \\ $\Xi$ & Page~\pageref{Xi-def}& $M(\psi),\ M^\Gamma(\psi)$ & Definition~\ref{def:M-psi} \\ $\cE$, $\cE_\llift$, $\cE^\Gamma$, $\cE^\Gamma_\lambda$, $Y_\Gamma$ & Definition~\ref{def:wt-ht} & $\Supp J$ & Page~\pageref{def:supp-ideal} \\ $\psi_\bx$, $\wt$, $\wt_\Gamma$, $\hei$, $\hei_\Gamma$ & Definition~\ref{def:wt-ht} & $\Ann_A^\Gamma V,\ \Supp_A^\Gamma V$ & Definition~\ref{def:support-module} \\ $V^\Gamma(\psi)$, $V(\psi)$& Definition~\ref{def:EMA-irreducibles} & $X_*$ & Page~\pageref{def:X-star} \\ $\cI^\Gamma$, $\cI^\Gamma_{\le \tau}$, $\cI^\Gamma_{\le \lambda}$ &Definition~\ref{def:I-Gamma} & $\cF_{\mathbf x},\ \cF_{\mathbf x}^\Gamma$ & Definition~\ref{def:F-x} \\ $P^\Gamma(V)$ & \eqref{eq:P-Gamma-V} & $\mathbf T,\ \mathbf T_{\mathbf x}$ & Definition~\ref{def:twisting-functor} \\ $W^\Gamma(V)$ & Definition~\ref{def:twisted-global-Weyl-module} & $e_\bi$, $f_\bi$, $h_\bi$ & \eqref{eq:ebi-fbi-hbi}\\ $V^\Gamma(\lambda)$, $v^\Gamma_\lambda$ & Page~\pageref{notation:g-Gamma-irred-modules} & $\kappa_{\bi}$ &Page~\pageref{def:kappa-bi} \\ $W^\Gamma(\lambda)$, $w^\Gamma_\lambda$ & Lemma~\ref{global-weyl-triang} & $W(\psi),\ W^\Gamma(\psi)$& Definition~\ref{def:local-weyl} \\ $\mathbf A^\lambda_\Gamma$, $\Ann_{\mathcal U^0_\Gamma}(w_\lambda)$ & Definition~\ref{def:A-Gamma-lambda} & $J(\psi)$ & \eqref{def:J-psi} \\ $\mathbf W^\Gamma_\lambda$ & Definition~\ref{def:twisted-Weyl-functor} & $\bbA^\lambda_\Gamma$ & \eqref{def:bbA} \\ $\mathbf R^\Gamma_\lambda$ & Definition~\ref{def:R-Gamma-lambda} & $\tilde{\tau}_\lambda$ & \eqref{def:tau-tilde} \\ $R^+,\ R^+_\Gamma,\ \Pi,\ \Pi_\Gamma$ & Page~\pageref{def:roots-simple-roots} & $\tau_\lambda$ & Lemma~\ref{lem:tau-surjective} \\ \end{tabular} \end{tiny} \end{center} \iftoggle{detailsnote}{ \medskip \paragraph{\textbf{Note on the arXiv version}} For the interested reader, this arXiv version of this paper includes hidden details of some straightforward computations and arguments that are omitted in the pdf file. These details can be displayed by switching the \texttt{details} toggle to true in the tex file and recompiling. }{} \medskip \paragraph{\textbf{Acknowledgements}} The authors would like to thank Vyjayanthi Chari, Daniel Daigle, Michael Lau and Erhard Neher for helpful conversations. In particular, they would like to thank Vyjayanthi Chari for explaining the details of the argument found in the proof of Theorem~\ref{theo:global-Weyl-module-projective}\eqref{theo-item:loop-global-Weyl-module-free}, Daniel Daigle for providing an outline of the arguments found in Appendix~\ref{sec:appendix} and some of the commutative algebra arguments in Section~\ref{sec:A-lambda-Gamma}, Michael Lau for pointing us towards Lemma~\ref{lem:g0-abelian} and Erhard Neher for drawing our attention to Example~\ref{eg:ABFP}. 
\section{Equivariant map algebras} \label{sec:EMAs} In this section we recall some basic facts about equivariant map algebras. We refer the reader to \cite{NSS12,NS12} for further details. Let $\g$ be a finite-dimensional simple Lie algebra over $\kk$, and let $\Gamma$ be a finite group acting on $\g$ by automorphisms and on the finitely generated commutative associative algebra $A$ by algebra automorphisms (hence on $\Spec A$ by scheme automorphisms). Thus $\Gamma$ acts diagonally on $\g \otimes A$. As a Lie subalgebra, the set of fixed points $\g^\Gamma$ is reductive in $\g$ (see \cite[Ch.~VII, \S1, no.~5]{Bou75}). That is, $\g^\Gamma$ is a reductive Lie algebra, which acts semisimply on $\g$ by the restriction of the adjoint action of $\g$. Let $I$ and $I_\Gamma$ be the sets of nodes of the Dynkin diagrams of $\g$ and $\g^\Gamma$, respectively. Fix a triangular decomposition $\g^\Gamma = \n_\Gamma^- \oplus \h_\Gamma \oplus \n_\Gamma^+$ of $\g^\Gamma$. Denote by $Q_\Gamma^+$ the positive root lattice associated to this triangular decomposition and $Q_\Gamma^- = -Q_\Gamma^+$ the negative root lattice. Similarly, let $\Lambda_\Gamma$ (respectively, $\Lambda_\Gamma^+$) be the corresponding weight lattice (respectively, set of dominant integral weights). \label{def:twisted-lattices} Relative to $\h_\Gamma$, choose a set of Chevalley generators $\{e^\Gamma_i, f^\Gamma_i, h^\Gamma_i\}_{i \in I_\Gamma}$. We have a decomposition
\begin{equation*}\label{eq:g-plus-minus} \ts \g=\bigoplus_{\alpha\in\h_\Gamma^*} \g^\alpha,\quad \g^\alpha=\left\{x\in\g\ |\ [h,x]=\alpha(h) x,\ h\in\h_\Gamma\right\}, \end{equation*}
with only finitely many $\g^\alpha$ nonzero. We use superscripts here to avoid confusion with the weight spaces of $\g$ considered as a $\g$-module. Let
\begin{equation}\label{eq:g-scalene} \ts \g^-=\bigoplus_{\alpha\in Q_\Gamma^- \setminus \{0\}} \g^\alpha,\quad \g^+= \bigoplus_{\alpha \notin Q_\Gamma^-} \g^\alpha. \end{equation}
Then clearly $\g=\g^-\oplus\g^0\oplus\g^+$, $\n^\pm_\Gamma \subseteq \g^\pm$, and $\g^0$ and $\g^-$ are Lie subalgebras of $\g$. Note that $\g^0$ is simply the centralizer $C_\g(\h_\Gamma)$ of $\h_\Gamma$ in $\g$. Moreover, since $\Gamma \g^\alpha = \g^\alpha$ for each $\alpha \in \h_\Gamma^*$, we see that $\Gamma \g^\pm = \g^\pm$ and $\Gamma\g^0 = \g^0$. Finally, it is not difficult to see that $\g^0$ is a self-normalizing subalgebra of $\g$. It may be that $\g^\Gamma=0$, in which case $\h_\Gamma=0$ and so $\g^0=\g$ is simple. However, we have the following result.
\begin{lem} \label{lem:g0-abelian} We have that $\g^0$ is abelian and hence $\g^\Gamma$ is nonzero if either of the following conditions holds:
\begin{enumerate}
\item \label{lem-item:Gamma-cylic-g0-abelian} The group $\Gamma$ is cyclic.
\item \label{lem-item:Gamma-diag-g0-abelian} The group $\Gamma$ acts on $\g$ by diagram automorphisms with respect to a Cartan subalgebra $\h$ of $\g$.
\end{enumerate}
\end{lem}
\begin{proof} That~\eqref{lem-item:Gamma-cylic-g0-abelian} implies the conclusion follows from~\cite[Lem.~8.1(b)]{Kac90}. If~\eqref{lem-item:Gamma-diag-g0-abelian} holds, then the action of $\Gamma$ factors through a cyclic group or the symmetric group $S_3$ on three letters (in type $D_4$). Since the fixed point subalgebra in type $D_4$ is the same for the full $S_3$-action as it is for the action of the subgroup $\Z_3$, we may in fact assume that $\Gamma$ is cyclic. Thus the result follows from the fact that~\eqref{lem-item:Gamma-cylic-g0-abelian} implies the conclusion.
\end{proof} \begin{assumption} \label{assumption} For the remainder of this paper we will assume that the subalgebra $\g^0$ is abelian. By Lemma~\ref{lem:g0-abelian}, this is true if $\Gamma$ is cyclic or acts on $\g$ by diagram automorphisms. \end{assumption} \begin{example} \label{eg:ABFP} An example showing that it is possible for $\g^\Gamma$ to be nonzero while $\g^0$ is nonabelian is given in \cite[Example~4.3.1]{ABFP09}. Let $\g$ be the Lie algebra $\mathfrak{s}$ defined there. Then $\g$ is simple of type $B_3$. Let $\Gamma \cong \Z_2 \times \Z_2 \times \Z_2$ be the group generated by the order two automorphisms $\sigma_1,\sigma_2,\sigma_3$ described in that reference. Then $\g^\Gamma \cong \mathfrak{sl}_2$ and $\g^0$ contains the subalgebra consisting of block diagonal $7 \times 7$ matrices with upper left block a $3 \times 3$ zero matrix and arbitrary skew-symmetric lower right $4 \times 4$ block. Thus $\g^0$ is not abelian. \end{example} \begin{defin}[Equivariant map algebra] The \emph{map (Lie) algebra} (or \emph{generalized current algebra}) associated to $\g$ and $A$ is the tensor product $\g \otimes A$, with Lie bracket given by extending \[ [u \otimes f, v \otimes g] = [u,v] \otimes fg,\quad u,v \in \g,\ f,g \in A, \] by linearity. Thus $\g \otimes A$ is the Lie algebra of algebraic maps from $X = \Spec A$ to $\g$ (identified with affine space), equipped with the pointwise Lie bracket. The associated \emph{equivariant map (Lie) algebra} is the Lie algebra of fixed points $(\g \otimes A)^\Gamma \subseteq \g \otimes A$, where we consider the diagonal action of $\Gamma$ on $\g \otimes A$. Thus $(\g \otimes A)^\Gamma$ is the subalgebra of $\g \otimes A$ consisting of those maps that are equivariant with respect to the action of $\Gamma$. \end{defin} Since $\Gamma$ respects the decomposition $\g = \g^- \oplus \g^0 \oplus \g^+$, we have a decomposition \begin{equation} \label{eq:EMA-triangular-decomp} (\g \otimes A)^\Gamma = (\g^- \otimes A)^\Gamma \oplus (\g^0 \otimes A)^\Gamma \oplus (\g^+ \otimes A)^\Gamma. \end{equation} We let \begin{equation}\label{eq:u-gamma-def} \cU_\Gamma \defeq \cU((\g \otimes A)^\Gamma),\quad \cU^0_\Gamma \defeq \cU((\g^0\otimes A)^\Gamma),\quad \cU^\pm_\Gamma \defeq \cU((\g^\pm\otimes A)^\Gamma). \end{equation} Let $\Xi$\label{Xi-def} be the character group of $\Gamma$. This is an abelian group, whose group operation we will write additively. Hence, $0$ is the character of the trivial one-dimensional representation, and if an irreducible representation affords the character $\xi$, then $-\xi$ is the character of the dual representation. If $\Gamma$ is abelian and acts on an algebra $B$ by automorphisms, it is well known that $B=\bigoplus_{\xi \in \Xi} B_\xi$ is a $\Xi$-grading, where $B_\xi$ is the isotypic component of type $\xi$. It follows that $(\g \otimes A)^\Gamma$ can be written as \begin{equation} \label{eq:EMSA-grading} (\g \otimes A)^\Gamma = \ts \bigoplus_{\xi \in \Xi} \, \g_\xi \otimes A_{-\xi}, \end{equation} since $\g = \bigoplus_\xi \g_\xi$ and $A=\bigoplus_\xi A_\xi$ are $\Xi$-graded and $(\g_\xi \otimes A_{\xi'})^\Gamma = 0$ if $\xi'\ne -\xi$. The decomposition \eqref{eq:EMSA-grading} is an algebra $\Xi$-grading. For the remainder of this section we assume that $\Gamma$ is abelian, acts freely on $X_\rat$, and acts by diagram automorphisms on $\g$. Then the set of nodes $I_\Gamma$ of the Dynkin diagram of $\g^\Gamma$ can be naturally identified with the set of $\Gamma$-orbits in $I$. We will often equate the two in what follows.
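To keep a concrete instance in mind, we note how the grading \eqref{eq:EMSA-grading} recovers the twisted loop algebras; the normalizations below are one standard choice (cf.\ \cite{NSS12}) and are recorded only for illustration. \begin{example}[Twisted loop algebras] Suppose $\kk$ is algebraically closed, $A = \kk[t^{\pm 1}]$ (so $X_\rat \cong \kk^\times$), and $\Gamma = \langle \sigma \rangle$ is cyclic of order $m$, acting on $\g$ by a diagram automorphism of order $m$ and on $A$ by $\sigma(t) = \eta^{-1} t$ for a fixed primitive $m$-th root of unity $\eta \in \kk$. Identify $\Xi$ with $\Z/m\Z$ in such a way that $\g_{\bar\jmath}$ is the $\eta^{j}$-eigenspace of $\sigma$ on $\g$ and $t^j \in A_{-\bar\jmath}$. Then \eqref{eq:EMSA-grading} becomes \[ \ts (\g \otimes A)^\Gamma = \bigoplus_{j \in \Z} \g_{\bar\jmath} \otimes t^j, \] which is the usual twisted loop algebra $L(\g,\sigma)$. The induced action of $\Gamma$ on $X_\rat \cong \kk^\times$ is by multiplication by powers of $\eta$, hence free, and $I_\Gamma$ is identified with the set of $\sigma$-orbits of nodes of the Dynkin diagram of $\g$. \end{example}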
\begin{defin}[$\cE$, $\cE^\Gamma$, $Y_\Gamma$, $\psi_\bx$, $\wt$, $\wt_\Gamma$, $\hei$, $\hei_\Gamma$, $\cE_\llift$, $\cE^\Gamma_\lambda$] \label{def:wt-ht} Let $\cE$ denote the set of finitely supported functions $\psi \colon X_\rat \to \Lambda^+$. Here the support of $\psi \in \cE$ is \[ \Supp \psi = \{x \in X_\rat\ |\ \psi(x) \ne 0\}. \] Since $\Gamma$ acts on $\g$ by diagram automorphisms, it acts naturally on $\Lambda^+$. We let $\cE^\Gamma$ denote the subset of $\cE$ consisting of those functions that are $\Gamma$-equivariant. For a $\Gamma$-invariant subset $Y$ of $X_\rat$, let $Y_\Gamma$ denote the set of subsets of $Y$ containing exactly one point from each $\Gamma$-orbit in $Y$. For $\psi \in \cE^\Gamma$ and $\bx \in (\Supp \psi)_\Gamma$, define \[ \psi_\bx \colon X_\rat \to \Lambda^+,\quad \psi_\bx(x) = \begin{cases} \psi(x) & \text{if } x \in \bx, \\ 0 & \text{if } x \not \in \bx. \end{cases} \] For $\psi \in \cE$, we define \[ \ts \wt \psi \defeq \sum_{x \in \Supp \psi} \psi(x) \in \Lambda^+. \] If $\psi \in \cE^\Gamma$, we define \[ \wt_\Gamma \psi \defeq (\wt \psi_\bx)|_{\h^\Gamma} \text{ for } \bx \in (\Supp \psi)_\Gamma \] (this definition is independent of the choice of $\bx$). For $\llift \in \Lambda$, write $\llift = \sum_{i \in I} k_i \alpha_i$, $k_i \in \Q$, as a linear combination of simple roots, and define \[ \ts \hei \llift \defeq \sum_{i \in I} k_i. \] Similarly, for $\lambda \in \Lambda_\Gamma$, write $\lambda = \sum_{\bi \in I_\Gamma} k_\bi \alpha_\bi$, $k_\bi \in \Q$, as a linear combination of simple roots, and define \[ \ts \hei_\Gamma \lambda \defeq \sum_{\bi \in I_\Gamma} k_\bi. \] For $\psi \in \cE$ (respectively, $\psi \in \cE^\Gamma$), we define \[ \hei \psi \defeq \hei (\wt \psi),\quad (\text{respectively, } \hei_\Gamma \psi \defeq \hei_\Gamma (\wt_\Gamma \psi)). \] For $\llift \in \Lambda^+$ (respectively, $\lambda \in \Lambda^+_\Gamma$), define $\cE_\llift \defeq \{\psi \in \cE\ |\ \wt \psi = \llift\}$ (respectively, $\cE^\Gamma_\lambda \defeq \{\psi \in \cE^\Gamma\ |\ \wt_\Gamma \psi = \lambda\}$). \end{defin} Since $\Gamma$ acts freely on $X_\rat$, the isotropy group of every point of $X_\rat$ is trivial. Having fixed a triangular decomposition of $\g$, the irreducible finite-dimensional representations of $\g$ are enumerated by the set $\Lambda^+$ of dominant integral weights (by associating to a representation its highest weight). Thus, by \cite[Th.~5.5]{NSS12}, the irreducible finite-dimensional $(\g \otimes A)^\Gamma$-modules are enumerated by the set $\cE^\Gamma$. \begin{defin}[Modules $V^\Gamma(\psi)$, $V(\psi)$] \label{def:EMA-irreducibles} For $\psi \in \cE^\Gamma$, we let $V^\Gamma(\psi)$ denote the corresponding irreducible finite-dimensional $(\g \otimes A)^\Gamma$-module. Similarly, for $\psi \in \cE$, we let $V(\psi)$ denote the corresponding irreducible finite-dimensional $(\g \otimes A)$-module. \end{defin} \section{Global Weyl modules} \label{sec:Weyl-modules} In this section, we introduce our main object of study, the global Weyl module. Let $\fa$ be a reductive Lie algebra. Then, given a triangular decomposition $\fa = \n^- \oplus \h \oplus \n^+$, the irreducible finite-dimensional modules of $\fa$ are naturally enumerated by the dominant integral weights of $\fa$. For a dominant integral weight $\lambda$ of $\fa$, let $V(\lambda)$ denote the irreducible $\fa$-module of highest weight $\lambda$ and let $v_\lambda \in V(\lambda)$ be a highest weight vector.
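For orientation, we recall the most basic instance of this parametrization (a standard fact, included only for reference): if $\fa = \mathfrak{sl}_2$ with its usual triangular decomposition, then the dominant integral weights are the weights $m\omega$ for $m \in \N$, where $\omega$ is the fundamental weight, and $V(m\omega)$ is the $(m+1)$-dimensional irreducible module, whose highest weight vector $v_{m\omega}$ satisfies $\n^+ v_{m\omega} = 0$ and $h v_{m\omega} = m v_{m\omega}$ for the standard basis element $h$ of $\h$.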
\begin{defin}[Partial order on $\Irr \fa$] Suppose that $V_1$ and $V_2$ are finite-dimensional irreducible $\fa$-modules. If we fix a triangular decomposition $\fa = \n^- \oplus \h \oplus \n^+$, then $V_i$ has a highest weight $\lambda_i$, $i=1,2$. We say that $V_1 \le V_2$ if $\lambda_2 - \lambda_1$ lies in the positive root lattice of $\fa$. This partial order is independent of the particular choice of triangular decomposition. \details{This follows from \cite[Ch.~VIII, \S5, no.~3, Prop.~5]{Bou75}.} It induces a partial order on the set $\Irr \fa$ of isomorphism classes of finite-dimensional irreducible $\fa$-modules. \end{defin} Let $V$ be a direct sum of irreducible finite-dimensional $\fa$-modules. Thus we have a decomposition \begin{equation} \ts V = \bigoplus_{\sigma \in \Irr \fa} V_\sigma, \end{equation} where $V_\sigma$ is the $\sigma$ isotypic component of $V$ for $\sigma \in \Irr \fa$. For $\tau \in \Irr \fa$, we define \begin{equation} \ts V_{\not \le \tau} \defeq \bigoplus_{\sigma \in \Irr \fa,\, \sigma \not \le \tau} V_\sigma. \end{equation} We will identify $\g^\Gamma$ with the subalgebra $\g^\Gamma \otimes \kk = (\g \otimes \kk)^\Gamma \subseteq (\g \otimes A)^\Gamma$. For a $(\g \otimes A)^\Gamma$-module $V$ and $\lambda \in \h_\Gamma^*$ (for some Cartan subalgebra $\h_\Gamma$ of $\g^\Gamma$), we denote by $V_\lambda$ the $\lambda$ weight space of $V$, where $V$ is considered as a $\g^\Gamma$-module by restriction. \begin{defin}[Categories $\cI^\Gamma$, $\cI^\Gamma_{\le \tau}$ and $\cI^\Gamma_{\le \lambda}$]\label{def:I-Gamma} Let $\cI^\Gamma$ denote the full subcategory of the category of $(\g \otimes A)^\Gamma$-modules whose objects are the modules whose restriction to $\g^\Gamma$ is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules. For $\tau \in \Irr \g^\Gamma$, let $\cI^\Gamma_{\le\tau}$ denote the full subcategory of $\cI^\Gamma$ whose objects are those modules whose $\sigma$ isotypic components are zero for $\sigma \in \Irr \g^\Gamma$, $\sigma \not \le \tau$. That is, the objects of $\cI^\Gamma_{\le\tau}$ are $(\g \otimes A)^\Gamma$-modules $V$ whose decomposition into isotypic components is of the form $V = \bigoplus_{\sigma \le \tau} V_\sigma$. If we have fixed a triangular decomposition of $\g^\Gamma$, then $\Irr \g^\Gamma$ can be identified with the set $\Lambda^+_\Gamma$ of dominant integral weights for $\g^\Gamma$. In this case, we will sometimes write $\cI^\Gamma_{\le \lambda}$ instead of $\cI^\Gamma_{\le \tau}$, where $\tau$ is the isomorphism class of the $\g^\Gamma$-module of highest weight $\lambda \in \Lambda^+_\Gamma$. If $\Gamma$ is the trivial group, we often omit the superscripts $\Gamma$. \end{defin} \begin{lem}\label{lem:tensor-algebra} If $V$ is a direct sum of irreducible finite-dimensional $\fa$-modules, then so is the tensor algebra $T(V)\defeq\bigoplus_{n=0}^\infty V^{\otimes n}$. \end{lem} \begin{proof} The action of $\fa$ preserves each summand $V^{\otimes n}$, which is a direct sum of tensor products of irreducible finite-dimensional $\fa$-modules. Each such tensor product is completely reducible, since the center of the reductive Lie algebra $\fa$ acts semisimply on it; hence each $V^{\otimes n}$, and therefore $T(V)$, is a direct sum of irreducible finite-dimensional $\fa$-modules. \end{proof} \begin{lem} \label{lem:induced-completely-reducible} If $V$ is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules, then the induced module \begin{equation} \label{eq:P-Gamma-V} P^\Gamma(V)\defeq \cU_\Gamma \otimes_{\cU(\g^\Gamma)} V \end{equation} is a projective object in the category $\cI^\Gamma$. \end{lem} \begin{proof} Consider the action of $\g^\Gamma$ on $\g\otimes A$ given by (restriction of) the adjoint action on the first factor.
Since $\g$ is a completely reducible $\g^\Gamma$-module, it follows that $\g \otimes A$ is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules. It is easily checked that $\g^\Gamma$ preserves the subalgebra $(\g\otimes A)^\Gamma$, which therefore also has this property. Then, by Lemma~\ref{lem:tensor-algebra}, we see that $T((\g\otimes A)^\Gamma)$, and hence $\cU_\Gamma$, are direct sums of irreducible finite-dimensional $\g^\Gamma$-modules. Since the tensor product is distributive over direct sums, $\cU_\Gamma \otimes_\kk V$ is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules, hence so is its quotient $P^\Gamma(V)$. Thus $P^\Gamma(V) \in \Ob \cI^\Gamma$. The fact that $P^\Gamma(V)$ is projective in this category is a special case of a standard result proved in \cite[Lem.~2]{Hoc56}. \end{proof} \begin{defin}[Twisted global Weyl module $W^\Gamma(V)$] \label{def:twisted-global-Weyl-module} Let $V$ be an irreducible finite-dimensional $\g^\Gamma$-module. The corresponding \emph{(twisted) global Weyl module} is the $(\g \otimes A)^\Gamma$-module \[ W^\Gamma(V) \defeq P^\Gamma(V) / \big( \cU_\Gamma (P^\Gamma(V)_{\not \le [V]}) \big), \] where $[V] \in \Irr \g^\Gamma$ is the isomorphism class of $V$. Up to isomorphism, $W^\Gamma(V)$ depends only on the isomorphism class of $V$. If $\Gamma$ is trivial, we will often drop the superscript $\Gamma$. It follows immediately from Lemma~\ref{lem:induced-completely-reducible} that, for all $\tau \in \Irr \g^\Gamma$ and $V \in \tau$, we have $W^\Gamma(V) \in \Ob \cI^\Gamma_{\le\tau}$. \end{defin} Given a triangular decomposition $\g^\Gamma = \n_\Gamma^- \oplus \h_\Gamma \oplus \n_\Gamma^+$ of $\g^\Gamma$, we use the notation $V^\Gamma(\lambda)$ to denote the irreducible $\g^\Gamma$-module of highest weight $\lambda \in \Lambda_\Gamma^+$ and $v^\Gamma_\lambda$ to denote a highest weight vector in this module. \label{notation:g-Gamma-irred-modules} If $\Gamma$ is trivial, we omit the $\Gamma$ superscripts. \begin{lem}\label{global-weyl-triang} For any dominant integral weight $\lambda \in \Lambda_\Gamma^+$, we have \[ W^\Gamma(\lambda) \defeq W^\Gamma(V^\Gamma(\lambda)) = \big( \cU_\Gamma \otimes_{\cU(\g^\Gamma)} V^\Gamma(\lambda) \big)/ \big( \cU_\Gamma (\g^+ \otimes A)^\Gamma \otimes v^\Gamma_\lambda \big). \] We let $w^\Gamma_\lambda$ denote the image of $1 \otimes v_\lambda^\Gamma$ in the above quotient. Then $W^\Gamma(\lambda)$ is generated by $w^\Gamma_\lambda$. \end{lem} \begin{proof} Let $V = V^\Gamma(\lambda)$ and $v=v^\Gamma_\lambda$. We need to show that \begin{equation} \label{eq:twisted-global-Weyl-equivalence} \cU_\Gamma(\g^+ \otimes A)^\Gamma \otimes v = \cU_\Gamma \big( P^\Gamma(V)_{\not \le [V]} \big). \end{equation} It is clear from the definition of $\g^+$ that we have a weight decomposition \[ \ts (\g^+ \otimes A)^\Gamma \otimes v = \bigoplus_{\mu \in \Lambda_\Gamma,\, \mu \not \le \lambda} \left( (\g^+ \otimes A)^\Gamma \otimes v \right)_\mu. \] Thus the left-hand side of~\eqref{eq:twisted-global-Weyl-equivalence} is contained in the right-hand side. It remains to prove the reverse inclusion. For this, it suffices to show that \[ P^\Gamma(V)_\tau \subseteq \cU_\Gamma (\g^+ \otimes A)^\Gamma \otimes v \] for all $\tau \not \le [V]$. Now, \[ P^\Gamma(V) = \cU_\Gamma^- \cU_\Gamma^0 \cU_\Gamma^+ \otimes v = \big( \cU_\Gamma^- \cU_\Gamma^0 \otimes v \big) + \big( \cU_\Gamma (\g^+ \otimes A)^\Gamma \otimes v \big) \] and the first summand is contained in $\bigoplus_{\mu \le \lambda} P^\Gamma(V)_\mu$.
The second summand thus contains all submodules of $P^\Gamma(V)$ generated by vectors of weight $\mu$ for $\mu \not \le \lambda$, hence it contains $P^\Gamma(V)_\tau$. \end{proof} \begin{rem} In the case where $\Gamma$ is trivial, it follows from Lemma~\ref{global-weyl-triang} that Definition~\ref{def:twisted-global-Weyl-module} agrees with the usual definition of the untwisted global Weyl module (see \cite[\S3.3]{CFK10}). Furthermore, Definition~\ref{def:twisted-global-Weyl-module} reduces to the definition of the twisted global Weyl module in \cite[Def.~3.3]{FMS11} when $A=\C[t^{\pm 1}]$ and $\Gamma$ acts on $\g$ by diagram automorphisms. Outside of these cases, the general definition of the global Weyl module does not seem to have appeared previously in the literature. \end{rem} We conclude this section by giving a presentation of the global Weyl module in terms of generators and relations. \begin{prop} \label{weyl:gen-rel} The global Weyl module $W^\Gamma(\lambda)$ is generated by the vector $w_\lambda^\Gamma$ with defining relations \begin{equation} \label{eq:global-Weyl-relations} (\g^+\otimes A)^\Gamma w_\lambda^\Gamma=0,\quad (h\otimes 1)w_\lambda^\Gamma=\lambda(h)w_\lambda^\Gamma,\quad (f^\Gamma_i \otimes 1)^{\lambda(h^\Gamma_i)+1}w_\lambda^\Gamma=0,\ \ h\in\h_\Gamma, \ \ i\in I_\Gamma. \end{equation} \end{prop} \begin{proof} That $w_\lambda^\Gamma$ satisfies the latter two relations follows immediately from the fact that $v_\lambda^\Gamma$ does, and for the first relation we only need to observe that the weights of $W^\Gamma(\lambda)$ lie in $\lambda-Q_\Gamma^+$. To see that these are all the relations, let $W$ be the $(\g\otimes A)^\Gamma$-module generated by a vector $w$ with the given relations, so that there is a surjective homomorphism of $(\g\otimes A)^\Gamma$-modules $\pi_1 \colon W \to W^\Gamma(\lambda)$, extending the assignment $w\mapsto w_\lambda^\Gamma$. By the relations in~\eqref{eq:global-Weyl-relations}, the vector $w \in W$ generates a $\g^\Gamma$-submodule of $W$ isomorphic to a quotient of $V \defeq V^\Gamma(\lambda)$. Thus we have a surjective homomorphism \[ \pi_2 \colon P^\Gamma(V) \to W,\quad u_1 \otimes_{\cU(\g^\Gamma)} u_2 v^\Gamma_\lambda \mapsto u_1u_2w,\quad u_1 \in \cU_\Gamma,\ u_2 \in \cU(\g^\Gamma). \] Since the $\g^\Gamma$-weights of $W$ are bounded above by $\lambda$, it follows that $P^\Gamma(V)_{\nleq[V^\Gamma(\lambda)]}\subseteq\ker(\pi_2)$. Thus $\pi_2$ induces a map $W^\Gamma(\lambda) \to W$ inverse to $\pi_1$. \end{proof} \section{The Weyl functor} \label{sec:Weyl-functor} In this section we extend the Weyl functor of \cite{CFK10}, defined there in the untwisted setting, to the twisted setting of equivariant map algebras. Throughout this section we fix a triangular decomposition $\g^\Gamma = \n_\Gamma^- \oplus \h_\Gamma \oplus \n_\Gamma^+$. The following lemma is a modification of the construction in \cite[\S3.4]{CFK10}. Recall the definition of $\cU^0_\Gamma$ given in~\eqref{eq:u-gamma-def}. \begin{lem} The assignment \[ (uw_\lambda^\Gamma)a \defeq uaw_\lambda^\Gamma, \quad \text{for all } a \in \cU_\Gamma^0,\ u \in \cU_\Gamma, \] defines a right action of $\cU_\Gamma^0$ on $W^\Gamma(\lambda)$. \end{lem} \begin{proof} To prove that this action is well-defined, we must prove the implication \[ uw^\Gamma_\lambda = u'w^\Gamma_\lambda \implies uaw^\Gamma_\lambda = u'aw^\Gamma_\lambda \] for all $u,u' \in \cU_\Gamma$ and $a \in \cU^0_\Gamma$.
This is equivalent to proving that \[ (u-u')w^\Gamma_\lambda = 0 \implies (u-u')aw^\Gamma_\lambda =0 \] for all $u,u' \in \cU_\Gamma$ and $a \in \cU^0_\Gamma$. For this, it suffices to show that the vectors $aw_\lambda^\Gamma, a\in \cU_\Gamma^0$, satisfy the defining relations of $w_\lambda^\Gamma$ given in Proposition~\ref{weyl:gen-rel}. It follows from the definition of $\g^0$ that $aw_\lambda^\Gamma \in W^\Gamma(\lambda)_\lambda$. Hence each such vector must lie in the $[V^\Gamma(\lambda)]$-isotypic component of $W^\Gamma(\lambda)$, and so the relation $(f_i^\Gamma \otimes 1)^{\lambda(h_i^\Gamma)+1 }aw_\lambda^\Gamma=0$ must hold. Finally, $(\g^+\otimes A)^\Gamma aw_\lambda^\Gamma=0$ since the weights of $W^\Gamma(\lambda)$ are bounded above by $\lambda$. \end{proof} \begin{defin}[Algebra $\bA_\Gamma^\lambda$] \label{def:A-Gamma-lambda} The set $\Ann_{\cU^0_\Gamma}(w_\lambda^\Gamma)=\left\{u\in \cU^0_\Gamma\ |\ uw_\lambda^\Gamma=0\right\}$ is an ideal in the (commutative) algebra $\cU^0_\Gamma$, and we define \begin{equation} \bA_\Gamma^\lambda \defeq \cU^0_\Gamma / \Ann_{\cU^0_\Gamma}(w_\lambda^\Gamma). \end{equation} \end{defin} It follows from Definition~\ref{def:A-Gamma-lambda} that $W^\Gamma(\lambda)$ is a $(\cU_\Gamma, \bA_\Gamma^\lambda)$-bimodule. \begin{lem} \label{lem:extend-inner-aut} Every inner automorphism of $\g^\Gamma$ can be extended to an inner automorphism of $\g$ commuting with the action of $\Gamma$. \end{lem} \begin{proof} Let $\sigma$ be an inner automorphism of $\g^\Gamma$. Then $\sigma$ acts trivially on the center of $\g^\Gamma$ and so it suffices to extend the restriction of $\sigma$ to the semisimple part $[\g^\Gamma,\g^\Gamma]$ of $\g^\Gamma$. This restriction can be written in the form $e^{\ad x_1} e^{\ad x_2} \dotsm e^{\ad x_n}$, where $x_i \in ([\g^\Gamma,\g^\Gamma])_{\beta_i}$, $i=1,\dotsc,n$, and each $\beta_i$ is a (nonzero) root of $[\g^\Gamma,\g^\Gamma]$ (see, for example, \cite[Th.~27.5(e)]{Hum75}). Since $\g$ is a sum of finite-dimensional irreducible $\g^\Gamma$-modules, the action of $\ad x_i$ on $\g$ is nilpotent (since it increases weights by $\beta_i$) for all $i=1,\dotsc,n$. Thus $\sigma$ extends to an inner automorphism of $\g$. Furthermore, since $x_i \in \g^\Gamma$ for each $i=1,\dotsc,n$, this automorphism commutes with the action of $\Gamma$ on $\g$. \end{proof} \begin{prop} Up to isomorphism, $\bA_\Gamma^\lambda$ depends only on the isomorphism class of $V^\Gamma(\lambda)$ and not on the choice of triangular decomposition of $\g^\Gamma$. \end{prop} \begin{proof} Let $D_i = (\n_{D_i}^\pm, \h_{D_i})$, $i=1,2$, be two triangular decompositions of $\g^\Gamma$ and fix a finite-dimensional irreducible $\g^\Gamma$-module $V$. For $i=1,2$, let $\lambda_i$ be the highest weight of $V$ with respect to the triangular decomposition $D_i$ and let $v_i$ be a corresponding highest weight vector. By \cite[Ch.~VIII, \S5, no.~3, Prop.~5]{Bou75}, there exists an inner automorphism $\sigma$ of $\g^\Gamma$ such that $\sigma(\h_{D_1})=\h_{D_2}$ and $\sigma$ carries the positive root spaces of $D_1$ to the positive root spaces of $D_2$. By Lemma~\ref{lem:extend-inner-aut}, we can extend $\sigma$ to an inner automorphism of $\g$, also denoted $\sigma$, which commutes with the action of $\Gamma$. Thus, $\sigma$ induces an automorphism of $(\g \otimes A)^\Gamma$. For a $\g^\Gamma$-module (or $(\g \otimes A)^\Gamma$-module) $W$, let $W^\sigma$ denote the module obtained from $W$ by twisting the action by $\sigma$.
That is, $W^\sigma$ is equal to $W$ as a vector space, with action given by \[ (x,w) \mapsto \sigma^{-1}(x)w,\quad x \in \g^\Gamma\ \text{(respectively, $x \in (\g \otimes A)^\Gamma$)},\ w \in W^\sigma. \] Then, in $V^\sigma$, $v_1$ is a highest weight vector with respect to $D_2$ of highest weight $\lambda_1 \circ \sigma^{-1}$. Since $\sigma$ is inner, $V^\sigma \cong V$ (see, for instance, \cite[Ch.~VIII, \S7, no.~2, Rem.~1]{Bou75}) and so $\lambda_1 \circ \sigma^{-1} = \lambda_2$. Thus we have an isomorphism $V^\sigma \cong V$ determined by $v_1 \mapsto v_2$. This isomorphism maps $uv_1$ to $\sigma(u)v_2$ for all $u \in \cU(\g^\Gamma)$. We also have an isomorphism $(\cU_\Gamma)^\sigma \cong \cU_\Gamma$ of $(\cU_\Gamma,\cU(\g^\Gamma))$-bimodules, where $(\cU_\Gamma)^\sigma$ has the action twisted on both sides. This isomorphism maps $u$ to $\sigma(u)$ for all $u \in (\cU_\Gamma)^\sigma$. Thus we have an isomorphism \begin{equation} \label{eq:twisting-isom} (\cU_\Gamma)^\sigma \otimes_{\cU(\g^\Gamma)} V^\sigma \cong \cU_\Gamma \otimes_{\cU(\g^\Gamma)} V,\quad u \otimes v_1 \mapsto \sigma(u) \otimes v_2, \end{equation} of left $\cU_\Gamma$-modules. Now, $\sigma(\g^0_{D_1}) = \g^0_{D_2}$, where $\g^0_{D_i}$ denotes the centralizer $C_\g(\h_{D_i})$ for $i=1,2$. It follows that $\sigma \big( \cU((\g^0_{D_1} \otimes A)^\Gamma) \big) = \cU((\g^0_{D_2} \otimes A)^\Gamma)$. Furthermore, the isomorphism~\eqref{eq:twisting-isom} implies that $\sigma \big( \Ann_{\cU((\g^0_{D_1} \otimes A)^\Gamma)} (1 \otimes v_1) \big) = \Ann_{\cU((\g^0_{D_2} \otimes A)^\Gamma)} (1 \otimes v_2)$, completing the proof. \end{proof} Each weight space $W^\Gamma(\lambda)_\mu$, $\mu \in \Lambda_\Gamma$, is a right $\bA_\Gamma^\lambda$-submodule. In particular, $W^\Gamma(\lambda)_\lambda$ is a right $\bA_\Gamma^\lambda$-module and \[ W^\Gamma(\lambda)_\lambda \cong \bA_\Gamma^\lambda \quad \text{as right $\bA_\Gamma^\lambda$-modules}. \] \begin{defin}[Twisted Weyl functor $\bW^\Gamma_\lambda$] \label{def:twisted-Weyl-functor} For $\lambda \in \Lambda^+_\Gamma$, we define the right exact \emph{(twisted) Weyl functor} (cf.\ \cite[\S3.4]{CFK10}, \cite[\S4.1]{FMS11}) \[ \bW^\Gamma_\lambda \colon \bA_\Gamma^\lambda\text{-mod} \to \cI^\Gamma_{\le\lambda},\quad \bW^\Gamma_\lambda M = W^\Gamma(\lambda) \otimes_{\bA_\Gamma^\lambda} M,\quad \bW^\Gamma_\lambda f = 1 \otimes f, \] for $M \in \Ob \bA_\Gamma^\lambda$-mod, $f \in \Mor \bA_\Gamma^\lambda$-mod. That $\bW^\Gamma_{\lambda} M \in \Ob \cI^\Gamma_{\le\lambda}$ for all $M \in \Ob \bA_\Gamma^\lambda$-mod follows from the fact that $W^\Gamma(\lambda) \in \Ob \cI^\Gamma_{\le\lambda}$. \details{In particular, if $uw_\lambda^\Gamma \otimes m\in \mathbf W_\lambda^\Gamma M$ for some $u \in \cU_\Gamma$ and $m\in M$, then $\cU(\g^\Gamma)(uw_\lambda^\Gamma \otimes m)\subseteq (\cU(\g^\Gamma)uw_\lambda^\Gamma)\otimes m$, which is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules.} \end{defin} \begin{lem} \label{lem:Ann-kills-V_lambda} For $\lambda \in \Lambda_\Gamma^+$ and $V \in \Ob \cI^\Gamma_{\le\lambda}$, we have $(\Ann_{\cU^0_\Gamma}(w_\lambda^\Gamma)) V_\lambda = 0$. \end{lem} \begin{proof} Suppose $v' \in V_\lambda$ and let $V' = \cU(\g^\Gamma)v'$ be the $\g^\Gamma$-submodule of $V$ generated by $v'$. Since $V\in\Ob\cI^\Gamma_{\le\lambda}$, it follows that $V'$ is finite-dimensional. Since $v'$ is a highest weight vector and $V$ is a sum of irreducible finite-dimensional $\g^\Gamma$-modules, we see that $V' \cong V^\Gamma(\lambda)$. 
Identifying $V'$ with $V^\Gamma(\lambda)$, we therefore have a linear map \[ \varphi \colon\cU_\Gamma \otimes_{\cU(\g^\Gamma)} V^\Gamma(\lambda) \to V,\ u \otimes v \mapsto uv,\quad u \in \cU_\Gamma,\ v \in V^\Gamma(\lambda), \] and \[ \cU_\Gamma(\g^+\otimes A)^\Gamma\otimes v_\lambda^\Gamma \subseteq \ker \varphi. \] Thus $\varphi$ descends to a homomorphism $\bar{\varphi} \colon W^\Gamma(\lambda)\to V$ mapping $w_\lambda^\Gamma$ to $v'$. Hence if $u \in \Ann_{\cU^0_\Gamma}(w_\lambda^\Gamma)$, we see that $uv'= \bar{\varphi}(u w_\lambda^\Gamma)=\bar{\varphi}(0)=0$. \end{proof} Note that \[ \bW^\Gamma_\lambda \bA_\Gamma^\lambda \cong W^\Gamma(\lambda) \quad \text{as $(\g \otimes A)^\Gamma$-modules}, \] and \[ (\bW^\Gamma_\lambda M)_\mu = W^\Gamma(\lambda)_\mu \otimes_{\bA_\Gamma^\lambda} M \quad \text{as vector spaces}, \] for $\lambda \in \Lambda_\Gamma^+$, $\mu \in \Lambda_\Gamma$ and $M \in \Ob \bA_\Gamma^\lambda$-mod. \begin{defin}[Twisted restriction functor $\bR^\Gamma_\lambda$]\label{def:R-Gamma-lambda} By Lemma~\ref{lem:Ann-kills-V_lambda}, the left action of $\cU_\Gamma^0$ on $V \in \Ob \cI^\Gamma_{\le\lambda}$ induces a left action of $\bA_\Gamma^\lambda$ on $V_\lambda$. We denote the resulting $\bA_\Gamma^\lambda$-module by $\bR^\Gamma_\lambda V$. For $\psi \in \Hom_{\cI^\Gamma_{\le\lambda}}(V,V')$, the restriction $\bR^\Gamma_\lambda \psi \colon V_\lambda \to V'_\lambda$ is a morphism of $\bA_\Gamma^\lambda$-modules. These maps define the \emph{(twisted) restriction functor} $\bR^\Gamma_\lambda \colon \cI^\Gamma_{\le\lambda} \to \bA_\Gamma^\lambda$-mod. The functor $\bR^\Gamma_\lambda$ is exact since restriction to a weight space is exact. \end{defin} The following is a generalization of results that are known in the untwisted case (see \cite[\S3.6 and Prop.~5]{CFK10}). \begin{prop} \label{prop:right-adjoint} The functors $\bW^\Gamma_\lambda$ and $\bR^\Gamma_\lambda$ have the following properties: \begin{enumerate} \item \label{prop-item:RW=1} $\bR^\Gamma_\lambda \bW^\Gamma_\lambda \cong \Id_{\bA_\Gamma^\lambda\textup{-mod}}$ (isomorphism of functors); \item $\bW^\Gamma_\lambda$ is left adjoint to $\bR^\Gamma_\lambda$; \item \label{lem-item:W-projectives} the functor $\bW^\Gamma_\lambda$ maps projective objects to projective objects. \end{enumerate} \end{prop} \begin{proof} \begin{asparaenum} \item For $M \in \Ob \bA_\Gamma^\lambda$-mod, we have the following isomorphisms of left $\bA_\Gamma^\lambda$-modules: \[ \bR^\Gamma_\lambda \bW^\Gamma_\lambda M = (\bW^\Gamma_\lambda M)_\lambda = W^\Gamma(\lambda)_\lambda \otimes_{\bA_\Gamma^\lambda} M \cong \bA_\Gamma^\lambda \otimes_{\bA_\Gamma^\lambda} M \cong M. \] \item We must define natural transformations \[ \epsilon \colon \bW^\Gamma_\lambda \bR^\Gamma_\lambda\Rightarrow \Id_{\cI^\Gamma_{\le\lambda}},\quad \eta \colon \Id_{\bA_\Gamma^\lambda\text{-mod}}\Rightarrow \bR^\Gamma_\lambda\bW^\Gamma_\lambda, \] such that, for each $M\in\Ob\bA_\Gamma^\lambda$-mod and $V\in\Ob\cI^\Gamma_{\le\lambda}$, we have the equality of morphisms \begin{equation}\label{eq:adjoint} \Id_{\bW^\Gamma_\lambda M}= \epsilon_{\bW^\Gamma_\lambda M}\circ\bW^\Gamma_\lambda(\eta_M), \quad \Id_{\bR^\Gamma_\lambda V}=\bR^\Gamma_\lambda(\epsilon_V) \circ\eta_{\bR^\Gamma_\lambda V}. \end{equation} For $M\in\Ob\bA_\Gamma^\lambda$-mod, define \[ \eta_M \colon M\to \bR^\Gamma_\lambda\bW^\Gamma_\lambda M, \quad m\mapsto w^\Gamma_\lambda\otimes m. 
\] It is straightforward to verify that $\eta_M$ is natural in $M$ and thus the collection $\{\eta_M\ |\ M\in\Ob\bA_\Gamma^\lambda\text{-mod}\}$ does indeed define a natural transformation $\eta \colon \Id_{\bA_\Gamma^\lambda\text{-mod}}\Rightarrow \bR^\Gamma_\lambda\bW^\Gamma_\lambda$. \details{To see that $\eta_M$ is natural in $M$, suppose $f:M\to N$ is a morphism of $\bA^\lambda_\Gamma$-modules. Then, for $m\in M$, we have \[ \bR^\Gamma_\lambda \bW^\Gamma_\lambda(f)(\eta_M(m))=\bR^\Gamma_\lambda\bW^\Gamma_\lambda(f)(w^\Gamma_\lambda\otimes m)=(1\otimes f)_\lambda(w^\Gamma_\lambda\otimes m)=w^\Gamma_\lambda\otimes f(m) = \eta_N(f(m)), \] proving the commutativity of the following diagram: \[ \entrymodifiers={+!!<0pt,\fontdimen22\textfont2>}\xymatrixcolsep{5pc}\xymatrix{ M\ar[r]^{f} \ar[d]_{\eta_M}&N\ar[d]^{\eta_N} \\ \bR^\Gamma_\lambda\bW^\Gamma_\lambda M\ar[r]_{\bR^\Gamma_\lambda\bW^\Gamma_\lambda(f)}& \bR^\Gamma_\lambda\bW^\Gamma_\lambda N} \] Hence the collection $\{\eta_M\ |\ M\in\Ob\bA_\Gamma^\lambda\text{-mod}\}$ does indeed define a natural transformation $\eta \colon \Id_{\bA_\Gamma^\lambda\text{-mod}}\Rightarrow \bR^\Gamma_\lambda\bW^\Gamma_\lambda$.} For $V \in \Ob \cI^\Gamma_{\le \lambda}$, define $\epsilon_V:\bW^\Gamma_\lambda\bR^\Gamma_\lambda V\to V$ as follows. First, we regard $W^\Gamma(\lambda)\otimes_{\kk} \bR^\Gamma_\lambda V$ as a left $(\g\otimes A)^\Gamma$-module via the action of $(\g\otimes A)^\Gamma$ on $W^\Gamma(\lambda)$. Then it follows by Proposition~\ref{weyl:gen-rel} that the assignment $\epsilon_1 \colon W^\Gamma(\lambda)\otimes_{\kk} \bR^\Gamma_\lambda V\to V$ given by $u w^\Gamma_\lambda\otimes v\mapsto uv,\ u\in \cU_\Gamma$, is a well-defined map of left $(\g\otimes A)^\Gamma$-modules. To see that this map factors through to a map $\epsilon_V:\bW^\Gamma_\lambda \bR^\Gamma_\lambda V\to V$ we observe that, for $a \in \bA^\lambda_\Gamma$ and $u\in \cU_\Gamma$, we have \[ \epsilon_1(u w^\Gamma_\lambda a \otimes v) = \epsilon_1(u a w^\Gamma_\lambda \otimes v) = u av = \epsilon_1(u w^\Gamma_\lambda \otimes a v). \] Again, it is straightforward to check that $\epsilon_V$ is natural in $V$. \details{Thus $\epsilon_V$ is given by extending by linearity the assignment $uw^\Gamma_\lambda\otimes v\mapsto uv$, where $u\in \cU_\Gamma$ and $v\in V_\lambda$. For $f \colon V\to U$ in $\Mor\cI^\Gamma_{\le\lambda}$, we have \[ \epsilon_U(\bW^\Gamma_\lambda \bR^\Gamma_\lambda (f)(uw^\Gamma_\lambda \otimes v)) = \epsilon_U(uw^\Gamma_\lambda\otimes f(v))=uf(v) = f(uv) = f(\epsilon_V(uw^\Gamma_\lambda\otimes v)). \] Thus the diagram \[ \entrymodifiers={+!!<0pt,\fontdimen22\textfont2>}\xymatrixcolsep{5pc}\xymatrix{ \bW^\Gamma_\lambda\bR^\Gamma_\lambda V\ar[r]^{\bW^\Gamma_\lambda\bR^\Gamma_\lambda(f)} \ar[d]_{\epsilon_V}&\bW^\Gamma_\lambda\bR^\Gamma_\lambda U\ar[d]^{\epsilon_U} \\ V\ar[r]_{f}&U} \] commutes and so $\epsilon$ is natural in $V$.} Finally, we check the equalities in \eqref{eq:adjoint}. For $M \in \Ob \bA_\Gamma^\lambda$-mod and $m\in M$, we have \[ (\epsilon_{\bW^\Gamma_\lambda M}\circ \bW^\Gamma_\lambda(\eta_M))(uw^\Gamma_\lambda\otimes m)=\epsilon_{\bW^\Gamma_\lambda M}(uw^\Gamma_\lambda\otimes w^\Gamma_\lambda\otimes m)=uw^\Gamma_\lambda\otimes m, \] and for $V\in \Ob \cI^\Gamma_{\le\lambda}$, $m \in \bR^\Gamma_\lambda V$, we have \[ (\bR^\Gamma_\lambda(\epsilon_V) \circ \eta_{\bR^\Gamma_\lambda V})(m) = \bR^\Gamma_\lambda(\epsilon_V)(w^\Gamma_\lambda\otimes m)=m, \] which establishes \eqref{eq:adjoint}.
\item This follows from the fact that $\bW^\Gamma_\lambda$ is left adjoint to the exact functor $\bR^\Gamma_\lambda$ (see, for example, \cite[Prop.~II.10.2]{HS97}). \qedhere \end{asparaenum} \end{proof} \begin{cor} \label{cor:global-Weyl-module-free} For $\lambda \in \Lambda_\Gamma^+$, the global Weyl module $W^\Gamma(\lambda)$ is a projective object in $\cI^\Gamma_{\le \lambda}$. \end{cor} \begin{proof} This follows from Proposition~\ref{prop:right-adjoint}\eqref{lem-item:W-projectives} and the fact that $\bW^\Gamma_\lambda \bA^\lambda_\Gamma \cong W^\Gamma(\lambda)$. \end{proof} By Proposition~\ref{prop:right-adjoint}\eqref{prop-item:RW=1}, we have \begin{equation} \label{eq:W=WRW} \bW^\Gamma_\lambda M\cong\bW^\Gamma_\lambda \bR^\Gamma_\lambda \bW^\Gamma_\lambda M \quad \text{for all } M \in \Ob \bA^\lambda_\Gamma\text{-mod}. \end{equation} The following theorem gives a homological characterization of this property. In the untwisted case (i.e.\ when $\Gamma$ is trivial), it was proved in \cite[Th.~1]{CFK10}. \begin{theo} \label{theo:WR=1-hom-char} Let $V\in\Ob\cI^\Gamma_{\leq\lambda}$. Then $V\cong \bW^\Gamma_\lambda\bR^\Gamma_\lambda V$ if and only if, for each $U\in\Ob\cI^\Gamma_{\leq\lambda}$ with $U_\lambda=0$, we have \[ \Hom_{\cI^\Gamma_{\leq\lambda}}(V,U)=0,\quad\Ext^1_{\cI^\Gamma_{\leq\lambda}}(V,U)=0. \] \end{theo} \begin{proof} First, let $V\in\Ob\cI^\Gamma_{\leq\lambda}$ with $V\cong\bW^\Gamma_\lambda\bR^\Gamma_\lambda V$. Suppose that there is a homomorphism of $(\g\otimes A)^\Gamma$-modules $\varphi \colon V\to U$, where $U\in\Ob\cI^\Gamma_{\leq\lambda}$ with $U_\lambda=0$. By hypothesis, $V$ is generated as a $\cU_\Gamma$-module by $V_\lambda = \bR^\Gamma_\lambda V$; on the other hand, $\varphi(V_\lambda) \subseteq U_\lambda = 0$. Hence, $\varphi=0$. To establish the second condition, let $P$ be a projective object of $\bA_\Gamma^\lambda$-mod equipped with a surjective homomorphism $\pi \colon P\to \bR^\Gamma_\lambda V$. Applying the right exact functor $\bW^\Gamma_\lambda$ yields a surjective homomorphism of $(\g\otimes A)^\Gamma$-modules \[ 1\otimes\pi:\bW^\Gamma_\lambda P\to \bW^\Gamma_\lambda \bR^\Gamma_\lambda V\cong V, \] with $\bW^\Gamma_\lambda P$ a projective module by Proposition~\ref{prop:right-adjoint}\eqref{lem-item:W-projectives}. Take $K=\ker(1\otimes\pi)$ to obtain a short exact sequence \begin{equation}\label{weyl-seq} 0\to K\to \bW^\Gamma_\lambda P\to V\to 0. \end{equation} Now, $K$ is generated by $K_\lambda$ as a $\cU_\Gamma$-module, being a homomorphic image of $\bW^\Gamma_\lambda(\ker\pi)$, which is itself generated by its $\lambda$ weight space $W^\Gamma(\lambda)_\lambda\otimes_{\bA_\Gamma^\lambda} (\ker\pi)$. Thus $\Hom_{\cI^\Gamma_{\leq\lambda}}(K,U)=0$, and it follows from the long exact sequence obtained by applying the functor $\Hom_{\cI^\Gamma_{\leq\lambda}}(-,U)$ to the sequence \eqref{weyl-seq} that $\Ext^1_{\cI^\Gamma_{\leq\lambda}}(V, U)=0$.
\details{We get the long exact sequence: \begin{multline*} 0 \to \Hom_{\cI^\Gamma_{\leq\lambda}}(V,U) \to \Hom_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda P,U) \to \Hom_{\cI^\Gamma_{\leq\lambda}} (K,U) \to \Ext^1_{\cI^\Gamma_{\leq\lambda}}(V,U) \\ \to \Ext^1_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda P,U) \to \Ext^1_{\cI^\Gamma_{\leq\lambda}} (K,U) \to \cdots, \end{multline*} which becomes (using the facts that $\Hom_{\cI^\Gamma_{\leq\lambda}}(V,U)=0$ and $\Hom_{\cI^\Gamma_{\leq\lambda}}(K,U)=0$, shown above, and that $\Ext^1_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda P,U)=0$ since $\bW^\Gamma_\lambda P$ is projective) \[ 0 \to 0 \to \Hom_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda P,U) \to 0 \to \Ext^1_{\cI^\Gamma_{\leq\lambda}}(V,U) \to 0 \to \Ext^1_{\cI^\Gamma_{\leq\lambda}} (K,U) \to \cdots, \] so that $\Ext^1_{\cI^\Gamma_{\leq\lambda}}(V,U)=0$.} Conversely, let $V\in \Ob \cI^\Gamma_{\le \lambda}$ satisfy the given vanishing conditions on $\Hom$ and $\Ext^1$. Set $V'=\cU_\Gamma V_\lambda$ and observe that $V/V'\in\Ob\cI^\Gamma_{\le\lambda}$ with $(V/V')_\lambda=0$. Thus our hypothesis implies that $\Hom_{\cI^\Gamma_{\le\lambda}}(V,V/V')=0$, so that $V=V'$. We immediately see that the map $\epsilon_V \colon \bW^\Gamma_\lambda \bR^\Gamma_\lambda V \to V$ defined in the proof of Proposition~\ref{prop:right-adjoint} is surjective. Let $U=\ker(\epsilon_V)$, so that $U_\lambda=0$. Consider the short exact sequence \[ 0\to U\to \bW^\Gamma_\lambda\bR^\Gamma_\lambda V \xrightarrow{\epsilon_V} V\to 0. \] The long exact sequence obtained by applying $\Hom_{\cI^\Gamma_{\le\lambda}}(-,U)$ now gives $\Hom_{\cI^\Gamma_{\le\lambda}}(U,U) = 0$ and hence $U=0$. \details{We get the long exact sequence: \[ 0 \to \Hom_{\cI^\Gamma_{\leq\lambda}}(V,U) \to \Hom_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda\bR^\Gamma_\lambda V,U) \to \Hom_{\cI^\Gamma_{\leq\lambda}}(U,U) \to \Ext^1_{\cI^\Gamma_{\leq\lambda}}(V,U) \to \cdots, \] which becomes (using the fact that $\Hom_{\cI^\Gamma_{\leq\lambda}}(\bW^\Gamma_\lambda\bR^\Gamma_\lambda V,U)=0$ since $\bW^\Gamma_\lambda\bR^\Gamma_\lambda V$ is generated by its $\lambda$ weight space, but $U_\lambda=0$) \[ 0 \to 0 \to 0 \to \Hom_{\cI^\Gamma_{\leq\lambda}}(U,U) \to 0 \to \cdots. \] } Thus $\epsilon_V$ is an isomorphism, which completes the proof. \end{proof} \section{Properties of global Weyl modules} \label{sec:properties-global} This section marks the beginning of the second part of the current paper. For the remainder of the paper, we assume that $\Gamma$ is a finite abelian group acting freely on $X_\rat$ and acting on $\g$ by diagram automorphisms. (Note that $\Gamma$ is a finite abelian group and acts freely on $X_\rat$ in the case of (twisted) loop and multiloop algebras.) Under this additional assumption, we shall deduce in this section some further properties of global Weyl modules and of the algebra $\bA^\lambda_\Gamma$. In particular, we will see that both are finitely generated. This generalizes the results of \cite{FMS11} (which considers the twisted loop algebra), but is new even in the case of twisted multiloop algebras. Fix a triangular decomposition $\g = \n_- \oplus \h \oplus \n_+$. Then, since $\Gamma$ acts by diagram automorphisms, we have an induced triangular decomposition $\g^\Gamma = \n_-^\Gamma \oplus \h^\Gamma \oplus \n_+^\Gamma$. We let $R^+$ (respectively, $\Pi$) and $R^+_\Gamma$\label{def:roots-simple-roots} (respectively, $\Pi_\Gamma$) denote the sets of positive (respectively, simple) roots of $\g$ and $\g^\Gamma$, respectively.
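The following minimal instance of this setup (one fixed choice, with $\kk$ algebraically closed of characteristic zero, say; it is used only for orientation and not in the proofs) may help the reader. \begin{example} Let $\g = \mathfrak{sl}_4$ (type $A_3$), let $\Gamma = \{1,\sigma\}$ act on $\g$ by the nontrivial diagram automorphism, and let $A = \kk[t^{\pm 1}]$ with $\sigma(t) = -t$. Then $\g^\Gamma \cong \mathfrak{sp}_4$ (type $C_2$), $I_\Gamma = \{\{1,3\},\{2\}\}$, and $\Pi_\Gamma$ consists of the restrictions of $\alpha_1$ and $\alpha_2$ to $\h^\Gamma$. Moreover, $A^\Gamma = \kk[t^{\pm 2}]$, the isotypic component $A_1 = t\,\kk[t^{\pm 2}]$ is generated by $t$ as an $A^\Gamma$-module (illustrating Lemma~\ref{lem:AGamma-fg} below), and $\Gamma$ acts freely on $X_\rat \cong \kk^\times$ by $x \mapsto -x$. \end{example}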
\begin{lem}[{\cite[Ch.~V, \S1, no.~9, Th.~2]{Bou85b}}] \label{lem:AGamma-fg} The algebra $A^\Gamma$ is finitely generated (as an algebra) and $A_\xi$ is finitely generated as an $A^\Gamma$-module for all $\xi \in \Xi$. \end{lem} \begin{rem} \label{rem:cyclic-assumption} Lemma~\ref{lem:AGamma-fg} allows us to make a simplifying assumption as follows. Let $\Gamma'$ be the subgroup of $\Gamma$ acting trivially on $\g$ (equivalently, fixing the Dynkin diagram of $\g$). Then $(\g \otimes A)^\Gamma \cong (\g \otimes A^{\Gamma'})^{\Gamma/\Gamma'}$. By Lemma~\ref{lem:AGamma-fg}, $A^{\Gamma'}$ is finitely generated. Thus, replacing $A$ by $A^{\Gamma'}$ and $\Gamma$ by $\Gamma/\Gamma'$, we may assume without loss of generality that $\Gamma$ acts faithfully on $\g$ (equivalently, on the Dynkin diagram of $\g$). Since $\Gamma$ is abelian, this implies that $\Gamma$ is either trivial or is a cyclic group of order two or three. \end{rem} \begin{assumption} For the remainder of the paper, we will assume that $\Gamma$ is a cyclic group generated by an automorphism $\sigma$ of the Dynkin diagram of $\g$. \end{assumption} \begin{lem} \label{lem:A-xi-generators} Let $\xi \in \Xi \setminus \{0\}$. Then there exists a finite subset of $A_\xi$ that generates $A$ as an algebra. \end{lem} \begin{proof} Since $A$ is finitely generated, it admits a finite generating set $\{a_1,\dots,a_n\}$. Taking homogeneous components of the elements in this set, we may assume that each $a_i$ is homogeneous (i.e.\ belongs to $A_\tau$ for some $\tau \in \Xi$). Let $m = |\Gamma|$. By \cite[Lem.~4.4]{NS11}, we have $(A_\xi)^m = A_{m \xi} = A_0$. Thus, we can write $1 = \sum_{\ell=1}^k g_{\ell,1} g_{\ell,2} \dotsm g_{\ell,m}$ for some $g_{\ell,s} \in A_\xi$, $\ell=1,\dotsc,k$, $s = 1,\dotsc,m$. Let $S = \{a_1,\dots,a_n\} \cup \{g_{\ell,s}\ |\ \ell=1,\dotsc,k,\ s=1,\dotsc,m\}$. Now, suppose $a_i \in A_\tau$ for $\tau \ne \xi$. Since $\Gamma$ (hence $\Xi$) is a cyclic group of order two or three (see Remark~\ref{rem:cyclic-assumption}), $\xi$ generates $\Xi$. Thus we have $-\tau = (r-1)\xi$ for some $1 \le r \le m$. So $\tau + r \xi = \xi$. Now, $a_i = a_i 1 = a_i \sum_{\ell=1}^k g_{\ell,1} g_{\ell,2} \dotsm g_{\ell,m}$. For $\ell=1,\dotsc,k$, let $a_{i,\ell} = a_i g_{\ell,1} \dotsm g_{\ell,r} \in A_\xi$ and set $S' = (S \setminus \{a_i\}) \cup \{a_{i,1},\dotsc,a_{i,k}\}$. Then $S'$ generates $A$ and, compared with $S$, has one fewer element lying outside $A_\xi$. The result then follows by induction. \end{proof} \begin{defin}[$\delta_\bi$] For $\mathbf{i}\in I_\Gamma$, define \[ \delta_\bi \defeq \begin{cases} 1, & \text{if $\g$ is of type $A_{2n}$}, \\ 1, &\text{if $\g$ is not of type $A_{2n}$ but $\alpha_{\mathbf i}$ is a short root (i.e.\ $|\bi| > 1$)}, \\ |\Gamma|, &\text{if $\g$ is not of type $A_{2n}$ and $\alpha_{\mathbf i}$ is a long root (i.e.\ $|\bi|=1$)}, \end{cases} \] where we have identified $I_\Gamma$ with the set of $\Gamma$-orbits in $I$ and $|\bi|$ denotes the size of the orbit $\bi$. \end{defin} \begin{defin}[$\Gamma_i$, $\overline{e_i \otimes a}$, $\overline{f_i \otimes a}$, $\overline{h_i \otimes a}$] \label{def:overline} Let $\{e_i,f_i,h_i\}_{i \in I}$ be a set of Chevalley generators of $\g$. For $i \in I$, let $\Gamma_i = \{\gamma \in \Gamma\ |\ \gamma i = i\}$ be the isotropy subgroup of $i$. 
Then, for $a \in A^{\Gamma_i}$, let \[ \ts \overline{e_i \otimes a} \defeq \sum_{\gamma \in \Gamma/\Gamma_i} \gamma(e_i \otimes a) = \sum_{\gamma \in \Gamma/\Gamma_i} e_{\gamma i} \otimes \gamma(a) \in (\n^+ \otimes A)^\Gamma. \] We define $\overline{f_i \otimes a}$ and $\overline{h_i \otimes a}$ similarly. Note that replacing $i$ by another element in the same $\Gamma$-orbit in $I$ only changes the above elements by a scalar multiple. \end{defin} The elements $\overline{h_i \otimes a}$, $i \in I$, $a \in A^{\Gamma_i}$, span $(\h \otimes A)^\Gamma$. Now fix $\xi \in \Xi \setminus \{0\}$. Since $(A_\xi)^{|\Gamma|}=A^\Gamma$ (by \cite[Lem.~4.4]{NS11}), we see that the elements $\overline{h_i \otimes a^{\delta_\bi}}$, $\bi \in I_\Gamma$, $i \in \bi$, $a \in A_\xi$, also span $(\h \otimes A)^\Gamma$. For $b$ an element of any associative algebra $B$, we denote by $b^{(r)}$ the divided power $b^r/r!$. \begin{lem}\label{lem:twisted-garland} Suppose that $\ell\in\mathbb N$, $a\in A_\xi$ for $\xi\in\Xi \setminus\{0\}$, and $\bi \in I_\Gamma$. Let $i \in \bi$. Then there exist $q(i,a)_s \in \cU((\h \otimes A)^\Gamma)$, $s=0,\dotsc,\ell$, such that the following statements hold: \begin{enumerate} \item We have $q(i,a)_0=1$ and $q(i,a)_s \in \cU((\h \otimes \kk[a]_s)^\Gamma)$, where \[ \ts \kk[a]_s \defeq \{\sum_{r=0}^s c_r a^{\delta_\bi r}\ |\ c_0,\dotsc,c_s \in \kk\} \subseteq A. \] \item If $\g$ is not of type $A_{2n}$, or if $\g$ is of type $A_{2n}$ and $\alpha_\bi$ is a long root, then \begin{equation}\label{lem-eq:a-lambda-fg-eq2} \ts (\overline{e_i\otimes a^{\delta_\bi}})^{(\ell)}(\overline{ f_i\otimes 1})^{(\ell+1)}+ (-1)^{\ell+1} \sum_{s=0}^{\ell} (\overline{f_i \otimes a^{\delta_\bi(\ell - s)}})q(i,a)_s \in \mathcal U_\Gamma(\n^+\otimes A)^\Gamma. \end{equation} \item If $\g$ is of type $A_{2n}$ and $\alpha_\bi\in R^+_\Gamma$ is the simple short root of $\g^\Gamma$, then \begin{equation} \label{lem:eq:a-lambda-fd-A2n-short} \ts (\sqrt{2}\ \overline{e_i\otimes 1})^{(2\ell+1)} (y_i\otimes a)^{(\ell+1)}+\sum_{s=0}^{\ell} (\sqrt{2}\ \overline{f_i \otimes a^{\ell - s+1}})q(i,a)_s \in \mathcal U_\Gamma(\n^+\otimes A)^\Gamma, \end{equation} where $y_i=-[f_i,f_{i+1}]\in\g_1$ and the subscript $1$ here denotes the nonzero element of $\Xi$. \end{enumerate} \end{lem} \begin{proof} The homomorphism of $\kk$-algebras $\kk[t]\to A$ given by $t \mapsto a$ extends to a homomorphism of Lie algebras $\g\otimes\kk[t]\to\g\otimes A$. Applying this latter map to the formulas found in \cite[Lem.~3.3]{CFS08} proves the lemma. Note that \cite[Lem.~3.3(iii)(a)]{CFS08} is incorrect. To obtain the correct statement, one should replace $t^{mk+\epsilon}$ in the power series $\mathbf{x}_{n}^-(u)$ in that reference by $t^{mk+\epsilon+1}$. \end{proof} \begin{theo} \label{theo:A-fg} For all $\lambda \in \Lambda^+_\Gamma$, $\mathbf{A}_{\Gamma}^{\lambda}$ is a finitely generated algebra. \end{theo} \begin{proof} If the action of $\Gamma$ on $\g$ is trivial, then $(\g \otimes A)^\Gamma \cong \g \otimes A^\Gamma$ and the theorem follows from \cite[Th.~2(i)]{CFK10}. Thus, we assume that $\Gamma$ acts nontrivially on $\g$. Fix $\xi \in \Xi \setminus \{0\}$, let $\{a_1, \ldots, a_N \} \subseteq A_\xi$ be generators of $A$ as an algebra (see Lemma~\ref{lem:A-xi-generators}) and fix a generator $a_k$ with $1\le k\le N$.
First, suppose either that $\g$ is not of type $A_{2n}$ and $\alpha_\bi$ is any simple root of $\g^\Gamma$, or that $\g$ is of type $A_{2n}$ and $\alpha_\bi$ is a long root of $\g^\Gamma$ (i.e.\ $\bi \ne \{n,n+1\}$ when we identify $I_\Gamma$ with $\Gamma$-orbits in $I$). For any homogeneous $a \in A^{\Gamma_i}$, multiplying both sides of~\eqref{lem-eq:a-lambda-fg-eq2} by $\overline{ e_i \otimes a}$ on the left gives the following containment in $\mathcal U_\Gamma$: \[ \ts (\overline{e_i \otimes a})(\overline{e_i \otimes a_k^{\delta_\bi}})^{( \ell )}(\overline{f_i\otimes 1})^{(\ell+1)} + (-1)^{\ell+1} \sum_{s=0}^{\ell} (\overline{h_i \otimes a a_k^{\delta_\bi(\ell - s)}}) q(i,a_k)_s \in \mathcal U_\Gamma (\n^+ \otimes A)^\Gamma. \] Now suppose that $\ell\ge \lambda(h_\bi)$. In this case, we have $(\overline{f_i \otimes 1})^{\ell+1}w_\lambda^\Gamma = 0,$ and thus \[ \ts (\overline{h_i \otimes a a_k^{\delta_\bi\ell}})w_\lambda^\Gamma = -\left(\sum_{s=1}^{\ell} (\overline{h_i \otimes a a_k^{\delta_\bi(\ell - s)}})q(i,a_k)_s \right)w_\lambda^\Gamma. \] Iterating this argument, it follows that for all $s_1,\ldots,s_N\in\mathbb N$, \begin{equation}\label{a-lambda-fg-H-long} (\overline{h_i \otimes a_1^{\delta_\bi s_1}\cdots a_N^{\delta_\bi s_N}})w_\lambda^\Gamma = H(i,s_1,\ldots,s_N)w_\lambda^\Gamma, \end{equation} where $H(i,s_1,\ldots,s_N)$ is a linear combination of finite products of elements of $\mathcal U((\h\otimes A)^\Gamma)$ of the form $(\overline{h_i\otimes a_1^{\delta_\bi\ell_1}\cdots a_N^{\delta_\bi\ell_N}})$ with $0\le \ell_1,\ldots,\ell_N < \lambda(h_\bi)$. Therefore, the images of the vectors \[ \{ (\overline{h_i \otimes a_1^{\delta_\bi\ell_1} \cdots a_N^{\delta_\bi\ell_N}})\ :\ \bi \in I_\Gamma,\ i \in \bi,\ 0 \leq \ell_1, \ldots, \ell_N < \lambda(h_\bi) \} \] generate $\bA_\Gamma^\lambda$. Now let $\g$ be of type $A_{2n}$ and suppose that $\bi$ corresponds to the orbit $\{n,n+1\}$, that is, $\alpha_\bi$ is the simple short root of $\g^\Gamma$. Since the generators $a_1, \ldots, a_N$ lie in $A_1$ (where 1 denotes the nontrivial character of $\Gamma$), it follows that $y_i \otimes a_k \in (\g \otimes A)^\Gamma$. Let $a \in A$. Multiplying both sides of~\eqref{lem:eq:a-lambda-fd-A2n-short} by $\sqrt{2}\ \overline{ e_i \otimes a}$ on the left gives \[ \ts 2^{\ell+1}(\overline{e_i \otimes a})(\overline{e_i \otimes 1})^{(2 \ell + 1)} (y_i \otimes a_k)^{(\ell+1)} + \sum_{s=0}^{\ell} 2(\overline{h_i \otimes a a_k^{\ell - s+1}})q(i,a_k)_s \in \mathcal U_{\Gamma}(\n^+ \otimes A)^\Gamma. \] Now, we claim that for all $\ell \ge r_i \defeq \frac{1}{2} \lambda(h_\bi)$, we have $(y_i \otimes a_k)^{\ell+1}w_\lambda^\Gamma = 0$, which implies that \[ \ts (\overline{h_i \otimes a a_k^{\ell+1}})w_\lambda^\Gamma =- \sum_{s=1}^{\ell} (\overline{h_i \otimes a a_k^{\ell - s+1}})q(i,a_k)_s w_\lambda^\Gamma. \] Iterating this, we arrive again at \eqref{a-lambda-fg-H-long} (with the upper bound $\lambda(h_\bi)$ on $\ell_j$, $1 \le j \le N$, replaced by $r_i+1$), and the result follows. It remains to prove the claim. A straightforward calculation shows that $[h,y_i]=-2\alpha_{\bi}(h)y_i$ for all $h\in\h^\Gamma$. \details{We check on vectors $h_{\mathbf j}$. If $\mathbf j=\{n,n+1\}$ we compute that $2[h_n+h_{n+1},[f_{i+1},f_i]] =-4y_i.$ If $\mathbf j=\{n-1, n+2\}$ we get $2y_i$, and otherwise zero.
This is exactly $-2\alpha_\bi(h_{\mathbf j})y_i.$} Thus, $(y_i \otimes a_k)^{r_i+1}w_\lambda^\Gamma$ has weight $\lambda-(\lambda(h_\bi)+2)\alpha_\bi$, and so it suffices to show that $\lambda-(\lambda(h_\bi)+2)\alpha_\bi$ is not a weight of $W^\Gamma(\lambda)$. Since $W^\Gamma(\lambda)$ is a direct sum of irreducible finite-dimensional $\g^\Gamma$-modules, its weights (which all lie below $\lambda$) are invariant under the action of the Weyl group of $\g^\Gamma$. But $s_\bi(\lambda-(\lambda(h_\bi)+2)\alpha_\bi)=\lambda+2\alpha_\bi$ does not lie below $\lambda$, concluding the proof. \end{proof} \begin{lem}\label{lem:garland} Suppose $r \in \N$, $a \in A^\Gamma$, $\alpha \in R_\Gamma^+$, and $\{x_\alpha^-, x_\alpha^+, h_\alpha \}$ is an $\mathfrak{sl}_2$-triple in $\g^\Gamma$ corresponding to $\alpha$. Then \[ \ts (x_\alpha^+ \otimes a)^r(x_\alpha^- \otimes 1)^{r+1} - \sum_{s = 0}^r (x_\alpha^- \otimes a^{r-s})p(a, \alpha)_s \in \cU(\g^\Gamma \otimes A^\Gamma)(\n_+^\Gamma \otimes A^\Gamma), \] for some $p(a, \alpha)_s \in \cU(\h^\Gamma \otimes A^\Gamma) \subseteq \cU((\h \otimes A)^\Gamma)$ with $p(a,\alpha)_0=1$. \end{lem} \begin{proof} This follows from~\cite[Lem.~5]{CFK10} (see also~\cite[Lem.~7.5]{Gar78}), where we replace the $\g$ and $A$ there by $\g^\Gamma$ and $A^\Gamma$ respectively. \end{proof} \begin{theo}\label{theo:finite-generated} For all $\lambda \in \Lambda^+_\Gamma$, the global Weyl module $W^\Gamma(\lambda)$ is a finitely generated right $\bA^\lambda_\Gamma$-module. \end{theo} \begin{proof} If the action of $\Gamma$ on $\g$ is trivial, then $(\g \otimes A)^\Gamma \cong \g \otimes A^\Gamma$ and the theorem follows from \cite[Th.~2(i)]{CFK10}. Thus we assume that $\Gamma$ acts nontrivially on $\g$. Let $\{a_1, \ldots, a_s\}$ be a finite set of generators of $A^\Gamma$ (see Lemma~\ref{lem:AGamma-fg}). By Proposition~\ref{weyl:gen-rel}, we have $(\n_+\otimes A)^\Gamma w_\lambda^\Gamma = 0$, where $w_\lambda^\Gamma$ is the usual generator of $W^\Gamma(\lambda)$. We also have $\cU((\h \otimes A)^\Gamma) w_\lambda^\Gamma = w_\lambda^\Gamma \bA^\lambda_\Gamma$. Then the PBW theorem implies that \[ W^\Gamma(\lambda) = \cU((\n_- \otimes A)^\Gamma) w_\lambda^\Gamma \bA^\lambda_\Gamma. \] For a positive root $\alpha \in R^+_\Gamma$ of $\g^\Gamma$, let $\{x_\alpha^+, x_\alpha^-, h_\alpha\} \subseteq \g^\Gamma$ be a corresponding $\mathfrak{sl}_2$-triple. \medskip \noindent \emph{Claim 1:} For all $\alpha \in R_\Gamma^+$, we have \[ (x_{\alpha}^- \otimes A^\Gamma) w_\lambda^\Gamma \subseteq \Span\{ (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell_1, \ldots, \ell_s< \lambda(h_\alpha)\}. \] \medskip \noindent \emph{Proof of Claim 1:} Let $r \geq \lambda(h_{\alpha})$ and $1 \le i \le s$. By Lemma~\ref{lem:garland} we have \[ \ts \sum_{j = 0}^r (x_\alpha^- \otimes a_i^{r-j})p(a_i, \alpha)_j w_\lambda^\Gamma = (x_\alpha^+ \otimes a_i)^r(x_\alpha^- \otimes 1)^{r+1} w_\lambda^\Gamma = 0. \] Thus \[ (x_\alpha^- \otimes a_i^r) w_\lambda^\Gamma \in \Span \{ (x_{\alpha}^- \otimes a_i^j) w_\lambda^\Gamma \bA^\lambda_\Gamma \ | \ 0 \leq j < r \}. \] We then have, by induction, that \begin{eqnarray} \label{eq:fin-1} (x_\alpha^- \otimes a_i^r) w_\lambda^\Gamma \in \Span \{ (x_{\alpha}^- \otimes a_i^\ell) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell < \lambda(h_\alpha)\} \quad \text{for all } r \ge \lambda(h_\alpha).
\end{eqnarray} Now, for $1 \le i,j \le s$ and $m_i, m_j \in \N$, we have \begin{align} \label{eq:fin-2} (h_\alpha \otimes a_j^{m_j})(x_\alpha^- \otimes a_i^{m_i}) w_\lambda^\Gamma &= \big( -2 x_\alpha^- \otimes a_j^{m_j} a_i^{m_i} + (x_\alpha^- \otimes a_i^{m_i})(h_\alpha \otimes a_j^{m_j}) \big) w_\lambda^\Gamma \\ &\in -2(x_\alpha^- \otimes a_j^{m_j} a_i^{m_i})w_\lambda^\Gamma + (x_\alpha^- \otimes a_i^{m_i}) w_\lambda^\Gamma \bA^\lambda_\Gamma \nonumber \end{align} and so \begin{align*} (x_\alpha^- \otimes a_j^{m_j} a_i^{m_i})w_\lambda^\Gamma &\in \ts \frac{-1}{2} (h_\alpha \otimes a_j^{m_j})(x_\alpha^- \otimes a_i^{m_i}) w_\lambda^\Gamma + (x_\alpha^- \otimes a_i^{m_i}) w_\lambda^\Gamma \bA^\lambda_\Gamma \\ &\in \ts \Span \{ (h_\alpha \otimes a_j^{m_j}) (x_{\alpha}^- \otimes a_i^\ell) w_\lambda^\Gamma\bA^\lambda_\Gamma, (x_{\alpha}^- \otimes a_i^\ell) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell < \lambda(h_\alpha)\} \end{align*} by~\eqref{eq:fin-1}. Then, using~\eqref{eq:fin-2} with $m_i=\ell$, we have \begin{equation} \label{eq:fin-3} (x_\alpha^- \otimes a_j^{m_j} a_i^{m_i})w_\lambda^\Gamma \in \Span \{(x_\alpha^- \otimes a_j^{m_j} a_i^\ell)w_\lambda^\Gamma \bA^\lambda_\Gamma, (x_\alpha^- \otimes a_i^\ell)w_\lambda^\Gamma \bA^\lambda_\Gamma\ |\ 0 \leq \ell < \lambda(h_\alpha)\}. \end{equation} Replacing $j,i,m_j,m_i$ by $i,j,\ell,m_j$ (respectively) in~\eqref{eq:fin-3}, we have \[ (x_\alpha^- \otimes a_i^\ell a_j^{m_j})w_\lambda^\Gamma \in \Span \{(x_\alpha^- \otimes a_i^\ell a_j^{\ell_j}) w_\lambda^\Gamma \bA^\lambda_\Gamma, (x_\alpha^- \otimes a_j^{\ell_j})w_\lambda^\Gamma \bA^\lambda_\Gamma \ |\ 0 \le \ell_j < \lambda(h_\alpha)\}. \] Thus \[ (x_\alpha^- \otimes a_j^{m_j} a_i^{m_i})w_\lambda^\Gamma \in \Span \{(x_\alpha^- \otimes a_j^{\ell_j} a_i^{\ell_i})w_\lambda^\Gamma \bA^\lambda_\Gamma\ |\ 0 \leq \ell_i, \ell_j < \lambda(h_\alpha)\}. \] Repeating the above argument gives \begin{equation} \label{eq:fin-5} (x_\alpha^- \otimes a_1^{m_1} \cdots a_s^{m_s}) w_\lambda^\Gamma \in \Span\{ (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell_i < \lambda(h_\alpha)\ \forall\ i\}. \end{equation} Since $\{a_1^{m_1} \cdots a_s^{m_s} \ |\ m_i \geq 0\}$ is a spanning set of $A^\Gamma$, Claim 1 follows. \medskip \noindent \emph{Claim 2:} As a right $\bA^\lambda_\Gamma$-module, $(\n' \otimes A)^\Gamma w_\lambda^\Gamma$ is finitely generated, where $\n' \defeq \bigoplus_{\alpha \in \Pi} \g_{-\alpha}$. \medskip \noindent \emph{Proof of Claim 2:} Let $m$ be the order of the generator $\sigma$ of $\Gamma$. This generator $\sigma$ induces a permutation of the simple roots of $\g$. For a simple root $\beta \in \Pi$ of $\g$, we denote by $\{y_\beta^+, y_\beta^-, h_\beta \} \subseteq \g$ a corresponding $\mathfrak{sl}_2$-triple. A basis of $(\n')^\Gamma$ is given by the set \[ \ts \cB_0 \defeq \{ y_\beta^- \ |\ \beta \in \Pi,\ \sigma(\beta) = \beta \} \cup \left\{ \sum_{j= 0}^{m-1} y_{\sigma^j(\beta)}^- \ |\ \beta \in \Pi,\ \sigma(\beta) \neq \beta \right\}. \] If $m=2$, we have a basis $\cB_0 \sqcup \cB_1$ of $\n'$ where \[ \cB_1 \defeq \{ y_\beta^- - y_{\sigma(\beta)}^- \ |\ \beta \in \Pi,\ \sigma(\beta) \neq \beta \} \] consists of $\sigma$-eigenvectors of eigenvalue $-1$.
If $m=3$ we have a basis $\cB_0 \sqcup \cB_1 \sqcup \cB_2$ of $\n'$ where \begin{gather*} \cB_1 \defeq \{y_\beta^- + \eta y_{\sigma(\beta)}^- + \eta^2 y_{\sigma^2(\beta)}^- \ |\ \beta \in \Pi,\ \sigma(\beta) \ne \beta \},\\ \cB_2 \defeq \{y_\beta^- + \eta^2 y_{\sigma(\beta)}^- + \eta y_{\sigma^2(\beta)}^- \ |\ \beta \in \Pi,\ \sigma(\beta) \ne \beta \}. \end{gather*} Here $\cB_i$, $i=1,2$, consists of $\sigma$-eigenvectors of eigenvalue $\eta^i$, where $\eta$ is a primitive third root of unity. For each $\alpha \in \Pi_\Gamma$, after multiplying by a scalar if necessary, we have $x_\alpha^- = y_{\beta_\alpha}^-$ (or $x_\alpha^- = \sqrt{\kappa_\alpha} \sum_{j= 0}^{m-1} y_{\sigma^j(\beta_\alpha)}^-$) and $h_\alpha = h_{\beta_\alpha}$ (respectively, $h_\alpha = \kappa_\alpha \sum_{j= 0}^{m-1} h_{\sigma^j(\beta_\alpha)}$) for some $\beta_\alpha \in \Pi$, where $\kappa_\alpha=2$ if $\g$ is of type $A_{2n}$ and $\alpha$ is the simple short root of $\g^\Gamma$, otherwise $\kappa_\alpha=1$. In fact, the $\Gamma$-orbit of $\beta_\alpha$ is uniquely determined by the condition $\beta_\alpha|_{\h^\Gamma} = \alpha$. For the remainder of the proof, we restrict our attention to the case $m=2$. The case $m=3$ is similar and will be omitted. We have \begin{equation} \label{eq:n'A-decomp} (\n' \otimes A)^\Gamma = (\n'_0 \otimes A_0) \oplus (\n'_1 \otimes A_1), \end{equation} where the subscript 1 denotes the nontrivial character of $\Gamma$. Furthermore, $\cB_1$ is a basis of $\n'_1$. By Lemma~\ref{lem:AGamma-fg}, we know that $A_1$ is a finitely generated $A^\Gamma$-module. Let $\{b_1, \ldots, b_k\}$ be a finite set of generators of this module. Now choose $\alpha \in \Pi_\Gamma$ such that $\sigma(\beta_\alpha) \ne \beta_\alpha$ and set $\beta = \beta_\alpha$. Then \[ \{ (y_\beta^- - y_{\sigma(\beta)}^-) \otimes a_1^{m_1} \cdots a_s^{m_s} b_i \ |\ m_j \geq 0,\ 1 \leq i \leq k \} \subseteq \n'_1 \otimes A_1 \subseteq (\n' \otimes A)^\Gamma. \] Furthermore, $(h_\beta - h_{\sigma(\beta)}) \otimes b_i \in (\h \otimes A)^\Gamma$, and so $((h_\beta - h_{\sigma(\beta)}) \otimes b_i) w_\lambda^\Gamma \in w_\lambda^\Gamma \bA^\lambda_\Gamma$. Now, \begin{equation} \label{eq:fin-comm-rel} [ h_\beta - h_{\sigma(\beta)}, x_\alpha^- ] = [h_\beta - h_{\sigma(\beta)}, y_\beta^- + y_{\sigma(\beta)}^-] = -(\kappa_\alpha+1)(y_\beta^- - y_{\sigma(\beta)}^-).
\end{equation} This implies, for all $m_j \geq 0$, that \begin{align*} -(\kappa_\alpha&+1)((y_\beta^- - y_{\sigma(\beta)}^-) \otimes a_1^{m_1} \dotsb a_s^{m_s} b_i)w_\lambda^\Gamma \\ &= ((h_\beta - h_{\sigma(\beta)}) \otimes b_i)(x_\alpha^- \otimes a_1^{m_1} \cdots a_s^{m_s}) w_\lambda^\Gamma - (x_\alpha^- \otimes a_1^{m_1} \cdots a_s^{m_s})((h_\beta - h_{\sigma(\beta)}) \otimes b_i) w_\lambda^\Gamma \\ &\in ((h_\beta - h_{\sigma(\beta)}) \otimes b_i)(x_\alpha^- \otimes a_1^{m_1} \cdots a_s^{m_s}) w_\lambda^\Gamma - (x_\alpha^- \otimes a_1^{m_1} \cdots a_s^{m_s}) w_\lambda^\Gamma \bA_\Gamma^\lambda \\ &\subseteq ((h_\beta - h_{\sigma(\beta)}) \otimes b_i) \Span\{ (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell_i < \lambda(h_\alpha)\} \\ &\qquad \qquad \qquad \qquad \qquad + \Span\{ (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma \ | \ 0 \leq \ell_i < \lambda(h_\alpha)\ \forall\ i\} \quad \text{(by~\eqref{eq:fin-5})} \\ &\subseteq \Span \{ ((y_\beta^- - y_{\sigma(\beta)}^-) \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}b_i) w_\lambda^\Gamma\bA^\lambda_\Gamma , (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma\ | \ 0 \leq \ell_i < \lambda(h_\alpha)\ \forall\ i\}, \end{align*} where the last containment follows from~\eqref{eq:fin-comm-rel}. Since $\kappa_\alpha+1 \ne 0$ and $A_1$ is spanned by elements of the form $a_1^{m_1} \cdots a_s^{m_s}b_i$, we see that $((y_\beta^- - y_{\sigma(\beta)}^-) \otimes A_1) w_\lambda^\Gamma$ is contained in \[ \Span \{ ((y_\beta^- - y_{\sigma(\beta)}^-) \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}b_i) w_\lambda^\Gamma\bA^\lambda_\Gamma , (x_{\alpha}^- \otimes a_1^{\ell_1} \cdots a_s^{\ell_s}) w_\lambda^\Gamma\bA^\lambda_\Gamma\ | \ 0 \leq \ell_j < \lambda(h_\alpha),\ 1 \le i \le k\}. \] Claim 2 now follows from~\eqref{eq:n'A-decomp} and the above arguments. \medskip \noindent \emph{Completion of the proof of Theorem~\ref{theo:finite-generated}:} We continue to assume that $m=2$, the case $m=3$ being similar. Define \begin{gather*} \mathcal{D}_0 \defeq \{ z \otimes a_1^{\ell_1} \cdots a_s^{\ell_s} \ | \ z \in \cB_0,\ 0 \leq \ell_j < \lambda(h_\alpha)\ \forall\ j\} \quad \text{and} \\ \mathcal{D}_1 \defeq \{ z \otimes a_1^{\ell_1} \cdots a_s^{\ell_s} b_i\ | \ z \in \cB_1,\ 0 \leq \ell_j < \lambda(h_\alpha)\ \forall\ j,\ 1 \le i \le k\}. \end{gather*} Let $\mathcal{D} = \mathcal{D}_0 \cup \mathcal{D}_1$. We claim that $\cU((\n' \otimes A)^\Gamma)_n w_\lambda^\Gamma \subseteq \sum_{\ell=0}^n \mathcal{D}^\ell w_\lambda^\Gamma \bA^\lambda_\Gamma$ for all $n \in \N_+$. The result is true for $n=1$ by the above. Assume that it is true for some $n \ge 1$. Let $u \in (\n' \otimes A)^\Gamma$ and $\tilde u \in \cU((\n' \otimes A)^\Gamma)_n$. Then, by assumption, we have $\tilde u w_\lambda^\Gamma \in u' w_\lambda^\Gamma \bA^\lambda_\Gamma$ for some $u' \in \sum_{\ell=0}^n \mathcal{D}^\ell$. Then we have \begin{multline*} \ts u \tilde u w_\lambda^\Gamma \in uu' w_\lambda^\Gamma \bA^\lambda_\Gamma = ([u,u']w_\lambda^\Gamma + u'uw_\lambda^\Gamma)\bA^\lambda_\Gamma \\ \ts \subseteq \cU((\n' \otimes A)^\Gamma)_n w_\lambda^\Gamma \bA^\lambda_\Gamma + u' (\mathcal{D} w_\lambda^\Gamma) \bA^\lambda_\Gamma \subseteq \sum_{\ell =0}^{n+1} \mathcal{D}^\ell w_\lambda^\Gamma \bA^\lambda_\Gamma. \end{multline*} Thus our claim holds by induction.
Now, \begin{align*} [(\n' \otimes A)^\Gamma, (\n' \otimes A)^\Gamma] &= [\n'_0 \otimes A_0 + \n'_1 \otimes A_1, \n'_0 \otimes A_0 + \n'_1 \otimes A_1] \\ &= \big(([\n'_0,\n'_0] + [\n'_1,\n'_1]) \otimes A_0\big) \oplus \big([\n'_0,\n'_1] \otimes A_1\big) \\ &= ([\n',\n'] \otimes A)^\Gamma, \end{align*} where, in the second equality, we have used that $A_1^2=A_0$ by~\cite[Lem.~4.4]{NS11}. Since $\n'$ generates $\n_-$, an easy inductive argument then shows that $\cU((\n_- \otimes A)^\Gamma)_1 \subseteq \cU((\n' \otimes A)^\Gamma)_N$ for some $N \in \N_+$. Thus, for $n \in \N_+$, we have \[ \ts \cU((\n_- \otimes A)^\Gamma)_n w_\lambda^\Gamma \subseteq \cU((\n' \otimes A)^\Gamma)_{Nn} w_\lambda^\Gamma \subseteq \sum_{\ell=0}^{Nn} \mathcal{D}^\ell w_\lambda^\Gamma \bA^\lambda_\Gamma. \] Now, all $\g^\Gamma$-weights of $\n_- \otimes A$ are nonzero (the $\g^\Gamma$-weights of $\n_- \otimes A$ are the restrictions to $\h^\Gamma$ of negative roots of $\g$, and no root of $\g$ restricts to zero on $\h^\Gamma$) and the set of $\g^\Gamma$-weights occurring in $W^\Gamma(\lambda)$ is finite. Thus, there exists an $M \in \N$ such that $\cU((\n_- \otimes A)^\Gamma)_n w_\lambda^\Gamma \bA^\lambda_\Gamma = W^\Gamma(\lambda)$ for all $n \ge M$. Therefore $\sum_{\ell=0}^{NM} \mathcal{D}^\ell w_\lambda^\Gamma \bA^\lambda_\Gamma = W^\Gamma(\lambda)$ and so $W^\Gamma(\lambda)$ is finitely generated as an $\bA^\lambda_\Gamma$-module. \end{proof} Theorems~\ref{theo:A-fg} and~\ref{theo:finite-generated} are generalizations of~\cite[Th.~2(i)]{CFK10}, which gives the result in the untwisted setting (i.e.\ when $\Gamma$ is trivial). We have the following immediate corollary. \begin{cor} \label{cor:WM-fin-dim} If $M$ is a finite-dimensional $\bA^\lambda_\Gamma$-module, then $\bW_\lambda^\Gamma M$ is finite-dimensional. \end{cor} It is straightforward to show that if $V \in \Ob \cI^\Gamma$ and, for some $\lambda \in \Lambda_\Gamma^+$, we have $\dim V_\lambda = 1$, $\wt V \subseteq \lambda - Q_\Gamma^+$ and $\cU((\g \otimes A)^\Gamma) V_\lambda = V$, then $V$ has a unique irreducible quotient. The following is a generalization of~\cite[Prop.~8]{CFK10} and a refinement of Theorem~\ref{theo:WR=1-hom-char}. \begin{cor} \label{cor:local-weyl-module-char-fd} Let $V\in\Ob\cI^\Gamma_{\le \lambda}$ with $\dim V_\lambda < \infty$. Then $V\cong \bW^\Gamma_\lambda\bR^\Gamma_\lambda V$ if and only if \begin{equation} \label{eq:lWm-hom-char-fd} \Hom_{\cI^\Gamma_{\le \lambda}}(V,U)=0 \quad \text{and} \quad \Ext^1_{\cI^\Gamma_{\le \lambda}}(V,U)=0 \end{equation} for all irreducible finite-dimensional $U\in\Ob\cI^\Gamma_{\le\lambda}$ with $U_\lambda=0$. \end{cor} \begin{proof} The forward implication holds by Theorem~\ref{theo:WR=1-hom-char}. To prove the reverse implication, assume $V \in \Ob \cI^\Gamma_{\le \lambda}$ satisfies \eqref{eq:lWm-hom-char-fd} for all irreducible finite-dimensional $U \in \Ob \cI^\Gamma_{\le \lambda}$ with $U_\lambda=0$. As in the proof of Theorem~\ref{theo:WR=1-hom-char}, we see that $V = \cU((\g \otimes A)^\Gamma) V_\lambda$. The map $\epsilon_V$ of the proof of Proposition~\ref{prop:right-adjoint} is surjective since $V$ is generated by its highest weight space. Thus we have a short exact sequence \begin{equation} \label{eq:K-seq} 0 \to K \to \bW^\Gamma_\lambda V_\lambda \xrightarrow{\epsilon_V} V \to 0. \end{equation} By Corollary~\ref{cor:WM-fin-dim}, $\dim \bW^\Gamma_\lambda V_\lambda < \infty$ and so $\dim K < \infty$ and $K_\lambda = 0$.
If $K \ne 0$, then there exists some irreducible finite-dimensional $U \in \Ob \cI^\Gamma_{\le \lambda}$ with $U_\lambda=0$ such that $\Hom_{(\g \otimes A)^\Gamma} (K,U) \ne 0$. A straightforward argument using the long exact sequence obtained from~\eqref{eq:K-seq} by applying the contravariant left exact functor $\Hom_{(\g \otimes A)^\Gamma}(-,U)$ then yields a contradiction. \details{Applying the contravariant left exact functor $\Hom_{(\g \otimes A)^\Gamma}(-,U)$ to the short exact sequence~\eqref{eq:K-seq} yields the long exact sequence \begin{multline*} 0 \to \Hom_{(\g \otimes A)^\Gamma} (V,U) \to \Hom_{(\g \otimes A)^\Gamma} (\bW^\Gamma_\lambda V_\lambda,U) \\ \to \Hom_{(\g \otimes A)^\Gamma} (K,U) \to \Ext^1_{(\g \otimes A)^\Gamma} (V,U) \to \cdots. \end{multline*} Since $\bW^\Gamma_\lambda V_\lambda$ is generated by its $\lambda$-weight space and $U_\lambda = 0$, we have $\Hom_{(\g \otimes A)^\Gamma} (\bW^\Gamma_\lambda V_\lambda,U)=0$, and so $\Hom_{(\g \otimes A)^\Gamma} (K,U)$ injects into $\Ext^1_{(\g \otimes A)^\Gamma} (V,U)$. But this contradicts the assumption on $V$, which implies that the latter space is zero.} Thus $K=0$ and hence $V \cong \bW^\Gamma_\lambda V_\lambda = \bW^\Gamma_\lambda \bR^\Gamma_\lambda V$. \end{proof} \begin{defin}[Map $\bV^\Gamma_\lambda$] \label{def:V-Gamma-lambda} Since $\bA^\lambda_\Gamma$ is a finitely generated commutative algebra, any irreducible finite-dimensional $\bA^\lambda_\Gamma$-module $M$ has dimension one. For such an $\bA^\lambda_\Gamma$-module $M$, we let $\bV^\Gamma_\lambda M$ denote the unique irreducible quotient of $\bW_\lambda^\Gamma M$, which is finite-dimensional by Corollary~\ref{cor:WM-fin-dim}. This defines a map $\bV^\Gamma_\lambda$ from the set of irreducible $\bA^\lambda_\Gamma$-modules to the set of irreducible finite-dimensional $(\g \otimes A)^\Gamma$-modules. \end{defin} Recall the definition of $V(\psi)$ and $V^\Gamma(\psi)$ from Definition~\ref{def:EMA-irreducibles}. \begin{defin}[Modules $M(\psi)$ and $M^\Gamma(\psi)$] \label{def:M-psi} For $\psi \in \cE_\llift$ (respectively, $\psi \in \cE^\Gamma_\lambda$), define $M(\psi) \defeq \bR_\llift V(\psi)$ (respectively, $M^\Gamma(\psi) \defeq \bR_\lambda^\Gamma V^\Gamma(\psi)$). \end{defin} \begin{prop} \label{prop:irred-wt-space-modules} \begin{enumerate} \item \label{prop-item:gWm-zero} The global Weyl module $W^\Gamma(\lambda)$ is the zero module, and hence the algebra $\bA^\lambda_\Gamma$ is the zero algebra, if $\lambda \in \Lambda_\Gamma^+$ is not the restriction of some element of $\Lambda^+$. \item \label{prop-item:irred-functor} For all $\psi \in \cE^\Gamma_\lambda$, we have $V^\Gamma(\psi) \cong \bV_\lambda^\Gamma M^\Gamma(\psi)$. \item \label{prop-item:VR-bijection} For all $\lambda \in \Lambda^+_\Gamma$, the maps $\bV^\Gamma_\lambda$ and $\bR^\Gamma_\lambda$ induce mutually inverse bijections between the set of irreducible finite-dimensional $(\g \otimes A)^\Gamma$-modules whose highest weight as a $\g^\Gamma$-module is $\lambda$ and the set of irreducible finite-dimensional $\bA^\lambda_\Gamma$-modules. \item The map $\psi \mapsto [M^\Gamma(\psi)]$ is a bijection from $\cE^\Gamma_\lambda$ to the set of isomorphism classes of irreducible finite-dimensional $\bA^\lambda_\Gamma$-modules. \end{enumerate} \end{prop} \begin{proof} \begin{asparaenum} \item Suppose $\lambda \in \Lambda_\Gamma^+$. The irreducible (one-dimensional) $\bA^\lambda_\Gamma$-modules are of the form $M=\bA^\lambda_\Gamma/\sm$ for some maximal ideal $\sm$ of $\bA^\lambda_\Gamma$.
Since $W^\Gamma(\lambda)$ is a finitely generated $\bA^\lambda_\Gamma$-module by Theorem~\ref{theo:finite-generated}, Nakayama's Lemma implies that the global Weyl module $W^\Gamma(\lambda)$ is zero if and only if $\bW^\Gamma_\lambda M = W^\Gamma(\lambda)/\sm W^\Gamma(\lambda)$ is zero for every such $M$. By Corollary~\ref{cor:WM-fin-dim}, $\bW^\Gamma_\lambda M$ is finite-dimensional. If it is nonzero, it has some nonzero irreducible finite-dimensional quotient $V$ whose highest $\g^\Gamma$-weight is also $\lambda$. By \cite[Th.~5.5]{NSS12}, $V$ is a tensor product of evaluation representations (corresponding to representations of $\g$). Thus, its highest weight must be a restriction of a weight of $\g$. Hence, if $\lambda$ is not the restriction of an element of $\Lambda^+$, then $\bW^\Gamma_\lambda M$ is zero for every irreducible $\bA^\lambda_\Gamma$-module $M$, and so $W^\Gamma(\lambda)=0$. \item As in the proof of Proposition~\ref{prop:right-adjoint}, we have a (nonzero) surjective map \[ \bW^\Gamma_\lambda M^\Gamma(\psi) = \bW^\Gamma_\lambda \bR^\Gamma_\lambda V^\Gamma(\psi) \twoheadrightarrow V^\Gamma(\psi). \] Thus $V^\Gamma(\psi)$ must be isomorphic to the unique irreducible quotient $\bV_\lambda^\Gamma M^\Gamma(\psi)$ of $\bW^\Gamma_\lambda M^\Gamma(\psi)$. \item Let $\lambda \in \Lambda^+_\Gamma$. By \cite[Th.~5.5]{NSS12}, every irreducible finite-dimensional $(\g \otimes A)^\Gamma$-module with highest weight $\lambda$ is of the form $V^\Gamma(\psi)$ for some $\psi \in \cE^\Gamma_\lambda$. Thus, by part~\eqref{prop-item:irred-functor}, we have that $\bV^\Gamma_\lambda \bR^\Gamma_\lambda$ is the identity map on the set of such modules. Now, for an irreducible $\bA^\lambda_\Gamma$-module $M$, we have that \[ \bR^\Gamma_\lambda \bV^\Gamma_\lambda M = (\bV^\Gamma_\lambda M)_\lambda = (\bW^\Gamma_\lambda M)_\lambda = \bR^\Gamma_\lambda \bW^\Gamma_\lambda M = M. \] \item By \cite[Th.~5.5]{NSS12}, the map $\psi \mapsto V^\Gamma(\psi)$ is a bijection from $\cE^\Gamma_\lambda$ to the set of irreducible finite-dimensional $(\g \otimes A)^\Gamma$-modules with highest weight $\lambda$. Then the result follows from part~\eqref{prop-item:VR-bijection}. \qedhere \end{asparaenum} \end{proof} \begin{rem} \label{rem:restricted-weights} The condition in Proposition~\ref{prop:irred-wt-space-modules}\eqref{prop-item:gWm-zero} that $\lambda$ be the restriction of a $\g$ weight is only relevant in the case where $\g$ is of type $A_{2n}$ (and $\Gamma$ acts nontrivially on $\g$). In this case, the restriction condition amounts to requiring that when $\lambda$ is written as a sum of fundamental weights, its coefficient for the fundamental weight corresponding to the short root be even. If $\g$ is not of type $A_{2n}$, then the restriction map $\Lambda^+ \to \Lambda_\Gamma^+$ is surjective (see Lemma~\ref{lem:height-wt}\eqref{lem-item:root-restriction}). \end{rem} The \hyperlink{proof:projective-free}{proof} of the following theorem will be given at the end of Section~\ref{sec:local-Weyl-modules}. Part~\eqref{theo-item:loop-global-Weyl-module-free} was first proved in \cite[Th.~6.5]{FMS11}. However, we will provide some details omitted there. \begin{theo} \label{theo:global-Weyl-module-projective} Suppose $\kk = \C$ and assume that $\Gamma$ is abelian, acting freely on $X_\rat$ and acting on $\g$ by diagram automorphisms. \begin{asparaenum} \item \label{theo-item:fund-weight-proj-property} If $A$ is the coordinate algebra of a smooth complex algebraic variety and $\lambda$ is a fundamental weight of $\g^\Gamma$, then the global Weyl module $W^\Gamma(\lambda)$ is a projective $\bA^\lambda_\Gamma$-module. If, in addition, $\bA^\lambda_\Gamma$ is a generalized Laurent polynomial ring $\C[t_1^{\pm 1},\dotsc,t_n^{\pm 1},s_1,\dotsc,s_m]$, $n,m \in \N$, then the global Weyl module $W^\Gamma(\lambda)$ is a free $\bA^\lambda_\Gamma$-module.
\item \label{theo-item:loop-global-Weyl-module-free} If $A=\C[t^{\pm 1}]$, then $W^\Gamma(\lambda)$ is a free $\bA^\lambda_\Gamma$-module for all $\lambda \in \Lambda_\Gamma^+$, and its rank is equal to the dimension of any local Weyl module. \end{asparaenum} \end{theo} \begin{rem} The condition that $\bA^\lambda_\Gamma$ is a generalized Laurent polynomial ring can be verified in specific cases using the explicit realization of $\bA^\lambda_\Gamma$ given in Section~\ref{sec:A-lambda-Gamma} (see Theorem~\ref{theo:Alambda-isom}). \end{rem} \section{Twisting functors} \label{sec:twisting} In this section we recall the twisting functors introduced in \cite{FKKS12} and prove some facts related to them that will be used in the sequel. We continue to assume that $\Gamma$ is cyclic, acts freely on $X_\rat$, and acts faithfully on $\g$ by diagram automorphisms (see Remark~\ref{rem:cyclic-assumption}). We define the \emph{support} of an ideal $J$ of $A$ to be \[\label{def:supp-ideal} \Supp J \defeq \{\sm \in \maxSpec A\ |\ J \subseteq \sm\} \cong \maxSpec (A/J). \] Note that the support of an ideal is often defined to be the set of prime (rather than maximal) ideals containing it. So our definition is more restrictive. When we refer to the \emph{codimension} of an ideal of an algebra, we mean its codimension as a $\kk$-vector space (and not, for instance, some geometric codimension). \begin{lem} \label{lem:ideal-form} All ideals of $(\g \otimes A)^\Gamma$ are of the form $(\g \otimes J)^\Gamma = \bigoplus_{\xi \in \Xi} \g_\xi \otimes J_{-\xi}$, where $J=\bigoplus_{\xi \in \Xi} J_\xi$ is a $\Gamma$-invariant ideal of $A$. \end{lem} \begin{proof} This is proved in \cite[Prop.~7.1]{Sav12} in the more general setting of Lie superalgebras. \end{proof} \begin{defin}[Support]\label{def:support-module} It follows from Lemma~\ref{lem:ideal-form} that the annihilator of any $(\g \otimes A)^\Gamma$-module $V$ is of the form $(\g \otimes J)^\Gamma$ for a unique $\Gamma$-invariant ideal $J$ of $A$. We denote this ideal $J$ by $\Ann_A^\Gamma V$. Thus \[ \Ann_A^\Gamma V \defeq \langle f \in A\ |\ uV=0 \text{ for all } u \in (\g \otimes A)^\Gamma \cap (\g \otimes f) \rangle. \] We define the \emph{support} of $V$ to be \[ \Supp_A^\Gamma V \defeq \Supp \Ann_A^\Gamma V. \] When the group $\Gamma$ is trivial, we will often omit the superscript $\Gamma$. \end{defin} Let $X_*$\label{def:X-star} denote the set of finite subsets of $X_\rat$ that do not contain two points in the same $\Gamma$-orbit. \begin{defin}[Categories $\cF_\bx$ and $\cF_\bx^\Gamma$]\label{def:F-x} For $\bx \in X_*$, let $\cF_\bx$ denote the full subcategory of the category of $(\g \otimes A)$-modules whose objects are finite-dimensional $(\g \otimes A)$-modules $V$ with $\Supp_A V \subseteq \bx$. Similarly, let $\cF_\bx^\Gamma$ be the full subcategory of the category of $(\g \otimes A)^\Gamma$-modules whose objects are finite-dimensional $(\g \otimes A)^\Gamma$-modules $V$ with $\Supp_A^\Gamma V \subseteq \Gamma \cdot \bx$. \end{defin} If $V$ is a finitely supported $(\g \otimes A)$-module and $V^\Gamma$ denotes the corresponding $(\g \otimes A)^\Gamma$-module obtained by restriction, then it is clear that $\Supp_A^\Gamma V^\Gamma = \Gamma \cdot \Supp_A V$. \begin{defin}[{Twisting functors $\bT$ and $\bT_\bx$ (\cite[Def.~2.8]{FKKS12})}]\label{def:twisting-functor} We have a natural \emph{twisting functor} $\bT$ from the category of $(\g \otimes A)$-modules to the category of $(\g \otimes A)^\Gamma$-modules, defined by restriction. 
For any $\bx \in X_*$, we have the induced functor $\bT_\bx \colon \cF_\bx \to \cF_\bx^\Gamma$. \end{defin} \begin{prop}[{\cite[Th.~2.10]{FKKS12}}] \label{prop:twisting} For $\bx \in X_*$, the functor $\bT_\bx \colon \cF_\bx \to \cF_\bx^\Gamma$ is an isomorphism of categories. Furthermore, for $\psi \in \cE^\Gamma$ with $\bx \in (\Supp \psi)_\Gamma$, we have $\bT_\bx(V(\psi_\bx)) = V^\Gamma(\psi)$. \end{prop} \begin{proof} This follows immediately from~\cite[Th.~2.10]{FKKS12} after the straightforward verification that $(\psi_\bx)^\Gamma = \psi$ in the notation of that theorem. \end{proof} Let $\omega_i$ be the fundamental weight of $\g$ corresponding to $i \in I$. So we have $\Lambda^+ = \sum_{i \in I} \N \omega_i$. Recall that the set of nodes $I_\Gamma$ of the Dynkin diagram of $\g^\Gamma$ can be naturally identified with the set of $\Gamma$-orbits in $I$ (and we will equate the two in what follows). For $\bi \in I_\Gamma$, we define \begin{equation} \label{eq:ebi-fbi-hbi} \ts e_\bi \defeq \sqrt{\kappa_\bi} \sum_{i \in \bi} e_i,\quad f_\bi \defeq \sqrt{\kappa_\bi} \sum_{i \in \bi} f_i,\quad h_\bi \defeq \kappa_\bi \sum_{i \in \bi} h_i, \end{equation} where $\kappa_\bi = 2$\label{def:kappa-bi} if $\g$ is of type $A_{2n}$, $\Gamma$ acts nontrivially on $\g$ and $\bi$ corresponds to the short root of $\g^\Gamma$ (which is of type $B_n$). Otherwise, $\kappa_\bi=1$. Then $\{e_\bi, f_\bi, h_\bi\}$ is an $\mathfrak{sl}_2$-triple for each $\bi \in I_\Gamma$ and these triples generate $\g^\Gamma$. We refer the reader to \cite[\S8.3]{Kac90} for details. We let $\alpha_\bi$ and $\omega_\bi$ denote the simple root and fundamental weight, respectively, of $\g^\Gamma$ corresponding to $\bi \in I_\Gamma$. Thus \begin{equation} \label{def:Lambda-Gamma}\ts \Lambda_\Gamma = \bigoplus_{\bi \in I_\Gamma} \Z \omega_\bi,\quad \Lambda_\Gamma^+ = \bigoplus_{\bi \in I_\Gamma} \N \omega_\bi, \quad Q_\Gamma = \bigoplus_{\bi \in I_\Gamma} \Z \alpha_\bi,\quad Q_\Gamma^+ = \bigoplus_{\bi \in I_\Gamma} \N \alpha_\bi \end{equation} are the integral weight lattice, dominant integral weight lattice, root lattice, and positive root lattice of $\g^\Gamma$ respectively. We conclude this section with a lemma collecting some technical results that will be used in the sequel. \begin{lem} \label{lem:height-wt} Suppose that $\Gamma$ acts nontrivially on $\g$ by diagram automorphisms. \begin{enumerate} \item \label{lem-item:root-restriction} For all $i \in I$, we have $\alpha_i|_{\h^\Gamma} = \alpha_{\Gamma i}$ and $\omega_i|_{\h^\Gamma} = \kappa_{\Gamma i} \omega_{\Gamma i}$. \item \label{lem-item:height-wt} For all $\psi \in \cE^\Gamma$ and $\bx \in (\Supp \psi)_\Gamma$, we have $\hei_\Gamma \psi = \hei \psi_\bx$. \item \label{lem-item:category-I-restriction} Let $\llift \in \Lambda^+$ and set $\lambda \defeq \llift|_{\h^\Gamma}$. Then, for $V \in \Ob \cI_{\le \llift}$, we have $\bT(V) \in \Ob \cI^\Gamma_{\le \lambda}$ and $V_\llift = \bT(V)_{\lambda}$ as vector spaces. \end{enumerate} \end{lem} \begin{proof} \begin{asparaenum} \item This is a straightforward computation and will be omitted. \item Suppose $\psi \in \cE^\Gamma$ and choose $\bx \in (\Supp \psi)_\Gamma$. Then \[ \hei_\Gamma \psi = \hei_\Gamma \left((\wt \psi_\bx)|_{\h^\Gamma}\right) = \hei \wt \psi_\bx = \hei \psi_\bx, \] where the second equality follows from part~\eqref{lem-item:root-restriction}. \item Let $V \in \Ob \cI_{\le \llift}$. Then the $\g$-weights of $V$ lie in $\llift - Q^+$, where $Q^+$ is the positive root lattice of $\g$. 
By part~\eqref{lem-item:root-restriction}, the $\g^\Gamma$-weights of $\bT(V)$ lie in $\lambda - Q^+_\Gamma$ and so $\bT(V) \in \Ob \cI^\Gamma_{\le \lambda}$. The second part of the statement follows easily. \qedhere \end{asparaenum} \end{proof} \section{Local Weyl modules} \label{sec:local-Weyl-modules} In this section we define local Weyl modules and prove some of their important properties. We continue to assume that $\Gamma$ is cyclic, acts freely on $X_\rat$ and acts faithfully on $\g$ by diagram automorphisms. We will also assume that the action of $\Gamma$ on $\g$ is nontrivial, since the case of trivial action has been covered in \cite{CFK10}. Recall the definitions of the modules $M(\psi)$ and $M^\Gamma(\psi)$ from Definition~\ref{def:M-psi}. \begin{defin}[Local Weyl modules $W(\psi)$ and $W^\Gamma(\psi)$]\label{def:local-weyl} Let $\psi \in \cE$ (respectively, $\psi \in \cE^\Gamma$) and set $\llift = \wt \psi$ (respectively, $\lambda = \wt_\Gamma \psi$). The corresponding \emph{untwisted} (respectively, \emph{twisted}) \emph{local Weyl module} is $W(\psi) \defeq \bW_\llift M(\psi)$ (respectively, $W^\Gamma(\psi) \defeq \bW_\lambda^\Gamma M^\Gamma(\psi)$). \end{defin} \begin{lem} \label{lem:ideal-gen-by-zero-component} If $J = \bigoplus_{\xi \in \Xi} J_\xi$ is a $\Gamma$-invariant ideal of $A$, then $A_\tau J_\xi = J_{\tau + \xi}$ for all $\tau,\xi \in \Xi$. \end{lem} \begin{proof} Fix $\tau, \xi \in \Xi$. Since $J$ is an ideal, we have $A_\tau J_\xi \subseteq J_{\tau + \xi}$. By \cite[Lem.~4.4]{NS11}, we have $A_\tau A_{-\tau} = A_0$. Thus $J_{\tau + \xi} = A_0 J_{\tau + \xi} = A_\tau A_{-\tau} J_{\tau + \xi} \subseteq A_\tau J_\xi$. \end{proof} \begin{lem} \label{lem:EMA-ideal} Suppose that $J_0$ is an ideal of $A_0 = A^\Gamma$. Then the ideal of $(\g \otimes A)^\Gamma$ generated by $\g^\Gamma \otimes J_0$ is $(\g \otimes J)^\Gamma$, where $J= \bigoplus_{\xi \in \Xi} A_\xi J_0$ is the ideal of $A$ generated by $J_0$. \end{lem} \begin{proof} By Lemma~\ref{lem:ideal-form}, the ideal of $(\g \otimes A)^\Gamma$ generated by $\g^\Gamma \otimes J_0$ is of the form $(\g \otimes J')^\Gamma = \bigoplus_{\xi \in \Xi} \g_\xi \otimes J'_{-\xi}$ for some $\Gamma$-invariant ideal $J'$ of $A$. Clearly we have $J_0 \subseteq J'_0$ and so $J \subseteq J'$. By Lemma~\ref{lem:ideal-gen-by-zero-component}, $(\g \otimes J)^\Gamma$ is an ideal of $(\g \otimes A)^\Gamma$ containing $\g^\Gamma \otimes J_0$, and so we must have $J' \subseteq J$. \end{proof} For $\psi \in \cE$, define \begin{equation} \label{def:J-psi} \ts J(\psi) \defeq \prod_{\sm \in \Supp \psi} \sm. \end{equation} If $\psi \in \cE^\Gamma$, then $J(\psi)$ is clearly $\Gamma$-invariant. Recall the definition of $Y_\Gamma$ for a $\Gamma$-invariant subset $Y \subseteq X_\rat$ given in Definition~\ref{def:wt-ht}. \begin{prop} \label{prop:local-Weyl-module-annihilated} For $\psi \in \cE^\Gamma$, the ideal $(\g \otimes J(\psi)^k)^\Gamma$ annihilates the local Weyl module $W^\Gamma(\psi)$ for some positive integer $k$. In particular, $W^\Gamma(\psi) \in \cF^\Gamma_\bx$ for $\bx \in (\Supp \psi)_\Gamma$. \end{prop} \begin{proof} Let $\lambda = \wt_\Gamma \psi$, fix a nonzero element $m \in M^\Gamma(\psi)$ and let $J = J(\psi)$. First suppose that $\g$ is not of type $A_{2n}$. Let $\theta$ be the highest root of $\g$ and let $\{e_\theta, f_\theta, h_\theta\}$ be a corresponding $\mathfrak{sl}_2$-triple.
Then $\{e_\theta, f_\theta, h_\theta\} \subseteq \g^\Gamma$ and this set forms an $\mathfrak{sl}_2$-triple corresponding to $\theta_\Gamma \defeq \theta|_{\h^\Gamma}$, which is the highest root of $\g^\Gamma$. \details{Using Lemma~\ref{lem:height-wt}\eqref{lem-item:root-restriction} and the explicit description of the highest roots given, for instance, in \cite[\S12.2, Table~2]{Hum72}, we see that $\theta|_{\h^\Gamma}$ is the highest root of $\g^\Gamma$ and that $\theta$ is the \emph{only} root of $\g$ that restricts to the highest root of $\g^\Gamma$. Thus $\Gamma$ must act trivially on $\g_\theta$ (similarly, on $\g_{-\theta}$) and the result follows.} One can see from Proposition~\ref{weyl:gen-rel} (see also \cite[Prop.~4]{CFK10}) that there is a map of $(\g^\Gamma \otimes A^\Gamma)$-modules from the untwisted global Weyl module for $\g^\Gamma \otimes A^\Gamma$ to the twisted global Weyl module for $(\g \otimes A)^\Gamma$ that maps $w_\lambda$ to $w_\lambda^\Gamma$. Thus, applying \cite[Prop.~9]{CFK10} (see also \cite[Prop.~4.1]{CFS08}) with $\g^\Gamma$ in place of $\g$ and $A^\Gamma$ in place of $A$, we have \begin{equation}\label{f-theta-annihilate} \left(f_\theta \otimes a^{\lambda(h_{\theta})}\right)(w_\lambda^\Gamma \otimes m) = 0,\quad a \in J^\Gamma. \end{equation} Since $[f_\theta, \n^-]=0$ and $\cU\left((\n^- \otimes A)^\Gamma\right) (w_\lambda^\Gamma \otimes m) = W^\Gamma(\psi)$, we have \[ \left(f_\theta \otimes a^{\lambda(h_\theta)}\right) W^\Gamma(\psi)=0, \quad a \in J^\Gamma. \] Since $\g^\Gamma$ is simple and the annihilator of $W^\Gamma(\psi)$ in $\g^\Gamma \otimes A^\Gamma$ is an ideal, it follows that $\g^\Gamma \otimes a^{\lambda(h_\theta)}$ annihilates $W^\Gamma(\psi)$ for all $a \in J^\Gamma = J_0$. By Lemma~\ref{lem:ideal-form}, the annihilator of $W^\Gamma(\psi)$ is of the form $(\g \otimes J')^\Gamma$ for some $\Gamma$-invariant ideal $J'$ of $A$. By the above, we have $a^{\lambda(h_\theta)} \in J'_0$ for all $a \in J_0$. By \cite[Th.~5]{AKL94}, this implies that $(J_0)^{\lambda(h_\theta)} \subseteq J'_0$. Let $K$ be the ideal of $A$ generated by $(J_0)^{\lambda(h_\theta)}$. For $\xi \in \Xi$, we have \[ K_\xi = A_\xi (J_0)^{\lambda(h_\theta)} = J_\xi (J_0)^{{\lambda(h_\theta)}-1} = (J^{\lambda(h_\theta)})_\xi, \] where the second equality holds by Lemma~\ref{lem:ideal-gen-by-zero-component} and the third equality holds by \cite[Lem.~4.4]{NS11}. Thus $K=J^{\lambda(h_\theta)}$. By Lemma~\ref{lem:EMA-ideal}, the ideal of $(\g \otimes A)^\Gamma$ generated by $\g^\Gamma \otimes (J_0)^{\lambda(h_\theta)}$ is equal to $(\g \otimes K)^\Gamma$. Thus we have $J^{\lambda(h_\theta)} = K \subseteq J'$. Now suppose that $\g$ is of type $A_{2n}$. Let $\beta_\Gamma$ denote the highest root of $\g^\Gamma$ (which is of type $B_n$) and $\beta_\Gamma^s$ the highest short root. We let $\{e_{\beta_\Gamma}, f_{\beta_\Gamma}, h_{\beta_\Gamma}\}$ be an $\mathfrak{sl}_2$-triple in $\g^\Gamma$ corresponding to $\beta_\Gamma$. Since $\Gamma$ is of order two, $\g$ decomposes into $\g^\Gamma \oplus \g_1$. By \cite[Prop.~8.3d]{Kac90}, we know that $\g_1$ is a simple $\g^\Gamma$-module of highest weight $2\sum_{\bi\in I_\Gamma}\alpha_\bi$ (\cite[\S8.3, Table]{Kac90}), which is equal to $2\beta_\Gamma^s$ (see, for example, \cite[\S12.2, Table 2]{Hum72}). Recall that $ \{ e_{i}, f_i , h_i\}$ is an $\mathfrak{sl}_2$-triple in $\g$ corresponding to the simple root $\alpha_i \in R^+$, $i \in I$.
Then, for $1 \leq i \leq n$, \begin{equation} \label{eq:simple-root-spaces} f_{\mathbf{1}} = \sqrt{\kappa_{\mathbf{1}}}(f_1+f_{2n}) \in \g^\Gamma \text{ and }f_i - f_{2n+1 - i} \in \g_{1}. \end{equation} Before completing the proof of the proposition, we develop some additional ideas specific to the $A_{2n}$ case. Since $\g^\Gamma$ is of type $B_n$, $2\beta_\Gamma^s-\beta_\Gamma$ is the simple root $\alpha_{\mathbf{1}}$. On the other hand, it follows from Lemma~\ref{lem:height-wt}\eqref{lem-item:root-restriction} that the weight of $f_1-f_{2n}$ is $-\alpha_{\mathbf1}$. Consider the vector \[ [f_{\beta_\Gamma}, f_1-f_{2n}]\in(\g_1)_{-2\beta_\Gamma^s}, \] which we claim is nonzero. Assuming the claim, we see that $ [f_{\beta_\Gamma}, f_1-f_{2n}]$ spans the lowest weight space of $\g_1$ as a $\g^\Gamma$-module, since simple finite-dimensional modules for $B_n$ are self-dual. Therefore \begin{equation}\label{fbg-generates} \left [\cU(\n_+^\Gamma), [f_{\beta_\Gamma}, f_1-f_{2n}]\right]=\g_1, \end{equation} that is, $[f_{\beta_\Gamma}, f_1-f_{2n}]$ generates $\g_1$ as an $\n_+^\Gamma$-module. To see that $[f_{\beta_\Gamma}, f_1-f_{2n}]\ne 0$, let $w$ be a nonzero element of $(\g_1)_{-2\beta_\Gamma^s}$. Since $[h_{\beta_\Gamma},w]=-2\beta_\Gamma^s(h_{\beta_\Gamma})w=-2w$, it follows that $[e_{\beta_\Gamma}, w]$ is a nonzero weight vector of weight $-\alpha_{\mathbf1}$ (indeed, if $[e_{\beta_\Gamma},w]$ were zero, then $w$ would be a highest weight vector of negative weight for the $\mathfrak{sl}_2$-triple $\{e_{\beta_\Gamma}, f_{\beta_\Gamma}, h_{\beta_\Gamma}\}$, which is impossible in a finite-dimensional module). Now this weight space has dimension 1, for the following reason. First, a straightforward computation shows that \[ s_{\mathbf{2}}s_{\mathbf{3}} \dotsm s_{\mathbf{n-1}}s_{\mathbf{n}}s_{\mathbf{n-1}} \dotsm s_{\mathbf{3}}s_{\mathbf{2}}s_{\mathbf{1}}(-\alpha_{\mathbf1}) = \alpha_{\mathbf{1}}+2\alpha_{\mathbf{2}}+\cdots+2\alpha_{\mathbf{n}} = \beta_\Gamma \] and so $-\alpha_{\mathbf1}$ is in the Weyl group orbit of $\beta_\Gamma$. Now the dimension of the $\beta_\Gamma$ weight space of the Verma module of highest weight $2\beta_\Gamma^s$ is one, the number of ways of writing $2\beta_\Gamma^s - \beta_\Gamma = \alpha_{\mathbf{1}}$ as a sum of positive roots. Thus $V(2\beta_\Gamma^s)_{\beta_\Gamma}$ has dimension at most 1. On the other hand, since $\beta_\Gamma$ is a dominant weight lying below the highest weight of $V(2\beta_\Gamma^s)$, the dimension is also at least 1, and thus $\dim_\kk V(2\beta_\Gamma^s)_{\beta_\Gamma}=1$. Because $\beta_\Gamma$ is Weyl conjugate to $-\alpha_\mathbf{1}$, it follows that $\dim V(2\beta_\Gamma^s)_{-\alpha_{\mathbf{1}}} = 1$ and so $[e_{\beta_\Gamma}, w]$ is a nonzero constant multiple of $f_1-f_{2n}$. Since $[f_{\beta_\Gamma}, [e_{\beta_\Gamma},w]]$ is a nonzero multiple of $w$, we see that $[f_{\beta_\Gamma}, f_1-f_{2n}]\ne 0$, which proves the claim and establishes \eqref{fbg-generates}. Now, the set of weights $\mu_\Gamma\in \Lambda_\Gamma$ with $-\beta_\Gamma>\mu_\Gamma>-2\beta_\Gamma^s$ is empty since the difference $2\beta_\Gamma^s-\beta_\Gamma$ is the simple root $\alpha_\mathbf{1}$, and so it follows that \begin{equation}\label{n-1-minus} [(\n_-)_1, f_{\beta_\Gamma}]=\Span_\kk [f_{\beta_\Gamma}, f_1-f_{2n}]. \end{equation} Finally, we observe that \begin{equation} \label{n-minus-rk} [\n_-,[f_{\beta_\Gamma}, f_1-f_{2n}]] = 0. \end{equation} To see this, note that $[\n_-^\Gamma, [f_{\beta_\Gamma}, f_1-f_{2n}]]=0$ since any vector in this space would be a vector in $\g_1$ of $\g^\Gamma$-weight strictly lower than the lowest weight.
On the other hand, $[(\n_-)_1, [f_{\beta_\Gamma}, f_1-f_{2n}]]=0$ since any vector in this space would be a vector in $\g^\Gamma$ of $\g^\Gamma$-weight strictly lower than $-2\beta_\Gamma^s\le-\beta_\Gamma$. To complete the proof of the proposition in the case where $\g$ is of type $A_{2n}$, we will generalize the arguments used in \cite[Prop.~4.1]{CFS08}, where the proposition was proved for the twisted loop algebra. Recall that we have decompositions $\g = \g^\Gamma \oplus \g_{1}$ and $J=J_0\oplus J_1$, and so again by \cite[Lem.~4.4]{NS11}, we have \[ (\g\otimes J^r)^\Gamma=(\g^\Gamma \otimes (J_0)^r)\oplus (\g_1\otimes J_1(J_0)^{r-1}) \quad \text{for } r \ge 1. \] Thus it suffices to show that \begin{gather} \label{6-a2n-sts1} \left( \g^\Gamma\otimes (J_0)^k \right) W^\Gamma(\psi) = 0\quad\text{and} \\ \label{6-a2n-sts2} \left( \g_1\otimes J_1(J_0)^k \right) W^\Gamma(\psi) = 0, \end{gather} for sufficiently large $k$. As in \eqref{f-theta-annihilate} we obtain \[ \label{eq:theta-action2} \left(f_{\beta_\Gamma} \otimes a^{ \lambda(h_{\beta_\Gamma})}\right)(w_\lambda^\Gamma \otimes m) = 0,\quad a \in J_0. \] Using the fact that $f_{\beta_\Gamma}$ is a lowest weight vector for the adjoint representation of $\g^\Gamma$, it follows, once again using \cite[Th.~5]{AKL94}, that \begin{equation} \label{eq:ggamma-ideal} \left(\g^\Gamma \otimes (J_0)^{\lambda(h_{\beta_\Gamma})}\right) (w_\lambda^\Gamma \otimes m) =0. \end{equation} In particular, \begin{equation} \label{eq:alphan-ideal} \left(f_\mathbf{1} \otimes (J_0)^{\lambda(h_{\beta_\Gamma})}\right)(w_\lambda^\Gamma \otimes m) = 0. \end{equation} Since $[h_1-h_{2n},f_{\mathbf{1}}] = -(2+\delta_{1,n})\sqrt{\kappa_{\mathbf 1}}(f_1-f_{2n}) \in \g_1$, we have \begin{align*} \left( (f_1 - f_{2n}) \otimes J_1 (J_0)^{\lambda(h_{\beta_\Gamma})} \right) (w_\lambda^\Gamma \otimes m) &= \left( [h_1-h_{2n}, f_{\mathbf{1}}] \otimes J_1 (J_0)^{\lambda(h_{\beta_\Gamma})} \right) (w_\lambda^\Gamma \otimes m) \\ &= \left[ (h_1 - h_{2n}) \otimes J_1, f_{\mathbf{1}} \otimes (J_0)^{\lambda(h_{\beta_\Gamma})} \right] (w_\lambda^\Gamma \otimes m). \end{align*} Thus \begin{equation} \label{eq:yn-action} \left( (f_1 - f_{2n}) \otimes J_1 (J_0)^{\lambda(h_{\beta_\Gamma})} \right) (w_\lambda^\Gamma \otimes m) = 0, \end{equation} by~\eqref{eq:alphan-ideal} and the fact that $(h_1 - h_{2n}) \otimes J_1 \subseteq (\h \otimes A)^\Gamma$ acts by scalar multiplication on the highest weight vector $w_\lambda^\Gamma \otimes m$. By~\eqref{eq:ggamma-ideal} and~\eqref{eq:yn-action}, we have \begin{multline}\label{eq:lw-ideal} \left( [f_{\beta_\Gamma}, f_1 - f_{2n}] \otimes J_{1} (J_0)^{2\lambda(h_{\beta_\Gamma})} \right) (w_\lambda^\Gamma \otimes m) \\ = \left[f_{\beta_\Gamma}\otimes (J_0)^{\lambda(h_{\beta_\Gamma})}, (f_1 - f_{2n}) \otimes J_{1}(J_0)^{\lambda(h_{\beta_\Gamma})}\right] (w_\lambda^\Gamma\otimes m)=0. \end{multline} By~\eqref{n-minus-rk}, $[\n_-, [f_{\beta_\Gamma}, f_1-f_{2n}]]=0$. Thus, again using the fact that $W^\Gamma(\psi) = \cU((\n^- \otimes A)^\Gamma) (w_\lambda^\Gamma \otimes m)$, we have that \[ \left( \left[f_{\beta_\Gamma}, f_1 -f_{2n}\right]\otimes J_{1}(J_0)^{2\lambda(h_{\beta_\Gamma})}\right) W^\Gamma(\psi) = 0. \] By \eqref{fbg-generates}, $[f_{\beta_\Gamma}, f_1 -f_{2n}]$ generates $\g_{1}$ as an $\n_{+}^{\Gamma}$-module, which now establishes~\eqref{6-a2n-sts2}. 
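\details{For orientation, we describe the smallest instance $n=1$ of the above setup; this illustration is included only for the reader's convenience and is not needed for the proof. Here $\g = \mathfrak{sl}_3$ and, realizing the diagram automorphism (up to conjugation) as $x \mapsto -x^T$, we have $\g^\Gamma \cong \mathfrak{so}_3(\kk) \cong \mathfrak{sl}_2$, while $\g_1$ is the five-dimensional space of traceless symmetric matrices, which is irreducible of highest weight $2\beta_\Gamma^s$. In this case $\beta_\Gamma = \beta_\Gamma^s = \alpha_{\mathbf{1}}$, $\kappa_{\mathbf{1}} = 2$, the bracket identity $[h_1-h_{2n},f_{\mathbf{1}}] = -(2+\delta_{1,n})\sqrt{\kappa_{\mathbf 1}}(f_1-f_{2n})$ used above specializes to \[ \ts [h_1 - h_2, f_{\mathbf{1}}] = -3\sqrt{2}\,(f_1 - f_2), \] and $[f_{\beta_\Gamma}, f_1 - f_2]$ spans the one-dimensional lowest weight space $(\g_1)_{-2\beta_\Gamma^s}$.}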
Combining~\eqref{n-1-minus}, \eqref{n-minus-rk} and~\eqref{eq:ggamma-ideal}, we see that \[ \left( f_{\beta_\Gamma} \otimes (J_0)^{k} \right) W^\Gamma(\psi) \subseteq \cU((\n^- \otimes A)^{\Gamma}) \left( [f_{\beta_\Gamma}, f_1 -f_{2n}] \otimes J_{1}(J_0)^{k} \right) (w_\lambda^\Gamma \otimes m)=0 \] for $k\ge 2\lambda(h_{\beta_\Gamma})$, where the last equality follows by \eqref{eq:lw-ideal}. Again since $f_{\beta_\Gamma}$ is a lowest weight vector for the adjoint representation of $\g^\Gamma$, we have \[ (\g^\Gamma \otimes (J_0)^{k}) W^\Gamma(\psi) = 0 \quad \text{for} \quad k \geq 2\lambda(h_{\beta_\Gamma}), \] which establishes~\eqref{6-a2n-sts1} and completes the proof. \end{proof} \begin{lem} \label{lem:lWm-characterization} A $(\g \otimes A)^\Gamma$-module $V$ is isomorphic to the local Weyl module $W^\Gamma(\psi)$ if and only if it satisfies the following three conditions: \begin{enumerate} \item \label{lem-item:lWm-cat} $V \in \Ob \cI^\Gamma_{\le \lambda}$, where $\lambda = \wt_\Gamma \psi$; \item \label{lem-item:lWm-top} $\bR^\Gamma_\lambda V \cong M^\Gamma(\psi)$; \item \label{lem-item:lWm-hom} $\Hom_{\cI^\Gamma_{\le \lambda}} (V,U)=0$ and $\Ext^1_{\cI^\Gamma_{\le \lambda}}(V,U)=0$ for all irreducible finite-dimensional $U \in \Ob \cI^\Gamma_{\le \lambda}$ with $U_\lambda=0$. \end{enumerate} \end{lem} \begin{proof} If $V$ satisfies the conditions in the lemma, then by Corollary~\ref{cor:local-weyl-module-char-fd} we have $V \cong \bW^\Gamma_\lambda \bR^\Gamma_\lambda V \cong \bW^\Gamma_\lambda M^\Gamma(\psi) = W^\Gamma(\psi)$. Conversely, the local Weyl module $W^\Gamma(\psi)$ satisfies~\eqref{lem-item:lWm-cat} by definition of the functor $\bW^\Gamma_\lambda$. We have $\bR^\Gamma_\lambda W^\Gamma(\psi) = \bR^\Gamma_\lambda \bW^\Gamma_\lambda M^\Gamma(\psi) \cong M^\Gamma(\psi)$ by Proposition~\ref{prop:right-adjoint}\eqref{prop-item:RW=1}. Then $W^\Gamma(\psi)$ satisfies~\eqref{lem-item:lWm-hom} by~\eqref{eq:W=WRW} and Theorem~\ref{theo:WR=1-hom-char} (or Corollary~\ref{cor:local-weyl-module-char-fd}). \end{proof} Fix $\psi \in \cE^\Gamma$ and $\bx \in (\Supp \psi)_\Gamma$. Set $\llift = \wt \psi_\bx$ and $\lambda = \wt_\Gamma \psi = \llift|_{\h^\Gamma}$. By \cite[Prop.~9]{CFK10}, there exists a positive integer $n_1$ such that $\g \otimes J(\psi_\bx)^{n_1}$ annihilates the untwisted local Weyl module $\bW_\llift M(\psi_\bx)$. By Proposition~\ref{prop:local-Weyl-module-annihilated}, there exists a positive integer $n_2$ such that $(\g \otimes J(\psi)^{n_2})^\Gamma$ annihilates the twisted local Weyl module $\bW_{\lambda}^\Gamma M^\Gamma(\psi)$. Let $n=\max(n_1,n_2)$. We have a sequence of isomorphisms \begin{multline*} \ts (\g \otimes A)^\Gamma/(\g \otimes J(\psi)^n)^\Gamma \cong (\g \otimes A/J(\psi)^n)^\Gamma \cong \left( \bigoplus_{x \in \Supp \psi} \g \otimes A/\sm_x^n \right)^\Gamma \\ \ts \cong \bigoplus_{x \in \bx} \g \otimes A/\sm_x^n \cong (\g \otimes A)/(\g \otimes J(\psi_\bx)^n). \end{multline*} \begin{prop} \label{prop:equivalence-of-lWm-defs} Suppose that $\Gamma$ is abelian, acts freely on $X_\rat$ and acts on $\g$ by diagram automorphisms. Then $W^\Gamma(\psi) = \bT (W(\psi_\bx))$ for all $\psi \in \cE^\Gamma$ and $\bx \in (\Supp \psi)_\Gamma$. In other words, the twisted local Weyl module is obtained by restriction from an untwisted local Weyl module and thus isomorphic to the twisted local Weyl module as defined in \cite[Def.~3.7]{FKKS12}. \end{prop} \begin{proof} Fix $\psi \in \cE^\Gamma$ and $\bx \in (\Supp \psi)_\Gamma$.
By Proposition~\ref{prop:local-Weyl-module-annihilated}, $W^\Gamma(\psi) \in \cF_\bx^\Gamma$. Let $\llift = \wt \psi_\bx$ and $\lambda = \wt_\Gamma \psi$, so that $\lambda = \llift|_{\h^\Gamma}$. Then $W(\psi_\bx) \in \Ob \cI_{\le \llift}$ and so, by Lemma~\ref{lem:height-wt}\eqref{lem-item:category-I-restriction}, $\bT(W(\psi_\bx)) \in \Ob \cI^\Gamma_{\le \lambda}$. Now suppose $V^\Gamma(\varphi)$, $\varphi \in \cE^\Gamma$, is an irreducible finite-dimensional object of $\cI_{\le \lambda}^\Gamma$ with $V^\Gamma(\varphi)_\lambda=0$. This implies that $\wt_\Gamma \varphi \in \lambda - Q^+_\Gamma = \wt_\Gamma \psi - Q^+_\Gamma$. Thus, by Lemma~\ref{lem:height-wt}\eqref{lem-item:height-wt}, we have $\hei \varphi_\bx < \hei \psi_\bx$. Enlarging $\bx$ if necessary, we may assume that $\Supp \varphi \subseteq \Gamma \cdot \bx$ (i.e.\ $V^\Gamma(\varphi) \in \cF_\bx^\Gamma$). For $\ell=0,1$, we have \[ \Ext^\ell_{\cF_\bx^\Gamma}(\bT_\bx(W(\psi_\bx)), V^\Gamma(\varphi)) = \Ext^\ell_{\cF_\bx^\Gamma}(\bT_\bx(W(\psi_\bx)), \bT_\bx(V(\varphi_\bx))) = \Ext^\ell_{\cF_\bx}(W(\psi_\bx), V(\varphi_\bx))=0, \] where the first two equalities follow from Proposition~\ref{prop:twisting} and the last equality follows from \cite[Th.~4.5]{FKKS12}. Now, the weight space $W(\psi_\bx)_\llift$ is isomorphic to the weight space $V(\psi_\bx)_\llift$ as a $(\h \otimes A)$-module. Restricting the action to $(\h \otimes A)^\Gamma$ and using the fact that $\bT_\bx (V(\psi_\bx)) = V^\Gamma(\psi)$, we see that the weight space $(\bT_\bx(W(\psi_\bx)))_\lambda$ is isomorphic to the weight space $V^\Gamma(\psi)_\lambda$ as a $(\h \otimes A)^\Gamma$-module, and hence as a $\bA^\lambda_\Gamma$-module. Therefore $\bR^\Gamma_\lambda \bT_\bx (W(\psi_\bx)) = M^\Gamma(\psi)$. Thus, by Lemma~\ref{lem:lWm-characterization}, we have $W^\Gamma(\psi) = \bT_\bx(W(\psi_\bx))$, as desired. \end{proof} We are now in a position to prove Theorem~\ref{theo:global-Weyl-module-projective}. \begin{proof}[Proof of Theorem~\ref{theo:global-Weyl-module-projective}] \hypertarget{proof:projective-free} Choose $\llift \in \Lambda^+$ such that $\llift|_{\h^\Gamma} = \lambda$ (see Prop.~\ref{prop:irred-wt-space-modules}\eqref{prop-item:gWm-zero}). By Proposition~\ref{prop:equivalence-of-lWm-defs}, the local Weyl modules $W^\Gamma(\psi)$, $\psi \in \cE_\lambda^\Gamma$, for $(\g \otimes A)^\Gamma$ are restrictions of local Weyl modules $W(\psi')$, $\psi' \in \cE_\llift$, for the untwisted map algebra $\g \otimes A$. In addition, we have that $\bA^\lambda_\Gamma$ is a finitely generated algebra and $W^\Gamma(\lambda)$ is a finitely generated $\bA^\lambda_\Gamma$-module by Theorems~\ref{theo:A-fg} and~\ref{theo:finite-generated}. \begin{asparaenum} \item Assume that $A$ is the coordinate algebra of a smooth complex algebraic variety and $\lambda$ is a fundamental weight of $\g^\Gamma$. Then, by \cite[Cor.~8]{CFK10}, the dimensions of the local Weyl modules $W(\psi')$, $\psi' \in \cE_\llift$, for $\g \otimes A$ are all equal. By the above, this implies that the dimensions of the local Weyl modules $W^\Gamma(\psi)$, $\psi \in \cE_\lambda^\Gamma$, are all equal. Then the result follows from Corollary~\ref{cor:constant-localized-dim}. \item Assume $A = \C[t^{\pm 1}]$. Let $\psi \in \cE_\lambda^\Gamma$ and write $\psi = \sum_{\ell=1}^m \psi_\ell$ where $\psi_\ell \in \cE_{\lambda_\ell}^\Gamma$ and $\lambda_\ell$ is a fundamental weight for $\ell=1,\dotsc,m$.
By part~\eqref{theo-item:fund-weight-proj-property} (and Corollary~\ref{cor:constant-localized-dim}), it suffices to show that \begin{equation} \label{eq:local-Weyl-tensor-property} \ts \dim_\C W^\Gamma(\psi) = \prod_{\ell=1}^m \dim_\C W^\Gamma(\psi_\ell). \end{equation} By \cite[Th.~2]{CP01} (or \cite[Prop.~3.9]{FKKS12}), we are reduced to the case where the support of $\psi$ is a single $\Gamma$-orbit. Furthermore, since the twisted local Weyl modules are restrictions of untwisted ones (as explained above), it suffices to consider the untwisted case, that is, we can assume that $\Gamma$ is trivial. Suppose $\Supp \psi = \{a\}$ for some $a \in \C^*$. Then, by Proposition~\ref{prop:local-Weyl-module-annihilated}, $W(\psi)$ is annihilated by $\g \otimes (t-a)^N\C[t^{\pm 1}]$ for some $N \in \N$. We have a commutative diagram \[ \xymatrix{ \g \otimes \C[t^{\pm 1}] \ar@{->>}[r] & \g \otimes \big(\C[t^{\pm 1}]/(t-a)^N\C[t^{\pm 1}]\big) \\ \g \otimes \C[t] \ar@{^(->}[u] \ar@{->>}[r] & \g \otimes \big(\C[t]/(t-a)^N\C[t]\big) \ar[u]^{\cong} } \] where the vertical arrows are induced by inclusion and the horizontal arrows are the natural projections. Since the local Weyl modules for the current algebra $\g \otimes \C[t]$ are annihilated by $\g \otimes (t-a)^N \C[t]$ (increasing $N$ if necessary), it follows that the pullback of the local Weyl module for the loop algebra $\g \otimes \C[t^{\pm 1}]$ is the local Weyl module for the current algebra (see Lemma~\ref{lem:lWm-characterization}). Thus it suffices to prove~\eqref{eq:local-Weyl-tensor-property} for the current algebra. Then, by considering the automorphism of $\g \otimes \C[t]$ determined by $u \otimes f(t) \mapsto u \otimes f(t-a)$ for $f(t) \in \C[t]$, we see that it suffices to prove~\eqref{eq:local-Weyl-tensor-property} when $a=0$ (i.e.\ $\psi$ is supported at the origin). This was proved in \cite[Th.~5]{CP01} for $\g = \mathfrak{sl}_2$ (in fact, the statement there is for the loop algebra itself), in \cite[Th.~1.5.1]{CL06} for $\g = \mathfrak{sl}_n$, in \cite[Cor.~B]{FL07} (and conjectured in~\cite{CP01b}) for simple simply laced $\g$ and in~\cite[Cor.~A]{Nao12} for arbitrary simple $\g$. We note that the freeness of the global Weyl module in the untwisted case also follows from results found in \cite{Nak01,BN04}. \qedhere \end{asparaenum} \end{proof} \section{The algebra \texorpdfstring{$\bA^\lambda_\Gamma$}{acting on the weight spaces}} \label{sec:A-lambda-Gamma} In this section we give an alternative, and more explicit, characterization of the algebra $\bA^\lambda_\Gamma$. We also relate it to the corresponding algebra in the untwisted setting. We assume in this section that the Jacobson radical $\rad A$ of $A$ (which is equal to the nilradical of $A$ since $A$ is finitely generated) is zero and that $\Gamma$ is a nontrivial cyclic group acting faithfully on $\g$ by diagram automorphisms (see Remark~\ref{rem:cyclic-assumption}). We also continue to assume that $\Gamma$ acts freely on $X_\rat$. In addition, we assume in this section that $\g$ is not of type $A_{2n}$. In other words, we assume that $\Gamma$ acts by \emph{admissible} diagram automorphisms (no two adjacent nodes of the Dynkin diagram are contained in the same $\Gamma$-orbit). In this section, for a commutative associative unital algebra $B$, we will not distinguish between a point of $\maxSpec B$ and its corresponding maximal ideal.
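For the reader's orientation, we note that these hypotheses leave only a short list of possibilities; the following enumeration is the standard classification of nontrivial diagram automorphisms (cf.~\cite[\S8.3]{Kac90}) with type $A_{2n}$ excluded, where we specify $\g$ by its type: \[ \ts (\g, |\Gamma|) \in \bigl\{ (A_{2n+1}, 2),\ (D_n, 2),\ (E_6, 2),\ (D_4, 3) \bigr\}, \] with $n \ge 1$ in the first case and $n \ge 4$ in the second.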
Since $\g$ is not of type $A_{2n}$, the restriction map $\Lambda^+ \to \Lambda_\Gamma^+$ is surjective (see Lemma~\ref{lem:height-wt}\eqref{lem-item:root-restriction}). Recall that we can naturally identify $I_\Gamma$ with the set $I/\Gamma$ of $\Gamma$-orbits on $I$. Fix $\lambda = \sum_{\bi \in I_\Gamma} r_\bi \omega_\bi \in \Lambda^+_\Gamma$. For $\bi \in I_\Gamma$ and $i \in \bi$, let $r_i = r_\bi$. Recall that, for $i \in I$, we let $\Gamma_i = \{\gamma \in \Gamma\ |\ \gamma i = i\}$ denote the corresponding isotropy subgroup. Let $J \subseteq I$ contain one point in each orbit of the $\Gamma$-action on $I$. We will often identify $J$ with the set $\{1,\dots,|J|\}$ using the standard labeling of Dynkin diagrams found, for instance, in \cite[\S11.4]{Hum72}. Define \begin{equation} \label{def:bbA} \ts \bbA^\lambda_\Gamma \defeq \bigotimes_{j \in J} S^{r_j} (A^{\Gamma_j}). \end{equation} So we have a natural identification \begin{equation} \label{eq:AA-maxSpec} \ts \maxSpec \bbA^\lambda_\Gamma \cong \prod_{j \in J} \big( (\maxSpec A^{\Gamma_j})^{r_j} \big)/S_{r_j}. \end{equation} From now on, we will identify the two sides of~\eqref{eq:AA-maxSpec}. Recall that $\cE^\Gamma$ is the set of $\Gamma$-equivariant, finitely-supported maps from $X_\rat = \maxSpec A$ to $\Lambda^+$. Let $\mm \in \maxSpec \bbA^\lambda_\Gamma$. Then $\mm$ is of the form \[ \mm = ((\mm_{j,\ell})_{\ell=1}^{r_j})_{j \in J}, \] where, for $j \in J$, the (unordered) tuple $(\mm_{j,\ell})_{\ell=1}^{r_j}$ is an element of $\big( (\maxSpec A^{\Gamma_j})^{r_j} \big)/S_{r_j}$. Hence, each $\mm_{j,\ell}$ can be identified with an element of the quotient $(\maxSpec A)/{\Gamma_j} \cong \maxSpec A^{\Gamma_j}$, that is, with a $\Gamma_j$-orbit in $\maxSpec A$. Then define $\psi_\mm \in \cE^\Gamma$ by \[ \ts \psi_\mm(\sm) = \sum_{j \in J,\ \gamma \in \Gamma/\Gamma_j} \sum_{\ell=1}^{r_j} \delta_{\sm,\gamma \mm_{j,\ell}} \omega_{\gamma j},\quad \sm \in \maxSpec A, \] where $\delta_{\sm,\sm'}$, $\sm \in \maxSpec A$, $\sm' \in \maxSpec A^{\Gamma_j}$, is equal to one if $\sm \mapsto \sm'$ under the quotient map $\maxSpec A \twoheadrightarrow \maxSpec A^{\Gamma_j}$ corresponding to the inclusion $A^{\Gamma_j} \hookrightarrow A$ (i.e., if $\sm' = \sm \cap A^{\Gamma_j}$), and is equal to zero otherwise. It is readily verified that the map $\mm \mapsto \psi_\mm$ is a well-defined bijection of sets $\maxSpec \bbA^\lambda_\Gamma \to \cE^\Gamma_\lambda$. \details{To see that $\psi_\mm$ is $\Gamma$-equivariant, note that, for $\tau \in \Gamma$, \begin{gather*} \ts \psi_\mm(\tau \sm) = \sum_{j \in J,\ \gamma \in \Gamma/\Gamma_j} \sum_{\ell=1}^{r_j} \delta_{\tau \sm, \gamma \mm_{j,\ell}} \omega_{\gamma j} = \sum_{j \in J} \frac{1}{|\Gamma_j|} \sum_{\gamma \in \Gamma} \sum_{\ell=1}^{r_j} \delta_{\tau \sm, \gamma \mm_{j,\ell}} \omega_{\gamma j} \\ \ts = \sum_{j \in J} \frac{1}{|\Gamma_j|} \sum_{\gamma \in \Gamma} \sum_{\ell=1}^{r_j} \delta_{\sm, \tau^{-1} \gamma \mm_{j,\ell}} \omega_{\gamma j} = \sum_{j \in J} \frac{1}{|\Gamma_j|} \sum_{\nu \in \Gamma} \sum_{\ell=1}^{r_j} \delta_{\sm, \nu \mm_{j,\ell}} \omega_{\nu \tau j} \\ \ts = \tau \left(\sum_{j \in J} \frac{1}{|\Gamma_j|} \sum_{\nu \in \Gamma} \sum_{\ell=1}^{r_j} \delta_{\sm, \nu \mm_{j,\ell}} \omega_{\nu j} \right) = \tau \psi_\mm(\sm), \end{gather*} where, in the fourth equality, we set $\nu = \tau^{-1} \gamma$.} Recall that for $\sm \in \maxSpec A$, the quotient $A/\sm$ is canonically isomorphic to $\kk$. We will identify the two in what follows.
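By way of illustration (this example is hypothetical and is not used elsewhere), suppose $\g$ is of type $A_3$, so that $\g^\Gamma$ is of type $C_2$, and let $\Gamma = \{1, \sigma\}$ act on $A = \kk[t^{\pm 1}]$ by $\sigma(t) = -t$, a free action on $X_\rat = \kk^\times$. The $\Gamma$-orbits on $I = \{1,2,3\}$ are $\{1,3\}$ and $\{2\}$, so we may take $J = \{1,2\}$, with $\Gamma_1 = \{1\}$ and $\Gamma_2 = \Gamma$. For $\lambda = r_1 \omega_{\mathbf{1}} + r_2 \omega_{\mathbf{2}}$, the definition~\eqref{def:bbA} then reads \[ \ts \bbA^\lambda_\Gamma = S^{r_1}\bigl(\kk[t^{\pm 1}]\bigr) \otimes S^{r_2}\bigl(\kk[t^{\pm 2}]\bigr), \] and a maximal ideal $\mm$ of $\bbA^\lambda_\Gamma$ amounts to an unordered $r_1$-tuple of points of $\kk^\times$ together with an unordered $r_2$-tuple of $\Gamma$-orbits $\{a, -a\} \subseteq \kk^\times$. The corresponding $\psi_\mm$ assigns $\omega_1$ to each point of the first tuple, $\omega_3$ to its image under $\sigma$, and $\omega_2$ to both points of each orbit in the second tuple (with multiplicities added in case of repetitions).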
For $\psi \in \cE^\Gamma$, choose $\sM \in (\Supp \psi)_\Gamma$ and consider the composition \begin{equation} \label{eq:hev-def} \ts (\h \otimes A)^\Gamma \hookrightarrow \h \otimes A \twoheadrightarrow \bigoplus_{\sm \in \sM} \left( \h \otimes (A/\sm) \right) \cong \bigoplus_{\sm \in \sM} \h \xrightarrow{\sum_{\sm \in \sM} \psi(\sm)} \kk. \end{equation} One readily verifies that this map does not depend on the choice of $\sM$. It induces a map \[ \hev_\psi^\Gamma \colon \cU((\h \otimes A)^\Gamma) \to \kk. \] We use the notation $\hev_\psi^\Gamma$ to distinguish this evaluation representation of $(\h \otimes A)^\Gamma$ from the evaluation map $\ev_\psi^\Gamma$. \begin{lem} \label{lem:Ann-in-hev} For $\lambda \in \Lambda^+_\Gamma$, we have $\Ann_{\cU((\h \otimes A)^\Gamma)} w_\lambda^\Gamma \subseteq \bigcap_{\psi \in \cE^\Gamma_\lambda} \ker \hev_\psi^\Gamma$. \end{lem} \begin{proof} Let $u \in \Ann_{\cU((\h \otimes A)^\Gamma)} w_\lambda^\Gamma$ and $\psi \in \cE^\Gamma_\lambda$. Since $V^\Gamma(\psi)$ is a quotient of $W^\Gamma(\psi)$, we have $u v^\Gamma_\lambda = 0$. But, by the definition of $V^\Gamma(\psi)$, we have $u v^\Gamma_\lambda = \hev_\psi^\Gamma(u) v^\Gamma_\lambda$. Thus $\hev_\psi^\Gamma(u)=0$. \end{proof} For a $\kk$-algebra $B$ and $m \in \N$, define \[ \ts \sym_m(b) \defeq \sum_{\ell=1}^m 1^{\otimes (\ell-1)} \otimes b \otimes 1^{\otimes (m-\ell)} \in S^mB,\quad b \in B. \] Recall that if $B$ is finitely generated, then $S^m B$ is generated by elements of the form $\sym_m(b)$, $b \in B$ (see \cite[Lem.~4.56(ii)]{EGHLSVY} or note that, since $B$ is finitely generated, this follows from the case where $B$ is a polynomial algebra in finitely many variables, in which case the result can be found in \cite[Th.~1.2]{Dal99}, but goes back to \cite{Sch1852}). Thus the algebra $\bbA^\lambda_\Gamma$ is generated by elements of the form \[ \ts \sym^j_\lambda(a) \defeq 1^{\otimes(r_1 + \dotsb + r_{j-1})} \otimes \sym_{r_j}(a) \otimes 1^{\otimes (r_{j+1} + \dotsb + r_{|J|})},\quad j \in J ,\ a \in A^{\Gamma_j}. \] The Lie algebra $(\h \otimes A)^\Gamma$ is spanned by elements of the form \begin{equation} \label{eq:hAG-form} \ts \overline{h_j \otimes a},\quad j \in J,\ a \in A^{\Gamma_j} \end{equation} (recall Definition~\ref{def:overline}). Let $\tilde \tau_\lambda \colon \cU((\h \otimes A)^\Gamma) \twoheadrightarrow \bbA^\lambda_\Gamma$ be the surjective map determined by \begin{equation} \label{def:tau-tilde} \ts \tilde \tau_\lambda \left( \overline{h_j \otimes a} \right) = \sym^j_\lambda (a),\quad j \in J,\ a \in A^{\Gamma_j}. \end{equation} \begin{lem} \label{lem:tau-eval-composition} We have $\hev_{\psi_\mm}^\Gamma = \ev_\mm \circ \tilde \tau_\lambda$ for all $\mm \in \maxSpec \bbA^\lambda_\Gamma$, where $\ev_\mm \colon \bbA^\lambda_\Gamma \twoheadrightarrow \bbA^\lambda_\Gamma/\mm \cong \kk$ is the canonical projection for $\mm \in \maxSpec \bbA^\lambda_\Gamma$. \end{lem} \begin{proof} It suffices to prove that the maps agree on elements of the form~\eqref{eq:hAG-form}. Fix $k \in J$ and $a \in A^{\Gamma_k}$. Choose $\mm = ((\mm_{j,\ell})_{\ell=1}^{r_j})_{j \in J} \in \maxSpec \bbA^\lambda_\Gamma$. Then \[ \ts \ev_\mm \circ \tilde \tau_\lambda ( \overline{h_k \otimes a} ) = \ev_\mm (\sym^k_\lambda(a)) = \sum_{\ell=1}^{r_k} (a + \mm_{k,\ell}) \in \kk, \] where we have canonically identified $A^{\Gamma_k}/\mm_{k,\ell} \cong \kk$ for $\ell=1,\dotsc,r_k$.
On the other hand, choose $\sM$ in~\eqref{eq:hev-def} to contain $\{\sm_{k,\ell}\}_{\ell=1}^{r_k}$, where $\sm_{k,\ell} \mapsto \mm_{k,\ell}$ under the quotient map $\maxSpec A \twoheadrightarrow \maxSpec A^{\Gamma_k}$. Then \[ \ts \hev_{\psi_\mm}^\Gamma ( \overline{h_k \otimes a} ) = \sum_{\ell=1}^{r_k} (a + \sm_{k,\ell}) \in \kk. \] Since, for $\ell=1,\dotsc,r_k$, the inclusion $A^{\Gamma_k} \hookrightarrow A$ induces an isomorphism $A^{\Gamma_k}/\mm_{k,\ell} \cong A/\sm_{k,\ell}$ mapping $a + \mm_{k,\ell} \mapsto a + \sm_{k,\ell}$, the proof is complete. \end{proof} \begin{lem} \label{lem:tau-surjective} We have $\Ann_{\cU((\h \otimes A)^\Gamma)} w_\lambda^\Gamma \subseteq \ker \tilde \tau_\lambda$ and thus $\tilde \tau_\lambda$ induces a surjective algebra homomorphism $\tau_\lambda \colon \bA^\lambda_\Gamma \twoheadrightarrow \bbA^\lambda_\Gamma$. \end{lem} \begin{proof} We have \[ \ts \ker \tilde \tau_\lambda = \bigcap_{\mm \in \maxSpec \bbA^\lambda_\Gamma} \ker \ev_\mm \circ \tilde \tau_\lambda = \bigcap_{\psi \in \cE_\lambda^\Gamma} \ker \hev^\Gamma_\psi. \] Indeed, the first equality follows from the fact that $\rad \bbA^\lambda_\Gamma = 0$ (since $\rad A=0$) and the second follows from Lemma~\ref{lem:tau-eval-composition} and the fact that the map $\mm \mapsto \psi_\mm$ is a bijection $\maxSpec \bbA^\lambda_\Gamma \to \cE^\Gamma_\lambda$. The statement of the lemma then follows from Lemma~\ref{lem:Ann-in-hev}. \end{proof} Recall that we have a $\Xi$-grading $A = \bigoplus_{\xi \in \Xi} A_\xi$ on $A$, where $\Xi$ is the character group of $\Gamma$. Choosing a basis for each $A_\xi$ yields a basis $\cB$ of $A$. We do this in such a way that our basis contains the element $1 \in A$. Since $A$ is finitely generated, the basis $\cB$ is countable. So we can write $\cB = \{a_r\ |\ r \in \N\}$, with $a_0=1$. We say that $a_r \le a_{r'}$ if $r \le r'$. Since each $A^{\Gamma_j}$, $j \in J$, is a sum of isotypic components $A_\xi$, $\xi \in \Xi$, we have that $\cB_j \defeq \cB \cap A^{\Gamma_j}$ is a basis of $A^{\Gamma_j}$ for $j \in J$. \begin{lem} \label{lem:A-lambda-Gamma-spanning-set} The elements \[ \ts \prod_{j \in J} \prod_{s=1}^{m_j} \overline{h_j \otimes b_{j,s}}\, w^\Gamma_\lambda, \quad b_{j,s} \in \cB_j,\ a_0 < b_{j,1} \le \dots \le b_{j,m_j},\ m_j \le r_j \text{ for } j \in J, \] span $W^\Gamma(\lambda)_\lambda$. \end{lem} \begin{proof} It suffices to prove that for all $j \in J$ and $a_{p_1},\dotsc,a_{p_\ell} \in \cB_j$ with $1 \le p_1 \le \dots \le p_\ell$, $\ell \in \N$, we have \[ \ts \prod_{s=1}^\ell \overline{h_j \otimes a_{p_s}}\, w^\Gamma_\lambda \in \Span_\kk \left\{ \left. \prod_{q=1}^m \overline{h_j \otimes a_{t_q}}\, w^\Gamma_\lambda \ \right|\ 1 \le t_1 \le \dots \le t_m,\ m \le r_j \right\}. \] For $j \in J$ and $a,a' \in \cB_j$, we have \begin{gather*} \ts \left[ \overline{e_j \otimes a}, \overline{f_j \otimes a'} \right] = \overline{h_j \otimes aa'},\\ \ts \left[ \overline{h_j \otimes a}, \overline{e_j \otimes a'} \right] = 2\, \overline{e_j \otimes aa'},\quad \left[ \overline{h_j \otimes a}, \overline{f_j \otimes a'} \right] = - 2\, \overline{f_j \otimes aa'}. 
\end{gather*} Thus, for $\ell \ge r_j + 1$, we have \[ \ts 0 = \left( \prod_{s=1}^\ell \overline{e_j \otimes a_{p_s}} \right) \left( \overline{f_j \otimes 1} \right)^\ell w^\Gamma_\lambda = \prod_{s=1}^\ell \overline{h_j \otimes a_{p_s}}\, w^\Gamma_\lambda + C w^\Gamma_\lambda, \] where $C$ is a $\kk$-linear combination of elements of the form $\prod_{s=1}^m \overline{h_j \otimes a_{p_{k_s}}}$ with $m < \ell$. The lemma follows by induction on $\ell$. \end{proof} By Lemma~\ref{lem:A-lambda-Gamma-spanning-set}, we see that $\bA^\lambda_\Gamma$ is spanned by the image of the set \begin{equation} \label{eq:A-lambda-Gamma-span-set} \ts \left\{ \left. \prod_{j \in J} \prod_{s=1}^{m_j} \overline{h_j \otimes b_{j,s}}\ \right|\ b_{j,s} \in \cB_j,\ a_0 < b_{j,1} \le \dots \le b_{j,m_j},\ m_j \le r_j \text{ for } j \in J \right\}. \end{equation} We now state the first main result of this section, which gives an explicit realization of the algebra $\bA^\lambda_\Gamma$. \begin{theo} \label{theo:Alambda-isom} The map $\tau_\lambda \colon \bA^\lambda_\Gamma \to \bbA^\lambda_\Gamma$ is an isomorphism of algebras. \end{theo} \begin{proof} By Lemma~\ref{lem:tau-surjective}, it suffices to show that $\tau_\lambda$ is injective. For this, it is enough to show that the images under $\tau_\lambda$ of the elements of the set~\eqref{eq:A-lambda-Gamma-span-set} are linearly independent (over $\kk$) in $\bbA^\lambda_\Gamma$. Now, for $b_{j,s} \in \cB_j$, $s=1,\dotsc,m_j$, $a_0 < b_{j,1} \le \dots \le b_{j,m_j}$, $m_j \le r_j$, $j \in J$, we have \begin{equation} \label{eq:tau-lambda-formula} \ts \tau_\lambda \left( \prod_{j \in J} \prod_{s=1}^{m_j} \overline{h_j \otimes b_{j,s}} \right) = \prod_{j \in J} \prod_{s=1}^{m_j} \sym^j_\lambda(b_{j,s}). \end{equation} Since the tensor product of linearly independent sets is linearly independent, it suffices to prove that, for a fixed $j \in J$, the elements \[ \ts \prod_{s=1}^m \sym_{r_j}(b_s),\quad b_s \in \cB_j,\ s=1,\dotsc,m,\ a_0 < b_1 \le \dots \le b_m,\ m \le r_j, \] are linearly independent elements of $S^{r_j}(A^{\Gamma_j})$. Consider a linear combination of distinct elements of this set equal to zero: \begin{equation} \label{eq:lin-comb-syms} \ts \sum_{t=1}^N \left( c_t \prod_{s=1}^{m_t} \sym_{r_j} (b_{s,t}) \right) = 0 \end{equation} for some $b_{s,t} \in \cB_j$, $t=1,\dotsc,N$, $s=1,\dotsc,m_t$, $a_0 < b_{1,t} \le \dotsb \le b_{m_t,t}$, $m_t \le r_j$ and $c_1,\dotsc,c_N \in \kk$. Suppose, for the sake of contradiction, that not all of the $c_t$ are zero; discarding the terms with $c_t = 0$, we may assume that every $c_t$ is nonzero. Set $\ell \defeq \max\{m_t \ |\ 1 \le t \le N\}$ (note that $\ell \ge 1$: if every $m_t$ were zero, then $N=1$ and \eqref{eq:lin-comb-syms} would read $c_1 = 0$). Let $A^{\Gamma_j}_+ \defeq \Span_\kk \{b \in \cB_j\ |\ b \ne 1\} \subsetneq A^{\Gamma_j}$. Applying the projection $S^{r_j}(A^{\Gamma_j}) \twoheadrightarrow (A^{\Gamma_j}_+)^{\otimes \ell} \otimes 1^{\otimes(r_j-\ell)}$ to both sides of~\eqref{eq:lin-comb-syms} gives \[ \ts \sum_{1 \le t \le N,\ m_t=\ell} c_t \sum_{\varsigma \in S_{\ell}} \varsigma \left( b_{1,t} \otimes b_{2,t} \otimes \dotsb \otimes b_{\ell,t} \otimes 1^{\otimes(r_j-\ell)} \right) = 0, \] where we view $S_\ell$ as a subgroup of $S_{r_j}$ in the natural way (i.e., permuting the first $\ell$ tensor factors). Indeed, any term of $\prod_{s=1}^{m_t} \sym_{r_j}(b_{s,t})$ in which two of the $b_{s,t}$ occupy the same tensor factor has fewer than $\ell$ tensor factors different from $1$, as does every term with $m_t < \ell$; all such terms are annihilated by the projection. Since $b_{1,t},\dots,b_{\ell,t}$ are elements of a basis of $A^{\Gamma_j}$ (for any $t$), the elements \[ \ts \sum_{\varsigma \in S_{\ell}} \varsigma \left( b_{1,t} \otimes b_{2,t} \otimes \dotsb \otimes b_{\ell,t} \otimes 1^{\otimes(r_j-\ell)} \right) \] appearing above are linearly independent. Thus $c_t=0$ for all $t$ with $m_t = \ell$, contradicting our assumption. Hence $c_t = 0$ for all $t=1,\dots,N$.
\end{proof} In the remainder of this section, we provide an alternative description of $\bbA^\lambda_\Gamma$ in terms of coinvariants that does not depend on the choice $J$ of one element from each $\Gamma$-orbit in $I$. For an algebra $B$ with the action (by automorphisms) of a finite group $\Upsilon$, we define $B_{(\Upsilon)}$ to be the ideal of $B$ generated by the set $\{b - \gamma b\ |\ b \in B,\ \gamma \in \Upsilon\}$ and let $B_\Upsilon \defeq B/B_{(\Upsilon)}$ be the algebra of coinvariants. We hope this causes no confusion with the notation $\bbA^\lambda_\Gamma$, which is not, a priori, the algebra of coinvariants of $\bbA^\lambda$ (but see Theorem~\ref{theo:AlamGam-coinvariants}). Note that if $\Upsilon$ is abelian, then $B_{(\Upsilon)}$ is the ideal of $B$ generated by $\bigoplus_{\xi \in \Xi,\, \xi \ne 0} B_\xi$, where $\Xi$ is the character group of $\Upsilon$. \begin{lem} \label{lem:transitive-group-action-on-tensor-prod} Suppose that $\Upsilon$ is a finite group acting on a commutative unital $\kk$-algebra $B$ by algebra automorphisms and simply transitively on a finite set $Z$. Consider the action of $\Upsilon$ on $B' \defeq \bigotimes_{z \in Z} B_z$, where $B_z=B$ for all $z \in Z$, determined by \[ \ts \gamma \left(\bigotimes_{z \in Z} b_z\right) = \bigotimes_{z \in Z} \gamma b_{\gamma^{-1}z},\quad \gamma \in \Upsilon,\ b_z \in B, z \in Z. \] Then $(B')_\Upsilon \cong B$. \end{lem} \begin{proof} Label the elements of $\Upsilon$ so that we have $\Upsilon = \{\tau_1,\dots,\tau_n\}$, with $\tau_1$ being the identity element of $\Upsilon$. Choose $z_1 \in Z$, and set $z_i = \tau_i z_1$ for $i=1,\dotsc,n$. So $Z=\{z_1,\dots,z_n\}$ and we have a natural action of $\Upsilon$ on the set $\{1,\dots,n\}$ by defining $\gamma i = j$ if $\gamma z_i = z_j$ (equivalently, if $\gamma \tau_i = \tau_j$) for $\gamma \in \Upsilon$ and $i \in \{1,\dots,n\}$. Consider the algebra homomorphism determined by \[ \ts \varpi \colon B' \to B,\quad \bigotimes_{i=1}^n b_i \mapsto \prod_{i=1}^n \tau_i^{-1} b_i,\quad b_i \in B_{z_i}. \] Clearly $\varpi$ is surjective and so it remains to show that $\ker \varpi = (B')_{(\Upsilon)}$. For all $b' = \bigotimes_{i=1}^n b_i$ (recalling that elements of this form span $B'$), we have \[ \ts \varpi (\gamma b') = \varpi \left( \bigotimes_{i=1}^n \gamma b_{\gamma^{-1}i} \right) = \prod_{i=1}^n \tau_i^{-1} \gamma b_{\gamma^{-1}i} = \prod_{j=1}^n \tau_j^{-1} b_j = \varpi (b'), \] where, in the third equality, we have changed the index by setting $j = \gamma^{-1}i$. Thus $(B')_{(\Upsilon)} \subseteq \ker \varpi$. Now suppose $b' \in \ker \varpi$. Then we can write \[ \ts b' = \sum_{j=1}^\ell ( b_{1,j} \otimes \dotsb \otimes b_{n,j} ),\quad b_{i,j} \in B_{z_i} \text{ for all } 1 \le i \le n,\ 1 \le j \le \ell. \] It is straightforward to verify that \[ \ts b_{1,j} \otimes \dotsb \otimes b_{n,j} \equiv \left( \prod_{i=1}^n \tau_i^{-1}b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \mod (B')_{(\Upsilon)}.
\] \details{ For each $j=1,\dots,\ell$, the element $b_{1,j} \otimes \dotsb \otimes b_{n,j}$ is equal to the telescoping sum \begin{gather*} \ts \left( \prod_{i=1}^n \tau_i^{-1}b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \\ \ts + \left( \prod_{i=1}^{n-1} \tau_i^{-1} b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \otimes b_{n,j} - \left( \prod_{i=1}^n \tau_i^{-1}b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \\ \ts + \left( \prod_{i=1}^{n-2} \tau_i^{-1} b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \otimes b_{n-1,j} \otimes b_{n,j} - \left( \prod_{i=1}^{n-1} \tau_i^{-1} b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \otimes b_{n,j} \\ \ts \dotsb + b_{1,j} \otimes \dotsb \otimes b_{n,j} - (b_{1,j} \tau_2^{-1}b_{2,j}) \otimes 1 \otimes b_{3,j} \otimes \dotsb \otimes b_{n,j}, \end{gather*} which, in turn, is equal to \begin{gather*} \ts \left( \prod_{i=1}^n \tau_i^{-1}b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \\ \ts + \Big(\left( \prod_{i=1}^{n-1} \tau_i^{-1} b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1\Big) \Big(1 \otimes \dotsb \otimes 1 \otimes b_{n,j} - (\tau_n^{-1}b_{n,j})\otimes 1 \otimes \dotsb \otimes 1\Big) \\ \ts + \Big(\left( \prod_{i=1}^{n-2} \tau_i^{-1} b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \otimes b_{n,j} \Big) \Big(1 \otimes \dotsb \otimes 1 \otimes b_{n-1,j} \otimes 1 - ( \tau_{n-1}^{-1} b_{{n-1},j}) \otimes 1 \otimes \dotsb \otimes 1 \Big) \\ \ts \dotsb + \Big( b_{1,j} \otimes 1 \otimes b_{3,j} \otimes \dotsb \otimes b_{n,j} \Big) \Big( 1 \otimes b_{2,j} \otimes 1 \otimes \dotsb \otimes 1 - (\tau_2^{-1}b_{2,j}) \otimes 1 \otimes \dotsb \otimes 1 \Big). \end{gather*} Each of the right-hand factors in the lines above (except the first line) is an element of the form $b'' - \tau b''$ for some $b'' \in B'$ and $\tau \in \Upsilon$. } Therefore, \[ \ts b' \equiv \left(\sum_{j=1}^\ell \prod_{i=1}^n \tau_i^{-1}b_{i,j} \right) \otimes 1 \otimes \dotsb \otimes 1 \equiv \varpi(b') \otimes 1 \otimes \dotsb \otimes 1 \equiv 0 \mod (B')_{(\Upsilon)}. \qedhere \] \end{proof} \begin{lem} \label{lem:coinvariant-reduction} Suppose that $\Upsilon$ is a finite cyclic group acting on a finitely generated commutative associative unital $\kk$-algebra $B$ by algebra automorphisms in such a way that the induced action of $\Upsilon$ on $\maxSpec B$ is free. Fix a positive integer $n$ and consider the diagonal action of $\Upsilon$ on $B^{\otimes n}$. This action commutes with the natural action of $S_n$ on $B^{\otimes n}$ and thus we have an induced action of $\Upsilon$ on $S^n B$. \begin{enumerate} \item If the order of $\Upsilon$ does not divide $n$, then $(S^n B)_\Upsilon = 0$. \item If $n = m |\Upsilon|$ for some positive integer $m$ and $(S^n B)_\Upsilon$ is reduced, then $(S^n B)_\Upsilon \cong S^m(B^\Upsilon)$. \end{enumerate} \end{lem} \begin{proof} \begin{asparaenum} \item \label{lem-item:coinvariant-reduction-zero} We have a bijection \[ \ts \maxSpec (S^n B)_\Upsilon \cong \left( \left( \prod_{i=1}^n \maxSpec B \right)/S_n \right)^\Upsilon. \] In other words, the maximal ideals of $(S^n B)_\Upsilon$ can be identified with $\Upsilon$-invariant unordered $n$-tuples of maximal ideals of $B$. Such $n$-tuples are therefore unions (counted with multiplicity) of $\Upsilon$-orbits on the set $\maxSpec B$. Since this action is free, every orbit has exactly $|\Upsilon|$ elements, and hence $(S^n B)_\Upsilon$ has no maximal ideals if $n$ is not divisible by the order of $\Upsilon$. Since a nonzero finitely generated $\kk$-algebra always possesses a maximal ideal, and $(S^n B)_\Upsilon$ is finitely generated (see the proof of part (2)), it follows that $(S^n B)_\Upsilon = 0$. \item Let $\ell = |\Upsilon|$ and assume $n=m \ell$ for some positive integer $m$.
Recall that for any $\kk$-algebra $C$ with an $\Upsilon$-action, we have the induced grading $C=\bigoplus_{\xi \in \Xi}C_\xi$, where $\Xi$ is the character group of $\Upsilon$. Let $\sigma$ be a generator of $\Upsilon$ and consider the map \[ B^{\otimes \ell} \to B,\quad b_1 \otimes \dotsb \otimes b_\ell \mapsto b_1 \sigma(b_2) \dotsm \sigma^{\ell-1}(b_\ell), \] extended by linearity. Let $\Phi'$ denote the restriction of this map to $S^\ell B$. One easily checks that $\Phi'$ is $\Upsilon$-invariant and $\Phi'(S^\ell B) = B_0$. Thus, \begin{equation} \label{eq:Phi'-vanish} \Phi'((S^\ell B)_\xi) = 0 \quad \text{for all } \xi \ne 0. \end{equation} We have the induced surjective map \[ \Phi \colon (S^\ell B)^{\otimes m} \xrightarrow{(\Phi')^{\otimes m}} (B_0)^{\otimes m}. \] Restricting $\Phi$ to $S^n B$ gives a surjective map \[ \varphi \colon S^n B \twoheadrightarrow S^m(B_0). \] Now, \begin{equation} \label{eq:Bn-decomp} \ts B^{\otimes n} \cong (B^{\otimes n})_0 \oplus B', \quad \text{where } B' = \bigoplus_{\xi \in \Xi,\, \xi \ne 0} (B^{\otimes n})_\xi. \end{equation} Each summand in the decomposition~\eqref{eq:Bn-decomp} is preserved by the action of $S_n$. Thus, \[ S^n B \cong ((B^{\otimes n})_0)^{S_n} \oplus (B')^{S_n}. \] Now, \[ \ts (B')^{S_n} \subseteq \bigoplus_{\xi_1 + \dotsb + \xi_m \ne 0} (S^\ell B)_{\xi_1} \otimes \dotsb \otimes (S^\ell B)_{\xi_m}. \] Thus it follows from~\eqref{eq:Phi'-vanish} that $\varphi((B')^{S_n})=0$. So $\varphi$ vanishes on the ideal of $S^n B$ generated by $(B')^{S_n}$, which is precisely $(S^n B)_{(\Upsilon)}$. Therefore, $\varphi$ induces a surjective map of algebras \[ \bar \varphi\colon (S^n B)_\Upsilon \twoheadrightarrow S^m(B_0) = S^m(B^\Upsilon). \] Applying the functor $\Spec$ gives a morphism of schemes \[ \Spec \bar \varphi \colon \Spec S^m(B^\Upsilon) \to \Spec (S^n B)_\Upsilon \] which is a closed immersion. Now, \begin{gather*} \maxSpec S^m(B^\Upsilon) \cong (\maxSpec B^\Upsilon)^m/S_m \cong ((\maxSpec B)/\Upsilon)^m/S_m, \quad \text{and} \\ \maxSpec (S^n B)_\Upsilon \cong (\maxSpec S^n B)^\Upsilon \cong ((\maxSpec B)^n/S_n)^\Upsilon. \end{gather*} The map $\Spec \bar \varphi$ induces a bijection between these two sets. Namely, it maps the element of $\maxSpec S^m(B^\Upsilon)$ corresponding to an (unordered) $m$-tuple of $\Upsilon$-orbits on $\maxSpec B$ to the union (counting multiplicity) of these orbits, which is an $\Upsilon$-invariant $n$-tuple of $\maxSpec B$. In particular, $\Spec \bar \varphi$ is surjective on maximal ideals. Thus $\ker \bar \varphi$ is included in the intersection of all the maximal ideals of $(S^n B)_\Upsilon$. Now, since $B$ is finitely generated, so is $B^{\otimes n}$, hence so is the fixed point algebra $S^n B = (B^{\otimes n})^{S_n}$, and thus so is the quotient $(S^n B)_\Upsilon$. Therefore, the intersection of all the maximal ideals of $(S^n B)_\Upsilon$ is equal to the nilradical of $(S^n B)_\Upsilon$, which is zero by our assumption that $(S^n B)_\Upsilon$ is reduced. Thus, $\bar \varphi$ is injective and hence an isomorphism. \qedhere \end{asparaenum} \end{proof} Define \begin{equation} \ts \bbA^\lambda \defeq \bigotimes_{i \in I} S^{r_i |\Gamma_i|} A. \end{equation} The diagonal action of $\Gamma$ on $A^{\otimes r_i|\Gamma_i|}$ induces an action on $S^{r_i |\Gamma_i|}A$ for each $i \in I$. Then $\Gamma$ acts on $\bbA^\lambda$ via \[ \ts \gamma \left( \bigotimes_{i \in I} a_i \right) = \bigotimes_{i \in I} \gamma a_{\gamma^{-1}i},\quad \gamma \in \Gamma,\ a_i \in S^{r_i|\Gamma_i|}A,\ i \in I. 
\] \begin{theo} \label{theo:AlamGam-coinvariants} If $(S^{r_i|\Gamma_i|}A)_{\Gamma_i}$ is reduced for all $i \in I$, then \[ \bbA^\lambda_\Gamma \cong (\bbA^\lambda)_\Gamma. \] \end{theo} \begin{proof} For $i \in I$, let $B_i = S^{r_i |\Gamma_i|}A$. Since the action of $\Gamma$ preserves each factor $\bigotimes_{i \in \bi} B_i$ in $\bbA^\lambda = \bigotimes_{\bi \in I_\Gamma} \bigotimes_{i \in \bi} B_i$, it suffices to prove the theorem for the case where $\lambda = r_\bi \omega_\bi$ for some $\bi \in I_\Gamma$. Let $j \in J$ be the point in the $\Gamma$-orbit $\bi$ that we chose in our definition of $\bbA^\lambda_\Gamma$. Since $\Gamma$ is commutative, we have $\Gamma_i = \Gamma_j$ for all $i \in \bi$. So we have $\bbA^\lambda_\Gamma = S^{r_j}(A^{\Gamma_j})$ and $\bbA^\lambda = \bigotimes_{i \in \bi} B_i$. By Lemma~\ref{lem:coinvariant-reduction}, we have \[ \bbA^\lambda_\Gamma = S^{r_j}(A^{\Gamma_j}) \cong (B_j)_{\Gamma_j}. \] Consider the composition \begin{equation} \label{eq:double-surjection} \ts \bbA^\lambda = \bigotimes_{i \in \bi} B_i \stackrel{\varpi'}{\twoheadrightarrow} \left( \bigotimes_{i \in \bi} B_i \right)_{\Gamma_j} \cong \bigotimes_{i \in \bi} (B_i)_{\Gamma_j} \stackrel{\varpi}{\twoheadrightarrow} (B_j)_{\Gamma_j}, \end{equation} where $\varpi'$ is the natural projection and the third map is the map $\varpi$ of the proof of Lemma~\ref{lem:transitive-group-action-on-tensor-prod}, with $\Upsilon \defeq \Gamma/\Gamma_j$ and $Z \defeq \bi$. Since $\varpi$ and $\varpi'$ are both surjective, it suffices to show that the kernel of the above composition is $(\bbA^\lambda)_{(\Gamma)}$. We know from the proof of Lemma~\ref{lem:transitive-group-action-on-tensor-prod} that the kernel of $\varpi$ is $\left( \bigotimes_{i \in \bi} (B_i)_{\Gamma_j} \right)_{(\Gamma/\Gamma_j)}$, which is isomorphic to $\left( \left( \bigotimes_{i \in \bi} B_i \right)_{\Gamma_j} \right)_{(\Gamma/\Gamma_j)}$ under the isomorphism in~\eqref{eq:double-surjection}. Thus it suffices to show that \begin{equation} \label{eq:varpi'-equality} \ts \left( \bigotimes_{i \in \bi} B_i \right)_{(\Gamma)} = (\varpi')^{-1} \left( \left( \left( \bigotimes_{i \in \bi} B_i \right)_{\Gamma_j} \right)_{(\Gamma/\Gamma_j)} \right). \end{equation} But this follows from the fact that, for $b \in \bigotimes_{i \in \bi} B_i$ and $\gamma \in \Gamma$, we have $\varpi'(b-\gamma b) = \varpi'(b) - \bar \gamma \varpi'(b)$, where $\bar \gamma$ denotes the image of $\gamma$ in $\Gamma/\Gamma_j$. \end{proof} \begin{lem} \label{lem:symmetric-Laurent-polys} For $m \in \N$, we have \[ \kk[t_1^{\pm 1},\dotsc,t_m^{\pm 1}]^{S_m} = \kk[e_1,\dotsc,e_m,e_m^{-1}], \] where $e_\ell$ denotes the $\ell$-th elementary symmetric polynomial for $\ell=1,\dotsc,m$. Here the action of $S_m$ is by permutation of the variables $t_1,\dotsc,t_m$. \end{lem} \begin{proof} We clearly have $\kk[e_1,\dotsc,e_m,e_m^{-1}] \subseteq \kk[t_1^{\pm 1},\dotsc,t_m^{\pm 1}]^{S_m}$, so it remains to prove the reverse inclusion. Suppose \[ \ts y = \sum_{i_1,\dotsc,i_m} c_{i_1,\dotsc,i_m} t_1^{i_1} \dotsb t_m^{i_m} \in \kk[t_1^{\pm 1},\dotsc,t_m^{\pm 1}]^{S_m}. \] Since all but a finite number of the $c_{i_1,\dotsc,i_m}$ are zero, we can find a positive integer $N$ such that $i_\ell + N \ge 0$ for all $\ell=1,\dotsc,m$ whenever $c_{i_1,\dotsc,i_m} \ne 0$. Then \[ \ts y = (t_1 t_2 \dotsm t_m)^{-N} \sum_{i_1,\dotsc,i_m} c_{i_1,\dotsc,i_m} t_1^{i_1+N} \dotsb t_m^{i_m+N} \in e_m^{-N} \kk[t_1,\dots,t_m]^{S_m} \subseteq \kk[e_1,\dotsc,e_m,e_m^{-1}]. 
\qedhere \] \end{proof} \begin{cor} In the case of a twisted loop algebra, where $A = \kk[t,t^{-1}]$ and the generator $\sigma$ of $\Gamma$ acts on $X_\rat \cong \kk^\times$ by multiplication by a primitive $|\Gamma|$-th root of unity, we have $\bbA^\lambda_\Gamma \cong (\bbA^\lambda)_\Gamma$. \end{cor} \begin{proof} In this case, for $i \in I$, we have, by Lemma~\ref{lem:symmetric-Laurent-polys}, \[ S^{r_i |\Gamma_i|}A \cong \kk [ e_1,\dotsc,e_{r_i |\Gamma_i|},e_{r_i |\Gamma_i|}^{-1} ] \subseteq \kk[t_1^{\pm 1},\dotsc,t_{r_i |\Gamma_i|}^{\pm 1}] \cong A^{\otimes r_i |\Gamma_i|}. \] Viewing the above isomorphisms as identifications, it is easily seen that $(S^{r_i |\Gamma_i|}A)_{(\Gamma_i)}$ is the ideal of $S^{r_i |\Gamma_i|}A$ generated by the $e_\ell$ for $\ell$ not a multiple of $|\Gamma_i|$. Thus \[ (S^{r_i |\Gamma_i|}A)_{\Gamma_i} = \kk[e_{|\Gamma_i|},e_{2|\Gamma_i|},\dots,e_{r_i|\Gamma_i|},e_{r_i |\Gamma_i|}^{-1}] \] is reduced, and the result follows from Theorem~\ref{theo:AlamGam-coinvariants}. \end{proof}
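As a concrete check of the corollary, consider the smallest twisted case (a worked example under our own illustrative choices: $|\Gamma| = |\Gamma_i| = 2$, $r_i = 1$, and $\mathrm{char}\,\kk \ne 2$). The generator $\sigma$ acts on $A = \kk[t,t^{-1}]$ by $t \mapsto -t$, hence on $S^2 A = \kk[e_1, e_2, e_2^{-1}]$ by $\sigma(e_1) = -e_1$ and $\sigma(e_2) = e_2$. The ideal $(S^2 A)_{(\Gamma_i)}$ is then generated by $e_1 - \sigma(e_1) = 2 e_1$, so that \[ (S^2 A)_{\Gamma_i} = \kk[e_2, e_2^{-1}], \] which is visibly reduced. Moreover, the map $\Phi'$ from the proof of Lemma~\ref{lem:coinvariant-reduction} sends $e_2 = t \otimes t$ to $t\,\sigma(t) = -t^2$, identifying $(S^2 A)_{\Gamma_i}$ with $S^1(A^{\Gamma_i}) = \kk[t^2,t^{-2}]$, as predicted by that lemma with $m=1$.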
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} For the tailored growth of a structure on a substrate, careful consideration of energetic parameters, such as surface and interface energies, and kinetic parameters, such as diffusion barriers, is necessary. However, the parameters related to the dynamics of the deposited atoms, other than the deposition flux, have not been seriously considered. Recently, Dijken, Jorritsma, and Poelsema\cite{Dijken1} have observed by spot profile analysis low energy electron diffraction (SPA-LEED) that the islands formed on Cu(001) show rectangular symmetry, in contrast to the square symmetry of the substrate, when 0.5 monolayer (ML) of Cu is deposited at a deposition angle ($\theta$) of 80$^o$ from the surface normal. They suggest a model in which the interaction between the deposited atom and the substrate atoms steers the deposited atom and thus results in an inhomogeneous deposition flux, the so-called steering effect. Our previous study of thin film growth \cite{Seo1}, simulating the dynamics of the deposited atoms by molecular dynamics (MD), has confirmed this hypothetical model. These studies demonstrate that the dynamic parameters involved in the deposition process, which have been ignored in most previous studies, exert notable effects on thin film growth. Since the steering effect is unavoidable during the deposition process, it influences all thin film growth by atomic deposition to some degree. Even in the case of normal deposition, the steering effect is found to affect the thin film growth.\cite{Montalenti,Montalenti2,Yu} It has also been shown that the deposition dynamics is a cause of the unstable growth of thin films on a vicinal surface.\cite{Seo2} \par Dijken, Jorritsma and Poelsema \cite{Dijken2} have observed, in thicker film (40 ML) growth with $\theta =80^o$, that the surface is rougher than with $\theta = 0^o$. It has also been observed that the slope of the mound facing the deposition direction is much steeper than that of the side shadowed from the deposition.\cite{Dijken2} Although a qualitative model has been suggested relating the experimental result to the inhomogeneous deposition flux due to the steering and screening effects,\cite{Wormeester} there has not been any realistic growth study or simulation work confirming this speculation. In the present work, we perform kinetic Monte Carlo (KMC) simulations in conjunction with MD simulations, designed to probe the effects of the dynamic processes on thick film growth at the atomic level. The main results of our simulation are as follows: (1) the roughness increases with increasing deposition angle, (2) the mounds formed in thick film growth have rectangular symmetry, with sides elongated in the direction perpendicular to the deposition direction, and (3) the slopes of the illuminated and shadowed sides of the mound differ significantly, which is consistent with the experimental results.\cite{Dijken2} These three characteristic morphological features are caused mainly by the inhomogeneous deposition flux on the top terrace of the mound, which is due to the steering effects rather than the screening effects. In the present study, in addition, it is found that the side of the mound in each direction is composed of various local facets, instead of all being at one selected mound angle, even when slope selection is attained, and that the experimentally observed mound slope actually corresponds to the mean slope of the various local facets coexisting on each mound side.
Also found is that the dependence of the mound slope on the growth conditions is due to the variation of the relative populations of these facets. \par \section{Simulation Schemes} Kinetic Monte Carlo (KMC) simulation is utilized to study the thin film growth by deposition. In most of the previous KMC simulations, the deposition process is treated by randomly or uniformly positioning atoms at arbitrary adsorption sites. In the present study, when the deposition process is selected during the usual KMC routine, an MD routine is called to simulate the trajectory of a deposited atom by fully considering the interaction between the deposited atom and the substrate atoms.\par The details of the simulation are as follows. The substrate, Cu(001), lies in the xy-plane (at z=0) with the x-axis parallel to the [110] direction. In the deposition process, the deposition starts at a height of 11-28 $a_z$ above the substrate, and deposited atoms are incident at an angle in the direction of the x-axis. Here, $a_z$ is the interlayer spacing of Cu(001). The interaction between a deposited atom and the substrate atoms is calculated by summing pairwise Lennard-Jones potentials, $U(r)=4D[(\sigma/r)^{12}-(\sigma/r)^{6}]$. Here, $D=$ 0.4093 eV, $\sigma =$ 2.338 \AA,\cite{Sanders1} and $r$ is the distance between two atoms. The initial kinetic energy of the deposited atom is set to 0.15 eV, which corresponds to the melting temperature of copper. During each deposition process, all the substrate atoms are assumed to be frozen in their positions. The Verlet algorithm is used in the MD simulation.\par Between two sequential deposition processes, a KMC simulation is performed to simulate the diffusion processes of atoms on the substrate. In the KMC, only diffusion into empty lattice sites is allowed; exchange diffusion is not. Overhangs are also not allowed, during both the deposition and diffusion processes. The simulation system is composed of 400$\times$400 atomic lattice sites in an fcc (001) surface and a vacuum region on top of the substrate with a height of 28 atomic layers. \par \begin{table} \caption{Diffusion barriers and parameters adopted in our simulation.} \begin{ruledtabular} \begin{tabular}{cc} type of diffusion & diffusion barrier \\ \tableline single adatom hopping (E1) & 0.48 eV\\ step edge diffusion (E2) & 0.44 eV \\ dimer lateral bond breaking (E3) & 0.46 eV \\ re-establishing of a NN bond (E6) & 0.18 eV \\ ES barrier (ES) & 0.10 eV\\ ES barrier (kink site) & 0.05 eV\\ \tableline jump frequency ($\nu_{0}$) & $2.4 \times 10^{13} $ s$^{-1}$ \\ deposition rate ($F_{0}$) & 0.00416 ML/s \\ \end{tabular} \end{ruledtabular} \end{table} Values of the diffusion coefficients and diffusion barriers are adopted from those used by Furman and coworkers,\cite{Furman,Mehl} which reproduced the surface morphology of Cu islands on Cu(001) very well; the step Ehrlich-Schwoebel (ES) barrier is 0.10 eV and the kink ES barrier is 0.05 eV. In total, eleven kinds of diffusion barriers (including the ES barriers) are used in the KMC simulation, and some of the important diffusion barriers are listed in Table I. Note that the barrier (E2) for diffusion along a step edge is taken to be 0.44 eV in the present study, much larger than the generally accepted values of 0.2 $\sim$0.3 eV, in order to save computation time on the very frequent back-and-forth diffusion along steps.
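Before examining the sensitivity of our results to this barrier, we illustrate the deposition routine itself. The following is a minimal Python sketch of the MD step described above (our illustration only, not the actual simulation code: the slab geometry, the time step, the stopping criterion, and all function names are hypothetical choices, while the Lennard-Jones parameters and the initial kinetic energy are those quoted in the text):
\begin{verbatim}
import numpy as np

# Lennard-Jones parameters quoted in the text (eV, Angstrom)
D, SIGMA = 0.4093, 2.338
M_CU = 63.546 * 103.6427   # Cu mass in eV fs^2/A^2 (1 amu = 103.6427 eV fs^2/A^2)

def lj_force(pos, substrate):
    """Total LJ force on the deposited atom from all (frozen) substrate atoms."""
    d = pos - substrate                        # (N, 3) separation vectors
    r = np.linalg.norm(d, axis=1)              # (N,) distances
    # F = -dU/dr * r_hat, with U = 4D[(s/r)^12 - (s/r)^6]
    mag = 4.0 * D * (12.0 * SIGMA**12 / r**13 - 6.0 * SIGMA**6 / r**7)
    return np.sum(mag[:, None] * d / r[:, None], axis=0)

def deposit(substrate, theta_deg=80.0, e_kin=0.15, z0=20.0, dt=1.0,
            z_stop=2.0, max_steps=200000):
    """Velocity-Verlet trajectory of one deposited atom over a frozen slab;
    returns the lateral position (x, y) when the atom reaches height z_stop."""
    speed = np.sqrt(2.0 * e_kin / M_CU)        # A/fs
    th = np.radians(theta_deg)
    pos = np.array([0.0, 0.0, z0])
    vel = speed * np.array([np.sin(th), 0.0, -np.cos(th)])  # incident in xz-plane
    acc = lj_force(pos, substrate) / M_CU
    for _ in range(max_steps):                 # safety cap on the number of steps
        if pos[2] <= z_stop:
            break
        pos = pos + vel * dt + 0.5 * acc * dt**2
        new_acc = lj_force(pos, substrate) / M_CU
        vel = vel + 0.5 * (acc + new_acc) * dt
        acc = new_acc
    return pos[0], pos[1]

# Example: a small flat slab of substrate atoms (hypothetical geometry)
a_nn = 2.556                                   # Cu nearest-neighbor distance (A)
xy = np.mgrid[-20:21, -20:21].reshape(2, -1).T * a_nn
substrate = np.hstack([xy, np.zeros((len(xy), 1))])
print(deposit(substrate))
\end{verbatim}
In the full simulation, the lateral position returned by such a trajectory determines the adsorption site of the deposited atom, which is then handed back to the KMC routine; it is precisely the bending of this trajectory toward the substrate that produces the steering effect discussed below.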
We examine the dependence of the surface morphology on E2, and find that the morphology does not show any noticeable dependence on E2 down to 0.34 eV, the lowest tested value of E2. In addition, the simulation with this value of E2 reproduces the real growth mode fairly well. Hence, we anticipate that the high E2 value does not seriously limit the validity of the present simulation.\par The surface roughness is determined by the root-mean-square fluctuation of the surface height around the mean height. The mound radius is determined as the radius $r$ corresponding to the first zero of the height-height correlation function $G({\vec r})= \langle h({\vec r})h(0)\rangle - \langle h(0)\rangle^{2}$, and the mound radii along the x- and y-axis are calculated separately. All the results presented are obtained from the average of 20 simulations under identical conditions. Unless mentioned otherwise, distances in the plane are in units of a$_{nn}$, and those in the vertical direction are in units of a$_z$. Here, the nearest neighbor distance is $a_{nn}=a/\sqrt{2}$, and the interlayer distance is $a_{z}=a/2$, where $a$ is the lattice constant of Cu. \par \section{RESULTS} \subsection{Roughness} \begin{figure} \includegraphics[width=0.45\textwidth]{fig1} \caption{Surface roughness as a function of coverage at 250 K. The inset shows the roughness as a function of the deposition angle at 10 ML coverage. The solid line in the inset is the simulation result of random deposition without considering the steering or screening effect (NSS: No Steering or Screening effect). } \end{figure} In Fig. 1, the surface roughness is presented as a function of the coverage ($\Theta$) when Cu atoms are deposited at various deposition angles on Cu(001) at 250 K. The most notable feature is that, even at the same coverage, the surface becomes much rougher as the deposition angle increases. Also, as $\Theta$ increases, the difference in roughness between grazing angle deposition and normal deposition ($\theta = 0^o$) grows. At $\Theta =$ 10 ML, the roughness with $\theta = 80^o$ becomes twice as large as that with normal deposition (inset of Fig. 1). The presently observed dependence of the roughness on the deposition angle is expected to originate from the deposition dynamics rather than from the diffusion kinetics of the atoms on the substrate, on the following grounds: (1) such angle dependence of the roughness is observed irrespective of the substrate temperature ($T$) (Fig. 2), and (2) both steering and screening effects become more effective as $\theta$ increases, as revealed by the simulation (Fig. 1, inset). \par \begin{figure} \includegraphics[width=0.45\textwidth]{fig2} \caption{Surface roughness as a function of temperature at a coverage of 10 ML for the two deposition angles of 80$^o$ and 0$^o$ (normal incidence). The temperature dependence is more distinct for the 80$^o$ case than for the 0$^o$ case. } \end{figure} Fig. 2 shows the dependence of the roughness on $T$; the roughness tends to decrease when $T$ becomes too high or too low. The \emph{bell-shaped} curve is well explained by the $T$-dependence of the destabilizing current caused by the ES barrier.\cite{Amar} Thus, such a $T$-dependence is caused by the kinetics of the deposited atoms and is unrelated to the deposition dynamics. When the deposition is made at $\theta = 80^o$, such a bell-shaped $T$-dependence is also observed, but now in an amplified form.
This illustrates that the surface roughness is determined \emph{synergetically} by the diffusion kinetics and the deposition dynamics.\par \subsection{Mound radius} \begin{figure} \includegraphics[width=0.45\textwidth]{fig3} \caption{Mound radius as a function of coverage. Results for $\theta = 80^o$ are shown as solid and broken curves for the radii along the x- and y-axis, respectively. The result for normal deposition ($\theta = 0^o$) is also shown as a gray broken curve. The substrate temperature is 250 K for all the cases. Inset: the mound radius as a function of the deposition angle at a coverage of 10 ML. Open squares and closed circles signify the radii along the x- and y-axis, respectively. } \end{figure} The mound radius as a function of the coverage and the deposition angle is shown in Fig. 3. When atoms are deposited in the normal direction ($\theta = 0^o$), square mounds form with the same four-fold symmetry as that of the substrate. However, when the atoms are deposited at grazing angles, rectangular mounds form with evidently elongated sides along the y-axis. (Here, the x(y)-axis is parallel (perpendicular) to the deposition direction.) It is conspicuous, from Fig. 3 and its inset, that the difference between the mound radii along the x- and y-axis increases with the coverage and with the deposition angle. This prediction agrees well with the experimental results of Dijken {\it et al.}\cite{Dijken2} and Lu {\it et al.}\cite{Lu}, where such sideways growth of mounds with a high aspect ratio has been observed for deposition at grazing angle. As regards Fig. 3, the anisotropy in the shape of the mound arises mainly because the growth of the mound along the x-axis slows down once it reaches a certain length, $\simeq$13 $a_{nn}$. \par \begin{figure} \includegraphics[width=0.45\textwidth]{fig4} \caption{Mound radius as a function of temperature for a deposition angle of 80$^o$ and a coverage of 10 ML. Open and closed circles are for the radii along the x- and y-axis, respectively. Inset: the aspect ratio of the mound radii as a function of temperature.} \end{figure} Fig. 4 shows the $T$-dependence of the mound radii in the two directions when a 10 ML film is grown at $\theta = 80^o$. Both mound radii increase with $T$ due to the increased atomic diffusion length. In both the high and low $T$ regimes, the mound shape is very asymmetric. The inset of Fig. 4 shows the aspect ratio as a function of $T$ at $\Theta =$ 10 ML; the difference between the two mound radii can be as high as 40 \%. (Here, the aspect ratio is defined as the ratio of the mound radius along the y-axis to that along the x-axis.) The complicated dependence of the aspect ratio on the temperature suggests that the diffusion kinetics also plays a substantial role in the determination of the mound shape, in conjunction with the deposition dynamics. The asymmetric mound shape is, however, found over the whole $T$-range, which illustrates that the effect of the deposition dynamics on the mound shape is never wiped out by the diffusion kinetics over the examined $T$-range. \par \subsection{Mound slope} To characterize the slopes of the mound, we investigate the local slope at each step on each side of the mound, defined as the step height divided by the width of the adjacent lower terrace. Most steps on the sides of the mounds are of one-atomic-layer height. Thus, if the width of a lower terrace adjacent to a step is 0.5 a$_{nn}$, then the local slope is a$_z$/0.5 a$_{nn}$, which corresponds to the slope of a \{1,1,1\}-facet on the fcc (001) surface.
For steps with terrace widths of 1.5 and 2.5 $a_{nn}$, the corresponding local slopes are those of \{1,1,3\}- and \{1,1,5\}-facets, respectively. \begin{figure} \includegraphics[width=0.45\textwidth]{fig5} \caption{Distribution of the facets that compose the sides of the mounds formed with normal deposition (0$^o$) at 250 K. Inset: the average slope of the mounds as a function of the coverage. } \end{figure} In our simulation, we find that various kinds of steps coexist on each side of the mound. In Fig. 5, the distribution of the various steps with different local slopes is presented as a function of the coverage for thin films grown by deposition at $\theta = 0^o$ and $T =$ 250 K. As $\Theta$ increases, so does the mean slope (inset of Fig. 5). At $\Theta =$ 100 ML, the mean terrace width becomes 1.67 $a_{nn}$, which is close to that of the \{1,1,3\}-facet. However, at this coverage, the relative population of steps with the local slope of a \{1,1,j\}-facet (from now on referred to as \{1,1,j\}-steps) is 21 \% (j=1), 53 \% (j=3), 18 \% (j=5), 6 \% (j=7), and 2 \% (j=9). (The portion of steps with slopes less than that of the \{1,1,11\}-facet is negligible.) Therefore, even though the mean slope converges to that of the \{1,1,3\}-facet, the actual relative population of steps with this slope is only about 50 \%, and the rest of the mound is composed of steps with a relatively wide range of local slopes. \par \begin{figure} \includegraphics[width=0.45\textwidth]{fig6} \caption{Distribution of the facets that compose the sides of the mounds formed after depositing 10 ML at 250 K at various deposition angles. (a) P-side: the two sides of the mound facing the direction perpendicular to the deposition direction, (b) IL-side: the side facing the deposition direction, and (c) SH-side: the shadowed (or back) side of the mound with respect to the deposition direction.} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{fig7a} \includegraphics[width=0.45\textwidth]{fig7b} \caption{Relative population of the various facets and average facet slope at a coverage of 10 ML as a function of temperature for deposition angles of (a) 0$^o$ and (b-d) 80$^o$. (e) Mean slopes of the three sides of the mound as a function of the deposition temperature for a coverage of 10 ML. } \end{figure} Now, we examine the dependence of the distribution of slopes, or steps, on the deposition angle for the three inequivalent sides of the mounds: the illuminated (IL), shadowed (SH), and perpendicular (P) sides. (See the caption of Fig. 6 for the description of the sides.) Fig. 6 shows the distribution of local steps on each side after depositing 10 ML at 250 K at various deposition angles. From the figure, we find that the distributions of the steps are similar for the three sides of the mounds for $\theta < 50^o$. As the deposition angle further increases, the distributions start to change, and they become quite distinct from each other at $\theta = 80^o$. For $\Theta =$ 10 ML and $\theta = 80^o$, 54 \% of the IL-sides are composed of \{1,1,1\}-steps. On the other hand, only 30 \% of the SH-side is composed of \{1,1,1\}-steps, and thus its mean slope is much less steep than that of the IL-side. In the SPA-LEED experiment on Cu/Cu(001), Dijken {\it et al.} \cite{Dijken2} reported that the IL-side is made of \{1,1,1\}-steps and the SH- and P-sides of \{1,1,3\}-steps. These experimental results nearly match the average behaviors observed in our simulation.
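This facet bookkeeping is straightforward to reproduce. The short Python sketch below (our illustration; the function name and the sample data are hypothetical, while the width-to-facet correspondence is the one quoted above, i.e. a lower-terrace width $w$ in units of $a_{nn}$ maps to a \{1,1,j\}-step with $j = 2w$) histograms a list of measured terrace widths:
\begin{verbatim}
import numpy as np
from collections import Counter

def classify_facets(terrace_widths):
    """Map each lower-terrace width w (in units of a_nn) to its {1,1,j} label,
    using the correspondence quoted in the text: w = 0.5, 1.5, 2.5 a_nn for
    the {111}, {113}, {115} local slopes respectively, i.e. j = 2*w."""
    return Counter(int(round(2.0 * w)) for w in terrace_widths)

# Example: a width sample mimicking the populations quoted at 100 ML coverage
widths = [0.5] * 21 + [1.5] * 53 + [2.5] * 18 + [3.5] * 6 + [4.5] * 2
print(classify_facets(widths))  # Counter({3: 53, 1: 21, 5: 18, 7: 6, 9: 2})
print(np.mean(widths))          # mean terrace width 1.65 a_nn, close to {113}
\end{verbatim}
Note that the mean width of such a sample is close to that of the \{1,1,3\}-facet even though only about half of the steps are actually \{1,1,3\}-steps, which is precisely the distinction between slope selection and facet selection emphasized in this paper.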
\par We also investigate the distribution of the steps as a function of the substrate temperature (Fig. 7). For the case of grazing angle deposition, the step distributions (Fig. 7(b)-(d)) and the mean slopes (Fig. 7(e)) on the three sides become similar around 220 K. Further, they vary little as the substrate temperature is lowered below 220 K. These observations suggest that the step distribution for $T < 220$ K represents the limiting behavior of the slope formation, i.e., the steepest slope reachable by deposition, during which some less steep steps, such as \{113\}-steps, occur in addition to the steepest \{111\}-steps due to statistical fluctuations.\par As $T$ increases, the portion of the less steep steps increases and that of the steeper steps, such as \{111\}-steps, decreases. That is, the mound becomes smoother as $T$ increases. The way of smoothing, however, differs depending on the mound side: as $T$ increases, the P-side (Fig. 7(b)) and the SH-side (Fig. 7(d)) rapidly become smooth, while the change in the step distribution on the IL-side occurs on a smaller scale (Fig. 7(c)). This can also be seen clearly in the $T$-dependence of the mean slopes in Fig. 7(e). \par \section{DISCUSSION} \subsection{Effects of the deposition dynamics on the morphology of the film} In the previous section, we observed that the roughness (inset of Fig. 1, and Fig. 2), the mound shape (Figs. 3 and 4) and the mound slope (Figs. 5-7) depend on non-kinetic variables such as the deposition angle. Such angular dependence can be attributed to the deposition dynamics, which includes (1) the steering of the trajectory of the deposited atom due to its interaction with the substrate atoms and (2) the screening of the deposited atoms due to the geometrical structure already formed on the substrate. Both effects cause an inhomogeneous deposition flux, depending on the deposition conditions, and make the growth of thin films sensitive to those conditions. \par \begin{figure} \includegraphics[width=0.45\textwidth]{fig8} \caption{Deposition flux calculated by MD simulation. Atoms are deposited at a grazing deposition angle of 80$^o$ on an 8-layer high mound surrounded by (1,1,5)-facets. (a) The local deposition flux in gray scale, where a brighter color indicates a higher local flux. (b) The local deposition flux along the line crossing the center of the mound along the x-axis. The ordinate is the deposition flux in percent relative to the average deposition flux over the total area. The deposition flux at normal (0$^o$) deposition is shown with the + symbol as a reference, and the mound is depicted by gray circles at the bottom. } \end{figure} To study the effects of the inhomogeneous deposition flux on the growth of the thin film, we investigate by MD simulation the flux on an 8-layer high mound surrounded by \{1,1,5\}-facets. Fig. 8(a) shows the deposition flux over the mound in gray scale for $\theta = 80^o$. Fig. 8(b) shows the deposition flux along a line passing through the center of the mound along the x-axis, which exhibits a strong asymmetry, i.e., the deposition flux on the IL-side is 2 to 4 times larger than the average deposition flux, while that on the SH-side is only about 10 to 50 \% of the average.
Simulations with mounds surrounded by different facets show the same trend, i.e., an enhanced deposition flux on the IL-side and a reduced deposition flux on the SH-side.\par Such an inhomogeneous deposition flux gives rise to different growth speeds on each side, and thus the film is rougher than a film grown under normal deposition conditions (Figs. 1 and 2). In particular, the enhanced deposition flux near the front edge of the top terrace increases the destabilizing current toward the ascending step edge caused by the ES barrier, and critically roughens the surface.\cite{Amar} (See the following section for a detailed description.) \par The asymmetric shape of the mounds formed during grazing angle deposition (Figs. 3 and 4) is attributed to the decrease of the overall deposition flux along the x-direction (i.e., over both the IL- and SH-sides), and the simultaneous increase of the effective deposition flux over the P-sides (Fig. 8), for the following two reasons. The first is the net mass transfer from the region near the rear edge of the mound to the top of the mound due to the steering effects, which reduces the deposition flux along the x-axis.\cite{Dijken1,Seo1} Some of the mass transferred to the top of the mound is in the long run redistributed equally to the four sides.\cite{Seo1} This effectively increases the deposition flux along the y-axis, and {\it vice versa}. The second is the increased deposition flux over the P-sides, due to the deposited atoms moving along the edges of the P-sides being attracted toward those sides.\cite{Seo1} Such an imbalance of the effective deposition flux results in a faster growth speed in the y-direction than in the x-direction, giving rise to asymmetric mounds elongated along the y-axis. \par The asymmetric slopes of the mounds observed for $\theta > 50^o$ (Fig. 6) can also be explained by the inhomogeneous deposition flux, on the following two grounds. Firstly, the higher deposition flux on the IL-side offers a growth environment effectively equivalent to a lower growth temperature than that on the SH-side. Thus, the terrace size on the IL-side is narrower than that on the SH-side or, equivalently, the slope of the IL-side is steeper than that of the SH-side, as observed in both the previous experiment\cite{Dijken2} and the present simulation. \par In addition, the flux distribution on the top terrace strengthens the asymmetric slope formation: a notable feature of the deposition flux in Fig. 8(b) is that the flux near the edge toward the IL-side is much higher than that near the opposite edge, which should result in an increased density of islands close to the IL-side edge of the top terrace. The sequential formation of islands on the top terrace, preferentially close to this edge, gives the mound steps with a narrower terrace width, i.e., a steeper slope, on the IL-side than on the SH-side. \par In Fig. 7, we observed the dependence of the smoothing kinetics on the side of the mound: as $T$ increases, relatively rapid smoothing occurs on both the P- and SH-sides, while it is retarded on the IL-side. This is because the deposition flux is larger on the IL-side than on the other sides (Fig. 8): the mean capture length of the deposited atoms forming islands on the IL-side is shorter than on the other sides. This means that the effective temperature felt by the deposited atoms on the IL-side is lower than that on the other sides. As a result, as $T$ increases, the smoothing proceeds relatively slowly on the IL-side.
\par In summary, both (1) the inhomogeneous deposition flux over the sides of the mound due to the steering and screening effects and (2) the enhanced deposition flux near the front edge of the top terrace of the mound due to the steering effect cooperatively give rise to the formation of asymmetric mounds with different lengths and slopes on each side, and also accelerate the roughening of the surface during deposition at grazing angle.\par \subsection{The steering effects vs. the screening effects} \begin{figure} \includegraphics[width=0.45\textwidth]{fig9ab} \includegraphics[width=0.45\textwidth]{fig9cde} \caption{Simulation results with the screening effects alone, for $\theta = 80^o$ and $\Theta = 10$ ML. As a reference, the simulation results considering both screening and steering effects are also presented. (a) Roughness as a function of temperature: open circles for deposition considering the screening effects alone, + for random deposition (NSS), and closed circles for deposition considering both screening and steering effects. (b) Mean slopes for the SH-side (circles) and the IL-side (squares). Open (closed) symbols are for deposition including the screening effects alone (both screening and steering effects). (c-e) Mound radius as a function of coverage. Gray (black) curves correspond to deposition considering the screening effects alone (both screening and steering effects). Solid and broken curves are the radii along the x- and y-axis, respectively. } \end{figure} As the film grows thicker, in addition to the steering effects, the geometric screening effects play an important role. To study the contribution of each individual dynamic effect to thin film growth, we carry out another set of growth simulations which takes only the geometric screening effects into account. In these simulations, the trajectory of the deposited atom is a straight line determined by the initial position and velocity of the atom.\par Fig. 9(a) shows the roughness as a function of the temperature for three different cases: (1) the case assuming no steering and screening effects (NSS), i.e., random deposition, (2) the case taking only the geometric screening effects into account, and (3) the usual simulation taking into account all the dynamic effects, i.e., both steering and screening. At low temperature (e.g., 190 K), the screening effects contribute to the roughening almost as much as the steering effects. Here, the contribution of the screening effects to the roughening is estimated as the difference between the roughness obtained with the screening effects included and that of NSS. The contribution of the steering effects is estimated similarly, as the difference between the roughness with both effects considered and that with only the screening effects. The roughening due to the screening effects, however, decreases gradually as the temperature increases and becomes negligible at temperatures higher than 260 K. On the other hand, the roughness due to the steering effect keeps increasing up to 240 K and starts to decrease gradually at higher temperatures. This suggests that the inhomogeneous deposition flux due to the screening effect alone relaxes through the diffusion kinetics at a lower temperature than that due to the steering effects.\par \begin{figure} \includegraphics[width=0.45\textwidth]{fig10} \caption{Deposition flux calculated by considering only the screening effects. Atoms are deposited at a grazing deposition angle of 80$^o$ on an 8-layer high mound surrounded by (1,1,5)-facets.
(a) The local deposition flux in gray scale, where a brighter color indicates a higher local flux. (b) The local deposition flux along the line crossing the center of the mound along the x-axis; the ordinate is the deposition flux (\%) relative to the average deposition flux over the whole surface. The mound is shown with gray circles in the upper region.} \end{figure} The origin of the difference in the roughness and in the relaxation temperature between these two cases lies in their different flux distributions. Fig. 10 shows the flux distribution for the same mound as in Fig. 8, but with only the screening effects taken into account. The flux near the SH-side is totally depleted, and about the same amount is added to the IL-side. The most notable difference between the two cases is found in the flux on the top terrace. The flux distribution taking both dynamic effects into account (Fig. 8) shows a pronounced enhancement of the flux near the front edge of the top terrace, while no such enhancement is found in Fig. 10. The steering effect causes a vertical mass redistribution, displacing to the top terrace the flux that would land on the SH-side for random deposition, thus accelerating the roughening of the surface. In contrast, the screening effect merely redistributes the depleted flux from the SH-side to the IL-side within the same plane. Thus, for the relaxation of the screening effects, only in-plane diffusion, via terrace diffusion or diffusion along the edges of the mound, needs to be activated. For the relaxation of the steering effects, however, interlayer diffusion across the ES barrier, which requires a much higher activation energy than in-plane diffusion, must also take place. In fact, the \emph{bell-shaped} dependence of the roughness on the temperature in Fig. 9(a), shown for the case when the steering effect is also considered, is reminiscent of the growth mode where the limiting process is the diffusion over the ES barrier.\cite{Amar} Hence, the relaxation of the steering effects takes place at a higher temperature than the relaxation of the screening effects. \par In Fig. 9(b), the mean slopes of the IL- and SH-sides are shown as a function of the growth temperature for the two aforementioned cases. The mean slope of the IL-side is steeper than that of the SH-side over the whole temperature range in both cases. If only the screening effect is considered, the difference in their slopes dwindles as $T$ increases, and finally becomes almost negligible around 270 K. At 250 K, the slopes of both sides are slightly lower than that of the \{113\} facet. In the previous experiment depositing 40 ML of Cu on Cu(001) at 250 K,\cite{Dijken2} the slopes of the IL- and SH-sides are found to be those of \{111\} and \{113\}, respectively, in contradiction with the results of the aforementioned simulation considering the screening effects alone.\par The inclusion of the steering effect makes the slopes steeper, especially on the IL-side, and the difference in the slopes becomes evident, as observed in the experiment (Fig. 9(b)). Further, the mean slope of the IL-side lies in between \{111\} and \{113\}, and that of the SH-side corresponds to \{113\}. These slopes now approach the ones observed in the previous experiment.\cite{Dijken2} \par Regarding the mound radius, the screening effects alone make the mounds almost symmetric in shape (Figs.
9(c)-(e)), which strikingly differs from what has been observed in the experiments.\cite{Dijken2,Lu} This is understood by the fact that the flux blocked by the screening effect on the SH-side results in an increase of the flux on the IL-side (Fig. 10): it looks as if the blocked flux is simply displaced to the IL-side. Therefore, the reduced lateral growth speed on the SH-side is almost compensated by the increased growth speed on the IL-side. Hence, the overall shape of the mound remains square symmetric,\cite{square} if only the screening effects are taken into account in the growth. Asymmetric mounds with the longer side along the y-axis (Fig. 9) form, as observed in the experiment,\cite{Dijken2} only when the steering effect is added. \par In short, in thin film growth at grazing deposition angle, the asymmetric mound shape, the surface roughness, and the slope difference between the IL- and SH-sides of the mounds result mainly from the steering effect rather than from the screening effect (Fig. 9). Remembering that the main difference of the steering effects from the screening effects is the higher deposition flux near the front edge of the top terrace, the observed characteristics of the films grown at grazing deposition angle are largely determined by the growth characteristics of the top layer, i.e., of the growth front. \par \section{SUMMARY AND CONCLUSION} We perform KMC simulations to study thin film growth by deposition at grazing angle. We observe (1) a notable increase of the surface roughness and (2) an asymmetry in both the mound shape and the mound slopes, as compared with thin films grown by normal deposition. These results are in good agreement with the previous experimental observations. We find that the aforementioned structural features of the films grown by grazing angle deposition are mainly attributable to the steering effect rather than to the screening effect. In particular, the inhomogeneous deposition flux on the top terrace induced by the steering effects is the most influential factor. \par We also make the additional interesting observation that a mound side is not composed of one kind of facet, even when slope selection is attained. Instead, a variety of local facets coexist, and the selected mound slope observed in experiment represents only their mean slope. Therefore, slope selection does not imply facet selection. \par \begin{acknowledgements} \end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} There has been a lot of progress recently in realizing Majorana bound states (MBS) in various low-dimensional systems, both theoretically\cite{Kitaev2001,Fu2008,Fu2009,Sato2009,Lutchyn2010,Oreg2010,Potter2010,Qi2011,Martin2012,Tewari2012,Klinovaja2012,Klinovaja2013,NadjPerge2013,Sau2013,Pientka2013,Mizushima2013,Wang2014,Poyhonen2014,Seroussi2014,Wakatsuki2014,SanJose2014,Mohanta2014,BenShach2015} and experimentally\cite{Deng2012,Mourik2012,Das2012,Lee2014,NadjPerge2014}. The key ingredients are a strong Rashba spin-orbit coupling and Zeeman magnetic fields. However, the inevitable orbital effects of the magnetic field drastically modify the topological phase diagram, eventually destroying the MBS \cite{Lim2012,Lim2013,Osca2015,Nijholt2016}. One possible way to avoid orbital effects is to use in-plane magnetic fields \cite{Kjaergaard2012,Loder2015,Albrecht2016,Li2016}. In Refs.~[\onlinecite{Sticlet2012,Sedlmayr2015a,Sedlmayr2015b,Sedlmayr2016}] various low-dimensional systems, such as infinite ribbons and finite-size strips, have been studied. These systems respect particle-hole symmetry (PHS), but violate time-reversal symmetry (TRS) due to the presence of a magnetic field. Therefore, the corresponding topological phase diagrams have been obtained by computing the $\mathbb{Z}_2$ topological invariant (in accordance with the well-known tenfold topological classification \cite{Altland1997}), as well as by numerical methods. However, only the case of magnetic fields perpendicular to the plane of the system has been considered. In this paper we consider the formation of MBS in infinite ribbons and finite-size strips, while focusing on in-plane magnetic fields. We thus show that infinite ribbons can host one or two pairs of chiral Majorana modes\cite{Sato2010,Qi2010,Wu2012,Daido2017,He2017} in the presence of an in-plane magnetic field perpendicular to the edges, and we calculate the corresponding topological phase diagram. For finite-size strips we first use a numerical diagonalization of the tight-binding Hamiltonian \cite{matq}, which allows us to evaluate the Majorana polarization (MP)\cite{Sticlet2012,Sedlmayr2015b,Sedlmayr2016}. We show that one or multiple MBS pairs can form along the short edges of the system for an in-plane magnetic field parallel to the longer dimension of the finite-size strips, and we calculate the topological phase diagrams of these systems. Second, we use the singular points (SP) technique introduced in Refs.~[\onlinecite{Mandal2015,Mandal2016a,Mandal2016b,Aguado2016}], based on the momentum values at which the determinant of the Hamiltonian vanishes, and we show that it yields results consistent with the numerical ones. We study the stability of the resulting topological states with respect to disorder\cite{Sedlmayr2015a}, and we confirm that the states with an even number of Majorana fermions are not protected, whereas those with an odd number are. We also perform a calculation of the $\mathbb{Z}_2$ invariant, which should give one access to the parity of the number of Majorana modes. Indeed, we find that this calculation correctly predicts a topologically non-trivial character for the phase-space regions shown numerically to have an odd number of MBS pairs and to survive the effects of disorder. The paper is organized as follows: in Sec.~\ref{Model} we introduce the general model and give a concise description of the tight-binding and singular points techniques, as well as of the Majorana polarization definition. In Sec.
III we present the results for 1D wires and infinite ribbons. In Sec.~IV we present the phase diagrams for finite-size strips of different widths, using the numerical diagonalization and singular points methods. In Sec.~V we present the effects of disorder and we compare the disordered results with those obtained using a topological-invariant calculation. In Sec.~VI we consider finite-size square systems. Finally, we conclude in Sec.~\ref{Conclusions}, leaving the technical details of the topological-invariant calculations for the Appendix. \section{Model and methods} \label{Model} We introduce a model that can describe different 1D and 2D systems with an intrinsic (or proximity-induced) s-wave superconducting pairing $\Delta$, longitudinal and transverse Rashba spin-orbit couplings $\lambda_{x,y}$, and a Zeeman magnetic field $\bs{B} = (B_x, B_y, B_z)$. We write the Hamiltonian in the Nambu basis $\Psi_{\bs{r}} = \left\{ c^\dag_{\bs{r} \uparrow},\, c^\dag_{\bs{r} \downarrow},\, c_{\bs{r} \downarrow},\, -c_{\bs{r} \uparrow} \right\} $, where $c_{\bs{r} \sigma}$ ($c^{\dag}_{\bs{r} \sigma}$) annihilates (creates) a particle of spin $\sigma$ at site $ \bs{r} = (i,j)$ in a square lattice: \begin{eqnarray} H = &\sum\limits_{\bs{r}}\left[ \Psi^\dag_{\bs{r}} \left(-\mu \tau_z + \Delta \tau_x + \bs{B \cdot \sigma}\right) \Psi_{\bs{r}} \right. + \\ \nonumber &+ \Psi^\dag_{\bs{r}} \left(-t_x -i \lambda_x \sigma_y \right) \tau_z \Psi_{\bs{r} + \bs{x}} + \mathrm{H.c.} + \\ \nonumber &+ \left. \Psi^\dag_{\bs{r}} \left(-t_y +i \lambda_y \sigma_x \right) \tau_z \Psi_{\bs{r}+\bs{y}}+ \mathrm{H.c.} \right], \label{TBHamil} \end{eqnarray} where $t_x$ and $t_y$ are the hopping amplitudes, $\mu$ denotes the chemical potential, $\bs{x}$, $\bs{y}$ are the unit vectors along the $x$ and $y$ directions, respectively, and the lattice spacing is set to unity. \subsection{Numerical tight-binding techniques and the Majorana polarization} The eigenstates of the tight-binding Hamiltonian described above can be obtained by numerical diagonalization (here performed using the MatQ code\cite{matq}). In the Nambu basis, an eigenstate $j$ of the tight-binding Hamiltonian can be written as $\psi^{j\T}_{\bs{r}} = \left\{ u^j_{\bs{r} \uparrow},\, u^j_{\bs{r} \downarrow},\, v^j_{\bs{r} \downarrow},\, -v^j_{\bs{r} \uparrow} \right\}$, where $u$ and $v$ denote the electron and hole components, respectively. The vector of the local Majorana polarization \cite{Sticlet2012,Sedlmayr2015b,Sedlmayr2016} on each site $\bs{r} = (x,y)$, for the eigenstate $j$, is given by: \begin{equation} P^j({\bs{r}}) \equiv \begin{pmatrix} P^j_x({\bs{r}}) \\ P^j_y({\bs{r}}) \end{pmatrix} \equiv \begin{pmatrix} -2 \re \left[u^j_{\bs{r} \uparrow} v^j_{\bs{r} \uparrow} + u^j_{\bs{r} \downarrow} v^j_{\bs{r} \downarrow} \right] \\ -2 \im \left[u^j_{\bs{r} \uparrow} v^j_{\bs{r} \uparrow} + u^j_{\bs{r} \downarrow} v^j_{\bs{r} \downarrow} \right] \end{pmatrix} \label{MPvector} \end{equation} This quantity allows one to discriminate locally pure electron (hole) states from Majorana-like states. It is easy to see that for pure electron states ($v_{\bs{r} \uparrow}=v_{\bs{r} \downarrow}=0$) and pure hole states ($u_{\bs{r} \uparrow}=u_{\bs{r} \downarrow}=0$) the local Majorana polarization is equal to zero.
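To make these definitions concrete, the following minimal Python sketch builds the tight-binding BdG Hamiltonian of Eq.~(\ref{TBHamil}) on a small lattice and evaluates the local MP of Eq.~(\ref{MPvector}) for the eigenstate closest to zero energy (this is our own illustration, not the MatQ implementation; the system size and the parameter point, chosen inside the topological region of the 1D wire discussed in the next section, are hypothetical choices):
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def bdg_hamiltonian(Nx, Ny, mu, B, Delta=0.2, lam=0.5, t=1.0):
    """BdG Hamiltonian of Eq. (1): 4 Nambu components per site, with the
    4x4 blocks written as kron(tau, sigma); B = (Bx, By, Bz)."""
    onsite = (-mu * np.kron(sz, s0) + Delta * np.kron(sx, s0)
              + np.kron(s0, B[0] * sx + B[1] * sy + B[2] * sz))
    hop_x = np.kron(sz, -t * s0 - 1j * lam * sy)  # (-t_x - i l_x sigma_y) tau_z
    hop_y = np.kron(sz, -t * s0 + 1j * lam * sx)  # (-t_y + i l_y sigma_x) tau_z
    idx = lambda x, y: 4 * (x * Ny + y)
    H = np.zeros((4 * Nx * Ny, 4 * Nx * Ny), dtype=complex)
    for x in range(Nx):
        for y in range(Ny):
            i = idx(x, y)
            H[i:i+4, i:i+4] = onsite
            if x + 1 < Nx:
                j = idx(x + 1, y)
                H[i:i+4, j:j+4] = hop_x
                H[j:j+4, i:i+4] = hop_x.conj().T
            if y + 1 < Ny:
                j = idx(x, y + 1)
                H[i:i+4, j:j+4] = hop_y
                H[j:j+4, i:i+4] = hop_y.conj().T
    return H

def local_mp(psi, Nx, Ny):
    """Local MP of Eq. (2), returned as the complex combination P_x + i P_y."""
    c = psi.reshape(Nx, Ny, 4)      # components (u_up, u_dn, v_dn, -v_up)
    u_up, u_dn, v_dn, mv_up = c[..., 0], c[..., 1], c[..., 2], c[..., 3]
    return -2.0 * (u_up * (-mv_up) + u_dn * v_dn)

# Example: a single wire (Ny = 1) with B = (Bx, 0, 0) chosen such that
# Bx > sqrt((mu - 2 t)^2 + Delta^2), i.e. inside the topological phase
Nx, Ny = 80, 1
E, V = np.linalg.eigh(bdg_hamiltonian(Nx, Ny, mu=2.0, B=(0.5, 0.0, 0.0)))
k = np.argmin(np.abs(E))            # the eigenstate closest to zero energy
P = local_mp(V[:, k], Nx, Ny)
print(E[k], np.abs(P[:3, 0]), np.abs(P[Nx // 2, 0]))
\end{verbatim}
For a parameter point hosting MBS, the modulus of the local MP is peaked at the two ends of the wire and essentially vanishes in the bulk.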
For our purposes it is more practical to use the integral of the MP vector over a spatial region $\mathcal{R}$, defined as: \begin{equation} C_j = \left| \sum\limits_{\bs{r} \in \mathcal{R}} \left[ P^j_x({\bs{r}}) + i P^j_y({\bs{r}})\right] \right|^2 \label{MPint} \end{equation} Note that in Eqs.~(\ref{MPvector}) and (\ref{MPint}) we assume that the wave function is normalized. To obtain the topological phase diagram we first find the lowest-energy states of the given system. If these states have energies close to zero, they may be MBS. We divide our system into two halves (along the shorter length), and we compute the integral of the MP vector in each of these halves, defined by $\bs{r} \in \mathcal{R}$, for each `zero'-energy state. The states that have $C=1$ are MBS, and those with $C=0$ are regular electron or hole states. Note that we may have either a single pair of states with $C=1$, or multiple degenerate zero-energy MBS states with $C=1$. In the results that we present here we sum the MP over all the lowest-energy states. Note however that configurations with an even number of MBS pairs are not topologically protected, and thus any small disorder introduced into the system destroys such Majorana states. \subsection{Singular points of the Hamiltonian} To find an MBS in a system described by a PHS Hamiltonian $H(k)$, we seek localized zero-energy solutions of the Schr\"odinger equation $H(k) \Phi = 0$. In the most general case these solutions are of the form $e^{i k r}$, which can be rewritten as $e^{i k_\parallel r_\parallel} \cdot e^{-z r_\perp}$, where $k_\parallel$ denotes the ``good'' quantum number and $z$ is defined below. We analytically continue $k$ to the complex plane and we consider the solutions of the following equation: \begin{equation} \det H(k) = 0, \label{EPequation} \end{equation} defining the so-called singular points $k_i \equiv k_\parallel + i z$ in the complex plane, at which the determinant of the Hamiltonian vanishes. By definition, $z$ is given by the imaginary part of the singular point $k_i$. The practical use of these complex momentum values is the following: by continuously changing the parameters of our Hamiltonian, we continuously change the corresponding $k_i$'s. If we are in a topological phase, $z$ must be positive (in other words, the solution is localized). As soon as $z$ crosses zero and becomes negative, the solution becomes delocalized, and therefore we enter a non-topological phase. Further details can be found in Refs.~[\onlinecite{Mandal2015,Mandal2016a,Mandal2016b}]. We propose the following way of constructing a phase diagram. The parameter space is given by the chemical potential $\mu$ and the magnetic field $B = |\bs{B}|$. Firstly, we find all the $k_i$'s as functions of the parameters in the Hamiltonian, such that $k_i = k_i(\mu, B), i \in \overline{1,2N}$, where $2N$ is the total number of $k_i$ solutions. We then sort them at each point in the parameter space with respect to their imaginary parts, as follows: $\im k_i < \im k_{i+1}, i \in \overline{1,2N-1}$. Sorting is one way to construct continuous functions $\im k_i(\mu, B)$ in the parameter space. Although it is possible to deal with discontinuities analytically\cite{Mandal2016b}, in numerical simulations the continuity of the $\im k_{i}$'s becomes crucial, since it is hard to discriminate the zeros of $\im k_i(\mu, B)$ from its discontinuities. Subsequently, we look for the functions $k_i(\mu, B)$ whose imaginary part crosses zero.
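As a concrete illustration of this procedure, the following minimal Python sketch computes the singular points for the 1D wire of Eq.~(\ref{TBHamil}) (a sketch based on our own numerical choices: with $z = e^{ik}$, the function $z^4 \det H$ is a degree-8 polynomial in $z$ whose coefficients we recover by sampling on the unit circle, so that Eq.~(\ref{EPequation}) reduces to a polynomial root-finding problem):
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def singular_points(mu, Bx, Delta=0.2, lam=0.5, t=1.0):
    """Complex momenta k solving det H(k) = 0 for the 1D wire, where
    H(k) = H0 + H1 e^{ik} + H1^dag e^{-ik} in the Nambu basis."""
    H0 = -mu * np.kron(sz, s0) + Delta * np.kron(sx, s0) + Bx * np.kron(s0, sx)
    H1 = np.kron(sz, -t * s0 - 1j * lam * sy)
    M = 16                               # sample points (>= 9 coefficients)
    zs = np.exp(2j * np.pi * np.arange(M) / M)
    f = np.array([z**4 * np.linalg.det(H0 + H1 * z + H1.conj().T / z)
                  for z in zs])
    coeffs = np.fft.fft(f) / M           # c_n = (1/M) sum_m f(z_m) z_m^{-n}
    roots = np.roots(coeffs[8::-1])      # highest-degree coefficient first
    return -1j * np.log(roots)           # k = -i log z; Im k > 0 <=> |z| < 1

# Example: at mu = 2t the 1D transition occurs at B = Delta; the smallest
# |Im k_i| (the root about to cross the real axis) vanishes there
for B in (0.1, 0.2, 0.3):
    print(B, np.abs(singular_points(2.0, B).imag).min())
\end{verbatim}
Scanning $B$ at fixed $\mu = 2t$, the smallest $|\im k_i|$ vanishes at $B = \Delta$, which is indeed the critical field $\sqrt{(\mu-2t_x)^2+\Delta^2}$ of the 1D wire quoted in the next section.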
In practice, this can be done by plotting their imaginary parts as functions of the system parameters. Since Eq.~(\ref{EPequation}) yields pairs of solutions with opposite imaginary parts, and since these pairs have been sorted according to their imaginary parts, it is sufficient to plot the imaginary part of either the smallest positive root ($i=N+1$) or the largest negative root ($i=N$). The set of points $(\mu_0, B_0)$ where $\im k_{N}(\mu_0, B_0) = 0$ (or equivalently $\im k_{N+1}(\mu_0, B_0) = 0$) thus yields the phase transition lines between the topological and non-topological regions in the phase diagram. Note that this technique can in principle also be used to count the number of MBS present in a given phase. The corresponding counting formula is greatly simplified when ``exceptional points'' are present\cite{Mandal2015,Mandal2016a} for a system with unbroken chiral symmetry. In such a case, the Hamiltonian can be brought to a block off-diagonal form; however, when the chiral symmetry is broken, this block off-diagonal form cannot be achieved in any basis, and it becomes cumbersome to isolate two sets of $N$ continuous solutions $k_i$ (with opposite imaginary parts) and plug them into the counting formula to get the total number of MBS. Thus we will use the singular points method only to find the phase-transition lines. In order to find the number of MBS pairs in a given phase we will rely on the numerical tight-binding calculations. \section{1D wires and infinite ribbons} \subsection{1D wire} We start by describing the well-known phase diagram of a 1D SC wire, which we take to lie along the $x$-axis ($N_y=1$ and $N_x \gg1 $). In the presence of a magnetic field the time-reversal symmetry (TRS) is broken and only the particle-hole symmetry (PHS) holds; therefore, the system is in the topological class D, described by a $\mathbb{Z}_2$ invariant \cite{Altland1997}. If the applied magnetic field is perpendicular to the spin-orbit direction, i.e. either $\bs{B} = (0, 0, B_z)$ or $\bs{B} = (B_x, 0, 0)$, the SC wire enters a gapped topological phase as soon as $B_x$ or $B_z$ becomes larger than $\sqrt{(\mu-2t_x)^2 + \Delta^2}$. The corresponding phase diagram is shown in Fig.~\ref{1wire-PhD}. Further details of the $\mathbb{Z}_2$ invariant calculation can be found in the first subsection of Appendix A. \begin{figure} \includegraphics*[width = 0.7\columnwidth]{1Dwire-TI.pdf} \caption{(color online) The phase diagram of a 1D superconducting nanowire, obtained via the topological invariant calculation, as a function of the chemical potential $\mu$ and the magnetic field along the wire, $B=B_x$ (the phase diagram remains the same for a magnetic field perpendicular to the wire, $B=B_z$). We set $\Delta=0.2t, \lambda_x=0.5t$.} \label{1wire-PhD} \end{figure} \subsection{Infinite ribbon} In this subsection we study an infinite ribbon with a finite but large number of sites in the $y$-direction ($N_y \gg 1$), and infinite in the $x$-direction, $N_x\rightarrow \infty$ (see Fig.~\ref{2Dribbonsketch}). We set $\lambda_x = \lambda_y = \lambda$ and $t_x = t_y = 1$. We are interested in the formation of zero-energy edge states parallel to the $x$ axis (see the black circles in Fig.~\ref{2Dribbonsketch}). We consider open boundary conditions (also referred to as ``zero boundary conditions'') at the edges of the ribbon (in the $y$ direction), and take the ribbon to be infinite (or, equivalently, with periodic boundary conditions) in the $x$ direction.
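Such a ribbon Bloch Hamiltonian $H(k_x)$ is straightforward to assemble numerically. The sketch below (our illustration; the momentum resolution and the value of $N_y$ are hypothetical choices, while the parameters match the upper panel of Fig.~\ref{2Dribbon}) builds the $4N_y \times 4N_y$ Hamiltonian from the terms of Eq.~(\ref{TBHamil}) and diagonalizes it on a grid of momenta:
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def ribbon_bands(Ny, mu, By, Delta=0.2, lam=0.5, t=1.0, nk=201):
    """Bands E_n(k_x) of a ribbon infinite along x (Bloch) and open along y."""
    onsite = (-mu * np.kron(sz, s0) + Delta * np.kron(sx, s0)
              + By * np.kron(s0, sy))
    hop_x = np.kron(sz, -t * s0 - 1j * lam * sy)
    hop_y = np.kron(sz, -t * s0 + 1j * lam * sx)
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = np.empty((nk, 4 * Ny))
    for a, k in enumerate(ks):
        diag = onsite + hop_x * np.exp(1j * k) + hop_x.conj().T * np.exp(-1j * k)
        H = np.zeros((4 * Ny, 4 * Ny), dtype=complex)
        for y in range(Ny):
            H[4*y:4*y+4, 4*y:4*y+4] = diag
            if y + 1 < Ny:
                H[4*y:4*y+4, 4*y+4:4*y+8] = hop_y
                H[4*y+4:4*y+8, 4*y:4*y+4] = hop_y.conj().T
        bands[a] = np.linalg.eigvalsh(H)
    return ks, bands

# Example: Ny large enough that the two edges decouple
ks, bands = ribbon_bands(Ny=60, mu=4.0, By=0.3)
print(np.abs(bands).min())   # near-zero edge modes inside the k-space gap
\end{verbatim}
Plotting \texttt{bands} versus \texttt{ks} reproduces band structures of the type shown in Fig.~\ref{2Dribbon}.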
In Fig.~\ref{2Dribbon} we plot the band structure of this system for an in-plane magnetic field $B_y$ parallel to the $y$-axis (perpendicular to the ribbon edges), as well as the topological phase diagram of such a ribbon, obtained using the tight-binding numerical diagonalization and the evaluation of the MP as described in section II. First of all, we note that the spectrum is particle-hole symmetric even though the band structure is not. Second, as we can see from the band structure, the system may become gapless, i.e. there are regions in momentum space in which the gap in the spectrum closes. However, despite the fact that there is no overall gap, chiral MBS\cite{He2017} do form, and they correspond to values of momenta for which the bulk spectrum remains gapped (e.g. $k_x a=0$ and $k_x a=\pi$). Such states are dispersive and propagate along the edges of the ribbon. We should note that similar situations, in which the closing of the gap occurs in certain regions of the parameter space, have been studied previously, and it has been shown that, despite the absence of an overall gap, the system can still be topological and support MBS\cite{Teo2010,Matsuura2013,Deng2014,Baum2015a,Baum2015b}. The number of MBS pairs varies from 0 to 2, depending on the parameters of the system (see Fig.~\ref{2Dribbon}). However, the case of two Majorana fermions propagating along the same boundary is not stable: in the absence of protection by TRS, for example in the presence of small disorder, such states would combine to form a conventional fermionic state. Thus the system is topologically non-trivial only when the number of MBS pairs is equal to 1. It is also worth mentioning that the number of sites in the $y$ direction must be large enough that the overlap of the wave functions of the two Majorana states localized on the two opposite edges of the ribbon is exponentially small, so that these states cannot hybridize and acquire a finite energy. \begin{figure}[h!] \centering \includegraphics*[width = 0.8\columnwidth]{2Dribbonsketch.pdf} \caption{(color online) A sketch of an infinite ribbon along the $x$-axis with a magnetic field $B=B_y$ perpendicular to its edges. The black sites denote the edges of the ribbon where the chiral Majorana modes are localized.} \label{2Dribbonsketch} \end{figure} Note that if the magnetic field is applied along the $x$-axis the system is also gapless; however, in this case no Majorana modes form anywhere in the parameter space, and the system is fully trivial. \begin{figure}[h!] \centering \includegraphics*[width = 0.6\columnwidth]{2DBymu4.pdf}\\ \vspace{.2in} \includegraphics*[width = 0.6\columnwidth]{2DBymu0.pdf}\\ \vspace{.2in} \hspace{.2in} \includegraphics*[width = 0.7\columnwidth]{ribbon100.pdf} \caption{(color online) Band structure of an infinite ribbon with a magnetic field $B=B_y$ perpendicular to the edges, for $\mu=4t, B_y=0.3t$ (upper panel) and $\mu=0, B_y=0.3t$ (middle panel). Note that the system may host either one or two pairs of chiral Majorana modes. The corresponding topological phase diagram (lower panel) depicts the number of Majorana modes (as evaluated from the total MP) as a function of $\mu$ and $B$. In all the examples we set $\Delta=0.2t, \lambda_x=\lambda_y=0.5t$.} \label{2Dribbon} \end{figure} \section{Finite-size strips} In what follows we focus on finite-size strips, i.e. systems made up of $N_y > 1$ coupled wires, each with a finite but large number of sites $N_x \gg 1$, with $N_x \gg N_y$.
We consider an in-plane magnetic field $B_x$ parallel to the long edge of the system (see Fig.~\ref{quasi-1Dsketch}). Note that for a magnetic field parallel to the $y$-axis no Majorana states can form, since the magnetic field would in this case be parallel to the direction of the spin-orbit coupling in the wires. A similar system was considered in Ref.~[\onlinecite{Sedlmayr2016}], but only for magnetic fields perpendicular to the plane of the system. We apply open boundary conditions in both the $x$ and $y$ directions. \begin{figure}[h!] \centering \includegraphics*[width = 0.9\columnwidth]{quasi-1Dsketch.pdf} \caption{(color online) A sketch of a finite-size strip with a magnetic field $B=B_x$ along the $x$-axis. The black sites denote the short edges of the system where the Majorana modes would be localized. This system can be thought of as a set of 1D wires coupled in the $y$-direction.} \label{quasi-1Dsketch} \end{figure} As described in Section II, we use two main tools to obtain the phase diagram for these systems. The first is to numerically diagonalize the tight-binding Hamiltonian and employ the integrated MP, plotting its value as a function of the chemical potential and the magnetic field. The second is to assume that the momentum $k_x$ along $x$ is a ``good'' quantum number and exploit the SP technique. In Fig.~\ref{234wires-PhD} we show numerically that the results of these two methods are fully consistent. Each phase transition boundary, defined as a change in the number of MBS pairs obtained via the MP technique, corresponds to a line of zeroes in the SP plot. The only apparent exception is the special case of the white lines in the $N_y=4$ diagram, close to $\mu=t$ and $\mu=2.2t$ at $B=0.5t$. They seem to correspond to the special case of a zero-width non-topological line-like region between two topological regions with one pair of MBS. Such situations, i.e. topological regions divided by a line of non-topological points, can arise, and they are well captured here by the SP method. The numerical method is a bit less precise in this case, and the non-topological lines acquire a finite width, mostly because of the finite length of the considered systems (in the infinite-length limit the width of these regions should go to zero). \begin{figure}[h!] \begin{tabular}{cc} {\textbf{Majorana polarisation}} & {\textbf{Singular points}$\phantom{aa}$} \\ \includegraphics*[width = 0.45\columnwidth]{quasi1D300x2-MP.pdf} & \includegraphics*[width = 0.47\columnwidth]{quasi1D300x2-EP.pdf}\\ \includegraphics*[width = 0.45\columnwidth]{quasi1D300x3-MP.pdf} & \includegraphics*[width = 0.47\columnwidth]{quasi1D300x3-EP.pdf}\\ \includegraphics*[width = 0.45\columnwidth]{quasi1D300x4-MP.pdf} & \includegraphics*[width = 0.47\columnwidth]{quasi1D300x4-EP.pdf}\\ \end{tabular} \caption{(color online) Topological phase diagrams in the $(\mu, B_x)$ plane. In the left column we plot the total MP summed over all the low-energy states with a MP larger than a given cutoff, here taken to be $0.8$. The color scheme indicates the number of MBS pairs. In the right column we plot the results of the SP calculation; the phase transition lines are given by the zeroes of the plot. We consider finite-size strips with 2, 3 and 4 coupled wires. The results of these two methods are consistent, and the phase transition lines coincide. We set $\Delta=0.2t, \lambda_x=\lambda_y=0.5t$.
} \label{234wires-PhD} \end{figure} Note that the integrated MP allows one not only to locate the phase transition boundaries, but also to access the number of emerging MBS. Since the chiral symmetry (a combination of the PHS and the TRS) is absent, we cannot easily use the counting formula introduced in Ref.~[\onlinecite{Mandal2016b}] to obtain the number of Majorana modes with this method. The counting formula is in principle also applicable in the absence of the chiral symmetry, but the broken-TRS case is very cumbersome and much harder to implement numerically. Therefore, we use the SP technique here only to obtain the phase transition lines; the actual number of MBS pairs and the topological character of a given phase-space region are obtained numerically via the calculation of the total MP. We should point out that some segments of the phase transition lines in the right column of Fig.~\ref{234wires-PhD} are almost invisible. This is not due to a failure of the method, but to the numerical grid: the regions in which the zeroes of the determinant occur are very thin, and the number of grid points required to capture them entirely would be too large. We have checked, though, that the phase transition lines are present everywhere as expected, even if not fully shown in Fig.~\ref{234wires-PhD}. Note also that by increasing the number of wires we can increase the number of MBS. However, as we show in the next section, only the states with an odd number of MBS pairs are topologically protected. \section{Effects of disorder and topological invariant calculations} In what follows we show that a small amount of disorder makes the Majorana modes recombine and form regular electronic states in all the regions of the parameter space with an even number of Majorana modes; these regions are thus not topologically protected. However, in all the regions with an odd number of Majorana modes, one MBS pair survives the effects of disorder. In the left column of Fig. \ref{disorderq1D} we show the phase diagrams for finite-size strips in the presence of disorder. The disorder considered here is a random variation of the value of the Zeeman magnetic field with an intensity of $5\%$ around its average value.\cite{Sedlmayr2015a} Indeed, we see that in the even-parity regions of the phase space the Majorana modes are destroyed, confirming their non-topological character. We also present the corresponding phase diagrams computed using the topological invariant (TI) (for the details of the derivations see Appendix A, as well as Refs.~[\onlinecite{Sedlmayr2016}] and [\onlinecite{Gibertini2012}]). In Fig. \ref{disorderq1D} we compare the phase diagrams showing the topological regions surviving the effects of disorder (left column) and those obtained using the topological invariant (right column). Indeed, up to some sets of lines of special points, the topological regions predicted by the topological invariant coincide with the regions of the phase diagram shown numerically to exhibit an odd parity of MBS pairs and to survive the presence of disorder. It is also worth discussing how the value of the topological gap protecting the zero-energy states changes in the presence of disorder. In Fig.~\ref{TBspectra300x4} we plot the energy spectra of a 4-wire finite-size strip for a fixed value of the chemical potential as a function of an in-plane magnetic field $B_x$, both in the absence and in the presence of disorder.
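The disorder itself is straightforward to implement; only the $5\%$ intensity is fixed above, so the uniform distribution and the independent site-by-site draw in the following minimal sketch are assumptions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def disordered_zeeman(B, n_sites, strength=0.05):
    # Each site gets a Zeeman field within +/- 5% of the average value B;
    # these values replace the uniform field in the onsite Zeeman term
    # of the tight-binding BdG Hamiltonian.
    return B * (1.0 + strength * rng.uniform(-1.0, 1.0, size=n_sites))

B_sites = disordered_zeeman(B=0.5, n_sites=300 * 4)  # e.g. a 300x4 strip
\end{verbatim}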
These spectra correspond to vertical cuts of the lower left panels of Figs.~\ref{234wires-PhD} and \ref{disorderq1D}. Without disorder, in full accordance with Fig.~\ref{234wires-PhD}, we have MBS for magnetic fields $B_x$ from $\sim0.45t$ to $\sim0.73t$ and from $\sim0.92t$ to $\sim1.25t$. It is worth mentioning that certain regions contain more than one pair of MBS. The number of pairs is shown above the corresponding Majorana zero-energy lines, highlighted in red. First, we note that, consistent with the phase diagrams presented in Figs.~\ref{234wires-PhD} and \ref{disorderq1D}, all the regions with odd numbers of MBS are protected against disorder, exhibiting one stable zero-energy mode (\emph{cf.} the regions from $\sim0.55t$ to $\sim0.64t$ and from $\sim0.92t$ to $\sim1.25t$, respectively), whereas in the regions with even numbers these states acquire a finite energy in the presence of disorder, confirming that these regions of the phase diagram are not topologically protected. Moreover, we see that the gap protecting these zero-energy states is only slightly affected by disorder, more significantly so for the states with even numbers of Majoranas, consistent with the lack of topological protection for these states. \begin{figure}[h!] \centering \begin{tabular}{cc} {\textbf{MP with disorder}}$\phantom{a}$ & {\textbf{TI without disorder}} \\ \includegraphics*[width=0.47\columnwidth]{quasi1D300x2dis.pdf} & \includegraphics*[width=0.42\columnwidth]{2wires-TI.pdf} \\ \includegraphics*[width=0.47\columnwidth]{quasi1D300x4dis.pdf} & \includegraphics*[width=0.42\columnwidth]{4wires-TI.pdf} \end{tabular} \caption{(color online) Topological phase diagrams in the $(\mu, B_x)$ plane for $2$ and $4$ coupled wires. In the left column we depict the MP of the lowest-energy mode for a disordered system. We impose a MP cutoff of $0.95$ (states with a MP smaller than $1$ cannot be considered actual Majoranas and usually correspond to non-zero energies, even if they remain Majorana-like). In the right column we depict the phase diagram obtained using the topological invariant calculation (without disorder); the topological regions are shown in violet. Note that, up to some special-points lines, the TI results are fully consistent with those for the MP in the disordered system, which is expected since the TI gives access to the parity of the number of MBS. In all the panels $\Delta=0.2t, \lambda_x=\lambda_y=0.5t$.} \label{disorderq1D} \end{figure} \begin{figure}[h!] \centering \includegraphics*[width = 0.99\columnwidth]{TBspectra300x4.pdf} \includegraphics*[width = 0.99\columnwidth]{TBspectra300x4dis.pdf} \caption{(color online) Energy spectra of 4-wire finite-size strips with and without disorder (lower and upper panels, respectively) as a function of an in-plane magnetic field $B_x$ varying from $0.25t$ to $1.25t$. We restrict ourselves to plotting only the lowest 60 energy levels, and we set $\Delta=0.2t, \lambda_x=\lambda_y=0.5t$, and $\mu=0.69t$. These panels correspond to vertical cuts of the lower left panels of Figs.~\ref{234wires-PhD} and \ref{disorderq1D}, respectively. The number of pairs is shown above the corresponding Majorana zero-energy lines, highlighted in red. } \label{TBspectra300x4} \end{figure} \section{Finite-size squares} If both $N_x, N_y \gg 1$ and the two dimensions are comparable, then we are dealing with a finite-size square.
In Ref.~[\onlinecite{Sedlmayr2016}] it has been shown that for perpendicular Zeeman magnetic fields, finite-energy quasi-Majorana-like states ($C=\sqrt{2}/2$) may form, localized mostly in the corners of these square flakes, for a set of parameters inside the 2D bulk topological phase. However, for in-plane magnetic fields (e.g. along the $x$-axis) the situation is very different: the rotation symmetry is broken and there is always a special in-plane direction, so we can no longer expect quasi-Majorana states with a rotationally symmetric MP. By analyzing the MP of these systems we note that the generic situation that emerges is the one depicted in Fig.~\ref{squareMPvector}: quasi-disordered edge states localized on the edges of the system perpendicular to the magnetic field. Such states also have a quasi-disordered MP, and the integral of the MP over one of these edge states is finite (for the case in Fig.~\ref{squareMPvector} it is of the order of $0.9$). This tendency to form a Majorana state is stronger for values of the magnetic field close to the transition, and for systems with a very large $N_x$ we can recover actual Majorana states on the edges perpendicular to the magnetic field. The systems required to recover a full Majorana are too large for our numerical capabilities, but even for smaller systems we have managed to tune the parameters to reach a MP of up to $0.9$, and increasing the size would improve this value further. This is important from an experimental perspective, since it indicates that for in-plane fields actual Majorana states can form even in wide square systems, while for perpendicular fields this can never be the case unless one dimension is much larger than the other.\\ \begin{figure} \centering \vspace{.2in} \includegraphics*[width = 0.6\columnwidth]{squareMPvector.pdf} \caption{(color online) The MP vector for a finite-size square consisting of $100\times 100$ sites in a magnetic field along the $x$-axis. We choose the parameters $\mu=4t, B_x=0.28t, \Delta=0.2t, \lambda_x=\lambda_y=0.5t$.} \label{squareMPvector} \end{figure} \section{Conclusions} \label{Conclusions} We have studied the formation of Majorana bound states in infinite ribbons, finite-size strips, and squares with Rashba spin-orbit coupling and an in-plane magnetic field. We have shown that in infinite ribbons chiral Majorana fermions may form when the magnetic field is perpendicular to the edges of the ribbon. Furthermore, we have studied finite-size strips using a numerical diagonalization technique and the Majorana polarization, as well as the singular points technique, and we have demonstrated the qualitative equivalence of these two methods in constructing the phase diagrams of these systems. We have also evaluated the topological invariant for the finite-size strips and shown that it yields the correct phase diagrams for the parity of the number of MBS pairs. Moreover, we have confirmed numerically that the phases with an even number of MBS pairs are not stable in the presence of disorder, and are thus topologically trivial, while the phases with an odd number of MBS preserve their topological character. \section{Acknowledgements} We would like to thank Nicholas Sedlmayr for fruitful discussions and useful comments on our manuscript. VK is grateful for the hospitality of the Perimeter Institute, where part of this work was performed.
The research at the Perimeter Institute was supported in part by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. The research at IPhT was supported in part by the ERC Starting Independent Researcher Grant NANOGRAPHENE 256965.\\ \section{Author contribution statement} Cristina Bena suggested the problem, Ipsita Mandal helped with implementing the singular points technique, and Vardan Kaladzhyan, Julien Despres and Cristina Bena carried out the calculations. All the authors wrote and revised this article.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{sect1}Introduction} The lithium isotope $^7$Li is a target material that plays an important role in the fields of nuclear technology and nuclear engineering, for example in compact accelerator-driven neutron sources and the International Fusion Materials Irradiation Facility (IFMIF) \cite {Garin2011}. The neutron beams produced by compact accelerator-driven neutron sources can be applied to nondestructive testing and medical treatment \cite{M.Paul2015}. To improve the reliability of the target-system design for compact accelerator-driven neutron sources, accurate nuclear data for proton induced reactions on $^7$Li are required by Monte Carlo calculation codes. The reaction data for $p + ^7$Li are important not only for these applications but also for basic research in the field of nuclear reactions. Many reaction channels are open for the $p + ^7$Li reaction even at incident proton energies below $20$ MeV, so the reaction mechanism is very complex. Cluster separations, such as $^5$He $\rightarrow n+\alpha$, $^5$Li $\rightarrow p+\alpha$ and $^8$Be $\rightarrow \alpha+\alpha$, are involved besides the sequential particle emission processes. Nuclear reaction calculations are sensitive to the spins, excitation energies, and parities of the nuclear levels, especially for the elastic and inelastic scattering angular distributions and the double differential cross sections of neutron and proton induced nuclear reactions involving $1p$-shell light nuclei. Therefore, a detailed study of the $p + ^7$Li reaction will extend our knowledge of light charged particle induced nuclear reactions involving $1p$-shell light nuclei, and will also provide accurate structural information on some unstable light nuclei. Furthermore, this study provides abundant evidence with which to test the reliability of the statistical theory of light nucleus reactions (STLN), which has been applied successfully to calculate the double differential cross sections of outgoing neutrons both for neutron and proton induced nuclear reactions involving $1p$-shell light nuclei \cite{X.J.Sun2017,Zhang2001Li6,Zhang2002Li7,Sun2009Be9,Duan2010Be9,Zhang2003B10,Zhang2003B11,Sun2008kerma,Zhang1999C12,Sun2007C12,Yan2005N14,Zhang2001O16,Duan2005O16,Duan2007,Zhang2015,Yan 5He}. The double differential cross sections give the most detailed information on the emitted particles \cite{Trkov2011}, and are therefore very important both for theoretical calculations and for applications. Due to the lack of an appropriate theoretical method for light nucleus reactions, the evaluations and model calculations of the double differential cross sections of outgoing neutrons and light charged particles for the $p + ^7$Li reaction are not satisfactory. Complete nuclear reaction data for the $p + ^7$Li reaction are scarce, both theoretically and experimentally. There are a few data sets on partial cross sections for the $p + ^7$Li reaction, such as $^7$Li($p,p$), $^7$Li($p,n$), $^7$Li($p,d$) and $^7$Li($p,\alpha$), in ENDF/B-VII.$1$ \cite{M.B.Chadwick2011} and ENDF/B-VIII.$0$ \cite{A.D.Carlson2018}. The nuclear reaction data in ENDF/B-VII.1 and ENDF/B-VIII.0 were obtained from R-matrix analyses \cite{Philip2005} and fits to experimental data.
In JENDL-$4.0$/HE \cite{K.Shibata2011}, the evaluation was performed with a multi-channel R-matrix fit to the experimental data at incident proton energies below $10$ MeV, while the CCONE code \cite{Iwamoto.O2013} was used to fit the ($p,xn$) spectra in the incident proton energy range from $10$ to $200$ MeV. The double differential cross sections of some outgoing charged particles for the $p + ^7$Li reaction are not given in these three nuclear reaction databases \cite{M.B.Chadwick2011,A.D.Carlson2018,K.Shibata2011}, and cluster separation processes are not considered in the evaluations either. In addition, the $n + ^6$Li reaction \cite{T.Matsumoto2011} and the $n + ^7$Li reaction \cite{D.Ichinkhorloo2011} were calculated with the continuum discretized coupled channels (CDCC) method in $2011$ and $2012$, respectively. Proton induced reactions on $^{6,7}$Li \cite{H.Guo2013} were subsequently calculated with the CDCC method in $2013$. Based on this method, the double differential cross sections of protons and tritons from the $p + ^7$Li reaction \cite{H.R.Guo2019} were given recently by one of our collaborators. The calculated double differential cross sections of the outgoing protons agree well with the experimental data, but the results for the outgoing tritons are overestimated at large outgoing angles compared with the measurements. The same method was applied to calculate the double differential cross sections of outgoing neutrons for the $n+^{6,7}$Li reactions, and the results are underestimated in some low-emission-energy regions compared with the experimental data \cite{H.R.Guo2019}. Moreover, the double differential cross sections of outgoing deuterons are not given in references \cite{T.Matsumoto2011,D.Ichinkhorloo2011,H.Guo2013,H.R.Guo2019}. One of the reasons is that the sequential secondary particle emission processes are not considered in the CDCC model. Obviously, at high incident proton energies, the effects of secondary particle emission processes between the discrete levels and of cluster separations are very important in light nuclear reactions. The reaction cross sections for the $p + ^7$Li reaction have been successfully analyzed with R-matrix theory in previous studies; however, the analysis of the energy-angular spectra of the outgoing particles is still an open problem. There are two main problems in calculating double differential cross sections for neutron and proton induced light nucleus reactions. One is the theoretical description of the emission process from a compound nucleus to the discrete levels of the residual nuclei, and from the discrete levels of the first residual nuclei to the discrete levels of the secondary residual nuclei, with angular momentum and parity conservation through the pre-equilibrium process. The other is the recoil effect, which is very important for light nucleus reactions and must be taken into account exactly to maintain energy conservation in the different reaction processes. Fortunately, these two problems have been solved by STLN \cite{Zhang2015,X.J.Sun2016}. Based on the unified Hauser-Feshbach and exciton model \cite{Zhang1993}, which can describe the particle emission processes between the discrete levels with energy, angular momentum, and parity conservation, STLN was proposed to calculate the double differential cross sections of outgoing neutrons and light charged particles for neutron induced reactions involving $1p$-shell nuclei.
STLN has been applied successfully to calculate the double differential cross sections of outgoing neutrons for neutron induced reactions on $^6$Li \cite{Zhang2001Li6}, $^7$Li \cite{Zhang2002Li7}, $^9$Be \cite{Sun2009Be9,Duan2010Be9}, $^{10}$B \cite{Zhang2003B10}, $^{11}$B \cite{Zhang2003B11}, $^{12}$C \cite{Sun2008kerma,Zhang1999C12,Sun2007C12}, $^{14}$N \cite{Yan2005N14}, $^{16}$O \cite{Zhang2001O16,Duan2005O16} and $^{19}$F \cite{Duan2007}, and the calculated results reproduced the measurements very well. Furthermore, STLN has been extended to light charged particle induced nuclear reactions involving $1p$-shell nuclei. For example, the double differential cross sections of outgoing neutrons for the $p + ^9$Be reaction were calculated and analyzed in $2016$, and the calculated results successfully reproduced the measurements \cite{X.J.Sun2016}. In this paper, we further improve STLN, by analyzing the possible open reaction channels and by taking into account the Coulomb barrier and the emission ratios of composite particles, in order to obtain the double differential cross sections of outgoing charged particles for the $p + ^7$Li reaction. In Sec. \ref{sect2}, the theoretical model used in this work is briefly introduced. The reaction channels of the $p + ^7$Li reaction below $20$ MeV are analyzed in detail in Sec. \ref{sect3}. The comparisons between the calculated results and the experimental data, together with the corresponding analysis, are given in Sec. IV, and a summary is given in the last section. \section{ THEORETICAL MODEL}\label{sect2} \subsection{ Theoretical Frame}\label{sect2.1} Since the approach describing neutron induced light nucleus reactions involving 1$p$-shell light nuclei was proposed in $1999$ \cite{Zhang1999C12}, many experimental data, especially double differential cross sections, have been analyzed with it. In dynamics, the angular momentum coupling and parity effects in the pre-equilibrium emission process, from the discrete levels of the first residual nuclei to the discrete levels of the secondary residual nuclei, were introduced to keep angular momentum and parity conservation accurately \cite{Zhang2001O16}, so that the double differential cross sections of the secondary emitted particles can be calculated within this theoretical model. In kinematics, the recoil effect is strictly taken into account, so that the energy balance of particle emission can be maintained accurately in the different reaction processes. Recently, this approach has been improved to describe proton induced light nucleus reactions involving 1$p$-shell light nuclei, and to calculate the double differential cross sections of neutrons and light charged particles. To illustrate the physical picture, the fundamental formulas are briefly given in this paper; a detailed description of STLN can be found in Refs. \cite{Zhang1999C12,Zhang2015,X.J.Sun2016}. Based on the unified Hauser-Feshbach and exciton model \cite{Zhang1993}, the cross sections of the first emitted particles from the compound nucleus to the discrete energy levels of the first residual nuclei can be expressed as \begin{eqnarray}\label{eq1} \sigma_{m_1,k_1}(E_L)=\sum_{j\pi}\sigma_a^{j\pi}(E_L)\{\sum_{n=3}^{n_{max}}P^{j\pi}(n) \frac{W_{m_1,k_1}^{j\pi}(n,E^*,\varepsilon_{m_1}^c)}{W_T^{j\pi}(n,E^*)} +Q^{j\pi}(n)\frac{W_{m_1,k_1}^{j\pi}(E^*,\varepsilon_{m_1}^c)}{W_T^{j\pi}(E^*)}\}. \nonumber\\ \end{eqnarray} Here $P^{j\pi}(n)$ is the occupation probability of the $n$-th exciton state in the $j\pi$ channel ($j$ and $\pi$ denote the angular momentum and parity of the final state, respectively).
$P^{j\pi}(n)$ can be obtained by solving the $j$-dependent exciton master equation under the conservation of angular momentum in the pre-equilibrium reaction process \cite{Zhang1994}. $Q^{j\pi}(n)$ is the occupation probability of the equilibrium state in the $j\pi$ channel. $W_{m_1,k_1}^{j\pi}(n,E^*,\varepsilon_{m_1}^c)$ is the emission rate of the first emitted particle $m_1$ at the $n$-th exciton state with outgoing kinetic energy $\varepsilon_{m_1}^c$ in the center-of-mass system (CMS), and $W_T^{j\pi}(n,E^*)$ is the total emission rate at the $n$-th exciton state. $W_{m_1,k_1}^{j\pi}(E^*,\varepsilon_{m_1}^c)$ is the emission rate of the first emitted particle $m_1$ at the equilibrium state with outgoing kinetic energy $\varepsilon_{m_1}^c$ in the CMS, and $W_T^{j\pi}(E^*)$ is the total emission rate at the equilibrium state. $E^*$ is the excitation energy of the compound nucleus, and $\sigma_a^{j\pi}(E_L)$ is the absorption cross section in the $j\pi$ channel. In Eq. (\ref{eq1}), the first term in the braces denotes the contribution of the pre-equilibrium process, which dominates light nucleus reactions involving 1$p$-shell light nuclei, while the second term denotes the contribution of the equilibrium process. The cross section of the secondary outgoing particle, from a discrete energy level of the first residual nucleus to a discrete energy level of the secondary residual nucleus, can be expressed as \begin{eqnarray}\label{eq2} \sigma_{k_1 \rightarrow k_2}(n, m_1, m_2)=\sigma_{k_1}(n, m_1)\cdot R_{m_2}^{k_1\rightarrow k_2}(E_{k_1}), \end{eqnarray} where $\sigma_{k_1}(n, m_1)$ is the cross section of the first emitted particle $m_1$ expressed in Eq. (\ref{eq1}), and $R_{m_2}^{k_1\rightarrow k_2}(E_{k_1})$ is the branching ratio of the secondary outgoing particle $m_2$ from the energy level $E_{k_1}$ of the first residual nucleus $M_1$ to the energy level $E_{k_2}$ of the secondary residual nucleus $M_2$. Formulas (\ref{eq1}) and (\ref{eq2}) describe the particle emission from the compound nucleus to the discrete energy levels of the first residual nuclei, and from the discrete energy levels of the first residual nuclei to the discrete levels of the secondary residual nuclei, with angular momentum and parity conservation through the pre-equilibrium and equilibrium reaction processes. Our previous studies indicate that the total double differential cross sections of the outgoing particles in light nucleus reactions are dominated by the pre-equilibrium emission process \cite{X.J.Sun2017,Zhang2001Li6,Zhang2002Li7,Sun2009Be9,Duan2010Be9,Zhang2003B10,Zhang2003B11,Sun2008kerma,Zhang1999C12,Sun2007C12,Yan2005N14,Zhang2001O16,Duan2005O16,Duan2007,Zhang2015,Yan 5He,X.J.Sun2015,X.J.Sun2016}; the equilibrium reaction process alone cannot reproduce the double differential cross sections of light nucleus reactions. The linear momentum-dependent exciton state density model \cite{M.B.Chadwick1991} is used to obtain the Legendre expansion coefficients of the first outgoing particle and its residual nucleus. The double differential cross sections of the first outgoing deuteron, triton, $^3$He, and $\alpha$ are calculated with the improved Iwamoto-Harada model \cite{A.Iwamoto1982,J.S.Zhang93,J.S.Zhang1996}, which describes light composite particle emission. The representation of the double differential cross sections of the secondary outgoing particles has been obtained through accurate kinematics in Refs. \cite{Zhang2003B10,Zhang1999C12}.
The representation of the double differential cross sections of the cluster separation and three-body breakup processes can be found in Refs. \cite{Zhang2001Li6,Zhang2002Li7,Zhang2003B11}. Energy conservation is strictly maintained in the laboratory system (LS) for the different reaction processes. A new integral formula \cite{X.J.Sun2015}, which is not compiled in any integral table or mathematical software package, has been employed to describe the double differential cross sections of the outgoing particles. According to Heisenberg's uncertainty relation, the level widths and the energy resolution can be taken into account when fitting the experimental data. The fitting procedure for the double differential cross sections of the outgoing particles is performed with a Gaussian expansion form, and the transformation formulas from the CMS to the LS are given in Ref. \cite{Zhang1999C12}. All of the energy level widths are taken from the experimental measurements \cite{Tilley1992,Tilley2002,Tilley2004} as fixed input parameters. The optical model is very important for calculating the reduced penetration factor, which determines the emission rate of the first emitted particle. The phenomenological spherical optical model potential is employed in the model calculations. The potential parameters of the incident and ejected channels are determined from various reaction cross sections and from the angular distributions of elastic and inelastic scattering. \subsection{ Coulomb Barrier}\label{sect2.2} Since the Coulomb barrier significantly affects the opening of charged-particle reaction channels, it must be properly taken into account in the calculations for the incident and outgoing channels. Considering energy-momentum conservation in the CMS for the outgoing channel, the kinetic energy $\varepsilon _{{m_1}}^c$ of the first emitted particle is uniquely determined as \begin{eqnarray}\label{eq3} \varepsilon _{{m_1}}^c = \frac{{{M_1}}}{{{M_C}}}\left( {{E^*} - {B_1} - {E_{{k_1}}}} \right), \end{eqnarray} where $M_1$ is the mass of the first residual nucleus after emitting the first particle $m_1$, and $M_C$ is the mass of the compound nucleus. For convenience, $m_1$ and $M_1$ also denote the first outgoing particle and the first residual nucleus, respectively. $E^*$ is the excitation energy of the compound nucleus, $B_1$ is the binding energy of the first emitted particle in the compound nucleus, and $E_{k_1}$ is the excitation energy of the $k_1$-th discrete level of the first residual nucleus. Considering energy-momentum conservation in the CMS for the incident channel, the excitation energy of the compound nucleus can be expressed as \begin{eqnarray}\label{eq4} {E^*} = \frac{{{M_T}}}{{{M_C}}}{E_p} + {B_p}, \end{eqnarray} where $M_T$ is the mass of the target nucleus, $E_p$ is the kinetic energy of the incident particle, and $B_p$ is the binding energy of the incident particle in the compound nucleus. From Eqs. (\ref{eq3}) and (\ref{eq4}), the threshold energy $E_{th}$ can be calculated. Due to the Coulomb barrier \cite{G.R.Satchler1991,Peter W1970}, the kinetic energy of the first outgoing charged particle must be higher than the Coulomb barrier $V_{Coul}$, namely $ \varepsilon _{{m_1}}^c > {V_{Coul}}$.
Under the spherical nucleus assumption \cite{Zhang2015}, the Coulomb barrier ${V_{Coul}}$ can be approximately expressed as \begin{eqnarray}\label{eq5} {V_{Coul}} = \frac{{{e^2}{Z_{{M_1}}}{Z_{{m_1}}}}}{{{r_C}(A_{{M_1}}^{\frac{1}{3}} + A_{{m_1}}^{\frac{1}{3}})}}, \end{eqnarray} where $Z_{M_1}$ and $Z_{m_1}$ are the charge numbers of the residual nucleus and of the first outgoing charged particle, respectively, and $r_C$ ($=1.2\sim1.5$ fm) is the charge radius parameter. For the proton, deuteron, triton, $^3$He, $\alpha $ and $^5$He, the charge radii $r_C A^{\frac{1}{3}}$ are replaced by the measured values compiled in Ref. \cite{I.Angeli2013}. Therefore, the incident energy $E_p$ must satisfy Eq. (\ref{eq6}) for a reaction channel to open, i.e. \begin{eqnarray}\label{eq6} {E_p} > \frac{{{M_C}}}{{{M_T}}}(\frac{{{M_C}}}{{{M_1}}}{V_{Coul}} + {E_{{k_1}}} + {B_1} - {B_p}). \end{eqnarray} Obviously, the Coulomb barrier significantly affects which reaction channels are open. The reduced penetration factor calculated with the optical model potential is set to $0$ if $ \varepsilon _{{m_1}}^c <{V_{Coul}}$. \subsection{ Double Differential Cross Section of Light Composite Particle}\label{sect2.3} The double differential cross sections of the emitted neutron and proton can be calculated using the linear momentum-dependent exciton state density model \cite{M.B.Chadwick1991}. The double differential cross sections of the outgoing light composite particles (deuteron, triton, $^3$He, $\alpha $ and $^5$He) can be expressed as \cite{Zhang2015} \begin{eqnarray}\label{eq7} \frac{{{d^2}\sigma }}{{d\varepsilon d\Omega }} = \sum\limits_n {\frac{{d\sigma (n)}}{{d\varepsilon }}A(n,\varepsilon ,\Omega )} , \end{eqnarray} where $\frac{{d\sigma (n)}}{d\varepsilon }$ is the energy spectrum at the $n$-th exciton state, which can be calculated with angular momentum and parity conservation, and $A(n,\varepsilon ,\Omega )$ is the angle factor satisfying the normalization condition, expressed as \begin{eqnarray}\label{eq8} A(n,\varepsilon ,\Omega ) = \frac{1}{{4\pi }}\sum\limits_l {(2l + 1)\frac{{{G_l}(\varepsilon ,n)}}{{{G_0}(\varepsilon ,n)}}} \frac{{{\tau _l}(n,\varepsilon )}}{{{\tau _0}(n,\varepsilon )}}{P_l}(\cos \theta ). \end{eqnarray} Here $\Omega$ is the solid angle of the outgoing particle, and ${\tau _l}(n,\varepsilon )$ is the lifetime of the $l$-th partial wave with outgoing particle energy $\varepsilon$ emitted from the $n$-th exciton state, which can be derived from the exciton model with angular momentum and parity conservation. ${G_l}(\varepsilon ,n)$ is the geometric factor at the $n$-th exciton state with outgoing particle energy $\varepsilon$, expressed as \begin{eqnarray}\label{eq9} G_l^b({\varepsilon _b}) = \frac{1}{{{x_b}}}\int\limits_{\max \left\{ {1,{x_b} - {A_b} + 1} \right\}}^{\sqrt {1 + \frac{{E^*}}{{{\varepsilon _F}}}} } {{x_1}d{x_1}} \int\limits_{{x_b} - {x_1}}^{{A_b} - 1} {dy{Z_b}(y){P_l}(\cos \Theta )}, \end{eqnarray} where $\varepsilon _b$ is the kinetic energy of the outgoing composite particle, $\varepsilon _F$ is the Fermi energy, $E^*$ is the excitation energy of the compound nucleus, and $A_b$ is the mass number of the outgoing particle $b$. Here $x_1=p_1/p_F$, where $p_1$ is the momentum of the first nucleon in the outgoing composite particle $b$ and $p_F$ is the Fermi momentum; $x_b=p_b/p_F$, where $p_b$ is the momentum of the outgoing composite particle; and $y=p_y/p_F$, where $p_y$ is the total momentum of the nucleons other than the first nucleon in the outgoing composite particle $b$.
${Z_b}(y)$ is a factor related to the emitted composite particle, expressed as \begin{eqnarray}\label{eq10} {Z_b}(y) = \left\{ {\begin{array}{*{20}{l}} {y, ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~~~~~~~ ~~ ~~~~~~~~~b = \textmd{deuteron}}\\ {y{{(y - 2)}^2}(y + 4), ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~ ~~~~~~~~ ~~ ~ ~~~~~~~~ ~~b = \textmd{triton}, \textmd{$^3$He}}\\ {{{(y - 3)}^4}({y^3} + 12{y^2} + 27y - 6),~~~~~~~~~~~~~~~~~~~~b = {\alpha} }\\ {{{(y - 4)}^6}({y^4} + 24{y^3} + 156{y^2} + 224y - 144),~~~~~b = \textmd{$^5$He}}. \end{array}} \right. \end{eqnarray} The cosine of the angle $\Theta$ is given by \begin{eqnarray}\label{eq11} \cos\Theta = \frac{{x_b^2 + x_1^2 - {y^2}}}{{2{x_b}{x_1}}}. \end{eqnarray} The formulas above are used to calculate the double differential cross sections of the outgoing neutron, proton, deuteron, triton, $^3$He, $\alpha$, and $^5$He in this work. The calculated results are given in Sec. IV. \section{ANALYSIS OF REACTION CHANNELS FOR $p + ^7$Li REACTION}\label{sect3} For the proton induced $^7$Li reaction, the following reaction channels are theoretically open at incident energies $E_p \leq 20$ MeV, according to the reaction threshold energies $E_{th}$: \begin{eqnarray}\label{eq12} p+^7\textmd{Li}\rightarrow ^{8}\textmd{Be}^* \rightarrow \left\{ \begin{array}{llr} (p, \gamma)^{8}\textmd{Be}, ~~~~~Q=+17.255 \textmd{MeV}, ~~~~~E_{th}=0.000 \textmd{MeV}\\ (p, n)^{7}\textmd{Be},~~~~~Q=-1.643 \textmd{MeV}, ~~~~~~E_{th}=1.879 \textmd{MeV}\\ (p, p)^{7}\textmd{Li},~~~~~~~Q=~0.000 \textmd{MeV}, ~~~~~~E_{th}=0.000 \textmd{MeV}\\ (p, \alpha)\alpha, ~~~~~~~~Q=+17.348 \textmd{MeV}, ~~~~E_{th}=0.000 \textmd{MeV}\\ (p, ^3\textmd{He})^{5}\textmd{He}, ~~~Q=-4.125 \textmd{MeV}, ~~~~~E_{th}=4.7175 \textmd{MeV}\\ (p, d)^{6}\textmd{Li}, ~~~~~~~Q=-5.025 \textmd{MeV}, ~~~~~E_{th}=5.7468 \textmd{MeV}\\ (p, t)^{5}\textmd{Li}, ~~~~~~~Q=-4.434 \textmd{MeV}, ~~~~~E_{th}=5.0709 \textmd{MeV}\\ (p, 2n)^{6}\textmd{Be}, ~~~~Q=-12.320 \textmd{MeV}, ~~~~E_{th}=14.0897 \textmd{MeV}\\ (p, np)^{6}\textmd{Li}, ~~~~~~Q=-7.249 \textmd{MeV}, ~~~~E_{th}=8.2903 \textmd{MeV}\\ (p, pn)^{6}\textmd{Li}, ~~~~~~Q=-7.249 \textmd{MeV}, ~~~~E_{th}=8.2903 \textmd{MeV}\\ (p, n\alpha)^{3}\textmd{He}, ~~~~~Q=-3.230 \textmd{MeV}, ~~~~E_{th}=3.694 \textmd{MeV}\\ (p, nd)^{5}\textmd{Li}, ~~~~~~Q=-10.691 \textmd{MeV}, ~~~E_{th}=12.2267 \textmd{MeV}\\ (p, 2p)^{6}\textmd{He}, ~~~~~~Q=-9.974 \textmd{MeV}, ~~~~E_{th}=11.4067 \textmd{MeV}\\ (p, pt)^{4}\textmd{He}, ~~~~~~Q=-2.467 \textmd{MeV}, ~~~~E_{th}=2.8214 \textmd{MeV}\\ (p, tp)^{4}\textmd{He}, ~~~~~~Q=-2.467 \textmd{MeV}, ~~~~E_{th}=2.8214 \textmd{MeV}\\ (p, pd)^{5}\textmd{He}, ~~~~~~Q=-9.619 \textmd{MeV}, ~~~E_{th}=11.0007 \textmd{MeV}\\ (p, dp)^{5}\textmd{He}, ~~~~~~Q=-9.619 \textmd{MeV}, ~~~E_{th}=11.0007 \textmd{MeV}. \end{array} \right. \end{eqnarray} Considering the conservation of energy, angular momentum, and parity in the particle emission processes, the reaction channels of the first particle emission are listed as follows: \begin{eqnarray}\label{eq13} p+^7\textmd{Li}\rightarrow ^{8}\textmd{Be}^* \rightarrow \left\{ \begin{array}{l} n+ ^{7}\textmd{Be}^* ~~(k_1=gs, 1, 2, ...,7),\\ p+ ^{7}\textmd{Li}^* ~~(k_1=gs, 1, 2, ..., 10),\\ \alpha+{\alpha}^* ~~(k_1=gs, 1, 2, ..., 14),\\ ^3\textmd{He}+ ^{5}\textmd{He}^* ~~(k_1=gs, 1),\\ d+ ^6\textmd{Li}^* ~~(k_1=gs, 1, 2, ..., 5),\\ t+ ^5\textmd{Li}^* ~~(k_1=gs,1,2). \end{array} \right.
\end{eqnarray} Here $gs$ and $k_1$ denote the ground state and the $k_1$-th energy level of the first residual nucleus $M_1$, taken from the measurements \cite{Tilley1992,Tilley2002,Tilley2004}, respectively. For the first particle emission channel $^7$Li($p, n$)$^7$Be$^*$, the first residual nucleus $^7$Be$^*$, which can be populated up to its seventh energy level, can still emit a proton, leaving the residual nucleus $^6$Li, or an alpha particle, leaving the residual nucleus $^3$He. Furthermore, the secondary residual nucleus $^6$Li can break up into a deuteron and an alpha particle \cite{Zhang2001Li6} if it is in its first, third, or fourth discrete energy level. Therefore, the first particle emission channel $^7$Li($p, n$)$^7$Be$^*$ can further open the ($p, np$)$^6$Li, ($p, npd\alpha$) and ($p, n\alpha$)$^3$He reaction channels in the final state. For the first particle emission channel $^7$Li($p, p$)$^7$Li$^*$, the first excited level of the residual nucleus $^7$Li cannot emit any particle, so it contributes purely to the inelastic scattering channel. The second and third excited levels of $^7$Li can emit a triton, so they contribute to the ($p, pt\alpha$) reaction channel. If the first residual nucleus $^7$Li$^*$ is at the $k_1$-th ($k_1 \geq 4$) excited energy level, some of these levels can emit a neutron and thus contribute to the $(p, pn)^6$Li reaction channel. Furthermore, the secondary residual nucleus $^6$Li at high excitation energy can break up into $d + \alpha$, so this reaction process belongs to the ($p, pnd\alpha$) reaction channel. If the first residual nucleus $^7$Li$^*$ is at the $k_1$-th ($k_1 \geq 6$) excited energy level, some of these levels can emit a proton or a deuteron, with the corresponding secondary residual nuclei $^6$He$_{gs}$ and $^5$He, respectively. Considering the two-body cluster separation $^5$He $\rightarrow$ $n + \alpha$ \cite{Yan 5He}, these reaction processes belong to the ($p,2p$)$^6$He$_{gs}$ and ($p,pnd\alpha$) reaction channels, respectively. Therefore, the first particle emission channel $^7$Li($p, p$)$^7$Li$^*$ contributes to the ($p, pn$)$^6$Li, ($p, pt\alpha$), ($p, npd\alpha$) and ($p, 2p$)$^6$He$_{gs}$ reaction channels in the final state, besides the elastic and inelastic scattering. For the first particle emission channel $^7$Li($p,d$)$^6$Li$^*$, besides the process $^6$Li$^*$ $\rightarrow$ $d + \alpha$ mentioned above, which belongs to the ($p, 2d\alpha$) reaction channel in the final state, some excited energy levels ($k_1 > 3$) of the first residual nucleus $^6$Li$^*$ can emit a proton, leaving the secondary residual nucleus $^5$He. As mentioned above, $^5$He is unstable and separates spontaneously into a neutron and an alpha particle \cite{Yan 5He}, so the ($p, dp$)$^5$He reaction channel belongs to the ($p, pnd\alpha$) reaction channel in the final state. Considering the two-cluster separation processes $^5$Li $\rightarrow$ $p + \alpha$ and $^5$He$ \rightarrow$ $n + \alpha$, the first particle emission channels ($p, t$)$^5$Li and ($p, $$^3$He)$^5$He belong to the ($p, pt\alpha$) and ($p, n\alpha$)$^3$He reaction channels in the final state, respectively. For the proton induced $^7$Li reaction at $E_p=14$ MeV, the compound nucleus $^8$Be can reach even the twenty-seventh discrete energy level at $28.6$ MeV, in terms of Eq. (\ref{eq4}), so it can emit a neutron, proton, deuteron, triton, or $^3$He, and can break up into two alpha particles.
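As a concrete cross-check of Eqs. (\ref{eq4})-(\ref{eq6}) and of the threshold energies listed in Eq. (\ref{eq12}), the following minimal sketch evaluates the excitation energy of $^8$Be and the channel-opening condition; it uses mass numbers in place of exact masses and a fixed $r_C = 1.3$ fm, so the numbers are only approximate, and the binding energy in the example follows from the $Q$-value of Eq. (\ref{eq12}) via $B_1 = B_p - Q$:
\begin{verbatim}
E2 = 1.44  # e^2 in MeV fm

def coulomb_barrier(Z1, A1, Zb, Ab, r_C=1.3):
    # Eq. (5), with mass numbers approximating the radii.
    return E2 * Z1 * Zb / (r_C * (A1**(1/3) + Ab**(1/3)))

def excitation_energy(Ep, A_T=7, A_C=8, Bp=17.255):
    # Eq. (4) for p + 7Li -> 8Be*, masses approximated by mass numbers.
    return A_T / A_C * Ep + Bp

def channel_open(Ep, B1, Ek1, Z1, A1, Zb, Ab, Bp=17.255, A_T=7, A_C=8):
    # Eq. (6): the channel to the level E_k1 opens above this energy.
    Vc = coulomb_barrier(Z1, A1, Zb, Ab) if Zb > 0 else 0.0
    return Ep > A_C / A_T * (A_C / A1 * Vc + Ek1 + B1 - Bp)

# Example: 7Li(p,t)5Li_gs with B1 = 17.255 + 4.434 MeV.
print(excitation_energy(14.0))  # ~29.5 MeV, above the 27th level of 8Be
print(channel_open(14.0, B1=21.689, Ek1=0.0, Z1=3, A1=5, Zb=1, Ab=3))
\end{verbatim}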
Because of the high excitation energy of the compound nucleus $^8$Be, the alpha particles produced through the two-body breakup process can also be highly excited, in accordance with energy conservation. Thus $^4$He can emit a proton from its $k_1$-th ($k_1\geq 1$) energy level, a neutron from its $k_1$-th ($k_1\geq 2$) energy level, and break up into two deuterons from its $k_1$-th ($k_1\geq 9$) energy level. These reaction processes belong to the ($p, 2\alpha$), ($p, pt\alpha$), ($p,n\alpha$)$^3$He and ($p, 2d\alpha$) reaction channels. Gamma decay competes with particle emission, and the branching ratios can be obtained from the model calculation in STLN. According to the analysis of the reaction channels above, the total spectra are produced by adding all of the partial spectra of the same outgoing particle from every reaction channel. The contributions to the double differential cross sections of the total emitted protons come from the elastic scattering, inelastic scattering, ($p,np$)$^6$Li, ($p,2p$)$^6$He$_{gs}$, ($p, npd\alpha$) and ($p, pt\alpha$) reaction channels. The contributions to the double differential cross sections of the total emitted deuterons come from the ($p,d$)$^6$Li, ($p, npd\alpha$) and ($p, 2d\alpha$) reaction channels. The contributions to the double differential cross sections of the total emitted tritons come only from the $(p,t)^5$Li and ($p, pt\alpha$) reaction channels. The contributions to the double differential cross sections of the total emitted neutrons come only from the ($p,n$)$^7$Be, ($p, n^3$He)$\alpha$, $(p,npd\alpha)$ and $(p,np)^6$Li reaction channels. The contribution to the double differential cross sections of the total emitted $^3$He comes only from the ($p, n^3$He)$\alpha$ reaction channel. In conclusion, for the proton induced $^7$Li reaction, the open reaction channels at incident energies $E_p \leq 20$ MeV are as follows: \begin{eqnarray}\label{eq14} p + {}^7\textmd{Li} \to {}^8\textmd{Be}^* \to \left\{ {\begin{array}{*{20}{l}} {n + {}^7\textmd{Be}^*\left\{ {\begin{array}{*{20}{l}} {k_1 = gs,1~~~~~~~~~~~~~~~~~~~~~~~~~(p,n){}^7\textmd{Be}}\\ {k_1 = 2,7~~~~~~~~~~~~~~~~~~~~~~~~~(p,n{}^3\textmd{He})\alpha }\\ {k_1 = 4,7~~~~~~~~~~~~~~~~~~~~~~~~(p,np){}^6\textmd{Li}_{gs}} \end{array}} \right.}\\ {p + {}^7\textmd{Li}^*\left\{ {\begin{array}{*{20}{l}} {k_1 = gs ~~~~~~~~~~~~~~~~Compound~elastic}\\ {k_1 = 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(p,{p'})}\\ {k_1 = 2,...,10~~(t + \alpha )~~~~~~~~~~~~~~(p,pt\alpha )}\\ {k_1 \ge 6(d + {}^5\textmd{He})~~~~~~~~~~~~~~~~~~(p,npd\alpha )}\\ {k_1 \ge 6(p + {}^6\textmd{He}_{gs})~~~~~~~~~~~~(p,2p){}^6\textmd{He}_{gs}}\\ {k_1 \ge 4(n + {}^6\textmd{Li})~~~~~~~~~~~~~~~~~(p,np){}^6\textmd{Li}} \end{array}} \right.}\\ {\alpha + {\alpha ^*}\left\{ {\begin{array}{*{20}{l}} {k_1 = gs~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(p,2\alpha )}\\ {k_1 = 1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(p,pt\alpha )}\\ {k_1 \ge 2~~~~~~~~~~~~~~~~~~~(p,pt\alpha ),(p,n{}^3\textmd{He}\alpha )}\\ {k_1 \ge 9~~~~~~~~(p,pt\alpha ),(p,n{}^3\textmd{He}\alpha ),(p,2d\alpha )} \end{array}} \right.}\\ {{}^3\textmd{He} + {}^5\textmd{He}~(k_1 = gs,1) \to (n + \alpha )~~~~~~~~(p,n{}^3\textmd{He}\alpha )}\\ {d + {}^6\textmd{Li}^*\left\{ {\begin{array}{*{20}{l}} {k_1 = gs,2~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(p,d)}\\ {k_1 = 1,3,4,5~~~~~~~~~~~~~~~~~~~~~~~~~(p,2d\alpha )}\\ {k_1 = 4,5(p+^5\textmd{He} \to n + \alpha)~~~~(p,npd\alpha)} \end{array}} \right.}\\ {t + {}^5\textmd{Li}~(k_1 = gs,1,2) \to (p + \alpha )~~~~~~~~~~~~~~(p,pt\alpha )}. \end{array}} \right.
\end{eqnarray} \section{THE CALCULATED RESULTS AND ANALYSIS}\label{sect4} The experimental double differential cross sections of protons for the $p + ^7$Li reaction were measured only at the incident proton energy $E_p = 14$ MeV, in 1989 \cite{N. Koori1989}. The experimental double differential cross sections of deuterons and tritons for the $p + ^7$Li reaction were likewise reported in 1991 \cite{N. Koori1991}. The PUNF code, based on STLN, has been developed to calculate the cross sections, elastic angular distributions, and double differential cross sections of outgoing neutrons, protons and light charged particles. In this paper, the calculated double differential cross sections of the total outgoing protons, deuterons and tritons for the $p + ^7$Li reaction are compared with these measurements. The comparisons of the calculated double differential cross sections of the total outgoing protons with the measured data are shown in Figs. \ref{Fig1} - \ref{Fig3} at the incident proton energy $14$ MeV for the outgoing angles $20^\circ, 30^\circ, 40^\circ, 50^\circ, 60^\circ, 70^\circ, 80^\circ, 90^\circ, 100^\circ, 110^\circ, 120^\circ, 130^\circ, 140^\circ, 150^\circ, 160^\circ $ and $165^\circ$, respectively. The black points denote the experimental data derived from Ref. \cite{N. Koori1989}, and the red solid lines denote the calculated total double differential cross sections of the outgoing protons. The calculated results agree well with the measurements, except for some peaks that are contaminated by scattering from $^1$H, $^{12}$C and $^{16}$O, as explained in Refs. \cite{H.R.Guo2019,N. Koori1989}. Fig. \ref{Fig_Par_P1} shows the partial double differential cross sections of the outgoing protons from the reaction channel $^7$Li($p,{p'}$)$^7$Li at the outgoing angle $60^\circ$ and $E_p = 14$ MeV in the LS. The black solid lines denote the partial spectra of the first outgoing proton from the compound nucleus to the ground state and up to the eighth excited energy level of the first residual nucleus $^7$Li, as labeled in Fig. \ref{Fig_Par_P1}. In this paper, only the cross sections with values larger than 10$^{-3}$ mb are given. Fig. \ref{Fig_Par_P2} shows the partial spectra of the secondary outgoing protons from the third to sixth excited energy levels of $^7$Be to the ground state of $^6$Li, and from the fifth excited energy level of $^7$Be to the first excited energy level of $^6$Li, for the $^7$Li($p, np$)$^6$Li reaction channel, labeled by the blue dashed lines. The green dotted lines denote the partial spectra of the secondary outgoing protons from the fourth and fifth excited energy levels of $^6$Li to the ground state of $^5$He for the $^7$Li($p, dp$)$^5$He reaction channel. The orange dash-dotted lines denote the partial spectra of the secondary outgoing protons from the ground state and the first excited energy level of $^5$Li to the ground state of $^4$He for the $^7$Li($p, tp$)$^4$He reaction channel. Fig. \ref{Fig_Par_P3} shows the partial spectra of the secondary outgoing protons from the first to 10th excited energy levels of $^4$He, which breaks up into $p$ and $t$, for the $^7$Li($p, \alpha p)t$ reaction channel, labeled by the black solid lines. From Figs. \ref{Fig_Par_P1} - \ref{Fig_Par_P3}, one can see that the contribution of the secondary outgoing protons is far smaller than that of the first outgoing protons, as shown in Ref. \cite{H.R.Guo2019}.
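Schematically, each partial spectrum is a peak centered at the outgoing energy fixed by Eq. (\ref{eq3}), broadened by the level width and the energy resolution through the Gaussian expansion mentioned in Sec. \ref{sect2.1}. The sketch below illustrates this construction only; it is not the PUNF implementation, and all peak positions, widths and weights are hypothetical placeholders:
\begin{verbatim}
import numpy as np

def partial_spectrum(E, E_peak, gamma_level, sigma_res, weight):
    # One discrete-level peak: a normalized Gaussian whose width combines
    # the level width with the experimental energy resolution.
    sigma = np.hypot(gamma_level, sigma_res)
    return (weight * np.exp(-(E - E_peak)**2 / (2.0 * sigma**2))
            / (np.sqrt(2.0 * np.pi) * sigma))

E = np.linspace(0.0, 14.0, 1401)   # outgoing-energy grid (MeV)
# Hypothetical (position, level width, weight) triples for a few levels:
levels = [(11.2, 0.10, 3.0), (8.9, 0.25, 1.5), (6.1, 0.40, 0.8)]
total = sum(partial_spectrum(E, E0, g, 0.15, w) for E0, g, w in levels)
\end{verbatim}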
The calculated double differential cross sections of the total outgoing deuterons for the $p + ^7$Li reaction at $14$ MeV are compared with the experimental data at outgoing angles of $10^\circ, 20^\circ, 30^\circ, 40^\circ, 50^\circ, 60^\circ, 70^\circ, 80^\circ, 90^\circ, 100^\circ, 110^\circ, 120^\circ, 130^\circ, 140^\circ, 150^\circ, 160^\circ$ and $165^\circ$, as shown in Figs. \ref{Fig4} - \ref{Fig6}. The black points denote the experimental data derived from Ref. \cite{N. Koori1991}, and the red solid lines denote the calculated total double differential cross sections of the outgoing deuterons. One can see that the calculated results agree well with the measurements. Fig. \ref{Fig_Par_D1} shows the partial double differential cross sections of the outgoing deuterons from the reaction channels $^7$Li($p,d$)$^6$Li, $^7$Li($p,pd$)$^5$He, $^7$Li($p,dd$)$^4$He and $^7$Li($p,npd\alpha$) at the outgoing angle $60^\circ$ and $E_p = 14$ MeV in the LS. In Fig. \ref{Fig_Par_D1}, the black solid lines denote the partial spectra of the first outgoing deuteron from the compound nucleus to the ground state and up to the fifth excited energy level of the first residual nucleus $^6$Li for the $^7$Li$(p, d)^6$Li reaction channel. The blue dashed line denotes the contribution of the secondary outgoing deuterons from the eighth excited energy level of $^7$Li to the ground state of $^5$He for the $^7$Li$(p, pd)^5$He reaction channel. The orange dotted lines denote the contributions of the secondary outgoing deuterons from the first, third, fourth and fifth excited energy levels of $^6$Li to the ground state of $^4$He for the $^7$Li$(p, dd)^4$He reaction channel. The magenta dash-dotted line denotes the contribution of the secondary outgoing deuterons for the reaction channel ($p, np$)$^6$Li $\rightarrow$ ($p, np+d\alpha$) from the fifth excited energy level of $^7$Be to the first excited energy level of $^6$Li, which can break up into $d + \alpha$. The green dash-dotted lines denote the contributions of the secondary outgoing deuterons for the reaction channel ($p, pn$)$^6$Li $\rightarrow$ ($p, pn+d\alpha$) from the seventh and eighth excited energy levels of $^7$Li to the first excited energy level of $^6$Li. Fig. \ref{Fig_Par_D2} shows the partial double differential cross sections of the outgoing deuterons from the reaction channel $^7$Li($p,\alpha d$)$d$ at the outgoing angle $60^\circ$ and $E_p = 14$ MeV in the LS. In Fig. \ref{Fig_Par_D2}, the black solid lines denote the secondary outgoing deuterons from the ninth to $13$th excited energy levels of $^4$He, which can further break up into two deuterons. The calculated double differential cross sections of the total outgoing tritons for the $p + ^7$Li reaction at $14$ MeV are compared with the experimental data at outgoing angles of $10^\circ, 20^\circ, 30^\circ, 40^\circ, 50^\circ, 60^\circ, 70^\circ, 80^\circ, 90^\circ, 100^\circ, 110^\circ, 120^\circ, 130^\circ, 140^\circ, 150^\circ, 160^\circ$ and $165^\circ$, as shown in Figs. \ref{Fig7} - \ref{Fig9}. The black points denote the experimental data derived from Ref. \cite{N. Koori1991}, and the red solid lines denote the calculated total double differential cross sections of the outgoing tritons. One can see that the calculated results agree well with the measurements. Fig. \ref{Fig_Par_T1} shows the partial double differential cross sections of the outgoing tritons from the reaction channels $^7$Li($p,t$)$^5$Li and $^7$Li($p,pt$)$^4$He at the outgoing angle $60^\circ$ and $E_p = 14$ MeV in the LS.
In Fig. \ref{Fig_Par_T1}, the blue dashed lines denote the partial spectra of the first outgoing triton from the compound nucleus to the ground state and the first excited energy level of $^5$Li for the $^7$Li$(p, t)^5$Li reaction channel. The black solid lines denote the contributions of the secondary outgoing tritons from the second to eighth excited energy levels of $^7$Li to the ground state of $^4$He for the $^7$Li$(p, pt)^4$He reaction channel. Fig. \ref{Fig_Par_T2} shows the partial double differential cross sections of the secondary outgoing tritons from the reaction channel $^7$Li($p,\alpha p$)$t$ at the outgoing angle $60^\circ$ and $E_p = 14$ MeV in the LS. In Fig. \ref{Fig_Par_T2}, the black solid lines denote the partial spectra of the secondary outgoing tritons from the first to 10th excited energy levels of $^4$He, which can break up into $p$ and $t$. As shown in Figs. \ref{Fig_Par_P3} and \ref{Fig_Par_T2}, some partial spectra exhibit wave-like forms because the corresponding Gaussian expansion coefficients are very small. \section{SUMMARY AND CONCLUSION} \label{sect5} Based on the unified Hauser-Feshbach and exciton model \cite{Zhang1993}, which can describe the nuclear reaction emission processes between the discrete levels with energy, angular momentum, and parity conservation, STLN has been applied successfully to calculate the double differential cross sections of outgoing neutrons for neutron and proton induced reactions involving $1p$-shell nuclei. In this paper, STLN has been improved to calculate the double differential cross sections of the outgoing neutron, proton, deuteron, triton, $^3$He, and alpha particle for the proton induced $^7$Li nuclear reaction. The calculated results reproduce very well the existing measurements of the outgoing protons, deuterons and tritons. The model calculation for the $p + ^7$Li reaction indicates that the pre-equilibrium emission process is the dominant reaction mechanism in 1$p$-shell light nucleus reactions. Due to the small masses of the $1p$-shell nuclei, the recoil effects in the various emission processes are strictly taken into account. The calculated results indicate that the double differential cross sections of the outgoing particles are very sensitive to the energies, spins and parities of the discrete energy levels of both the target nucleus and the corresponding residual nuclei. Furthermore, the complete nuclear reaction data in ENDF-6 format for $p + ^7$Li can be obtained with the PUNF code on the basis of STLN. \textbf{Acknowledgements} This work is supported by the Natural Science Foundation of Guangxi (No. 2019GXNSFDA185011), the National Natural Science Foundation of China (No. 11465005), and the Foundation of Guangxi innovative team and distinguished scholar in institutions of higher education (No. 2017GXNSFGA198001).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{\label{intro}Introduction} Progress in solving string theory in various backgrounds can be made by considering sectors of the theory that decouple from the rest of the degrees of freedom in a suitable limit. Such decoupled sectors are characterized by having an altogether different asymptotic symmetry compared to that of the parent string theory. A well-known example of such a truncation is the BMN \cite{Berenstein:2002jq} sector of string theory in AdS$_5\times S^5$. Once a consistent sector is found, a complete worldsheet theory with the appropriate symmetries can be written down without further reference to the parent theory. Non-relativistic string theory \cite{jGom_Oog} (see also \cite{DanGuiKru}) in flat space is another consistent sector of string theory, whose world-sheet conformal field theory description has the appropriate Galilean symmetry \cite{Brugues:2004an}. Non-relativistic superstrings and non-relativistic superbranes \cite{JGom_Kam_Town,Garcia:2002fa} are obtained as a certain decoupling limit of the full relativistic theory. The basic idea behind the decoupling limit is to take a particular non-relativistic limit in such a way that the light states satisfy a Galilean-invariant dispersion relation, while the rest decouple. For the case of strings, this can be accomplished by considering wound strings in the presence of a background $B$-field and tuning the $B$-field so that the energy coming from the $B$-field cancels the tension of the string. Once kappa symmetry and diffeomorphism invariance are fixed, non-relativistic strings in flat space are described by a free field theory. In AdS$_5\times S^5$ \cite{GGK_AdS5XS5}, the world-sheet theory reduces to a supersymmetric free field theory in AdS$_2$. It is an interesting question whether similar non-relativistic string actions can be constructed in an expanding spacetime and if so, whether non-relativistic strings could play a cosmological role in the form of cosmic strings. The study of cosmic strings\footnote{The dynamics of cosmic strings can be described by considering perturbations around a static solitonic string solution. Keeping all orders in the perturbations results in a relativistic effective string action, while keeping only up to quadratic order gives rise to the non-relativistic string action we will consider in this paper \cite{JGom_Kam_Town}.} has been catalysed in the past few years mainly due to theoretical motivations, in particular the realisation that they are generic in Supersymmetric Grand Unified Theory (SUSY GUT) models \cite{Jeannerot} and brane inflation \cite{BMNQRZ,SarTye}. The latter possibility is of particular significance as it provides a potential observational window to superstring physics \cite{PolchStab,PolchIntro}. Further, the fact that the Planck satellite and laser interferometers such as LISA and LIGO may be able to probe a significant part of cosmic string tensions relevant to these models \cite{DamVil} opens the possibility of detecting cosmic strings in the foreseeable future. One can think of situations in which ordinary cosmic strings could behave non-relativistically.
Network simulations in a matter or radiation dominated universe \cite{All_Shell} suggest that, at late times, string segments move relatively slowly and coherently on the largest scales, but also show evidence that small-scale structure \cite{Mart_Shell_sss,Polch_Rocha}, which is largely responsible for damping energy from the network through the formation of minuscule loops, remains relativistic as Hubble damping is inefficient at scales much smaller than the horizon \cite{book}. However, the situation is different for strings in de Sitter spacetime, where Hubble damping can be very efficient, rendering the strings essentially non-relativistic. This may be relevant for late time cosmology as observations \cite{Perlmutter,WMAP3} suggest that the universe is already entering a de Sitter phase. Further, non-relativistic string networks have been considered as Solid Dark Matter (SDM) \cite{BuchSper,BatCarChMo} and more recently \cite{Alexand} as an alternative explanation of galactic rotation curves. It would thus be desirable to have an effective diffeomorphism invariant action\footnote{Note that Ref.~\cite{BuchSper} considers an action applicable to a `continuous medium' with internal structure, which is invariant under limited reparametrisations preserving the worldlines of the constituent particles. Here we consider a \emph{diffeomorphism invariant} non-relativistic action.} describing the dynamics of non-relativistic strings in a cosmological context. On the other hand, the fact that one can construct a consistent worldsheet theory of non-relativistic strings at the quantum level (in flat space) also motivates the study of \emph{fundamental} non-relativistic strings in an expanding spacetime. In this paper, we point out that a non-relativistic diffeomorphism invariant action can be obtained in the case of a Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) spacetime as a limit of the relativistic Nambu-Goto action, and study the dynamics and cosmological evolution of non-relativistic strings. The structure of the paper is as follows. In section \ref{rel_string} we review some basic results about the Nambu-Goto action and the dynamics of relativistic strings in an expanding universe, which will be useful for comparison to the non-relativistic case. In section \ref{non_rel_lim} we obtain a non-relativistic diffeomorphism invariant worldsheet action by taking a particular limit of the Nambu-Goto action in expanding spacetime. We move on in section \ref{NR_dynamics} to study non-relativistic string dynamics as described by this action. The physical interpretation of non-relativistic strings, as well as their possible coupling to cosmology, in particular the effective equation of state of an `ideal gas' of non-interacting, non-relativistic strings, are discussed in section \ref{physical}. The effect of string interactions is left for section \ref{VOS}, where macroscopic models for the cosmological evolution of both relativistic and non-relativistic string networks are discussed. In section \ref{RelVnonRel} we solve numerically the non-relativistic network model for a wide range of parameters and compare the results to those of the relativistic model. In section \ref{discuss} we discuss possible applications of non-relativistic strings in condensed matter and cosmological contexts.
Finally, we have three appendices which describe an alternative derivation of our non-relativistic string action as a semiclassical expansion \cite{semiclass} around the vacuum solution (appendix \ref{semicl}), the Hamiltonian formulation of relativistic and non-relativistic string dynamics (appendix \ref{hamiltonian}) and the construction of a spacetime energy-momentum tensor for the non-relativistic string (appendix \ref{NR_EM_tensor}). \section{\label{rel_string} Relativistic String in Expanding Spacetime}\setcounter{equation}{0} Let us first consider a string moving in a $D+1$ dimensional spacetime with metric $G_{MN}\, \,(M,N=0,1,2,\ldots,D)$. Its world history is described by a two-dimensional spacetime surface, the string worldsheet $x^M=x^M(\sigma^i)$, $i=0,1$. The dynamics is governed by the Nambu-Goto action \begin{equation}\label{nambu} S_{NG}=-T_R \! \int \! \sqrt{-\gamma}\, d^2\sigma \, , \end{equation} where $T_R$ is the string tension and $\gamma$ is the determinant of the pullback of the background metric on the worldsheet, $\gamma_{ij}=G_{MN}(x)\partial_i x^M \partial_j x^N$. The equations of motion for the fields $x^M$ obtained from this action are given by \begin{equation}\label{eom} \nabla^2 x^M + \Gamma^M_{N\Lambda} \gamma^{ij} \partial_i x^N \partial_j x^\Lambda = 0 \, , \end{equation} where $\Gamma^M_{N\Lambda}$ is the ($D+1$)-dimensional Christoffel symbol and $\nabla^2 x^M$ the covariant Laplacian of the worldsheet fields $x^M$. By varying the action with respect to the background metric $G_{MN}$ we obtain a spacetime energy-momentum tensor \begin{equation}\label{emt} T^{MN}(y^\Lambda)=\frac{1}{\sqrt{-G}}\,T_R \! \int \! d^2\sigma \sqrt{-\gamma} \gamma^{ij}\partial_i x^M \partial_j x^N \, \delta^{(D+1)}(y^\Lambda-x^\Lambda(\sigma^i))\,. \end{equation} The rigid symmetries of (\ref{nambu}) are given by the Killing vectors of $G_{MN}$. The Nambu-Goto action is also invariant under 2D diffeomorphisms of the worldsheet coordinates $\sigma^i$. One can use this freedom to fix the gauge by imposing two conditions on $x^M(\sigma^i)$. Now consider string propagation in an expanding Universe described by a flat FLRW metric \begin{equation}\label{FLRW_metric} G_{MN} = a(x^0)^2 \eta_{MN} \end{equation} in conformal time $x^0\!\equiv\!\eta$. A convenient gauge choice in this case is the \emph{transverse temporal gauge} given by: \begin{equation}\label{trans_temp} \left\{ \begin{array}{l} \dot {\bf x} \cdot {\bf x}^\prime = 0 \\ \tau=x^0 \end{array} \right. \end{equation} The equations of motion (\ref{eom}) become~\cite{Tur_Bhatt}: \begin{eqnarray} \left\{ \begin{array}{l} \dot\epsilon=-2\frac{\dot a}{a} \epsilon \dot{\bf x}^2 \label{eom_eps} \\ \ddot {\bf x} + 2\frac{\dot a}{a} \left( 1 - \dot{\bf x}^2 \right) \dot {\bf x} = {\left(\frac{{\bf x}^{\prime}}{\epsilon}\right)}^{\prime} \epsilon^{-1} \label{eom_x} \end{array} \right. \end{eqnarray} where \begin{equation}\label{epsilon} \epsilon = \frac{ {-x^\prime}^2 }{ \sqrt{-\gamma} } = \left( \frac{ {{\bf x}^\prime }^2 } { 1-\dot{\bf x}^2 } \right)^{1/2}\, . \end{equation} The variable $\epsilon$ is related to the canonical momentum associated to the field $x^0(\tau)$. Indeed, in the transverse gauge $\gamma_{01}\!\equiv\!\gamma_{\tau\sigma}\!=\!0$, we have: \begin{equation}\label{p_0_Lagr} p_0=-\frac{T_R}{2} \sqrt{-\gamma} \gamma^{ij} \frac{\partial \gamma_{ij}}{\partial\left(\partial_\tau x^0\right)}= - T_R a(x^0)^2 \epsilon \dot x^0 \, , \end{equation} which, after imposing the temporal gauge condition $\dot x^0=1$, becomes $-T_R a(x^0)^2 \epsilon$.
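As a quick consistency check of (\ref{eom_eps})--(\ref{eom_x}), consider flat spacetime, $\dot a=0$ (a standard result, recorded here for orientation). Then $\dot\epsilon=0$, and choosing the $\sigma$ parametrisation such that $\epsilon=1$ reduces the system to the familiar flat-space wave equation with its gauge constraints,
\begin{equation*}
\ddot {\bf x} = {\bf x}^{\prime\prime} \, , \qquad \dot{\bf x}\cdot{\bf x}^\prime=0 \, , \qquad \dot{\bf x}^2+{\bf x}^{\prime 2}=1 \, ,
\end{equation*}
the last condition following directly from $\epsilon=\left({\bf x}^{\prime 2}/(1-\dot{\bf x}^2)\right)^{1/2}=1$.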
In the transverse temporal gauge, the energy-momentum tensor (\ref{emt}) of a relativistic string in a FLRW background is \cite{book}: \begin{equation}\label{emt_trans_temp} T^{MN}(\eta,y^I)=\frac{1}{a(\eta)^{D+1}}\,T_R \! \int \! d\sigma \left( \epsilon \dot x^M \dot x^N -\epsilon^{-1} x^{\prime M} x^{\prime N} \right) \, \delta^{(D)}(y^I-x^I(\sigma,\eta))\, , \end{equation} where, having integrated out $\delta(\eta-x^0(\tau))$, $I$ runs from $1$ to $D$. To construct the string energy one projects $T^{MN}$ on a spatial hypersurface $\eta\!=\!{\rm const}$, with induced metric $h$ and normal covector $n_M\!=\!\left(a(\eta),{\bf 0}\right)$, integrating over the $D$ spatial coordinates: \begin{eqnarray}\label{energy} E(\eta) &=& - \! \int \! \sqrt{h} n_M n_N T^{MN} d^D{\bf y} \nonumber \\ &=& \! \int \! \sqrt{h} T^0_{\; 0} d^D{\bf y} \, . \end{eqnarray} Thus, due to the foliation, the energy can be constructed from the $00$ component of the energy momentum tensor. Since $\sqrt{h}= a(\eta)^D$, equation (\ref{energy}) becomes: \begin{equation}\label{string_energy} E(\eta) = \! \int \! T^0_{\;0} a(\eta)^D d^D{\bf y} = a(\eta) T_R \! \int \! \epsilon \, d\sigma \, . \end{equation} Therefore, the energy of the relativistic string is simply the tension times the {\it physical} string length, taking into account relativistic length contraction. \section{\label{non_rel_lim}Non-Relativistic Limit of Nambu-Goto Action in FLRW}\setcounter{equation}{0} Now consider a string, charged under a background antisymmetric 2-tensor field $B_{MN}$, propagating in a $(D+1)$-dimensional FLRW spacetime. The string couples to $B$ through a topological Wess-Zumino term, so that the total action reads: \begin{equation}\label{total_action} S=S_{NG}+S_{WZ}=-T_R \! \int \! \sqrt{-\gamma}\, d^2\sigma + q \! \int \! B^* \, , \end{equation} where $q$ is the string charge and $B^*$ the pullback of $B$ on the worldsheet. We consider a relativistic string aligned in the $x^0$, $x^1$ directions, its transverse coordinates being $X^a$ with $a=2,3,\ldots,D$. The non-relativistic limit \cite{jGom_Oog, JGom_Kam_Town, DanGuiKru} of this string consists of rescaling the longitudinal coordinates \begin{equation}\label{rescale} x^\mu \rightarrow \omega x^\mu\,, \quad \mu=0,1 \end{equation} and taking the limit $\omega \rightarrow \infty$. This yields a divergent term, coming from $S_{NG}$, which (in some geometries) can be canceled by an appropriate choice of a closed $B_{MN}$. If we assume that the string is wrapped on a spatial circle \begin{equation}\label{circle} x^1 \sim x^1 + 2\pi R \end{equation} then the chosen $B_{01}$ cannot be set to zero by a gauge transformation. The above procedure generally works in flat spacetime (in fact one only needs the longitudinal part of the metric to be flat \cite{JGom_Kam_Town}), but for a curved background it is not guaranteed that there is a choice of closed $B$ which cancels the diverging piece of the action. Non-relativistic superstring actions have been obtained in the case of AdS$_5\times S^5$ \cite{GGK_AdS5XS5}. We will now see that the non-relativistic limit can also be taken in the case of a FLRW background.
We write the Lagrangian density of the Nambu-Goto piece as: \begin{eqnarray} {\cal{L}}_{NG} &=& -T_R \sqrt{-{\rm det}[g_{ij}+G_{ab}(\eta)\partial_i X^a \partial_j X^b]} \nonumber \\ &=&-T_R \sqrt{-{\rm det}g_{ij}}\sqrt{{\rm det}[\delta^i_j+g^{ik} G_{ab}(\eta)\partial_k X^a \partial_j X^b]} \label{L_NG} \, , \end{eqnarray} where $g_{ij}=G_{\mu\nu}\partial_i x^\mu \partial_j x^\nu$ and $G_{ab}(\eta)= a(\eta)^2 \delta_{ab}$. Then, assuming a power law expansion $a(\eta)\!=\!\eta^{\alpha/2}$ (for example, $\alpha=2$ in the radiation era and $\alpha=4$ in the matter era) we obtain the non-relativistic limit of $S_{NG}$ by the rescaling (\ref{rescale}), which implies \begin{equation} a(\eta) \rightarrow \omega^{\alpha/2} a(\eta)\label{a_rescale} \, . \end{equation} Expanding the Lagrangian density in powers of the parameter $\omega$ we then obtain: \begin{equation}\label{L_NG_expand} {\cal{L}}_{NG} = -T_R \omega^\alpha \left\{ \omega^2 \sqrt{-{\rm det}g} + \frac{1}{2}\sqrt{-{\rm det}g} g^{ij}G_{ab}(\eta)\partial_i X^a \partial_j X^b + {\cal O}\left(\frac{1}{\omega^2}\right)\right\} \, . \end{equation} We can then rescale the string tension by \begin{equation}\label{T_rescale} T_R\, \omega^\alpha \rightarrow T_0 \end{equation} and take the limit $\omega \rightarrow \infty$, yielding a finite and a divergent piece: \begin{eqnarray} {\cal L}_{\rm reg}&=&-\frac{T_0}{2} \sqrt{-{\rm det}g} g^{ij} G_{ab}(\eta)\partial_i X^a \partial_j X^b \label{L_reg} \\ {\cal L}_{\rm div}&=&-T_0\omega^2\sqrt{-{\rm det}g} =-T_0\omega^2 a(\eta)^2 \sqrt{-{\rm det}(\eta_{\mu\nu}\partial_i x^\mu \partial_j x^\nu)} \label{L_div} \, . \end{eqnarray} The divergent piece can be canceled by choosing an appropriate closed $B_{\mu\nu}$. Indeed if we choose\footnote{The chosen $B_{\mu\nu}$ is closed. Working in zweibeins $e^\mu$ we have: ${\rm d}B = \frac{1}{2}{\rm d}[a(x^0)^2\epsilon_{\mu\nu}e^\mu \wedge e^\nu] = {\rm d} [ a(x^0)^2 e^0 \wedge e^1 ] = 2a \dot a \, e^0 \wedge e^0 \wedge e^1 + a^2 ({\rm d}e^0 \wedge e^1 + e^0 \wedge {\rm d}e^1 ) = a^2 (-w^{01} \wedge e^1 \wedge e^1 ) - a^2 e^0 \wedge w^{10} \wedge e^0 = 0$ where we have used ${\rm d}e + w\wedge e=0$, Cartan's structure equation with zero torsion.} $B_{\mu\nu}=a(\eta)^2 \epsilon_{\mu\nu}$ the Wess-Zumino part of the Lagrangian becomes: \begin{equation}\label{L_WZ} \frac{1}{2} \omega^2 (q\omega^\alpha) a(\eta)^2 \epsilon_{\mu\nu} \epsilon^{ij} \partial_i x^\mu \partial_j x^\nu . \end{equation} This term precisely cancels the divergent piece (\ref{L_div}) if one tunes the rescaled charge $(q\omega^\alpha)$ to the string tension $T_0$. We are thus left with the non-relativistic string action: \begin{equation}\label{S_NR} S_{NR}=-\frac{T_0}{2} \! \int \! \sqrt{-{\rm det}g} g^{ij}G_{ab}(\eta)\partial_i X^a \partial_j X^b d^2\sigma \, . \end{equation} This action can also be derived by a `semiclassical approximation' \cite{semiclass} from the classical solution: \begin{equation} x_0^M = \left\{ \begin{array}{cll} \tau &,& M=0 \\ \lambda\sigma &,& M=1 \\ 0 &,& M=a\in (2,\ldots,D) \end{array} \right. \end{equation} (see Appendix~\ref{semicl} for details). \section{\label{NR_dynamics}Non-Relativistic String Dynamics}\setcounter{equation}{0} The action (\ref{S_NR}) is characterised by 2D diffeomorphism invariance with respect to the worldsheet coordinates $\sigma^i$ and global Galilei invariance (modulo time translations due to the time dependence of the metric) with respect to the transverse spacetime coordinates $X^a$.
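As a simple check of (\ref{S_NR}), set $a\equiv 1$ (flat space) and anticipate the static gauge $x^0=\tau$, $x^1=\lambda\sigma$ introduced below. Then $g_{ij}={\rm diag}(-1,\lambda^2)$, $\sqrt{-{\rm det}g}=\lambda$ and $G_{ab}=\delta_{ab}$, so the action reduces to
\begin{equation*}
S_{NR}=\frac{\lambda T_0}{2} \! \int \! \left( \dot X^a \dot X^a - \lambda^{-2} X^{\prime a} X^{\prime a} \right) d^2\sigma \, ,
\end{equation*}
a free field theory for the transverse coordinates, in agreement with the flat-space results of \cite{jGom_Oog}.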
The canonical variables satisfy two primary constraints: \begin{eqnarray} && p_\mu \epsilon^{\mu\rho} \eta_{\rho\nu} x^{\prime\nu} + \frac{1}{2} \left(\frac{P_aP_b}{T_0}G^{ab}(x^0) + T_0 X^{\prime a}X^{\prime b} G_{ab}(x^0) \right) = 0 \label{energy_constr} \\ && p_\mu x^{\prime\mu} + P_a X^{\prime a} = 0 \, , \label{pxp_constr} \end{eqnarray} where $p_\mu$, $P_a$ are the canonical momenta corresponding to $x^\mu$ and $X^a$. Varying the action with respect to the transverse and longitudinal fields $X^a$ and $x^\mu$ respectively, one obtains the equations of motion: \begin{eqnarray} \begin{array}{l} \partial_i\left(\sqrt{-g}g^{ij}\partial_j X^a\right)+\sqrt{-g}g^{ij} \Gamma^a_{bc} \partial_i X^b \partial_j X^c + \sqrt{-g}g^{ij} \partial_i X^b \partial_j x^0 \, (\partial_0 G_{bc}) \, G^{ca} = 0 \end{array} \nonumber \\ && \label{eom_trans} \\ \begin{array}{rl} \partial_i\left[\sqrt{-g}\partial_kx^\nu\eta_{\mu\nu} a(x^0)^2 \left( g^{ik}g^{mn}-2g^{im}g^{kn} \right) \right. \partial_m & \left. \!\! X^a \, \partial_n X^b \, G_{ab}(x^0) \right] \\ & = \sqrt{-g} g^{mn} \partial_m X^a \partial_n X^b \frac{\partial G_{ab}(x^0)} {\partial x^\mu} \end{array} \nonumber \\ && \label{eom_long} \end{eqnarray} subject to the boundary condition (\ref{circle}). For the metric (\ref{FLRW_metric}) the Christoffel symbols $\Gamma^a_{bc}$ vanish and the transverse equations of motion (\ref{eom_trans}) relate the covariant divergence of the transverse fields $X^a$ to the time derivative of the transverse metric $G_{ab}$. We can use the 2D reparametrisation invariance of the action to fix the gauge. For our discussion it will be convenient to work in the \emph{static gauge} \begin{equation}\label{nonrel_gauge} x^0-\tau=0 \, , \quad x^1-\lambda\sigma=0 \, , \end{equation} identifying worldsheet and background times, while allowing for multiple windings of the non-relativistic string. Indeed, defining $\sigma\in [0,2\pi)$, the gauge condition (\ref{nonrel_gauge}) together with the periodicity (\ref{circle}) requires that \begin{equation}\label{lambda} \lambda=nR \, , \end{equation} where $n$ is the string winding number. After fixing the gauge, the physical degrees of freedom of the non-relativistic string are the transverse coordinates $X^a$ and the corresponding momenta $P_a$. The equation of motion (\ref{eom_trans}) becomes: \begin{equation}\label{eom_gauge} \ddot X^a = -2\frac{\dot a}{a} \dot X^a + \lambda^{-2} X^{\prime\prime a} \, , \end{equation} which is the wave equation with a cosmological damping term $-2\frac{\dot a}{a} \dot X^a$. This equation (for $\lambda\!=\!1$) has been used by Vilenkin \cite{Vil_CS} to describe small perturbations around a straight cosmic string, and was obtained by taking the limit $\dot X^2\!\ll\! 1$, $X^{\prime 2}\!\ll\! 1$ of the relativistic equations of motion in the static gauge. Here, the winding factor $\lambda$ also enters. One might be tempted to say that, for $\dot a/a=0$, equation (\ref{eom_gauge}) implies a wave propagation velocity \begin{equation}\label{v_0} v_0^2=\lambda^{-2} \, . \end{equation} However, one should remember that the physical coordinates are not $\sigma,\tau$ but rather $x^1\!=\!\lambda \sigma,\, x^0=\tau$, so rewriting (\ref{eom_gauge}) in terms of the physical variables we get (in the case $\dot a/a=0$) \begin{equation}\label{eom_phys} \partial_{x^0}^2 X^a = \partial_{x^1}^2 X^a \, , \end{equation} which describes a wave propagating at the velocity of light. The non-relativistic string thus allows the propagation of waves along the longitudinal direction at the speed of light.
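To illustrate the Hubble damping in (\ref{eom_gauge}), the following minimal numerical sketch (our own illustration; the radiation-era choice $a(\eta)=\eta$, the grid and the initial data are assumptions, not values taken from this paper) evolves one transverse component on a periodic $\sigma$ grid:
\begin{verbatim}
import numpy as np

# Minimal sketch (illustrative): evolve one transverse component of
#   X_tt = -2 (a'/a) X_t + lam**-2 X_ss        [eq. (eom_gauge)]
# on a periodic sigma grid, with radiation-era scale factor a(eta) = eta.
N, lam = 256, 1.0
dsig = 2.0 * np.pi / N
sigma = dsig * np.arange(N)
eta, deta = 1.0, 1.0e-3

X = 0.1 * np.sin(sigma)        # small initial transverse perturbation
V = np.zeros(N)                # dX/deta

def lap(f):                    # periodic second derivative in sigma
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dsig**2

for _ in range(20000):
    H = 1.0 / eta              # a'/a for a(eta) = eta
    V += deta * (lap(X) / lam**2 - 2.0 * H * V)
    X += deta * V              # semi-implicit Euler step
    eta += deta

print("rms transverse amplitude at eta = %.1f : %.4f"
      % (eta, np.sqrt(np.mean(X**2))))
\end{verbatim}
The oscillation amplitude decays as the universe expands, in line with the damping term above.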
However, the transverse velocities are not restricted, in contrast to the case of the relativistic string. An `energy' \begin{equation}\label{P_0} {\cal P}_0=\frac{1}{2\lambda} \! \int \! d\sigma \left(\frac{P_aP_b}{T_0} G^{ab}(x^0) + T_0 X^{\prime a}X^{\prime b} G_{ab}(x^0) \right) \end{equation} can be obtained from the constraint (\ref{energy_constr}), which in the gauge (\ref{nonrel_gauge}) becomes: \begin{equation}\label{P_0_gauge} {\cal P}_0 = \frac{1}{2} \! \int \! d\sigma \left( \lambda T_0 \dot X^a \dot X^b + \lambda^{-1} T_0 X^{\prime a}X^{\prime b} \right) G_{ab}(x^0) \, . \end{equation} This can be interpreted as the sum of the kinetic and potential energies of transverse excitations along the string. The actual string energy, obtained by integrating the projection of the energy-momentum tensor on a constant $x^0$ hypersurface is (see appendix \ref{NR_EM_tensor}): \begin{equation}\label{E_nonrel} E(x^0)=a(x^0)^{-1}\,{\cal P}_0 \, . \end{equation} Since $x^0$-translation is not an isometry of (\ref{FLRW_metric}) the Lagrangian is not time-translationally invariant and $p_0$ is not conserved. In fact, its time evolution can be found from the longitudinal equations of motion (\ref{eom_long}). In the gauge (\ref{nonrel_gauge}) the $\mu=0$ component of (\ref{eom_long}) becomes: \begin{equation}\label{p_0_dot} \frac{1}{2} \left(\dot X^a \dot X_a + \lambda^{-2} X^{\prime a} X^{\prime}_a \right)\dot{} = \lambda^{-2}\left(X^{\prime a} \dot X_a\right)^\prime + \frac{\dot a}{a} \left( \lambda^{-2} X^{ \prime a} X^{\prime}_a - \dot X^a \dot X_a \right) \, . \end{equation} Integrating we obtain: \begin{equation}\label{P_0_dot} \dot{\cal P}_0 = a \dot a \lambda T_0 \!\int\! d\sigma \left( \lambda^{-2} X^{ \prime a} X^{\prime b} - \dot X^a \dot X^b \right)\delta_{ab} \, , \end{equation} where the boundary term gives no contribution due to the periodicity condition (\ref{circle}). Similarly, from the constraint (\ref{pxp_constr}) we define the momentum ${\cal P}_1$ along the string \begin{equation}\label{P_1} {\cal P}_1=-\frac{1}{\lambda} \! \int \! P_a X^{\prime a} d \sigma = - T_0 \! \int \! \dot X_a X^{\prime a} d\sigma \end{equation} in the gauge (\ref{nonrel_gauge}). Translational invariance then dictates that ${\cal P}_1$ is conserved, as can be easily verified using the equations of motion. \section{\label{physical}Physical Interpretation and Cosmology}\setcounter{equation}{0} \subsection{The NR Particle vs NR String Limit} The non-relativistic limit is generally understood as a low velocity limit, which can be formally obtained by sending the speed of light $c$ to infinity. This procedure works, at least in the case of the point particle, although there are some conceptual issues involved when taking limits of dimensionful constants like $c$ \cite{Duff,DufOkVen}. A safer route is to keep $c$ constant and rescale the time coordinate by a dimensionless parameter, say $\omega$, taking the limit $\omega\rightarrow\infty$. One can thus obtain a reparametrisation invariant, non-relativistic action for the point particle. The naive application of this to the case of the string fails\footnote{The string obtained in this limit has a fixed length and no physical oscillations (see Ref.~\cite{Yastremiz}).} but this problem was solved with the realisation \cite{jGom_Oog} (see also \cite{DanGuiKru}) that in order to obtain a Galilei invariant string action one has to rescale both longitudinal coordinates, not the time coordinate only. 
In a sense, one can speak of a non-relativistic `particle' limit, obtained by taking $v\ll 1$, and a non-relativistic limit for extended objects, for which one has to rescale all worldvolume coordinates, as we did in section \ref{non_rel_lim} for the case of the string. The rescaling of the longitudinal string direction corresponds to the assumption $(\partial y/ \partial x)^2\ll 1$, which one makes when deriving the wave equation by applying Newton's second law to an infinitesimal string segment. In the rescaling prescription we followed, the waves move along the string at the speed of light as the string tension equals the mass per unit length. One usually thinks of non-relativistic strings as `violin-type', having a small tension compared to their mass per unit length and thus a subluminal `sound speed' along the string. In this sense, the strings we consider here are `hybrid', having a relativistic speed of propagation along the string, but transverse Galilei invariance. However, it is precisely this hybrid action (in flat space) which arises in the simplest Lorentz invariant field theories when one studies the low-energy dynamics of domain wall solutions. Strings with subluminal propagation speeds (which would correspond to a difference between the string mass per unit length and the tension) can arise in more complicated models, which allow for spontaneously broken longitudinal Lorentz invariance through a current generation mechanism on the string worldsheet\footnote{See Ref.~\cite{BlPil_Redi} for a discussion of the relation between strings with broken longitudinal Lorentz invariance and Kaluza-Klein strings in one dimension higher.} \cite{Witten, Carter_cwc}. To obtain such string actions as a non-relativistic limit of the Nambu-Goto action, one would have to rescale each of the longitudinal coordinates by a different factor and take both factors to infinity while keeping their ratio constant. Note that, in order to ensure that the antisymmetric field $B$ used to cancel the divergent piece of the action cannot be gauged away, the non-relativistic string had to wind a compact dimension, say $x^1\sim x^1 + 2\pi R$. In fact, the divergent piece of the action is a total derivative with respect to the worldsheet coordinates \cite{JGom_Kam_Town}, so if the action (\ref{S_NR}) were to be interpreted as an effective non-relativistic action, one could simply drop this term without requiring that the string is wound. However, if the non-relativistic string is to be interpreted as a fundamental object, consistency requires a non-trivial winding. In this case, there are two distinct scales, namely $T_0$ of dimension mass-squared, which appears in the action (\ref{S_NR}), and the mass scale $m=2\pi n R T_0$, related to the geometry (through the compactification radius $R$) and the string winding number $n$. In fact, when quantising the non-relativistic string \cite{jGom_Oog}, one encounters again the necessity of winding, as the mass $m$ is needed to define the energy states of the non-relativistic string spectrum. In the flat case there are no physical states with zero winding number \cite{jGom_Oog}. Also note that in deriving the non-relativistic string action (\ref{S_NR}) we have defined the tension $T_0$ by a rescaling of the relativistic string tension $T_R$, appearing in the Nambu-Goto action (see equation (\ref{T_rescale})): \begin{equation}\label{T0_TR} T_0 = \omega^\alpha T_R \, , \end{equation} where the expansion exponent $\alpha$ is positive and $\omega$ is taken to infinity.
Interpreting $T_0$ as the physically relevant quantity which is to be kept constant, the relativistic tension $T_R$ goes to zero as $\omega$ tends to infinity. Finally, we comment on the stability of the non-relativistic string. A closed non-relativistic string is more stable to breakage than its relativistic counterpart\footnote{We thank F. Passerini for discussions of this point.}. This is a consequence of the winding, which only allows a discrete number of potential `splitting points' along the string. From an astrophysical perspective, ordinary cosmic string loops decay through gravitational radiation, which mainly couples to the kinetic energy of the fluctuations. In particular, the power in gravitational radiation scales with the sixth power of the root mean square (rms) velocity (see for example \cite{book, vosk}). Thus, if such non-relativistic strings were to play an astrophysical role, their decay rate would be power-law suppressed. For long strings, the main energy-loss mechanism is through string intercommutation, which removes string length from the long string network. This is also expected to be suppressed for non-relativistic strings, as the interaction rates are proportional to the string velocities. We shall now consider the possibility of coupling non-relativistic strings to cosmology. \subsection{Coupling to Gravity and Cosmology} The non-relativistic action we have analysed describes the dynamics of the independent degrees of freedom of the non-relativistic string, namely the transverse excitations. In obtaining this action we have introduced a closed $B$ field, which cancels the divergent piece corresponding to the rest energy of the string. Alternatively, if we are not interested in quantisation, we can simply drop the divergent part of the action, without introducing the $B$ field, because it is a total derivative (cf.\ the case of the point particle). Here, we will follow the latter approach. The energy-momentum tensor of the non-relativistic string (Appendix \ref{NR_EM_tensor}) therefore describes the energy of the transverse excitations but does not include a contribution from the rest mass of the string. However, when one couples non-relativistic matter to General Relativity it is necessary to include the rest mass $m_0 c^2$ in the energy-momentum tensor, which gives the main contribution to the $T^{00}$ part, while kinetic contributions are subdominant. Following this logic we will add the rest mass of the string to the $T^{00}$ part of the energy-momentum tensor of Appendix \ref{NR_EM_tensor}, which can then be coupled to Einstein's equations. From now on we work in $D=3$ spatial dimensions. Consider a cosmological setup where the cosmic fluid has a component due to a gas of non-interacting, non-relativistic strings. To obtain the energy density of the string fluid, one has to sum the contributions of all string segments in the network and, as we discussed, it is the rest energy of the segments which will give the dominant contribution. This is in analogy to a gas of non-relativistic particles (dust), where the dominant contribution to the energy density is $\rho \equiv T^0_{\; 0} = m_0 n + {\cal O}(v^2)$, where $m_0$ is the particle rest mass, $n$ the rest frame number density and $v$ the rms particle velocity.
The off-diagonal terms of the energy-momentum tensor of the particle fluid average out to zero by summing over all particles with random velocities in all directions, whereas the $T^i_{\; i}\equiv -p$ components are proportional to the kinetic energy density, which for non-relativistic particles is negligible so that $p\ll \rho$. In the case of a `string gas' one can obtain an effective energy-momentum tensor in an analogous manner, by approximating the string network as a collection of straight string segments moving with average velocity $v$, and averaging over string orientations and directions of motion. Let us first consider the relativistic case. The effective energy-momentum tensor can be constructed by considering a straight string oriented in the $\hat {\bf z}$ direction, say, and Lorentz-boosting its energy-momentum tensor in the $\pm \hat {\bf x}$ and $\pm \hat {\bf y}$ directions, then averaging and repeating the same procedure for strings oriented in the $\hat {\bf x}$ and $\hat {\bf y}$ directions \cite{Kolb_Turn}. The result is: \begin{equation}\label{T_fin} \langle T^\mu_{\; \nu} \rangle = \frac{\mu}{3L^2} \left( \begin{array}{cccc} 3 \gamma^2 & 0 & 0 & 0 \\ 0 & (1-v^2\gamma^2) & 0 & 0 \\ 0 & 0 & (1-v^2\gamma^2) & 0 \\ 0 & 0 & 0 & (1-v^2\gamma^2) \\ \end{array} \right) \, , \end{equation} where $\mu$ is the string tension, $L$ the average separation between nearby strings in the network and $\gamma=(1-v^2)^{-1/2}$ a Lorentz factor corresponding to $v$. From (\ref{T_fin}) the equation of state can be read off: \begin{equation}\label{eos} -p \equiv \langle T^i_{\; i} \rangle = \frac{1}{3} ( \gamma^{-2} - v^2 ) \langle T^0_{\; 0} \rangle = \frac{1}{3} (1-2v^2) \langle T^0_{\; 0} \rangle \Rightarrow p=-\frac{1}{3} (1-2v^2) \rho \, . \end{equation} A similar procedure can be followed for non-relativistic strings, which are generally expected to have much smaller string velocities. Indeed, for relativistic strings the constraint $\dot x^2+x^{\prime 2}\equiv 0$ in the conformal gauge imposes that critical points on the string move with the speed of light, but for non-relativistic strings the physical string velocities can take any value. One can thus obtain the equation of state for such a non-relativistic string gas by using Lorentz boosts with $\gamma=1$ or, alternatively, by performing transverse Galilean boosts instead. The result is again $p=-\frac{1}{3}(1-2v^2) \rho$, but with the difference that one can safely assume $v\ll 1$, unlike the relativistic network case, where the strings oscillate relativistically at small scales and there is no known mechanism efficient enough to damp these excitations. Indeed, Hubble damping is inefficient at scales much smaller than the horizon, and for large scales, of order the string correlation length, numerical simulations (see for example \cite{All_Shell}) demonstrate that string segments move more slowly and coherently, but at speeds large enough to produce significant deviations from the equation of state $w\equiv p/\rho=-1/3$. Note that one can apply an analogous procedure for strings which have a tension $T$ smaller than their mass per unit length $\mu$ ($T<\mu$). In this case the resulting equation of state is: \begin{equation}\label{eos_Tmu} p=-\frac{1}{3} [ T/\mu(1-v^2) - v^2 ] \rho = -\frac{1}{3} [ v_0^2 - (1+v_0^2) v^2 ] \rho \, , \end{equation} where we have defined the `sound speed' along the string $v_0 = \sqrt{T/\mu}$.
Equation (\ref{eos_Tmu}) can in general lead to either a positive or a negative equation of state with $p>-\rho/3$. This is in contrast to vacuum (non-interacting) cosmic strings with $\mu=T$, where the rms speed does not exceed $1/\sqrt{2}$, so the equation of state (\ref{eos}) is nonpositive, with $p \ge -\rho/3$. However, this is to be expected because in the limit $T\rightarrow 0$ the `string' describes a line-like structure of dust particles with $0<p\ll \rho$. In fact, taking $v_0\rightarrow 0$ in equation (\ref{eos_Tmu}) gives $p=\rho v^2/3$, or, in terms of the kinetic energy density $\rho_k$, \begin{equation}\label{kin} p=\frac{2}{3} \rho_k \, , \end{equation} which is precisely the equation of state for a gas of non-relativistic particles, following from ordinary kinetic theory considerations. In connection to the discussion of the previous sections, obtaining this kind of non-relativistic string from the Nambu-Goto action involves a rescaling of the longitudinal directions by different factors, the ratio of which determines the propagation speed $v_0$. Finally, note that this discussion only applies to a `perfect' gas of non-interacting strings. String intercommutations typically result in the removal of energy from the network in the form of closed string loops, significantly altering the above picture. Thus, a \emph{frustrated} string network, with $w\simeq -1/3$ and $\rho \propto a^{-2}$, eventually dominates over matter or radiation, but turning on string interactions will result in a different equation of state. For abelian string networks, where interactions are efficient, the resulting scaling law is $\rho \propto t^{-2}$, where $t$ is cosmic time, which scales like radiation in the radiation era and like matter in the matter era. The cosmological evolution of non-relativistic string networks, including the possible effects of string intercommutation, will be discussed in the next section. \section{\label{VOS} Velocity Dependent One-Scale (VOS) Models}\setcounter{equation}{0} In this section we discuss analytic models for the evolution of macroscopic variables describing the large-scale properties of a string network. We will first review results for relativistic strings and then construct a macroscopic evolution model for non-relativistic strings, based on the action (\ref{S_NR}). To set up the physical picture we briefly summarise Kibble's one-scale model \cite{Kibble}, which captures the basic qualitative features of network evolution. Monte-Carlo simulations of cosmic string formation suggest that to a good approximation the strings have the shapes of random walks at the time of formation \cite{Vach_Vil}. Such `Brownian' strings can be described by a characteristic length $L$, which determines both the typical radius of curvature of strings and the typical distance between nearby string segments in the network. On average there is a string segment of length $L$ in each volume $L^3$ and thus the density of the cosmic string network at formation is \begin{equation}\label{rho} \rho=\frac{\mu L}{L^3}=\frac{\mu}{L^2} \,, \end{equation} where $\mu$ is the string mass per unit length, which for relativistic strings is equal to the `tension' $T_R$ appearing in the Lagrangian. Assuming that the strings are simply stretched by the cosmological expansion we have $\rho \propto a(t)^{-2}$. This decays slower than both matter ($\propto a^{-3}$) and radiation ($\propto a^{-4}$) energy densities and so such non-interacting strings would soon dominate the universe.
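The $\rho\propto a^{-2}$ behaviour can also be checked directly from the FRW continuity equation (in cosmic time) together with the frustrated-network equation of state $w=-1/3$ obtained above, a one-line worked step using only standard FRW fluid dynamics:
\begin{equation*}
\dot\rho = -3\frac{\dot a}{a}\left(\rho + p\right) \, , \qquad p=-\frac{1}{3}\rho \quad \Longrightarrow \quad \dot\rho = -2\frac{\dot a}{a}\rho \quad \Longrightarrow \quad \rho\propto a^{-2} \, .
\end{equation*}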
Now consider the effect of string interactions. As the network evolves, the strings collide or curl back on themselves creating small loops, which oscillate and radiatively decay. Via these interactions enough energy is lost from the network to ensure that string domination does not actually take place. Each string segment travels on average a distance $L$ before encountering another nearby segment in a volume $L^3$. Assuming relativistic motion ($v\approx 1$) and that the produced loops have an average size $L$, the corresponding energy loss is given by $\dot\rho_{\rm{loops}}\approx L^{-4} \mu L$. The energy loss rate equation is therefore \begin{equation}\label{rholoss} \dot\rho\approx -2\frac{\dot a}{a}\rho - \frac{\rho}{L}\,. \end{equation} Equation (\ref{rholoss}) has an attractor `scaling' solution in which the characteristic length $L$ stays constant relative to the horizon $d_H\sim t$ \cite{Kibble}. The approach of string networks to a scaling regime has been verified by high-resolution simulations \cite{BenBouch, All_Shell}. Equation (\ref{rholoss}) was derived on physical grounds and it only captures the basic processes involved in string evolution, namely the stretching and intercommuting of strings. It does not take into account other effects like the redshifting of string velocities due to Hubble expansion. In fact, it neglects completely the evolution of string velocities, making the crude approximation that they remain constant throughout cosmic history. However, we can construct a more accurate Velocity-dependent One-Scale (VOS) model, based on the Nambu-Goto action (\ref{nambu}). \subsection{\label{rel_VOS}Relativistic Strings} The relativistic VOS model \cite{vos,vosk} extends Kibble's one-scale model, abandoning the constant string velocity approximation and introducing an extra variable, the rms velocity of string segments, whose dynamics is governed, as we will see, by a macroscopic version of the relativistic equations of motion (\ref{eom_x}). Although the simple one-scale model captures most of the qualitative features of macroscopic string evolution, this correction is crucial for quantitative modelling. Indeed, the average string velocity enters linearly in the loop production term, which provides the main energy loss mechanism of the string network, and so the evolution of string velocities significantly affects the string energy density. The resulting VOS model is still very simple, depending on only one free parameter\footnote{ Strictly speaking there are two parameters in the VOS model, the loop production efficiency $\tilde c$ and the momentum parameter $k$. For the second parameter, however, there exists a physically motivated ansatz (\ref{kans_R}), which expresses it in terms of the rms velocity $v(t)$. Once this choice is made, one is only left with the freedom of tuning $\tilde c$ when trying to fit numerical simulations.} but, remarkably, it has been shown to accurately fit numerical simulation data throughout cosmic history \cite{vostests}. We briefly sketch how the model is constructed from the microscopic equations of section \ref{rel_string}. This will be useful for comparison to the non-relativistic case. Consider the relativistic string energy defined in section \ref{rel_string} (equation (\ref{string_energy})): \[ E(\eta) = a(\eta) T_R \! \int \! \epsilon \, d\sigma \] and take the first derivative with respect to conformal time $\eta$.
Using the equation of motion (\ref{eom_eps}) for $\epsilon$, one finds \begin{equation}\label{E_dot_rel} \dot E = \frac{\dot a}{a} \left( 1 - 2 v^2 \right) E \, , \end{equation} where $v^2\!=\! \int \! \epsilon \dot{\bf x}^2 \, d\sigma / \! \int \! \epsilon \, d\sigma\! \equiv\! \langle \dot {\bf x}^2 \rangle$ is the worldsheet average of the square of the transverse velocities. For a network of strings the energy density $\rho$ is related to the total string energy by $E\propto \rho a(\eta)^3$. Therefore: \begin{equation}\label{rho_dot} \frac{\dot \rho}{\rho} = \frac{\dot E}{E} - 3\frac{\dot a}{a} = -2 \frac{\dot a}{a} \left(1+v^2\right) \, . \end{equation} To this we add a phenomenological term \cite{Kibble, book} describing the production of loops when strings collide and curl back on themselves. The resulting network density evolution equation is: \begin{equation}\label{rho_dot_full} \dot\rho = -2 \frac{\dot a}{a} \left(1+v^2\right) \rho - \tilde c \frac{v \rho}{L} \, , \end{equation} where $\tilde c$ is the loop production efficiency, related to the integral of an appropriate loop production function over all relevant loop sizes \cite{book}. This is treated as a free parameter which can be determined by comparison to numerical simulations. In the VOS model, the rms velocity $v$ appearing in equation (\ref{rho_dot_full}) is promoted to a dynamical variable whose evolution is given by a macroscopic version of the Nambu-Goto equation of motion (\ref{eom_x}). This equation can be obtained by differentiating $v^2$ and eliminating $\ddot {\bf x}$ using the equation of motion. This introduces the second spatial derivative ${\bf x}^{\prime\prime}$, which corresponds to string curvature and can be expressed in terms of the mean curvature radius of the network. Differentiating $v^2$ and using equation (\ref{eom_eps}) we find: \begin{equation}\label{v_square_dot} 2v \dot v = \langle \dot {\bf x}^2 \rangle \dot{} = 2\langle \dot{\bf x}\cdot \ddot{\bf x} \rangle - 2\frac{\dot a}{a} \left( \langle \dot{\bf x}^2 \rangle^2 -\langle \dot{\bf x}^4 \rangle \right) \, . \end{equation} The second term is of purely statistical nature and has the effect of `renormalising' the coefficient of the $\frac{\dot a}{a} v^4$ term which we will find later. It has been demonstrated numerically \cite{vos} to be small in magnitude and thus can be neglected. Keeping only the first term and using the equation of motion for ${\bf x}$ we find: \begin{equation}\label{v_vdot} v\dot v = \frac{\int\! \dot{\bf x} \cdot {\bf x}^{\prime\prime} \epsilon^{-1} \, d\sigma}{\int\!\epsilon\, d\sigma} + \frac{\int\! \dot{\bf x} \cdot {\bf x}^\prime (\epsilon^{-1})^\prime \, d\sigma}{\int\!\epsilon \, d\sigma} - 2 \frac{\dot a}{a} \left( \langle \dot{\bf x}^2 \rangle - \langle \dot{\bf x}^4 \rangle \right) \, . \end{equation} The second term vanishes due to the gauge condition $\dot{\bf x} \cdot{\bf x}^\prime = 0$. Further, within our approximations $\langle \dot{\bf x}^4 \rangle\simeq \langle \dot{\bf x}^2 \rangle^2$, so the third term becomes $-2\frac{\dot a}{a} v^2 (1-v^2)$. For the first term we need to express ${\bf x}^{\prime\prime}$ in terms of the local curvature vector. We define \begin{equation}\label{ds} ds = \sqrt{ {\bf x}^{\prime2} } d\sigma = \sqrt{1-\dot{\bf x}^2} \epsilon d\sigma \end{equation} and the physical (local) radius of curvature by \begin{equation}\label{R} \frac{d^2 {\bf x}}{ds^2}=\frac{a(\eta)}{{\cal R}} \hat{\bf u} \, , \end{equation} where $\hat{\bf u}$ is a unit vector.
Then: \begin{equation}\label{x_pp} {\bf x}^{\prime\prime} = \frac{d^2{\bf x}}{d\sigma^2} = {\bf x}^{\prime2} \frac{d^2 {\bf x}}{ds^2} + {\bf x}^\prime \frac{d \sqrt{{\bf x}^{\prime2}}}{ds} \end{equation} Due to the constraint $\dot{\bf x} \cdot{\bf x}^\prime = 0$ the second term vanishes on dotting with $\dot{\bf x}$ so we have: \begin{equation}\label{xdot_x_pp} \int\! \dot{\bf x} \cdot {\bf x}^{\prime\prime} \epsilon^{-1} \, d\sigma = \int\! \dot{\bf x} \cdot \frac{d^2 {\bf x}}{ds^2} ( 1 - \dot{\bf x}^2 ) \epsilon \, d\sigma = a(\eta) \langle (\dot{\bf x} \cdot \hat{\bf u} ) ( 1 - \dot{\bf x}^2 ) / {\cal R} \rangle \int\! \epsilon \, d\sigma \, . \end{equation} We define the momentum parameter $k$ \cite{vosk} by the equation: \begin{equation}\label{k} \langle (\dot{\bf x} \cdot \hat{\bf u} ) ( 1 - \dot{\bf x}^2 ) / {\cal R} \rangle = \frac{kv}{{\cal R}} (1-v^2) \, , \end{equation} where ${\cal R}$ is now the average string radius of curvature, numerically close to the correlation length $L$ for Brownian networks \cite{book, vos, Aust_Cop_Kib}. With this definition, equation (\ref{v_vdot}) becomes: \begin{equation}\label{dv_dtau} \dot v = \frac{a(\eta)}{{\cal R}}k(1-v^2) - 2\frac{\dot a}{a}v(1-v^2) \, . \end{equation} Changing to cosmic time $t$, with $dt = a d\eta$ and $\dot{}=a\frac{d}{dt}$ we finally obtain: \begin{equation}\label{dv_dt} \frac{dv}{dt} = (1-v^2) \left( \frac{k}{{\cal R}} - 2Hv \right) \, , \end{equation} where $H=a^{-1} \frac{da}{dt}$ is the Hubble parameter. Note that, since \[ v^2=\langle \dot{\bf x}^2 \rangle=\left\langle \left( \frac{d{\bf x}} {d\eta} \right)^2 \right\rangle = \left\langle \left( a \frac{d{\bf x}}{dt} \right)^2 \right\rangle \, \] and the physical coordinates ${\bf x}_{\rm phys}$ are given in terms of the comoving ones ${\bf x}$ by ${\bf x}_{\rm phys} = a {\bf x}$, the rms velocity $v$ has the interpretation of physical \emph{peculiar} velocity of string segments. Equation (\ref{dv_dt}) has therefore a clear physical meaning: the rms peculiar velocities of string segments are produced by string curvature and damped by cosmological expansion. The momentum parameter $k$ is a measure of the angle between the curvature vector and the velocity of string segments and thus it is related to the smoothness of the strings. As $v$ increases towards relativistic values the accumulation of small-scale structure renders the strings wiggly. Velocities become uncorrelated to curvature and $k$ decreases. In particular it can be shown analytically that for flat space, where $v^2=1/2$, the momentum parameter vanishes for a wide range of known solutions \cite{vos, Carl_thesis}. An accurate ansatz for the momentum parameter $k$ for relativistic strings has been proposed in \cite{vosk} \begin{equation}\label{kans_R} k = k(v) = \frac{2\sqrt{2}}{\pi}\frac{1-8 v^6}{1+8 v^6} \, , \end{equation} satisfying $k(1/\sqrt{2})=0$. Note that the fact that $v=1/\sqrt{2}$ in flat spacetime can be shown analytically only for closed loops; for long strings it is observed in numerical simulations \cite{book}. For expanding or contracting spacetimes, $v$ is less than or greater than $1/\sqrt{2}$, respectively. Hence for an expanding universe, string velocities are subject to the constraint: \begin{equation}\label{vconstr} v^2 \le \frac{1}{2} \, . \end{equation} In a matter or radiation dominated universe, Hubble expansion is too weak to significantly reduce string velocities, for which $v^2$ remains close to $1/2$ at short scales \cite{book}. This limitation does not apply to non-relativistic strings.
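For later comparison with the non-relativistic model, it is worth recording the scaling solution of the relativistic VOS equations (a short worked step; we rewrite the density equation in cosmic time using $\rho=\mu/L^2$, with $L$ the physical correlation length, and set ${\cal R}\simeq L$). In the radiation era, $H=1/(2t)$, the scaling ansatz $L=\gamma t$, $v={\rm const}$ gives
\begin{equation*}
\frac{dL}{dt}=H(1+v^2)L+\frac{\tilde c\, v}{2} \ \Rightarrow\ \gamma\,(1-v^2)=\tilde c\, v \, , \qquad \frac{dv}{dt}=0 \ \Rightarrow\ k(v)=\gamma\, v \, ,
\end{equation*}
two algebraic relations which, together with the ansatz (\ref{kans_R}), determine the scaling values of $\gamma$ and $v$.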
\subsection{\label{NRVOS}Non-Relativistic Strings} For the non-relativistic string the energy of the excitations is given by (see Appendix \ref{NR_EM_tensor}): \begin{equation}\label{E_exc} E_{\rm exc}=a(\eta) \frac{1}{2} \! \int \! d\sigma \left( \mu \dot{\bf X}^2 + \mu \lambda^{-2} {\bf X}^{\prime 2} \right)=a(\eta)^{-1}\,{\cal P}_0 \, , \end{equation} where ${\bf X}$ are the \emph{transverse} string coordinates and we have defined the tension $\mu\!=\!\lambda T_0$. To that we must add the string mass \begin{equation}\label{E_0} E_0 = a(\eta) \mu \! \int \! d\sigma \, , \end{equation} so that the total energy is: \begin{equation}\label{E_tot} E=E_0 + E_{\rm exc}=a(\eta) \mu \! \int \! d\sigma + a(\eta)^{-1} \,{\cal P}_0 \, . \end{equation} Then, differentiating with respect to conformal time (with $\dot{\ }\equiv \frac{d}{d\eta}$), we have: \begin{eqnarray} \dot E &=& \frac{\dot a}{a} E_0 + (a^{-1}{\cal P}_0)\dot{} = \frac{\dot a}{a} E_0 - \frac{\dot a}{a} E_{\rm exc} + a^{-1} \,\dot{\cal P}_0 \nonumber \\ &=& \frac{\dot a}{a} \left(1 + \frac{1}{2} W^2 - \frac{3}{2} V^2 \right) E_0 \label{E0dot_E0} \, , \end{eqnarray} where we have used equations (\ref{P_0_gauge}), (\ref{p_0_dot}) and defined the rms quantities: \begin{equation}\label{V} V^2= \frac{\int\! d\sigma \dot {\bf X}^2}{\int \! d\sigma} \equiv \langle \dot {\bf X}^2 \rangle \, , \end{equation} and \begin{equation}\label{W} W^2 = \frac{\int\! d\sigma \lambda^{-2} {\bf X}^{\prime 2}}{\int \! d\sigma} \equiv \langle \lambda^{-2} {\bf X}^{\prime 2}\rangle = \langle (\partial_{x^1}{\bf X})^2 \rangle \, , \end{equation} corresponding to the average velocity of string segments and the average magnitude of string tangent vectors. The latter quantity parametrises small-scale perturbations on the string, $W=0$ corresponding to strings which are straight at scales smaller than the correlation length\footnote{With this interpretation, one expects that $W$ should have the effect of reducing the effective radius of curvature of the network. As we will see later, this is indeed the case.}. Thus, the term $W^2/2$ in equation (\ref{E0dot_E0}) corresponds to the average elastic energy due to short-scale string deformations. In the non-relativistic limit one has $W^2\ll 1$. Defining the energy density $\rho \propto E a^{-3}$, and using \begin{equation}\label{E_dot_E0} \frac{\dot E}{E_0} \simeq \frac{\dot E}{E} = \frac{\dot\rho}{\rho} + 3 \frac{\dot a}{a} \, , \end{equation} we find \begin{equation}\label{rho_dot_NR} \dot\rho = -\frac{\dot a}{a} \left( 2 - \frac{1}{2} W^2 + \frac{3}{2} V^2 \right) \rho - \tilde c V \frac{\rho}{L} \, , \end{equation} where we have included a phenomenological loop production term, as in the relativistic case. From (\ref{V}) we have \begin{equation}\label{v_square_dot_NR} 2 V \dot V = \langle \dot {\bf X}^2 \rangle^{\cdot} = 2 \langle \dot{\bf X}\cdot \ddot{\bf X} \rangle - 2\frac{\dot a}{a}\left( \langle \dot{\bf X}^2 \rangle^2 -\langle \dot{\bf X}^4 \rangle \right) \end{equation} as before. Neglecting the statistical terms and using the non-relativistic equation of motion (\ref{eom_gauge}) we find: \begin{equation}\label{vv_dot_NR} V\dot V = \frac{\int\!\dot{\bf X} \cdot \ddot{\bf X} \, d\sigma} {\int \! \, d\sigma} = \frac{\int\! \lambda^{-2} \dot{\bf X} \cdot {\bf X}^{\prime\prime}\, d\sigma}{\int \!
\, d\sigma} - 2\frac{\dot a}{a} V^2 \end{equation} In order to express ${\bf X}^{\prime\prime}$ in terms of the string curvature vector we define: \begin{equation}\label{ds_NR} ds = \sqrt{ 1 + (\partial_{x^1} {\bf X})^2 } dx^1= \lambda \sqrt{ 1 + \lambda^{-2} {\bf X}^{\prime2} } d\sigma \end{equation} and the physical radius of curvature: \begin{equation}\label{R_NS} \frac{d^2 {\bf Y}}{ds^2}=\frac{a(\eta)}{{\cal R}}\hat{\bf u}\,, \end{equation} where we have introduced the 3-vector ${\bf Y}\!=\!\left(x^1,{\bf X}\right)$ and a unit 3-vector $\hat{\bf u}$. Now: \begin{equation}\label{X_pp_NR} {\bf X}^{\prime\prime} = \frac{d^2{\bf X}}{d\sigma^2} = \lambda^2 \left(1+\lambda^{-2}{\bf X}^{\prime2}\right) \frac{d^2 {\bf X}}{ds^2} + \lambda {\bf X}^\prime \, \frac{d \sqrt{1+ \lambda^{-2}{\bf X}^{\prime2}}}{ds} \end{equation} In this case, the second term will not cancel on dotting with $\dot {\bf X}$, because $\dot {\bf X} \cdot {\bf X}^\prime \ne 0$ for the non-relativistic string. Instead we have two terms: \begin{equation}\label{Xdot_X_pp_NR} \lambda^{-2} \int\! \dot{\bf X} \cdot {\bf X}^{\prime\prime} \, d\sigma = \int\! \dot{\bf X} \cdot \frac{d^2 {\bf X}}{ds^2} \left(1 + \lambda^{-2}{\bf X}^{\prime2}\right) \, d\sigma + \lambda^{-2} \int\! \dot{\bf X} \cdot{\bf X}^\prime \left(\ln \sqrt{1 + \lambda^{-2}{\bf X}^{\prime2}} \right)^\prime d\sigma \, . \end{equation} For the first term we note that, since $\dot{\bf X}$ is normal to $(x^1,{\bf 0})$ in Cartesian coordinates, \begin{equation}\label{Xdot_curv} \dot{\bf X} \cdot \frac{d^2 {\bf X}}{ds^2} = \dot{\bf X} \cdot \frac{d^2{\bf Y}}{ds^2} \end{equation} and so we can use equation (\ref{R_NS}) to write: \begin{equation}\label{int_Xdot_curv} \int\! \dot{\bf X} \cdot \frac{d^2 {\bf X}}{ds^2} \left(1 + \lambda^{-2}{\bf X}^{\prime2}\right) \, d\sigma = a(\eta) \frac{k V}{{\cal R}} (1 + W^2) \!\int\! d\sigma\, . \end{equation} Here, in analogy to the relativistic case, we have defined a momentum parameter $k$ by: \begin{equation}\label{kdef_NR} \left\langle \left(1 + \lambda^{-2}{\bf X}^{\prime2}\right) (\dot{\bf X} \cdot \hat{\bf u}) /{\cal R} \right\rangle = \frac{k V}{{\cal R}} (1 + W^2) \, . \end{equation} For the second term in (\ref{Xdot_X_pp_NR}) we have: \begin{eqnarray} && \lambda^{-2} \int\! \dot{\bf X} \cdot{\bf X}^\prime \left(\ln\sqrt{1 + \lambda^{-2}{\bf X}^{\prime2}} \right)^\prime d\sigma = \lambda^{-2} \int\! \dot{\bf X} \cdot{\bf X}^\prime \frac{{\bf X}^\prime \cdot {\bf X}^{\prime\prime}\lambda^{-2}}{1+ \lambda^{-2}{\bf X}^{\prime2}} \, d\sigma \nonumber \\ && \ \ = \lambda^{-2} \int\! \left(\dot{\bf X} \cdot{\bf X}^\prime \right) \left({\bf X}^\prime \cdot \hat {\bf u}\right) \frac{a(\eta)}{{\cal R}} \, d\sigma + \lambda^{-3} \int\! \left(\dot{\bf X} \cdot{\bf X}^\prime \right){\bf X}^{\prime2} \frac{{\bf X}^\prime \cdot {\bf X}^{\prime \prime}\lambda^{-2}}{\left(1+ \lambda^{-2}{\bf X}^{\prime2}\right)^2} \, d\sigma \nonumber \\ && \ \ = a(\eta) \frac{k^\prime V W^2}{{\cal R}} \!\int\! 
d\sigma + {\cal O}(VW^4) \, , \label{Xdot_Xp_NR} \end{eqnarray} where we have used equation (\ref{X_pp_NR}) and defined the parameter $k^\prime$ by: \begin{equation}\label{k_prime} \left\langle \lambda^{-2} \left(\dot{\bf X} \cdot{\bf X}^\prime \right) \left({\bf X}^\prime \cdot \hat {\bf u}\right)/{\cal R} \right \rangle = \frac{k^\prime V W^2}{\cal R} \end{equation} Putting all the terms together, equation (\ref{vv_dot_NR}) can be rewritten (in terms of cosmic time $t$) as: \begin{equation}\label{dV_dt} \frac{dV}{dt}=\frac{1}{{\cal R}} \left( k + k^{\prime\prime} W^2 \right) - 2HV \, , \end{equation} with \begin{equation}\label{kpp} k^{\prime\prime}\equiv k + k^{\prime}\, . \end{equation} Equations (\ref{rho_dot_NR}), (\ref{dV_dt}) form the Non-Relativistic Velocity dependent One-Scale (NRVOS) model. In principle one should consider $W$ as a third dynamical variable and try to derive an evolution equation, as in the case of $V$. As a first approximation we will assume that time variations in $W$ do not have a significant impact, $W$ remaining always small, and we will treat it as a constant parameter. This approximation will be tested in the next section, where we will solve the NRVOS equations numerically, for different choices of the $W$ parameter. Finally, one comment is in order regarding the magnitude of the parameter $k^\prime$. From its definition in equation (\ref{k_prime}) one expects $k^\prime\ll k$. Indeed, $k^\prime$ measures the average value of $(\dot{\bf X}\cdot{\bf X}^\prime)({\bf X}^\prime \cdot \hat{\bf u})$, the first factor of which contains uncorrelated vectors, while for the second factor string tangents will generally be normal to the local curvature vector. On the other hand $k$ corresponds to the average value of $\dot{\bf X} \cdot \hat {\bf u}$ and these two vectors are correlated, at least for smooth strings and small excitation velocities. Given that the $k^{\prime\prime}$ term in (\ref{dV_dt}) is already suppressed by a factor ${\cal O}(W^2)$, it is a good approximation to set $k^{\prime\prime}\simeq k$. Then, $W$ has the effect of `renormalising' the effective radius of curvature ${\cal R} \rightarrow {\cal R}/(1+W^2)$ (or equivalently the momentum parameter $k\rightarrow k(1+W^2)$), as may be expected from its interpretation as a short-scale structure parameter. \section{\label{RelVnonRel}Relativistic vs Non-Relativistic Network Evolution}\setcounter{equation}{0} In this section we solve numerically the NRVOS equations for a non-relativistic string network and compare to the relativistic case. The naive expectation is that non-relativistic networks are denser than their relativistic counterparts because the small string velocities reduce the effect of the loop production term. Physically, the transverse excitations on strings are non-relativistic, so fewer loops are produced per unit time due to string self-intersections. Long string segment interactions are also suppressed due to the low collision rate corresponding to small velocities. To close the NRVOS equations we need to specify an ansatz for the non-relativistic momentum parameter $k$. For a velocity dependent model like the one we have developed, it is not consistent to treat $k$ as a constant parameter. Further, in the relativistic case, its dependence on the rms velocity $v$ (equation (\ref{kans_R})) is important in determining the scaling values of the network variables.
The functional dependence of the momentum parameter on $v$ can be obtained by considering `curvature' and `bulk' contributions to string velocities, as explained in Ref.~\cite{vosk}. Following the discussion in that reference we take: \begin{equation}\label{k_NR} k(v) = k_0 (1-v^2) \, , \end{equation} where $k_0$ is a constant. This has the same functional dependence as the low-velocity limit of $k(v)$ in Ref.~\cite{vosk}, but here we have left the overall normalisation $k_0$ as a free parameter. This reflects the fact that the non-relativistic string limit is not merely a low-velocity one. There is a difference between slowly moving, straight, relativistic strings and wiggly, non-relativistic strings. The defining property of the non-relativistic string is that its transverse excitations be Galilei, as opposed to Lorentz, invariant. The difference between relativistic and non-relativistic strings is in the transverse perturbations. In an effective description, non-relativistic strings can be thought of as having a short wavelength cut-off on the string excitations. As a result, arbitrarily small-wavelength relativistic perturbations are not excited and this translates into a reduced curvature parameter normalisation $k_0$. The string can be thought of as a massive rigid rod, but with tension $T$ equal to its mass per unit length $\mu$ \footnote{Relativistic invariance in the longitudinal directions implies that the waves along the string travel at the speed of light $c$ \cite{JGom_Kam_Town}. Note the difference from the other notion of a non-relativistic string with $T<\mu$ and longitudinal speed $v<c$.}. In analogy to the relativistic case, where the overall normalisation was determined by comparison to a known analytic solution \cite{vosk}, $k_0$ can be obtained in the non-relativistic case by comparison to a given model of non-relativistic string. In the general discussion below we will simply treat it as a free parameter and examine its effect on the network evolution. Equations (\ref{rho_dot_NR}), (\ref{dV_dt}) and (\ref{k_NR}) have been solved numerically for a range of parameters $k_0$ and $W$. This was done by rewriting equation (\ref{rho_dot_NR}) in terms of the correlation length $L=\sqrt{\mu/\rho}$ and then introducing a function $\gamma(t)=L/t$. Under the assumption $L\simeq {\cal R}$, the resulting equation for $\gamma(t)$ together with (\ref{dV_dt}) form a non-autonomous system of coupled ODEs, which can be integrated numerically. During matter or radiation domination, this system has an attractor solution in which both $\gamma(t)$ and $v(t)$ tend to constant values (scaling). Here, we present numerical results for a radiation-dominated universe. In Fig.~\ref{fig_comparison} we plot the evolution of the string energy density and rms velocity for both a non-relativistic and a relativistic string network, that is, the solution of equations (\ref{rho_dot_NR}), (\ref{dV_dt}), (\ref{k_NR}) in the former case and (\ref{rho_dot_full}), (\ref{dv_dt}), (\ref{kans_R}) in the latter. To highlight the effect of non-relativistic velocities, we have chosen a value of the parameter $k_0$ which gives a scaling value of $V\simeq 0.1$ and taken $W<V$. We have also assumed that both networks have the same loop production efficiency parameter $\tilde c$ and chosen the value $\tilde c=0.23$, suggested by relativistic network simulations \cite{vos}. As expected, the non-relativistic network has a much higher scaling string density compared to the relativistic one.
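To make the integration procedure concrete, the following is a minimal Python sketch of the scheme just described; it is an illustration only, not the code used for the figures. Since equation (\ref{rho_dot_NR}) is not reproduced in this section, the density equation is written here in an assumed VOS-like form for the correlation length, $2\,dL/dt = 2HL(1+V^2-W^2) + \tilde c V$, consistent with the discussion of the dilution and loop production terms, and all parameter values are purely illustrative.
\begin{verbatim}
# Minimal NRVOS integration sketch in a radiation-dominated era, H = 1/(2t).
# Assumed density equation: 2 dL/dt = 2HL(1 + V^2 - W^2) + ctilde*V.
# Velocity equation (dV_dt) with R ~ L and k'' ~ k, i.e. k -> k(1 + W^2).
import numpy as np
from scipy.integrate import solve_ivp

k0, W, ctilde = 0.1, 0.05, 0.23      # illustrative parameter choices

def rhs(t, y):
    L, V = y
    H = 0.5 / t                      # radiation domination
    k = k0 * (1.0 - V**2)            # momentum parameter ansatz, eq. (k_NR)
    dL = H * L * (1.0 + V**2 - W**2) + 0.5 * ctilde * V
    dV = k * (1.0 + W**2) / L - 2.0 * H * V
    return [dL, dV]

sol = solve_ivp(rhs, (1.0, 1.0e8), [0.1, 0.2],   # L(1)=0.1, V(1)=0.2
                rtol=1e-8, dense_output=True)
t = np.logspace(0, 8, 200)
L, V = sol.sol(t)
print("scaling values: gamma = L/t -> %.3f, V -> %.3f" % ((L / t)[-1], V[-1]))
\end{verbatim}
Both $\gamma(t)=L/t$ and $V(t)$ approach constant values, illustrating the attractor (scaling) behaviour referred to above.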
Of course, non-interacting strings ($\tilde c=0$) do not converge to a scaling solution. \begin{figure} \begin{center} \includegraphics[width=2.7in,keepaspectratio]{rhoR.eps} \includegraphics[width=2.7in,keepaspectratio]{vR.eps} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v01.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v01.eps} \end{center} \caption{\label{fig_comparison} Relativistic versus non-relativistic network evolution. Non-relativistic networks evolve to slower and much denser scaling configurations than their relativistic counterparts. Here, we have plotted the evolution of the normalised string density and rms velocity for a relativistic network and a non-relativistic one with $V\simeq 0.1$.} \end{figure} We now explore the dependence of non-relativistic string evolution on the parameters $k_0$ and $W$. In Fig.~\ref{fig_NR_v0} we plot the normalised string density $\rho t^2/\mu=\gamma^{-2}$ and the rms string velocity $V$ as functions of cosmic time $t$ for different choices of $k_0$ producing string velocities $0<V<1$. We have assumed a constant value of $W<V$, but below we will consider the effect of varying $W$ also, allowing for $W>V$. It is apparent from Fig.~\ref{fig_NR_v0} that the rms string velocities are controlled by the parameter $k_0$. Reducing $k_0$ leads to smaller $V$, which in turn implies a higher string density, due to the reduced energy loss term. The fact that the scaling value of the rms velocity is not universal for non-relativistic strings, but instead depends on the parameter $k_0$, is not surprising. In the relativistic case, there is a distinct upper speed limit $c=1$ and the relativistic constraint implies that the rms velocities are smaller than, but not far off, $1/\sqrt{2}$ (see for example Ref.~\cite{book}). On the other hand, in any non-relativistic theory velocities are unbounded. \begin{figure} \begin{center} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v05.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v05.eps} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v01.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v01.eps} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v005.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v005.eps} \end{center} \caption{\label{fig_NR_v0} Evolution of normalised string density and rms velocity for a non-relativistic network with constant $W<V$ for different choices of the parameter $k_0$. Reducing the value of $k_0$ results in lower rms string velocities, which in turn implies a slower rate of string interactions. This results in a dramatic enhancement of string network density.} \end{figure} We then consider the impact of varying the parameter $W$. Looking at the first term of equation (\ref{rho_dot_NR}), which describes dilution due to cosmic expansion, one observes that $W^2$ and $V^2$ appear with opposite signs, so a large $W$ could counterbalance (or even reverse) the effect of $V$ on this term. However, if both $V,W\ll 1$ they play no significant role in that term. Thus, one only needs to check the case $W>V$ when $V,W$ are not negligible. Fig.~\ref{fig_W} shows the time evolution of $\rho$ for a choice of $k_0$ leading to $V\simeq 0.1$, for the cases $W=0, 0.1, 0.5$. The first two figures show identical evolutions, even though in the second one $W\simeq V$.
In the third case, however, where $W^2=25 V^2$, the effect of $W$ counterbalances that of $V$ in the dilution term of (\ref{rho_dot_NR}), resulting in an appreciable reduction of the string scaling density, at the $10\%$ level. Since the most important impact of string velocities is through the loop production term of (\ref{rho_dot_NR}), the basic prediction of the model, which is a dramatic enhancement of the string scaling density (Fig. \ref{fig_comparison}), remains robust. \begin{figure} \begin{center} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v01_w0.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v01_w0.eps} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v01_w01.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v01_w01.eps} \includegraphics[width=2.7in,keepaspectratio]{rhoNR_v01_w05.eps} \includegraphics[width=2.7in,keepaspectratio]{vNR_v01_w05.eps} \end{center} \caption{\label{fig_W} Dependence of normalised string density on the parameter $W$ for a network with $V\simeq 0.1$. The plots correspond to $W=0, 0.1$ and $0.5$, respectively. Increasing $W$ does not significantly alter the scaling density until $W$ becomes greater than $V$. For $W=0.5=5V$, the scaling density is reduced by $10\%$, so it remains two orders of magnitude greater than that of relativistic strings.} \end{figure} \section{\label{discuss}Discussion}\setcounter{equation}{0} So far we have studied the dynamics and macroscopic evolution of non-relativistic strings in some generality, without discussing any specific setup in which they could be relevant. However, non-relativistic string-like objects arise in several contexts and have been considered before in the literature. For example, Ref.~\cite{Mart_Moor_Shell} studied non-relativistic vortex-strings with motivations from both cosmology \cite{book} and condensed matter physics \cite{Schwarz85, Schwarz88}. In that reference, the non-relativistic limit was taken at the level of the equations of motion by requiring small string velocities $\dot X^2\ll 1$. Here, we have taken the non-relativistic limit at the level of the string action, but this involved a rescaling procedure which corresponds to having both $\dot X^2\ll 1$ and $(\partial X/\partial \zeta)^2 \ll 1$, where $\zeta$ is the physical length along the string. The non-relativistic evolution model we have developed in section \ref{NRVOS} can be applied to the condensed matter context considered in \cite{Mart_Moor_Shell} by introducing a friction term relevant to that situation. Adding this term and setting $\dot a/a=0$, equation (\ref{rho_dot_NR}), expressed in terms of the correlation length, reads: \begin{equation} 2\frac{dL}{dt}=\tilde c V + \frac{L}{\ell_d} V^2\, , \end{equation} as in Ref.~\cite{Mart_Moor_Shell}, where $\ell_d$ is the relevant damping length scale. The velocity evolution equation is also modified by the addition of a friction term $-V/\ell_d$, again as in Ref.~\cite{Mart_Moor_Shell}. The system has a solution with $L\propto t^{1/2}$, which is actually observed experimentally for defects in condensed matter systems and liquid crystals \cite{Mermin, SalVol, ChDTY}. In cosmology, slowly-evolving string networks have been invoked in order to obtain a negative equation of state \cite{SperPen}. Bucher \& Spergel \cite{BuchSper} have proposed a Solid Dark Matter (SDM) model, which could be realised in terms of a frustrated string or domain wall \cite{BatBuchSper} network. Rigidity and stability in this scenario have been studied in Ref.~\cite{BatCarChMo}.
More recently, a string network of the SDM kind was revived \cite{Alexand} in an attempt to explain the flat rotation curves and the Tully-Fisher relation observed in galaxies, which were the main motivation for the development of MOND\footnote{For a recent review on the MOND scenario see Ref.~\cite{MOND}.} theories. The fundamental difficulty \cite{BatCarChMo} with the SDM scenario is to explain how an essentially non-relativistic network can naturally arise from an initial tangle of (relativistic) Nambu-Goto strings like the ones believed to be produced in cosmological phase transitions. Indeed, Hubble damping is inefficient at subhorizon scales \cite{book} and there is no known mechanism efficient enough to damp the relativistic short-scale excitations on strings. These affect the equation of state through the velocity dependent term in equation (\ref{eos}), leading to $w>-1/3$\footnote{ One could argue that the velocity which enters the equation of state is the coherent string velocity at the scale of the string correlation length rather than the rms short-scale velocity. While it is true that the coherent velocities are typically smaller, numerical simulations \cite{All_Shell} suggest $v_{\rm coh}\simeq 0.15$ so one still expects significant departure from $w=-1/3$. Furthermore, small-scale structure has the effect of `renormalising' the string mass per unit length \cite{book, Mart_Shell_sss} and string tension so that equation (\ref{eos_Tmu}) should be used instead of (\ref{eos}). This also increases the value of $w$.}. Further, numerical evidence is now accumulating that scaling behaviour in field theory strings and domain walls is rather generic \cite{walls}, so that frustrated networks seem hard to obtain. On the other hand, the analysis we did in section \ref{RelVnonRel} points towards an SDM picture for non-relativistic strings, where the above problems are not present. Here, string velocities can be arbitrarily small and, as we saw in section \ref{RelVnonRel}, network densities are dramatically enhanced so that strings could even dominate the universe before scaling is reached. Note that the procedure for obtaining the non-relativistic string action (\ref{S_NR}) required at least one of the spatial directions to be compact. If the action (\ref{S_NR}) is to be treated as a classical effective action, this global property can be ignored, but if it is taken to describe a fundamental object, then the winding around a compact dimension is required at the quantum level. The fact that a consistent non-relativistic string theory based on the action (\ref{S_NR}) can be constructed \cite{jGom_Oog} allows one to take the view that there is a fundamental winding string obeying this action. Then, a cosmological setup like that of sections \ref{NRVOS} and \ref{RelVnonRel} can still be considered as long as the compactification radius is larger than the horizon. This possibility of having a universe with non-trivial topology is not observationally excluded. Cosmological observations constrain the local geometry as described by the metric to be nearly flat \cite{WMAP3}, but the global topology of spatial hypersurfaces need not be that of the covering space. Indeed, topological identifications under freely-acting subgroups of the isometry group are allowed, and the WMAP sky maps appear to be compatible with finite flat topologies with fundamental domain significantly greater than the distance to the decoupling surface \cite{Luminet} (see also \cite{Cornish}).
One can therefore imagine a situation where fundamental non-relativistic strings are wound around 1-cycles in a non-simply-connected universe, in a setup analogous to that of the Brandenberger-Vafa scenario \cite{BrandVaf}. If the compactification radius is larger than the horizon, as required by cosmological observations, a network of such wound strings behaves like an open string network. An analogous situation occurs in ordinary cosmic string simulations, where the network evolves in a periodic box and there is a class of long strings (determined mainly by initial conditions) which wind around the box. As the universe expands these strings tend to straighten out and behave essentially non-relativistically \cite{Paul_private}. These strings are usually discarded as artifacts of the periodicity of the box, but in a universe of compact topology, such configurations can play a physical role. Finally, in theories with compact extra dimensions one has the possibility of non-relativistic strings winding 1-cycles in the internal space. Analogous (but relativistic) objects, which are topologically trapped and behave like monopoles, have been considered in the context of brane inflation \cite{BarnBCS, MatsNecl, cycloops}. Although the copious production of such objects in the early universe is inconsistent with the existence of an early radiation era, there are regions in parameter space where they are allowed and in some cases can provide candidates for dark matter. The situation of non-relativistic strings wrapping an internal dimension is qualitatively similar, but the corresponding energy spectrum is different from that in the relativistic case. The outstanding question arising from the above is to what extent such non-relativistic strings are `natural' or `generic' objects in cosmology. Even though non-relativistic strings exist in some part of the moduli space of string theory, there is at present no mechanism which produces them in a cosmological setup. Nevertheless, it is clear that the non-relativistic string action and the VOS model developed here are applicable at least as effective descriptions of cosmic- and vortex-strings in certain situations. Indeed, the action we have considered is the only sensible non-relativistic limit, having $T=\mu$, of the standard Nambu-Goto action, and is precisely the action one obtains when considering the low energy dynamics of topological defects in field theory. The macroscopic NRVOS model based on this action provides a semi-analytic tool for the study of the cosmological evolution of non-relativistic strings. Possible situations of cosmological interest involving non-relativistic strings include strings in de Sitter space, Solid Dark Matter, wound strings, etc., as discussed above. Further, in a condensed matter application we have noted that our model reproduces the correct scaling law, as experimentally observed. It would be interesting to go one step further and perform numerical simulations of string network evolution based on the non-relativistic string action presented here. The comparison of macroscopic string evolution and small-scale structure to the relativistic case could provide an independent means of probing the effect of small-scale structure on string networks, which is an area of current interest and active research. \begin{acknowledgments} We are grateful to Carlos Martins for reading the manuscript and making valuable comments.
It is also a pleasure to thank Roberto Emparan, Jaume Garriga, Gary Gibbons, Jaume Gomis, Paul Shellard and Paul Townsend for discussions. This work has been supported in part by the European EC-RTN project MRTN-CT-2004-005104, European Network on Random Geometry (ENRAGE) project MRTN-CT-2004-005616, MCYT FPA 2004-04582-C02-01 and CIRIT GC 2005SGR-00564. We would like to thank the Galileo Galilei Institute for Theoretical Physics for its hospitality and INFN for partial support during the completion of this work. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~Leibniz-Institut f\"ur Polymerforschung Dresden, Institut Theorie der Polymere, 01069 Dresden, Germany; E-mail: sharma@ipfdd.de}} \footnotetext{\textit{$^{b}$~Technische Universit\"at Dresden, Institut f\"ur Theoretische Physik, 01069 Dresden, Germany}} \section{Introduction} \label{introduction} A fundamental feature of Active Brownian Particles (ABPs) is self-propulsion, which requires a continual consumption of energy from the local environment~\cite{fiasconaro2008active,romanczuk2012active, wensink2013differently,walther2013janus, wang2013nanomachines,volpe2014simulation, elgeti2015physics}. Since ABPs are internally driven, they do not require breaking the spatial symmetry to exist in an out-of-equilibrium state. An ABP is generally modelled as a particle which propels itself along a direction which randomizes via rotational diffusion. Given the simplicity of the model, it is not surprising that ABPs serve as a minimalistic model to study the effect of broken time-reversal symmetry and nonequilibrium steady states in general~\cite{cates2012diffusive,fodor2016far,cates2015motility,digregorio2018full, mandal2019motility,lowen2020inertial, singh2020phase}. The interest in ABPs is not purely theoretical, as is evident from the vast body of research in pharmaceutical and medical applications~\cite{sezer2016smart, yang2012janus,paxton2004catalytic,fournier2005synthetic, howse2007self,ebbens2010pursuit, poon2013clarkia,ekeh2020thermodynamic}. Since an ABP adjusts its propulsion speed in response to the local fuel concentration~\cite{howse2007self, gao2014catalytic}, it is also used as a simple model to understand the emergence of chemotaxis in proto-forms of life~\cite{ghosh2015pseudochemotactic, vuijk2018pseudochemotaxis,merlitz2020pseudo,vuijk2020chemotaxis}. Recently, the behaviour of diffusion systems subjected to an external magnetic field has attracted considerable interest~\cite{filliger2007kramers, balakrishnan2008elements, friz2015physical, chun2018emergence, vuijk2019effect,vuijk2019anomalous, chun2019effect, abdoli2020nondiffusive, vuijk2019lorenz,abdoli2020stationary, abdoli2020correlations, cerrai2020averaging}. It has been shown that the Lorentz force due to an external magnetic field induces additional Lorentz fluxes in diffusion systems which are perpendicular to the typical diffusive fluxes~\cite{ vuijk2019anomalous, abdoli2020nondiffusive}. The Lorentz force generates dynamics which are different from those of a purely diffusive system. Interestingly, the unusual properties due to the Lorentz force persist in the small-mass limit in which the dynamics are overdamped. However, the equilibrium properties, as expected from the Bohr-van Leeuwen theorem~\citep{van1921problemes}, are unaffected by the applied magnetic field, because the magnetic field performs no work on the particle. Since the Lorentz force only influences the dynamics, there are essentially two conditions to observe its unusual effects: (i) the system is out of equilibrium and (ii) there are density gradients in the system. This has been recently demonstrated in a system of ABPs with a uniform activity subjected to an inhomogeneous magnetic field~\cite{vuijk2019lorenz}, which satisfies the aforementioned conditions even in the stationary state. The nonequilibrium steady state in such a system is characterized by density inhomogeneity and bulk fluxes.
In order to ensure that there exists a nontrivial stationary state in a system of ABPs, one requires either a confining potential or periodic boundary conditions~\cite{vuijk2019lorenz}. In recent years, stochastic resetting has emerged as a powerful framework which gives rise to nontrivial stationary states in diffusive systems characterized by a non-Gaussian probability distribution and steady-state fluxes~\cite{evans2011diffusion,evans2011optimal,evans2013optimal, durang2014statistical, majumdar2015dynamical, evans2018run,masoliver2019telegraphic, pal2019time,da2019diffusions, evans2020stochastic,magoni2020ising,gupta2020work,belan2020median}. Stochastic resetting is unique in that it renews the underlying process and therefore preserves, to some extent, the dynamics of the underlying process in the steady state. With the recent experimental demonstrations~\cite{tal2020experimental,PhysRevResearch.2.032029}, stochastic resetting is now no longer a purely theoretical pursuit but rather an alternative and practical method to drive and maintain a system out of equilibrium. We have recently shown that stochastically resetting a passive particle to the origin in the presence of Lorentz force gives rise to a novel stationary state which bears the unusual dynamical properties owing to the magnetic field~\cite{abdoli2020stationary}. While the stochastic resetting of passive particles has been thoroughly studied, much less has been done for active particles~\cite{evans2018run,scacchi2018mean,santra2020run}. In a very recent work, the motion of an ABP under different resetting protocols has been studied~\cite{kumar2020active} with a focus on the steady-state density distribution. In the present work, we investigate the motion of a charged ABP under resetting and the effect of the Lorentz force. The particle is stochastically reset to the line $x=0$ at a constant rate. In addition, the system is periodic in the $y$ direction. We start with a generalized coarse-grained Fokker-Planck equation and analytically determine the density, flux, and polarization, first for a uniform magnetic field and then for a spatially inhomogeneous magnetic field. We show that whereas for a uniform magnetic field the properties of the stationary state of the active system can be obtained from its passive counterpart, novel features emerge in the case of an inhomogeneous magnetic field which have no counterpart in passive systems. In particular, there exists an activity-dependent threshold rate such that for smaller resetting rates, the density distribution of active particles becomes non-monotonic. We also study the Mean First-Passage Time (MFPT) to the $x$ axis and find a surprising result: it takes an active particle more time to reach the target from any given point when the magnetic field increases away from the axis. The paper is organised as follows. In Sec.~\ref{Model}, we define the model and provide a description of the methods used to analyze the system. In Sec.~\ref{uniform}, we study the system in the presence of a constant magnetic field. We then consider a space-dependent magnetic field and derive expressions for the density, flux, and polarization in the system in Sec.~\ref{spacedependent}. In Sec.~\ref{MFPT_sec}, we obtain the MFPT for the active and passive systems. The conclusion of the paper is presented in Sec.~\ref{conclusion}.
\section{Model and theory} \label{Model} We consider a single self-propelled, charged Brownian particle of mass $m$ and charge $q$ subjected to an external magnetic field $\vec{B}(\vec{r})$ of strength $B(\vec{r})$, whose direction is along the $z$ axis, so that the Lorentz force does not influence the motion of the particle in this direction. As a consequence, we study the dynamics of the particle in the $xy$ plane with $\vec{r}=(x, y)$. The particle is stochastically reset to the line $x=0$ at a constant rate $\mu$. The generalized Fokker-Planck equation for the probability density of finding the particle at position $\vec{r}$ with orientation $\vec{p}=(p_x, p_y)$ at time $t$, given that the particle started its motion at the origin, $P(\vec{r}, \vec{p};t)$, is given as \begin{align} \label{fullfpe} \frac{\partial}{\partial t} P(\vec{r}, \vec{p};t) & = \nabla\cdot\left[{\mathpalette\makeGama\relax}^{-1}(\vec{r})\cdot \left(D_t\nabla-v_0\vec{p}\right)P(\vec{r}, \vec{p};t)\right] \nonumber \\ & + D_r \vec{\mathcal{R}}^2 P(\vec{r}, \vec{p};t) + \Phi_l + \Phi_g, \end{align} where $\nabla=(\partial_x, \partial_y)$ and \begin{equation} \label{loss} \Phi_l = - \mu P(\vec{r}, \vec{p};t), \end{equation} is the loss of the probability from the position $\vec{r}$ due to resetting, while \begin{equation} \label{gain} \Phi_g = \mu\delta(x)\int P(x',y, p_x,p_y;t)\mathop{}\!\mathrm{d} x', \end{equation} is the gain of the probability at the point $(0,y)$ on the line $x=0$. Although Eq.~\eqref{fullfpe} is not of the form of a continuity equation, the total probability is conserved. Here $v_0=f/\gamma$ is the self-propulsion speed, where $\gamma$ is the friction coefficient and $f$ is the magnitude of the self-propulsion force that drives the particle into the direction of its (unit) orientation vector $\vec{p}$. In addition, $\vec{\mathcal{R}}=\vec{p}\times\nabla_{\vec{p}}$ is the rotation operator, $D_t=k_BT/\gamma$, with $k_B$ the Boltzmann constant, is the translational diffusion coefficient, and $D_r$ is the rotational diffusion coefficient. The matrix ${\mathpalette\makeGama\relax}$ is defined as $\mathbb{I}+\kappa(\vec{r})\mathrm{\mathbb{M}}$, where $\mathbb{I}$ is the identity matrix, the dimensionless parameter $\kappa(\vec{r})=qB(\vec{r})/\gamma$ quantifies the strength of the Lorentz force relative to the frictional force, and $\mathrm{\mathbb{M}}$ is a matrix with elements $M_{ij}=-\epsilon_{ijk}n_k$, where $\epsilon_{ijk}$ is the antisymmetric Levi-Civita symbol and $n_k$ is the $k$ component of the unit vector $\boldsymbol{n}$ along which the magnetic field is pointed. The inverse of ${\mathpalette\makeGama\relax}$ reads \begin{equation} \label{gammainverse} {\mathpalette\makeGama\relax}^{-1}(\vec{r}) = \mathbb{I}-\frac{\kappa(\vec{r})}{1+\kappa^2(\vec{r})}\mathrm{\mathbb{M}} + \frac{\kappa^2(\vec{r})}{1+\kappa^2(\vec{r})}\mathrm{\mathbb{M}}^2. \end{equation} Note that the orientation of the particle remains unchanged under resetting; the particle restarts its motion with the orientation that it had at the time of resetting. \begin{figure} \centering \resizebox*{1\linewidth}{5.5cm}{\includegraphics{figure01.png}} \caption{Schematic of a charged active Brownian particle which is stochastically reset to the line $x=0$ at a constant rate $\mu$. The self-propulsion velocity is shown by an arrow inside the disc. Between any two consecutive resetting events the particle undergoes Brownian motion and self-propulsion. Immediately after a resetting event, the orientation of the ABP remains unchanged.
The system is subjected to an external magnetic field, $B(\vec{r})$, in the $z$ direction. } \label{schema} \end{figure} We also perform Brownian dynamics simulations to validate our theoretical predictions. The dynamics of the particle can be described by the following Langevin equations \begin{align} \label{langevinv} \frac{m}{\gamma}\dot \vv(t) &= - {\mathpalette\makeGama\relax}(\vec{r})\cdot\vv + v_0 \vec{p}(t) + \boldsymbol{\xi}(t), \\ \label{Langevinp} \dot \vec{r}(t) &= \vv(t), \,\,\, \text{and}\,\,\, \dot\vec{p}(t) = \vec{\eta}(t)\times\vec{p}(t), \end{align} where the dot over the vectors denotes the time derivative and the stochastic forces $\boldsymbol{\xi}(t)$ and $\vec{\eta}(t)$ satisfy the properties of Gaussian white noise with zero mean value and correlation functions $\langle\boldsymbol{\xi}(t)\boldsymbol{\xi}^{T}(t')\rangle=2D_t\mathbb{I}\delta(t-t')$ and $\langle\vec{\eta}(t)\vec{\eta}^{T}(t')\rangle=2D_r\mathbb{I}\delta(t-t')$. As the resetting mechanism we consider Poissonian resetting: resets to the line $x=0$ occur at a constant rate $\mu$, so that the number of resets in a small interval of time is Poisson distributed (see Fig.~\ref{schema}). We numerically integrate the set of equations in \eqref{langevinv} and \eqref{Langevinp} with a small mass $m=0.002$ and the integration time step $dt = 10^{-6}\tau$, where $\tau=\gamma/k_BT$ is the time the particle takes to diffuse over a unit distance. We also fix $k_B=\gamma=1.0$, the self-propulsion force $f=10.0$, and $D_r=20.0$. The particle starts its motion at the origin with the initial velocity $(v_{0x}, v_{0y})=(1.0,1.0)$ and initial orientation $(p_{0x}, p_{0y})=(1.0, 0.0)$. The choice of the parameters holds throughout the paper. The Fokker-Planck equation in \eqref{fullfpe} provides a full statistical description of the position and orientation of an ABP under stochastic resetting. However, it is a formidable task to obtain an exact solution of this equation. To theoretically analyze the system we make the following assumptions: (1) $D_r \gg \mu$ and (2) the gradients in the system are small on the scale of the persistence length of the ABP. Under these assumptions, one can integrate out the orientational degrees of freedom by a gradient expansion (see Appendix~\ref{appendixA} for details) to yield an equation for the (marginal) probability density as a function of time and position degrees of freedom alone~\cite{vuijk2019lorenz}. The final equation for the coarse-grained density reads as \begin{align} \label{fpe} \frac{\partial \rho(\vec{r};t)}{\partial t} = \nabla\cdot\left[{\mathpalette\makeGama\relax}^{-1}(\vec{r})\cdot\left(D_t\nabla+v_0\vec{\pi}(\vec{r};t)\right)\rho(\vec{r};t)\right] +\phi_l +\phi_g, \end{align} where \begin{equation} \label{loss_cg} \phi_l = - \mu \rho(\vec{r};t), \end{equation} \begin{equation} \label{gain_cg} \phi_g = \mu\delta(x)\int \rho(x',y;t)\mathop{}\!\mathrm{d} x', \end{equation} are the loss and gain of probabilities, and $\rho(\vec{r};t)$ is the (marginal) probability density of finding the particle at position $\vec{r}$ at time $t$ given that the particle started its motion at the origin. The first and second terms in the square brackets are the (negative) probability fluxes stemming from the thermal fluctuations and activity, respectively.
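As a concrete illustration of the Brownian dynamics scheme of Eqs.~\eqref{langevinv} and \eqref{Langevinp}, a minimal Python sketch is given below. It is not the production code used for the figures; it assumes $T=1$ (so that $D_t=1$), uses the inhomogeneous magnetic field introduced in Sec.~\ref{spacedependent}, and keeps the velocity, in addition to the orientation, unchanged at a resetting event (the latter as specified above, the former as an additional assumption).
\begin{verbatim}
# Euler-Maruyama integration of the underdamped Langevin equations with
# Poissonian resetting of x to 0; illustrative only (short run, T = 1).
import numpy as np

m, dt, f, Dr, mu, lam = 0.002, 1e-6, 10.0, 20.0, 1.0, 2.0
v0 = f                                   # v0 = f / gamma, gamma = 1
nsteps = 200_000                         # converged statistics need longer runs

def kappa(x):                            # kappa(x) = sqrt(exp(lam*|x|) - 1)
    return np.sqrt(np.expm1(lam * abs(x)))

rng = np.random.default_rng(1)
r = np.zeros(2)                          # position, started at the origin
v = np.array([1.0, 1.0])                 # initial velocity
th = 0.0                                 # orientation angle, p = (cos th, sin th)
xs = np.empty(nsteps)
for i in range(nsteps):
    k = kappa(r[0])
    Gv = np.array([v[0] - k * v[1],      # Gamma.v with Gamma = I + kappa*M
                   v[1] + k * v[0]])
    p = np.array([np.cos(th), np.sin(th)])
    xi = rng.normal(0.0, np.sqrt(2.0 * dt), 2)     # thermal noise, D_t = 1
    v += (dt * (-Gv + v0 * p) + xi) / m            # m dv = (-Gamma.v + v0 p)dt + xi
    r += v * dt
    th += rng.normal(0.0, np.sqrt(2.0 * Dr * dt))  # rotational diffusion
    if rng.random() < mu * dt:                     # Poissonian resetting
        r[0] = 0.0       # reset x; y, velocity and orientation kept (assumption)
    xs[i] = r[0]
# histogramming xs over a long run approximates the stationary density g(x)
\end{verbatim}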
The polarization, $\vec{\pi}(\vec{r};t)$, defined as the average orientation per particle, is given as \begin{equation} \label{polarization2d} \vec{\pi}(\vec{r};t) = -\frac{l_p}{2\rho(\vec{r};t)}\nabla\cdot[{\mathpalette\makeGama\relax}^{-1}(\vec{r})\rho(\vec{r};t)], \end{equation} where $l_p=v_0/(D_r+\mu)$ denotes the modified persistence length of the ABP. An alternative approach to the above derivation is to treat activity as a perturbation and use the linear response theory for ABPs as outlined in Refs.~\cite{sharma2016communication,merlitz2018linear}. The gradient expansion approach, in contrast, does not require the activity to be small but only that the gradients be small compared to the persistence length of the ABP. It therefore allows one to consider an active system in which the activity dominates over thermal fluctuations. Now we consider periodic boundary conditions in the $y$ direction and a magnetic field which is varying along the $x$ direction. With these choices we effectively restrict ourselves to an analysis in a single spatial variable. By averaging the Fokker-Planck equation in \eqref{fpe} over the $y$ positional degree of freedom, we obtain \begin{equation} \label{fpeoned} \frac{\partial g(x;t)}{\partial t} = -\nabla\cdot\left[\boldsymbol{j}(x;t)+\boldsymbol{j}^{a}(x;t)\right]-\mu g(x;t)+\frac{\mu}{L}\delta(x), \end{equation} where $L$ is the size of the system and $g(x;t)$ is the probability density of finding the particle at position $x$ at time $t$ given that its initial position was at $x=0$. The flux due to thermal fluctuations is \begin{subequations} \label{fluxes1d} \begin{equation} \boldsymbol{j}(x;t) = - D_t{\mathpalette\makeGama\relax}^{-1}(x) \nabla g(x;t), \label{fluxth1d} \end{equation} and the flux due to activity is \begin{equation} \boldsymbol{j}^{a}(x;t) = - v_0{\mathpalette\makeGama\relax}^{-1}(x) \vec{p}(x;t) g(x;t), \label{fluxa1d} \end{equation} \end{subequations} where $\nabla g(x;t)=(\partial_xg(x;t), 0)^\top$, and \begin{equation} \label{polarization} \vec{p}(x;t) = -\frac{l_p}{2g(x;t)}\nabla\cdot[{\mathpalette\makeGama\relax}^{-1}(x)g(x;t)], \end{equation} is the polarization. Note that, since there is no variation in the $y$ direction, all derivatives with respect to $y$ are zero, reducing $\nabla$ in Eqs.~\eqref{fpeoned} and \eqref{polarization} to simply the derivative with respect to $x$. To highlight the new features which emerge in the system of ABPs we make a comparison between the active system and its passive counterpart. As the active system we consider the motion of the particle purely due to the activity, ignoring the thermal term (i.e., $D_t=0$) in Eqs.~\eqref{langevinv}, \eqref{Langevinp} and \eqref{fluxth1d}. We compare the active system with the passive one, wherein the motion of the particle is due to the thermal fluctuations. The governing Langevin equations and corresponding Fokker-Planck equation of the passive system can be obtained by setting the self-propulsion speed $v_0$ to zero in Eqs.~\eqref{langevinv}, \eqref{Langevinp} and \eqref{fluxa1d}. \section{Uniform magnetic field} \label{uniform} We first consider the system subjected to a uniform magnetic field $\kappa(x)\equiv\kappa$. For the active system the stationary probability density, denoted by $g^a(x)$, can be easily obtained by plugging Eq.~\eqref{fluxa1d} into Eq.~\eqref{fpeoned} and setting $\partial_t g(x;t)=0$.
The solution can be written as \begin{equation} \label{activeuniform} g^a(x) = \frac{\alpha}{2L}\exp\left(-\alpha|x|\right), \end{equation} where $\alpha=\sqrt{1+\kappa^2}\alpha_a$ with $\alpha_a=\sqrt{\mu/D_a}$ and $D_a=v_0^2/[2(D_r+\mu)]$ being the modified active diffusivity. The stationary solution in \eqref{activeuniform} is the same as that of the passive system wherein $D_a$ is replaced by $D_t$~\cite{abdoli2020stationary}. The polarization can be obtained by plugging Eq.~\eqref{activeuniform} in Eq.~\eqref{polarization}, which in the $x$ direction can be written as \begin{subequations} \label{polarizationuniform} \begin{equation} p_x(x) = \frac{l_p\alpha_a}{2(1+\kappa^2)}\mathop{}\!\boldsymbol{\mathrm{sign}}(x), \label{pxuniform} \end{equation} and, in spite of the translational invariance in the $y$ direction, there exists a polarization in that direction, given as \begin{equation} p_y(x) = \kappa p_x(x). \label{pyuniform} \end{equation} \end{subequations} However, the substitution of the polarization into Eq.~\eqref{fluxa1d} gives zero flux in the $y$ direction and \begin{equation} \label{activeuniformflux} j_x^a(x) = \frac{\mu}{2L}\mathop{}\!\boldsymbol{\mathrm{sign}}(x)\exp\left(-\alpha|x|\right), \end{equation} in the $x$ direction, where $\mathop{}\!\boldsymbol{\mathrm{sign}}(.)$ denotes the sign function. Note that the stationary polarization and fluxes in the $y$ direction are zero in the absence of the magnetic field. In Fig.~\ref{uniformfig}(a--d) we show the density, stationary flux and the $x$ and $y$ components of the orientation, respectively. Note that, despite the translationally invariant motion in the $y$ direction, there exists a polarization in this direction. However, the $y$ component of the stationary flux is zero due to the cancellation of fluxes arising from the polarization in the $x$ and $y$ directions. \section{Inhomogeneous magnetic field} \label{spacedependent} \begin{figure}[t] \centering \resizebox*{1\linewidth}{5cm}{\includegraphics{figure02.png}} \caption{Density, flux in the $x$ direction, and orientations in the $x$ and $y$ directions are shown in (a) to (d), respectively. An ABP is subjected to a constant magnetic field such that $\kappa=3.0$ and is stochastically reset to the line $x=0$ at the rate $\mu=1.0$. Despite the translationally invariant motion in the $y$ direction, the magnetic field induces a polarization in the $y$ direction. However, the $y$ component of the stationary flux is zero due to the cancellation of fluxes arising from the polarization in the $x$ and $y$ directions. The solid lines show the analytical solutions from Eqs.~\eqref{activeuniform} to \eqref{activeuniformflux} and the circles depict the results from Brownian dynamics simulations. } \label{uniformfig} \end{figure} \begin{figure} \centering \resizebox*{1\linewidth}{5cm}{\includegraphics{figure03.png}} \caption{Probability density in (a) the active system and (b) the passive system for different values of $\mu$. The systems are subjected to a spatially inhomogeneous magnetic field such that $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$ with $\lambda=2.0$. For the passive system, the translational diffusivity, $D_t$, has the same value as the active diffusivity, $D_a$. While for the passive system the accumulation of particles is in the vicinity of $x=0$, in the active system it is non-monotonic with local maxima at $x=\pm(2/\lambda)\ln\left[\lambda/(2\alpha_a)\right]$ for $\mu<\lambda^2v_0^2/(8D_r)$.
The lines show the theoretical results from Eqs.~\eqref{densityIA} and \eqref{densityIP}, and the symbols depict simulation results.} \label{density} \end{figure} In this section, we show that novel features emerge in the case of an inhomogeneous magnetic field which have no counterpart in passive systems. We consider a system subjected to an exponentially varying magnetic field such that $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$, where $\lambda$ is a constant. With this choice of the magnetic field, the Fokker-Planck equation in \eqref{fpeoned} can be solved exactly. The stationary probability density is given as \begin{equation} \label{densityIA} g^a(x) = \frac{\alpha_a}{2L}\exp\left[\frac{\lambda|x|}{2}-\frac{2\alpha_a}{\lambda}\left(\exp(\frac{\lambda|x|}{2})-1\right)\right]. \end{equation} Using Eq.~\eqref{polarization}, the polarization in the $x$ direction is given as \begin{subequations} \label{polarIA} \begin{equation} p_x(x) = \frac{l_p\mathop{}\!\boldsymbol{\mathrm{sign}}(x)}{2}\left[\frac{\lambda}{2}\exp\left(-\lambda|x|\right)+\alpha_a\exp(\frac{-\lambda|x|}{2})\right], \label{polarxIA} \end{equation} and similarly \begin{equation} p_y(x) = \frac{l_p\lambda\mathop{}\!\boldsymbol{\mathrm{sign}}(x)}{4\sqrt{\exp(\lambda|x|)-1}}\left[\frac{4\alpha_a}{\lambda}\sinh(\frac{\lambda|x|}{2})-\exp(-\lambda|x|)\right], \label{polaryIA} \end{equation} \end{subequations} is the polarization in the $y$ direction. The $x$ and $y$ components of the stationary flux can be obtained using Eq.~\eqref{fluxa1d}, which read \begin{subequations} \label{fluxIA} \begin{equation} j_x^a(x) = \alpha_aD_a\mathop{}\!\boldsymbol{\mathrm{sign}}(x)\exp\left(-\frac{\lambda|x|}{2}\right)g^a(x), \label{fluxxIA} \end{equation} \begin{equation} j_y^a(x) = -\frac{\lambda D_a\exp\left(-\lambda|x|\right)}{2\sqrt{\exp(\lambda|x|)-1}}\mathop{}\!\boldsymbol{\mathrm{sign}}(x)g^a(x). \label{fluxyIA} \end{equation} \end{subequations} Note that the stationary polarization and stationary flux in the $y$ direction cease to exist in the absence of the magnetic field. We also consider a passive system under resetting and subjected to the same magnetic field as in the active system. The governing Fokker-Planck equation for the system can be easily written by setting $v_0=0$ in Eq.~\eqref{fluxa1d} and substituting Eq.~\eqref{fluxth1d} into Eq.~\eqref{fpeoned}. The stationary solution, $g^p(x)$, of the resulting equation is \begin{equation} \label{densityIP} g^p(x) = \frac{\alpha_p}{2L\mathop{}\!\mathrm{K}_0(\frac{2\alpha_p}{\lambda})}\exp\left(\frac{\lambda|x|}{2}\right)\mathop{}\!\mathrm{K}_1\left(\frac{2\alpha_p}{\lambda}\exp\left(\frac{\lambda|x|}{2}\right)\right), \end{equation} where $\alpha_p=\sqrt{\mu/D_t}$ and $\mathop{}\!\mathrm{K}_0$ and $\mathop{}\!\mathrm{K}_1$ are the modified Bessel functions of the second kind of order 0 and 1, respectively. The $x$ component of the stationary flux can be written as \begin{align} \label{fluxxIP} j_x^p(x) = \frac{D_t\alpha_p^2}{2L\mathop{}\!\mathrm{K}_0(\frac{2\alpha_p}{\lambda})}\mathop{}\!\boldsymbol{\mathrm{sign}}(x) \mathop{}\!\mathrm{K}_0\left(\frac{2\alpha_p}{\lambda}\exp\left(\frac{\lambda|x|}{2}\right)\right), \end{align} and similarly \begin{equation} j_y^p(x) = -\sqrt{e^{\lambda|x|}-1}\, j_x^p(x), \label{fluxyIP} \end{equation} is the flux in the $y$ direction. \begin{figure} \centering \resizebox*{1\linewidth}{5cm}{\includegraphics{figure04.png}} \caption{The $x$ and $y$ components of the flux and orientation are shown in (a) to (d), respectively.
An active particle is stochastically reset to the line $x=0$ at the rate $\mu=2.0$. The particle is subjected to the magnetic field $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$ with $\lambda=2.0$. The solid lines show the analytical solutions from Eqs.~\eqref{polarxIA}--\eqref{fluxyIA} and the circles depict the results from Brownian dynamics simulations. Note that the polarization and flux in the $y$ direction cease to exist in the absence of the magnetic field.} \label{activeIB} \end{figure} Figure~\ref{density}(a) and Fig.~\ref{density}(b) show the probability density in the active and passive systems, respectively. While for the passive system the particles accumulate in the vicinity of $x=0$ for different values of the parameters, in the active system there exists an activity-dependent threshold rate such that for smaller resetting rates, the density distribution of the particles becomes non-monotonic. Below this threshold rate, $\mu<\lambda^2v_0^2/(8D_r)$, the ABPs accumulate in the vicinity of the positions $x=\pm (2/\lambda)\ln\left[\lambda/(2\alpha_a)\right]$. In Fig.~\ref{activeIB} we use Eqs.~\eqref{polarxIA}--\eqref{fluxyIA} to plot the fluxes and the polarization in the active system. Whereas in the case of a constant magnetic field there is no flux in the $y$ direction, inhomogeneity in the magnetic field gives rise to a polarization, which results in fluxes in both the $x$ and $y$ directions. Note that the polarization and flux in the $y$ direction cease to exist in the absence of the magnetic field. Figure~\ref{flux_passive} shows the $x$ and $y$ components of the stationary flux in the passive system. As can be seen, the theoretical results from Eq.~\eqref{fluxxIP} and Eq.~\eqref{fluxyIP} are in good agreement with the simulation results. \begin{figure} \centering \resizebox*{1\linewidth}{!}{\includegraphics{figure05.png}} \caption{Flux in the $x$ and $y$ directions are shown in (a) and (b), respectively. A passive particle is stochastically reset to the line $x=0$ at the rate $\mu=2.0$. The particle is subjected to the magnetic field $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$ with $\lambda=2.0$. The translational diffusivity, $D_t$, has the same value as the active diffusivity, $D_a$. The solid lines show the analytical solutions from Eqs.~\eqref{fluxxIP} and \eqref{fluxyIP} and the circles depict the results from Brownian dynamics simulations. } \label{flux_passive} \end{figure} \begin{figure*} \centering \resizebox*{0.49\linewidth}{6.6cm}{\includegraphics{figure06a.png}} \resizebox*{0.5\linewidth}{6.5cm}{\includegraphics{figure06b.png}} \caption{Spatial control of the Lorentz force (or of the self-propulsion speed) can direct transport without the need for structured geometries. (a) A vector plot of the stationary flux, whose direction is shown by the arrows and whose magnitude is color coded, and (b) a surface plot of the stationary probability density on which the flux is overlaid. An ABP under resetting to the line $x=0$ at the rate $\mu=1.0$ is subjected to the magnetic field such that $\kappa(x)=\sqrt{e^{-\lambda x}-1}$ if $x<0$ and $\kappa(x)=-\sqrt{e^{\lambda x}-1}$ otherwise, with $\lambda=2.0$.\label{transport}} \end{figure*} Transport properties of Brownian particles have usually been studied by considering systems which are restricted within the confines of structured and inhomogeneous environments.
While in many cases, such structured environments can be viewed as confined channels with different boundaries and properties~\cite{malgaretti2019driving, ai2019collective,malgaretti2019special, li2020particle, bressloff2020modeling}, directed transport can be obtained via spatial control of activity~\cite{stenhammar2016light,sharma2017brownian}. Here we show that the Lorentz force can result in directed transport with no need for structured geometries. For a better visualization, we consider an ABP under resetting, subjected to the magnetic field $\kappa(x)=\sqrt{e^{-\lambda x}-1}$ if $x<0$ and $\kappa(x)=-\sqrt{e^{\lambda x}-1}$ otherwise. We show the flux and density in the active system in two dimensions. Figure~\ref{transport} (a) depicts a vector plot of the stationary flux in the system, which clearly shows the particle transport along the $y$ axis. In Fig.~\ref{transport} (b) we show a surface plot of the stationary probability density in which the arrows show the direction of the particle transport. \section{Mean first-passage time} \label{MFPT_sec} We now study the first-passage properties of the system in the case of a fixed target at the origin. The searching particle is stochastically reset to its initial position $x_0$, which is taken to be fixed. The backward Fokker-Planck equation for the survival probability, $G(x;t)$ -- the probability that the searching particle starting at $x$ at $t=0$ has not reached the target up to time $t$ -- can be written as \begin{align} \label{backwardfpe} \frac{\partial G(x;t)}{\partial t} & = A(x)\frac{\partial^2 G(x;t)}{\partial x^2} + B(x)\frac{\partial G(x;t)}{\partial x} -\mu G(x;t)+\mu G(x_0;t), \end{align} where the initial and boundary conditions are $G(x;0)=1$ and $G(0;t)=0$, respectively. While the coefficients $A(x)$ and $B(x)$ for the active system are $D_ae^{-\lambda x}$ and $-D_a\lambda e^{-\lambda x}/2$, those for the passive one are $D_te^{-\lambda x}$ and $-D_t\lambda e^{-\lambda x}$, respectively. We first solve Eq.~\eqref{backwardfpe} and then set $x$ to $x_0$ to find the MFPT (see Appendix~\ref{appendixB} for details). The Laplace transform of the backward Fokker-Planck equation in \eqref{backwardfpe} reads \begin{align} \label{laplacetransform} A(x)\frac{\partial^2\tilde{G}(x;s)}{\partial x^2} + B(x)\frac{\partial\tilde{G}(x;s)}{\partial x} &-(\mu+s)\tilde{G}(x;s) =-1-\mu\tilde{G}(x_0;s), \end{align} where $\tilde{G}(x;s)=\int_0^\infty\mathop{}\!\mathrm{d} t\, e^{-st}G(x;t)$ is the Laplace transform of the survival probability. Solving Eq.~\eqref{laplacetransform} and setting $x=x_0$ we obtain the expressions for the survival probability for the active and passive systems in the Laplace space, which, when evaluated at $s=0$, give the MFPT as \begin{equation} \label{mfptactive} T^a(x_0)=\frac{1}{\mu}\left[\exp\left(\frac{2\alpha_a}{\lambda}\left(\exp(\frac{\lambda x_0}{2})-1\right)\right)-1\right], \end{equation} for the active system, and \begin{equation} \label{mfptpassive} T^p(x_0) = \frac{1}{\mu}\left[\frac{\mathop{}\!\mathrm{K}_1\left(\frac{2\alpha_p}{\lambda}\right)\exp(-\frac{\lambda x_0}{2})}{\mathop{}\!\mathrm{K}_1\left(\frac{2\alpha_p}{\lambda}\exp(\frac{\lambda x_0}{2})\right)}-1\right], \end{equation} for the passive system. Note that the MFPTs of the systems diverge as $\mu\rightarrow 0$ or $\mu\rightarrow\infty$. This implies that there exists an optimal rate at which the MFPT becomes minimal. In Fig.~\ref{MFPT} we show the MFPT as a function of the resetting rate for the active and passive systems.
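The expressions \eqref{mfptactive} and \eqref{mfptpassive} are straightforward to evaluate numerically. The following minimal Python sketch (illustrative parameter values only; not the code used for the figures) locates the optimal rate $\mu^*$ for the active searcher and compares the two MFPTs; as an assumption, the matching of the diffusivities, $D_t=D_a$, is applied here at each value of $\mu$.
\begin{verbatim}
# Numerical evaluation of the MFPT formulas and of the optimal resetting rate.
import numpy as np
from scipy.special import k1
from scipy.optimize import minimize_scalar

lam, v0, Dr = 2.0, 10.0, 20.0            # illustrative parameter choices

def T_active(mu, x0):                    # Eq. (mfptactive)
    Da = v0**2 / (2.0 * (Dr + mu))       # modified active diffusivity
    aa = np.sqrt(mu / Da)
    return np.expm1(2.0 * aa / lam * np.expm1(lam * x0 / 2.0)) / mu

def T_passive(mu, x0):                   # Eq. (mfptpassive), D_t = D_a(mu)
    Dt = v0**2 / (2.0 * (Dr + mu))       # assumed matching of diffusivities
    ap = np.sqrt(mu / Dt)
    z = 2.0 * ap / lam
    return (k1(z) * np.exp(-lam * x0 / 2.0)
            / k1(z * np.exp(lam * x0 / 2.0)) - 1.0) / mu

x0 = 1.0
res = minimize_scalar(lambda mu: T_active(mu, x0),
                      bounds=(1e-3, 50.0), method='bounded')
print("mu* = %.3f, T^a = %.3g, T^a/T^p = %.3g"
      % (res.x, res.fun, res.fun / T_passive(res.x, x0)))
\end{verbatim}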
We compare the results from the theory, given by Eq.~\eqref{mfptactive} and Eq.~\eqref{mfptpassive}, with those from Brownian dynamics simulations. It is clear that there exists an optimal resetting rate, $\mu^*$, that minimizes the time for the searcher to reach the target. The optimal resetting rate decreases exponentially with increasing starting point $x_0$ due to inhomogeneity in the magnetic field. The inset shows how the optimal resetting rate varies with increasing initial position of the particle in the active system. Figure~\ref{ratio} shows the ratio of the MFPT of the active system to its passive counterpart. Interestingly, the active particle is slower than its passive counterpart in reaching the target. The relative slowness increases as $x_0\rightarrow 0$ or $x_0\rightarrow \infty$. This implies that there exists a position $x_0^*$ starting from which the particles reach the target with the minimum time difference. In the limit of large $x_0$, the MFPT for the active system to find the target is exponentially longer than that of the passive one and scales as $\sim e^{\lambda x_0/4}$, as shown by the dashed line. The inset depicts the simulation results for the ratio of the MFPTs for a searcher starting at the origin with the target set at $x_0$. In this case, either the active or the passive searcher can be faster. There is also a starting point for which the two particles have the same MFPT; this occurs in the case of a constant magnetic field (e.g.\ $\lambda=0$) as well. \begin{figure}[t] \centering \resizebox*{1\linewidth}{!}{\includegraphics{figure07.png}} \caption{The mean first-passage times of the active and passive systems are shown in red and blue, respectively. The systems are subjected to the magnetic field such that $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$ with $\lambda=2.0$. For the passive system, the translational diffusivity, $D_t$, has the same value as the active diffusivity, $D_a$. The solid lines show the theoretical predictions from Eq.~\eqref{mfptactive} and Eq.~\eqref{mfptpassive} and the symbols depict the results from Brownian dynamics simulations. The inset shows the optimal resetting rate with respect to the initial position $x_0$. The numerical solution of Eq.~\eqref{mfptactive} is compared with the simulation results.} \label{MFPT} \end{figure} \begin{figure}[t] \centering \resizebox*{1\linewidth}{!}{\includegraphics{figure08.png}} \caption{The ratio of the MFPT of the active particle to that of the passive one for different values of $\mu$. The systems are subjected to the magnetic field such that $\kappa(x)=\sqrt{e^{\lambda|x|}-1}$ with $\lambda=2.0$. For the passive system, the translational diffusivity, $D_t$, has the same value as the active diffusivity, $D_a$. The solid lines show the theoretical predictions from Eq.~\eqref{mfptactive} and Eq.~\eqref{mfptpassive} and the symbols depict the results from Brownian dynamics simulations. For a fixed target at the origin, the active particle is slower than the passive one. The relative slowness increases as $x_0\rightarrow 0$ or $x_0\rightarrow \infty$. This implies that there exists a position $x_0^*$ starting from which the particles reach the target with the minimum time difference. In the limit of large $x_0$, the MFPT for the active system to find the target is exponentially longer than that of the passive one and scales as $\sim e^{\lambda x_0/4}$, as shown by the dashed line.
The inset depicts the simulation results for the ratio of the MFPTs for a searcher starting at the origin with the target fixed at $x_0$. In this case, either the active or the passive searcher can be faster. There is also a starting point for which the two particles have the same MFPT; this occurs in the case of a constant magnetic field (e.g.\ $\lambda=0$) as well.} \label{ratio} \end{figure} \section{Conclusions} \label{conclusion} In this paper, we studied the motion of a charged ABP under resetting and the effect of the Lorentz force. We showed that whereas for a uniform magnetic field the properties of the stationary state of the active system can be obtained from its passive counterpart, novel features emerge in the case of an inhomogeneous magnetic field which have no counterpart in passive systems. In particular, there exists an activity-dependent threshold rate such that for smaller resetting rates, the density distribution of active particles becomes non-monotonic. Moreover, somewhat counter-intuitively, it may take an active particle much longer to reach a fixed target than its passive counterpart in an inhomogeneous magnetic field. We also showed that the Lorentz force can result in directed transport with no need for structured geometries. We would like to emphasize that the choice of the magnetic field is motivated by mathematical convenience, which allows us to analyse the system theoretically. The qualitative behaviour of the system will remain unaffected by other choices of the magnetic field. We note that an ABP in an inhomogeneous activity field and subjected to a constant magnetic field will give rise to the same phenomenology as presented in this study. A possible experimental realization is to reset the particle in a rotating frame of reference using optical tweezers. By rotating the reference frame one can induce a Coriolis force which acts in the same way as the Lorentz force arising from an external magnetic field~\cite{kahlert2012magnetizing}. From a future perspective, it would be interesting to investigate the effect of stochastic resetting on inertial ABPs~\cite{mandal2019motility,caprini2020inertial}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \IEEEPARstart{T}{his} work is motivated by developments in data handling in nuclear and particle physics. However, its applicability is not limited to those fields. Experiments in nuclear and particle physics are growing, which implies an increasing amount of data that needs to be handled. This is caused by an increase in the number of detectors employed, finer segmentation and higher event rates. Of particular interest for this work is the recording of signal traces, because this is associated with a dramatic increase of data that need to be transferred, compared with a simple digitization of pulse amplitudes. \revlabel{revtest} To illustrate the development of experimental setups, we consider two front-line particle physics experiments almost 30 years apart. We compare the ATLAS (\emph{A Toroidal LHC ApparatuS}) experiment at LHC, CERN, which started data-taking in 2009, with the UA1 (\emph{Underground Area 1}) experiment at Sp\={p}S, CERN, which started data-taking in 1981. Concerning data production, UA1 was designed to deliver around \SI{3}{\mega\byte\per\s}, mainly limited by the speed of writing to magnetic tape~\cite{art:astbury}. The data acquisition of ATLAS, on the other hand, stores around \SI{320}{\mega\byte\per\s}~\cite{pdf:atlasfact}, with much higher internal data rates. The increase of a factor 100 in recorded data rate over a time span of 30 years is compensated by the substantial improvement of commercial development in both communication and storage. Considering the evolution of Ethernet between 1980 and 2010, we have witnessed an increase of about a factor 20 every 10 years in bandwidth~\cite{proc:latha,pdf:ethalliance}, with the major increase in the latter half of the timespan. After 2010, however, a lower rate of growth, a factor 4 every 10 years, starts to appear. For data storage, between 1980 and 2010, the increase was on average a factor 30 every 10 years, with a peak between 1990 and 2005 where the areal density doubled and prices per byte fell by half on a yearly basis~\cite{art:morris}. Also this pace has slowed down since around 2010, with instead a factor 4 every 10 years~\cite{art:wood,art:nord}. This slowdown in industry development poses new data acquisition challenges for both transmission speed and storage. A particular case when these are in high demand is when scientists are interested in storing entire traces, i.e.\ raw data directly from flash-ADCs, for example during testing or debugging of detectors and data processing procedures. In this case, the amount of data is much larger, easily by a factor 20--1000 \cite{art:gretina}. \IEEEpubidadjcol One way to cope with these challenges is to increase capital expenditure to buy newer and better performing equipment. However, the need to reduce costs leads to a different approach, where we aim to reduce the size of the data to be handled. This can be achieved through \revlabel{revnoempthdatacompr}data compression. \revlabel{revtracedescription} A typical example of the traces considered is time-series data from flash-ADCs, which usually are slowly varying, with short intervals of larger variations due to pulses. The series data can also be information from adjacent channels, e.g.\ coupled strips of Si detectors, which can exhibit similar correlation characteristics. If compression is employed as software running on a PC, only data which has already been sent from the signal acquisition unit can be reduced. This gives no reduction in the transfer rate demands.
To address both limitations, an implementation of the compression directly on the FPGA, where the initial signal processing takes place, is needed. This article presents a simple yet effective lossless compression method that can be applied to sequences of correlated data. The method allows a straightforward and fast implementation in FPGAs as well as CPUs, \revlabel{revopensource} and is available as open source software. This paper is structured in the following way: First, already available solutions are reviewed, followed by a description of the present routine. Optimisation possibilities, both regarding compression efficiency and resource utilisation, are then discussed. This is followed by descriptions of the interfaces to the FPGA compression module and the CPU decompression code. The storage costs of both noise and pulses are then modeled, and verified using synthetic trace simulations. Finally, the achieved storage cost reduction is benchmarked using traces from actual detectors. \section{Overview of available solutions} Ideas for data compression on front-end electronics are not new. Scientists working on large detectors have already faced the problem of how to efficiently compress data, albeit with different boundary conditions than in our case. Both \emph{lossy} compression, where a part of the initial information is lost to accomplish a reduction~\cite{art:falchieri,art:nicolaucig}, and \emph{lossless} compression, where the initial information can be fully reconstructed, can be achieved following different approaches. One is to discard parts of the signal with no or little information (\emph{zero-suppression}~\cite{art:werbrouck}). Another approach is to use a \emph{variable length coding}~\cite{book:khalid}, such as \emph{Huffman coding}~\cite{art:huffman} as shown in~\cite{art:mazza}, or \emph{Golomb-Rice coding}, which is used in~\cite{art:ammendola}. The effectiveness of such algorithms is based on knowledge of the probability distribution of the original data values. Usually this knowledge is gained from inspecting the whole or a representative pool of the data undergoing compression. This requires storing and analysing a representative sample of the data during setup, in order to tune the compression configuration to the signal and ADC operation parameters. As the signal characteristics have a tendency to change within and between calibration and production data, \revlabel{revopinconvenience} causing operational inconveniences, such approaches are not suitable for our purpose of a generic, configuration-free compression method \revlabel{revgenericfortraces} for traces, as the retuning causes additional work when operating detectors. In some cases, through a \emph{pre-processing} of the incoming data, a more advantageous probability distribution can be exploited. A common approach is the calculation of differences between values~\cite{art:patauner,art:duda,art:kobayashi,art:badier,art:biasizzo}. These differences may be between sampled data and a model~\cite{art:patauner}, between sampled data and a reference value (base)~\cite{art:duda,art:kobayashi}, or between consecutive samples~\cite{art:patauner,art:kobayashi,art:badier}. When dealing with signal traces, which are sampled at rates high enough that consecutive samples have values close to each other, i.e.\ are correlated, the latter approach delivers a distribution dominated by small values.
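As a toy illustration of this effect, consider the following minimal C sketch (the trace values are invented for the example): it prints the first differences of a slowly varying trace containing one pulse, and away from the pulse the differences stay within a few counts.

\begin{verbatim}
#include <stdio.h>

/* Illustrative sketch, not part of any compression routine: an
 * invented trace with a flat, noisy baseline and one pulse.  The
 * printed differences cluster near zero, so a variable-length code
 * can store most of them in very few bits. */
int main(void)
{
    const int trace[] = { 1003, 1004, 1002, 1003, 1002, 1010, 1055,
                          1092, 1071, 1038, 1017, 1007, 1003, 1002 };
    const int n = sizeof trace / sizeof trace[0];

    for (int i = 1; i < n; i++)
        printf("sample %4d  difference %+4d\n",
               trace[i], trace[i] - trace[i - 1]);
    return 0;
}
\end{verbatim}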
\section{Operating principle} The difference predicted trace compression (DPTC) presented in this paper is based on preserving only those least significant bits which hold the information necessary to recover each value. Although this does not correspond to a real Huffman coding, the result is to encode the more common smaller values, i.e.\ those closer to zero, with shorter sequences. This approach is quite similar to the one presented in~\cite{art:kobayashi}, where one sample works as base value and the following three samples undergo the differencing treatment. The base value can be chosen arbitrarily. We use the first value of each trace, with all following samples subject to the difference processing. The resulting differences are organised in groups of four, and all the samples in one group are stored using the same number of bits. A small header containing information about the encoding is placed at the beginning of each group. \input{figure_module} Our implementation is organised in two steps, as shown in \cref{fig:module}. First, the procedure calculating the differences is applied to the input data. The original samples consist of a sequence of \emph{n}-bit data words, where \emph{n} is given by the bit resolution of the sampling ADC. The current design allows \emph{n} to have any value in the range 5--16. The second stage is responsible for packing the differences into a stream of 32-bit words. \subsection{Differencing procedure} \label{sec:diff} The first stage treats each value according to the following rules: \begin{enumerate} \item Calculate the difference to the previous value. \item Store the difference, possibly with inverted sign. This is because the binary encoding is slightly \emph{asymmetric}: with a certain number of bits, it can store one more negative value than positive. With e.g.\ 3 bits, the eight differences $-4, -3, -2, \ldots, 3$ can be stored. For flat (noise-like) parts of a trace, any deviation from zero will generally be followed by a difference of the opposite sign. To make negative values more common than positive, a sign-changing scheme is applied: if a stored value is negative, the next non-zero value is stored with inverted sign; while, if positive, the next is stored as is. A value of zero does not change how following values are stored. \end{enumerate} Note that in all operations, only $n$ bits are considered, i.e.\ the differences are allowed to wrap (arithmetic is modulo-$2^n$). This does not introduce any ambiguity. \subsection{Group creation} \label{sec:chunk} \input{figure_bitschunk} The values are stored in groups of four, using the same number of bits, $m$, for each value in a group. This is illustrated in \cref{fig:bitschunk,fig:single_cnk}. Since the stored values are differences, both positive and negative values must be representable (in \emph{two's complement} representation). Since each value may require a different number of bits to be represented, the widest representation needed by any value in a group is used. The number of bits used for values in each group is stored in a group header, placed before the actual data. Considering consecutive groups, it is worth noticing that the number of bits needed will often not change much; therefore a \emph{short} and a \emph{long} encoding of the number of bits are employed, see \cref{fig:bitschunk}. The short header consists of two bits: if the encoded value is 1, 2, or 3, the number of bits to use for the group is the same as for the previous group with a change of $-1$, $0$ or $+1$ bits, respectively.
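This bit-width bookkeeping can be sketched in C as follows (a simplified illustration with function names of our choosing, not the production code). A return value of 0 from \texttt{short\_header} selects the long encoding, described next.

\begin{verbatim}
/* Illustrative sketch of the group bit-width rules, not the
 * DPTC production code. */

/* Two's-complement width of one difference: m bits cover the
 * range -2^(m-1) .. 2^(m-1)-1.  The modulo-2^n wrapping is
 * assumed to have been applied already. */
static int bits_needed(int v)
{
    int m = 1;
    while (v < -(1 << (m - 1)) || v > (1 << (m - 1)) - 1)
        m++;
    return m;
}

/* Common width m for a group of four differences; storing 0 bits
 * per value is not supported, so m is at least 1. */
static int group_bits(const int d[4])
{
    int m = 1;
    for (int i = 0; i < 4; i++)
        if (bits_needed(d[i]) > m)
            m = bits_needed(d[i]);
    return m;
}

/* Two-bit short header: 1, 2 or 3 encode a change of -1, 0 or +1
 * bits relative to the previous group; 0 selects the long encoding. */
static int short_header(int m, int m_prev)
{
    int delta = m - m_prev;
    return (delta >= -1 && delta <= 1) ? delta + 2 : 0;
}
\end{verbatim}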
If the value of the two-bit short header is 0, the encoding is long and contains the full difference of bits stored per value. Since some values are already covered by the short encoding, an offset of 2 is applied to the full difference. This is encoded using $k$ bits, which is chosen such that any needed difference, at most $n-3$, can be stored; \revlabel{revformulak} $k = \lceil \log_2 (n-3) \rceil$, i.e.\ 1 bit for $n = 5$, 2 bits for $n \leq 7$, 3 bits for $n \leq 11$, and 4 bits for $n \leq 19$. The number of bits per stored difference is interpreted with a bias of 1, meaning that storing 0 bits per value is not supported. This is a conscious choice: supporting 0-bit values (i.e.\ minimal encoding of groups with all value differences 0) would make the code for CPU decompression (and compression) more complicated. Since ADCs usually are operated with noise in the least significant bit, it is also expected to have limited practical use. The data values are then stored with the necessary number of bits for the group. Each data value is stored with a bias relative to the most negative value that can be stored with the given number of bits. This simplifies decoding, as the stored value only has to be unmasked and the bias subtracted. This avoids a cumbersome sign extension operation by the CPU decoder. As an exception to the above rules, the first data value is stored alone and fully, using \emph{n} bits. This avoids storing the entire first group of data with many bits. \input{figure_single_cnk} \subsection{Output word formation} \label{sec:shifting} The resulting stream of bits is then packed in 32-bit words, which are filled from the least significant bits. When a value to store cannot fit, the completed output word is emitted and the remaining bits are stored in the next 32-bit output word. Information about the number of original data values, the number of data words produced by the compression, and $n$ is needed by the decompression procedure. These values are not recorded by our routine; it is therefore the responsibility of the user to retain this information. \section{Optimisation} The algorithm described in the previous section can be optimised in different ways. However, the improvements obtained by applying additional procedures depend on many aspects, such as noise level, signal shape, and the distribution of signal amplitudes. Note that an optimisation which improves one characteristic may undermine other aspects. We present a few ideas together with a short analysis of each one, discussing advantages and disadvantages. \subsection{Compression factor optimisation} \subsubsection{Linear predictor} With this additional pre-processing, the linear component of long sloped parts of a trace is removed by a second differencing of the data. This aims at a distribution of values that is narrower around zero. However, for flat parts of a trace, which mainly contain noise, such a double difference leads to a wider distribution. Thus, in order to give an overall improvement, this procedure must only be applied for sufficiently long, sloped sequences. This is controlled by a heuristic using the observation that consecutive differences in unfavorable regions often change sign or are zero, and thus can be detected by a three-most-recent rule. The second differencing is switched off when at least one sign change or a zero has occurred among the previous three values.
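Our reading of this detection rule is sketched below in C (start-up handling and the integration with the difference stage are omitted):

\begin{verbatim}
#include <stdbool.h>

/* Illustrative sketch of the three-most-recent rule: the second
 * differencing is switched off when, among the three most recent
 * first differences, any value was zero or a sign change occurred. */
bool second_difference_active(const int d[3])   /* d[2] most recent */
{
    for (int i = 0; i < 3; i++)
        if (d[i] == 0)
            return false;           /* zero difference: flat region */
    for (int i = 1; i < 3; i++)
        if ((d[i] > 0) != (d[i - 1] > 0))
            return false;           /* sign change: noise-like region */
    return true;                    /* long sloped stretch */
}
\end{verbatim}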
While at first appearing promising for synthetic traces, tests on actual traces show that this optimisation does not bring any improvement. This is connected to the fact that usually most of the pulses in the digitised traces only have small amplitudes; the few improvements by this predictor are therefore neutralised by it activating spuriously in flat parts. The optimisation is implemented in the code, but deactivated by default. \subsubsection{Number of values in a group} The group size can also be varied to optimise the compression efficiency, see \cref{fig:chunksize}. Using smaller groups requires more storage space due to the more frequent headers, while larger groups will encode unnecessary bits for more samples. The figure shows an optimum around six samples per group, with gradual losses at larger values and steep losses below three. \input{figure_chunksize} We have chosen to code four values in each group. The loss is about \SI{1.2}{\percent} compared to groups of six values. Fixing the number as a power of two might be useful for a future parallelised unpack code. \revlabel{revchoosefour} We choose four rather than eight as this leads to shorter pipelining in the group formation part of the circuit. \subsection{Circuit optimisation} \subsubsection{Additional pipeline stages} \label{sec:pipeline} The achievable minimum clock cycle period in a digital circuit depends on the propagation delay of the longest combinational logic chain between register latches. In our case the circuit is described in VHDL, where the model and grade of the FPGA that is targeted will affect which logic expression becomes the longest. Adding pipeline stages to split the longest paths helps to lower the minimum clock cycle. At the same time, however, introducing a pipeline stage causes more LUTs\footnote{Look-up table, a basic FPGA building block. \revlabel{revlutnotff}The other basic unit is signal registers, i.e.\ flip-flops (FF).} to be used, as well as flip-flops, leading to a trade-off between resource usage and speed. In order to allow flexibility when using the code, a few generic parameters control a number of optional pipeline stages. Since the synthesised code uses more LUTs than flip-flops, compared to the usually available ratio on FPGAs, we concentrate on the LUT usage for the circuit optimisation comparisons. By performing VHDL synthesis for all combinations of the optional pipeline stages, and directing the respective FPGA development toolchain to optimise for speed, the achievable performance as a function of resource usage can be determined. The results are shown in \cref{fig:freq_vs_luts} and summarised in \cref{tab:clockocc}. \revlabel{revdescribefreqvslutsfig} Locations further down in the figure indicate that shorter clock periods can be used, and further to the left mean less resource consumption. For each circuit, only the results which improve the achievable clock frequency for a certain resource usage are kept; thus the short curves mainly show the improvements possible as more pipeline stages are enabled. To a smaller degree they also come from the ability of the toolchains to trade resource usage for speed. \revlabel{revmostused16add} To compare with the most used constructions (adders, subtractors, comparators), 16-bit adders are also shown in the figure. The VHDL code allows the minimum period of the clock to be below \SI{10}{\ns} (i.e.\ \SI{100}{\MHz}) even on 10-year-old FPGAs, and it can easily be configured to reach below \SI{5}{\ns} with additional pipeline stages.
On more modern FPGAs, going below \SI{3}{\ns} seems rather easy. \revlabel{revfastasfadc} If the compression circuit is operated continuously, directly fed by the data generator (e.g.\ a flash-ADC), the clock speed needs to match the sampling rate, since the circuit can process one sample per clock cycle. When compressing only selected traces which first have been recorded into temporary memory buffers, a slower clock can be used for the compression circuit. \input{figure_freq_vs_luts} \input{table_clockocc} The single most expensive component of the circuit is the barrel shifter, which aligns the encoded data at the next position in the output word. For $n=16$, the shifter input is 22 bits wide, with the additional 6 bits coming from the potentially long encoded header. The shift amount is in the range 0 to 37, inclusive: 0 to 31 positions depend on how many bits are already used in the output word, and the additional 0, 4, or 6 positions depend on the header (long, short, or none). This gives a 60-bit output. \revlabel{revmentionbareshifterfig} The cost and performance of the shifter units are also shown in \cref{fig:freq_vs_luts}. \revlabel{revmovesectIVb2} \subsubsection{Barrel shifter vs.\ multiplier units} A barrel shifter on FPGAs is normally realised as one multiplexer for each output bit (sharing some parts of the first stage selectors of each multiplexer). Since a shift can also be expressed as a multiplication of the input value with $2^i$, where $i$ is the shift amount, it can alternatively be implemented using multiplier units in FPGAs. For the second factor, the input value is generated as $2^i$, i.e.\ a one-hot encoding of the shift amount. One could imagine this to be beneficial when generic LUT resources are scarce; however, for the cases tested, it is not. The generation of the $2^i$ input value is rather expensive, as it requires individual selectors for each bit of the $2^i$ value. Also the combination of the output values from the several multiplier units, often $9 \times 9$ or $18 \times 18$ wide, is rather expensive. The resource usage for 22-bit, 38-position left-shifters implemented in the two ways is also compared in \cref{fig:freq_vs_luts}. \revlabel{revlutvsalln} The results in \cref{fig:freq_vs_luts} and \cref{tab:clockocc} are for $n=16$. Similar tests for $5 \le n < 16$ show that for each bit removed, the needed number of LUTs shrinks on average by \SIrange{4}{5}{\percent}, and the attainable minimum period decreases by \SIrange{1}{2}{\percent}, depending on FPGA model. \section{VHDL module interface} The interface to the VHDL compression module is a single entity, with input and output signals as seen at the top and bottom of \cref{fig:module}. Optional pipeline stages are configured using a generic map. The circuit inputs are: \begin{itemize} \item \texttt{clk}: clock signal; \item \texttt{reset}: reset signal, given for at least as many cycles as the pipeline has stages; \item \texttt{input}: $n$-bit data value to compress; \item \texttt{dv\_in}: data valid signal: set to '1' every clock cycle an input value is provided; \item \texttt{flush}: flush signal: set to '1' after the last input value has been given, and held until \texttt{done} is reported back. This forces the last output word to be emitted, especially when it is not fully occupied.
\end{itemize} The output signals are: \begin{itemize} \item \texttt{output}: 32-bit output data word; \item \texttt{dv\_out}: data valid signal: '1' every time the output word is filled, signaling the presence of a completed data word to be stored; \item \texttt{done}: informs that the last input value has been processed and the final output word was produced (possibly in the current cycle). \end{itemize} \section{Decompression} The decompression is performed by one C function with the following parameters: \begin{itemize} \item \texttt{compr}: pointer to the 32-bit words of the compressed input buffer; \item \texttt{ncompr}: number of elements in the input buffer; \item \texttt{output}: pointer to a buffer of 16-bit items for the decompressed values; \item \texttt{ndata}: number of original/decompressed values; \item \texttt{bits}: number of bits of each value that was stored ($n$). This must be the same as the number configured during compression. \end{itemize} On success, 0 is returned, otherwise a non-zero value. \input{table_decompression} The routine will report decompression failure on malformed compressed data, e.g.\ if there are non-zero bits left in the input buffer, or when entire words have not been used. The decompression routine will not read items beyond the end of the source buffer even if it runs out of data, e.g.\ due to a corrupted data stream. \Cref{tab:decompression} shows the typical performance, which only has a small dependence on the actual data values. \section{Compression efficiency---Storage cost} The contributions to the compressed data size can be divided into two parts: \begin{enumerate} \item The cost of storing traces with no pulses, i.e.\ only containing the digitization noise. This is described as a cost per sample. \item The cost of storing a pulse, described as an additional cost for the entire pulse. \end{enumerate} There is a natural interplay between the two, as the noise affects the additional cost to store a pulse. This effect is also addressed below. In the following, we use the variables $c$ for cost and $b$ for bits. To specify these, subscripts are used: $\mathcal{N}$ for noise, $\mathcal{T}$ for trace, $\mathcal{S}$ for sample, $\mathcal{P}$ for pulse, and $\mathcal{B}$ for a small pulse (bump). Gaussian noise is described by its amplitude $\sigma_{\mathcal{N}}$. The amplitude and width (std.\ dev.) of Gaussian-shaped pulses are given by $A_{\mathcal{P}}$ and $w_{\mathcal{P}}$. \subsection{Bare trace cost} \label{sec:tracecost} The cost of storing a trace without pulses has two parts: the size of the headers and the size of the encoded values, i.e.\ the differences. The cost of storing the differences depends on the noise content, most easily expressed as the number of bits of noise $b_\mathcal{N} = \log_2 \sigma_{\mathcal{N}}$. Ignoring the peculiarities of the first group, which may require a long header encoding, the estimated cost for a trace $\avgbr{c_\mathcal{T}}$ will be proportional to its length $n_\mathcal{T}$: \begin{linenomath*} \begin{equation} \avgbr{c_\mathcal{T}} = n + (n_\mathcal{T}-1) \avgbr{c_\mathcal{S}} + 15.5. \end{equation} \end{linenomath*} The first sample has a fixed cost $n$. The constant 15.5 accounts for the average number of unused bits in the last output word at the end of a trace. A first approximation, denoted by the tilde, for the average cost per noise sample is \begin{linenomath*} \begin{equation} \avgbr{\widetilde{c_\mathcal{S}}} = 0.5 + b_\mathcal{N} + 1 + o.
\end{equation} \end{linenomath*} The first half bit comes from the short group header, using two bits every four samples. The additional one comes from the differences encoding both positive and negative entries, i.e.\ effectively a sign bit. The term $o$ is an overhead, since the grouping of values causes some more bits than necessary to be used. \revlabel{revtotalcostlownoise} To model the transition from very small noise levels, where the total cost is \SI{1.5}{bits/sample}, to the proportional regime, \revlabel{revsmoothtransition} a smooth transition function $g(x) = \frac{1}{f}\log_2 (1+x^f)$ is used for $b_\mathcal{N}$, with $x = \sigma_{\mathcal{N}}$. As wanted, $g(x) \to 0$ as $x\to{0}$ and $g(x) \to \log_2 x$ for $x\gg{1}$, while the parameter $f$ controls the smoothing. This yields: \begin{linenomath*} \begin{equation} \avgbr{c_\mathcal{S}} = 0.5 + \frac{1}{f}\log_2 (1+{\sigma_{\mathcal{N}}}^f) + 1 + o. \end{equation} \end{linenomath*} This is illustrated for Gaussian and uniform noise in \cref{fig:comprsample}, \revlabel{revcontrolparamf8} where good fits are achieved with $f=8$. \revlabel{revmovefig6caption} For uniform noise, the range of differences is twice as large as the value distribution (due to also encoding negative entries), explaining the use of, on average, one more bit per sample in addition to the short header and $b_\mathcal{N}$. This is modeled by the equation above, shown as a solid line. For Gaussian noise, the distribution of differences between consecutive samples is wider by a factor $\sim\sqrt{2}$ than the original distribution, and any large value in a group of four leads to longer encodings. The remaining small fractional costs per sample are in both cases likely caused by the occasional use of long group headers. \input{figure_comprsample} \subsection{Pulse cost} \label{sec:pulsecost} The cost of a pulse is best described as the total cost of the pulse, and not as a cost in bits per sample. For the following discussion, pulses are assumed to have a Gaussian shape, \revlabel{revgaussshape} as opposed to the double exponential function considered in \cref{fig:single_cnk}. Detector pulses can be considered as composed of two parts with different time constants (i.e.\ widths) for the rising and falling parts. Even if this may be a rather rough approximation of real pulses, especially for the trailing part, it is practical, \revlabel{revgaussshapetwo} since Gaussian functions are efficiently and familiarly described using their widths and amplitudes. Since it is differences that are stored, the important parameter is not the amplitude $A_\mathcal{P}$ of a pulse, but its steepest slope, which scales as $\frac{A_\mathcal{P}} {w_\mathcal{P}}$. As a first approximation, denoted by the tilde, the cost is proportional to the number of bits needed to store these differences, as well as to the width of the pulse, \begin{linenomath*} \begin{equation} \label{eq:pulsecostbadlimitbehaviour} \avgbr{\widetilde{c_\mathcal{P}}} = a w_\mathcal{P} \log_2 \left( \frac{A_\mathcal{P}} {w_\mathcal{P}} \right). \end{equation} \end{linenomath*} The scale is given by the proportionality constant $a$.
It turns out that this formula works rather well, if modified to account for the facts that even for small pulses the cost is not negative (by adding 1 inside the logarithm, together with the control parameter $b$), and that very narrow pulses still affect the storage size of at least one entire group ($d$ within the square root): \begin{linenomath*} \begin{equation} \label{eq:pulsecost} \avgbr{c_\mathcal{P}} = a \sqrt{w_\mathcal{P}^2 + d^2} \;\frac{1}{b} \log_2 \left( 1 + \left( \frac{A_\mathcal{P}} {w_\mathcal{P}} \right)^b \right). \end{equation} \end{linenomath*} The modification is thus adjusted by the control parameters $b$ and $d$. \subsection{Pulse-noise interaction} \label{sec:pulseinteract} The above description, \cref{eq:pulsecost}, works in the limit where the pulse is large compared to the background noise. When this is not the case, the additional cost of storing the pulse will be \emph{smaller}, since the pulse-associated part of the differences will to some extent be covered by the noise storage cost. This can be modeled by \begin{linenomath*} \begin{equation} \label{eq:smallpulsecost} \avgbr{c_\mathcal{B}} = \sqrt{\avgbr{c_\mathcal{P}}^2 + \avgbr{c_\mathcal{NB}}^2} - \avgbr{c_\mathcal{NB}}. \end{equation} \end{linenomath*} The correction is the cost of storing the noise for a stretch of samples proportional to the pulse width: \begin{linenomath*} \begin{equation} \avgbr{c_\mathcal{NB}} = q \sqrt{w_\mathcal{P}^2 + d^2}\; \frac{1}{f} \log_2 \left( 1 + {\sigma_\mathcal{N}}^f \right) , \end{equation} \end{linenomath*} where $q$ is a proportionality constant. \input{figure_compr_pulses} \subsection{Storage cost verification---synthetic traces} The storage cost described above, culminating in \cref{eq:smallpulsecost}, has been verified by simulating a large number of traces with Gaussian pulses, where the parameters $A_\mathcal{P}$, $w_\mathcal{P}$, and $\sigma_\mathcal{N}$ were varied. A global fit suggests the following values for the control parameters: $a = \num{5.6}$, $b = \num{1.3}$, $q = \num{33}$, $d = \num{2.6}$ and $f = \num{7.7}$, with a parameter uncertainty of up to \SI{10}{\percent}. \Cref{fig:compr_pulses} shows the $\sigma_{\mathcal{N}} = 2.0$ case. Simulations were performed by building, for each set of parameters, a set of \num{15E6} traces, each made of 500 samples, with Gaussian noise $\sigma_\mathcal{N}$. In each, a Gaussian pulse ($A_\mathcal{P}$, $w_\mathcal{P}$) was added to the trace. To average over discretisation effects, both the (noise) baseline and the center of the pulse were randomised, trace by trace, with fractional offsets. Although \cref{fig:compr_pulses} shows a good agreement between \cref{eq:smallpulsecost} and the data, larger differences emerge for small values of $A_\mathcal{P}$ and $w_\mathcal{P}$. These correspond to the limits handled by the modifications between \cref{eq:pulsecostbadlimitbehaviour} and \cref{eq:pulsecost}, which are thus seen to only partly address these edge effects.
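Before turning to measured traces, the fitted model can be made concrete numerically. The following C sketch (our illustration; the pulse and noise values in \texttt{main} are invented) evaluates the additional cost $\avgbr{c_\mathcal{B}}$ of one pulse using the fitted control parameters:

\begin{verbatim}
#include <stdio.h>
#include <math.h>

/* Illustrative sketch of the storage-cost model above, using the
 * control parameters from the global fit. */
static const double a = 5.6, b = 1.3, q = 33.0, d = 2.6, f = 7.7;

static double smooth_log2(double x, double p)   /* (1/p) log2(1+x^p) */
{
    return log2(1.0 + pow(x, p)) / p;
}

/* Additional storage cost of a Gaussian pulse (A_P, w_P) on top of
 * Gaussian noise sigma_N, following the bump-cost model. */
static double bump_cost(double A_P, double w_P, double sigma_N)
{
    double width = sqrt(w_P * w_P + d * d);
    double c_P  = a * width * smooth_log2(A_P / w_P, b); /* pulse cost    */
    double c_NB = q * width * smooth_log2(sigma_N, f);   /* noise stretch */
    return sqrt(c_P * c_P + c_NB * c_NB) - c_NB;         /* bump cost     */
}

int main(void)
{
    printf("large pulse: %6.1f bits\n", bump_cost(200.0, 8.0, 2.0));
    printf("small pulse: %6.1f bits\n", bump_cost(3.0, 8.0, 2.0));
    return 0;
}
\end{verbatim}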
\input{table_realtraces} \subsection{Storage cost verification---actual traces} \label{sec:compr_rate} \Cref{tab:realtraces} shows the compression efficiencies for some different collections of actual data. They are compared to the common gzip~\cite{web:gzip} and xz~\cite{web:xz} generic compression routines (at their normal settings). For the generic routines, all data of each file was stored in a binary file with 16-bit values. For a fair comparison, the overhead size of storing an empty compressed file was subtracted. In general, the DPTC results are quite similar to the xz (LZMA) results, and well below the gzip results. The main exceptions are the LaBr$_3$ collections marked $^\mathit{a}$, where the data is very flat (virtually no noise) except for the pulses. Here the DPTC routine still uses its minimum of 1.5 bits/sample. This effect is also seen for the three synthetic traces marked $^\mathit{b}$, which have constant values. \revlabel{revrealhuffman} Since Huffman encoding~\cite{art:huffman} is a common approach for compression where the typical distribution of values is known, the actual traces have, for comparison purposes, also been compressed using this approach. It is applied after a difference stage, with the Huffman encodings individually optimised for each data set. To allow average costs below one bit per sample for very flat traces, encodings of up to four consecutive values using one symbol were also allowed, when such stretches of values would account for more than \SI{1}{\percent} of the symbols. In these tests, the \SI{1}{\percent} threshold was only passed for the cases marked $^\mathit{a}$ and $^\mathit{b}$. Overall, the Huffman compression scheme delivers results slightly better than both the DPTC routine and the generic compression routines, but needs to be optimised to the characteristics of the signals. Finally, note how close the costs per sample are to the expectations for only storing the respective noise content, showing that the storage cost contributions from pulses are negligible. \subsection{Caveat emptor---how to ignore ADC noise} In case the original data contains an excessive number of least-significant bits with noise that shall not be stored, they must be shifted out of the original data before the values are given to the DPTC compression routine. Just masking them out will \emph{not} improve the compression efficiency, as the routine is looking for the most significant bit of the differences that need to be stored. On the other hand, using a compressor with $n$ larger than necessary causes little extra cost. Few, if any, extra bits will be used, since mainly $k$ will potentially be affected, see \cref{fig:bitschunk}. \revlabel{revkeepnoisybits} Note that the choice of omitting least-significant bits is a delicate decision. The finally achievable resolution of a measurement may be improved by retaining some additional least-significant bits, since it may allow analysis of the later decompressed traces to partially recover the effects of quantization error and differential non-linearity in the ADC, by averaging or fitting. When applicable, in oversampled parts of a trace, much larger savings than those obtained by omitting some least-significant bits may be achieved by downsampling, i.e.\ summing adjacent samples before compression, thus storing fewer samples, but with better resolution. \section{Conclusion} A lossless compression routine which addresses both the transmission bandwidth and storage cost challenges associated with recording flash-ADC traces has been presented. The routine can be directly integrated in front-end electronics and \revlabel{revcanhandlefast} can handle data streams on-the-fly at rates of \SI{400}{\mega samples/s} in the controlling FPGA. Calculation of the differences between consecutive trace samples concentrates the most frequently occurring values around zero.
The compression is completed by storing the values in groups of four in a stream of 32-bit words, keeping only the necessary least-significant bits, thereby yielding a simple yet effective variable-length code. A model for the storage cost was developed by first considering the influence of the group headers as well as the retained ADC noise. The additional cost of storing a pulse was expressed in terms of its amplitude and width. By compressing a large set of artificial traces with varying characteristics, both the free parameters and the validity of the model were determined. The method was then applied to actual data from different kinds of detectors. The compression efficiency was found to be comparable to popular general-purpose compression methods (gzip and xz). It was shown that the dominating cost of storing actual traces is generally given by the retained ADC noise, and not by the pulses. It is therefore important for users to carefully assess how many least-significant bits shall be kept, in case they are noisy. Apart from that, there are no parameters that need to be adapted, which is of particular interest for experiments employing hundreds or thousands of detector channels. Computer code for the FPGA implementation in VHDL and for the CPU decompression routine in C is available for download~\cite{web:dtpc} as open source software. \section*{Acknowledgments} \label{sec:Acknowledgment} \addcontentsline{toc}{section}{Acknowledgment} The authors would like to extend their thanks to O. Schulz, B. L\"{o}her, S. Storck, and P. D\'{i}az Fern\'{a}ndez for providing test data, and to A. Heinz and D. Radford for valuable discussions. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{s:Intro} A screening experiment is an initial step in a sequential experimental procedure to understand and/or optimize a process dependent upon many controllable factors. Such experiments are common in pharmaceuticals, agriculture, genetics, defense, and textiles (see \cite{dean2006screening} for a comprehensive overview of screening design methodology and applications). The screening analysis aims to identify the few factors that drive most of the process variation, often according to a linear model comprised of main effects, interaction effects, and, in the case of numeric factors, quadratic effects \citep{jones2011class}. Each effect corresponds to one or more factors, and a factor is said to be active if at least one of its corresponding effects is large relative to the process noise; otherwise the factor is said to be inert. Analyses under this class of models often follow effect principles of sparsity, hierarchy, and heredity (see Chapter 9 of \cite{WuHamada}), with the primary goal of correctly classifying each factor as active or inert. A screening design is represented by an $n \times k$ matrix, $\boldsymbol{X}_d$, with rows $\boldsymbol{x}_i^T=(x_{i1},\dots,x_{ik})$ where $x_{ij}$ represents the $j$-th factor's setting for run $i$. To standardize screening designs across applications, continuous factor settings are scaled so $x_{ij} \in [-1,1]$ while categorical factor settings are often restricted to two levels, making $x_{ij}=\pm 1$. We compare $\boldsymbol{X}_d$'s based on the statistical properties of the effects' least-squares estimators because their properties are tractable, particularly their variances and potential biases. The goal then is to identify an $\boldsymbol{X}_d$ that minimizes the individual variances and biases of these effect estimators. Suppose the model is correctly specified and there are designs having unique least-squares estimators for all effects. Then these estimators are unbiased and designs may be compared based on their estimation variances. A design having variances that are as small as possible will improve one's ability to correctly classify factors as active/inert. For models comprised solely of main effects and interactions, orthogonal designs have estimation variances simultaneously equal to their minimum possible value across all designs. Such designs exist only when $n$ is a multiple of 4; for other $n$ it is unclear which design will have the best variance properties. Still, designs should be compared based on how close their variances are to their respective minimum possible values. This approach requires knowledge of the minimum values as well as some measure of closeness. One approach for identifying minimum variances is to approximate them using the theoretical value assuming an orthogonal design exists, but such values may be unattainable. The $c$-criterion \citep{atkinson2007} may be used to identify the minimum variance for a given effect, but without any guarantee of the estimability of the other effects of interest. To remedy this estimability issue, \cite{AllenMoyer2021} proposed the $c_\mathcal{E}$-criterion to calculate these minimum variances exactly. It is less clear how to measure the proximity of a design's variances to their $c_\mathcal{E}$ values. The Pareto frontier approach by \cite{LuPareto2011} is well-suited for this problem but can be cumbersome in practice.
A more practical solution is to evaluate and rank designs according to a single criterion that involves a scalar measure of all the variances. Such criteria should be straightforward to evaluate and optimize, and the resulting optimal designs should have variances close to their $c_\mathcal{E}$ values. Different forms of the $D$- and $A$-criterion (see Section~2.1) are popular variance-based criteria employed in the screening design literature and will be the focus of this paper. Designs that optimize the $D$- and $A$-criteria can coincide for some $n$, but this does not mean the criteria equivalently summarize variances. Consider a screening problem with $n=7$ runs and $k=5$ factors that assumes a main effect model. It is well-known that there always exists a $D$-optimal design comprised of $x_{ij}=\pm 1$, even when $x_{ij} \in [-1,1]$ \citep{box1971factorial}. While other $D$-optimal designs having $x_{ij} \in (-1, 1)$ may exist, the screening literature predominantly fixes $x_{ij}=\pm 1$ with no assumed degradation to the resulting variances. For example, \cite{jones2020Aoptimal} found an $A$-optimal design with $x_{ij}$ values of $\pm 1$ and $0$ having smaller variances compared to $D$-optimal designs comprised of $x_{ij}=\pm 1$ only. Figure~\ref{tab:5F7Rex} shows this $A$-optimal design, which has $x_{14}=x_{15}=0$. Figure~\ref{tab:5F7Rex} also shows the corresponding main effect variances (in ascending order) of the $A$-optimal design and two $D$-optimal designs comprised of $x_{ij}=\pm 1$. The minimum possible variances assuming an orthogonal design exists are $1/7=0.1429$ and the minimum variances under the $c_{\mathcal{E}}$-criterion from \cite{AllenMoyer2021} are $0.1459$. Each of the $A$-optimal design's variances is equal to or smaller than those of the two competing $D$-optimal designs comprised of $\pm 1$. \begin{figure}[ht] \begin{minipage}[b]{.48\textwidth} \centering \begin{tabular}{|rrrrr|} \hline 1 & 1 & 1 & 0 & 0 \\ -1 & -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & -1 & -1 & -1 \\ -1 & -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & -1 \\ \hline \end{tabular} \end{minipage} \hfill \begin{minipage}[b]{.48\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=.75\textwidth, angle = 270]{var5F7R.pdf}}}$ \end{minipage} \caption{(Left) $n = 7,\ k = 5$, $A$-optimal design. (Right) Main effect variances (in ascending order) for $A$- and $D$-optimal designs\label{tab:5F7Rex}. The design ``$D$-optimal $1, 1$'' replaces $x_{14}$ and $x_{15}$ of the left design with $1$. The design ``$D$-optimal $-1, 1$'' is defined similarly. The minimum possible variances assuming an orthogonal design would each be $1/7=0.1429$ and the minimum variances under the $c_{\mathcal{E}}$-criterion from \cite{AllenMoyer2021} are $0.1459$.} \end{figure} As it turns out, the $A$-optimal design in Figure~1 is also $D$-optimal despite having some $x_{ij}=0$. In fact, changing either $x_{14}$ or $x_{15}$ to any value in $[-1, 1]$ produces yet another $D$-optimal design but with equal or larger variances than the $A$-optimal design. The $A$-optimal design in this case, however, is unique. The existence of infinitely many $D$-optimal designs, each with equal or larger variances relative to the $A$-optimal design, is cause for concern about utilizing the $D$-criterion to rank screening designs. In this example, the $A$-criterion was better able to differentiate designs in terms of their ability to minimize the main effect variances simultaneously.
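These variance comparisons are easy to reproduce. The following self-contained C sketch (our own illustration, not code from any of the cited works) forms $\boldsymbol{L}^T\boldsymbol{L}$ for the intercept-plus-main-effects model under the design of Figure~\ref{tab:5F7Rex}, inverts it by Gauss--Jordan elimination, and prints the $D$-criterion value and the five main effect variances:

\begin{verbatim}
#include <stdio.h>
#include <math.h>

#define N 7   /* runs */
#define P 6   /* intercept + five main effects */

/* Gauss-Jordan inversion with partial pivoting; a is destroyed,
 * inv receives its inverse, and det(a) is returned. */
static double det_inv(double a[P][P], double inv[P][P])
{
    double det = 1.0;
    for (int i = 0; i < P; i++)
        for (int j = 0; j < P; j++)
            inv[i][j] = (i == j) ? 1.0 : 0.0;
    for (int k = 0; k < P; k++) {
        int piv = k;
        for (int i = k + 1; i < P; i++)
            if (fabs(a[i][k]) > fabs(a[piv][k])) piv = i;
        if (piv != k) {
            for (int j = 0; j < P; j++) {
                double t = a[k][j]; a[k][j] = a[piv][j]; a[piv][j] = t;
                t = inv[k][j]; inv[k][j] = inv[piv][j]; inv[piv][j] = t;
            }
            det = -det;
        }
        det *= a[k][k];
        double s = a[k][k];
        for (int j = 0; j < P; j++) { a[k][j] /= s; inv[k][j] /= s; }
        for (int i = 0; i < P; i++) {
            if (i == k) continue;
            double g = a[i][k];
            for (int j = 0; j < P; j++) {
                a[i][j] -= g * a[k][j];
                inv[i][j] -= g * inv[k][j];
            }
        }
    }
    return det;
}

int main(void)
{
    /* The A-optimal design of Figure 1 (n = 7, k = 5). */
    const double X[N][5] = {
        { 1,  1,  1,  0,  0}, {-1, -1,  1, -1,  1}, {-1,  1, -1, -1,  1},
        { 1, -1, -1, -1, -1}, {-1, -1,  1,  1, -1}, { 1, -1, -1,  1,  1},
        {-1,  1, -1,  1, -1}
    };
    double G[P][P] = {{0}}, C[P][P];

    /* Form L'L with L = (1 | X). */
    for (int i = 0; i < N; i++) {
        double l[P] = {1, X[i][0], X[i][1], X[i][2], X[i][3], X[i][4]};
        for (int r = 0; r < P; r++)
            for (int c = 0; c < P; c++)
                G[r][c] += l[r] * l[c];
    }
    double det = det_inv(G, C);
    printf("|L'L| = %.0f  (D-criterion, larger is better)\n", det);
    for (int j = 1; j < P; j++)   /* skip the intercept */
        printf("Var(beta_%d) = %.4f\n", j, C[j][j]);
    return 0;
}
\end{verbatim}

The printed variances should fall into two groups, three smaller and two larger values, all above the $c_{\mathcal{E}}$ bound of $0.1459$ quoted above.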
This is not to say $D$-optimal designs are less valuable than $A$-optimal designs. Such designs have been used with great success in practice, and the relative differences of the variances in Figure~1 do not appear large. Whether these differences impact the analysis depends on the ratio of the true main effect, denoted $\beta_j$, to the process standard deviation, $\sigma$. When performing a two-sided $t$-test of the null hypothesis $\beta_j=0$, the associated noncentrality parameter will be $\beta_j/\sigma$ divided by the square root of the variances shown in Figure~1. When $\beta_j/\sigma$ is large, slight differences in the variances will not affect the noncentrality parameter, and hence will not affect the power of the tests. The differences in variances will have a significant impact as $\beta_j/\sigma$ gets smaller. For example, suppose $\beta_j/\sigma=1$ and we perform a $t$-test for $\beta_1=0$ with significance level $\alpha=0.05$. The power for this test under the $D$-optimal design with $x_{14}=x_{15}=1$ is $0.6355$, while for the $A$-optimal design it is $0.7135$. Without any prior knowledge of $\beta_j/\sigma$, it is important then to find a design that decreases the individual variances as much as possible. Based on the effect principles, it is common to fit a main effect model even though interactions and/or quadratic effects may be active. The least-squares estimators for the main effect model may then become biased. Rather than try to estimate all potentially important effects, one can quantify the bias of the estimators and identify a design that simultaneously reduces estimation variance and bias. Let $\boldsymbol{\beta}$ be the vector of the largest collection of effects that may be important and hence captures the true model. Partition $\boldsymbol{\beta}$ into $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$, where $\boldsymbol{\beta}_1$ are the effects we believe are most likely to be important and correspond to the effects in the fitted model, and $\boldsymbol{\beta}_2$ are the remaining effects that are potentially important but ignored in the fitted model. The possible bias from estimating $\boldsymbol{\beta}_1$ under the fitted model when the true model includes all of $\boldsymbol{\beta}$ is $\boldsymbol{A}\boldsymbol{\beta}_2$, where $\boldsymbol{A}$ is the design's so-called alias matrix. \cite{dumouchel1994simple} construct designs under model uncertainty by assigning a prior distribution to $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$, and ranking designs according to the $D$-criterion applied to $\boldsymbol{\beta}$'s posterior covariance matrix. While Bayesian $D$-optimal designs have shown an ability to balance minimizing bias and variance, the possible flaws of the $D$-criterion pointed out earlier are still concerning. Better designs may then be found with a Bayesian $A$-criterion, which has not received much attention in the screening literature. This paper makes two important contributions that build a strong case for constructing screening designs under different forms of the $A$-criterion. The first contribution is a comparison of the behavior of the $D$- and $A$-criteria in response to manipulating a single coordinate of a given design. Our investigation not only provides insights into the criteria's coordinate exchange algorithms, a popular design construction algorithm, but also establishes the existence of $D$-optimal designs with $x_{ij}= \pm 1$ for models including main effects and/or interactions, as well as nuisance effects, such as block effects.
We are only aware of such a result for main effect models with an intercept. We also identify cases in which the $D$-criterion is invariant to any possible coordinate exchange, meaning the $D$-criterion considers all such designs as having equal value despite potentially having different variances. For such cases, we show that the $A$-criterion has a unique optimal coordinate exchange. Our second contribution is the promotion of a weighted Bayesian $A$-criterion for constructing designs that balance bias and variance minimization. We compare new screening designs generated under coordinate-exchange algorithms for common factorial models and show the Bayesian $A$-optimal designs have more appealing variance and bias properties than Bayesian $D$-optimal designs. The paper is organized as follows. Section~\ref{s:ADReview} reviews traditional and current screening models and criteria. Section~\ref{s:Theory} investigates the behavior of the $D$- and $A$-criteria following coordinate exchanges to an existing design for models including nuisance effects. It also introduces the Bayesian $A$-criterion and shows how nuisance effects may be addressed under this criterion through a weight matrix. Examples of $A$-optimal and Bayesian $A$-optimal designs constructed for main effect models, a two-factor interaction model, and a quadratic model are provided in Section~\ref{s:Bayes}. Section~\ref{s:block} constructs a blocked screening design for a pharmaceutical application under our new criteria. We conclude the paper with a discussion of current and future work in Section~\ref{s:Discussion}. \section{Background}\label{s:ADReview} The fitted model for the $i$-th continuous response, $y_i$, has the form \begin{equation}\label{eq:LinearModelVec} y_i = f^T(\boldsymbol{x}_i)\boldsymbol{\beta} + \boldsymbol{z}_i^T\boldsymbol{\theta} + e_i\ ,\ \end{equation} where $e_i \sim N(0,\sigma^2)$ and $i=1,\dots,n$. Henceforth and without loss of generality, we set $\sigma^2 = 1$, since $\sigma^2$ is constant across all designs. Every element of $f(\boldsymbol{x}_i)$, a $p \times 1$ vector, is a function of one or more elements of $\boldsymbol{x}_i$, while $\boldsymbol{z}_i$ is a $b \times 1$ vector that does not depend on $\boldsymbol{x}_i$ and corresponds to nuisance effects, $\boldsymbol{\theta}$. The simplest screening model is the main effect model where $f^T(\boldsymbol{x}_i) = \boldsymbol{x}^T_i$ and $z_i=1$, corresponding to an intercept effect, while a blocked main effect model with $b$ blocks has $\boldsymbol{z}_i$ comprised of all zeroes except for a $1$ in the $h$-th position when $y_i$ comes from block $h$. Full quadratic models append the terms $\boldsymbol{x}^T_i \otimes \boldsymbol{x}^T_i=(x_{ij}x_{ij'})$ to the main effect model's $f^T(\boldsymbol{x}_i)$, where $\otimes$ denotes the Kronecker product. Two-factor interaction models remove all $x_{ij}^2$ terms from the full quadratic model's $f(\boldsymbol{x}_i)$. For a given $\boldsymbol{X}_d$, let $\boldsymbol{F}$ and $\boldsymbol{Z}$ denote matrices with rows $f(\boldsymbol{x}_i)$ and $\boldsymbol{z}_i$, respectively, and define $\boldsymbol{L}=(\boldsymbol{F}|\boldsymbol{Z})$. \subsection{Variance Criteria} When model~\eqref{eq:LinearModelVec} is believed to contain the true model and $n > p+b$, we assume there exists at least one $\boldsymbol{X}_d$ with a unique least-squares estimator $(\hat{\boldsymbol{\beta}}^T|\hat{\boldsymbol{\theta}}^T)^T=(\boldsymbol{L}^T\boldsymbol{L})^{-1}\boldsymbol{L}^T\boldsymbol{y}$.
The estimator is unbiased and has variance $(\boldsymbol{L}^T\boldsymbol{L})^{-1}$. Then $\text{Var}(\hat{\boldsymbol{\beta}})=\{\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}\}^{-1}$ where $\boldsymbol{P}_Z=\boldsymbol{Z}(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{Z}^T$, and screening inferences for the elements of $\hat{\boldsymbol{\beta}}$ perform best under an $\boldsymbol{X}_d$ whose $\text{Var}(\hat{\boldsymbol{\beta}})$ has small diagonal elements. Designs may then be ranked based on a scalar function of $(\boldsymbol{L}^T\boldsymbol{L})^{-1}$ that measures variance in some overall sense. To focus attention on estimation of $\boldsymbol{\beta}$, the function should be defined on $\text{Var}(\hat{\boldsymbol{\beta}})$. All we require of $\boldsymbol{\theta}$ is that it can be estimated uniquely. The $D$-criterion ranks designs according to $|(\boldsymbol{L}^T\boldsymbol{L})^{-1}|$ while the $D_s$-criterion is $|\text{Var}(\hat{\boldsymbol{\beta}})|$. In both cases, smaller values are desirable. This paper uses the equivalent criteria of $|\boldsymbol{L}^T\boldsymbol{L}|$ and $|\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}|$, with larger values being desirable. Under a normality assumption of $\boldsymbol{e}$, the $D$-optimal and $D_s$-optimal designs minimize the volume of the confidence ellipsoids for $(\boldsymbol{\beta}^T,\boldsymbol{\theta}^T)$ and $\boldsymbol{\beta}^T$, respectively. Hence these criteria are well-suited for an overall test of the significance of all effects, but not necessarily for individual testing of the parameters. The $A$-criterion ranks designs with respect to $\text{tr}[\{\boldsymbol{L}^T\boldsymbol{L}\}^{-1}]$ and the $A_s$-criterion is $\text{tr}[\text{Var}(\hat{\boldsymbol{\beta}})]$, being the sum of the individual variances of the parameters of interest. In both cases we want to minimize the chosen criterion. For main effect and interaction models, a design is said to be orthogonal when $\boldsymbol{L}^T\boldsymbol{L} = n\boldsymbol{I}$, meaning $\boldsymbol{F}$ is comprised of orthogonal columns of elements $\pm 1$ \citep{MukerjeeWu2006,WuHamada,Schoen2017}. Such designs estimate all main and interaction effects with the minimum possible variance, $1/n$. By minimizing the individual variances, such designs will be both $D_s$- and $A_s$-optimal. Orthogonal designs, however, can only exist when $n$ is a multiple of 4; otherwise the $D_s$- and $A_s$-optimal designs may differ from each other. Existing literature for constructing $A_s$- and $D_s$-optimal screening designs under arbitrary $n$ has predominantly focused on main effect models. These designs are more commonly referred to as chemical balance and spring balance designs \citep{cheng1980optimality,masaro1983optimality,jacroux1983optimality,wong1984optimal,Cheng2014b}. To our knowledge, there are no theoretical results concerning $A_s$-optimal chemical balance designs with $x_{ij} \in \{\pm 1, 0\}$ for main effect models with an intercept nuisance effect. \cite{jones2020Aoptimal} algorithmically constructed and compared $A$- and $D$-optimal designs under different screening models and arbitrary $n$. They found that for $n = 3\ (\text{mod}\ 4)$ and $n$ small relative to $k$, $A$-optimal designs often had $x_{ij} \in \{\pm 1, 0\}$. In fact, they algorithmically constructed $A$-optimal designs allowing $x_{ij} \in [-1,1]$, yet still found the $A$-optimal designs only took on these three integer settings.
Similar to the $D$-optimal design's tendency to only have values $x_{ij}=\pm 1$, Sections~3.2 and 4.1 explore the conjecture that an $A$-optimal design exists where $x_{ij} \in \{\pm 1, 0\}$. \subsection{Variance and Bias Criteria} While fitting the largest possible model incurs little to no bias, one needs a screening design with a large run size ($n \geq p+b$). When $n < p+b$, there is no unique least-squares estimator and the analysis becomes more complicated. Penalized regression, Bayesian methods, and stochastic model searches are increasing in popularity \citep{box1986analysis,Yuan2007,WuHamada,draguljic2014,Mee2017} and have proven to be quite powerful for screening. These analysis approaches, however, do not lend themselves to a tractable design framework. A design theory based on least-squares inference of submodels (e.g., \cite{Daniel1959}, \cite{lenth1989}, \cite{hamada1992analysis}, \cite{HamadaHamada2010}) is preferred. In practice, the main effect model should be the first submodel considered, and subsequent models are chosen based on the results of that analysis according to the effect principles \citep{JonesNachtsheim2017}. Partitioning $\boldsymbol{\beta}$ as in Section~1, a submodel may be thought of as fitting model~\eqref{eq:LinearModelVec} assuming $\boldsymbol{\beta}_2=0$. Similarly partitioning $\boldsymbol{F}=(\boldsymbol{F}_1|\boldsymbol{F}_2)$ and defining $\boldsymbol{L}_1=(\boldsymbol{F}_1|\boldsymbol{Z})$, the least-squares estimator is $(\hat{\boldsymbol{\beta}}_1^T|\hat{\boldsymbol{\theta}}^T)^T=(\boldsymbol{L}_1^T\boldsymbol{L}_1)^{-1}\boldsymbol{L}_1^T\boldsymbol{y}$. Fitting submodels introduces potential bias; namely, the bias of $(\hat{\boldsymbol{\beta}}_1^T|\hat{\boldsymbol{\theta}}^T)^T$ is $\boldsymbol{A}\boldsymbol{\beta}_2$, where \begin{align} \boldsymbol{A}=(\boldsymbol{L}_1^T \boldsymbol{L}_1)^{-1}\boldsymbol{L}_1^T\boldsymbol{F}_2 \end{align} is referred to as the alias matrix. While variances for $\hat{\boldsymbol{\theta}}$ can be ignored when comparing designs, we should consider its bias with respect to $\boldsymbol{\beta}_2$ because we anticipate that some of these potential terms will eventually be considered in the analysis. The experimenter should then identify a design that minimizes both the diagonals of $\text{Var}(\hat{\boldsymbol{\beta}}_1)$ and the elements of $\boldsymbol{A}$. For $n = 0\ (\text{mod}\ 4)$, one strategy is to rank all strength 2 or 3 orthogonal arrays based on an aliasing-based criterion such as minimum aberration or one of its generalizations \citep{MukerjeeWu2006, Cheng2014,Vazquex2022}. Doing so guarantees minimum variances after fitting the main effect model with minimal bias due to model misspecification. For arbitrary $n$, \cite{jones2011efficient} and \cite{LuPareto2011} algorithmically optimize criteria that are some combination of the $D$-criterion and $\text{tr}[\boldsymbol{A}^T\boldsymbol{A}]$ under a given partition of $\boldsymbol{\beta}$. Bias may also be reduced through one's ability to fit many possible submodels, which is the goal of estimation capacity and model robust designs \citep{Li2000,Chen2004,Tsai2010,Smucker2012}, but such criteria are computationally intensive to calculate. \cite{dumouchel1994simple} proposed a flexible Bayesian $D$-criterion to balance main effect variance and bias minimization. A uniform, improper prior is assigned to $\boldsymbol{\beta}_1$ and $\boldsymbol{\theta}$, and a $N(\boldsymbol{0}, \tau^2\boldsymbol{I}_q)$ prior to $\boldsymbol{\beta}_2$.
For $\boldsymbol{y} \mid \boldsymbol{\beta},\boldsymbol{\theta}\sim N(\boldsymbol{F}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{\theta},\ \boldsymbol{I})$, the posterior covariance matrix for $(\boldsymbol{\beta}^T,\boldsymbol{\theta}^T)^T$ is then $(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}$ where $\boldsymbol{K}$ is a diagonal matrix with $0$'s for the corresponding $p$ primary terms and $1$'s for the corresponding $q$ potential terms. The Bayesian $D$-criterion is $|\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K}|$, where $\tau^{-2}$ tunes the importance of minimizing bias and/or estimation of the potential terms. As $\tau^{-2} \to \infty$, the criterion will be less influenced by changes in aliasing between the primary and potential terms since $\tau^{-2}\boldsymbol{I}_q$ will have large diagonal elements. As $\tau^{-2} \to 0$, the potential terms become primary terms. \cite{dumouchel1994simple} recommended $\tau^{-2} = 1$ and constructed optimal designs via a coordinate exchange algorithm. Other Bayesian approaches have been considered \citep{Toman1994,Joseph2006,Bingham2007,TSAI2007619}, but only with two- or three-level factors. This paper will also explore the Bayesian $A$-criterion, $\text{tr}[(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}]$, and the Bayesian $A_s$-criterion, being the trace of the submatrix of $(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}$ corresponding to $\boldsymbol{\beta}$. \section{Properties of the $D$- and $A$-criterion}\label{s:Theory} It is challenging to analytically derive optimal designs for a given criterion under an arbitrary $n$ and $k$. In practice, these criteria are optimized via some computer search algorithm, such as the coordinate exchange algorithm \citep{meyer1995coordinate}, branch-and-bound algorithms \citep{Ahipasaoglu2021}, and nonlinear programming \citep{EstebanBravo2017,Duarte2020}. While the latter two algorithms offer some guarantees of identifying the true optimum, the coordinate exchange algorithm is straightforward to implement and is employed in popular statistical software. We focus on the coordinate exchange algorithm (CEA) in this section not only because of its wide adoption, but because it provides an analytical tool to study the behavior of the different forms of the $D$- and $A$-criterion defined in Section~2. Let $\mathcal{X}_j$ denote the set of permissible coordinates for $x_{ij}$, making $\mathcal{X}= \mathcal{X}_1 \times \dots \times \mathcal{X}_k$ the set of permissible rows for $\boldsymbol{X}_d$. Then $\mathcal{X}_j=\pm 1$ for categorical factors and $\mathcal{X}_j = [-1,1]$ for numeric factors. A row exchange of an initial design, $\boldsymbol{X}_{d0}$, exchanges one of its existing rows, $\boldsymbol{x}_i$, with a candidate row $\tilde{\boldsymbol{x}} \in \mathcal{X}$. This leads to a row exchange of $f(\boldsymbol{x}_i)$, the $i$-th row of the initial design's model matrix, $\boldsymbol{F}_0$, with the candidate model matrix row, $f(\tilde{\boldsymbol{x}})$. No exchange is made to $\boldsymbol{Z}$, since nuisance effects are not design dependent. Hence an exchange gives a new design and model matrix, denoted $\widetilde{\boldsymbol{X}}$ and $\widetilde{\boldsymbol{L}}$, respectively. A row exchange algorithm (REA) for a given criterion identifies the optimal $\tilde{\boldsymbol{x}}$ for each $\boldsymbol{x}_i$ sequentially, updating $\boldsymbol{X}_{d0}$ one row at a time.
After going through all $n$ runs, the algorithm starts again at $\boldsymbol{x}_1$. The process repeats itself until some convergence criterion is met. The algorithm is often repeated across many initial starting designs and the overall best design is reported. The reported design is not guaranteed to be globally optimal, but it is common in the screening literature to refer to such designs as optimal. More details about REAs may be found in \cite{atkinson2007}. A coordinate exchange is a specific row exchange that only manipulates $x_{ij}$. We may then partition $\boldsymbol{x}_i^T=(x_{ij} | \boldsymbol{x}_{i, -j})$ and represent the $i$-th row of $\boldsymbol{L}$ as \begin{align} l(\boldsymbol{x}_i) = \begin{pmatrix} f_1(x_{ij}) \\ \hline f_2(\boldsymbol{x}_{i, -j})\\ \boldsymbol{z}_i \end{pmatrix}=\begin{pmatrix}l_1(x_{ij})\\ \hline l_2(\boldsymbol{x}_{i, -j}) \end{pmatrix}\ ,\ \label{e:partition} \end{align} where $f_1(x_{ij})=l_1(x_{ij})$ is the subvector of $f(\boldsymbol{x}_i)$ that only involves $x_{ij}$ and $f_2(\boldsymbol{x}_{i, -j})$ collects the remaining elements. For example, exchanging $x_{i1}$ for a two-factor interaction model with an intercept nuisance parameter has $l_1^T(x_{i1}) = (x_{i1}, \ x_{i1}x_{i2},\dots, \ x_{i1}x_{ik})$ and $l_2^T(\boldsymbol{x}_{i, -j})=( x_{i2},\dots, \ x_{ik},\ x_{i2}x_{i3},\dots, \ x_{i(k-1)}x_{ik},1)$. \cite{meyer1995coordinate} proposed the CEA that proceeds in the same fashion as a REA, but for a given $\boldsymbol{x}_i$, each coordinate $x_{ij}$ is updated sequentially. As the number of candidate coordinates in $\mathcal{X}_j$ is smaller than the number of candidate rows in $\mathcal{X}$, a CEA involves fewer computations and does not require the user to specify all possible candidate points in $\mathcal{X}$. Moreover, there exist fast update formulas for the forms of the $D$- and $A$-criterion considered in this paper that do not require repeated matrix inversion. However, compared to a REA, a CEA requires more random starts to avoid converging to a local optimum. The remainder of this section investigates the behavior of the CEA for the different forms of the $D$- and $A$-criteria. \subsection{Properties of $D$-criterion}\label{s:subDoptTheory} For an $\boldsymbol{x}_i$ in $\boldsymbol{X}_{d0}$, the $D$-criterion's REA seeks the exchange $\tilde{\boldsymbol{x}}$ that maximizes \begin{align} \Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \frac{|\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}}|} {|\boldsymbol{L}^T_{0} \boldsymbol{L}_{0}|} = l^T(\tilde{\boldsymbol{x}})\boldsymbol{V}\, l(\tilde{\boldsymbol{x}}) + \{1 - v(\boldsymbol{x}_i)\}\label{eq:Dobj_REA} \end{align} where $\boldsymbol{V} = \{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D} + \boldsymbol{D} l(\boldsymbol{x}_i)\, l^T(\boldsymbol{x}_i) \boldsymbol{D}$ with $\boldsymbol{D}=(\boldsymbol{L}^T_{0} \boldsymbol{L}_{0})^{-1}$ and $v(\boldsymbol{x}_i)=l(\boldsymbol{x}_i)^T\boldsymbol{D}\, l(\boldsymbol{x}_i)$. The matrix $\boldsymbol{V}$ is symmetric and it is positive definite if and only if $v(\boldsymbol{x}_i) < 1$. If $v(\boldsymbol{x}_i)=1$ then $\boldsymbol{V}$ is positive semidefinite.
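For concreteness, the update \eqref{eq:Dobj_REA} is simple to verify numerically. The following minimal Python/\texttt{numpy} sketch is our own illustration (the function name and toy design are assumptions, not code from any referenced software): it evaluates the row exchange objective and checks it against the raw determinant ratio.
\begin{verbatim}
import numpy as np
from itertools import product

def delta_D(L0, i, l_new):
    # Determinant ratio |L~'L~| / |L0'L0| after exchanging row i of L0
    # for l_new, computed via the rank-two update formula in the text.
    D = np.linalg.inv(L0.T @ L0)
    l_old = L0[i]
    v = l_old @ D @ l_old                     # v(x_i)
    V = (1.0 - v) * D + np.outer(D @ l_old, D @ l_old)
    return l_new @ V @ l_new + (1.0 - v)

# Toy check: intercept plus three main effects over the 2^3 factorial.
X = np.array(list(product([-1.0, 1.0], repeat=3)))
L0 = np.column_stack([np.ones(8), X])
l_new = np.array([1.0, -1.0, 1.0, 1.0])
L1 = L0.copy(); L1[2] = l_new
ratio = np.linalg.det(L1.T @ L1) / np.linalg.det(L0.T @ L0)
assert np.isclose(delta_D(L0, 2, l_new), ratio)
\end{verbatim}
Because the update needs only $\boldsymbol{D}$ and a few inner products per candidate row, it avoids recomputing the determinant from scratch at every exchange.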
For a coordinate exchange of $x_{ij}$ for $\tilde{x}$, we can permute the rows and columns of $\boldsymbol{V}$ following \eqref{e:partition}, giving a function with respect to $\tilde{x}$ \begin{equation}\label{eq:Dobjfun} \Delta^{ij}_D(\tilde{x}) = l^T_1(\tilde{x})\boldsymbol{V}_{11} l_1(\tilde{x}) + \boldsymbol{a}^Tl_1(\tilde{x}) + c \end{equation} \noindent where $\boldsymbol{a} = 2\boldsymbol{V}_{12}l_2(\boldsymbol{x}_{i, -j})$ and $c =l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{V}_{22}l_2(\boldsymbol{x}_{i, -j}) + \{1 - v(\boldsymbol{x}_i)\}$ are fixed. The CEA for the $D_s$-criterion can be done equivalently through the CEA for the $D$-criterion because $|\boldsymbol{L}^T\boldsymbol{L}|=|\boldsymbol{Z}^T\boldsymbol{Z}| \times |\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}|$. That is, $\Delta^{ij}_D$ also evaluates the ratio $|\widetilde{\boldsymbol{F}}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\widetilde{\boldsymbol{F}}|/|\boldsymbol{F}_0^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}_0|$, corresponding to the $D_s$-criterion. The CEA for the Bayesian $D$-criterion has a similar update formula to \eqref{eq:Dobjfun} but with matrix $\boldsymbol{D}=(\boldsymbol{L}_0^T\boldsymbol{L}_0 + \tau^{-2}\boldsymbol{K})^{-1}$ \citep{dumouchel1994simple}. The CEA for the Bayesian $D_s$-criterion is easily shown to be equivalent to that for the Bayesian $D$-criterion, similar to the equivalence of the CEAs for the $D$- and $D_s$-criterion. We refer to the collection of these four criteria (i.e., $D$, $D_s$, Bayesian $D$, and Bayesian $D_s$) as the $\mathcal{D}$-criteria. We now provide a general result about optimal designs for the $\mathcal{D}$-criteria under what we call an $m$-factor interaction model. Let $J_m=\{j_1,\dots,j_m\}$ be a subset of $m$ of the $k$ factor indices. An $m$-factor interaction model has elements of $f(\boldsymbol{x})$ comprised only of \begin{itemize} \item all $k$ main effect coordinates $(x_j)$; \item at least one coordinate of the form $\prod_{j \in J_m} x_j$ for some $J_m$; \item any remaining coordinates of the form $\prod_{j \in J} x_j$ where $|J|=2,\dots,m$. \end{itemize} The main effect model is then the one-factor interaction model. Equation~\eqref{eq:Dobjfun} provides a proof technique for the following theorem: \begin{theorem}\label{Thm:Dlevels} For any $m$-factor interaction model where $\mathcal{X}_j = \pm 1$ or $\mathcal{X}_j = [-1,1]$, there exists an optimal design comprised of $x_{ij}=\pm 1$ for each of the $\mathcal{D}$-criteria. \end{theorem} \noindent This proof and all subsequent proofs are provided in the Supplementary Materials. To our knowledge, this result has previously been proven only for main effect models \citep{box1971factorial, mitchell1974algorithm, galil1980d} and non-Bayesian criteria. Not only does our result extend to $m$-factor interaction models, it also applies to any nuisance effect structure that is design independent. A practical consequence of Theorem~\ref{Thm:Dlevels} is that to algorithmically construct an optimal design for such models under one of the $\mathcal{D}$-criteria, we can restrict $\mathcal{X}_{j}=\pm 1$. An unfortunate consequence of Theorem~1 that highlights a potential deficiency is the following corollary: \begin{corollary}\label{Cor:Dlevels} For any $m$-factor interaction model, suppose there exists an optimal design with respect to one of the $\mathcal{D}$-criteria where at least one $x_{ij}\neq \pm 1$ with $\mathcal{X}_j = [-1,1]$.
Then $\Delta_D^{ij}$ for that criterion is constant with respect to $\tilde{x}$, making all such possible exchanges produce another optimal design. \end{corollary} \noindent The phenomenon described in Corollary~\ref{Cor:Dlevels} occurred in Figure~1 for both coordinates $x_{14}$ and $x_{15}$ under the $D$- and $D_s$-criterion. Indeed, the designs with $(x_{i4},x_{i5})=(\pm 1,\pm1)$ produced the worst individual main effect variances. This example raises doubts about the $\mathcal{D}$-criteria's ability to evaluate a design's screening abilities. \subsection{ Properties of $A$-criterion}\label{s:subAoptTheory} The decrease in the $A$-criterion following a row exchange is \begin{align}\label{eq:RowExchange} \Delta_A(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \text{tr}\{(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}\} - \text{tr}\{(\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}\} = \frac{l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}}) - \phi(\boldsymbol{x}_i)}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \end{align} where $\boldsymbol{U} = \boldsymbol{V}\boldsymbol{D} + \boldsymbol{D}\boldsymbol{V} - [\{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D} + \phi(\boldsymbol{x}_i)\boldsymbol{I}]\boldsymbol{D}$ and $\phi(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{D}l(\boldsymbol{x}_i)$. The optimal exchange maximizes \eqref{eq:RowExchange}. Unlike with $\boldsymbol{V}$, $l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}})$ can take on positive and negative values. Partitioning $\boldsymbol{U}$ as we did with $\boldsymbol{V}$, the coordinate objective function is \begin{equation}\label{eq:DeltaL} \Delta_A^{ij}(\tilde{x}) = \frac{l^T_1(\tilde{x})\boldsymbol{U}_{11} l_1(\tilde{x}) + \boldsymbol{b}^Tl_1(\tilde{x}) + d }{\Delta_D^{ij}(\tilde{x})}\ ,\ \end{equation} where $\boldsymbol{b} = 2\boldsymbol{U}_{12}l_2(\boldsymbol{x}_{i, -j})$ and $d = l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{U}_{22}l_2(\boldsymbol{x}_{i, -j}) - \phi(\boldsymbol{x}_i)$ are constant. The equivalence between the $D$- and $D_s$-criterion does not hold for the $A$- and $A_s$-criterion. Other than in special cases \citep{Nachtsheim1989}, there is no closed-form coordinate exchange formula for $A_s$. Computing the update after a row/coordinate exchange may be accomplished by first updating $(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}$ via the Sherman-Morrison-Woodbury formula \citep{sherman1950adjustment} and directly calculating the change, denoted $\Delta_{A_s}^{ij}$. This will not be as computationally efficient as evaluating \eqref{eq:DeltaL}. Following \cite{StallingsMorgan2015}, let $\boldsymbol{W}$ be a diagonal matrix of $p+b$ elements where the first $p$ diagonal entries corresponding to $\boldsymbol{\beta}$ equal 1 and the last $b$ elements corresponding to $\boldsymbol{\theta}$ equal an arbitrarily small value, $w>0$. The weighted $A$-criterion, or $A_W$-criterion, is then \begin{align} \text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}]=\sum_j \text{Var}(\hat{\beta}_j) + w \sum_h \text{Var}(\hat{\theta}_h)\ .\ \label{e:Anuisance2} \end{align} The coordinate exchange update for the $A_W$-criterion, denoted $\Delta_{AW}^{ij}$, is similar to \eqref{eq:DeltaL} and is derived in the Supplementary Materials. Note the $A$-criterion is a special case of the $A_W$-criterion with $\boldsymbol{W}=\boldsymbol{I}$.
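As a numerical illustration of \eqref{e:Anuisance2}, the sketch below (our own illustrative code, not an implementation from the cited literature) evaluates the $A_W$-criterion and compares it against a direct computation of the $A_s$-criterion; as shown next, the two agree as $w \to 0$.
\begin{verbatim}
import numpy as np
from itertools import product

def a_w(F, Z, w):
    # Weighted A-criterion tr[W (L'L)^{-1}] with unit weights on the
    # primary columns F and weight w on the nuisance columns Z.
    L = np.column_stack([F, Z])
    Wdiag = np.r_[np.ones(F.shape[1]), np.full(Z.shape[1], w)]
    return float(np.sum(Wdiag * np.diag(np.linalg.inv(L.T @ L))))

def a_s(F, Z):
    # A_s-criterion tr[{F'(I - P_Z)F}^{-1}] computed directly.
    P_Z = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    M = F.T @ (np.eye(len(F)) - P_Z) @ F
    return float(np.trace(np.linalg.inv(M)))

# Toy example: an orthogonal 8-run main effect design with intercept.
F = np.array(list(product([-1.0, 1.0], repeat=3)))
Z = np.ones((8, 1))
print(a_w(F, Z, 1e-6), a_s(F, Z))  # both approximately 3/8
\end{verbatim}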
From \eqref{e:Anuisance2}, we see $\lim_{w\to 0}\text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}]=\text{tr}\left[\{\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}\}^{-1}\right]$, the $A_s$-criterion. Therefore, $\lim_{w\to0} \Delta_{AW}^{ij} = \Delta_{A_s}^{ij}$. This result provides an efficient way to perform a CEA for the $A_s$-criterion using the $A_W$-criterion with $w$ set to an arbitrarily small value. We have found $w=10^{-6}$ to perform well for most applications. This limiting result also allows us to study the behavior of $\Delta_{A_s}^{ij}$ through the more tractable $\Delta_{AW}^{ij}$. The update formula for a coordinate exchange under the Bayesian $A$-criterion takes the same form as \eqref{eq:DeltaL} but with $\boldsymbol{D}=(\boldsymbol{L}_0^T\boldsymbol{L}_0 + \tau^{-2}\boldsymbol{K})^{-1}$. For the Bayesian $A_s$-criterion, we can apply the weighted approach to the posterior covariance matrix, \begin{align} \text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L}+\tau^{-2}\boldsymbol{K})^{-1}]\ .\ \label{e:BayesAnuisance} \end{align} We refer to this as the Bayesian $A_W$-criterion. To our knowledge, this is one of the earliest attempts at combining techniques from the weighted and Bayesian optimality literatures. The Bayesian $A_W$- and Bayesian $A_s$-criterion's ability to balance minimization of the primary variances and their aliasing with the potential terms is investigated with multiple examples in Section~4. We collectively refer to the different criteria discussed here as the $\mathcal{A}$-criteria. We initially sought to prove the conjecture that, for any of the $\mathcal{A}$-criteria, there always exists an optimal design such that all $x_{ij} \in \{\pm 1, 0\}$ for $m$-factor interaction models. For such models and criteria, the coordinate update formula is a ratio of two quadratic polynomials with respect to $\tilde{x}$ and the optimal coordinate exchange can be found using fractional programming methods \citep{dinkelbach1967nonlinear}. In the Supplementary Materials, we identify situations where the optimum is unique and occurs at a non-integer value. This result by itself does not disprove the conjecture, but it does provide evidence to the contrary. Section~4.1 further explores this conjecture algorithmically for certain $n$ and $k$ under the main effect model. We next considered the unfortunate scenario in Corollary~1 with respect to the $\mathcal{A}$-criteria. As demonstrated in \eqref{eq:DeltaL}, the coordinate update formula for each of the $\mathcal{A}$-criteria involves the coordinate update for one of the $\mathcal{D}$-criteria. \begin{corollary}\label{lem:Aconstant1} For an $m$-factor interaction model and design, $\boldsymbol{X}_{d0}$, consider one of the weighted criteria among the $\mathcal{A}$-criteria with $w>0$. If the update formula for the corresponding criterion among the $\mathcal{D}$-criteria is constant, then $\Delta_{AW}^{ij}$ is uniquely maximized. Moreover, $\Delta_{A_s}^{ij}$ is uniquely maximized when $\boldsymbol{z}_i^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i < 1$. \end{corollary} \noindent The corollary's condition $\boldsymbol{z}_i^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i < 1$ holds for practical cases such as an intercept-only nuisance effect and block effects from $b$ blocks each of size $2$ or more. It provides further support for the $\mathcal{A}$-criteria's ability to better differentiate designs than the $\mathcal{D}$-criteria.
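To make the fractional programming step concrete, the sketch below implements Dinkelbach's iteration for maximizing a ratio of two quadratics over $[-1,1]$, the form taken by the coordinate update for the $\mathcal{A}$-criteria under an $m$-factor interaction model. The code is our own illustration; the coefficients correspond to the non-integer exchange demonstrated in the Supplementary Materials, where the denominator is positive on $[-1,1]$.
\begin{verbatim}
import numpy as np

def max_quad_ratio(num, den, tol=1e-10, max_iter=100):
    # Maximize (num[0] x^2 + num[1] x + num[2]) /
    #          (den[0] x^2 + den[1] x + den[2]) over x in [-1, 1]
    # via Dinkelbach's method; den must be positive on [-1, 1].
    def argmax_quad(c):
        # maximize the quadratic c[0] x^2 + c[1] x + c[2] over [-1, 1]
        cand = [-1.0, 1.0]
        if c[0] != 0 and abs(-c[1] / (2 * c[0])) <= 1:
            cand.append(-c[1] / (2 * c[0]))
        return max(cand, key=lambda x: c[0] * x * x + c[1] * x + c[2])
    x = 1.0
    for _ in range(max_iter):
        q = np.polyval(num, x) / np.polyval(den, x)
        x = argmax_quad(num - q * den)
        if np.polyval(num, x) - q * np.polyval(den, x) <= tol:
            break
    return x, np.polyval(num, x) / np.polyval(den, x)

num = np.array([-0.0104, -0.0089, -0.002])  # numerator coefficients
den = np.array([0.07, -0.02, 0.98])         # denominator coefficients
print(max_quad_ratio(num, den))             # x* approximately -0.43
\end{verbatim}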
\section{Examples}\label{s:Bayes} This section compares properties of algorithmically-generated optimal designs for three common screening models: (1) main effect models, (2) two-factor interaction models, and (3) quadratic models. All models have an intercept-only nuisance effect. For main effect models, we utilize the $A_s$- and $D_s$-criterion; for the two-factor interaction and quadratic models, we also consider their Bayesian versions. The best designs generated are compared in terms of their main effect variances after fitting the main effect submodel and, when applicable, their aliasing with potential terms (two-factor interactions and/or quadratic effects). \subsection{Main Effect Models}\label{s:CoordExchange} A main effect model with $k$ factors has the scalar form \[ y_{i}=\beta_0 + \sum_{j=1}^k x_{ij} \beta_j + e_i\ ,\ \] where $\beta_0$ is an intercept and the $\beta_j$ with $j>0$ are the main effects. We constructed $A_s$- and $D_s$-optimal designs under this model for $k = 3, \dots, 20$ factors and $n = k+1,\dots,24$ runs assuming either only discrete settings ($\mathcal{X}_j = \{\pm 1, 0\}$) or only continuous settings ($\mathcal{X}_j = [-1,1]$). For continuous settings, we optimized \eqref{eq:DeltaL} with box-constrained L-BFGS over $[-1,1]$ \citep{byrd1995limited}. Due to the demanding computations involved as $n$ and $k$ increase, the continuous and discrete CEAs were each first performed with 100 random starts. We then recorded the best criterion value of each algorithm separately as well as the overall best value. If the two values were equal, we declared the value optimal. Otherwise, the CEA with the inferior value was performed again with another 100 random starts. If the best value among this new batch did not improve the previous overall best value, the search was stopped. If the value did improve the overall best value, the other CEA was run for another 100 starts and the iterative process continued. For our investigation, this back-and-forth search took no more than 1000 total starting designs. The $D_s$- or $A_s$-optimal designs were the designs with the best $D_s$- or $A_s$-value found across all iterations of both the discrete and continuous CEAs. Figure~\ref{fig:CoordHeatMap}(a) shows how many of the initial 100 constructed designs under the continuous CEA were $A_s$-optimal. A 0 value means the continuous CEA never found an $A_s$-optimal design among the initial batch of 100 random starting designs. Figure~\ref{fig:CoordHeatMap}(b) shows the difference between the counts in Figure~\ref{fig:CoordHeatMap}(a) and the same counts under the discrete CEA. Generally, when $n = k + 1$ or $k + 2$, the continuous CEA identified an $A_s$-optimal design more frequently than the discrete CEA. The discrete CEA found an $A_s$-optimal design more frequently in only 24\% of the scenarios considered and struggled particularly in the cases of $(n, k)=(11, 10)$ and $(19, 18)$. For these cases, even after increasing the number of starting designs to $10,000$, the discrete CEA was unable to find an $A_s$-optimal design. The continuous CEA was able to find an $A_s$-optimal design for all cases when we increased the number of starting designs to $1000$. We therefore recommend the continuous CEA for constructing $A_s$-optimal designs. \begin{figure}[ht] \centering \includegraphics[width=1\textwidth]{alg_comp.png} \caption{(a) Categories of the number of times, out of the initial 100 starting designs, that the continuous CEA identified an $A_s$-optimal design for given $(n,k)$ combinations.
The overall proportion for each category is shown in parentheses. (b) Difference between the number of times the initial 100 starting designs for the continuous and discrete CEAs identified an $A_s$-optimal design for given $(n,k)$ combinations. \label{fig:CoordHeatMap}} \end{figure} Contrary to our conjecture in Section~3.2, the $A_s$-optimal designs found by the continuous CEA for scenarios $(11, 10)$ and $(19, 18)$ contained non-integer values and are displayed in the Supplementary Materials. These designs, however, do not significantly decrease the $A_s$-criterion compared to the best constructed designs requiring $\mathcal{X}_j=\pm 1$ or $\{\pm 1, 0\}$ as given in \cite{jones2020Aoptimal}. The $(11, 10)$ $A_s$-optimal design we constructed was only $0.28\%$ and $0.35\%$ more $A_s$-efficient than the best three- and two-level designs, respectively. The $(19, 18)$ $A_s$-optimal design with non-integer factor settings was only $0.08\%$ more $A_s$-efficient than the best discrete-level $A_s$-optimal design we generated. Similar to \cite{jones2020Aoptimal}, the main effect variances of the $A_s$- and $D_s$-optimal designs we generated were the same except when $n$ was close to $k$ or when $n \equiv 3 \pmod{4}$. For the designs where $n \equiv 3 \pmod{4}$, we calculated the paired differences between the ordered main effect variances of the two designs. Across all such scenarios and pairs, $78\%$ of the main effect variances from the $A_s$-optimal designs were smaller than those from the $D_s$-optimal designs, $6\%$ were equal, and $16\%$ were larger for the $A_s$-optimal design. The largest decrease in an individual variance achieved by an $A_s$-optimal design over the corresponding $D_s$-optimal design was $0.05$. There was one scenario where the $D_s$-optimal design decreased a variance over the $A_s$-optimal design by $0.06$. \subsection{Two-factor Interaction Model with $n = 15, \ k = 6$}\label{subsec:2fibayes} Under the main effect model for the scenario $n=15$, $k=6$, the $A_s$- and $D_s$-optimal designs were different, with the $A_s$-optimal design having zero coordinates for factors $5$ and $6$. We now consider this scenario under a two-factor interaction model: \[ y_{i}=\beta_0 + \sum_{j=1}^k x_{ij} \beta_j + \sum_{1 \leq j<j' \leq k} x_{ij}x_{ij'} \beta_{jj'}+ e_i\ ,\ \] which adds $k(k-1)/2$ interaction effects $\beta_{jj'}$. Not all effects can be estimated uniquely due to the small $n$. Thus we constructed Bayesian $A_s$- and Bayesian $D_s$-optimal designs where the intercept is a nuisance effect, main effects are primary terms, and two-factor interaction effects are potential terms. We set $\tau^{-2} = 1, 5, 10, \dots, 100$ and for each value we performed a CEA with $1000$ starting designs. Figure~\ref{fig:6F15RVariances} depicts variances (in ascending order) under the main effect model and alias matrices for the Bayesian $A_s$- and Bayesian $D_s$-optimal designs, as well as the $A_s$- and $D_s$-optimal designs generated in Section~4.1. The displayed alias matrices only show aliasing of the main effects. The same Bayesian $A_s$-optimal design was found for all $15 \leq \tau^{-2} \leq 100$ and had settings $x_{ij} \in \{\pm 1, 0\}$. The Bayesian $A_s$-optimal design for $\tau^{-2} = 10$ had smaller aliasing (as measured by $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$, following \cite{jones2011efficient}) and had non-integer settings. The design is provided in the Supplementary Materials.
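For reference, aliasing summaries such as $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ are straightforward to compute for any candidate design. A minimal sketch under this subsection's partition (our own illustrative code; it assumes $\boldsymbol{L}_1$ has full column rank) is:
\begin{verbatim}
import numpy as np
from itertools import combinations

def alias_ss(X):
    # tr(A'A) for the alias matrix A = (L1'L1)^{-1} L1'F2 with the
    # intercept as nuisance, main effects primary, and all two-factor
    # interactions potential; requires n >= k + 1.
    n, k = X.shape
    L1 = np.column_stack([np.ones(n), X])
    F2 = np.column_stack([X[:, i] * X[:, j]
                          for i, j in combinations(range(k), 2)])
    A = np.linalg.solve(L1.T @ L1, L1.T @ F2)
    return float(np.trace(A.T @ A))
\end{verbatim}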
The Bayesian $D_s$-optimal design shown in Figure~\ref{fig:6F15RVariances} is comprised of $x_{ij}=\pm 1$ and was found for all $20 \leq \tau^{-2} \leq 100$. It was chosen because it minimized $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ among all constructed Bayesian $D_s$-optimal designs. The $D_s$-optimal design estimates all main effects with equal variance, while the $A_s$-optimal design has smaller variances except for $\hat{\beta}_6$. The Bayesian $A_s$-optimal design has both the smallest and largest individual variances. The Bayesian $A_s$- and $D_s$-optimal designs have superior aliasing properties over their non-Bayesian counterparts. The Bayesian $A_s$-optimal design had the smallest $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ of the four designs. This reduced aliasing can in part be attributed to the $0$ coordinates. When $x_{ij}=\pm 1$, a design with an odd number of runs will necessarily have some degree of column correlation. A design having some $x_{ij}=0$ can achieve orthogonality between columns for such $n$ and hence zero elements in the alias matrix. Orthogonality through including $x_{ij}=0$, however, leads to larger variances for the associated main effects. \begin{figure} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{var6F15R.png}}}$ \end{minipage} \hfill \begin{minipage}[b]{.5\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{Alias_615.png}}}$ \end{minipage} \caption{(Left) Variances under the main effect model for four designs with $n=15$ and $k=6$. (Right) Heatmap of alias matrices in absolute value for main effects.\label{fig:6F15RVariances}} \end{figure} \subsection{Screening Quadratic Models}\label{s:RSM} The effect principles applied to a quadratic model lead to a partition with main effects as primary terms and all two-factor interaction and quadratic effects as potential terms. We assigned different Bayesian priors to the two sets of potential effects, letting $\tau_I^{-2}$ and $\tau_Q^{-2}$ denote the prior precision for the two-factor interaction and quadratic effects, respectively. We constructed Bayesian $A_s$- and $D_s$-optimal designs under $\tau_I^{-2} \in \{1,16\}$ and $\tau_Q^{-2} \in \{0,1,16\}$ using $10,000$ starting designs. For $\tau_Q^{-2}=0$, the quadratic effects become primary terms. We considered $k = 6, 8, 10$ and $n=(2k+1),\dots,(1+k+k^2)$. The minimum $n$ value considered is that of a definitive screening design \citep{jones2011class} and the largest is a run size that allows estimation of the full model. A definitive screening design (DSD) has $k$ foldover pairs of $\boldsymbol{x}_i$, each comprised of a single zero coordinate and $k-1$ coordinates of $\pm 1$, plus an overall center run. DSDs have no aliasing of the main effects with the interaction and quadratic terms. For a given design, let $\boldsymbol{F}_M$, $\boldsymbol{F}_I$, and $\boldsymbol{F}_Q$ be the model matrices corresponding to the main effects, interactions, and quadratic effects, respectively. Each design is summarized using three metrics: (1) $\log(A_{M})$, where $A_M$ is the sum of the main effect variances for a fitted main effect model; (2) $\log(SS_Q)$, where $SS_Q$ is the sum of squared off-diagonals of $\boldsymbol{F}_Q^T\boldsymbol{F}_Q$; and (3) $\log(SS_{MI}+1)$, where $SS_{MI}$ is the sum of squared values of $\boldsymbol{F}_M^T\boldsymbol{F}_I$.
The metrics $\log(SS_Q)$ and $\log(SS_{MI}+1)$ are surrogates for the information dedicated to quadratic effects and for the aliasing between main effects and interactions, respectively. Figure~\ref{fig:BayesRSMk6} shows the numerical results for $k=6$ factors; similar conclusions were reached for the $k=8$ and $k=10$ scenarios (see Supplementary Materials). Generally, the Bayesian $A_s$-optimal design's variances under the main effect model were worse than those under the Bayesian $D_s$-optimal design. However, for fixed values of $\tau_Q^{-2}$ and $\tau_{I}^{-2}$, the Bayesian $A_s$-optimal design had comparable or smaller values for $\log(SS_Q)$ and $\log(SS_{MI}+1)$, implying better estimation capacity and aliasing properties for the potential effects. The Bayesian $A_s$-optimal designs for $\tau_Q^{-2}=\tau_I^{-2}=16$ closely resemble the structure of DSDs for $n=13,\dots,20$, with no aliasing between main effects and interactions. The Bayesian $A_s$-optimal designs for $n=13$ and $17$ were a DSD and an augmented DSD \citep{JonesNachtsheim2017}, respectively. For $n=14$ and $n=18,19,20$, the Bayesian $A_s$-optimal designs added center runs (i.e., $\boldsymbol{x}_i=\boldsymbol{0}$) to the DSD and augmented DSD, respectively. The Bayesian $A_s$-optimal design for $n=15$ had one center run and 7 pairs of foldover runs, mimicking the DSD structure. The Bayesian $D_s$-criterion was less likely to identify designs with structures similar to DSDs for the $\tau_Q^{-2}$ and $\tau_{I}^{-2}$ values we considered. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{BayesRSM6.png} \caption{Performance measures for best Bayesian-$D_s$ and -$A_s$ designs when $k = 6$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_I^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$.\label{fig:BayesRSMk6}} \end{figure} The behavior of the Bayesian $A_s$-optimal designs was influenced by the criterion's implicit emphasis on quadratic effects, whose estimators have larger minimum variances than those of main effects and interactions. This phenomenon was mentioned for the $A_s$-criterion by \cite{gilmour2012optimum} and discussed thoroughly by \cite{AllenMoyer2021}. To equally emphasize minimizing variances among all effects, both articles recommend an $A_W$-criterion that incorporates the minimum variances. \cite{AllenMoyer2021} refer to this $A_W$-criterion as the standardized $A_W$-criterion. Note that this modification is unnecessary for main effect and interaction models because these effects have the same minimum variance. Extending the weighted approach of \cite{AllenMoyer2021} to the Bayesian $A_s$-criterion requires the introduction of a weight matrix based on the minimum posterior variances for given prior variances. For the quadratic model, this would lead to a diagonal weight matrix with smaller weights assigned to quadratic effects. However, $\tau_Q^{-2}$ also controls the magnitude of the quadratic effects' posterior variances, so manipulating posterior variances for potential terms via weighting can be done equivalently through manipulation of $\tau_Q^{-2}$. Indeed, in Figure~\ref{fig:BayesRSMk6} we see that as $\tau_Q^{-2}$ increases, $\log(A_M)$ decreases and $\log(SS_Q)$ increases, implying less focus on quadratic effects.
We would expect the same behavior if we were to assign smaller weights to the quadratic effects. \section{A Blocked Screening Design for Vitamin Photosensitivity}\label{s:block} \cite{GoosJones} discuss a blocked screening experiment performed by a pharmaceutical manufacturer that aimed to determine a combination of vitamins and certain fatty molecules to reduce the vitamins' photosensitivity, thereby increasing the product's shelf life. Six factors were studied, corresponding to the presence/absence of riboflavin and of five fatty molecules. The measuring device required daily recalibration, allowing only four measurements per day, and the experiment was run across eight days, leading to a study of $k=6$ factors in $b=8$ blocks each of size $u=4$. The experimenters wanted to be able to estimate all six main effects and all 15 two-factor interactions because they were concerned about possibly large interactions. Many of the techniques for constructing fractional factorial designs can be employed to create blocked screening experiments, but only for certain values of $b$ and $u$. For example, if $n=bu=2^k$ and $b=2^\ell$, we can block all $2^k$ treatment combinations by confounding $2^\ell - 1$ factorial effects with the block effects. All remaining factorial effects are estimated with minimal variance. If $n=bu=2^{k-m}$ and $b=2^\ell$, then we may block a fractional factorial design based on certain confounding patterns \citep{Bisgaard1994,chen1999,cheng2001,Cheng2002}. However, \cite{Cheng2004} demonstrate that nonregular fractional factorial designs may have superior estimation and variance properties. For the vitamin photosensitivity experiment, a blocked regular fractional factorial is not able to estimate all two-factor interactions, an important property for this application. \cite{GoosJones} constructed a $D$-optimal blocked design algorithmically that can estimate all main effects and two-factor interaction effects. The experiment had only categorical factors, but we will treat them here as if they were continuous. We constructed blocked designs with the $D_s$-criterion, the $A_s$-criterion, and a Bayesian $A_s$-criterion with $\tau_I^{-2}=16$. The block effects were assigned weight $w=10^{-6}$ and $10,000$ starting designs were used. Although three designs were constructed corresponding to the different criteria, the best design found under the Bayesian $A_s$-criterion, shown in Figure~\ref{fig:BlockDesign}, was optimal across all criteria. Even after increasing the number of starting designs to $100,000$, the CEAs for the $D_s$- and $A_s$-criteria were still unable to identify this design.
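For completeness, a minimal sketch of how such a Bayesian $A_W$ evaluation might be set up for this blocked problem is given below. The code is our own illustration: the function name, the assumption that runs are listed block by block, and the default settings are ours, and the criterion matrix is assumed nonsingular.
\begin{verbatim}
import numpy as np
from itertools import combinations

def bayes_aw_blocked(X, b, tau2_inv=16.0, w=1e-6):
    # Bayesian A_W value for a blocked two-factor interaction model:
    # main effects primary, interactions potential (prior precision
    # tau2_inv), block effects nuisance (weight w).  Assumes the n
    # runs of X are listed block by block in b equal-sized blocks.
    n, k = X.shape
    F2 = np.column_stack([X[:, i] * X[:, j]
                          for i, j in combinations(range(k), 2)])
    Z = np.kron(np.eye(b), np.ones((n // b, 1)))  # block indicators
    L = np.column_stack([X, F2, Z])
    q = F2.shape[1]
    K = np.diag(np.r_[np.zeros(k), np.ones(q), np.zeros(b)])
    W = np.r_[np.ones(k + q), np.full(b, w)]
    M = np.linalg.inv(L.T @ L + tau2_inv * K)
    return float(np.sum(W * np.diag(M)))
\end{verbatim}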
\begin{figure}[ht] \begin{minipage}[b]{.49\textwidth} \centering \scalebox{0.6}{\begin{tabular}{rrrrrrrrrrrrr} -1 & -1 & -1 & 1 & 1 & -1 & & 1 & -1 & -1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & -1 & -1 & & -1 & 1 & -1 & -1 & -1 & 1 \\ 1 & -1 & -1 & -1 & -1 & 1 & & -1 & -1 & 1 & 1 & 1 & 1 \\ \\ 1 & -1 & 1 & 1 & -1 & 1 & & 1 & 1 & -1 & 1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 & 1 & & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & -1 & & -1 & 1 & -1 & -1 & 1 & -1 \\ -1 & 1 & 1 & 1 & 1 & -1 & & 1 & -1 & 1 & -1 & 1 & 1 \\ \\ -1 & 1 & -1 & 1 & -1 & -1 & & -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & -1 & & 1 & 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & 1 & -1 & 1 & 1 & & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 1 & & -1 & 1 & 1 & 1 & -1 & 1 \\ \\ -1 & 1 & -1 & 1 & 1 & 1 & & 1 & 1 & -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & -1 & 1 & -1 & & 1 & 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & & -1 & -1 & -1 & 1 & -1 & 1 \\ -1 & -1 & 1 & -1 & -1 & 1 & & -1 & -1 & 1 & -1 & 1 & -1 \\ \end{tabular}} \end{minipage} \begin{minipage}[b]{.4\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{block_Lcor.png}}}$ \end{minipage} \caption{(Left) Blocked screening design with 6 factors within 8 blocks of size 4 and (Right) heatmap of $L^TL$. Each group of the block design represents one block with $4$ runs and settings for $6$ factors.\label{fig:BlockDesign}} \end{figure} The optimal design we constructed is more $D$-efficient than the design reported in \cite{GoosJones}, and consists entirely of $\pm 1$ coordinates and so can be used in their application. The design turned out to be a nonregular fractional factorial with generalized resolution $3.75$ and has only 6 words of length $4.5$. \cite{Cheng2004} tabulated designs with similar structure but only for $n=12, 16$, and $20$. Others have looked at minimizing generalized aberration for larger run sizes (see \cite{FANG2007740} and \cite{Schoen2017}), but not in the context of blocking. The main effects of the design are all estimated with optimal variance ($1/32$) and have zero aliasing with the block and interaction effects. Five of the 15 interactions are also estimated with optimal variance. The remaining interaction effects are partially correlated with the block effects, with 8 of the interactions having a variance of $0.047$ and the other two having a variance of $0.063$. \section{Discussion}\label{s:Discussion} This paper compares different forms of the $D$- and $A$-criterion for constructing screening designs that simultaneously minimize variances and bias of a predetermined set of effects in a linear model. We challenge two commonly held beliefs concerning screening designs: \begin{itemize} \item Algorithmic optimization of the $D$-criterion produces good screening designs for arbitrary $n$. \item When constructing screening designs, one needs only to consider $x_{ij}=\pm 1$ even if $x_{ij}$ is numeric and can take on other values in $[-1,1]$. \end{itemize} \cite{gilmour2012optimum} and \cite{jones2020Aoptimal} have also pointed out the failing of the $D_s$-criterion, and we have further clarified these failings. Our investigation of the $D_s$-criterion's CEA shows that many $D_s$-optimal designs can exist for a given scenario, having different variance and bias properties. 
The superior performance of our continuous CEA in Section~\ref{s:CoordExchange} indicates that, even if an $A_s$-optimal design is comprised of only $x_{ij} \in \{\pm 1, 0\}$, the continuous CEA more frequently constructs an $A_s$-optimal design than a discrete CEA. Moreover, we found some combinations of $n$ and $k$ where the $A_s$-optimal design included non-integer coordinates. Our investigation of Bayesian $A_s$- and Bayesian $D_s$-optimal designs in Sections~\ref{subsec:2fibayes} and \ref{s:RSM} revealed that the Bayesian $A_s$-criterion better balances variance and bias minimization for the prior variance values considered. In Section~5, we found that the optimal design constructed under the Bayesian $A_s$-criterion was also optimal under the $A_s$- and $D_s$-criterion, and that it was better than the designs constructed directly under these two criteria. This is unfortunately a possibility with algorithmic construction, and we recommend practitioners generate multiple designs under different design criteria and compare them by inspecting their variances and biases directly, similar to \cite{AllenMoyer2021}. There are many directions of future research we are currently investigating. First, this paper combines weighted and Bayesian criteria, but mainly to ignore the variances of nuisance effects. Section~\ref{s:RSM} hinted at redundancies in weighting posterior effects, but there may be opportunities for more flexible weighting applied to primary effects. Next, more investigation is needed to compare the Bayesian $A_s$-criterion to more brute-force methods that minimize variance and bias. These methods commonly employ some type of $D$-criterion to measure variance, which could easily be modified to be an $A$-criterion. Following \cite{Li2006}, more investigation is needed on the differences between the optimal designs under the $\mathcal{D}$- and $\mathcal{A}$-criteria when higher-order interactions are considered. Finally, following \cite{gilmour2012optimum} and \cite{JonesGOSSD}, we are currently developing an $A_s$-criterion that includes external variance estimation through replication or fake factors. \section{Supplementary Materials} \subsection{Theorem 1 Proof}\label{A:Thm1prof} Under an $m$-factor interaction model, the elements of $f_1(x_{ij})$ are comprised of $x_{ij}$ and products of $x_{ij}$ with the other main effect coordinates. Hence $f_1(\tilde{x}) = \tilde{x} \times f_{(1)}(\boldsymbol{x}_{i,-j})$ where $f_{(1)}(\boldsymbol{x}_{i,-j})$ is a vector whose elements depend only on $\boldsymbol{x}_{i,-j}$. For simplicity, we write $f_{(1)}$ instead of $f_{(1)}(\boldsymbol{x}_{i,-j})$. The coordinate update formula, $\Delta_D^{ij}$, for the $D$- and $D_s$-criterion involves the matrix $\boldsymbol{D}=(\boldsymbol{L}^T\boldsymbol{L})^{-1}$, and for the Bayesian versions of these criteria $\boldsymbol{D}=(\boldsymbol{L}^T\boldsymbol{L}+\tau^{-2}\boldsymbol{K})^{-1}$. Recall $\boldsymbol{V}=(1-v(\boldsymbol{x}_i))\boldsymbol{D}+\boldsymbol{D}l(\boldsymbol{x}_i)l^T(\boldsymbol{x}_i)\boldsymbol{D}$ where $v(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\,l(\boldsymbol{x}_i)$. Partition $\boldsymbol{V}$ so that $\boldsymbol{V}_{11}$ corresponds to the elements of $f_1(\tilde{x})$ and $\boldsymbol{V}_{22}$ corresponds to $l_2(\boldsymbol{x}_{i,-j})=(f_2(\boldsymbol{x}_{i,-j})^T,\boldsymbol{z}_i^T)^T$.
Then \[ \Delta^{ij}_D(\tilde{x}) = \tilde{x}^2f^T_{(1)}\boldsymbol{V}_{11} f_{(1)} + \tilde{x}\boldsymbol{a}^Tf_{(1)} + c \] is a quadratic polynomial where $\boldsymbol{a}=2\boldsymbol{V}_{12}l_2(\boldsymbol{x}_{i,-j})$ and $c =l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{V}_{22}l_2(\boldsymbol{x}_{i, -j}) + \{1 - v(\boldsymbol{x}_i)\}$. The submatrix $\boldsymbol{V}_{11}$ is positive semidefinite so $f^T_{(1)}\boldsymbol{V}_{11} f_{(1)} \geq 0$, making $\Delta^{ij}_D(\tilde{x})$ a convex quadratic polynomial with respect to $\tilde{x}$. Hence either $\tilde{x}=-1$ or $1$ maximizes $\Delta^{ij}_D(\tilde{x})$ across $\tilde{x} \in [-1,1]$. \subsection{Corollary 1 Proof} For the scenario in Theorem 1, if an optimal design includes an $x_{ij} \neq \pm 1$, then Theorem 1 tells us that there exists another optimal design, with respect to the same criterion, obtained by exchanging this $x_{ij}$ with either $-1$ or $1$. Denote this equivalent exchange by $\tilde{x}^*$. Theorem~1 tells us that $\Delta_D^{ij}$ is a convex quadratic polynomial and, since $\Delta_D^{ij}(x_{ij})=\Delta_D^{ij}(\tilde{x}^*)=1$, the following properties must hold: \begin{enumerate} \item For all $\tilde{x}$ between $x_{ij}$ and $\tilde{x}^*$, $\Delta_D^{ij}(\tilde{x})\leq 1$; \item For all $\tilde{x}$ not between $x_{ij}$ and $\tilde{x}^*$, $\Delta_D^{ij}(\tilde{x})\geq 1$. \end{enumerate} Since $x_{ij} \neq \pm 1$, case (2) must have $\Delta_D^{ij}(\tilde{x})= 1$ for all such $\tilde{x}$; otherwise we could find an $\tilde{x}$ that would improve upon the initial optimal design, a contradiction. Since $\Delta_D^{ij}(\tilde{x})= 1$ for all $\tilde{x}$ in this nonempty interval, $\Delta_D^{ij}$ must be constant across all of $[-1,1]$, meaning all possible exchanges will produce optimal designs with respect to the same criterion. \subsection{$A_W$-criterion Coordinate Exchange Formula}\label{A:CEA} We first derive the row exchange formulas for the weighted $A$-criterion $\text{tr}\{\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}\}$ for a given positive definite matrix $\boldsymbol{W}$. Note $\boldsymbol{W}=\boldsymbol{I}$ yields the traditional $A$-criterion. Define \begin{align} \Delta_{AW}(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})=\text{tr}\{\boldsymbol{W}(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}\} - \text{tr}\{\boldsymbol{W}(\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}\}\ .\ \end{align} Since $\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}}=\boldsymbol{L}_0^T\boldsymbol{L}_0 + \boldsymbol{L}_{01}\boldsymbol{L}_{02}^T$ for $\boldsymbol{L}_{01} = (l(\tilde{\boldsymbol{x}}), -l(\boldsymbol{x}_i))$ and $\boldsymbol{L}_{02} = (l(\tilde{\boldsymbol{x}}), l(\boldsymbol{x}_i))$, it follows from \cite{sherman1950adjustment} that \begin{align} (\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}=\boldsymbol{D} - \boldsymbol{D}\boldsymbol{L}_{01}(\boldsymbol{I}+\boldsymbol{L}_{02}^T\boldsymbol{D}\boldsymbol{L}_{01})^{-1}\boldsymbol{L}_{02}^T\boldsymbol{D}\ ,\ \end{align} where $\boldsymbol{D}=(\boldsymbol{L}_{0}^T\boldsymbol{L}_{0})^{-1}$.
With $\phi_W(\boldsymbol{x}_i)= l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}l(\boldsymbol{x}_i)$ we arrive at the expression \begin{align*} \Delta_{AW}(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\left(\boldsymbol{I} + \boldsymbol{L}^T_{02}\boldsymbol{D}\boldsymbol{L}_{01}\right)^{-1}\boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\begin{pmatrix} 1 + v(\tilde{\boldsymbol{x}})& -v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) \\ v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) & 1 - v(\boldsymbol{x}_{i}) \end{pmatrix}^{-1}\boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \frac{1}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\begin{pmatrix} 1 - v(\boldsymbol{x}_{i})& v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) \\ -v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) & 1 + v(\tilde{\boldsymbol{x}}) \end{pmatrix} \boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \frac{1}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \{l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}}) - \phi_W(\boldsymbol{x}_i) \} \ ,\ \end{align*} where $v(\boldsymbol{x}_i,\tilde{\boldsymbol{x}})=l^T(\boldsymbol{x}_i)\boldsymbol{D}\,l(\tilde{\boldsymbol{x}})$ and $$\boldsymbol{U} = \{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D}\boldsymbol{W}\boldsymbol{D} + \boldsymbol{D} l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D} +\boldsymbol{D} \boldsymbol{W}\boldsymbol{D} l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{D} - \phi_W(\boldsymbol{x}_i)\boldsymbol{D}\ .\ $$ The coordinate exchange formulas follow by straightforward partitioning and permuting. \subsection{CEA of $A_W$-criterion for $m$-factor interaction model}\label{A:Alvl} For $x_{ij} \in [-1,1]$, let $q \in \mathbb{R}$ and define \begin{align*} F(q) &= \max_{\tilde{x} \in [-1,1]}\left\{l^T_1(\tilde{x})(\boldsymbol{U}_{11}-q\boldsymbol{V}_{11}) l_1(\tilde{x}) + (\boldsymbol{b}-q\boldsymbol{a})^T l_1(\tilde{x}) + (d - qc)\right\}\\ a(q) &= f^T_{(1)}(\boldsymbol{U}_{11} - q\boldsymbol{V}_{11})f_{(1)}\\ b(q) &= (\boldsymbol{b}^T - q\boldsymbol{a}^T)f_{(1)}\\ c(q) &= d - qc\ .\ \end{align*} For $q_0 = \underset{\tilde{x} \in [-1, 1]}{\max}\, \Delta^{ij}_{AW}(\tilde{x})$, define \begin{equation*} G(\tilde{x}) = \tilde{x}^2a(q_0) + \tilde{x}b(q_0) + c(q_0). \end{equation*} Applying the main theorem of \cite{dinkelbach1967nonlinear}, we note that $G(\tilde{x}) \leq 0$ for all $\tilde{x} \in [-1, 1]$ and that $\tilde{x}^*$ optimizes $\Delta^{ij}_{AW}$ if and only if $G(\tilde{x}^*) = 0$. To identify the optima of $\Delta^{ij}_{AW}$, it is sufficient to identify the optima of $G(\tilde{x})$, a quadratic function whose optima clearly depend on the signs of $a(q_0)$ and $b(q_0)$. We enumerate the possible optima here: \begin{enumerate} \item $a(q_0) > 0$: $\tilde{x}^* = -1$ or $1$. \item $a(q_0) = 0$: \begin{enumerate} \item If $b(q_0) \neq 0$, $\tilde{x}^* = -1$ or $1$. \item If $b(q_0) = 0$, any value $\tilde{x}^*$ optimizes $\Delta^{ij}_{AW}$. \end{enumerate} \item $a(q_0) < 0$: \begin{enumerate} \item If $b(q_0) \neq 0$, $\Delta^{ij}_{AW}$ is maximized at $\tilde{x}^* = \frac{-b(q_0)}{2a(q_0)}$ when $\left|\frac{-b(q_0)}{2a(q_0)}\right| \leq 1$; otherwise it is maximized at either $\tilde{x}^*=-1$ or $1$. \item If $b(q_0) = 0$, $\tilde{x}^* = 0$. \end{enumerate} \end{enumerate} An example of a non-integer optimum is shown next.
\subsection{Demonstration of continuous optimization of $\Delta^{ij}_{AW}$} We consider an $n = 8, \ k = 4$ design for a main effect model. We know that a final $8$-run, $4$-factor design can be constructed by selecting 4 factor columns from an $8 \times 8$ Hadamard matrix. However, during design construction, non-integer exchanges may be optimal. For example, consider the middle of a continuous CEA where the coordinate $-0.45$ in the $1$st row and $3$rd column of the design in Figure~\ref{fig:DeltaLNonIntSwap} is being exchanged with a new optimum. Computing $a(q)$, $b(q)$, and $c(q)$ for the current design gives \begin{align*} a(q) &= -0.0104 - 0.07q \\ b(q) &= -0.0089 + 0.02q \\ c(q) &= -0.002 - 0.98q \ .\ \end{align*} \noindent Following \cite{dinkelbach1967nonlinear}, let $q_0 = \underset{\tilde{x} \in [-1, 1]}{\max}\, \Delta_{AW}^{13}(\tilde{x})$. Start with a guess for $q_0$ and $\tilde{x}$ and compute $F(q_0)$ and $\Delta_{AW}^{13}$ iteratively until they converge. In this example, the true $q_0 \approx 0$, so $a(q_0) < 0$ and $b(q_0) < 0$, meaning the optimum value is at $\frac{-b(q_0)}{2a(q_0)} \approx \frac{0.0089}{2 \times (-0.0104)} \approx -0.43$. \begin{figure}[ht] \begin{minipage}[b]{.45\textwidth} \centering \begin{tabular}{rrrr} -0.10 & -1 &\fbox{-0.43}& -1 \\ 1 & 1 & 0.53 & -1 \\ -1 & -0.25 & 1 & 0.98 \\ 1 & -1 & -0.43 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & -1 \\ \end{tabular} \end{minipage} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=0.95\textwidth, angle = 270]{nonintegerswapex.pdf}}}$ \end{minipage} \caption{(Left) An $n = 8,\ k = 4$ design mid-CEA; the boxed coordinate in row $1$, column $3$ shows the optimal replacement. (Right) The objective function $\Delta_{AW}^{13}$ over all replacement values for that coordinate, indicating that exchanging the current value $-0.45$ with the nearly equivalent value $-0.43$ maximizes $\Delta_{AW}^{13}$. \label{fig:DeltaLNonIntSwap}} \end{figure} The same approach holds for a row exchange. Figure~\ref{tab:RowExcEx} shows a $6$-run, $4$-factor design with non-integer coordinates. This design is $\widetilde{\boldsymbol{X}}_d$ after exchanging the original row $\boldsymbol{x}_4=(1,\ -1, \ -1, \ 0)$ with the row $(1, -1, -1, 0.17)$. None of the $3^4$ integer-only row exchanges yielded an improved value of the objective function. \begin{figure}[h] \begin{minipage}[b]{.45\textwidth} \centering \begin{tabular}{rrrr} -1 & 1 & -1 & -1\\ 0.44 & 1 & 1 & 1\\ -0.64 & -1 & 1 & -1\\ \rowcolor{lightgray} 1 & -1 & -1 & 0.17\\ 1 & 1 & -1 & 0\\ -1 & 0 & -1 & 1\\ \end{tabular} \end{minipage} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=.8\textwidth, angle = 270]{row_A.pdf}}}$ \end{minipage} \caption{Row 4 of the design (left) exchanges $(1, -1, -1, 0)$ with $(1, -1, -1, 0.17)$. (Right) $A$-values of every integer-only row exchange compared with the value at the optimal exchange, represented by the horizontal line.
Only the original row has an $A$-value close to the value of the optimal exchange ($1.121$ versus $1.122$).\label{tab:RowExcEx}} \end{figure} \clearpage \subsection{Proof of Corollary 2} It is easy to show that, in general, $\boldsymbol{V}\, l(\boldsymbol{x}_i)=\boldsymbol{D} \, l(\boldsymbol{x}_i)$ and \[ \boldsymbol{V}\boldsymbol{W}\boldsymbol{D}=(1-v(\boldsymbol{x}_i))\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}+\boldsymbol{V}l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{V}\boldsymbol{W}\boldsymbol{D}\ .\ \] Then we have the following simplified expression for $\boldsymbol{U}$: \[ \boldsymbol{U} = \boldsymbol{V}\boldsymbol{W}\boldsymbol{D} + \boldsymbol{D}\boldsymbol{W}\boldsymbol{V} \, l(\boldsymbol{x}_i)\, l^T(\boldsymbol{x}_i) \boldsymbol{V}-\phi_W(\boldsymbol{x}_i) \boldsymbol{D}\ .\ \] For the situation described in Corollary 1, the coordinate update for each criterion among the $\mathcal{A}$-criteria involves some $\Delta_D^{ij}$ in the denominator. If this $\Delta_D^{ij}$ is constant, $f^T_{(1)}\boldsymbol{V}_{11} f_{(1)}=0$ and, since $\boldsymbol{V}$ is positive semidefinite, $f_{(1)}$ must be a null eigenvector of $\boldsymbol{V}_{11}$. Moreover, $f_{(1)}^T\boldsymbol{V}_{12}=f_{(1)}^T\boldsymbol{V}_{11}\boldsymbol{V}_{11}^-\boldsymbol{V}_{12}=0$. Then \[ \boldsymbol{V}\, l(\tilde{\boldsymbol{x}})=\boldsymbol{V}\begin{pmatrix} \tilde{x} f_{(1)}\\ l_2(\boldsymbol{x}_{i,-j}) \end{pmatrix}=\begin{pmatrix}\boldsymbol{V}_{12}\, l_2(\boldsymbol{x}_{i,-j})\\\boldsymbol{V}_{22}\, l_2(\boldsymbol{x}_{i,-j}) \end{pmatrix}\ ,\ \] which does not involve $\tilde{x}$. This and the expression for $\boldsymbol{U}$ show that the coefficient of $\tilde{x}^2$ in $\Delta_{AW}^{ij}$ equals $-\phi_W(\boldsymbol{x}_i)f_{(1)}^T\boldsymbol{D}_{11} f_{(1)}$, where $\boldsymbol{D}_{11}$ is the relevant partition of $\boldsymbol{D}$, a positive definite matrix. The coefficient equals 0 if and only if either $\phi_W(\boldsymbol{x}_i)=0$ or $f_{(1)}$ is the all-zero vector. Neither can occur: the main effect coordinate is always included in the model, so $f_{(1)} \neq 0$, and $\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}$ is positive definite for $w>0$. Hence the coefficient must be negative, and so $\Delta^{ij}_{AW}$ is a concave quadratic polynomial, having one unique maximum in $[-1,1]$. When adjusting for nuisance effects, we want to consider $\Delta_{A_s}^{ij}=\lim_{w \to 0} \Delta_{AW}^{ij}$. Hence if $\lim_{w\to 0} \phi_W(\boldsymbol{x}_i)>0$, the quadratic coefficient will again be negative, since $f_{(1)}^T\boldsymbol{D}_{11} f_{(1)}$ does not depend on $w$. Now $\phi_W(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}l(\boldsymbol{x}_i)$ is a nonnegative quadratic form, so $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)=0$ if and only if $\lim_{w\to0}\boldsymbol{W}^{1/2}\boldsymbol{D}l(\boldsymbol{x}_i)=0$.
Partitioning $\boldsymbol{D}$ according to the matrices $\boldsymbol{F}_0$ and $\boldsymbol{Z}$ gives the expression \begin{align*} \lim_{w\to0}\boldsymbol{W}^{1/2}\boldsymbol{D}l(\boldsymbol{x}_i) = \lim_{w\to0} \begin{pmatrix} \boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i\\ \sqrt{w} \boldsymbol{D}_{ZF} f(\boldsymbol{x}_i) + \sqrt{w}\boldsymbol{D}_{Z} \boldsymbol{z}_i\\ \end{pmatrix}= \begin{pmatrix} \boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i\\ 0\\ \end{pmatrix}\ .\ \end{align*} So $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)=0$ if and only if $\boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i=0$, or \[ f(\boldsymbol{x}_i)=-\boldsymbol{D}_F^{-1}\boldsymbol{D}_{FZ} \boldsymbol{z}_i\ .\ \] For both the $A_W$- and Bayesian $A_W$-criterion, $-\boldsymbol{D}_F^{-1}\boldsymbol{D}_{FZ}\boldsymbol{z}_i=\boldsymbol{F}_0^T\boldsymbol{Z}(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i$, which implies \begin{align*} f(\boldsymbol{x}_i)&=\sum_{i'=1}^n f(\boldsymbol{x}_{i'})p_{z,i'i}\\ &=\frac{1}{1-p_{z,ii}}\sum_{i' \neq i} f(\boldsymbol{x}_{i'})p_{z,i'i}\ ,\ \end{align*} where $p_{z,i'i}=\boldsymbol{z}_{i'}^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_{i}$ are elements of the $i$-th column of $\boldsymbol{P}_Z$. For $p_{z,ii}=\boldsymbol{z}_{i}^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_{i}<1$, it also holds that \[ \boldsymbol{z}_i =\frac{1}{1-p_{z,ii}}\sum_{i' \neq i} \boldsymbol{z}_{i'}p_{z,i'i}\ .\ \] Finally, this implies $l(\boldsymbol{x}_i)$ can be written as a linear combination of the other $n-1$ rows of $\boldsymbol{L}_0$ with coefficients $p_{z,i'i}/(1-p_{z,ii})$. But a constant $\Delta_D^{ij}$ implies $v(\boldsymbol{x}_i)=1$, meaning $l(\boldsymbol{x}_i)$ cannot be written as a linear combination of the other $n-1$ rows. Therefore $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)>0$ and $\Delta_{A_s}^{ij}$ is a concave quadratic polynomial. \subsection{Section 4.2 Designs} The $A_s$- and $D_s$-optimal designs for a main effect model with $n = 15, \ k = 6$ are shown in Table~\ref{tab:6F15R}. They do not account for the potential two-factor interaction effects, and thus lead to worse aliasing than the Bayesian $A_s$- and Bayesian $D_s$-optimal designs found in Table~\ref{tab:Bayes6F15R}. A Bayesian $A_s$-optimal design with $\tau^{-2}=10$ (Table~\ref{tab:BayesAnonInt}) was found that better minimizes aliasing (as measured by $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$) than the Bayesian $A_s$-optimal design in Table~\ref{tab:Bayes6F15R}.
\begin{table}[h] \centering \caption{$n = 15, \ k = 6$ $A_s$- and $D_s$-optimal designs for main effect models.\label{tab:6F15R}} \begin{tabular}{rrrrrr rr rrrrrr} \\ \multicolumn{6}{c}{$A_s$-optimal} & & & \multicolumn{6}{c}{$D_s$-optimal} \\ -1 & 1 & 1 & -1 & 1 & 1 && & 1 & -1 & 1 & 1 & -1 & -1\\ 1 & 1 & -1 & -1 & -1 & 1 && & 1 & 1 & 1 & -1 & -1 & 1 \\ 1 & -1 & -1 & 1 & 1 & 1 & && 1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & 1 & -1 & && -1 & -1 & 1 & 1 & -1 & 1 \\ -1 & -1 & 1 & -1 & 1 & -1 & && -1 & 1 & -1 & 1 & -1 & -1 \\ -1 & -1 & -1 & 1 & 1 & -1 & && 1 & -1 & -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 & -1 & -1 & && -1 & 1 & 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & 1 & -1 & -1 & && 1 & 1 & -1 & 1 & -1 & -1 \\ -1 & 1 & 1 & 1 & -1& 0 & &&-1 & 1 & 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & -1 & 1 & -1 & &&1 & -1 & 1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 &&& 1 & 1 & -1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & -1 & 1 &&& -1 & -1 & -1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 & -1 & -1 & && -1 & -1 & -1 & -1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 & 1 &&& -1 & -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & 1 & 1 & 1 &&& -1 & 1 & -1 & -1 & -1 & 1 \\ \end{tabular} \end{table} \begin{table}[ht] \caption{$n = 15, \ k = 6$ Bayesian $A_s$- and $D_s$-optimal designs for main effect models with potential two-factor interaction effects. The Bayesian-$A_s$ comes from $15 \leq \tau^{-2} < 100$ while the Bayesian-$D_s$ has $20 \leq \tau^{-2} \leq 100$.}\label{tab:Bayes6F15R} \centering \begin{tabular}{rrrrrr rr rrrrrr} \multicolumn{6}{c}{\text{Bayesian $A_s$-optimal}} &&& \multicolumn{6}{c}{\text{Bayesian $D_s$-optimal}} \\ -1 & 1 & 1 & -1 & 0 & 0 & && 1 & 1 & 1 & 1 & 1 & -1 \\ -1 & -1 & 1 & 1 & 1 & 1&&& 1 & -1 & 1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & 1 & 1 &&& -1 & 1 & 1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & 1 &&& -1 & -1 & 1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & -1 & -1 &&& 1 & -1 & -1 & 1 & 1 & -1 \\ -1 & -1 & -1 & -1 & 1 & -1 &&& -1 & -1 & -1 & -1 & -1 & 1 \\ 1 & -1 & -1 & 1 & 1 & -1 &&& -1 & -1 &1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 &&& -1 & 1 & -1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 & 1 & 1& && 1 &1 & -1 & 1 & 1 & 1 \\ -1 & -1 & 1 & 1 & -1 & -1 &&& -1 & -1 &-1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -1 & 1 &&& 1 & 1 &-1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & -1 &&& 1 & 1 &1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & -1 & -1 &&& 1 & -1 & 1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & 1 & 1 &&& -1 &1& -1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 & -1 &&& -1 & 1& 1 & 1 & -1 & 1 \\ \end{tabular} \end{table} \begin{table}[ht] \caption{$n = 15, \ k = 6$ Bayesian $A_s$-optimal design for main effect models with potential two-factor interaction effects with $\tau^{-2} = 10$. \label{tab:BayesAnonInt}} \centering \begin{tabular}{rrrrrr} \multicolumn{6}{c}{\text{Bayesian-}$A_s$} \\ 1 & 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & -1 & 1 \\ 1 & -1 & -0.62 & 0.62 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & 0.64 & -0.64 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ -1 & -1 & 1 & 1 & -1 & -1 \\ -0.60 & 0.60 & 1 & -1 & -1 & 1 \\ -1 & 1 & 1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 \\ \end{tabular} \end{table} \clearpage \subsection{Section 4.3 Designs and Results for $k=8, 10$} Results for factors $k = 8$ and $k = 10$ are fairly analogous to the $k = 6$ case discussed in Section~4.3 of the main paper. 
The results, found in Figures~\ref{fig:BayesRSMk8} and \ref{fig:BayesRSMk10}, continue to indicate that the Bayesian $A_s$-optimal design prioritizes estimation of the quadratic effects, while the Bayesian $D_s$-optimal design does not. For $\tau_Q^{-2} = \tau_I^{-2} = 16$, the Bayesian $A_s$-criterion for both $k = 8$ and $k = 10$ produced even more designs with $SS_{MI} = 0$, while the Bayesian $D_s$-criterion found no such designs for $k = 8$ and found few for $k = 10$ relative to the Bayesian $A_s$-criterion. This adds to the previous conclusion that the Bayesian $A_s$-criterion reduces aliasing compared with the Bayesian $D_s$-criterion. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{BayesRSM8.png} \caption{Performance measures for the Bayesian $D_s$-optimal and Bayesian $A_s$-optimal designs when $k = 8$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_{I}^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$. }\label{fig:BayesRSMk8} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{BayesRSM10.png} \caption{Performance measures for the Bayesian $D_s$-optimal and Bayesian $A_s$-optimal designs when $k = 10$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_{I}^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$. }\label{fig:BayesRSMk10} \end{figure} \clearpage \bibliographystyle{asa} \section{Introduction}\label{s:Intro} A screening experiment is an initial step in a sequential experimental procedure to understand and/or optimize a process dependent upon many controllable factors. Such experiments are common in pharmaceuticals, agriculture, genetics, defense, and textiles (see \cite{dean2006screening} for a comprehensive overview of screening design methodology and applications). The screening analysis aims to identify the few factors that drive most of the process variation, often according to a linear model comprised of main effects, interaction effects, and, in the case of numeric factors, quadratic effects \citep{jones2011class}. Each effect corresponds to one or more factors, and a factor is said to be active if at least one of its corresponding effects is large relative to the process noise; otherwise the factor is said to be inert. Analyses under this class of models often follow the effect principles of sparsity, hierarchy, and heredity (see Chapter 9 of \cite{WuHamada}), with the primary goal of correctly classifying each factor as active or inert. A screening design is represented by an $n \times k$ matrix, $\boldsymbol{X}_d$, with rows $\boldsymbol{x}_i^T=(x_{i1},\dots,x_{ik})$, where $x_{ij}$ represents the $j$-th factor's setting for run $i$. To standardize screening designs across applications, continuous factor settings are scaled so $x_{ij} \in [-1,1]$ while categorical factor settings are often restricted to two levels, making $x_{ij}=\pm 1$. We compare $\boldsymbol{X}_d$'s based on the statistical properties of the effects' least-squares estimators because their properties are tractable, particularly their variances and potential biases.
The goal then is to identify an $\boldsymbol{X}_d$ that minimizes the individual variances and biases of these effect estimators. Suppose the model is correctly specified and there are designs having unique least-squares estimators for all effects. Then these estimators are unbiased and designs may be compared based on their estimation variances. A design having variances that are as small as possible will improve one's ability to correctly classify factors as active/inert. For models comprised solely of main effects and interactions, orthogonal designs have estimation variances simultaneously equal to their minimum possible value across all designs. Such designs exist only when $n$ is a multiple of 4; for other $n$ it is unclear which design will have the best variance properties. Still, designs should be compared based on how close their variances are to their respective minimum possible values. This approach requires knowledge of the minimum values as well as some measure of closeness. One approach for identifying minimum variances is to approximate them using the theoretical value assuming an orthogonal design exists, but such values may be unattainable. The $c$-criterion \citep{atkinson2007} may be used to identify the minimum variance for a given effect, but without any guarantee of the estimability of the other effects of interest. To remedy this estimability issue, \cite{AllenMoyer2021} proposed the $c_\mathcal{E}$-criterion to calculate these minimum variances exactly. It is less clear how to measure the proximity of a design's variances to their $c_\mathcal{E}$ values. The Pareto frontier approach by \cite{LuPareto2011} is well-suited for this problem but can be cumbersome in practice. A more practical solution is to evaluate and rank designs according to a single criterion that involves a scalar measure of all the variances. Such criteria should be straightforward to evaluate and optimize, and the resulting optimal designs should have variances close to their $c_\mathcal{E}$ values. Different forms of the $D$- and $A$-criterion (see Section~2.1) are popular variance-based criteria employed in the screening design literature and will be the focus of this paper. Designs that optimize $D$- and $A$-criteria can coincide for some $n$, but this does not mean the criteria equivalently summarize variances. Consider a screening problem with $n=7$ runs and $k=5$ factors that assumes a main effect model. It is well-known that there always exists a $D$-optimal design comprised of $x_{ij}=\pm 1$, even when $x_{ij} \in [-1,1]$ \citep{box1971factorial}. While other $D$-optimal designs having $x_{ij} \in (-1, 1)$ may exist, the screening literature predominantly fixes $x_{ij}=\pm 1$ with no assumed degradation to the resulting variances. For example, \cite{jones2020Aoptimal} found an $A$-optimal design with $x_{ij}$ values of $\pm 1$ and $0$ having smaller variances compared to $D$-optimal designs comprised of $x_{ij}=\pm 1$ only. Figure~\ref{tab:5F7Rex} shows this $A$-optimal design, which has $x_{14}=x_{15}=0$. Figure~\ref{tab:5F7Rex} also shows the corresponding main effect variances (in ascending order) of the $A$-optimal design and two $D$-optimal designs comprised of $x_{ij}=\pm 1$. The minimum possible variances assuming an orthogonal design exists are $1/7=0.1429$ and the minimum variances under the $c_{\mathcal{E}}$-criterion from \cite{AllenMoyer2021} are $0.1459$.
Each of the $A$-optimal design's variances is equal to or smaller than those of the two competing $D$-optimal designs comprised of $\pm 1$. \begin{figure}[ht] \begin{minipage}[b]{.48\textwidth} \centering \begin{tabular}{|rrrrr|} \hline 1 & 1 & 1 & 0 & 0 \\ -1 & -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & -1 & -1 & -1 \\ -1 & -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & -1 \\ \hline \end{tabular} \end{minipage} \hfill \begin{minipage}[b]{.48\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=.75\textwidth, angle = 270]{var5F7R.pdf}}}$ \end{minipage} \caption{(Left) $n = 7,\ k = 5$, $A$-optimal design. (Right) Main effect variances (in ascending order) for $A$- and $D$-optimal designs\label{tab:5F7Rex}. The design ``$D$-optimal $1, 1$'' replaces $x_{14}$ and $x_{15}$ of the left design with $1$. Design ``$D$-optimal $-1, 1$'' is similarly defined. The minimum possible variances assuming an orthogonal design would each be $1/7=0.1429$ and the minimum variances under the $c_{\mathcal{E}}$-criterion from \cite{AllenMoyer2021} are $0.1459$.} \end{figure} As it turns out, the $A$-optimal design in Figure~\ref{tab:5F7Rex} is also $D$-optimal despite having some $x_{ij}=0$. In fact, changing either $x_{14}$ or $x_{15}$ to any value in $[-1, 1]$ produces yet another $D$-optimal design but with equal or larger variances than the $A$-optimal design. The $A$-optimal design in this case, however, is unique. The existence of infinitely many $D$-optimal designs, each with equal or larger variances relative to the $A$-optimal design, is cause for concern about utilizing the $D$-criterion to rank screening designs. In this example, the $A$-criterion was better able to differentiate designs in terms of their ability to minimize the main effect variances simultaneously. This is not to say $D$-optimal designs are less valuable than $A$-optimal designs. Such designs have been used with great success in practice and the relative differences of the variances in Figure~\ref{tab:5F7Rex} do not appear large. Whether these differences impact the analysis depends on the ratio of the true main effect, denoted $\beta_j$, to the process standard deviation, $\sigma$. When performing a two-sided $t$-test for the null hypothesis $\beta_j=0$, the associated noncentrality parameter will be $\beta_j/\sigma$ divided by the square root of the variances shown in Figure~\ref{tab:5F7Rex}. When $\beta_j/\sigma$ is large, slight differences in the variances will not affect the noncentrality parameter, and hence will not affect power of the tests. The differences in variances will have a significant impact as $\beta_j/\sigma$ gets smaller. For example, suppose $\beta_j/\sigma=1$ and we perform a $t$-test for $\beta_1=0$ with significance level $\alpha=0.05$. The power for this test under the $D$-optimal design with $x_{14}=x_{15}=1$ is $0.6355$ while for the $A$-optimal design it is $0.7135$ (a generic version of this computation is sketched below). Without any prior knowledge of $\beta_j/\sigma$, it is important then to find a design that decreases the individual variances as much as possible. Based on the effect principles, it is common to fit a main effect model even though interactions and/or quadratic effects may be active. The least-squares estimators for the main effect model may then become biased. Rather than try to estimate all potentially important effects, one can quantify the bias of the estimators and identify a design that simultaneously reduces estimation variance and bias.
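The noncentrality calculation above is easily scripted. The following is a minimal sketch (Python with \texttt{scipy}; the helper name, variance value, and degrees of freedom are illustrative and not the exact inputs behind the powers quoted above, which depend on the full design):

\begin{verbatim}
# Power of a two-sided t-test of beta_j = 0, given the scaled variance
# of the estimator and the ratio beta_j/sigma.  Illustrative values:
# var_beta = 0.1459, df = residual degrees of freedom of the model.
import numpy as np
from scipy import stats

def t_test_power(var_beta, df, ratio=1.0, alpha=0.05):
    delta = ratio / np.sqrt(var_beta)       # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)  # two-sided critical value
    # Reject when |T| > tcrit; T is noncentral t when beta_j != 0
    return (1 - stats.nct.cdf(tcrit, df, delta)
            + stats.nct.cdf(-tcrit, df, delta))

print(t_test_power(var_beta=0.1459, df=1))
\end{verbatim}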
Let $\boldsymbol{\beta}$ be the vector of the largest collection of effects that may be important and hence captures the true model. Partition $\boldsymbol{\beta}$ into $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$ where $\boldsymbol{\beta}_1$ are effects we believe are most likely to be important and correspond to the effects in the fitted model, and $\boldsymbol{\beta}_2$ are the remaining effects that are potentially important but ignored in the fitted model. The possible bias from estimating $\boldsymbol{\beta}_1$ under the fitted model when the true model includes all $\boldsymbol{\beta}$ is $\boldsymbol{A}\boldsymbol{\beta}_2$ where $\boldsymbol{A}$ is the design's so-called alias matrix. \cite{dumouchel1994simple} construct designs under model uncertainty by assigning a prior distribution to $\boldsymbol{\beta}_1$ and $\boldsymbol{\beta}_2$, and ranking designs according to the $D$-criterion applied to $\boldsymbol{\beta}$'s posterior covariance matrix. While Bayesian $D$-optimal designs have shown an ability to balance minimizing bias and variance, the possible flaws of the $D$-criterion pointed out earlier are still concerning. Better designs may then be found with a Bayesian $A$-criterion, which has not received much attention in the screening literature. This paper makes two important contributions that build a strong case for constructing screening designs under different forms of the $A$-criterion. The first contribution is a comparison of the behavior of the $D$- and $A$-criteria in response to manipulating a single coordinate of a given design. Our investigation not only provides insights into the criteria's coordinate exchange algorithms, a popular class of design construction algorithms, but also establishes the existence of $D$-optimal designs with $x_{ij}= \pm 1$ for models including main effects and/or interactions, as well as nuisance effects, such as block effects. We are only aware of such a result for main effect models with an intercept. We also identify cases in which the $D$-criterion is invariant to any possible coordinate exchange, meaning the $D$-criterion considers all such designs as having equal value despite potentially having different variances. For such cases, we show that the $A$-criterion has a unique optimal coordinate exchange. Our second contribution is the promotion of a weighted Bayesian $A$-criterion for constructing designs that balance bias and variance minimization. We compare new screening designs generated under coordinate-exchange algorithms for common factorial models and show the Bayesian $A$-optimal designs have more appealing variance and bias properties than Bayesian $D$-optimal designs. The paper is organized as follows. Section~\ref{s:ADReview} reviews traditional and current screening models and criteria. Section~\ref{s:Theory} investigates the behavior of the $D$- and $A$-criteria following coordinate exchanges to an existing design for models including nuisance effects. It also introduces the Bayesian $A$-criterion and how nuisance effects may be addressed under this criterion through a weight matrix. Examples of $A$-optimal and Bayesian $A$-optimal designs constructed for main effect models, a two-factor interaction model, and a quadratic model are provided in Section~\ref{s:Bayes}. Section~\ref{s:block} constructs a blocked screening design for a pharmaceutical application under our new criteria. We conclude the paper with a discussion of current and future work in Section~\ref{s:Discussion}.
\section{Background}\label{s:ADReview} The fitted model for the $i$-th continuous response, $y_i$, has the form \begin{equation}\label{eq:LinearModelVec} y_i = f^T(\boldsymbol{x}_i)\boldsymbol{\beta} + \boldsymbol{z}_i^T\boldsymbol{\theta} + e_i\ ,\ \end{equation} where $e_i \sim N(0,\sigma^2)$ and $i=1,\dots,n$. Henceforth and without loss of generality, we set $\sigma^2 = 1$, since $\sigma^2$ is constant across all designs. Every element of $f(\boldsymbol{x}_i)$, a $p \times 1$ vector, is a function of one or more elements of $\boldsymbol{x}_i$ while $\boldsymbol{z}_i$ is a $b \times 1$ vector that does not depend on $\boldsymbol{x}_i$ and corresponds to nuisance effects, $\boldsymbol{\theta}$. The simplest screening model is the main effect model where $f^T(\boldsymbol{x}_i) = \boldsymbol{x}^T_i$ and $\boldsymbol{z}_i=1$, corresponding to an intercept effect, while a blocked main effect model with $b$ blocks has $\boldsymbol{z}_i$ comprised of all zeroes except for a $1$ in the $h$-th position when $y_i$ comes from block $h$. Full quadratic models append the terms $\boldsymbol{x}^T_i \otimes \boldsymbol{x}^T_i=(x_{ij}x_{ij'})$ to the main effect model's $f^T(\boldsymbol{x}_i)$, where $\otimes$ denotes the Kronecker product. Two-factor interaction models remove all $x_{ij}^2$ terms from the full quadratic model's $f(\boldsymbol{x}_i)$. For a given $\boldsymbol{X}_d$, let $\boldsymbol{F}$ and $\boldsymbol{Z}$ denote matrices with rows $f^T(\boldsymbol{x}_i)$ and $\boldsymbol{z}_i^T$, respectively, and define $\boldsymbol{L}=(\boldsymbol{F}|\boldsymbol{Z})$. \subsection{Variance Criteria} When model~\eqref{eq:LinearModelVec} is believed to contain the true model and $n > p+b$, we assume there exists at least one $\boldsymbol{X}_d$ with a unique least-squares estimator $(\hat{\boldsymbol{\beta}}^T|\hat{\boldsymbol{\theta}}^T)^T=(\boldsymbol{L}^T\boldsymbol{L})^{-1}\boldsymbol{L}^T\boldsymbol{y}$. The estimator is unbiased and has variance $(\boldsymbol{L}^T\boldsymbol{L})^{-1}$. Then $\text{Var}(\hat{\boldsymbol{\beta}})=\{\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}\}^{-1}$ where $\boldsymbol{P}_Z=\boldsymbol{Z}(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{Z}^T$ and screening inferences for the elements of $\hat{\boldsymbol{\beta}}$ perform best under an $\boldsymbol{X}_d$ whose $\text{Var}(\hat{\boldsymbol{\beta}})$ has small diagonal elements. Designs may then be ranked based on a scalar function of $(\boldsymbol{L}^T\boldsymbol{L})^{-1}$ that measures variance in some overall sense. To focus attention on estimation of $\boldsymbol{\beta}$, the function should be defined on $\text{Var}(\hat{\boldsymbol{\beta}})$. All we require of $\boldsymbol{\theta}$ is that it can be estimated uniquely. The $D$-criterion ranks designs according to $|(\boldsymbol{L}^T\boldsymbol{L})^{-1}|$ while the $D_s$-criterion is $|\text{Var}(\hat{\boldsymbol{\beta}})|$. In both cases, smaller values are desirable. This paper uses the equivalent criteria of $|\boldsymbol{L}^T\boldsymbol{L}|$ and $|\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}|$, with larger values being desirable. Under a normality assumption of $\boldsymbol{e}$, the $D$-optimal and $D_s$-optimal designs minimize the volume of the confidence ellipsoids for $(\boldsymbol{\beta}^T,\boldsymbol{\theta}^T)$ and $\boldsymbol{\beta}^T$, respectively. Hence these criteria are well-suited for an overall test of the significance of all effects, but not necessarily for individual testing of the parameters.
The $A$-criterion ranks designs with respect to $\text{tr}[\{\boldsymbol{L}^T\boldsymbol{L}\}^{-1}]$ and the $A_s$-criterion is $\text{tr}[\text{Var}(\hat{\boldsymbol{\beta}})]$, being the sum of the individual variances of the parameters of interest. In both cases we want to minimize the chosen criterion. For main effect and interaction models, a design is said to be orthogonal when $\boldsymbol{L}^T\boldsymbol{L} = n\boldsymbol{I}$, meaning $\boldsymbol{F}$ is comprised of orthogonal columns of elements $\pm 1$ \citep{MukerjeeWu2006,WuHamada,Schoen2017}. Such designs estimate all main and interaction effects with the minimum possible variance, $1/n$. By minimizing the individual variances, such designs will be both $D_s$- and $A_s$-optimal. Orthogonal designs, however, can only exist when $n$ is a multiple of 4; otherwise the $D_s$- and $A_s$-optimal designs may differ from each other. Existing literature for constructing $A_s$- and $D_s$-optimal screening designs under arbitrary $n$ has predominantly focused on main effect models. These designs are more commonly referred to as chemical balance and spring balance designs \citep{cheng1980optimality,masaro1983optimality,jacroux1983optimality,wong1984optimal,Cheng2014b}. To our knowledge, there are no theoretical results concerning $A_s$-optimal chemical balance designs with $x_{ij} \in \{\pm 1, 0\}$ for main effect models with an intercept nuisance effect. \cite{jones2020Aoptimal} algorithmically constructed and compared $A$- and $D$-optimal designs under different screening models and arbitrary $n$. They found that for $n = 3\ (\text{mod}\ 4)$ and $n$ small relative to $k$, $A$-optimal designs often had $x_{ij} \in \{\pm 1, 0\}$. In fact, they algorithmically constructed $A$-optimal designs allowing $x_{ij} \in [-1,1]$, yet still found the $A$-optimal designs only took on these three integer settings. Similar to the $D$-optimal design's tendency to only have values $x_{ij}=\pm 1$, Sections~3.2 and 4.1 explore the conjecture that an $A$-optimal design exists where $x_{ij} \in \{\pm 1, 0\}$.
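For concreteness, the $D_s$- and $A_s$-values are simple functions of the information matrix; the sketch below (Python with \texttt{numpy}; the helper name and example design are ours) evaluates both for a main effect model with an intercept nuisance effect:

\begin{verbatim}
# D_s- and A_s-criterion values for a main effect model with intercept.
# X is the n x k design matrix with coordinates in [-1, 1].
import numpy as np

def ds_as_values(X):
    n, k = X.shape
    F, Z = X, np.ones((n, 1))               # primary terms and nuisance
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)  # projection onto col(Z)
    M = F.T @ (np.eye(n) - PZ) @ F          # information for beta
    return np.linalg.det(M), np.trace(np.linalg.inv(M))

# A 2^{4-1} fractional factorial: orthogonal, so A_s = k/n = 0.5
X = np.array([[ 1,  1,  1,  1], [ 1,  1, -1, -1],
              [ 1, -1,  1, -1], [ 1, -1, -1,  1],
              [-1,  1,  1, -1], [-1,  1, -1,  1],
              [-1, -1,  1,  1], [-1, -1, -1, -1]])
Ds, As = ds_as_values(X)
\end{verbatim}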
Fitting submodels introduces potential bias, namely the bias of $(\hat{\boldsymbol{\beta}}_1^T|\hat{\boldsymbol{\theta}}^T)^T$ is $\boldsymbol{A}\boldsymbol{\beta}_2$ where \begin{align} \boldsymbol{A}=(\boldsymbol{L}_1^T \boldsymbol{L}_1)^{-1}\boldsymbol{L}_1^T\boldsymbol{F}_2 \end{align} is referred to as the alias matrix. While variances for $\hat{\boldsymbol{\theta}}$ can be ignored when comparing designs, we should consider its bias with respect to $\boldsymbol{\beta}_2$ because we anticipate that some of these potential terms will be eventually considered in the analysis. The experimenter should then identify a design that minimizes both the diagonals of $\text{Var}(\hat{\boldsymbol{\beta}}_1)$ and the elements of $\boldsymbol{A}$. For $n = 0\ (\text{mod}\ 4)$, one strategy is to rank all strength 2 or 3 orthogonal arrays based on an aliasing-based criterion such as minimum aberration or one of its generalizations \citep{MukerjeeWu2006, Cheng2014,Vazquex2022}. Doing so guarantees minimum variances after fitting the main effect model with minimal bias due to model misspecification. For arbitrary $n$, \cite{jones2011efficient} and \cite{LuPareto2011} algorithmically optimize criteria that are some combination of the $D$-criterion and $\text{tr}[\boldsymbol{A}^T\boldsymbol{A}]$ under a given partition of $\boldsymbol{\beta}$. Bias may also be reduced through one's ability to fit many possible submodels, which is the goal of estimation capacity and model robust designs \citep{Li2000,Chen2004,Tsai2010,Smucker2012}, but such criteria are computationally intensive to calculate. \cite{dumouchel1994simple} proposed a flexible Bayesian $D$-criterion to balance main effect variance and bias minimization. A uniform, improper prior is assigned to $\boldsymbol{\beta}_1$ and $\boldsymbol{\theta}$, and a $N(\boldsymbol{0}, \tau^2\boldsymbol{I}_q)$ prior to $\boldsymbol{\beta}_2$. For $\boldsymbol{y} \mid \boldsymbol{\beta},\boldsymbol{\theta}\sim N(\boldsymbol{F}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{\theta},\ \boldsymbol{I})$, the posterior covariance matrix for $(\boldsymbol{\beta}^T,\boldsymbol{\theta}^T)^T$ is then $(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}$ where $\boldsymbol{K}$ is a diagonal matrix with $0$'s for the corresponding $p$ primary terms and $1$'s for the corresponding $q$ potential terms. The Bayesian $D$-criterion is $|\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K}|$, where $\tau^{-2}$ tunes the importance of minimizing bias and/or estimation of the potential terms. As $\tau^{-2} \to \infty$, the criterion will be less influenced by changes in aliasing between the primary and potential terms since $\tau^{-2}\boldsymbol{I}_q$ will have large diagonal elements. As $\tau^{-2} \to 0$, the potential terms become primary terms. \cite{dumouchel1994simple} recommended $\tau^{-2} = 1$ and constructed optimal designs via a coordinate exchange algorithm. Other Bayesian approaches have been considered \citep{Toman1994,Joseph2006,Bingham2007,TSAI2007619} but with only two- or three-level factors. This paper will also explore the Bayesian $A$-criterion, $\text{tr}[(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}]$, and Bayesian $A_s$-criterion, being the trace of the submatrix of $(\boldsymbol{L}^T\boldsymbol{L} + \tau^{-2}\boldsymbol{K})^{-1}$ corresponding to $\boldsymbol{\beta}$.
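To make the preceding definitions concrete, the sketch below (Python with \texttt{numpy}; the function names are illustrative) computes the alias matrix and the Bayesian $D$- and $A$-values for a given partition into primary terms $\boldsymbol{F}_1$, potential terms $\boldsymbol{F}_2$, and nuisance columns $\boldsymbol{Z}$:

\begin{verbatim}
# Alias matrix A = (L1'L1)^{-1} L1'F2 and the Bayesian D- and A-values
# computed from the posterior covariance (L'L + tau^{-2} K)^{-1}.
import numpy as np

def alias_matrix(F1, F2, Z):
    L1 = np.hstack([F1, Z])
    return np.linalg.solve(L1.T @ L1, L1.T @ F2)

def bayes_d_a(F1, F2, Z, tau2inv):
    # Column order (F1 | Z | F2); det and trace do not depend on order.
    L = np.hstack([F1, Z, F2])
    p = F1.shape[1] + Z.shape[1]            # improper (flat) priors
    q = F2.shape[1]                         # N(0, tau^2) priors
    K = np.diag(np.r_[np.zeros(p), np.ones(q)])
    P = np.linalg.inv(L.T @ L + tau2inv * K)
    return 1.0 / np.linalg.det(P), np.trace(P)
\end{verbatim}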
\section{Properties of the $D$- and $A$-criterion}\label{s:Theory} It is challenging to analytically derive optimal designs for a given criterion under an arbitrary $n$ and $k$. In practice, these criteria are optimized via some computer search algorithm, such as the coordinate exchange algorithm \citep{meyer1995coordinate}, branch-and-bound algorithms \citep{Ahipasaoglu2021}, and nonlinear programming \citep{EstebanBravo2017,Duarte2020}. While the latter two algorithms offer some guarantees of identifying the true optimum, the coordinate exchange algorithm is straightforward to implement and is employed in popular statistical software. We focus on the coordinate exchange algorithm (CEA) in this section not only because of its wide adoption, but because it provides an analytical tool to study the behavior of these different forms of the $D$- and $A$-criterion defined in Section~2. Let $\mathcal{X}_j$ denote the set of permissible coordinates for $x_{ij}$, making $\mathcal{X}= \mathcal{X}_1 \times \dots \times \mathcal{X}_k$ the set of permissible rows for $\boldsymbol{X}_d$. Then $\mathcal{X}_j=\pm 1$ for categorical factors and $\mathcal{X}_j = [-1,1]$ for numeric factors. A row exchange of an initial design, $\boldsymbol{X}_{d0}$, exchanges one of its existing rows, $\boldsymbol{x}_i$, with a candidate row $\tilde{\boldsymbol{x}} \in \mathcal{X}$. This leads to a row exchange of $f(\boldsymbol{x}_i)$, the $i$-th row of the initial design's model matrix, $\boldsymbol{F}_0$, with the candidate model matrix row, $f(\tilde{\boldsymbol{x}})$. No exchange is made to $\boldsymbol{Z}$, since nuisance effects are not design dependent. Hence an exchange gives a new design and model matrix, denoted $\widetilde{\boldsymbol{X}}$ and $\widetilde{\boldsymbol{L}}$, respectively. A row exchange algorithm (REA) for a given criterion identifies the optimal $\tilde{\boldsymbol{x}}$ for each $\boldsymbol{x}_i$ sequentially, updating $\boldsymbol{X}_{d0}$ one row at a time. After going through all $n$ runs, the algorithm starts again at $\boldsymbol{x}_1$. The process repeats itself until some convergence criterion is met. The algorithm is often repeated across many initial starting designs and the overall best design is reported. The reported design is not guaranteed to be globally optimal, but it is common in the screening literature to refer to such designs as optimal. More details about REAs may be found in \cite{atkinson2007}. A coordinate exchange is a specific row exchange that only manipulates $x_{ij}$. Then we may partition $\boldsymbol{x}_i^T=(x_{ij} | \boldsymbol{x}_{i, -j})$ and represent the $i$-th row of $\boldsymbol{L}$ as \begin{align} l(\boldsymbol{x}_i) = \begin{pmatrix} f_1(x_{ij}) \\ \hline f_2(\boldsymbol{x}_{i, -j})\\ \boldsymbol{z}_i \end{pmatrix}=\begin{pmatrix}l_1(x_{ij})\\ \hline l_2(\boldsymbol{x}_{i, -j}) \end{pmatrix}\ ,\ \label{e:partition} \end{align} where $f_1(x_{ij})=l_1(x_{ij})$ is the subvector of $f(\boldsymbol{x}_i)$ that only involves $x_{ij}$ and $f_2(\boldsymbol{x}_{i, -j})$ are the remaining elements. For example, exchanging $x_{i1}$ for a two-factor interaction model with an intercept nuisance parameter has $l_1^T(x_{i1}) = (x_{i1}, \ x_{i1}x_{i2},\dots, \ x_{i1}x_{ik})$ and $l_2^T(\boldsymbol{x}_{i, -j})=( x_{i2},\dots, \ x_{ik},\ x_{i2}x_{i3},\dots, \ x_{i(k-1)}x_{ik},1)$. \cite{meyer1995coordinate} proposed the CEA that proceeds in the same fashion as a REA, but for a given $\boldsymbol{x}_i$, each coordinate $x_{ij}$ is updated sequentially.
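To fix ideas before analyzing its behavior, a minimal sketch of a discrete CEA pass is given below (Python with \texttt{numpy}; \texttt{criterion} is any scalar design criterion to be minimized, such as the $A_s$-value, and the implementation is illustrative rather than the one used for our computations):

\begin{verbatim}
# One possible skeleton of a coordinate exchange algorithm over a
# discrete candidate set.  Repeats passes until a full pass makes no
# improving exchange (or max_passes is hit).
import numpy as np

def cea(X, criterion, candidates=(-1.0, 0.0, 1.0), max_passes=50):
    X = X.copy()
    best = criterion(X)
    for _ in range(max_passes):
        improved = False
        for i in range(X.shape[0]):          # each run
            for j in range(X.shape[1]):      # each coordinate
                keep = X[i, j]
                for x in candidates:         # try every candidate level
                    X[i, j] = x
                    val = criterion(X)
                    if val < best - 1e-12:
                        best, keep, improved = val, x, True
                X[i, j] = keep               # retain best level found
        if not improved:
            break
    return X, best
\end{verbatim}

In practice the inner call to \texttt{criterion} would be replaced by the fast update formulas discussed next.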
As the number of candidate coordinates in $\mathcal{X}_j$ is smaller than the number of candidate rows in $\mathcal{X}$, a CEA involves fewer computations and does not require the user to specify all possible candidate points in $\mathcal{X}$. Moreover, there exist fast update formulas for the forms of the $D$- and $A$-criterion considered in this paper that do not require repeated matrix inversion. However, compared to a REA, a CEA requires more random starts to avoid converging to a local optimum. The remainder of this section investigates the behavior of the CEA for the different forms of the $D$- and $A$-criteria. \subsection{Properties of $D$-criterion}\label{s:subDoptTheory} For an $\boldsymbol{x}_i$ in $\boldsymbol{X}_{d0}$, the $D$-criterion's REA seeks the exchange $\tilde{\boldsymbol{x}}$ that maximizes \begin{align} \Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \frac{|\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}}|} {|\boldsymbol{L}^T_{0} \boldsymbol{L}_{0}|} = l^T(\tilde{\boldsymbol{x}})\boldsymbol{V}\, l(\tilde{\boldsymbol{x}}) + \{1 - v(\boldsymbol{x}_i)\}\label{eq:Dobj_REA} \end{align} where $\boldsymbol{V} = \{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D} + \boldsymbol{D} l(\boldsymbol{x}_i)\, l^T(\boldsymbol{x}_i) \boldsymbol{D}$ with $\boldsymbol{D}=(\boldsymbol{L}^T_{0} \boldsymbol{L}_{0})^{-1}$ and $v(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\, l(\boldsymbol{x}_i)$. The matrix $\boldsymbol{V}$ is symmetric and it is positive definite if and only if $v(\boldsymbol{x}_i) < 1$. If $v(\boldsymbol{x}_i)=1$ then $\boldsymbol{V}$ is positive semidefinite. For a coordinate exchange of $x_{ij}$ for $\tilde{x}$, we can permute the rows and columns of $\boldsymbol{V}$ following \eqref{e:partition}, giving a function with respect to $\tilde{x}$ \begin{equation}\label{eq:Dobjfun} \Delta^{ij}_D(\tilde{x}) = l^T_1(\tilde{x})\boldsymbol{V}_{11} l_1(\tilde{x}) + \boldsymbol{a}^Tl_1(\tilde{x}) + c \end{equation} \noindent where $\boldsymbol{a} = 2\boldsymbol{V}_{12}l_2(\boldsymbol{x}_{i, -j})$ and $c =l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{V}_{22}l_2(\boldsymbol{x}_{i, -j}) + \{1 - v(\boldsymbol{x}_i)\}$ are fixed. The CEA for the $D_s$-criterion can be done equivalently through the CEA for the $D$-criterion because $|\boldsymbol{L}^T\boldsymbol{L}|=|\boldsymbol{Z}^T\boldsymbol{Z}| \times |\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}|$. That is, $\Delta^{ij}_D$ evaluates the ratio $|\widetilde{\boldsymbol{F}}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\widetilde{\boldsymbol{F}}|/|\boldsymbol{F}_0^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}_0|$, corresponding to the $D_s$-criterion. The CEA for the Bayesian $D$-criterion has a similar update formula to \eqref{eq:Dobjfun} but with matrix $\boldsymbol{D}=(\boldsymbol{L}_0^T\boldsymbol{L}_0 + \tau^{-2}\boldsymbol{K})^{-1}$ \citep{dumouchel1994simple}. The CEA for the Bayesian $D_s$-criterion is easily shown to be equivalent to the Bayesian $D$-criterion, similar to the equivalence of the CEAs for the $D$- and $D_s$-criterion. We refer to the collection of these four criteria (i.e., $D$, $D_s$, Bayesian $D$, and Bayesian $D_s$) as the $\mathcal{D}$-criteria. We now provide a general result about optimal designs for the $\mathcal{D}$-criteria under what we call an $m$-factor interaction model. Let $J_m=\{j_1,\dots,j_m\}$ be a subset of $m$ of the $k$ factor indices.
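Identity \eqref{eq:Dobj_REA} is purely algebraic and easy to verify numerically; the sketch below (Python with \texttt{numpy}; the random full-column-rank $\boldsymbol{L}_0$ is illustrative) compares the rank-two update against a direct recomputation of the determinant ratio:

\begin{verbatim}
# Check Delta_D(x_i, x_new) = l_new' V l_new + (1 - v) against the
# determinant ratio after replacing row i of L0 with l_new.
import numpy as np

def delta_D(L0, i, l_new):
    D = np.linalg.inv(L0.T @ L0)
    l_i = L0[i]
    v = l_i @ D @ l_i
    V = (1 - v) * D + np.outer(D @ l_i, D @ l_i)
    return l_new @ V @ l_new + (1 - v)

rng = np.random.default_rng(0)
L0 = rng.standard_normal((12, 5))   # any full-column-rank L0 works
l_new = rng.standard_normal(5)
L1 = L0.copy()
L1[3] = l_new
ratio = np.linalg.det(L1.T @ L1) / np.linalg.det(L0.T @ L0)
assert np.isclose(delta_D(L0, 3, l_new), ratio)
\end{verbatim}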
An $m$-factor interaction model has elements of $f(\boldsymbol{x})$ comprised only of \begin{itemize} \item all $k$ main effect coordinates $(x_j)$; \item at least one coordinate of the form $\prod_{j \in J_m} x_j$ for some $J_m$; \item any remaining coordinates are of the form $\prod_{j \in J} x_j$ where $|J|=2,\dots,m$. \end{itemize} The main effect model is then the one-factor interaction model. Equation~\eqref{eq:Dobjfun} provides a proof technique for the following theorem: \begin{theorem}\label{Thm:Dlevels} For any $m$-factor interaction model where $\mathcal{X}_j = \pm 1$ or $\mathcal{X}_j = [-1,1]$, there exists an optimal design comprised of $x_{ij}=\pm 1$ for each of the $\mathcal{D}$-criteria. \end{theorem} \noindent This proof and all subsequent proofs are provided in the Supplementary Materials. To our knowledge this result has been proven only for main effect models \citep{box1971factorial, mitchell1974algorithm, galil1980d} and non-Bayesian criteria. Not only does our result extend to $m$-factor interaction models, it also applies to any nuisance effect structure that is design independent. A practical consequence of Theorem~\ref{Thm:Dlevels} is that to algorithmically construct an optimal design for such models under one of the $\mathcal{D}$-criteria, we can restrict $\mathcal{X}_{j}=\pm 1$. An unfortunate consequence of Theorem~1 that highlights a potential deficiency is the following corollary: \begin{corollary}\label{Cor:Dlevels} For any $m$-factor interaction model, suppose there exists an optimal design with respect to one of the $\mathcal{D}$-criteria with at least one $x_{ij}\neq \pm 1$ for which $\mathcal{X}_j = [-1,1]$. Then $\Delta_D^{ij}$ for that criterion is constant with respect to $\tilde{x}$, making all such possible exchanges produce another optimal design. \end{corollary} \noindent The phenomenon described in Corollary~\ref{Cor:Dlevels} occurred in Figure~\ref{tab:5F7Rex} for both coordinates $x_{14}$ and $x_{15}$ under the $D$- and $D_s$-criterion. Indeed, the designs with $(x_{14},x_{15})=(\pm 1,\pm1)$ produced the worst individual main effect variances. This example raises doubts about the $\mathcal{D}$-criteria's ability to evaluate a design's screening abilities. \subsection{Properties of $A$-criterion}\label{s:subAoptTheory} The decrease in the $A$-criterion following a row exchange is \begin{align}\label{eq:RowExchange} \Delta_A(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \text{tr}\{(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}\} - \text{tr}\{(\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}\} = \frac{l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}}) - \phi(\boldsymbol{x}_i)}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \end{align} where $\boldsymbol{U} = \boldsymbol{V}\boldsymbol{D} + \boldsymbol{D}\boldsymbol{V} - [\{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D} + \phi(\boldsymbol{x}_i)\boldsymbol{I}]\boldsymbol{D}$ and $\phi(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{D}l(\boldsymbol{x}_i)$. The optimal exchange maximizes \eqref{eq:RowExchange}. Unlike with $\boldsymbol{V}$, $l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}})$ can take on positive and negative values.
Partitioning $\boldsymbol{U}$ as we did with $\boldsymbol{V}$, the coordinate objective function is \begin{equation}\label{eq:DeltaL} \Delta_A^{ij}(\tilde{x}) = \frac{l^T_1(\tilde{x})\boldsymbol{U}_{11} l_1(\tilde{x}) + \boldsymbol{b}^Tl_1(\tilde{x}) + d }{\Delta_D^{ij}(\tilde{x})}\ ,\ \end{equation} where $\boldsymbol{b} = 2\boldsymbol{U}_{12}l_2(\boldsymbol{x}_{i, -j})$ and $d = l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{U}_{22}l_2(\boldsymbol{x}_{i, -j}) - \phi(\boldsymbol{x}_i)$ are constant. The equivalence between the $D$- and $D_s$-criterion does not hold for the $A$- and $A_s$-criterion. Other than special cases \citep{Nachtsheim1989}, there is no closed-form coordinate exchange formula for the $A_s$-criterion. Computing the update after a row/coordinate exchange may be accomplished by first updating $(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}$ via the Sherman-Morrison-Woodbury formula \citep{sherman1950adjustment} and directly calculating the change, denoted $\Delta_{A_s}^{ij}$. This will not be as computationally efficient as evaluating \eqref{eq:DeltaL}. Following \cite{StallingsMorgan2015}, let $\boldsymbol{W}$ be a diagonal matrix of $p+b$ elements where the first $p$ diagonal entries corresponding to $\boldsymbol{\beta}$ equal 1 and the last $b$ elements corresponding to $\boldsymbol{\theta}$ equal an arbitrarily small value, $w>0$. The weighted $A$-criterion, or $A_W$-criterion, is then \begin{align} \text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}]=\sum_j \text{Var}(\hat{\beta}_j) + w \sum_h \text{Var}(\hat{\theta}_h)\ .\ \label{e:Anuisance2} \end{align} The coordinate exchange update for the $A_W$-criterion, denoted $\Delta_{AW}^{ij}$, is similar to \eqref{eq:DeltaL} and is derived in the Supplementary Materials. Note the $A$-criterion is a special case of the $A_W$-criterion with $\boldsymbol{W}=\boldsymbol{I}$. From \eqref{e:Anuisance2}, we see $\lim_{w\to 0}\text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}]=\text{tr}\left[\{\boldsymbol{F}^T(\boldsymbol{I}-\boldsymbol{P}_Z)\boldsymbol{F}\}^{-1}\right]$, the $A_s$-criterion. Therefore, $\lim_{w\to0} \Delta_{AW}^{ij} = \Delta_{A_s}^{ij}$. This result provides an efficient way to perform a CEA for the $A_s$-criterion using the $A_W$-criterion and setting $w$ to an arbitrarily small value. We have found $w=10^{-6}$ to perform well for most applications. This limiting result also allows us to study the behavior of $\Delta_{A_s}^{ij}$ through the more tractable $\Delta_{AW}^{ij}$. The update formula for a coordinate exchange under the Bayesian $A$-criterion takes the same form as \eqref{eq:DeltaL} but with $\boldsymbol{D}=(\boldsymbol{L}_0^T\boldsymbol{L}_0 + \tau^{-2}\boldsymbol{K})^{-1}$. For the Bayesian $A_s$-criterion, we can apply the weighted approach to the posterior covariance matrix, \begin{align} \text{tr}[\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L}+\tau^{-2}\boldsymbol{K})^{-1}]\ .\ \label{e:BayesAnuisance} \end{align} We refer to this as the Bayesian $A_W$-criterion. To our knowledge, this is one of the earliest attempts at combining techniques from the weighted and Bayesian optimality literature. The Bayesian $A_W$- and Bayesian $A_s$-criterion's ability to balance minimization of the primary variances and their aliasing with the potential terms is investigated with multiple examples in Section~4. We collectively refer to the different criteria discussed here as the $\mathcal{A}$-criteria.
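The limiting relationship between the $A_W$- and $A_s$-criterion is likewise easy to check numerically; the sketch below (Python with \texttt{numpy}; the matrices are illustrative, since the relationship holds for any full-column-rank $\boldsymbol{F}$ and $\boldsymbol{Z}$) confirms that $w=10^{-6}$ recovers the $A_s$-value to high accuracy:

\begin{verbatim}
# tr[W (L'L)^{-1}] with nuisance weight w approaches the A_s-criterion
# tr[{F'(I - P_Z)F}^{-1}] as w -> 0.
import numpy as np

def a_w(F, Z, w):
    L = np.hstack([F, Z])
    W = np.diag(np.r_[np.ones(F.shape[1]), w * np.ones(Z.shape[1])])
    return np.trace(W @ np.linalg.inv(L.T @ L))

def a_s(F, Z):
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    return np.trace(np.linalg.inv(F.T @ (np.eye(len(F)) - PZ) @ F))

rng = np.random.default_rng(1)
F = rng.standard_normal((9, 4))
Z = np.ones((9, 1))
assert np.isclose(a_w(F, Z, 1e-6), a_s(F, Z), atol=1e-4)
\end{verbatim}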
We initially sought to prove the conjecture that for any of the $\mathcal{A}$-criteria there always exists an optimal design such that all $x_{ij} \in \{\pm 1, 0\}$ for $m$-factor interaction models. For such models and criteria, the coordinate update formula is a ratio of two quadratic polynomials with respect to $\tilde{x}$ and the optimum coordinate exchange can be found using fractional programming methods \citep{dinkelbach1967nonlinear}. In the Supplementary Materials, we identify situations where the optimum is unique and occurs at a non-integer value. This result by itself does not disprove the conjecture, but it does provide evidence to the contrary. Section~4.1 further explores this conjecture algorithmically for certain $n$ and $k$ under the main effect model. We next considered the unfortunate scenario in Corollary~1 with respect to the $\mathcal{A}$-criteria. As demonstrated in \eqref{eq:DeltaL}, the coordinate update formula for each of the $\mathcal{A}$-criteria involves a coordinate update for one of the $\mathcal{D}$-criteria. \begin{corollary}\label{lem:Aconstant1} For an $m$-factor interaction model and design, $\boldsymbol{X}_{d0}$, consider one of the weighted criteria among the $\mathcal{A}$-criteria for $w>0$. If the update formula for the corresponding criterion among the $\mathcal{D}$-criteria is constant, then $\Delta_{AW}^{ij}$ is uniquely maximized. Moreover, $\Delta_{A_s}^{ij}$ is uniquely maximized when $\boldsymbol{z}_i^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i < 1$. \end{corollary} \noindent The corollary's condition $\boldsymbol{z}_i^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i < 1$ holds for practical cases of an intercept-only nuisance effect and block effects from $b$ blocks each of size $2$ or more. It provides further support for the $\mathcal{A}$-criteria's ability to better differentiate designs than the $\mathcal{D}$-criteria. \section{Examples}\label{s:Bayes} This section compares properties of algorithmically-generated optimal designs for three common screening models: (1) main effect models, (2) two-factor interaction models, and (3) quadratic models. All models have an intercept-only nuisance effect. For main effect models, we utilize the $A_s$- and $D_s$-criterion. For the two-factor interaction and quadratic models, we consider the Bayesian versions of these criteria. The best designs generated are compared in terms of their main effect variances after fitting the main effect submodel and, when applicable, their aliasing with potential terms (two-factor interactions and/or quadratic effects). \subsection{Main Effect Models}\label{s:CoordExchange} A main effect model with $k$ factors has the scalar form \[ y_{i}=\beta_0 + \sum_{j=1}^k x_{ij} \beta_j + e_i\ ,\ \] where $\beta_0$ is an intercept and $\beta_j$ with $j>0$ are the main effects. We constructed $A_s$- and $D_s$-optimal designs under this model for $k = 3, \dots, 20$ factors and $n = k+1,\dots,24$ runs assuming either only discrete settings ($\mathcal{X}_j = \{\pm 1, 0\}$) or only continuous settings ($\mathcal{X}_j = [-1,1]$). For continuous settings, we optimized \eqref{eq:DeltaL} with box-constrained L-BFGS over $[-1,1]$ \citep{byrd1995limited}. Due to the demanding computations involved as $n$ and $k$ increase, the CEA was first performed with 100 random starts in both the continuous and discrete cases. We then recorded the best criterion value for the algorithms separately and the overall best value. If the two values were equal, we declared the value as optimal.
Otherwise, the CEA with the inferior value was performed again with another 100 random starts. If the best value among this new batch did not improve the previous overall best value, the search was stopped. If the value did improve the overall best value, the other CEA was run for another 100 starts and the iterative process continued. For our investigation, this back-and-forth search took no more than 1000 overall total starting designs. The $D_s$- or $A_s$-optimal designs were the designs with the best $D_s$- or $A_s$-value found across all iterations of both the discrete and continuous CEAs. Figure~\ref{fig:CoordHeatMap}(a) shows how many of the initial 100 constructed designs under the continuous CEA were $A_s$-optimal. A 0 value means the continuous CEA never found an $A_s$-optimal design among the initial batch of 100 random starting designs. Figure~\ref{fig:CoordHeatMap}(b) shows the difference between the counts in Figure~\ref{fig:CoordHeatMap}(a) and the same counts under the discrete CEA. Generally, when $n = k + 1$ or $k + 2$, the continuous CEA identified an $A_s$-optimal design more frequently than the discrete CEA. The discrete CEA found the $A_s$-optimal design more frequently in only 24\% of the scenarios considered and struggled particularly in the cases of $(n, k)=(11, 10)$ and $(19, 18)$. For these cases, even after increasing the number of starting designs to $10,000$, the discrete CEA was unable to find an $A_s$-optimal design. The continuous CEA was able to find an $A_s$-optimal design for all cases when we increased the number of starting designs to $1000$. We therefore recommend the continuous CEA for constructing $A_s$-optimal designs. \begin{figure}[ht] \centering \includegraphics[width=1\textwidth]{alg_comp.png} \caption{(a) Categories of the number of times out of the initial 100 starting designs the continuous CEA identified an $A_s$-optimal design for given $(n,k)$ combinations. The overall proportion for each category is shown in parentheses. (b) Difference between the number of times the initial starting designs for the continuous and discrete CEA identified an $A_s$-optimal design for given $(n,k)$ combinations. \label{fig:CoordHeatMap}} \end{figure} Contrary to our conjecture in Section~3.2, the $A_s$-optimal designs found by the continuous CEA for scenarios $(11, 10)$ and $(19, 18)$ contained non-integer values and are displayed in the Supplementary Materials. These designs, however, do not significantly decrease the $A_s$-criterion compared to the best constructed designs requiring $\mathcal{X}_j=\pm 1$ or $\{\pm 1, 0\}$ as given in \cite{jones2020Aoptimal}. The $(11, 10)$ $A_s$-optimal design we constructed was only $0.28\%$ and $0.35\%$ more $A_s$-efficient than the three- and two-level designs, respectively. The $(19, 18)$ $A_s$-optimal design with non-integer factor settings was only $0.08\%$ more $A_s$-efficient than the best discrete-level $A_s$-optimal design we generated. Similar to \cite{jones2020Aoptimal}, the main effect variances of the $A_s$- and $D_s$-optimal designs we generated were the same except when $n$ was close to $k$ or when $n = 3\ (\text{mod}\ 4)$. For the designs where $n = 3\ (\text{mod}\ 4)$, we calculated the paired differences between the ordered main effect variances of the two designs.
Across all such scenarios and pairs, $78\%$ of the main effect variances from the $A_s$-optimal designs were smaller than those from $D_s$-optimal designs, $6\%$ were equal, and for the remaining $16\%$ the $A_s$-optimal design had the larger variance. The largest amount by which an $A_s$-optimal design reduced an individual variance relative to the $D_s$-optimal design was $0.05$. There was one scenario where the $D_s$-optimal design decreased a variance over the $A_s$-optimal design by $0.06$. \subsection{Two-factor Interaction Model with $n = 15, \ k = 6$}\label{subsec:2fibayes} Under the main effect model for scenario $n=15$, $k=6$, the $A_s$- and $D_s$-optimal designs were different, with the $A_s$-optimal design having zero coordinates for factors $5$ and $6$. We now consider this scenario under a two-factor interaction model: \[ y_{i}=\beta_0 + \sum_{j=1}^k x_{ij} \beta_j + \sum_{1 \leq j<j' \leq k} x_{ij}x_{ij'} \beta_{jj'}+ e_i\ ,\ \] which adds $k(k-1)/2$ interaction effects $\beta_{jj'}$. Not all effects can be estimated uniquely due to the small $n$. Thus we constructed Bayesian $A_s$- and Bayesian $D_s$-optimal designs where the intercept is a nuisance effect, main effects are primary terms, and two-factor interaction effects are potential terms. We set $\tau^{-2} = 1, 5, 10, \dots, 100$ and for each value we performed a CEA with $1000$ starting designs. Figure~\ref{fig:6F15RVariances} depicts variances (in ascending order) under the main effect model and alias matrices for the Bayesian $A_s$- and Bayesian $D_s$-optimal designs, as well as the $A_s$- and $D_s$-optimal designs generated in Section~4.1. The displayed alias matrices only show aliasing of the main effects. The Bayesian $A_s$-optimal design was found for all $15 \leq \tau^{-2} \leq 100$ and had settings $x_{ij} \in \{\pm 1, 0\}$. The Bayesian $A_s$-optimal design for $\tau^{-2} = 10$ had smaller aliasing (as measured by $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$, following \cite{jones2011efficient}) and had non-integer settings. The design is provided in the Supplementary Materials. The Bayesian $D_s$-optimal design shown in Figure~\ref{fig:6F15RVariances} is comprised of $x_{ij}=\pm 1$ and was found for all $20 \leq \tau^{-2} \leq 100$. It was chosen because it minimized $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ among all constructed Bayesian $D_s$-optimal designs. The $D_s$-optimal design estimates all main effects with equal variance, while the $A_s$-optimal design has smaller variances except for $\hat{\beta}_6$. The Bayesian $A_s$-optimal design has both the smallest and largest individual variances. The Bayesian $A_s$- and $D_s$-optimal designs have superior aliasing properties over their non-Bayesian counterparts. The Bayesian $A_s$-optimal design had the smallest $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ of the four designs. This reduced aliasing can in part be attributed to the $0$ coordinates. When $x_{ij}=\pm 1$, a design with an odd number of runs will necessarily have some degree of column correlation. A design having some $x_{ij}=0$ can achieve orthogonality between columns for such $n$ and hence zero elements in the alias matrix. Orthogonality through including $x_{ij}=0$, however, leads to larger variances for the associated main effects.
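For reference, the aliasing summary $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$ used in this subsection can be computed directly; a minimal sketch follows (Python with \texttt{numpy}; it takes the intercept and main effects as primary terms and all two-factor interactions as potential terms):

\begin{verbatim}
# tr(A'A) where A = (F1'F1)^{-1} F1'F2, F1 = (1 | X), and F2 holds all
# two-factor interaction columns of the n x k design X.
import numpy as np
from itertools import combinations

def tr_AtA(X):
    n, k = X.shape
    F1 = np.hstack([np.ones((n, 1)), X])
    F2 = np.column_stack([X[:, j] * X[:, jp]
                          for j, jp in combinations(range(k), 2)])
    A = np.linalg.solve(F1.T @ F1, F1.T @ F2)
    return np.sum(A ** 2)   # tr(A'A) = sum of squared entries
\end{verbatim}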
\begin{figure} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{var6F15R.png}}}$ \end{minipage} \hfill \begin{minipage}[b]{.5\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{Alias_615.png}}}$ \end{minipage} \caption{(Left) Variances under the main effect model for four designs with $n=15$ and $k=6$. (Right) Heatmap of alias matrices in absolute value for the main effects.\label{fig:6F15RVariances}} \end{figure} \subsection{Screening Quadratic Models}\label{s:RSM} Applying the effect principles to a quadratic model leads to partitioning the effects with main effects as primary terms and all two-factor interaction and quadratic effects as potential terms. We assigned different Bayesian priors to the two sets of potential effects, letting $\tau_I^{-2}$ and $\tau_Q^{-2}$ denote the prior precision for the two-factor interaction and quadratic effects, respectively. We constructed Bayesian $A_s$- and $D_s$-optimal designs under $\tau_I^{-2} \in \{1,16\}$ and $\tau_Q^{-2} \in \{0,1,16\}$ using $10,000$ starting designs. For $\tau_Q^{-2}=0$, the quadratic effects become primary terms. We considered $k = 6, 8, 10$ and $n=(2k+1),\dots,(1+k+k^2)$. The minimum $n$ value considered is that for a definitive screening design \citep{jones2011class} and the maximum is a run size that allows estimation of the full model. A definitive screening design (DSD) has $k$ foldover pairs of $\boldsymbol{x}_i$, each comprised of a single zero coordinate and $k-1$ coordinates of $\pm 1$. DSDs have no aliasing of the main effects with the interaction and quadratic terms. For a given design, let $\boldsymbol{F}_M$, $\boldsymbol{F}_I$, and $\boldsymbol{F}_Q$ be the model matrices corresponding to the main effects, interactions, and quadratic effects, respectively. Each design is summarized using three metrics: (1) $\log(A_{M})$, where $A_M$ is the sum of the main effect variances for a fitted main effect model; (2) $\log(SS_Q)$, where $SS_Q$ is the sum of squared off-diagonals of $\boldsymbol{F}_Q^T\boldsymbol{F}_Q$; and (3) $\log(SS_{MI}+1)$, where $SS_{MI}$ is the sum of squared values of $\boldsymbol{F}_M^T\boldsymbol{F}_I$. The metrics $\log(SS_Q)$ and $\log(SS_{MI}+1)$ are surrogates for the information dedicated to quadratic effects and aliasing between main effects and interactions, respectively. Figure~\ref{fig:BayesRSMk6} shows the numerical results for $k=6$ factors; similar conclusions were reached for the $k=8$ and $k=10$ scenarios (see Supplementary Materials). Generally, the Bayesian $A_s$-optimal design's variances under the main effect model were worse than those under the Bayesian $D_s$-optimal design. However, for fixed values of $\tau_Q^{-2}$ and $\tau_{I}^{-2}$, the Bayesian $A_s$-optimal design had comparable or smaller values for $\log(SS_Q)$ and $\log(SS_{MI}+1)$, implying better estimation capacity and aliasing properties for the potential effects. The Bayesian $A_s$-optimal designs for $\tau_Q^{-2}=\tau_I^{-2}=16$ closely resemble the structure of DSDs for $n=13,\dots,20$ with no aliasing between main effects and interactions. The Bayesian $A_s$-optimal designs for $n=13$ and $17$ were a DSD and augmented DSD \citep{JonesNachtsheim2017}, respectively. For $n=14$ and $n=18,19,20$, the Bayesian $A_s$-optimal designs added center runs (i.e., $\boldsymbol{x}_i=0$) to the DSD and augmented DSD, respectively. The Bayesian $A_s$-optimal design for $n=15$ had one center run and 7 pairs of foldover runs, mimicking the DSD structure.
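The three summary metrics can be computed directly from a design; a minimal sketch is given below (Python with \texttt{numpy}; it assumes the main effects are estimable and that $SS_Q>0$ so the logarithm is defined):

\begin{verbatim}
# log(A_M), log(SS_Q), and log(SS_MI + 1) for an n x k design X under
# the quadratic model with an intercept nuisance effect.
import numpy as np
from itertools import combinations

def summary_metrics(X):
    n, k = X.shape
    FM = X
    FI = np.column_stack([X[:, j] * X[:, jp]
                          for j, jp in combinations(range(k), 2)])
    FQ = X ** 2
    Z = np.ones((n, 1))
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    AM = np.trace(np.linalg.inv(FM.T @ (np.eye(n) - PZ) @ FM))
    G = FQ.T @ FQ
    SSQ = np.sum(G ** 2) - np.sum(np.diag(G) ** 2)  # off-diagonals only
    SSMI = np.sum((FM.T @ FI) ** 2)
    return np.log(AM), np.log(SSQ), np.log(SSMI + 1)
\end{verbatim}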
The Bayesian $D_s$-criterion was less likely to produce designs with structures similar to DSDs for the values of $\tau_Q^{-2}$ and $\tau_{I}^{-2}$ we considered. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{BayesRSM6.png} \caption{Performance measures for best Bayesian-$D_s$ and -$A_s$ designs when $k = 6$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_I^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$.\label{fig:BayesRSMk6}} \end{figure} The behavior of the Bayesian $A_s$-optimal designs was influenced by the criterion's implicit emphasis on quadratic effects due to their estimators having larger minimum variances than main effects and interactions. This phenomenon was mentioned for the $A_s$-criterion by \cite{gilmour2012optimum} and discussed thoroughly by \cite{AllenMoyer2021}. To equally emphasize minimizing variances among all effects, both articles recommend an $A_W$-criterion that incorporates the minimum variances. \cite{AllenMoyer2021} refer to this weighted criterion as the standardized $A_W$-criterion. Note, this modification is unnecessary for main effect and interaction models because these effects have the same minimum variance. Extending the weighted approach by \cite{AllenMoyer2021} to the Bayesian $A_s$-criterion requires the introduction of a weight matrix based on the minimum posterior variances for given prior variances. For the quadratic model, this would lead to a diagonal weight matrix with smaller weights assigned to quadratic effects. However, $\tau_Q^{-2}$ also controls the magnitude of the quadratic effects' posterior variances, so manipulating posterior variances for potential terms via weighting can be done equivalently through manipulation of $\tau_Q^{-2}$. Indeed, in Figure~\ref{fig:BayesRSMk6} we see that as $\tau_Q^{-2}$ increases, $\log(A_M)$ decreases and $\log(SS_Q)$ increases, implying less focus on quadratic effects. We would expect the same behavior if we were to assign smaller weights to the quadratic effects. \section{A Blocked Screening Design for Vitamin Photosensitivity}\label{s:block} \cite{GoosJones} discuss a blocked screening experiment performed by a pharmaceutical manufacturer that aimed to determine a combination of vitamins and certain fatty molecules to reduce the vitamins' photosensitivity, thereby increasing the product's shelf life. There were six factors studied corresponding to the presence/absence of riboflavin as well as five fatty molecules. The measuring device required daily recalibration, allowing only four measurements per day. The experiment was therefore broken up across eight days, leading to a study of $k=6$ factors in $b=8$ blocks, each of size $u=4$. The experimenters wanted to be able to estimate all six main effects and 15 two-factor interactions because they were concerned about possible large interactions. Many of the techniques for constructing fractional factorial designs can be employed to create blocked screening experiments but only for certain values of $b$ and $u$. For example, if $n=bu=2^k$ and $b=2^\ell$, we can block all $2^k$ treatment combinations by confounding $2^\ell - 1$ factorial effects with the block effects. All remaining factorial effects are estimated with minimal variance.
If $n=bu=2^{k-m}$ and $b=2^\ell$, then we may block a fractional factorial design based on certain confounding patterns \citep{Bisgaard1994,chen1999,cheng2001,Cheng2002}. However, \cite{Cheng2004} demonstrate that nonregular fractional factorial designs may have superior estimation and variance properties. For the vitamin photosensitivity experiment, a blocked regular fractional factorial will not be able to estimate all two-factor interactions, an important property for their application. \cite{GoosJones} constructed a $D$-optimal blocked design algorithmically that can estimate all main effects and two-factor interaction effects. The experiment had only categorical factors, but we will treat them here as if they were continuous. We constructed blocked designs with the $D_s$-criterion, $A_s$-criterion, and a Bayesian $A_s$-criterion with $\tau_I^{-2}=16$. The block effects were assigned weight $w=10^{-6}$ and $10,000$ starting designs were used. Although three different designs were constructed corresponding to the different criteria, the best design found under the Bayesian $A_s$-criterion, shown in Figure~\ref{fig:BlockDesign}, was optimal across all criteria. Even after increasing the number of starting designs to $100,000$, the CEAs for the $D_s$- and $A_s$-criteria were still unable to identify this design. \begin{figure}[ht] \begin{minipage}[b]{.49\textwidth} \centering \scalebox{0.6}{\begin{tabular}{rrrrrrrrrrrrr} -1 & -1 & -1 & 1 & 1 & -1 & & 1 & -1 & -1 & 1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & -1 & -1 & & -1 & 1 & -1 & -1 & -1 & 1 \\ 1 & -1 & -1 & -1 & -1 & 1 & & -1 & -1 & 1 & 1 & 1 & 1 \\ \\ 1 & -1 & 1 & 1 & -1 & 1 & & 1 & 1 & -1 & 1 & -1 & 1 \\ -1 & -1 & -1 & -1 & 1 & 1 & & -1 & -1 & 1 & 1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & -1 & & -1 & 1 & -1 & -1 & 1 & -1 \\ -1 & 1 & 1 & 1 & 1 & -1 & & 1 & -1 & 1 & -1 & 1 & 1 \\ \\ -1 & 1 & -1 & 1 & -1 & -1 & & -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & -1 & -1 & & 1 & 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & 1 & -1 & 1 & 1 & & 1 & -1 & 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 1 & & -1 & 1 & 1 & 1 & -1 & 1 \\ \\ -1 & 1 & -1 & 1 & 1 & 1 & & 1 & 1 & -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & -1 & 1 & -1 & & 1 & 1 & 1 & -1 & -1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & & -1 & -1 & -1 & 1 & -1 & 1 \\ -1 & -1 & 1 & -1 & -1 & 1 & & -1 & -1 & 1 & -1 & 1 & -1 \\ \end{tabular}} \end{minipage} \begin{minipage}[b]{.4\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=1\textwidth]{block_Lcor.png}}}$ \end{minipage} \caption{(Left) Blocked screening design with 6 factors within 8 blocks of size 4 and (Right) heatmap of $L^TL$. Each group of the block design represents one block with $4$ runs and settings for $6$ factors.\label{fig:BlockDesign}} \end{figure} The optimal design we constructed is more $D$-efficient than the design reported in \cite{GoosJones}, and consists entirely of $\pm 1$ coordinates and so can be used in their application. The design turned out to be a nonregular fractional factorial with generalized resolution $3.75$ and has only 6 words of length $4.5$. \cite{Cheng2004} tabulated designs with similar structure but only for $n=12, 16$, and $20$. Others have looked at minimizing generalized aberration for larger run sizes (see \cite{FANG2007740} and \cite{Schoen2017}), but not in the context of blocking. The main effects of the design are all estimated with optimal variance ($1/32$) and have zero aliasing with the block and interaction effects. Five of the 15 interactions are also estimated with optimal variance.
The remaining interaction effects are partially correlated with the block effects, with 8 of the interactions having a variance of $0.047$ and the other two having a variance of $0.063$. \section{Discussion}\label{s:Discussion} This paper compares different forms of the $D$- and $A$-criterion for constructing screening designs that simultaneously minimize variances and bias of a predetermined set of effects in a linear model. We challenge two commonly held beliefs concerning screening designs: \begin{itemize} \item Algorithmic optimization of the $D$-criterion produces good screening designs for arbitrary $n$. \item When constructing screening designs, one needs only to consider $x_{ij}=\pm 1$ even if $x_{ij}$ is numeric and can take on other values in $[-1,1]$. \end{itemize} \cite{gilmour2012optimum} and \cite{jones2020Aoptimal} have also pointed out failings of the $D_s$-criterion, and we have further clarified these failings. Our investigation of the $D_s$-criterion's CEA shows that many $D_s$-optimal designs can exist for a given scenario, having different variance and bias properties. The superior performance of our continuous CEA in Section~\ref{s:CoordExchange} indicates that even if an $A_s$-optimal design is comprised of only $x_{ij} \in \{\pm 1, 0\}$, the continuous CEA more frequently constructs an $A_s$-optimal design than a discrete CEA. Moreover, we also found some combinations of $n$ and $k$ where the $A_s$-optimal design included non-integer coordinates. Our investigation of Bayesian $A_s$- and Bayesian $D_s$-optimal designs in Sections~\ref{subsec:2fibayes} and \ref{s:RSM} revealed that the Bayesian $A_s$-criterion better balances variance and bias minimization for the prior variance values considered. In Section~5, we found that the optimal design constructed under the Bayesian $A_s$-criterion was also optimal under the $A_s$- and $D_s$-criterion, and that it was better than the designs constructed directly under these two criteria. This is unfortunately a possibility with algorithmic construction and we recommend practitioners generate multiple designs under different design criteria and compare them by inspecting their variances and biases directly, similar to \cite{AllenMoyer2021}. There are many directions of future research we are currently investigating. First, this paper combines weighted and Bayesian criteria, but mainly to ignore the variances of nuisance effects. Section~\ref{s:RSM} hinted at redundancies between weighting potential terms and manipulating their prior precisions, but there may be opportunities for more flexible weighting applied to primary effects. Next, more investigation is needed to compare the Bayesian $A_s$-criterion to more brute-force methods that minimize variance and bias. These methods commonly employ some type of $D$-criterion to measure variance, which could easily be modified to be an $A$-criterion. Following \cite{Li2006}, more investigation is needed on the difference of the optimal designs under the $\mathcal{D}$- and $\mathcal{A}$-criteria when higher-order interactions are considered. Finally, following \cite{gilmour2012optimum} and \cite{JonesGOSSD}, we are currently developing an $A_s$-criterion that includes external variance estimation through replication or fake factors. \section{Supplementary Materials} \subsection{Theorem 1 Proof}\label{A:Thm1prof} Under an $m$-factor interaction model, the elements of $f_1(x_{ij})$ are comprised of $x_{ij}$ and products of $x_{ij}$ with coordinates of the other factors.
Hence $f_1(\tilde{x}) = \tilde{x} \times f_{(1)}(\boldsymbol{x}_{i,-j})$ where $f_{(1)}(\boldsymbol{x}_{i,-j})$ is a vector whose elements depend only on $\boldsymbol{x}_{i,-j}$. For simplicity, we write $f_{(1)}$ instead of $f_{(1)}(\boldsymbol{x}_{i,-j})$. The coordinate update formula, $\Delta_D^{ij}$, for the $D$- and $D_s$-criteria involves the matrix $\boldsymbol{D}=(\boldsymbol{L}^T\boldsymbol{L})^{-1}$, and for the Bayesian versions of these criteria, $\boldsymbol{D}=(\boldsymbol{L}^T\boldsymbol{L}+\tau^{-2}\boldsymbol{K})^{-1}$. Recall $\boldsymbol{V}=(1-v(\boldsymbol{x}_i))\boldsymbol{D}+\boldsymbol{D}l(\boldsymbol{x}_i)l^T(\boldsymbol{x}_i)\boldsymbol{D}$ where $v(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\,l(\boldsymbol{x}_i)$. Partition $\boldsymbol{V}$ so $\boldsymbol{V}_{11}$ corresponds to elements of $f_1(\tilde{x})$ and $\boldsymbol{V}_{22}$ corresponds to $l_2(\boldsymbol{x}_{i,-j})=(f_2(\boldsymbol{x}_{i,-j})^T,\boldsymbol{z}_i^T)^T$. Then \[ \Delta^{ij}_D(\tilde{x}) = \tilde{x}^2f^T_{(1)}\boldsymbol{V}_{11} f_{(1)} + \tilde{x}\boldsymbol{a}^Tf_{(1)} + c \] is a quadratic polynomial where $\boldsymbol{a}=2\boldsymbol{V}_{12}l_2(\boldsymbol{x}_{i,-j})$ and $c =l^T_2(\boldsymbol{x}_{i, -j})\boldsymbol{V}_{22}l_2(\boldsymbol{x}_{i, -j}) + \{1 - v(\boldsymbol{x}_i)\}$. The submatrix $\boldsymbol{V}_{11}$ is positive semidefinite so $f^T_{(1)}\boldsymbol{V}_{11} f_{(1)} \geq 0$, making $\Delta^{ij}_D(\tilde{x})$ a convex quadratic polynomial with respect to $\tilde{x}$. Hence either $\tilde{x}=-1$ or $1$ maximizes $\Delta^{ij}_D(\tilde{x})$ across $\tilde{x} \in [-1,1]$.

\subsection{Corollary 1 Proof}

For the scenario in Theorem 1, if an optimal design includes an $x_{ij} \neq \pm 1$, then Theorem 1 tells us that there exists another $D$-optimal design obtained by exchanging this $x_{ij}$ with either $-1$ or $1$. Denote this equivalent exchange by $\tilde{x}^*$. Theorem~1 tells us that $\Delta_D^{ij}$ is a convex quadratic polynomial and, since $\Delta_D^{ij}(x_{ij})=\Delta_D^{ij}(\tilde{x}^*)=1$, the following properties must hold: \begin{enumerate} \item For all $\tilde{x}$ between $x_{ij}$ and $\tilde{x}^*$, $\Delta_D^{ij}(\tilde{x})\leq 1$. \item For all $\tilde{x}$ not between $x_{ij}$ and $\tilde{x}^*$, $\Delta_D^{ij}(\tilde{x})\geq 1$. \end{enumerate} Since $x_{ij} \neq \pm 1$, case (2) must have $\Delta_D^{ij}(\tilde{x})= 1$ for all such $\tilde{x}$; otherwise we could find an $\tilde{x}$ that would improve over the initial optimal design, a contradiction. Since $\Delta_D^{ij}(\tilde{x})= 1$ for all $\tilde{x}$ in this nonempty interval, $\Delta_D^{ij}$ must be constant across all of $[-1,1]$, meaning all possible exchanges will produce $D$-optimal designs.

\subsection{$A_W$-criterion Coordinate Exchange Formula}\label{A:CEA}

We first derive the row exchange formulas for the weighted $A$-criterion $\text{tr}\{\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}\}$ for a given positive definite matrix $\boldsymbol{W}$. Note that $\boldsymbol{W}=\boldsymbol{I}$ yields the traditional $A$-criterion.
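As a small numerical sketch of this criterion (ours, not the paper's code; the design, its dimensions, and the weight value are assumed for illustration, with the near-zero weight echoing the $w=10^{-6}$ used for block effects in Section 5), the following snippet evaluates $\text{tr}\{\boldsymbol{W}(\boldsymbol{L}^T\boldsymbol{L})^{-1}\}$ and shows that $\boldsymbol{W}=\boldsymbol{I}$ recovers the traditional $A$-criterion, while a tiny weight effectively drops the nuisance variances from the sum.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def a_w(L, W):
    """Weighted A-criterion: tr{W (L'L)^{-1}}."""
    return np.trace(W @ np.linalg.inv(L.T @ L))

# Hypothetical example: 12 runs, 4 primary columns + 2 nuisance columns.
L = rng.choice([-1.0, 1.0], size=(12, 6))
while np.linalg.matrix_rank(L) < 6:       # redraw until L'L is invertible
    L = rng.choice([-1.0, 1.0], size=(12, 6))

W_identity = np.eye(6)                    # W = I: traditional A-criterion
W_s = np.diag([1, 1, 1, 1, 1e-6, 1e-6])   # tiny weight on nuisance effects

print(a_w(L, W_identity))  # sums all six scaled variances
print(a_w(L, W_s))         # effectively sums only the four primary ones
\end{verbatim}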
Define \begin{align} \Delta_{AW}(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})=\text{tr}\{\boldsymbol{W}(\boldsymbol{L}_0^T\boldsymbol{L}_0)^{-1}\} - \text{tr}\{\boldsymbol{W}(\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}\}\ .\ \end{align} Since $\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}}=\boldsymbol{L}_0^T\boldsymbol{L}_0 + \boldsymbol{L}_{01}\boldsymbol{L}_{02}^T$ for $\boldsymbol{L}_{01} = (l(\tilde{\boldsymbol{x}}), -l(\boldsymbol{x}_i))$ and $\boldsymbol{L}_{02} = (l(\tilde{\boldsymbol{x}}), l(\boldsymbol{x}_i))$, it follows from \cite{sherman1950adjustment} that \begin{align} (\widetilde{\boldsymbol{L}}^T\widetilde{\boldsymbol{L}})^{-1}=\boldsymbol{D} - \boldsymbol{D}\boldsymbol{L}_{01}(\boldsymbol{I}+\boldsymbol{L}_{02}^T\boldsymbol{D}\boldsymbol{L}_{01})^{-1}\boldsymbol{L}_{02}^T\boldsymbol{D}\ ,\ \end{align} where $\boldsymbol{D}=(\boldsymbol{L}_{0}^T\boldsymbol{L}_{0})^{-1}$. With $\phi_W(\boldsymbol{x}_i)= l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}l(\boldsymbol{x}_i)$, we arrive at the expression \begin{align*} \Delta_{AW}(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) &= \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\left(\boldsymbol{I} + \boldsymbol{L}^T_{02}\boldsymbol{D}\boldsymbol{L}_{01}\right)^{-1}\boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\begin{pmatrix} 1 + v(\tilde{\boldsymbol{x}})& -v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) \\ v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) & 1 - v(\boldsymbol{x}_{i}) \end{pmatrix}^{-1}\boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \frac{1}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \text{tr}\left\{\boldsymbol{W}\boldsymbol{D}\boldsymbol{L}_{01}\begin{pmatrix} 1 - v(\boldsymbol{x}_{i})& v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) \\ -v(\boldsymbol{x}_i, \tilde{\boldsymbol{x}}) & 1 + v(\tilde{\boldsymbol{x}}) \end{pmatrix} \boldsymbol{L}^T_{02}\boldsymbol{D}\right\} \\ &= \frac{1}{\Delta_D(\boldsymbol{x}_i, \tilde{\boldsymbol{x}})} \{l^T(\tilde{\boldsymbol{x}})\boldsymbol{U}l(\tilde{\boldsymbol{x}}) - \phi_W(\boldsymbol{x}_i) \} \ ,\ \end{align*} where $v(\boldsymbol{x}_i,\tilde{\boldsymbol{x}})=l^T(\boldsymbol{x}_i)\boldsymbol{D}\,l(\tilde{\boldsymbol{x}})$ and $$\boldsymbol{U} = \{1 - v(\boldsymbol{x}_i)\}\boldsymbol{D}\boldsymbol{W}\boldsymbol{D} + \boldsymbol{D} l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D} +\boldsymbol{D} \boldsymbol{W}\boldsymbol{D} l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{D} - \phi_W(\boldsymbol{x}_i)\boldsymbol{D}\ .\ $$ The coordinate exchange formulas follow by straightforward partitioning and permuting.

\subsection{CEA of $A_W$-criterion for $m$-factor interaction model}\label{A:Alvl}

For $x_{ij} \in [-1,1]$, let $q \in \mathbb{R}$ and define \begin{align*} F(q) &= \max_{\tilde{x} \in [-1,1]}\left\{l^T_1(\tilde{x})(\boldsymbol{U}_{11}-q\boldsymbol{V}_{11}) l_1(\tilde{x}) + (\boldsymbol{b}-q\boldsymbol{a})^T l_1(\tilde{x}) + (d - qc)\right\}\\ a(q) &= l^T_{(1)}(\boldsymbol{U}_{11} - q\boldsymbol{V}_{11})l_{(1)}\\ b(q) &= (\boldsymbol{b}^T - q\boldsymbol{a}^T)l_{(1)}\\ c(q) &= d - qc\ .\ \end{align*} For $q_0 = \underset{\tilde{x} \in [-1, 1]}{\max} \Delta^{ij}_{AW}( \tilde{x})$, define \begin{equation*} G(\tilde{x}) = \tilde{x}^2a(q_0) + \tilde{x}b(q_0) + c(q_0).
\end{equation*} Applying the main theorem of \cite{dinkelbach1967nonlinear}, we note that $G(\tilde{x}) \leq 0$ for all $\tilde{x} \in [-1, 1]$, and that $\tilde{x}^*$ optimizes $\Delta^{ij}_{AW}$ if and only if $G(\tilde{x}^*) = 0$. To identify the optima of $\Delta^{ij}_{AW}$, it is sufficient to identify the optima of $G(\tilde{x})$, a quadratic function whose optima clearly depend on the signs of $a(q_0)$ and $b(q_0)$. We enumerate the possible optima here: \begin{enumerate} \item $a(q_0) > 0$: $\tilde{x}^* = -1$ or $1$. \item $a(q_0) = 0$: \begin{enumerate} \item If $b(q_0) \neq 0$, $\tilde{x}^* = -1$ or $1$. \item If $b(q_0) = 0$, any value $\tilde{x}^*$ optimizes $\Delta^{ij}_{AW}$. \end{enumerate} \item $a(q_0) < 0$: \begin{enumerate} \item If $b(q_0) \neq 0$, $\Delta^{ij}_{AW}$ is maximized at $\tilde{x}^* = \frac{-b(q_0)}{2a(q_0)}$ when $|\frac{-b(q_0)}{2a(q_0)}| \leq 1$; otherwise it is maximized at either $\tilde{x}^*=-1$ or $1$. \item If $b(q_0) = 0$, $\tilde{x}^* = 0$. \end{enumerate} \end{enumerate} An example of a non-integer optimum is shown next.

\subsection{Demonstration of continuous optimization of $\Delta^{ij}_{AW}$}

We consider an $n = 8, \ k = 4$ design for a main effect model. We know that a final 8-run, 4-factor design can be constructed by selecting 4 columns from an $8 \times 8$ Hadamard matrix. However, during design construction, non-integer exchanges may be optimal. For example, consider a point in the middle of a continuous CEA where the coordinate $-0.45$ in the first row and third column of the design in Figure~\ref{fig:DeltaLNonIntSwap} is being exchanged with a new optimum. Computing $a(q)$, $b(q)$, and $c(q)$ for the current design, we get \begin{align*} a(q) &= -0.0104 - 0.07q \\ b(q) &= -0.0089 + 0.02q \\ c(q) &= -0.002 - 0.98q \ .\ \end{align*} \noindent Following \cite{dinkelbach1967nonlinear}, let $q_0 = \underset{\tilde{x} \in [-1, 1]}{\max} \Delta^{13}_{AW}(x_{ij}, \tilde{x}, \boldsymbol{x}_{i, -j})$. Start with a guess for $q_0$ and $\tilde{x}$ and compute $F(q_0)$ and $\Delta_{AW}^{13}$ iteratively until they converge. In this example, the true $q_0 \approx 0$, so $a(q_0) < 0$ and $b(q_0) < 0$, meaning the optimum value is at $\frac{-b(q_0)}{2a(q_0)} \approx \frac{0.0089}{2 \times (-0.0104)} \approx -0.43$. \begin{figure}[ht] \begin{minipage}[b]{.45\textwidth} \centering \begin{tabular}{rrrr} -0.10 & -1 &\fbox{-0.43}& -1 \\ 1 & 1 & 0.53 & -1 \\ -1 & -0.25 & 1 & 0.98 \\ 1 & -1 & -0.43 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & 1 \\ -1 & 1 & -1 & 1 \\ -1 & 1 & -1 & -1 \\ \end{tabular} \end{minipage} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=0.95\textwidth, angle = 270]{nonintegerswapex.pdf}}}$ \end{minipage} \caption{(Left) An $n = 8,\ k = 4$ design during a continuous CEA. (Right) The objective function $\Delta_{AW}^{13}$ for any replacement of the coordinate in row $1$, column $3$; exchanging $-0.45$ with the nearly equivalent value $-0.43$ optimizes $\Delta_{AW}^{13}$. \label{fig:DeltaLNonIntSwap}} \end{figure} The same approach holds for a row exchange. Figure~\ref{tab:RowExcEx} shows a 6-run, 4-factor design with non-integer coordinates. This design is $\widetilde{\boldsymbol{X}}_d$ after exchanging the original row $\boldsymbol{x}_4=(1,\ -1, \ -1, \ 0)$ with the row $(1, -1, -1, 0.17)$. None of the $3^4$ integer-only row exchanges yielded an improved value of the objective function.
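To make the iteration concrete, the sketch below (ours, not the authors' code) runs the Dinkelbach scheme on the numerator and denominator quadratics implied by the $a(q)$, $b(q)$, $c(q)$ displayed above, namely $N(\tilde{x}) = -0.0104\tilde{x}^2 - 0.0089\tilde{x} - 0.002$ and $\Delta_D(\tilde{x}) = 0.07\tilde{x}^2 - 0.02\tilde{x} + 0.98$, and recovers $q_0 \approx 0$ with $\tilde{x}^* \approx -0.43$.

\begin{verbatim}
import numpy as np

# Quadratics read off from a(q), b(q), c(q) in the worked example:
N = lambda x: -0.0104 * x**2 - 0.0089 * x - 0.002   # numerator
D = lambda x: 0.07 * x**2 - 0.02 * x + 0.98         # Delta_D > 0 on [-1, 1]

def argmax_quadratic(a, b, c):
    """Maximize a x^2 + b x + c over [-1, 1]: endpoints plus interior vertex."""
    cands = [-1.0, 1.0]
    if a < 0:                         # concave case: vertex is a candidate
        v = -b / (2 * a)
        if -1 <= v <= 1:
            cands.append(v)
    return max(cands, key=lambda x: a * x**2 + b * x + c)

q = 0.0                               # initial guess for q0
for _ in range(50):
    a, b, c = -0.0104 - 0.07*q, -0.0089 + 0.02*q, -0.002 - 0.98*q
    x = argmax_quadratic(a, b, c)     # maximize G(x) = N(x) - q D(x)
    q_new = N(x) / D(x)               # Dinkelbach update
    if abs(q_new - q) < 1e-12:
        break
    q = q_new

print(round(x, 2), q)                 # approx -0.43, with q0 close to 0
\end{verbatim}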
\begin{figure}[h] \begin{minipage}[b]{.45\textwidth} \centering \begin{tabular}{rrrr} -1 & 1 & -1 & -1\\ 0.44 & 1 & 1 & 1\\ -0.64 & -1 & 1 & -1\\ \rowcolor{lightgray} 1 & -1 & -1 & 0.17\\ 1 & 1 & -1 & 0\\ -1 & 0 & -1 & 1\\ \end{tabular} \end{minipage} \begin{minipage}[b]{.45\textwidth} \centering $\vcenter{\hbox{\includegraphics[width=.8\textwidth, angle = 270]{row_A.pdf}}}$ \end{minipage} \caption{Row 4 of the design (left) exchanges $(1, -1, -1, 0)$ with $(1, -1, -1, 0.17)$. (Right) $A$-values of every integer-only row exchange compared with the value at the optimal exchange, represented by the horizontal line. Only the original row has an $A$-value close to the value of the optimal switch ($1.121$ versus $1.122$).\label{tab:RowExcEx}} \end{figure} \clearpage

\subsection{Proof of Corollary 2}

It is easy to show that, in general, $\boldsymbol{V}\, l(\boldsymbol{x}_i)=\boldsymbol{D} \, l(\boldsymbol{x}_i)$ and \[ \boldsymbol{V}\boldsymbol{W}\boldsymbol{D}=(1-v(\boldsymbol{x}_i))\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}+\boldsymbol{V}l(\boldsymbol{x}_i) l^T(\boldsymbol{x}_i)\boldsymbol{V}\boldsymbol{W}\boldsymbol{D}\ .\ \] Then we have the following simplified expression for $\boldsymbol{U}$: \[ \boldsymbol{U} = \boldsymbol{V}\boldsymbol{W}\boldsymbol{D} + \boldsymbol{D}\boldsymbol{W}\boldsymbol{V} \, l(\boldsymbol{x}_i)\, l^T(\boldsymbol{x}_i) \boldsymbol{V}-\phi_W(\boldsymbol{x}_i) \boldsymbol{D}\ .\ \] For the situation described in Corollary 1, the coordinate update for each criterion among the $\mathcal{A}$-criteria involves some $\Delta_D^{ij}$ in the denominator. If this $\Delta_D^{ij}$ is constant, then $f^T_{(1)}\boldsymbol{V}_{11} f_{(1)}=0$, and since $\boldsymbol{V}$ is positive semidefinite, $f_{(1)}$ must be a null eigenvector of $\boldsymbol{V}_{11}$. Moreover, $f_{(1)}^T\boldsymbol{V}_{12}=f_{(1)}^T\boldsymbol{V}_{11}\boldsymbol{V}_{11}^-\boldsymbol{V}_{12}=0$. Then \[ \boldsymbol{V}\, l(\tilde{\boldsymbol{x}})=\boldsymbol{V}\begin{pmatrix} \tilde{x} f_{(1)}\\ l_2(\boldsymbol{x}_{i,-j}) \end{pmatrix}=\begin{pmatrix}\boldsymbol{V}_{12}\, l_2(\boldsymbol{x}_{i,-j})\\\boldsymbol{V}_{22}\, l_2(\boldsymbol{x}_{i,-j}) \end{pmatrix}\ ,\ \] which does not involve $\tilde{x}$. This and the expression for $\boldsymbol{U}$ show that the coefficient of $\tilde{x}^2$ in $\Delta_{AW}^{ij}$ equals $-\phi_W(\boldsymbol{x}_i)f_{(1)}^T\boldsymbol{D}_{11} f_{(1)}$, where $\boldsymbol{D}_{11}$ is the relevant partition of $\boldsymbol{D}$, a positive definite matrix. The coefficient equals 0 if and only if either $\phi_W(\boldsymbol{x}_i)=0$ or $f_{(1)}$ is the all-zero vector. But neither can occur: the main effect coordinate is always included in the model, so $f_{(1)} \neq 0$, and $\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}$ is positive definite for $w>0$, so $\phi_W(\boldsymbol{x}_i)>0$. Hence the coefficient must be negative, and so $\Delta^{ij}_{AW}$ is a concave quadratic polynomial having a unique maximum in $[-1,1]$. When adjusting for nuisance effects, we want to consider $\Delta_{A_s}^{ij}=\lim_{w \to 0} \Delta_{AW}^{ij}$. Hence if $\lim_{w\to 0} \phi_W(\boldsymbol{x}_i)>0$, the quadratic coefficient will again be negative, since $f_{(1)}^T\boldsymbol{D}_{11} f_{(1)}$ does not depend on $w$. Now $\phi_W(\boldsymbol{x}_i)=l^T(\boldsymbol{x}_i)\boldsymbol{D}\boldsymbol{W}\boldsymbol{D}l(\boldsymbol{x}_i)$ is a quadratic form in a symmetric matrix, so $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)=0$ if and only if $\lim_{w\to0}\boldsymbol{W}^{1/2}\boldsymbol{D}l(\boldsymbol{x}_i)=0$.
Partitioning $\boldsymbol{D}$ according to matrices $\boldsymbol{F}_0$ and $\boldsymbol{Z}$ gives the expression \begin{align*} \lim_{w\to0}\boldsymbol{W}^{1/2}\boldsymbol{D}l(\boldsymbol{x}_i) = \lim_{w\to0} \begin{pmatrix} \boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i\\ \sqrt{w} \boldsymbol{D}_{ZF} f(\boldsymbol{x}_i) + \sqrt{w}\boldsymbol{D}_{Z} \boldsymbol{z}_i\\ \end{pmatrix}= \begin{pmatrix} \boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i\\ 0\\ \end{pmatrix}\ .\ \end{align*} So $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)=0$ if and only if $\boldsymbol{D}_F f(\boldsymbol{x}_i) + \boldsymbol{D}_{FZ} \boldsymbol{z}_i=0$, or \[ f(\boldsymbol{x}_i)=-\boldsymbol{D}_F^{-1}\boldsymbol{D}_{FZ} \boldsymbol{z}_i\ .\ \] For both the $A_W$- and Bayesian $A_W$-criterion, $-\boldsymbol{D}_F^{-1}\boldsymbol{D}_{FZ}\boldsymbol{z}_i=\boldsymbol{F}_0^T\boldsymbol{Z}(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_i$, which implies \begin{align*} f(\boldsymbol{x}_i)&=\sum_{i'=1}^n f(\boldsymbol{x}_{i'})p_{z,i'i}\\ &=\frac{1}{1-p_{z,ii}}\sum_{i' \neq i} f(\boldsymbol{x}_{i'})p_{z,i'i}\ ,\ \end{align*} where $p_{z,i'i}=\boldsymbol{z}_{i'}^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_{i}$ are elements of the $i$-th column of the projection matrix $\boldsymbol{P}_Z=\boldsymbol{Z}(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{Z}^T$. For $p_{z,ii}=\boldsymbol{z}_{i}^T(\boldsymbol{Z}^T\boldsymbol{Z})^{-1}\boldsymbol{z}_{i}<1$, it also holds that \[ \boldsymbol{z}_i =\frac{1}{1-p_{z,ii}}\sum_{i' \neq i} \boldsymbol{z}_{i'}p_{z,i'i}\ .\ \] Finally, this implies $l(\boldsymbol{x}_i)$ can be written as a linear combination of the other $n-1$ rows of $\boldsymbol{L}_0$ with coefficients $p_{z,i'i}/(1-p_{z,ii})$. But a constant $\Delta_D^{ij}$ implies $v(\boldsymbol{x}_i)=1$, meaning $l(\boldsymbol{x}_i)$ cannot be written as a linear combination of the other $n-1$ rows. Therefore $\lim_{w \to 0} \phi_W(\boldsymbol{x}_i)>0$ and $\Delta_{A_s}^{ij}$ is a concave quadratic polynomial.

\subsection{Section 4.2 Designs}

The $A_s$- and $D_s$-optimal designs for a main effect model with $n = 15, \ k = 6$ are shown in Table~\ref{tab:6F15R}. They do not account for the potential two-factor interaction effects, and thus lead to worse aliasing than the Bayesian $A_s$- and Bayesian $D_s$-optimal designs found in Table~\ref{tab:Bayes6F15R}. A Bayesian $A_s$-optimal design with $\tau^{-2} = 10$, shown in Table~\ref{tab:BayesAnonInt}, was found to further reduce aliasing (as measured by $\text{tr}(\boldsymbol{A}^T\boldsymbol{A})$) relative to the Bayesian $A_s$-optimal design in Table~\ref{tab:Bayes6F15R}.
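As a sketch of how this aliasing measure can be computed (our illustration; it assumes the standard alias matrix $\boldsymbol{A}=(\boldsymbol{F}^T\boldsymbol{F})^{-1}\boldsymbol{F}^T\boldsymbol{F}_2$, with $\boldsymbol{F}$ the intercept-plus-main-effects matrix and $\boldsymbol{F}_2$ the two-factor interaction columns):

\begin{verbatim}
import numpy as np
from itertools import combinations

def alias_trace(X):
    """tr(A'A) for the alias matrix A = (F'F)^{-1} F'F2, where F holds the
    intercept and main effects and F2 all two-factor interactions of X."""
    n, k = X.shape
    F = np.hstack([np.ones((n, 1)), X])
    F2 = np.column_stack([X[:, i] * X[:, j]
                          for i, j in combinations(range(k), 2)])
    A = np.linalg.solve(F.T @ F, F.T @ F2)
    return np.trace(A.T @ A)

# Compare two hypothetical 15-run, 6-factor designs; smaller is less aliased.
rng = np.random.default_rng(0)
X1 = rng.choice([-1.0, 1.0], size=(15, 6))
X2 = rng.choice([-1.0, 1.0], size=(15, 6))
print(alias_trace(X1), alias_trace(X2))
\end{verbatim}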
\begin{table}[h] \centering \caption{$n = 15, \ k = 6$ $A_s$- and $D_s$-optimal designs for main effect models.\label{tab:6F15R}} \begin{tabular}{rrrrrr rr rrrrrr} \\ \multicolumn{6}{c}{$A_s$-optimal} & & & \multicolumn{6}{c}{$D_s$-optimal} \\ -1 & 1 & 1 & -1 & 1 & 1 && & 1 & -1 & 1 & 1 & -1 & -1\\ 1 & 1 & -1 & -1 & -1 & 1 && & 1 & 1 & 1 & -1 & -1 & 1 \\ 1 & -1 & -1 & 1 & 1 & 1 & && 1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & 1 & -1 & && -1 & -1 & 1 & 1 & -1 & 1 \\ -1 & -1 & 1 & -1 & 1 & -1 & && -1 & 1 & -1 & 1 & -1 & -1 \\ -1 & -1 & -1 & 1 & 1 & -1 & && 1 & -1 & -1 & 1 & 1 & 1 \\ 1 & -1 & 1 & 1 & -1 & -1 & && -1 & 1 & 1 & 1 & 1 & -1 \\ 1 & 1 & -1 & 1 & -1 & -1 & && 1 & 1 & -1 & 1 & -1 & -1 \\ -1 & 1 & 1 & 1 & -1& 0 & &&-1 & 1 & 1 & -1 & 1 & -1 \\ 1 & 1 & 1 & -1 & 1 & -1 & &&1 & -1 & 1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 &&& 1 & 1 & -1 & -1 & 1 & 1 \\ 1 & -1 & 1 & -1 & -1 & 1 &&& -1 & -1 & -1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 & -1 & -1 & && -1 & -1 & -1 & -1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 & 1 &&& -1 & -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & 1 & 1 & 1 &&& -1 & 1 & -1 & -1 & -1 & 1 \\ \end{tabular} \end{table} \begin{table}[ht] \caption{$n = 15, \ k = 6$ Bayesian $A_s$- and $D_s$-optimal designs for main effect models with potential two-factor interaction effects. The Bayesian-$A_s$ comes from $15 \leq \tau^{-2} < 100$ while the Bayesian-$D_s$ has $20 \leq \tau^{-2} \leq 100$.}\label{tab:Bayes6F15R} \centering \begin{tabular}{rrrrrr rr rrrrrr} \multicolumn{6}{c}{\text{Bayesian $A_s$-optimal}} &&& \multicolumn{6}{c}{\text{Bayesian $D_s$-optimal}} \\ -1 & 1 & 1 & -1 & 0 & 0 & && 1 & 1 & 1 & 1 & 1 & -1 \\ -1 & -1 & 1 & 1 & 1 & 1&&& 1 & -1 & 1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & 1 & 1 &&& -1 & 1 & 1 & -1 & 1 & 1 \\ 1 & -1 & -1 & 1 & -1 & 1 &&& -1 & -1 & 1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & -1 & -1 &&& 1 & -1 & -1 & 1 & 1 & -1 \\ -1 & -1 & -1 & -1 & 1 & -1 &&& -1 & -1 & -1 & -1 & -1 & 1 \\ 1 & -1 & -1 & 1 & 1 & -1 &&& -1 & -1 &1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 & 1 &&& -1 & 1 & -1 & 1 & -1 & -1 \\ 1 & -1 & 1 & -1 & 1 & 1& && 1 &1 & -1 & 1 & 1 & 1 \\ -1 & -1 & 1 & 1 & -1 & -1 &&& -1 & -1 &-1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -1 & 1 &&& 1 & 1 &-1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & -1 &&& 1 & 1 &1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & -1 & -1 &&& 1 & -1 & 1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & 1 & 1 &&& -1 &1& -1 & -1 & 1 & -1 \\ 1 & 1 & 1 & 1 & 1 & -1 &&& -1 & 1& 1 & 1 & -1 & 1 \\ \end{tabular} \end{table} \begin{table}[ht] \caption{$n = 15, \ k = 6$ Bayesian $A_s$-optimal design for main effect models with potential two-factor interaction effects with $\tau^{-2} = 10$. \label{tab:BayesAnonInt}} \centering \begin{tabular}{rrrrrr} \multicolumn{6}{c}{\text{Bayesian-}$A_s$} \\ 1 & 1 & -1 & -1 & 1 & 1 \\ -1 & 1 & -1 & 1 & -1 & 1 \\ 1 & -1 & -0.62 & 0.62 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & -1 & 1 & -1 & 0.64 & -0.64 \\ 1 & 1 & 1 & 1 & 1 & 1 \\ -1 & -1 & 1 & 1 & -1 & -1 \\ -0.60 & 0.60 & 1 & -1 & -1 & 1 \\ -1 & 1 & 1 & -1 & 1 & -1 \\ -1 & -1 & -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 \\ \end{tabular} \end{table} \clearpage \subsection{Section 4.3 Designs and Results for $k=8, 10$} Results for factors $k = 8$ and $k = 10$ are fairly analogous to the $k = 6$ case discussed in Section~4.3 of the main paper. 
The results, found in Figures~\ref{fig:BayesRSMk8} and \ref{fig:BayesRSMk10}, continue to indicate that the Bayesian $A_s$-optimal design prioritizes estimation of the quadratic effects, while the Bayesian $D_s$-optimal design does not. For $\tau_Q^{-2} = \tau_I^{-2} = 16$, the Bayesian $A_s$-criterion for both $k = 8$ and $k = 10$ produced even more designs with $SS_{MI} = 0$, while the Bayesian $D_s$-criterion found no such designs for $k = 8$, and few for $k = 10$ relative to the Bayesian $A_s$. This reinforces the earlier conclusion that the Bayesian $A_s$-criterion produces designs with less aliasing than the Bayesian $D_s$-criterion. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{BayesRSM8.png} \caption{Performance measures for the Bayesian $D_s$-optimal and Bayesian $A_s$-optimal designs when $k = 8$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_{I}^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$. }\label{fig:BayesRSMk8} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{BayesRSM10.png} \caption{Performance measures for the Bayesian $D_s$-optimal and Bayesian $A_s$-optimal designs when $k = 10$ found with $\tau_Q^{-2} \in \{0,1,16\}$ and $\tau_{I}^{-2} \in \{1,16\}$. (Left) The $A_s$-criterion for the main effect model on the log scale. (Middle) The sum of squares of the off-diagonals for the quadratic terms on the log scale. (Right) The sum of squares of the cross products of the main effects and interactions on the log scale with offset $1$. }\label{fig:BayesRSMk10} \end{figure} \clearpage \bibliographystyle{asa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{document}

\title{Nuclear spin ferromagnetic phase transition in an interacting $2D$ electron gas}

\author{Pascal Simon$^{1,2}$ and Daniel Loss$^1$} \affiliation{$^{1}$ Department of Physics and Astronomy, University of Basel, Klingelbergstrasse 82, CH-4056 Basel, Switzerland} \affiliation{$^{2}$ Laboratoire de Physique et Mod\'elisation des Milieux Condens\'es, CNRS and Universit\'e Joseph Fourier, BP 166, 38042 Grenoble, France} \date{\today}

\begin{abstract} Electrons in a two-dimensional semiconducting heterostructure interact with nuclear spins via the hyperfine interaction. Using a Kondo lattice formulation of the electron-nuclear spin interaction, we show that the nuclear spin system within an interacting two-dimensional electron gas undergoes a ferromagnetic phase transition at finite temperatures. We find that electron-electron interactions and non-Fermi liquid behavior substantially enhance the nuclear spin Curie temperature into the $mK$ range with decreasing electron density. \end{abstract}

\pacs{71.10.Ay,71.10.Ca,71.70.Gm} \maketitle

The use of the electron spin as a qubit for quantum computing relies on the ability to coherently control single electron spins in semiconductor quantum dots \cite{loss:1998}. Over the last few years, much progress has been made for dots in GaAs semiconductors, where single spin lifetimes have been measured to range far into the $\mathrm{ms}$-range \cite{kroutvar:2004a,elzerman:2004,amasha:2006a}, and where coherent manipulation of single- and two-spin states was successfully implemented \cite{petta:2005, koppens:2006}. Still, a major obstacle to further progress is the comparatively short spin decoherence time in these materials, ranging from $100\,\mathrm{ns}$ in bulk \cite{kikkawa:1998} to $\mu s$ in dots \cite{petta:2005}. The main source of decoherence for a single electron spin confined to a GaAs dot comes from the contact hyperfine interaction with the surrounding nuclear spins \cite{burkard:1999a,khaetskii:2003,coish:2004a}.
Several ways to overcome this problem have been proposed, such as spin echo techniques \cite{coish:2004a,petta:2005}, projection of the nuclear spin state \cite{coish:2004a}, or polarization of the nuclear spins \cite{burkard:1999a,khaetskii:2003,coish:2004a,imamoglu:2003}. However, in order to extend the spin decay time by one order of magnitude, a polarization of above 99\% is required \cite{coish:2004a}, which is still far away from the $60\%$ reached so far in quantum dots via optical pumping \cite{bracker:2005a}. One way to overcome this problem would be for the nuclear spins to become fully polarized at low enough temperatures, without any external magnetic field or optical pumping. This is the case if the nuclear spins undergo a ferromagnetic phase transition at a finite Curie temperature $T_c$. Quite remarkably, the possibility of such a nuclear-spin phase transition occurring in a metal was studied more than sixty years ago by Fr\"ohlich and Nabarro (FN) \cite{FN:1940}. Using a Weiss mean field argument, they showed that the Curie temperature $T_c$ of nuclear spins in a three dimensional ($3D$) metal becomes \begin{equation} \label{eq:FN} k_BT_c\sim \frac{A^2}{8E_F}\, , \end{equation} where $A$ denotes the hyperfine coupling strength between the nuclear and electron spin and $E_F$ the Fermi energy. For a typical metal, $T_c$ is of the order of micro-Kelvin or less. However, for a two-dimensional electron gas (2DEG) in GaAs semiconductors, Eq. (\ref{eq:FN}) would predict nuclear ferromagnetism with $T_c \sim 1~\mathrm{mK}$, which is surprisingly high. However, the direct use of Eq. (\ref{eq:FN}), which was derived for a bulk metal, for a 2DEG in a semiconductor is very problematic. The purpose of this letter, therefore, is to reconsider this issue for a 2DEG and to estimate the nuclear spin Curie temperature. Our analysis below will be based on the Kondo lattice model \cite{sigrist:1997}, where we integrate out the electron degrees of freedom to derive an effective spin Hamiltonian whose exchange is given in terms of the static electronic spin susceptibility $\chi_{s}(q)$. Using a spin-wave analysis, we will show that the electron-electron (e-e) interactions in the 2DEG and the induced non-Fermi liquid behavior in $\chi_{s}(q)$ \cite{belitz:1997,maslov:2003,maslov:2006,efetov:2006} ultimately enable a ferromagnetic phase transition of the nuclear spins. For sufficiently strong interactions and/or low electronic densities (with the dimensionless interaction parameter $r_{s} \sim 5-10$), the Curie temperature can be pushed into the milli-Kelvin regime, and thus the phase transition should become accessible experimentally. {\em Model Hamiltonian}. In order to study an interacting 2DEG coupled to nuclear spins within the 2DEG, we adopt a tight-binding representation in which each lattice site contains a single nuclear spin and electrons can hop between neighboring sites. The Hamiltonian describing such a system reads \begin{equation}\label{eq:kl} H=H_0+\frac{1}{2}\sum\limits_{j=1}^N A_j c^\dag_{j\alpha}\vec \sigma_{\alpha\beta} c_{j\beta}\cdot \vec I_j=H_0+H_n, \end{equation} where $H_0$ denotes the conduction electron Hamiltonian and $H_n$ the electron-nuclear spin hyperfine interaction. $H_0$ can be rather general and includes e-e interactions. In Eq. (\ref{eq:kl}), $c^\dag_{j\alpha}$ creates an electron at the lattice site $\vec r_j$ with spin $\alpha$, and $\vec \sigma$ denotes the vector of Pauli matrices.
We have also introduced $\vec I_j$, the nuclear spin located at the lattice site $\vec r_j$, and $A_j$, the hyperfine coupling constant between the electron and nuclear spins at site $\vec r_j$. The electron spin operator is defined by $\vec S_j= \frac{1}{2}c^\dag_{j\alpha}\vec \sigma_{\alpha\beta}c_{j\beta}$. $N$ denotes the total number of sites on the $2D$ lattice. In our formulation, the nuclear spin density is $n_s=a^{-2}$, where $a$ is the lattice spacing. From here on, we assume $A_j=A>0$, i.e., the hyperfine interaction is the same for all atoms that constitute the heterostructure (typically Ga and As). We also neglect direct dipolar interactions between the nuclear spins, which are in general smaller than the indirect interaction, as we will see. This amounts to assuming that the dipolar interaction energy scale $E_{dip}$ is among the smallest ones, and in particular that $k_BT\gg E_{dip}$, where $T$ is the temperature. This assumption is crucial since it allows us to focus on the nuclear spins which are within the $2D$ electron gas thickness (in the growth direction) and justifies our $2D$ description \cite{note:3d}. The general Hamiltonian in Eq. (\ref{eq:kl}) is the well-known $2D$ Kondo lattice Hamiltonian (KLH), though here $H_0$ also contains e-e interactions. The regime we are interested in corresponds to the weak Kondo coupling regime in the sense that $A\ll E_F$, where $E_F$ is the Fermi energy. The KLH has been introduced to describe various physical properties of heavy-fermion materials \cite{lee:1986,sigrist:1997}, and more recently also of ferromagnetic semiconductors \cite{fs}. Before turning to the extended system, let us briefly consider the special case of a single electron confined to a quantum dot, which typically interacts with $10^6$ nuclear spins \cite{khaetskii:2003,coish:2004a}. This case can be described by the above KLH by allowing $H_0$ to include a confinement potential for the dot, which provides the largest energy scale. Indeed, we can then project $H_n$ onto the ground state of $H_0$, and the hyperfine Hamiltonian then takes the known central spin form $H=\sum_i \widetilde A_i\vec S_e\cdot \vec I_i$ \cite{khaetskii:2003,coish:2004a}, where $\vec S_e$ is the single electron spin, and $\widetilde A_i=A|\psi(\vec r_i)|^2$ the non-uniform coupling constant with $\psi(\vec r_i)$ the electronic ground state wave function at site $\vec r_i$. The reformulation of the central spin problem in terms of the KLH should be particularly useful for numerical evaluations. To continue with the general case, it is convenient to go to Fourier space and rewrite $H_n$ in Eq. (\ref{eq:kl}) as $H_n=\frac{A}{N}\sum_{\vec q} \vec S_{\vec q}\cdot\vec I_{\vec q}$, where $\vec I_{\vec q}=\sum_j e^{-i \vec q\cdot\vec r_j}\vec I_j$ is the Fourier transform of $\vec I_j$, and similarly for $\vec S_{\vec q}$. Since $A$ is a small energy scale in our case, we may perform a Schrieffer-Wolff (SW) transformation in order to eliminate terms linear in $A$, and thereby integrate out the electronic degrees of freedom. Keeping the lowest order terms in $A^2$ of the SW transformation, we are left with an effective Hamiltonian $ H_{eff}=H_0-\frac{1}{2}[S,[S,H_0]]$. Here $S$ is defined by $H_n+[S,H_0]=0$, which is solved by $S=L_0^{-1}H_n$, where $L_0$ is the Liouvillian. Let us define $U=\frac{1}{2}[S,[S,H_0]]$, which can be rewritten as $U=\frac{1}{2}[L_0^{-1}H_n,H_n]$.
Using an integral representation for $L_0$, one obtains $ U=-\frac{i}{2}\int_0^{\infty}dt e^{-\eta t} [H_n(t),H_n], $ where $\eta \to 0^+$ ensures convergence. We next take the equilibrium expectation value over the electronic degrees of freedom, denoted by $\langle\dots\rangle$. The only assumptions we make are $\langle S_i^x\rangle=\langle S_i^y\rangle=0$, and translational invariance in the 2DEG. We then get \begin{equation}\label{eq:ueff} \langle U\rangle=\frac{A^2}{8n_s}\sum\limits_{\vec q} I_{\vec q}^\alpha~ \chi_{\alpha \beta}( q) ~I^{\beta}_{-\vec q}~, \end{equation} where $ \chi_{\alpha\beta}( q)=-i\int_0^\infty dt~ e^{-\eta t}\langle[ S_{\vec q}^\alpha,S_{-\vec q}^\beta]\rangle, $ and where summation over the spin components $\alpha,\beta=x,y,z$ is implied. If we also assume $\langle S^z_i\rangle=0$, then $\chi_{\alpha \beta}(q)=\delta_{\alpha \beta} \chi_{s}(q)$, where $\chi_{s}(q)$ is the electronic spin susceptibility in the static limit. We stress that Eq. (\ref{eq:ueff}) is rather general and requires only weak assumptions on $H_0$. In real space we have $\langle U\rangle=-\frac{1}{2}\sum_{\vec r,\vec r'} J_{\vec r-\vec r'}^{\alpha\beta} I_{\vec r}^\alpha I_{\vec r'}^\beta$, where $ J_{\vec r}^{\alpha\beta}=-({A^2}/{4n_s})\chi_{\alpha\beta}(\vec r)$ is the effective exchange coupling. The nuclear spins $\vec {I}_{\vec r}$ therefore interact with each other, with the interaction mediated by the conduction electrons. This is nothing but the standard Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, which, as we shall see, can be substantially modified by e-e interactions compared to the free electron case. Let us first analyze the case of non-interacting electrons. In this case, $\chi_{s}$ coincides with the usual density-density response (Lindhard) function $\chi_{0}$ \cite{GV}. We first perform a mean field analysis. The Weiss mean field theory predicts a Curie temperature \begin{equation}\label{eq:mf} T_c=-\frac{I(I+1)}{3k_B}\frac{A^2}{4n_s}\chi_0(q=0), \end{equation} where $I$ is the nuclear spin value. In $2D$, $\chi_0(q=0)=-N_e= -m^*/\pi$, where $N_e=n_e/E_F$ is the electronic density of states, and $m^*$ is the effective electron mass in a 2DEG (we set $\hbar=1$). For a $3D$ bulk metal with one conduction electron per nucleus, the ratio $n_e/n_s\sim 1$ and we recover the result in Eq. (\ref{eq:FN}) derived more than sixty years ago by Fr\"ohlich and Nabarro \cite{FN:1940}. For a $2D$ metal, the Weiss mean field theory predicts $k_BT_c=I(I+1)A^2/12E_F$. For a $2D$ semiconductor, however, the ratio $n_e/n_s$ is much smaller than $1$. With typical values for GaAs heterostructures, $I=3/2$, $A\sim 90~\mu eV$ and $a\sim$ 2\AA \cite{coish:2004a}, we estimate $T_c\sim 1~\mu K$, which is very low. (For such low $T_{c}$'s, ignoring nuclear dipole-dipole interactions from the start would not be valid.) However, this estimate is based on the simplest mean field theory and, moreover, does not include the effect of e-e interactions. We shall now go beyond the above mean field approximation. For this we assume that the ordering (if it takes place) leads to a ferromagnetic phase where the collective low-energy excitations are given by spin waves. Then, we define the Curie temperature $T_{c}$ as the temperature at which the magnetic order is destroyed by those spin waves. This procedure is equivalent to the Tyablikov decoupling scheme \cite{Tyablikov}.
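As a quick numerical check of the mean-field estimate quoted above (a sketch; the GaAs effective mass $m^*=0.067\,m_e$ is a standard value assumed here, while the other parameters are those quoted in the text):

\begin{verbatim}
import numpy as np

# Physical constants (SI units)
hbar, kB, me, eV = 1.0546e-34, 1.3807e-23, 9.1094e-31, 1.6022e-19

I = 1.5                # nuclear spin I = 3/2
A = 90e-6 * eV         # hyperfine coupling ~ 90 micro-eV
a = 2e-10              # lattice spacing ~ 2 Angstrom, so n_s = a^{-2}
mstar = 0.067 * me     # assumed GaAs effective mass

# Mean-field formula: k_B T_c = [I(I+1)/3] (A^2/4) a^2 N_e,
# with the 2D density of states N_e = m*/(pi hbar^2).
Ne = mstar / (np.pi * hbar**2)
Tc = I * (I + 1) / 3 * A**2 / 4 * a**2 * Ne / kB
print(f"mean-field Tc ~ {Tc * 1e6:.2f} micro-K")  # ~0.3 micro-K, consistent
                                                  # with the ~1 micro-K scale
\end{verbatim}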
The dispersion relation of the spin waves (magnons) reads \begin{equation} \omega_q=I(J_0-J_{q})=I \frac{A^2}{4}a^2(\chi_{s}({q})-\chi_{s}(0)), \end{equation} where $J_{q}$ is the Fourier transform of $J_{\vec r}$. The magnetization $m$ per site at finite $T$ is $m(T)=I-\frac{1}{N}\sum_{\vec q} n_q$, where $n_{q}=(e^{\omega_q/k_{B}T}-1)^{-1}$ is the magnon occupation number. The Curie temperature $T_c$ then follows from the vanishing of the magnetization, i.e. $m(T_c)=0$, which, in the continuum limit, becomes \begin{equation}\label{eq:tc} 1=\frac{a^2}{I}\int \frac{d\vec q}{(2\pi)^2} \frac{1}{e^{ \omega_q/k_{B}T_{c}}-1}. \end{equation} For non-interacting electrons in $2D$, $\chi_{s}(q)-\chi_{s}(0)=0$ for $q< 2k_F$ \cite{GV}, where $k_F$ is the Fermi wave vector. The spin wave analysis therefore predicts $T_c=0$, in agreement with a recent conjecture extending the Mermin-Wagner theorem (MWT) to RKKY interactions in a non-interacting 2D system \cite{bruno:2001}. The study of thermodynamic quantities in {\em interacting} electron liquids, especially in 2D, has attracted quite some interest recently, with the goal of finding deviations from the standard Landau-Fermi liquid behavior, such as non-analytic dependences on the wave vector \cite{belitz:1997,maslov:2003,maslov:2006,efetov:2006}. In particular, it was found \cite{maslov:2003} that the static non-uniform spin susceptibility $\chi_s(q)$ depends {\em linearly} on the wave vector $q=|\vec q|$ for $q\to 0$ in 2D (while it is $q^2$ in 3D). This non-analyticity arises from the long-range correlation between quasiparticles mediated by virtual particle-hole pairs. Since the integral in Eq. (\ref{eq:tc}) is dominated by the low-$q$ behavior, one may replace $\omega_q$ by its low-$q$ limit, which turns out to be linear in $q$ (see below) \cite{dispersion}. The integral in Eq. (\ref{eq:tc}) can then be performed easily, allowing us to express $T_c$ in terms of the derivative of the spin susceptibility, \begin{equation} \label{eq:tcsw} T_c=\frac{A^2 I}{2k_B}\sqrt{ \frac{3I}{\pi n_s}}\left. \frac{\partial \chi_s(q)}{\partial q}\right|_{q\to 0}. \end{equation} For non-interacting electrons, $\delta\chi_s(q)=0$ at low $q$ and we recover $T_c=0$, in accordance with the MWT. Let us now include e-e interactions. To calculate $\chi_s(q)$, we start from the Bethe-Salpeter (BS) equation for the two-body scattering amplitude \cite{GV}. Solving the BS equation formally, we can derive an exact and closed expression for the spin susceptibility given by \begin{equation}\label{eq:chis} \chi_s(\bar q)=\frac{1}{L^{2D}}\sum\limits_{\bar p,\bar p'}\left( R(\bar q)\frac{1}{1-\Gamma^-_{ir}(\bar q)R(\bar q)}\right)_{\bar p\bar p'}\, , \end{equation} where $L=\sqrt {Na^2}$ is the system length, $ (\Gamma_{ir}^-)_{\bar p\bar p'}(\bar q)$ the exact irreducible electron-hole scattering amplitude in the spin channel (see \cite{GV}), and $R_{\bar p}(\bar q)= -2i G(\bar p+\bar q/2)G(\bar p-\bar q/2)$ the electron-hole bubble, where $G(\bar p)$ is the exact propagator and $\bar p\equiv (p_0,\vec p)$ is the (D+1)-momentum with $p_0$ the frequency. We have used a matrix notation in Eq. (\ref{eq:chis}) where the indices run over $\bar p$ ($R$ is a diagonal matrix). Unfortunately, $\Gamma^-_{ir}$ cannot be calculated exactly and some drastic approximations are required.
The approximation we use consists in replacing the exact irreducible electron-hole scattering amplitude $(\Gamma^-_{ir})_{\bar p,\bar p'}$ by an averaged value calculated with respect to all possible values of $p$ and $p'$ near the Fermi surface; we therefore assume $(\Gamma^-_{ir})_{\bar p,\bar p'}=\Gamma^-_{ir}(\bar q)~\forall~p,p'$ \cite{note:gamma}. Let us now put $q_0=0$ (and suppress the $q_0$-argument from here on) and consider a $q$-independent short-ranged (screened) interaction potential, yielding $\Gamma^-_{ir}(\bar q)=-U$. This allows us to derive from Eq. (\ref{eq:chis}) a simple formula for $\partial\chi_s /{\partial q}$, given by \begin{equation}\label{eq:deltachis} \frac{\partial\chi_s}{\partial q}(q) =\frac{\partial\Pi(q)}{\partial q}\frac{1}{(1+U\Pi(q))^2}, \end{equation} where $\Pi(q)=\sum_{\bar p} R_{\bar p}(q)/L^{D}$. In the $q\to 0$ limit, one can approximate the term $\Pi(q)$ in the denominator of Eq. (\ref{eq:deltachis}) by $\chi_0(0)=-N_e$. The resulting factor $1/(1-UN_e)^2$ in Eq. (\ref{eq:deltachis}) can be interpreted as a type of random phase approximation (RPA) for the electron-hole scattering amplitude \cite{wolff:1960}. The corrections to the polarization bubble $\Pi(q)$ (dominated by the first bubble correction to the self-energy) have been calculated to second order in perturbation theory (in $U$) at small $q$ by Chubukov and Maslov \cite{maslov:2003}. The result of this perturbative approach is $\delta\Pi(q)=\Pi(q)-\Pi(0)\approx -{4q\chi_{s}(0)\Gamma_s^2}/{3\pi k_F}$, where $\Gamma_s\sim-Um^*/4\pi$ denotes the backscattering amplitude. When $U N_e\ll 1$, we recover from Eq. (\ref{eq:deltachis}) the known result $\delta \chi_s(q)=\delta \Pi(q)$ \cite{maslov:2003}. Now we are ready to obtain an estimate for the Curie temperature $T_{c}$. Replacing $\chi_s(0)$ in $\delta \chi_s(q)$ by its non-interacting limit $\chi_0(0)$, and assuming $\Gamma_s=O(1)$ (this is an upper bound because $\Gamma_s$ is a small parameter controlling the perturbation theory), we then obtain from Eq. (\ref{eq:tcsw}) $T_c\sim 25~\mu K$ for typical 2DEG parameters. This value of $T_c$ becomes further enhanced by a numerical factor (e.g. of order $5$ for $r_s\sim 8$ \cite{GV}) if one uses an effective renormalized value for the spin susceptibility $\chi_S=\chi_s(0)$ instead of $\chi_0(0)$. Though $T_c$ is still rather small, it is now finite, confirming our arguments related to the MWT that e-e interactions increase the Curie temperature. When $UN_e$ is no longer negligible compared to $1$, $T_c$ is even further enhanced by the additional numerical factor $1/(1-UN_e)^2$ (see Eq. (\ref{eq:deltachis})). Close to the ferromagnetic Stoner instability of the electron system, reached when $UN_e\sim 1$, the Curie temperature $T_c$ of the nuclear system is dramatically enhanced, as could have been anticipated. In the preceding paragraphs, we replaced $\Gamma_{ir}^-(q)$ by a $q$-independent constant operator. One can instead use another approximation, called the local field factor approximation (LFFA). The idea of the LFFA is to replace the average electrostatic potential by a local field potential seen by an electron with spin $\sigma$ (see \cite{GV} for a review). In this scheme $(\Gamma^-_{ir}(q))_{pp'}\approx -V(q)G_-(q)$, where $G_-(q)$ is a local field factor and ${ V}(q)=2\pi e^2/\kappa q$ the bare {\em unscreened} Coulomb interaction ($\kappa$ is the dielectric constant).
Within this approximation scheme, the static spin susceptibility $\chi_s$ becomes \begin{equation}\label{eq:chi_lfft} \chi_s(q)=\frac{\chi_0(q)}{1+{V}(q)G_-(q)\chi_0(q)}. \end{equation} Determining $G_-(q)$ precisely for all $q$ is still an open issue. However, the asymptotic regimes are by now quite well established \cite{GV}. A semi-phenomenological interpolation formula based on the original Hubbard local field factor \cite{hubbard}, modified in such a way that the compressibility sum rule is exactly satisfied, reads \cite{GV,pines}: \begin{equation}\label{eq:lfft} G_-(q)\approx g_0\frac{q}{q+g_0(1-\chi_P/\chi_S)^{-1}\kappa_2}, \end{equation} where $g_0$ is related to the probability of finding two electrons (of opposite spins) at the same position in the electron liquid, $(g\mu_B)^{-2}\chi_P$ is the Pauli susceptibility, and $\mu_B$ the Bohr magneton. For non-interacting electrons $\chi_P/\chi_S=1$. An approximate form for $g_0$ giving good agreement with quantum Monte Carlo (QMC) calculations has been proposed recently by Gori-Giorgi {\it et al.} \cite{gori:2004}: $g_0(r_s)\approx (1+Ar_s+B r_s^2+Cr_s^3)e^{-Dr_s}/2$. In a 2DEG, $r_s=1/(\sqrt{\pi n_e}\,a_B^*)$, where $a_B^*=\kappa/m^*e^2$ is the effective Bohr radius. The parameters $A=0.088,~B=0.258,~C=0.00037,~D=1.46$ are fitting parameters reproducing QMC results for the 2DEG \cite{gori:2004}. From Eqs. (\ref{eq:tcsw}) and (\ref{eq:chi_lfft}), one can easily determine $T_{c}$ within the LFFA scheme to be given by \begin{equation} \label{eq:tcsw1} T_{c}=\frac{IA}{2k_B}\sqrt{ \frac{3I}{\pi }}\frac{A}{(\alpha-1)^2g_0 {V}(a)}, \end{equation} where $\alpha=(1-\chi_P/\chi_S)^{-1}$ and ${V}(a)$ is the Coulomb potential evaluated at the interatomic distance $a$. The energy scale $(\alpha-1)^2g_0 {V}(a)$ can be interpreted as a renormalized screened potential due to collective interaction effects that are incorporated in the LFFA. The ratio ${A}/(\alpha-1)^2g_0 {V}(a)$ can be regarded as the small parameter of our theory. Quite remarkably, the LFFA predicts an exponential enhancement of $T_{c}$ with increasing interaction parameter $r_s$. For a value of $r_s\sim 5$, this theory already predicts a large $T_{c}\sim 25~\mathrm{mK}$, a temperature which is routinely achieved nowadays. Obviously, for some value of $r_s$, the dimensionless parameter ${A}/(\alpha-1)^2g_0 {V}(a)$ exceeds unity. The truncation of the Schrieffer-Wolff transformation at lowest order then becomes unjustified, and feedback effects between the electron gas and the nuclear spins, not incorporated in our theory, become important. Nevertheless, for relatively large values $r_s\lesssim 6$, the condition $A\ll (\alpha-1)^2g_0{ V}(a)$ is satisfied. Although the spin wave analysis may overestimate $T_c$, the trend in all the approximation schemes we used is that e-e interactions dramatically increase the Curie temperature, possibly into the $\mathrm{mK}$ range for large $r_s$ (three orders of magnitude larger than $E_{dip}$, which justifies our starting Hamiltonian). We note that the non-perturbative LFFA theory predicts higher $T_c$'s than the perturbative calculation for the short-ranged interaction. Finally, below $T_c$, the nuclear spins within the 2DEG polarize and generate an effective magnetic field of the order of a few tesla. This will create a small Zeeman splitting \cite{note:self} in the 2DEG, which should be detectable with, e.g., optical or transport methods.
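The exponential enhancement with $r_s$ can be traced through the QMC fit for $g_0$; a short sketch evaluating it (fit parameters as quoted above) makes the trend explicit, since $T_c$ in Eq. (\ref{eq:tcsw1}) scales as $1/g_0$ at fixed $\alpha$ and $V(a)$:

\begin{verbatim}
import numpy as np

def g0(rs, A=0.088, B=0.258, C=0.00037, D=1.46):
    """QMC fit of Gori-Giorgi et al. for g0(rs) in the 2DEG."""
    return (1 + A*rs + B*rs**2 + C*rs**3) * np.exp(-D*rs) / 2

# g0 decays roughly exponentially with rs, so at fixed alpha and V(a)
# the LFFA Curie temperature Tc ~ 1/g0 grows roughly exponentially.
for rs in (1, 2, 5, 8):
    print(rs, g0(rs))
\end{verbatim}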
In summary, we have analyzed the Curie temperature $T_c$ of nuclear spins in an interacting 2DEG using a mean field and a spin wave analysis. We have shown that electron-electron interactions considerably enhance the temperature for a ferromagnetic phase transition in the nuclear system, with $T_{c}$ in the milli-Kelvin range for 2DEGs with $r_s \sim 5-10$. We thank B. Coish, L. Glazman, L. Kouwenhoven, and A. Yacoby for useful discussions. This work is supported by the Swiss NSF, NCCR Nanoscience, ONR, and JST ICORP.
{ "redpajama_set_name": "RedPajamaArXiv" }
\title{Improving Efficiency and Accuracy of Causal Discovery\\ Using a Hierarchical Wrapper}

\author[ ]{Shami Nisimov} \author[ ]{Yaniv Gurwicz} \author[ ]{Raanan Y.~Rohekar} \author[ ]{Gal Novik} \affil[ ]{Intel Labs}

\begin{document} \maketitle \vskip 0.3in

\begin{abstract} Causal discovery from observational data is an important tool in many branches of science. Under certain assumptions it allows scientists to explain phenomena, predict, and make decisions. In the large sample limit, sound and complete causal discovery algorithms have been previously introduced, where a directed acyclic graph (DAG), or its equivalence class, representing causal relations is searched. However, in real-world cases, only finite training data is available, which limits the power of statistical tests used by these algorithms, leading to errors in the inferred causal model. This is commonly addressed by devising a strategy for using as few statistical tests as possible. In this paper, we introduce such a strategy in the form of a recursive wrapper for existing constraint-based causal discovery algorithms, which preserves soundness and completeness. It recursively clusters the observed variables using the normalized min-cut criterion from the outset, and uses a baseline causal discovery algorithm during backtracking for learning local sub-graphs. It then combines them and ensures completeness. By an ablation study, using synthetic data, and by common real-world benchmarks, we demonstrate that our approach requires significantly fewer statistical tests, learns more accurate graphs, and requires shorter run-times than the baseline algorithm. \end{abstract}

\section{Introduction}

A fundamental task in various disciplines of science is to discover causal relations among domain variables \citep{glymour2019review, shen2020challenges}. In many cases, the causal relations can be properly represented by a DAG \citep{pearl2009causality}. Then, by interpreting this causal DAG as a statistical model, many of these causal relations can be discovered using observational data alone \citep{spirtes2000, pearl1991theory, peters2017elements}, known as causal discovery.
In constraint-based causal discovery algorithms, statistical independence is tested between pairs of variables conditioned on subsets of the remaining domain variables \citep{spirtes2000, colombo2012learning, claassen2013learning, tsamardinos2006max, yehezkel2009rai, cheng2002learning}. As not all causal relations can be discovered purely from these statistical tests using observational data, these algorithms return an equivalence class of the true underlying DAG. Nevertheless, constraint-based algorithms are generally proven to be asymptotically correct. In this paper, we consider this family of algorithms. In most real-world cases, limited observational data is available and statistical tests are prone to errors. Moreover, statistical tests for conditional independence (CI) often suffer from the curse of dimensionality. Tests with large condition sets are more prone to errors than tests with smaller condition sets. Thus, a common principle in constraint-based algorithms is to derive the next CI tests to perform from the results of previous CI tests with smaller condition sets \citep{spirtes2000}. Another challenge is that learning causal DAGs from observed data is NP-hard \citep{chickering2004large}. The number of possible DAGs grows super-exponentially with the number of domain variables, posing a serious limitation on the expected computational complexity of algorithms for real-world applications. On the one hand, it is assumed that enough data points are available such that the statistical test results will be reliable; on the other hand, the computational complexity of these statistical tests increases with the number of data points. Thus, the number of statistical tests commonly serves as a measure of computational complexity. The common approach to addressing these problems is to reduce the overall number of CI tests required by the algorithm, and to favor those that have greater statistical power. In this paper, we propose a wrapper---hierarchical clustering for causal discovery (HCCD)---for existing causal discovery algorithms, referred to in the paper as baseline algorithms. This wrapper recursively clusters the domain variables, thus limiting the condition set size of CI tests within each cluster, which alleviates the curse of dimensionality. In doing so, the wrapper relies on the level of correlation between variables, as opposed to only the Boolean result of the statistical CI tests. Using spectral clustering, sub-domains are derived recursively with respect to the \emph{relative} correlation between variables. Once a (sub-)domain cannot be divided into smaller sub-domains, a baseline causal discovery algorithm is called. Tracing back from the recursion, the causal graphs for each sub-domain are merged and the baseline algorithm is called again on the merged graph for the edges between sub-domains (inter-domain), retaining edges within each sub-domain (intra-domain). The proposed wrapper improves accuracy on common finite datasets while preserving the soundness and completeness of the baseline algorithm. \begin{figure*} \centering \subfigure[]{\includegraphics[width=0.45\textwidth]{Method_Illustration.pdf}} \subfigure[]{\includegraphics[width=0.53\textwidth]{hcc_alarm_2_levels.pdf}} \vskip -0.02in \caption{(a) An illustration of a top-down 2-way clustering of the feature set followed by a bottom-up causal discovery. The domain variables are clustered hierarchically.
Then, from the leaves upwards, causal discovery (shown as a colored graph) is applied disjointly to the variables in each cluster, and the resultant graphs are unified at their parent cluster, conditioned on their white-lists as depicted by those graphs. This process is backtracked until the root of the cluster tree. Best viewed in color. (b) An example of a top-down 2-way clustering of the ALARM dataset's domain variables. In the first level, HCCD creates 2 clusters, $\{A,B\}$. Then, for each cluster a recursive call is evaluated, and clusters $\{A_1, A_2\}$ and $\{B_1, B_2\}$ are created, respectively. The nodes within each cluster are highlighted by a different color, and presented on the ground truth structure. Best viewed in color.} \label{fig:method_illustration} \vskip -0.08in \end{figure*}

\section{Background}

Constraint-based algorithms for causal discovery rely on the correctness of statistical tests for conditional independence. In practice, as only limited data is available, these tests are prone to errors, and often suffer from the curse of dimensionality. The PC algorithm \citep{spirtes2000} iteratively refines an initial fully-connected graph. In each iteration, connected nodes are tested for independence conditioned on a subset of their neighbors, where this subset is restricted to a constant size. The edge is removed if an independence is found. The restriction on the condition set size is increased by one in the next iteration. Thus, this approach has an advantage when CI tests with smaller condition sets are more reliable than CI tests with larger condition sets. The RAI algorithm \citep{yehezkel2009rai} follows this approach but relies heavily on information from CI tests with smaller condition sets. It orients the graph's edges and decomposes it into sub-graphs before additional CI testing. Thus, errors in earlier stages may cause errors in later stages. Other works \citep{cai2017sada, aliferis2010local, xie2008recursive} also leverage divide-and-conquer or local-search strategies in a hierarchical or recursive way, and report improved results from partitioning the nodes into subsets and learning a local structure for each. Another line of work \citep{sondhi2019reduced, chickering2015selective} proposes to leverage properties of the graph to improve running time. Previously, it was shown that relying on the correlation level between pairs of variables, in addition to the Boolean result of the CI tests, can reduce the overall number of CI tests and improve accuracy. The TPDA algorithm \citep{cheng2002learning}, having a complexity of $O(n^4)$ ($n$ is the number of nodes), relies on the monotone-DAG-faithfulness assumption. It assumes that the mutual information between any pair of nodes cannot decrease by opening more dependency-inducing paths. Nevertheless, although TPDA has lower complexity than algorithms that do not utilize the correlation level among variables, it performs more CI tests with large condition sets, rendering it unstable for limited data \citep{tsamardinos2006max}. Recently, it was proposed to utilize inhomogeneity in the domain as a heuristic for improving the accuracy and speed of existing causal discovery algorithms \citep{pashami2018causal, zhang2018learning}. For example, TSCB \citep{zhang2018learning} is a 2-step wrapper algorithm that first clusters the domain variables, invoking an existing causal discovery algorithm for each cluster, and then applies the same causal discovery algorithm to inter-cluster edges.
However, it is not clear under which conditions clustering-based wrappers retain the soundness and completeness of the baseline algorithm, and under which conditions they are faster and learn a more accurate graph than the baseline algorithm. In this paper we discuss the implications of clustering the domain variables on the soundness and completeness of causal discovery. We also discuss the properties of such a clustering that may reduce or increase the probability of errors and the overall efficiency.

\section{Variables Clustering for Causal Discovery}

As discussed, dividing the set of variables into smaller subsets can be appealing for causal discovery algorithms. However, in the common scenario where the underlying causal DAG is connected, an optimal clustering, from which a causal discovery algorithm can benefit, is not obvious. Constraint-based algorithms often rely on the causal Markov and faithfulness assumptions \citep{spirtes2000}. A probability distribution $P$ and a DAG $\mathcal{G}$ are said to be faithful to one another if, in $P$, variables $A$ and $B$ are independent conditioned on a set $\boldsymbol{Z}$, denoted $A\indep B | \boldsymbol{Z}$, if and only if $A$ and $B$ are d-separated by $\boldsymbol{Z}$ in $\mathcal{G}$. It is then key in constraint-based algorithms to identify conditional independence relations for constructing the underlying graph. Let $\mathrm{Alg}$ be a causal discovery algorithm. Let $\mathrm{ClustCD}$ (Cluster Causal Discovery) be the following procedure. 1) Given observed data for domain variables $\boldsymbol{X}=\{A, B, \ldots\}$, partition $\boldsymbol{X}$ into $k$ disjoint subsets $\boldsymbol{X}_1, \ldots, \boldsymbol{X}_k$, i.e., $\cup_{i=1}^{k}\boldsymbol{X}_i=\boldsymbol{X}$ and $\boldsymbol{X}_i\cap\boldsymbol{X}_j=\emptyset, \forall i\neq j$. 2) Call $\mathrm{Alg}$ for each $\boldsymbol{X}_i$ (intra-cluster). 3) Call $\mathrm{Alg}$ for edges between any pair $(A,B)$ such that $A\in\boldsymbol{X}_i$ and $B\in\boldsymbol{X}_j$, for all $i,j\in\{1,\ldots,k\}, i\neq j$ (inter-cluster). \begin{thm}\label{thm:twostep} If $\mathrm{Alg}$ is a sound and complete causal discovery algorithm, then procedure $\mathrm{ClustCD}$ is sound for $\boldsymbol{X}$, but not complete. \end{thm} \begin{proof} \vskip -0.15in The proof is given in Appendix A. \vskip -0.3in \end{proof} Given a probability distribution $P$ faithful to a DAG $\mathcal{G}$, a complete algorithm can identify from observed data of $\boldsymbol{X}_i$ the conditional independence relation between a pair $A,B\in\boldsymbol{X}_i$, not adjacent in $\mathcal{G}$, if there is at least one separating set $\boldsymbol{Z}\subseteq\boldsymbol{X}_i$, i.e. $A\indep B | \boldsymbol{Z}$, where $A$, $B$, and $\boldsymbol{Z}$ are disjoint. In general, it is not guaranteed that a partition of $\boldsymbol{X}$ into disjoint subsets (clustering) exists such that at least one separating set for every pair of conditionally independent variables lies in the same cluster as the pair. Of course, there are cases where such a clustering does exist; for example, the clustering $\{A\}, \{B\}, \{C,D,E\}$ when the underlying graph is $A\rightarrow D \leftarrow C \rightarrow E \leftarrow B$. Now, consider the case of two clusters.
In one extreme, the first cluster contains a single variable, and the second cluster contains the remaining variables. In such a case, the expected number of undetectable intra-cluster independence relations is minimized. However, the complexity, in terms of the number of independence tests, is maximal. On the other extreme, the two clusters have an equal number of variables. This minimizes the number of independence tests performed by the algorithm. For example, the complexity of the PC algorithm is $O(n^m)$ ($m$ is the maximal in-degree), so if one cluster has $n_1$ variables and the other $n-n_1$, then $O(n_1^m) + O((n-n_1)^m)$ is minimal for $n_1=\nicefrac{n}{2}$. However, the expected number of undetectable intra-cluster independence relations is maximal. A clustering method used by procedure $\mathrm{ClustCD}$ should therefore balance minimizing the number of undetectable independence relations against the complexity of CI testing. For reducing the number of CI tests in the typical case, we assume that the variables of an unconnected pair in $\mathcal{G}$ are more strongly correlated with the nodes of their minimal separating set than with other nodes. \begin{asm} Let $I$ be a pairwise symmetric correlation function. For every disjoint pair of nodes $(X,Y)$ in the true underlying graph, such that $X \indep Y | \boldsymbol{Z}$, where $\boldsymbol{Z}$ is a minimal separating set, there exists $\boldsymbol{V} \subset \boldsymbol{X} \setminus (\{X,Y\}\cup\boldsymbol{Z})$, called a redundant set, such that \vskip -0.3in \begin{equation*} \begin{split} \min_{Z\in\boldsymbol{Z}}\left[\max\left[I(X,Z), I(Y,Z)\right]\right] \;\ge\; I(X,Y) \;>\; \\ \min_{V\in \boldsymbol{V}}\left[\max\left[I(X,V), I(Y,V)\right]\right]. \end{split} \end{equation*} \label{asm:clust} \end{asm} \vskip -0.33in This assumption is derived as follows. Let $X \indep Y | \boldsymbol{Z}$, where $\boldsymbol{Z}$ is a minimal separating set. For a constraint-based causal discovery algorithm to identify this independence, it is essential that every $Z\in\boldsymbol{Z}$ is in the same cluster as $X$ and $Y$. To ensure this, every $Z\in\boldsymbol{Z}$ should have a correlation level with $X$ or $Y$ at least as high as the correlation level between $X$ and $Y$. That is, $\forall Z\in\boldsymbol{Z}$, $I(Z,X)>I(X,Y)$ or $I(Z,Y)>I(X,Y)$. Thus, if $X$ and $Y$ are in the same cluster, $\boldsymbol{Z}$ is also in that cluster. This is formally expressed by the first relation of \asmref{asm:clust}: $\min_{Z\in\boldsymbol{Z}}[\max[I(X,Z), I(Y,Z)]]\geq I(X,Y)$, where $\min_{Z\in\boldsymbol{Z}}$ essentially represents ``$\forall Z\in\boldsymbol{Z}$''. The second relation in \asmref{asm:clust} is $I(X,Y)>\min_{V\in\boldsymbol{V}}[\max[I(X,V), I(Y,V)]]$, where $\boldsymbol{V}$ is a set that includes neither $X$, $Y$, nor any $Z\in\boldsymbol{Z}$. This relation assumes that the variables can be clustered. If no such redundant set $\boldsymbol{V}$ exists, it means that every variable in $\boldsymbol{X} \setminus (\{X,Y\}\cup\boldsymbol{Z})$ has a stronger correlation with $X$ or $Y$ than the correlation between $X$ and $Y$. Thus, if $X$ and $Y$ are in the same cluster, then all other variables will be in the same cluster as well.
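As a toy illustration of \asmref{asm:clust}, with hypothetical correlation levels chosen by hand for $X \indep Y \,|\, \{Z\}$ and a redundant set $\{V\}$, the following snippet checks both inequalities:
\begin{verbatim}
# Hypothetical correlation levels for X _||_ Y | {Z}, redundant set {V}.
I = {("X", "Z"): 0.8, ("Y", "Z"): 0.7,   # Z strongly tied to X and Y
     ("X", "Y"): 0.5,                    # weaker direct X-Y correlation
     ("X", "V"): 0.2, ("Y", "V"): 0.3}   # V weakly tied to both

def i(a, b):                             # symmetric lookup
    return I.get((a, b), I.get((b, a)))

lhs = min(max(i("X", z), i("Y", z)) for z in ["Z"])  # over separating set
rhs = min(max(i("X", v), i("Y", v)) for v in ["V"])  # over redundant set
assert lhs >= i("X", "Y") > rhs   # both relations hold (0.8 >= 0.5 > 0.3)
\end{verbatim}
Under these values, a clustering step may safely separate $V$ from $\{X, Y, Z\}$ without hiding the separating set.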
\asmref{asm:clust} is required for achieving efficiency\footnote{The soundness and completeness of the method described in this paper do not rely on this assumption.} in the number of CI tests, and balances between: 1) allowing minimal separating sets to be discovered by $\mathrm{Alg}$ applied to a cluster, and 2) partitioning the variables into clusters. \subsection{Domain Variable Clustering}\label{sec:spectral-clustering} We now derive a clustering approach that complies with \asmref{asm:clust}. Consider a fully connected undirected graph $\mathcal{U}$ over the domain variables $\boldsymbol{X}$. A symmetric similarity matrix $\boldsymbol{W}$ represents the weights of the edges in $\mathcal{U}$. The value of $\boldsymbol{W}_{i,j}$ is the weight of the edge between nodes $X_i,X_j\in\boldsymbol{X}$ and represents the correlation ``strength'' between these variables. The weight is a statistical measure of correlation, denoted $I$, calculated by the statistical independence test used by the baseline causal discovery algorithm; for example, mutual information for discrete variables and the correlation coefficient for continuous variables (with rapid density estimation, e.g., using \cite{gurwicz2004rapid}). Clustering can then be viewed as partitioning $\mathcal{U}$ into disjoint sub-graphs $\mathcal{U}_1,\ldots,\mathcal{U}_k$ by removing edges connecting the sub-graphs, where a cluster $\boldsymbol{X}_i$ consists of the nodes in sub-graph $\mathcal{U}_i$. Partitioning $\mathcal{U}$ by minimizing the sum of weights of removed edges violates \asmref{asm:clust}, as discussed later. Moreover, as this sum increases with the number of removed edges, clustering algorithms based on this criterion favor creating small clusters of isolated nodes \citep{wu1993optimal}. As a solution, we follow \citet{shi2000normalized}, who proposed the $k$-way normalized cut (Ncut), \begin{multline}\label{eq:ncut} \mathrm{Ncut}(\{\boldsymbol{X}_1,\ldots,\boldsymbol{X}_k\}) = \\ \sum_{i=1}^{k-1}\sum_{j=i+1}^{k} \nicefrac{\mathrm{cut}(\boldsymbol{X}_i, \boldsymbol{X}_j)}{\mathrm{assoc}(\boldsymbol{X}_i, \boldsymbol{X})}, \end{multline} where $\mathrm{assoc}(\boldsymbol{X}_i, \boldsymbol{X})$ is the sum of weights of edges connecting each node in cluster $i$ to every other node in $\boldsymbol{X}$, and $\mathrm{cut}(\boldsymbol{X}_i, \boldsymbol{X}_j)$ is the sum of weights of edges connecting each node in cluster $i$ to every node in cluster $j$. This criterion complies with \asmref{asm:clust}. Let $X \indep Y | \boldsymbol{Z}$, where $\boldsymbol{Z}$ is a minimal separating set. Now, consider an undesired clustering: $\boldsymbol{X}_1=\{X,Y\}$ and $\boldsymbol{X}_2=\boldsymbol{Z}$. Then, for a single separator $Z$, $\mathrm{Ncut}(\boldsymbol{X}_1,\boldsymbol{X}_2) = \nicefrac{(I(X,Z) + I(Y,Z))}{(I(X,Z) + I(Y,Z) + I(X,Y))}$. To avoid such a clustering, this $\mathrm{Ncut}$ value should be large for every $Z\in\boldsymbol{Z}$. It is easy to see that this value is greater when $I(X,Z) > I(X,Y)$ than when $I(X,Z) < I(X,Y)$, and similarly for $I(Y,Z)$ relative to $I(X,Y)$. Thus, this criterion complies with \asmref{asm:clust}. It is important to note that a criterion equal to the numerator of \eqref{eq:ncut} alone does not support \asmref{asm:clust}, as it ignores $I(X,Y)$. In addition, $\mathrm{Ncut}$ discourages the creation of small clusters. In fact, in the extreme case of equal weights for all edges, $\mathrm{Ncut}$ is minimized for clusters of equal sizes.
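This argument can be replayed numerically; the following sketch evaluates the 2-way Ncut of the undesired clustering for illustrative (hypothetical) correlation values:
\begin{verbatim}
def ncut_xy_vs_z(ixz, iyz, ixy):
    # 2-way Ncut of the undesired clustering {X, Y} vs. {Z},
    # following the Ncut definition above with k = 2.
    cut = ixz + iyz             # weight crossing the two clusters
    assoc = ixz + iyz + ixy     # weight from {X, Y} to all of X
    return cut / assoc

# When Z is more correlated with X and Y than X is with Y, the
# undesired clustering has a high Ncut and is therefore avoided:
print(ncut_xy_vs_z(0.8, 0.7, 0.3))  # ~0.83  (I(X,Z) > I(X,Y))
print(ncut_xy_vs_z(0.2, 0.1, 0.9))  # ~0.25  (I(X,Z) < I(X,Y))
\end{verbatim}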
\citet{shi2000normalized} showed that minimizing the 2-way Ncut is equivalent to \begin{equation}\label{eq:lapmincut} \min_u \nicefrac{(u^{\mathrm{T}}\boldsymbol{L}u)}{(u^{\mathrm{T}}\boldsymbol{D}u)} \qquad \mathrm{s.~t.}\quad u^{\mathrm{T}}\boldsymbol{D}\boldsymbol{1}=0, \end{equation} \vskip -0.1in where $u$ is an indicator vector of length $n$, $\boldsymbol{D}$ is a diagonal matrix with elements $\boldsymbol{D}_{i,i}=\sum_{j=1}^n \boldsymbol{W}_{i,j}$, and $\boldsymbol{L}=\boldsymbol{D}-\boldsymbol{W}$ is the Laplacian matrix. In our case, we can relax $u$ to take on real values, and the criterion can be minimized by solving the generalized eigenvalue system $(\boldsymbol{D}-\boldsymbol{W})u=\lambda \boldsymbol{D}u$. Taking the eigenvector corresponding to the smallest non-zero eigenvalue minimizes \begin{equation}\label{eq:rel-dist} \nicefrac{(\sum_{i,j} \boldsymbol{W}_{i,j}(u_i-u_j)^2)}{(\sum_i \boldsymbol{D}_{i,i}u_i^2)}. \end{equation} A Laplacian eigenmap \citep{belkin2003laplacian} is formed by concatenating the $m$ eigenvectors corresponding to the lowest non-zero eigenvalues, $\boldsymbol{\Tilde{u}}=[u^1,\ldots,u^m]$. Thus, each domain variable $X_i\in\boldsymbol{X}$ is represented by a point $\boldsymbol{\Tilde{u}}_{(i,\cdot)}$ in $\mathbb{R}^m$. For our task, from \eqref{eq:rel-dist}, variables that are strongly correlated, \emph{relative} to other pairs, will have a relatively small Euclidean distance in $\mathbb{R}^m$. Finally, the points $\Tilde{\boldsymbol{u}}$, representing the variables $\boldsymbol{X}$ in $\mathbb{R}^m$, are clustered using k-means++ \citep{arthur2006k}. This procedure is known as spectral clustering. \subsection{Proposed Method} We consider the problem of learning a causal model, given a dataset for $n$ domain variables, $\{X_i\}^n_{i=1}$. Our method is composed of two main stages, commencing with a top-down hierarchical clustering stage, followed by a bottom-up causal discovery in the backtracking stage. In the first stage, hierarchical clustering aims to alleviate the curse-of-dimensionality by partitioning the variable set into clusters, each of which potentially contains variables that are statistically related to each other to a large extent, thereby avoiding spurious connectivity to weaker, undesirable variables (\asmref{asm:clust}). Our method starts by clustering the entire variable set into a number of clusters (see \secref{sec:spectral-clustering}), and thereafter successively clusters each of the resultant clusters further, independently of the other clusters. This successive, independent clustering process continues for each sub-cluster recursively, forming a tree of clusters, until a separability condition is met (explained later), at which point the entire variable set is partitioned into subsets of variables. \figref{fig:method_illustration}(a) illustrates this process. We postulate that each such variable set has a high probability of manifesting some structural motif \citep{milo2002network, Yang2018Learning}. A separability condition is used to determine the termination of the hierarchical clustering, and for that the eigenvalues of the graph's Laplacian are used (\eqref{eq:lapmincut}); a sketch of the full clustering step follows.
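For concreteness, the clustering step just described can be sketched as follows; the embedding size \texttt{m}, the tolerance \texttt{zero\_tol}, and the SciPy/scikit-learn calls are illustrative choices rather than a prescribed implementation:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clusters(W, m=2, zero_tol=1e-6):
    # Cluster variables given a similarity matrix W (sketch).
    # Solves (D - W) u = lambda D u, embeds each variable by the m
    # eigenvectors beyond the trivial one, and runs k-means++ where
    # the number of clusters is the count of near-zero eigenvalues.
    D = np.diag(W.sum(axis=1))
    vals, vecs = eigh(D - W, D)               # ascending eigenpairs
    k = max(2, int(np.sum(vals < zero_tol)))  # near-zero count
    U = vecs[:, 1:m + 1]                      # Laplacian eigenmap
    labels = KMeans(n_clusters=k, init="k-means++",
                    n_init=10).fit_predict(U)
    return labels, vals
\end{verbatim}
Counting the near-zero eigenvalues doubles as the separability condition discussed next.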
Generally, eigenvalues close to zero correspond to isolated groups in the graph; therefore, if more than one such eigenvalue exists, the variable set of the sub-cluster is likely to contain more (relatively) disjoint groups within it, hence the hierarchical clustering process continues. In this case, the number of clusters for the next recursive call is the number of the Laplacian's eigenvalues that are close to zero ($k'$ in Algorithm \hyperlink{HCPCALG}{1}, line 9). Experimentally, it was observed that this criterion mostly terminates the clustering at optimal points. In the second stage, a bottom-up causal discovery algorithm, denoted $\mathrm{Alg}$, is applied to the sub-clusters, starting from the leaves of the cluster tree and moving upwards towards the root of the tree. $\mathrm{Alg}$ is applied to the variable set of each sub-cluster independently of the other sub-clusters, on the assumption that a variable set secluded from the irrelevant variables of other sub-clusters is more likely to yield a graphical model with reliable edges, i.e., with a higher degree of certainty. In this paper we use the PC algorithm as $\mathrm{Alg}$. Even though the PC algorithm was chosen as the baseline algorithm, other constraint-based causal discovery methods \citep{tsamardinos2006max, rohekar2018bayesian, colombo2012learning} may be used and arguably improved, since this stage poses no assumptions or restrictions on the elements of any prospective method. After a graph is learned for each sub-cluster, it is represented as a sub-graph. Further on, adjacent sub-graphs (those belonging to the same parent cluster) are backtracked in tandem upwards to their parent cluster, at which point they are merged into a single unified variable set. For that, edges are added between every node in one sub-cluster and every node in the other sub-clusters, and a list, $\mathcal{E}$, is formed from these added edges. That is, $\mathcal{E}$ lists the edges of the bipartite graphs between every pair of sub-clusters. $\mathrm{Alg}$ is applied to the unified variable set, and only the edges listed in $\mathcal{E}$ are tested for removal. That is, $\mathrm{Alg}$ does not consider new connections or the removal of edges between any pair of variables within each sub-cluster. Ultimately, each sub-graph keeps its intra-cluster connectivity, presumably stemming from a more reliable variable set, and appends new inter-cluster connectivity, which was not considered in the former stage. Then, for preserving the completeness of $\mathrm{Alg}$, we apply $\mathrm{Alg}$ again to the unified variable set, this time considering all the remaining edges. The above process continues up the cluster tree and terminates after being applied at the root, at which point the final graph is formed from the entire variable set. Note that $\mathrm{Alg}$ is required to learn about the edges in the list $\mathcal{E}$. For the case where $\mathrm{Alg}$ is the PC algorithm, we set the conditional-independence test function to return a result only for edges in $\mathcal{E}$. For edges not in this list, the function simply reports the existence or absence of the edge in the current graph as ``dependent'' or ``independent'', respectively; a sketch of this mechanism is given below. The main purpose of our approach is to improve the accuracy and efficiency of a given baseline algorithm by reducing the number of (unique) statistical tests, while maintaining the soundness and completeness of the baseline algorithm.
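The edge white-list mechanism described above can be sketched as a thin wrapper around the CI-test function; all names here are illustrative:
\begin{verbatim}
def whitelist_ci_test(ci_test, whitelist, current_adj):
    # Restrict CI testing to white-listed (inter-cluster) edges (sketch).
    # For an edge in the white-list, defer to the real statistical test;
    # otherwise report the edge's current status, so the baseline keeps
    # existing intra-cluster edges ("dependent") and never re-introduces
    # absent ones ("independent").
    def test(x, y, z):
        if (x, y) in whitelist or (y, x) in whitelist:
            return ci_test(x, y, z)        # genuine statistical test
        return y not in current_adj[x]     # True means "independent"
    return test
\end{verbatim}
Because intra-cluster edges always report their current status, the baseline run over the merged set can only add or remove inter-cluster edges.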
Although we run the baseline algorithm on all the clusters in the backtracking phase, this inclusion does not undo the advantages of the clustering in terms of efficiency and accuracy. The reason is that each of the clusters effectively (a) contains only part of the nodes, so many conditional tests are avoided, and (b) is sparser, since some edges were already removed and are not tested again; in addition, (c) conditional tests that were already applied in previous steps are not reapplied. In essence, we only avoid applying unnecessary conditional tests, and consequently improve both speed and accuracy. An improvement in the performance of our method over a baseline is expected when \asmref{asm:clust} is satisfied, and this improvement is maximal when the sizes of the clusters are equal. The more clusters are revealed (while preserving causal sufficiency), the greater the improvement. An additional virtue of our method is parallelism, which can be applied to the successive independent clustering during the top-down stage, as well as to the causal discovery in independent sub-clusters during the bottom-up stage. The method is illustrated in \figref{fig:method_illustration}(a), and presented as Algorithm \hyperlink{HCPCALG}{1}. \figref{fig:method_illustration}(b) exemplifies the top-down 2-way clustering of the ALARM dataset. \begin{thm}{ Let $\mathrm{Alg}$ be a causal discovery algorithm that takes as input an initial graph and a list of edges to be learned. If $\mathrm{Alg}$ is sound and complete, then Algorithm \hyperlink{HCPCALG}{1} is sound and complete. } \end{thm} \begin{proof} \vskip -0.15in The proof does not rely on \asmref{asm:clust} (it applies to arbitrary partitions). The proof is given in Appendix A. \vskip -0.3in \end{proof} Completeness of our approach is achieved by calling the sound and complete algorithm $\mathrm{Alg}(\boldsymbol{X}, \mathrm{edges}(\mathcal{G}_{\boldsymbol{X}}))$ to refine the result $\mathcal{G}_{\boldsymbol{X}}$ of the merged cluster (Algorithm \hyperlink{HCPCALG}{1}, line 18). In this call, all the graph edges are considered for learning, allowing independence relations left undetected within the clusters to be detected. \hypertarget{HCPCALG}{} \begin{figure}[tb] \includegraphics[width=0.483\textwidth]{Alg_HCPC.pdf} \vskip -0.25in \end{figure} \section{Experiments} First, we evaluate several aspects of the HCCD wrapper using synthetically generated data. The process we used for generating the data is detailed in Appendix B1. In addition, in Appendix B2 we examine the gain achieved by the recursion, and the effect that the completeness requirement has on the accuracy. Then, we evaluate qualitative measures of graphs learned using publicly available datasets. In all our experiments, $\mathrm{Alg}$ is PC, a sound and complete algorithm \citep{spirtes2000}. Although it relies on the causal sufficiency assumption, it is often used as a first step of causal discovery in the presence of latent confounders and selection bias \citep{spirtes2000, claassen2013learning, colombo2012learning}. \subsection{An Analysis using Synthetic Data} In this section we evaluate the performance of the HCCD wrapper with respect to the number of training samples, and to the number of nodes in the graphs, compared to the baseline method (PC). For that, we measure five key aspects, beginning with three metrics of structural correctness: the SID score \citep{peters2015structural}, structural Hamming distance (SHD), and causal accuracy.
In addition, we measure the number of CI tests and the run-time of the method, including the clustering. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_causal_accuracy.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_shd.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_sid.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_number_ci_test.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_run_time.pdf}} \vskip -0.05in \caption{Performance of the HCCD wrapper, relative to the baseline (PC), as a function of the number of training samples, for $100$ graph nodes. Values are averaged over $500$ DAGs and normalized by the PC score. (a) Causal accuracy (higher is better); (b) SHD (lower is better); (c) SID (lower is better); (d) Number of CI tests (lower is better); (e) Run-time (lower is better). The HCCD wrapper achieves improvements in all the metrics.} \vskip -0.15in \label{fig:scalability_to_training_set_100} \end{figure} \begin{figure}[h!] \centering \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_causal_accuracy.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_shd.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_sid.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_number_ci_test.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_run_time.pdf}} \vskip -0.07in \caption{Performance of the HCCD wrapper, relative to the baseline (PC), as a function of the number of graph nodes, for $500$ training samples. Values are averaged over $500$ DAGs and normalized by the PC score. (a) Causal accuracy (higher is better); (b) SHD (lower is better); (c) SID (lower is better); (d) Number of CI tests (lower is better); (e) Run-time (lower is better). The HCCD wrapper achieves improvements in all the metrics.} \vskip -0.23in \label{fig:scalability_to_number_of_nodes} \end{figure} \figref{fig:scalability_to_training_set_100} shows the performance of the HCCD wrapper with respect to the number of training samples, for graphs with $n=100$ nodes. The figures show the mean $\pm$ std of $500$ independent tests (DAGs), and values are normalized by the PC score in order to visualize the improvement over the baseline method. Additional experiments, for $n\in\{20,50,200,1000\}$, are presented in Appendix C. It is evident that the HCCD wrapper is superior to the baseline on all three structural correctness metrics along the entire range of training-set sizes and for every $n$. In addition, there is an evident saving in the number of CI tests, and importantly in run-time (including the clustering stage), along the entire range of training-set sizes. One exception is the case of $n=20$, for which the HCCD run-time is higher. This is expected, since the run-time overhead of the clustering stage overtakes the saving in run-time gained by using fewer statistical tests in datasets with a small number of nodes.
Nevertheless, for the common real-world case of datasets with many variables, the HCCD wrapper achieves a significant reduction in run-time. Additionally, it is evident that the run-time reduction increases with the number of training samples, i.e., larger training sets benefit from a greater decrease in run-time. \figref{fig:scalability_to_number_of_nodes} shows the performance of the HCCD wrapper with respect to $n$, the number of nodes, for $500$ training samples. The figures show the mean $\pm$ std of $500$ independent tests (DAGs), and values are normalized by the PC score in order to analyze the improvement over the baseline method. It is evident that the HCCD wrapper is superior to the baseline on all three structural correctness metrics along the entire range of $n$. Moreover, a saving in the number of CI tests is evident, and importantly a reduction in run-time (which includes the clustering stage), for the entire range of $n$. \subsection{Real-World Data} In this section we evaluate and compare the accuracy of our method on 10 publicly available datasets from the bnlearn package \citep{marco2010bnlearn}, and 1 dataset from the Neuropathic Pain Diagnosis Simulator \citep{rubio2019pain}, all of which represent real decision support systems that cover a wide range of real-life applications, such as medicine, agriculture, weather forecasting, financial modeling, and animal breeding. Each of these datasets consists of 10 training sets of 500 samples each, and 10 corresponding separate test sets of 5000 samples each, for evaluating several qualitative measures of structural correctness. Thus, for each of the 11 datasets, the experiments were repeated 10 times. The number of domain variables across the datasets spans from tens to hundreds. \vskip 0.1in The first metric we measure is the BDeu score \citep{chickering1995learning}, which under certain assumptions corresponds to the posterior probability of the learned graph. \citet{tsamardinos2006max} noted that this score does not rely on the true graph and may not be related to it, as it is not known in practice to what extent its underlying assumptions hold (e.g., a Dirichlet distribution of the hyper-parameters). Nevertheless, since this score does not require knowing the true graph, it has great value in practical situations. Moreover, this score is often used to tune the baseline parameters \citep{yehezkel2009rai}. \figref{fig:scatter_plot_score} shows a scatter plot of the normalized BDeu score, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets, each consisting of 10 different training and test sets (a total of 100 points). The BDeu scores are normalized by the PC BDeu score, and so a lower normalized score is better. In 97\% of the cases, HCCD is better than PC. In 82\% of the cases, TSCB is better than PC. Lastly, in 90\% of the cases, HCCD is better than TSCB. As evident from the figure, HCCD is superior to the other methods. Additionally, for each complete dataset, the mean $\pm$ std BDeu score (unnormalized) is presented in \tabref{table_BDeu_Scores}; better results for HCCD are observed on all the datasets. \vskip 0.01in \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{experiments_scatter_plot_score_w_colors_n5.pdf} \vskip -0.05in \caption{Scatter plot of the normalized BDeu score, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets from \tabref{table_BDeu_Scores}, each consisting of 10 different training and test sets.
The scores are normalized by the PC BDeu score, and so lower is better. Points below the green dashed line correspond to better results of HCCD compared to PC, which are 97\% of the cases. Points to the left of the red dashed line correspond to better results of TSCB compared to PC, which are 82\% of the cases. Points below the blue dashed line correspond to better results of HCCD compared to TSCB, which are 90\% of the cases.} \label{fig:scatter_plot_score} \end{figure} \begin{table}[h!] \caption{BDeu scores (higher is better) of PC, TSCB, and HCCD on various datasets.} \vskip -0.2in \label{table_BDeu_Scores} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule Data set & PC & TSCB & HCCD \\ \midrule Alarm & -60290 $\pm$ 2750 & -57920 $\pm$ 1280 & \textbf{-55852} $\pm$ 1703 \\ Child & -67309 $\pm$ 1059 & -66554 $\pm$ 765 & \textbf{-64539} $\pm$ 290 \\ Insurance & -74690 $\pm$ 1543 & -73848 $\pm$ 1315 & \textbf{-73469} $\pm$ 1038 \\ Mildew & -293679 $\pm$ 11072 & -290456 $\pm$ 14157 & \textbf{-267266} $\pm$ 3858 \\ Hailfinder & -301499 $\pm$ 2309 & -292020 $\pm$ 3365 & \textbf{-290200} $\pm$ 3503 \\ Barley & -358461 $\pm$ 3592 & -352608 $\pm$ 6365 & \textbf{-350807} $\pm$ 3419 \\ Munin & -451571 $\pm$ 2686 & -434700 $\pm$ 4394 & \textbf{-401007} $\pm$ 3543 \\ WIN95PTS & -64439 $\pm$ 768 & -62990 $\pm$ 947 & \textbf{-60807} $\pm$ 876 \\ PathFinder & -274163 $\pm$ 2334 & -262620 $\pm$ 5004 & \textbf{-248600} $\pm$ 2822 \\ Hepar2 & -168822 $\pm$ 582 & -168528 $\pm$ 549 & \textbf{-167489} $\pm$ 533 \\ NeuroPain & -185340 $\pm$ 678 & -182572 $\pm$ 1265 & \textbf{-181670} $\pm$ 538 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \vskip -0.3in \end{table} We also calculate causal accuracy \citep{claassen2012bayesian} as an evaluation metric for causal discovery. \figref{fig:scatter_plot_acc} shows a scatter plot of causal accuracy, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets, each consisting of 10 different training sets (a total of 100 points). The causal accuracies are normalized by the PC causal accuracy, and so higher is better. In 91\% of the cases, HCCD is better than PC. In 51\% of the cases, TSCB is better than PC. Lastly, in 93\% of the cases, HCCD is better than TSCB. As evident from the figure, HCCD is superior to the other methods. Additionally, for each complete dataset, the mean $\pm$ std causal accuracy is presented in \tabref{tab:shd_causal_acc}; better results for HCCD are observed on all the datasets, demonstrating an improved ability to recover the ground-truth causal graph. \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{experiments_scatter_plot_acc_w_colors_n5.pdf} \vskip -0.05in \caption{Scatter plot of normalized causal accuracy, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets from \tabref{tab:shd_causal_acc}, each consisting of 10 different training sets. The values are normalized by the PC value, and higher is better. Points above the green dashed line correspond to better results of HCCD compared to PC, which are 91\% of the cases. Points to the right of the red dashed line correspond to better results of TSCB compared to PC, which are 51\% of the cases. Lastly, points above the blue dashed line correspond to better results of HCCD compared to TSCB, which are 93\% of the cases.} \label{fig:scatter_plot_acc} \vskip -0.2in \end{figure} \begin{table}[h!]
\caption{SHD (lower is better) and causal accuracy (higher is better) comparison for various datasets.} \vskip -0.2in \label{tab:shd_causal_acc} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule & \multicolumn{3}{c}{Structural Hamming distance} \\ Data set & PC & TSCB & HCCD \\ \midrule Alarm & 30.50 $\pm$ 3.14 & 39.6 $\pm$ 4.55 & \textbf{28.80} $\pm$ 3.55 \\ Child & 18.50 $\pm$ 1.27 & 19.4 $\pm$ 1.35 & \textbf{18.0} $\pm$ 1.25 \\ Insurance & 42.90 $\pm$ 2.62 & 46.40 $\pm$ 4.27 & \textbf{42.20} $\pm$ 2.84 \\ Mildew & 46.80 $\pm$ 1.62 & 46.80 $\pm$ 2.04 & \textbf{45.70} $\pm$ 0.95 \\ Hailfinder & 80.30 $\pm$ 2.63 & 86.20 $\pm$ 2.49 & \textbf{80.10} $\pm$ 2.03 \\ Barley & 83.90 $\pm$ 0.74 & 83.90 $\pm$ 0.99 & \textbf{81.60} $\pm$ 1.26 \\ Munin & 283 $\pm$ 1.06 & 286.10 $\pm$ 1.37 & \textbf{279.60} $\pm$ 2.60 \\ WIN95PTS & 99.30 $\pm$ 4.30 & 103.20 $\pm$ 7.28 & \textbf{97.80} $\pm$ 7.06 \\ PathFinder & \textbf{193.10} $\pm$ 0.94 & 199.30 $\pm$ 2.21 & 195.20 $\pm$ 2.56 \\ Hepar2 & 115.70 $\pm$ 2.21 & 117.80 $\pm$ 0.92 & \textbf{114.20} $\pm$ 2.53 \\ NeuroPain & 796.70 $\pm$ 13.71 & 804 $\pm$ 23.04 & \textbf{791} $\pm$ 8.24 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule & \multicolumn{3}{c}{Causal accuracy}\\ Data set & PC & TSCB & HCCD\\ \midrule Alarm & 0.700 $\pm$ 0.039 & 0.608 $\pm$ 0.044 & \textbf{0.727} $\pm$ 0.036 \\ Child & 0.440 $\pm$ 0.067 & 0.448 $\pm$ 0.064 & \textbf{0.597} $\pm$ 0.044 \\ Insurance & 0.476 $\pm$ 0.038 & 0.446 $\pm$ 0.044 & \textbf{0.485} $\pm$ 0.038 \\ Mildew & 0.143 $\pm$ 0.029 & 0.126 $\pm$ 0.022 & \textbf{0.235} $\pm$ 0.019 \\ Hailfinder & 0.056 $\pm$ 0.012 & 0.060 $\pm$ 0.010 & \textbf{0.082} $\pm$ 0.010 \\ Barley & 0.180 $\pm$ 0.027 & 0.203 $\pm$ 0.022 & \textbf{0.233} $\pm$ 0.017 \\ Munin & 0.051 $\pm$ 0.002 & 0.061 $\pm$ 0.005 & \textbf{0.121} $\pm$ 0.011 \\ WIN95PTS & 0.320 $\pm$ 0.030 & 0.327 $\pm$ 0.026 & \textbf{0.441} $\pm$ 0.015 \\ PathFinder & 0.066 $\pm$ 0.003 & 0.064 $\pm$ 0.011 & \textbf{0.088} $\pm$ 0.008 \\ Hepar2 & 0.132 $\pm$ 0.024 & 0.139 $\pm$ 0.019 & \textbf{0.181} $\pm$ 0.017 \\ NeuroPain & 0.037 $\pm$ 0.005 & 0.042 $\pm$ 0.003 & \textbf{0.057} $\pm$ 0.004 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \vskip -0.3in \end{table} \vskip 0.2in In addition, we measure the structural Hamming distance (SHD) between the learned graph and the ground-truth graph. SHD counts the number of edge insertions, deletions, or flips needed to transform one graph into another. For each of the 11 datasets, the mean $\pm$ std SHD is presented in \tabref{tab:shd_causal_acc}. For all the datasets except one, HCCD is better than the other methods. \section{Conclusions} We propose the HCCD wrapper for causal discovery algorithms (baseline algorithms). HCCD preserves the soundness and completeness of the baseline algorithm, while reducing the number of statistical tests, increasing the accuracy of the resulting graph, and reducing the run-time. For constraint-based baseline algorithms, it is assumed that each pair of variables, not adjacent in the true underlying graph, is more strongly correlated with at least one of its separating sets than with variables not in any of its separating sets. This property of relative correlation strength is used by our method to hierarchically partition the domain variables, minimizing the number of independence relations that are not detectable from the cluster variables alone.
Using synthetically generated graphs and data, and selectively limiting certain aspects of HCCD, we demonstrated that the recursion and the completeness requirement greatly improve the efficiency of the learning procedure and the accuracy of the resulting causal graph. Applying our method to real-world graphs and common publicly available datasets, we demonstrated that HCCD learns significantly more accurate graphs compared to the PC baseline algorithm. Finally, we conjecture that scoring-based algorithms may benefit from the HCCD wrapper as well, by defining corresponding ``similarity'' measures. The search strategy would then be applied to smaller search spaces, independently and in parallel. We suspect that this will lead to avoiding local maxima and finding higher-scoring solutions. \title{Improving Efficiency and Accuracy of Causal Discovery\\ Using a Hierarchical Wrapper} \author[ ]{Shami Nisimov} \author[ ]{Yaniv Gurwicz} \author[ ]{Raanan Y.~Rohekar} \author[ ]{Gal Novik} \affil[ ]{Intel Labs} \begin{document} \maketitle \vskip 0.3in \begin{abstract} Causal discovery from observational data is an important tool in many branches of science. Under certain assumptions it allows scientists to explain phenomena, predict, and make decisions. In the large sample limit, sound and complete causal discovery algorithms have been previously introduced, where a directed acyclic graph (DAG), or its equivalence class, representing the causal relations is searched for. However, in real-world cases, only finite training data is available, which limits the power of the statistical tests used by these algorithms, leading to errors in the inferred causal model. This is commonly addressed by devising a strategy for using as few statistical tests as possible. In this paper, we introduce such a strategy in the form of a recursive wrapper for existing constraint-based causal discovery algorithms, which preserves soundness and completeness. It recursively clusters the observed variables using the normalized min-cut criterion from the outset, and uses a baseline causal discovery algorithm during backtracking for learning local sub-graphs. It then combines them and ensures completeness.
By an ablation study, using synthetic data, and by common real-world benchmarks, we demonstrate that our approach requires significantly fewer statistical tests, learns more accurate graphs, and requires shorter run-times than the baseline algorithm. \end{abstract} \section{Introduction} A fundamental task in various disciplines of science is to discover causal relations among domain variables \citep{glymour2019review, shen2020challenges}. In many cases, the causal relations can be properly represented by a DAG \citep{pearl2009causality}. Then, by interpreting this causal DAG as a statistical model, many of these causal relations can be discovered using observational data alone \citep{spirtes2000, pearl1991theory, peters2017elements}, a task known as causal discovery. In constraint-based causal discovery algorithms, statistical independence is tested between pairs of variables conditioned on subsets of the remaining domain variables \citep{spirtes2000, colombo2012learning, claassen2013learning, tsamardinos2006max, yehezkel2009rai, cheng2002learning}. As not all causal relations can be discovered purely from these statistical tests using observational data, these algorithms return an equivalence class of the true underlying DAG. Nevertheless, constraint-based algorithms are generally proven to be asymptotically correct. In this paper, we consider this family of algorithms. In most real-world cases, limited observational data is available and statistical tests are prone to errors. Moreover, statistical tests for conditional independence (CI) often suffer from the curse-of-dimensionality. Tests with large condition sets are more prone to errors than tests with smaller condition sets. Thus, a common principle in constraint-based algorithms is to derive the next CI tests to perform from the results of previous CI tests with smaller condition sets \citep{spirtes2000}. Another challenge is that learning causal DAGs from observed data is NP-hard \citep{chickering2004large}. The number of possible DAGs grows super-exponentially with the number of domain variables, posing a serious limitation on the expected computational complexity of algorithms for real-world applications. On the one hand, it is assumed that enough data points are available such that the statistical test results will be reliable; on the other hand, the computational complexity of these statistical tests increases with the number of data points. Thus, the number of statistical tests commonly serves as a measure of computational complexity. The common approach to addressing these problems is to reduce the overall number of CI tests required by the algorithm, and to favor those that have greater statistical power. In this paper, we propose a wrapper---hierarchical clustering for causal discovery (HCCD)---for existing causal discovery algorithms, referred to in this paper as baseline algorithms. This wrapper recursively clusters the domain variables, thus limiting the condition-set size of CI tests within each cluster, which alleviates the curse-of-dimensionality. That is, the wrapper relies on the level of correlation between variables, as opposed to the Boolean result of the statistical CI tests. Using spectral clustering, sub-domains are derived with respect to the \emph{relative} correlation between variables, recursively. Once a (sub-)domain cannot be divided into smaller sub-domains, a baseline causal discovery algorithm is called.
Tracing back from the recursion, the causal graphs for each sub-domain are merged and the baseline algorithm is called again on the merged graph for the edges between sub-domains (inter-domain), retaining edges within each sub-domain (intra-domain). The proposed wrapper improves accuracy in common finite datasets while preserving the soundness and completeness of the baseline algorithm.
It assumes that the mutual information between any pair of nodes cannot decrease by opening more dependency inducing paths. Nevertheless, although TPDA has lower complexity than algorithms that do not utilize the correlation level among variables, it performs more CI tests having large condition sets, rendering it unstable for limited data \citep{tsamardinos2006max}. Recently, it was proposed to utilize inhomogeneity in the domain as a heuristic for improving accuracy and speed of existing causal discovery algorithms \citep{pashami2018causal, zhang2018learning}. For example, TSCB \citep{zhang2018learning} is a 2-step wrapper algorithm that first clusters the domain variables invoking an existing causal discovery algorithm for each cluster, and then applies the same causal discovery algorithm to inter-cluster edges. However, it is not clear under which conditions clustering-based wrappers retain soundness and completeness of the baseline algorithm, and under which conditions they are faster and learn a more accurate graph than the baseline algorithm. In this paper we discuss the implication of domain variables clustering on the soundness and completeness of causal discovery. We also discuss the properties of such clustering that may reduce or increase the probability of errors and the overall efficiency. \section{Variables Clustering for Causal Discovery} As discussed, dividing the set of variables into smaller subsets can be appealing for causal discovery algorithms. However, in the common scenario where the underlying causal DAG is connected, an optimal clustering, from which a causal discovery algorithm can benefit, is not clear. \iffalse \begin{dff}[faithfulness] A probability distribution $P$ and a DAG $\mathcal{G}$ are said to be faithful to one another if all and only the conditional independence relations true in P are entailed by applying the Markov condition applied to $\mathcal{G}$. \end{dff} \fi Constraint-based algorithms often rely on the causal Markov and faithfulness assumptions \citep{spirtes2000}. A probability distribution $P$ and a DAG $\mathcal{G}$ are said to be faithful to one another if in $P$, variables $A$ and $B$ are independent conditioned on set $\boldsymbol{Z}$ if and only if $A$ and $B$ are d-separated by $\boldsymbol{Z}$ in $\mathcal{G}$, $A\indep B | \boldsymbol{Z}$. It is then key in constraint-based algorithms to identify conditional independence relations for constructing the underlying graph. Let $\mathrm{Alg}$ be a causal discovery algorithm. Let $\mathrm{ClustCD}$ (Cluster Causal Discovery) be the following procedure. 1) Given observed data for domain variables $\boldsymbol{X}=\{A, B, \ldots\}$, partition $\boldsymbol{X}$ into $k$ disjoint subsets $\boldsymbol{X}_1, \ldots, \boldsymbol{X}_k$, i.e., $\cup_{i=1}^{k}\boldsymbol{X}_i=\boldsymbol{X}$ and $\boldsymbol{X}_i\cap\boldsymbol{X}_j=\emptyset, \forall i\neq j$. 2) Call $\mathrm{Alg}$ for each $\boldsymbol{X}_i$ (intra-cluster). 3) Call $\mathrm{Alg}$ for edges between any pair $(A,B)$ such that $A\in\boldsymbol{X}_i$ and $B\in\boldsymbol{X}_j$, for all $i,j\in\{1,\ldots,k\}, i\neq j$ (inter-cluster). \begin{thm}\label{thm:twostep} If $\mathrm{Alg}$ is a sound and complete causal discovery algorithm, then procedure $\mathrm{ClustCD}$ is sound for $\boldsymbol{X}$, but not complete. \end{thm} \begin{proof} \vskip -0.15in The proof is given in Appendix A. 
\vskip -0.3in \end{proof} Given a probability distribution $P$ faithful to DAG $\mathcal{G}$, a complete algorithm can identify from observed data of $\boldsymbol{X}_i$ the conditional independence relation between a pair $A,B\in\boldsymbol{X}_i$, not adjacent in $\mathcal{G}$, if there is at least one separating set, $Z\in\boldsymbol{X}_i$, i.e. $A\indep B | Z$, where $A, B, Z$ are disjoint sets. In general, it is not guaranteed that a partition of $\boldsymbol{X}$ into disjoint subsets (clustering) exists such that at least one separating set for every pair of conditionally independent variables are in the same cluster. Of course, there are cases where such a clustering does exists; for example, the clustering $\{A\}, \{B\}, \{C,D,E\}$ when the underlying graph is $A\rightarrow D \leftarrow C \rightarrow E \leftarrow B$. Now, consider the case of two clusters. In one extreme, the first cluster contains a single variable, and the second cluster contains the remaining variables. In such a case, the expected number of undetectable intra-cluster independence relations is minimized. However, the complexity of the number of independence tests is maximal. On the other extreme, the two clusters have equal number of variables. This minimizes the complexity of the number of independence tests performed by the algorithm. For example, the complexity of the PC algorithm is $O(n^m)$, ($m$ is the maximal in-degree), so if one cluster has $n_1$ variables and the other $n-n_1$, then $O(n_1^m) + O((n-n_1)^m)$ is minimal for $n_1=\nicefrac{n}{2}$. However, the expected number of undetectable intra-domain independence relations is maximal. A clustering method used by procedure $\mathrm{ClustCD}$ should balance minimizing the number of undetectable independence relations and the complexity of CI tests. For reducing the number of CI tests in the typical case, we assume that unconnected pairs of variables in $\mathcal{G}$ are more correlated to nodes of the minimal separating set, relative to other nodes. \begin{asm} Let $I$ be a pairwise symmetric correlation function. For every disjoint pairs of nodes $(X,Y)$ in the true underlying graph, such that $X \indep Y | \boldsymbol{Z}$, where $\boldsymbol{Z}$ is a minimal separating set, there exists $\boldsymbol{V} \subset \boldsymbol{X} \setminus (\{X,Y\}\cup\boldsymbol{Z})$, called a redundant set, such that \vskip -0.3in \begin{equation*} \begin{split} \min_{Z\in\boldsymbol{Z}}\left[\max\left[I(X,Z), I(Y,Z)\right]\right] \;\ge\; I(X,Y) \;>\; \\ \min_{V\in \boldsymbol{V}}\left[\max\left[I(X,V), I(Y,V)\right]\right]. \end{split} \end{equation*} \label{asm:clust} \end{asm} \vskip -0.33in This assumption is derived as follows. Let $X \indep Y | \boldsymbol{Z}$, where $\boldsymbol{Z}$ is a minimal separating set. For a constraint-based causal discovery algorithm to identify this independence, it is essential that every $Z\in\boldsymbol{Z}$ is in the same cluster that includes $X$ and $Y$. To ensure this, every $Z\in\boldsymbol{Z}$ should have a correlation level with $X$ or $Y$, at least as the correlation level between $X$ and $Y$. That is, $\forall Z\in\boldsymbol{Z}, $ $I(Z,X)>I(X,Y)$ or $I(Z,Y)>I(X,Y)$. Thus, if $X$ and $Y$ are in the same cluster, $\boldsymbol{Z}$ is also in that cluster. This is formally expressed by the first relation of \asmref{asm:clust}: $\min_{Z\in\boldsymbol{Z}}[\max[I(X,Z), I(Y,Z)]]\geq I(X,Y)$, where $\min_{Z\in\boldsymbol{Z}}$ essentially represents ``$\forall Z\in\mathbf{Z}$''. 
The second relation in \asmref{asm:clust} is $I(X,Y)>\min_{V\in\boldsymbol{V}}[\max[I(X,V), I(Y,V)]]$, where $\boldsymbol{V}$ is a set that does not include any $Z\in\boldsymbol{Z}$, $X$, and, $Y$. This relation assumes that the variables can be clustered. If no such redundant set, $\boldsymbol{V}$, exists it means that every variable in $\boldsymbol{X} \setminus (\{X,Y\}\cup\mathbf{Z})$ will have a stronger correlation with $X$ or $Y$ than the correlation between $X$ and $Y$. Thus, if $X$ and $Y$ are in the same cluster, then all other variables will be in the same cluster as well. \asmref{asm:clust} is required for achieving efficiency\footnote{Soundness and completeness of the method described in this paper does not rely on this assumption.} in the number of CI tests, and balances between: 1) allowing minimal separating sets to be discovered by $\mathrm{Alg}$ applied to a cluster, and 2) partitioning the variables into clusters. \subsection{Domain Variable Clustering}\label{sec:spectral-clustering} We now derive a clustering approach that complies with \asmref{asm:clust}. Consider a fully connected undirected graph $\mathcal{U}$ over the domain variables $\boldsymbol{X}$. A symmetric similarity matrix $\boldsymbol{W}$ represents the weights of the edges in $\mathcal{U}$. The value of $\boldsymbol{W}_{i,j}$ is the weight of the edge between nodes $X_i,X_j\in\boldsymbol{X}$ and represents the correlation ``strength'' between these variables. The weight is the statistical measure of correlation, denoted $I$, and calculated by the statistical independence test that is used by the baseline causal discovery algorithm. For example, mutual information for discrete variables and correlation coefficient for continuous variables (with a rapid density estimation, e.g., using \cite{gurwicz2004rapid}). Clustering can then be viewed as partitioning $\mathcal{U}$ into disjoint sub-graphs $\mathcal{U}_1,\ldots,\mathcal{U}_k$ by removing edges connecting the sub-graphs, where a cluster $\boldsymbol{X}_i$ consists of the nodes in sub-graph $\mathcal{U}_i$. Partitioning $\mathcal{U}$ by minimizing the sum of weights of removed edges violates \asmref{asm:clust}, as discussed later. Moreover, as this sum increases with the number of removed edges, clustering algorithms based on this criterion favor creating small clusters of isolated nodes \citep{wu1993optimal}. As a solution, we follow \citet{shi2000normalized} that proposed the $k$-way normalized cut (Ncut), \begin{multline}\label{eq:ncut} \mathrm{Ncut}(\{\boldsymbol{X}_1,\ldots,\boldsymbol{X}_k\}) = \\ = \sum_{i=1}^{k-1}\sum_{j=k+1}^k \nicefrac{\mathrm{cut}(\boldsymbol{X}_i, \boldsymbol{X}_j)}{\mathrm{assoc}(\boldsymbol{X}_i, \boldsymbol{X})}, \end{multline} where $\mathrm{assoc}(\boldsymbol{X}_i, \boldsymbol{X})$ is the sum of weights of edges connecting each node in cluster $i$ to every other node in $\boldsymbol{X}$, and $\mathrm{cut}(\boldsymbol{X}_i, \boldsymbol{X}_j)$ is the sum of weights of edges connecting each node in cluster $i$ to every node in cluster $j$. This criterion complies with \asmref{asm:clust}. Let $X \indep Y | \boldsymbol{Z}$ where $\boldsymbol{Z}$ is a minimal separating set. Now, consider an undesired clustering: $\boldsymbol{X_1}=\{X,Y\}$ and $\boldsymbol{X_2}=\boldsymbol{Z}$. Then, $\mathrm{Ncut}(\boldsymbol{X}_1,\boldsymbol{X}_2) = \nicefrac{I(X,Z) + I(Y,Z)}{I(X,Z) + I(Y,Z) + I(X,Y)}$. To avoid such clustering, it is required to maximize the $\mathrm{Ncut}$ value $\forall Z\in\mathbf{Z}$. 
It is easy to see that this value is greater when $I(X,Z) > I(X,Y)$ than the value when $I(X,Z) < I(X,Y)$ and similarly for $I(Y,Z) > I(X,Y)$. Thus, this criterion complies with \asmref{asm:clust}. It is important to note that a criterion equal to the numerator of \eqref{eq:ncut} does not support \asmref{asm:clust}, as it ignores $I(X,Y)$. In addition, $\mathrm{Ncut}$ diminishes the creation of small clusters. In fact, in the extreme case of equal weights for all edges, $\mathrm{Ncut}$ is minimized for clusters with equal sizes. \citet{shi2000normalized} showed that minimizing 2-way Ncut is equivalent to \begin{equation}\label{eq:lapmincut} \min_u \nicefrac{(u^{\mathrm{T}}\boldsymbol{L}u)}{(u^{\mathrm{T}}\boldsymbol{D}u)} \qquad \mathrm{s.~t.}\quad u^{\mathrm{T}}\boldsymbol{D}1=0, \end{equation} \vskip -0.1in where $u$ is an indicator vector of length $n$, $\boldsymbol{D}$ is a diagonal matrix with elements $\boldsymbol{D}_{i,i}=\sum_{j=1}^n \boldsymbol{W}_{i,j}$, and $\boldsymbol{L}=\boldsymbol{D}-\boldsymbol{W}$ is the Laplacian matrix. In our case, we can relax $u$ to take on real values, and the criterion can be minimized by solving the generalized eigenvalue system, $(\boldsymbol{D}-\boldsymbol{W})u=\lambda \boldsymbol{D}u$. \iffalse \begin{equation} Lu=\lambda \boldsymbol{D}u. \end{equation} \fi Taking the eigenvector corresponding to the smallest non-zero eigenvalue minimizes \begin{equation}\label{eq:rel-dist} \nicefrac{(\sum_{i,j} \boldsymbol{W}_{i,j}(u_i-u_j)^2)}{(\sum_i \boldsymbol{D}_{i,i}u_i^2)}. \end{equation} A Laplacian eigenmap \citep{belkin2003laplacian} is formed by concatenating the $m$ eigenvectors corresponding to the lowest non-zeros eigenvalues, $\boldsymbol{\Tilde{u}}=[u^1,\ldots,u^m]$. Thus, each domain variable $X_i\in\boldsymbol{X}$ is represented by a point $\boldsymbol{\Tilde{u}}_{(i,\cdot)}$ in $\mathbb{R}^m$. For our task, from \eqref{eq:rel-dist}, variables that are strongly correlated, \emph{relatively} to other pairs, will have a relatively small Euclidean distance $\mathbb{R}^m$. Finally, points $\Tilde{\boldsymbol{u}}$, representing variables $\boldsymbol{X}$ in $\mathbb{R}^m$, are clustered using k-means++ \citep{arthur2006k}. This procedure is known as spectral clustering. \subsection{Proposed Method} We consider the problem of learning a causal model, given a dataset for $n$ domain variables, $\{X_i\}^n_{i=1}$. Our method is composed of two main stages, commencing with a top-down hierarchical clustering stage followed by a bottom-up causal discovery in the backtracking stage. In the first stage, hierarchical clustering aims to alleviate the curse-of-dimensionality by partitioning the variable set into clusters, each of which potentially contains variables that are statistically related to each other to a large extent, thereby avoiding spurious connectivity to weaker and undesirable variables (\asmref{asm:clust}). Our method starts off by clustering the entire variable set from the outset into a number of clusters (see \secref{sec:spectral-clustering}), and thereafter successively clusters each of the resultant clusters furthermore, independently of the other clusters. This successive independent clustering process continues for each sub-cluster recursively, forming a tree of clusters, until a separability condition is met (explained later), at which point the entire variable set is clustered to subsets of variables. \figref{fig:method_illustration}(a) illustrates this process. 
We postulate that each such variable set has a high probability to manifest some structural motif \citep{milo2002network, Yang2018Learning}. A separability condition is used to determine the termination of the hierarchical clustering, and for that the eigenvalues of the graph's Laplacian are used (\eqref{eq:lapmincut}). Generally, those close to zero correspond to isolated groups in the graph, and therefore if more than one such eigenvalue exists then the variable set of the sub-cluster is likely to contain more (relatively) disjoint groups within it, hence the hierarchical clustering process continues. In this case, the number of clusters for the next recursion call is the number of Laplacian's eigenvalues that are close to zero ($k'$ in Algorithm \hyperlink{HCPCALG}{1}, line 9). Experimentally, it was observed that this criterion mostly terminates the clustering at optimal points. In the second stage, a bottom-up causal discovery algorithm, denoted $\mathrm{Alg}$, is applied to the sub-clusters, starting from the leaves of the cluster tree, and moving upwards towards the root of the tree. $\mathrm{Alg}$ is applied to the variable set of each sub-cluster independently to the other sub-clusters, assuming that being secluded and isolated by other irrelevant variables from other sub-clusters, is more probable to learn graphical models with reliable edges, i.e. with higher degree of certainty. In this paper we use the PC algorithm as $\mathrm{Alg}$. Even though the PC algorithm was chosen as the baseline algorithm, other constraint-based causal discovery methods \citep{tsamardinos2006max, rohekar2018bayesian, colombo2012learning} may be used and arguably improved, since this stage poses no assumptions or restrictions on the elements of any prospect method. After learning a graph for each sub-cluster, it is thereafter represented as a sub-graph. Further on, adjacent sub-graphs (those belonging to the same parent cluster) are backtracked in tandem upwards to their parent cluster, at which point they are merged as a single unified variable set. For that, edges are added between every node in one sub-cluster to every node in the other sub-clusters, and a list, $\mathcal{E}$, is formed from these added edges. That is, $\mathcal{E}$ lists the edges of bipartite graphs between every pair of sub-clusters. $\mathrm{Alg}$ is applied to the unified variable set and only the edges listed in $\mathcal{E}$ are tested for removal. That is, $\mathrm{Alg}$ does not consider new connections or removal of edges between any pair of variables within each sub-cluster. Ultimately, each sub-graph keeps its intra-cluster connectivity, presumably stemming from a more reliable variable set, and appends new inter-cluster connectivity, which were not taken into consideration at the former stage. Then, for preserving the completeness of $\mathrm{Alg}$, we apply $\mathrm{Alg}$ again to the unified variable set, this time considering all the remaining edges. The above process continues upwards the clusters tree and terminates after been applied at the root, at which point the final graph is formed from the entire variable set. Note that $\mathrm{Alg}$ is required to learn about the edges in list $\mathcal{E}$. For the case of $\mathrm{Alg}=$PC-algorithm, we set the conditional-independence test function to return a result only for edges in $\mathcal{E}$. For edges not in this list, the function simply returns the existence or absence of the edge in the current graph as ``dependent'' or ``independent'' respectfully. 
The main purpose of our approach is to improve the accuracy and efficiency of a given baseline algorithm by reducing the number of (unique) statistical tests, while maintaining the soundness and completeness of the baseline algorithm. Although we run the baseline algorithm on all the clusters in the backtracking phase, this does not undo the advantages of the clustering in terms of efficiency and accuracy. The reason is that each of the clusters effectively (a) contains only part of the nodes, so many conditional independence tests are avoided, (b) is sparser, since some edges were already removed and are not tested again, and (c) does not repeat conditional independence tests that were already applied in previous steps. So in essence, we only avoid applying unnecessary conditional independence tests, and consequently improve both speed and accuracy. An improvement in performance of our method over a baseline is expected when \asmref{asm:clust} is satisfied, and this improvement is maximal when the sizes of the clusters are equal. The more clusters are revealed (while preserving causal sufficiency), the greater the improvement. An additional virtue of our method is parallelism, which can be applied to the successive independent clustering during the top-down stage, as well as to the causal discovery in independent sub-clusters during the bottom-up stage. The method is illustrated in \figref{fig:method_illustration}(a), and presented as Algorithm \hyperlink{HCPCALG}{1}. \figref{fig:method_illustration}(b) exemplifies the top-down 2-way clustering of the ALARM dataset. \begin{thm}{ Let $\mathrm{Alg}$ be a causal discovery algorithm that takes as input an initial graph and a list of edges to be learned. If $\mathrm{Alg}$ is sound and complete, then Algorithm \hyperlink{HCPCALG}{1} is sound and complete. } \end{thm} \begin{proof} \vskip -0.15in The proof, which does not rely on \asmref{asm:clust} (it applies to arbitrary partitions), is given in Appendix A. \vskip -0.3in \end{proof} Completeness of our approach is achieved by calling the sound and complete algorithm $\mathrm{Alg}(\boldsymbol{X}, \mathrm{edges}(\mathcal{G}_{\boldsymbol{X}}))$ for refining the result $\mathcal{G}_{\boldsymbol{X}}$ of the merged cluster (Algorithm \hyperlink{HCPCALG}{1}, line 18). In this call, all the graph edges are considered for learning, allowing undetected independence relations within the clusters to be detected. \hypertarget{HCPCALG}{} \begin{figure}[tb] \includegraphics[width=0.483\textwidth]{Alg_HCPC.pdf} \vskip -0.25in \end{figure}
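To summarize the control flow of Algorithm \hyperlink{HCPCALG}{1}, the following Python-style skeleton is a hedged sketch of our reading of the two stages; the names \texttt{alg} and \texttt{cluster\_fn} are ours (the former stands for the baseline learner restricted to a list of edges, the latter for the spectral-clustering step returning labels and Laplacian eigenvalues), and this is not a transcription of the authors' pseudocode.

\begin{verbatim}
# Hedged sketch of the HCCD recursion (our naming, not Algorithm 1 itself).
# alg(graph, edges) runs the baseline (e.g. PC) while testing only `edges`;
# cluster_fn(nodes) -> (labels, laplacian_eigenvalues).
import itertools
import networkx as nx

def hccd(nodes, alg, cluster_fn, eps=1e-6):
    labels, eigvals = cluster_fn(nodes)
    if sum(v < eps for v in eigvals) <= 1:
        # Separability condition met: no further near-disjoint groups.
        g = nx.complete_graph(nodes)
        return alg(g, list(g.edges))
    clusters = {}
    for node, lab in zip(nodes, labels):
        clusters.setdefault(lab, []).append(node)
    subgraphs = [hccd(c, alg, cluster_fn, eps) for c in clusters.values()]
    merged = nx.compose_all(subgraphs)          # keep intra-cluster edges
    bridge = [(u, v) for ga, gb in itertools.combinations(subgraphs, 2)
              for u in ga for v in gb]          # the list E of bipartite edges
    merged.add_edges_from(bridge)
    merged = alg(merged, bridge)                # learn only the edges in E
    return alg(merged, list(merged.edges))      # completeness-preserving pass
\end{verbatim}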
\section{Experiments} First, we evaluate several aspects of the HCCD wrapper using synthetically generated data. The process we used for generating the data is detailed in Appendix B1. In addition, in Appendix B2 we examine the gain achieved by the recursion, and the effect that the completeness requirement has on the accuracy. Then, we evaluate qualitative measures of graphs learned using publicly available datasets. In all our experiments, $\mathrm{Alg}$ is PC, a sound and complete algorithm \citep{spirtes2000}. Although it relies on the causal sufficiency assumption, it is often used as a first step of causal discovery in the presence of latent confounders and selection bias \citep{spirtes2000, claassen2013learning,colombo2012learning}. \subsection{An Analysis using Synthetic Data} In this section we evaluate the performance of the HCCD with respect to the number of training samples and to the number of nodes in the graphs, compared to the baseline method (PC). For that, we measure five key aspects: three metrics of structural correctness (the SID score \citep{peters2015structural}, the structural Hamming distance (SHD), and causal accuracy), as well as the number of CI tests and the run-time of the method, including the clustering stage. \begin{figure}[ht!] \centering \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_causal_accuracy.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_shd.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_sid.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_number_ci_test.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_training_samples_exp_1_small_figs_run_time.pdf}} \vskip -0.05in \caption{Performance of the HCCD wrapper, relative to the baseline (PC), as a function of the number of training samples, for $100$ graph nodes. Values are averaged over $500$ DAGs and normalized by the PC score. (a) Causal accuracy (higher is better); (b) SHD (lower is better); (c) SID (lower is better); (d) Number of CI tests (lower is better); (e) Run-time (lower is better). The HCCD wrapper achieves improvements in all the metrics.} \vskip -0.15in \label{fig:scalability_to_training_set_100} \end{figure} \begin{figure}[h!] \centering \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_causal_accuracy.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_shd.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_sid.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_number_ci_test.pdf}} \subfigure[]{\includegraphics[width=0.22\textwidth]{scalability_to_number_of_nodes_exp_7_small_figs_run_time.pdf}} \vskip -0.07in \caption{Performance of the HCCD wrapper, relative to the baseline (PC), as a function of the number of graph nodes, for $500$ training samples. Values are averaged over $500$ DAGs and normalized by the PC score. (a) Causal accuracy (higher is better); (b) SHD (lower is better); (c) SID (lower is better); (d) Number of CI tests (lower is better); (e) Run-time (lower is better). The HCCD wrapper achieves improvements in all the metrics.} \vskip -0.23in \label{fig:scalability_to_number_of_nodes} \end{figure} \figref{fig:scalability_to_training_set_100} shows the performance of the HCCD wrapper with respect to the number of training samples, for graphs with $n=100$ nodes. The figures show mean $\pm$ std of $500$ independent tests (DAGs), and values are normalized by the PC score in order to visualize the improvement over the baseline method. Additional experiments, for $n\in\{20,50,200,1000\}$, are presented in Appendix C. It is evident that the HCCD wrapper is superior to the baseline for all three structural correctness metrics along the entire range of training set sizes and for every $n$. In addition, there is an evident saving in the number of CI tests, and importantly in run-time (including the clustering stage), along the entire range of training set sizes. One exception is the case of $n=20$, for which the HCCD run-time is higher.
This is expected, since for datasets with a small number of nodes the run-time overhead of the clustering stage outweighs the run-time saving gained by using fewer statistical tests. Nevertheless, for the common real-world case of datasets with many variables, the HCCD wrapper achieves a significant reduction in run-time. Additionally, the run-time reduction grows with the number of training samples, i.e., larger training sets benefit from a greater decrease in run-time. \figref{fig:scalability_to_number_of_nodes} shows the performance of the HCCD wrapper with respect to $n$, the number of nodes, for $500$ training samples. The figures show mean $\pm$ std of 500 independent tests (DAGs), and values are normalized by the PC score in order to analyze the improvement over the baseline method. It is evident that the HCCD wrapper is superior to the baseline for all three structural correctness metrics along the entire range of $n$. Moreover, a saving in the number of CI tests is evident, and importantly a reduction in run-time (which includes the clustering stage), over the entire range of $n$. \subsection{Real-World Data} In this section we evaluate and compare the accuracy of our method over 10 publicly available datasets from the bnlearn package \citep{marco2010bnlearn}, and 1 dataset from the Neuropathic Pain Diagnosis Simulator \citep{rubio2019pain}, all of which represent real decision support systems that cover a wide range of real-life applications, such as medicine, agriculture, weather forecasting, financial modeling, and animal breeding. Each of these datasets consists of 10 training sets of 500 samples each, and 10 corresponding separate test sets of 5000 samples each, for evaluating several qualitative measures of structural correctness. Thus, for each of the 11 datasets, the experiments were repeated 10 times. The number of domain variables across the datasets spans from tens to hundreds. \vskip 0.1in The first metric we measure is the BDeu score \citep{chickering1995learning}, which under certain assumptions corresponds to the posterior probability of the learned graph. \citet{tsamardinos2006max} noted that this score does not rely on the true graph and may not be related to it, as it is not known in practice to what extent its underlying assumptions hold (e.g., Dirichlet distribution of the hyper-parameters). Nevertheless, since this score does not require knowing the true graph, it is of great value in practical situations. Moreover, this score is often used to tune the baseline parameters \citep{yehezkel2009rai}. \figref{fig:scatter_plot_score} shows a scatter plot of normalized BDeu score, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets, each consisting of 10 different training and test sets (a total of 100 points). The BDeu scores are normalized by the PC BDeu score, and so a lower normalized score is better. In 97\% of the cases, HCCD is better than PC. In 82\% of the cases, TSCB is better than PC. Lastly, in 90\% of the cases, HCCD is better than TSCB. As evident from the figure, HCCD is superior to the other methods. Additionally, for each complete dataset, the mean $\pm$ std BDeu score (unnormalized) is presented in \tabref{table_BDeu_Scores}, and better results for the HCCD are observed on all the datasets. \vskip 0.01in \begin{figure}[h!]
\centering \includegraphics[width=0.48\textwidth]{experiments_scatter_plot_score_w_colors_n5.pdf} \vskip -0.05in \caption{Scatter plot of normalized BDeu score, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets from \tabref{table_BDeu_Scores}, each consisting of 10 different training and test sets. The scores are normalized by the PC BDeu score, and so lower is better. Points below the green dashed line correspond to better results of HCCD compared to PC, which are 97\% of the cases. Points to the left of the red dashed line correspond to better results of TSCB compared to PC, which are 82\% of the cases. Points below the blue dashed line correspond to better results of HCCD compared to TSCB, which are 90\% of the cases.} \label{fig:scatter_plot_score} \end{figure} \begin{table}[h!] \caption{BDeu Scores (higher is better) of PC, TSCB, and HCCD on various datasets.} \vskip -0.2in \label{table_BDeu_Scores} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule Data set & PC & TSCB & HCCD \\ \midrule Alarm & -60290 $\pm$ 2750 & -57920 $\pm$ 1280 & \textbf{-55852}$\pm$ 1703 \\ Child & -67309 $\pm$ 1059 & -66554 $\pm$ 765 & \textbf{-64539}$\pm$ 290 \\ Insurance & -74690 $\pm$ 1543 & -73848 $\pm$ 1315 & \textbf{-73469}$\pm$ 1038 \\ Mildew & -293679 $\pm$ 11072 & -290456 $\pm$ 14157 & \textbf{-267266}$\pm$ 3858 \\ Hailfinder & -301499 $\pm$ 2309 & -292020 $\pm$ 3365 & \textbf{-290200}$\pm$ 3503 \\ Barley & -358461 $\pm$ 3592 & -352608 $\pm$ 6365 & \textbf{-350807}$\pm$ 3419 \\ Munin & -451571 $\pm$ 2686 & -434700 $\pm$ 4394 & \textbf{-401007}$\pm$ 3543 \\ WIN95PTS & -64439 $\pm$ 768 & -62990 $\pm$ 947 & \textbf{-60807}$\pm$ 876 \\ PathFinder & -274163 $\pm$ 2334 & -262620 $\pm$ 5004 & \textbf{-248600}$\pm$ 2822 \\ Hepar2 & -168822 $\pm$ 582 & -168528 $\pm$ 549 & \textbf{-167489}$\pm$ 533 \\ NeuroPain & -185340 $\pm$ 678 & -182572 $\pm$ 1265 & \textbf{-181670}$\pm$ 538 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \vskip -0.3in \end{table} We also calculate causal accuracy \citep{claassen2012bayesian} as an evaluation metric for causal discovery. \figref{fig:scatter_plot_acc} shows a scatter plot of causal accuracy, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets, each consisting of 10 different training sets (a total of 100 points). The causal accuracies are normalized by the PC causal accuracy, and so higher is better. In 91\% of the cases, HCCD is better than PC. In 51\% of the cases, TSCB is better than PC. Lastly, in 93\% of the cases, HCCD is better than TSCB. As evident from the figure, HCCD is superior to the other methods. Additionally, for each complete dataset, the mean $\pm$ std causal accuracy is presented in \tabref{tab:shd_causal_acc}, and better results for the HCCD are observed on all the datasets, demonstrating an improved ability to recover the ground-truth causal graph. \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{experiments_scatter_plot_acc_w_colors_n5.pdf} \vskip -0.05in \caption{Scatter plot of normalized causal accuracy, comparing HCCD, TSCB, and PC, evaluated on the 11 datasets from \tabref{tab:shd_causal_acc}, each consisting of 10 different training sets. The values are normalized by the PC value, and higher is better. Points above the green dashed line correspond to better results of HCCD compared to PC, which are 91\% of the cases. Points to the right of the red dashed line correspond to better results of TSCB compared to PC, which are 51\% of the cases.
Lastly, points above the blue dashed line correspond to better results of HCCD compared to TSCB, which are 93\% of the cases.} \label{fig:scatter_plot_acc} \vskip -0.2in \end{figure} \begin{table}[h!] \caption{SHD (lower is better) and causal accuracy (higher is better) comparison for various datasets.} \vskip -0.2in \label{tab:shd_causal_acc} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule & \multicolumn{3}{c}{Structural Hamming distance} \\ Data set & PC & TSCB & HCCD \\ \midrule Alarm & 30.50$\pm$ 3.14 & 39.6$\pm$ 4.55 & \textbf{28.80}$\pm$ 3.55 \\ Child & 18.50$\pm$ 1.27 & 19.4$\pm$ 1.35 & \textbf{18.0}$\pm$ 1.25 \\ Insurance & 42.90$\pm$ 2.62 & 46.40$\pm$ 4.27 & \textbf{42.20}$\pm$ 2.84 \\ Mildew & 46.80$\pm$ 1.62 & 46.80$\pm$ 2.04 & \textbf{45.70}$\pm$ 0.95 \\ Hailfinder & 80.30$\pm$ 2.63 & 86.20$\pm$ 2.49 & \textbf{80.10}$\pm$ 2.03 \\ Barley & 83.90$\pm$ 0.74 & 83.90$\pm$ 0.99 & \textbf{81.60}$\pm$ 1.26 \\ Munin & 283$\pm$ 1.06 & 286.10$\pm$ 1.37 & \textbf{279.60}$\pm$ 2.60 \\ WIN95PTS & 99.30$\pm$ 4.30 & 103.20$\pm$ 7.28 & \textbf{97.80}$\pm$ 7.06 \\ PathFinder & \textbf{193.10}$\pm$ 0.94 & 199.30$\pm$ 2.21 & 195.20$\pm$ 2.56 \\ Hepar2 & 115.70$\pm$ 2.21 & 117.80$\pm$ 0.92 & \textbf{114.20}$\pm$ 2.53 \\ NeuroPain & 796.70$\pm$ 13.71 & 804$\pm$ 23.04 & \textbf{791}$\pm$ 8.24 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \begin{center} \begin{tiny} \begin{sc} \begin{tabular}{lccc} \toprule & \multicolumn{3}{c}{Causal accuracy}\\ Data set & PC & TSCB & HCCD\\ \midrule Alarm & 0.700 $\pm$ 0.039 & 0.608 $\pm$ 0.044 & \textbf{0.727}$\pm$ 0.036 \\ Child & 0.440 $\pm$ 0.067 & 0.448 $\pm$ 0.064 & \textbf{0.597}$\pm$ 0.044 \\ Insurance & 0.476 $\pm$ 0.038 & 0.446 $\pm$ 0.044 & \textbf{0.485}$\pm$ 0.038 \\ Mildew & 0.143 $\pm$ 0.029 & 0.126 $\pm$ 0.022 & \textbf{0.235}$\pm$ 0.019 \\ Hailfinder & 0.056 $\pm$ 0.012 & 0.060 $\pm$ 0.010 & \textbf{0.082}$\pm$ 0.010 \\ Barley & 0.180 $\pm$ 0.027 & 0.203 $\pm$ 0.022 & \textbf{0.233}$\pm$ 0.017 \\ Munin & 0.051 $\pm$ 0.002 & 0.061 $\pm$ 0.005 & \textbf{0.121}$\pm$ 0.011 \\ WIN95PTS & 0.320 $\pm$ 0.030 & 0.327 $\pm$ 0.026 & \textbf{0.441}$\pm$ 0.015 \\ PathFinder & 0.066 $\pm$ 0.003 & 0.064 $\pm$ 0.011 & \textbf{0.088}$\pm$ 0.008 \\ Hepar2 & 0.132 $\pm$ 0.024 & 0.139 $\pm$ 0.019 & \textbf{0.181}$\pm$ 0.017 \\ NeuroPain & 0.037 $\pm$ 0.005 & 0.042 $\pm$ 0.003 & \textbf{0.057}$\pm$ 0.004 \\ \bottomrule \end{tabular} \end{sc} \end{tiny} \end{center} \vskip -0.3in \end{table} \vskip 0.2in In addition, we measure the structural Hamming distance (SHD) between the learned graph and the ground-truth graph. SHD counts the number of edge insertions, deletions, or flips required to transform one graph into another. For each of the 11 datasets, the mean $\pm$ std SHD is presented in \tabref{tab:shd_causal_acc}. For all the datasets except one, HCCD is better than the other methods. \section{Conclusions} We propose the HCCD wrapper for causal discovery algorithms (baseline algorithms). HCCD preserves the soundness and completeness of the baseline algorithm, while reducing the number of statistical tests, increasing the accuracy of the resulting graph, and reducing the run-time. For constraint-based baseline algorithms, it is assumed that each pair of variables, not adjacent in the true underlying graph, is more strongly correlated with at least one of its separating sets than with other variables not in any of its separating sets.
Therefore, this property of relative strength of correlation is used by our method to hierarchically partition the domain variables, minimizing the number of independence relations that are not detectable from the cluster variables alone. Using synthetically generated graphs and data, and selectively disabling certain aspects of HCCD, we demonstrated that the recursion and the completeness requirement greatly improve the efficiency of the learning procedure and the accuracy of the resulting causal graph. Applying our method to real-world graphs and common publicly available datasets, we demonstrated that HCCD learns significantly more accurate graphs than the PC baseline algorithm. Finally, we conjecture that scoring-based algorithms may benefit from the HCCD wrapper as well, by defining corresponding ``similarity'' measures. The search strategy would then be applied to smaller search spaces, independently and in parallel. We suspect that this will help avoid local maxima and find higher-scoring solutions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent years, Reinforcement Learning (RL) has come into fashion. General methods in ordinary reinforcement learning with Markov decision processes use a state-action value function [1]. Agents produced by these algorithms follow strategies that maximize the expected value of the cumulative reward. In practical use, however, there are many situations where it is necessary to consider not only expected values but also risks. Therefore, Distributional Reinforcement Learning (DRL), which considers the distribution of cumulative rewards, has also been studied. DRL research includes a particle-based, risk-sensitive algorithm [2]. Similar research includes [3][4], which are mathematically equivalent to [2] but use a different algorithm, and parametric methods [5]. [4] discusses the convergence of measures in discrete steps. Another way to practice DRL is the Bayesian approach. In [22], the Bayesian approach is regarded as estimating the uncertainty of the expected value; but in fact, Bayesian inference can approximate the distribution of an uncertain quantity, and can therefore perform distributional reinforcement learning. There are other existing papers on Bayesian reinforcement learning; here we take up [6][7], a method using Gaussian processes, in which the rewards can be said to follow Gaussian distributions. [5] also supports unbounded rewards such as Gaussian distributions. We want to show that the approximation of the cumulative reward distribution converges even for unbounded rewards. In this paper, as a preliminary step, we prove the convergence of the ordinary state-action value function. In addition, we prove convergence for Q-functions on a continuous domain, with Deep Q-learning (DQN) in mind. \subsection{Related works} The history of convergence proofs for the Q-function is long. For example, there are papers such as [8], [9], [10], and [11] using [10]. An unusual proof method, via ordinary differential equations, is given in [12]. For DQN, there is a study [13] summarizing the approximation error; the approximation error due to the neural network is verified there. Other research results include [14][15][16][17][18]. All of these studies assume that rewards are bounded. That is, there is a constant $R_{max}<\infty$ such that \begin{align} |r(s,a)|\leq R_{max}\quad a.e. \end{align} Therefore, Gaussian distributions cannot be assumed. In this paper, with the normal distribution in mind, we prove the convergence of the Q-function under the condition \begin{align} \forall (s,a)\in S\times A,\quad E[r(s,a)^2]<\infty, \end{align} which is more relaxed than (1.1). Finally, we prove the convergence of the Q-function on a continuous domain under ideal conditions; this is a frequent setting in reinforcement learning. \section{Background} \subsection{Transition kernel} Let $(S,\mathcal{S})$ and $(T,\mathcal{T})$ be measurable spaces. A transition kernel $k:S\times \mathcal{T}\to\mathbb{R}_+$ is a map satisfying the following two conditions. \begin{align} &\forall B\in\mathcal{T},\ k(\cdot,B)\ \text{is measurable on}\ S,\\ &\forall s\in S,\ k(s,\cdot)\ \text{is a measure on}\ T. \end{align} This is used in situations where fixing $s$ determines a distribution on $T$. \subsection{Markov decision process} Assume that both the set of states $S$ and the set of actions $A$ are finite sets. A transition kernel $p$ is defined on $(S\times A,2^{S\times A})$ and $(S\times \mathbb{R},2^S\otimes\mathcal{B}(\mathbb{R}))$.
That is, $p(r,s'|s,a)$ is a probability measure that governs the distribution of the next state $s'\in S$ and the immediate reward $r\in\mathbb{R}$ when an action $a\in A$ is taken in state $s\in S$. A policy $\pi:S\to\mathcal{P}(A)$ gives, as the definition shows, the action probabilities determined from the current state. A policy is deterministic if for every $s$ there is an $a$ such that $\pi(a|s)=1$. A family of random variables $s_t,a_t,r_t$ taking values in $S,A,\mathbb{R}$ is written as $(s_t,a_t,r_t)^\infty_{t=0}$. This stochastic process is called a Markov decision process (MDP). \subsection{Optimal policies and state-action value functions} Write $\Pi$ for the set of all policies. The state-action value function $Q^\pi:S\times A\to\mathbb{R}$ for a policy $\pi$ is defined as follows. \begin{align} Q^\pi(s,a):=E\Big[\sum^\infty_{t=0}\gamma^t r_t\;\Big|\;s_0=s,\ a_0=a,\ (r_t,s_{t+1})\sim p(\cdot,\cdot|s_t,a_t),\ a_t\sim\pi(\cdot|s_t)\ (t\ge1)\Big] \end{align} Furthermore, the state value function $V^\pi(s)$ is defined as follows. \begin{align} V^\pi(s):=\sum_{a\in A} \pi(a|s)Q^\pi(s,a) \end{align} Define an optimal policy $\pi^*$ as \begin{align} \pi^*:=\mathop{\mathrm{argmax}}_{\pi\in\Pi} V^\pi(s_0). \end{align} The state-action value function $Q^{\pi^*}$ for an optimal policy is called the optimal state-action value function, and is simply written $Q^*$. Acting greedily with respect to the optimal state-action value function is an optimal policy: \begin{align} \pi^*(a|s)=\begin{cases} 1 & a=\mathop{\mathrm{argmax}}_{b\in A} Q^*(s,b)\\ 0 & \text{else} \end{cases} \end{align} for all $s,a$. \section{Update of the state-action value function and the Robbins-Monro condition} The Q-function is updated as follows. \begin{align} Q_{t+1}(s,a)=(1-\alpha(s,a,s_t,a_t,t))Q_t(s,a)+\alpha(s,a,s_t,a_t,t)[r_t(s_t,a_t)+\gamma\max_{b\in A}Q_t(s_{t+1},b)] \end{align} A sequence $\{c_t\}^\infty_{t=0}$ satisfies the Robbins-Monro condition if \begin{align} &\forall t,\ c_t\in[0,1],\\ &\sum^\infty_{t=0} c_t=\infty,\\ &\sum^\infty_{t=0}c^2_t<\infty. \end{align} Using such a sequence, the map $\alpha: S \times A \times S \times A \times \mathbb{N} \to [0,1]$ is defined as follows. \begin{align} \alpha(s,a,s_t,a_t,t)=\begin{cases} c_t & s_t=s,\ a_t=a\\ 0 & \text{else} \end{cases} \end{align} In addition, it is assumed that $\alpha$ satisfies the Robbins-Monro condition with probability one, for arbitrary $s,a$: \begin{align} \sum^\infty_{t=0} \alpha(s,a,s_t,a_t,t)=\infty\ a.e.\\ \sum^\infty_{t=0}\alpha(s,a,s_t,a_t,t)^2<\infty\ a.e. \end{align}
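The following is a minimal tabular sketch of this update rule (ours, not the paper's); the environment interface \texttt{step(s, a)}, returning a sampled reward and next state, is our assumption, and the per-pair schedule $c_t = 1/(\text{visit count})$ satisfies the Robbins-Monro condition.

\begin{verbatim}
# Minimal sketch of the tabular Q-learning update above (not from the
# paper). step(s, a) is an assumed sampler of (reward, next_state) ~ p.
import numpy as np

def q_learning(step, n_states, n_actions, gamma, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(n_steps):
        a = int(rng.integers(n_actions))   # exploratory behaviour policy
        r, s_next = step(s, a)
        visits[s, a] += 1
        c = 1.0 / visits[s, a]             # sum c_t = inf, sum c_t^2 < inf
        # alpha equals c_t at the visited pair (s_t, a_t) and 0 elsewhere,
        # so only the visited entry of Q is updated.
        Q[s, a] += c * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
    return Q
\end{verbatim}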
\section{Proof of Q-function convergence for unbounded rewards} Consider real-valued functions $w_t(x)$ on a finite set $\mathcal{X}$. \begin{theorem} (Convergence of the Q-function in the case of Gaussian-type rewards.) Let $\mathcal{X}:=S\times A$, a finite set, and let $r_t(x)$ denote the random rewards. Let $W$ be the set of functions $f:\mathcal{X}\to\mathbb{R}$, with $||f||_W:=\max_{x\in\mathcal{X}}|f(x)|$. Assume that $E[r^2(s,a)]<\infty$ for all $s,a$, and let $C_R$ be a constant such that $E[\sup_x|r_t(x)|^2]\leq C_R$. Then \begin{align} ||Q_t-Q^*||_W \to 0. \end{align} Proof. We argue along the lines of the proof in [9]; since the condition on $F$ is relaxed and the statement is stronger, the argument must be carried out more precisely. Consider the stochastic process $\Delta_t(x):=Q_t(x)-Q^*(x)$. Since $Q^*(x)$ is a constant, $V(\Delta_t(x))=V(Q_t(x))$. Putting $F_t(x):=r_t(x)+\gamma\sup_b Q_t(X(s,a),b)-Q^*(x)$, this is an $\mathcal{F}_{t+1}$-measurable stochastic process. Furthermore, if we put $G_t(x):=r_t(x)+\gamma\sup_b Q_t(X(s,a),b)$, then by definition $G_t-E[G_t(x)|\mathcal{F}_t]=F_t-E[F_t(x)|\mathcal{F}_t]$. Take two stochastic processes $\delta_t,w_t\in W$ with $\Delta_0(x)=\delta_0(x)+w_0(x)$, and define their time evolution by \begin{align} &\delta_{t+1}(x)=(1-a_t(x))\delta_t(x)+a_t(x)E[F_t(x)|\mathcal{F}_t],\\ & w_{t+1}(x)=(1-a_t(x))w_t(x)+a_t(x)p_t(x), \end{align} where $p_t(x):=F_t(x)-E[F_t(x)|\mathcal{F}_t]$. Then $\Delta_t(x)=w_t(x)+\delta_t(x)$. First, we show that $w_t$ converges to 0 on $\mathcal{X}$ with probability 1, using Lemma 2. By definition $E[p_t|\mathcal{F}_t]=0$, so $\sum_t|E[p_t|\mathcal{F}_t]|=0$ holds. From Lemma 1 and the definitions of $p_t,G_t$, the bound $E[p_t^2]\leq4E[G_t^2]$ holds. Putting $L_t(\omega):=\sup_x|Q_t(x)|$, this random variable is $\mathcal{F}_t$-measurable and takes a finite value with probability 1. Since $L_0$ is finite, a constant $K_0$ can be taken so that $E[L_0^2]\leq K^2_0C_R$ holds. The following holds with probability 1. \begin{align} L_{t+1}\leq \max(L_t,(1-b_t)L_t+b_t(\sup_x|r_t(x)|+\gamma L_t)) \end{align} Using the above formula, \begin{align} E[L^2_{t+1}]&\leq \max(E[L^2_t],E[((1-b_t)L_t+b_t(\sup_x|r_t(x)|+\gamma L_t))^2]). \end{align} Suppose there is a $K_t\in\mathbb{R}$ such that $E[L_t^2]\leq K^2_tC_R$, and put $H_t:=\sup_x|r_t(x)|+\gamma L_t$. Then \begin{align} E[H_t^2]&=E[\sup_x|r_t(x)|^2]+2\gamma E[\sup_x|r_t(x)|L_t]+\gamma^2E[L_t^2]\\ &\leq C_R+2\gamma\sqrt{C_RK_t^2C_R}+\gamma^2K^2_tC_R\\ &=(1+\gamma K_t)^2C_R. \end{align} Then, \begin{align} E[((1-b_t)L_t+b_tH_t)^2]&\leq (1-b_t)^2E[L_t^2]+2(1-b_t)b_t\sqrt{E[L_t^2]E[H_t^2]}+b_t^2E[H_t^2]\\ &\leq (1-b_t)^2K_t^2C_R+2(1-b_t)b_tK_t(1+\gamma K_t)C_R+b_t^2(1+\gamma K_t)^2C_R\\ &=((1-b_t)K_t+b_t(1+\gamma K_t))^2C_R\\ &=(K_t+b_t(1-(1-\gamma)K_t))^2 C_R. \end{align} Putting $K_{t+1}=\max(K_t,K_t+b_t(1-(1-\gamma)K_t))$, we obtain $E[L^2_{t+1}]\leq K^2_{t+1}C_R$. Since $K_0\in\mathbb{R}$ exists, $K_t\in\mathbb{R}$ exists for every $t$, and $E[L_t^2]\leq K_t^2C_R$. It is clear from the recursion that $K_{t+1}=K_t$ when $K_t>\frac{1}{1-\gamma}$, and that if $K_t\leq\frac{1}{1-\gamma}$, then $K_{t+1}\leq\frac{1}{1-\gamma}+1$. Therefore, $K_t$ not only exists for every $t$, but also satisfies $K_t\leq K^*:=\max(K_0,\frac{1}{1-\gamma}+1)$. Since $|G_t(x)|\leq|r_t(x)|+\gamma L_t$, the following holds for all $x$. \begin{align} \frac{1}{4}E[p_t^2(x)]&\leq E[G_t^2(x)]\\ &\leq E[r_t(x)^2]+2\gamma\sqrt{E[r_t(x)^2]E[L_t^2]}+\gamma^2E[L_t^2]\\ &\leq (1+\gamma K^*)^2C_R. \end{align} Then, with $M:=\sum_tb_t^2<\infty$, \begin{align} \sum_tE[a_t^2p_t^2]&\leq \sum_t 4b^2_t (1+\gamma K^*)^2C_R\\ &\leq4 M (1+\gamma K^*)^2C_R<\infty \end{align} holds for all $x$. To apply Lemma 2, put \begin{align} &U_t:=a_t(x)p_t(x),\\ &T(w_t,\omega):=(1-a_t(x))w_t. \end{align} Then $\sum_t E[U_t^2]<\infty$. Since $E[U_t|\mathcal{F}_t]=0$, $\sum_t|E[U_t|\mathcal{F}_t]|=0$ holds. Then, for any $\epsilon>0$, set $\alpha=\epsilon$, $\beta_t(x)=b^2_t(x)$ and $\gamma_t(x)=\epsilon(2a_t(x)-a_t^2(x))$; then \begin{align} &T^2(w_t,\omega)\leq \max(\alpha,(1+\beta_t)w^2_t-\gamma_t),\\ &\sum_t\gamma_t =\infty\ a.e. \end{align} holds.
The latter follows from the Robbins-Monro conditions. Therefore, $w_t(x)\to0$ holds for every $x$. Define the operator $\mathcal{T}:W\to W$ as follows: for $q\in W$, \begin{align} \mathcal{T}q(s,a)&=\int_\mathbb{R}\sum_{s'} [r(s,a)+\gamma \sup_b q(s',b) ]p(dr,s'|s,a)\\ &=E[r(s,a)+\gamma\sup_bq(X(s,a),b)]. \end{align} $Q^*$ is a fixed point of this operator. For any $q_1,q_2\in W$, \begin{align} ||\mathcal{T}q_1-\mathcal{T}q_2||_W&=\sup_{s,a}\Big|\int_\mathbb{R}\sum_{s'} [r(s,a)+\gamma \sup_b q_1(s',b) ]p(dr,s'|s,a)-\int_\mathbb{R}\sum_{s'} [r(s,a)+\gamma \sup_b q_2(s',b) ]p(dr,s'|s,a)\Big|\\ &\leq\int_\mathbb{R}\sum_{s'} \gamma \big|\sup_b q_1(s',b)-\sup_bq_2(s',b)\big| p(dr,s'|s,a)\\ &\leq \int_\mathbb{R}\sum_{s'} \gamma \sup_b |q_1(s',b)-q_2(s',b)| p(dr,s'|s,a)\\ &=\gamma ||q_1-q_2||_W. \end{align} Thus $\mathcal{T}$ is a contraction operator. Moreover, \begin{align} |E[F_t(x)|\mathcal{F}_t]|&=\Big|\int_\mathbb{R}\sum_{s'} [r(s,a)+\gamma \sup_b Q_t(s',b) -Q^*(x)]p(dr,s'|s,a)\Big|\\ &=|\mathcal{T}Q_t(x)-Q^*(x)|\\ &=|\mathcal{T}Q_t(x)-\mathcal{T}Q^*(x)|\\ &\leq \gamma ||\Delta_t||_W. \end{align} Then, \begin{align} ||\delta_{t+1}||&\leq (1-a_t(x))||\delta_t||+a_t(x)\gamma||\delta_t+w_t||\\ &\leq(1-a_t(x))||\delta_t||+a_t(x)\gamma(||\delta_t||+||w_t||). \end{align} As shown above, $w_t(x)$ converges to 0 for every $x$ with probability 1, i.e. $||w_t||_W\to0$. Therefore, from Lemma 3, $||\delta_t||_W\to0$. That is, $||\Delta_t||_W\to0$, which is the assertion of the theorem. \end{theorem} \section{A theorem for SARSA} The method of Section 3 is called Q-learning, and it updates the value before the next action is taken. In contrast, SARSA updates the value after taking the next action: \begin{align} Q_{t+1}(s,a)=(1-\alpha(s,a,s_t,a_t,t))Q_t(s,a)+\alpha(s,a,s_t,a_t,t)(r_t(s,a)+\gamma Q_t(s_{t+1},a_{t+1})). \end{align} Here $a_{t+1}$ is often determined stochastically, e.g. by a softmax function. \begin{theorem} Suppose that the Q-function is updated by the above SARSA rule. Then \begin{align} ||Q_t-Q^*||_W\to0\quad \text{as}\ t\to\infty. \end{align} Proof. Put $L'_t:=\max_{x,y\in\mathcal{X}}|Q_t(x)-Q_t(y)|$. It is clear from the definition that $L'_t\leq 2L_t$. The rest follows the proof of Theorem 1. \end{theorem} \section{Convergence proof for unbounded rewards on a continuous domain} In a situation such as DQN, an update for one pair $(s,a)$ also affects other state-action pairs. As a simple model taking such situations into account, we introduce a ripple function $f(x_1,x_2)$ defined on the compact set $\mathcal{X}^2$, satisfying the following conditions. \begin{align} &f(x,x)=1,\\ &f(x,y)\ \text{is continuous.} \end{align} If $Q^*$ is a continuous function, this allows one to start from any continuous function and obtain the same convergence on the compact set. \begin{theorem} Let $\mathcal{X}\subset\mathbb{R}^d$ be a simply connected compact set, let $Q^*,Q_0$ be continuous functions on $\mathcal{X}$, and let $W$ be the set of continuous functions on $\mathcal{X}$ with $||f||_W:=\max_{x\in\mathcal{X}}|f(x)|$. Update \begin{align} Q_{t+1}(s,a)=(1-f(s,a,s_t,a_t)\alpha(s,a,s_t,a_t,t))Q_t(s,a)+f(s,a,s_t,a_t)\alpha(s,a,s_t,a_t,t)(r_t(s,a)+\gamma\max_{b\in A}Q_t(s_{t+1},b)). \end{align} Then $||Q_t-Q^*||_W\to0$. Proof. Consider a finite subset $K_N:=\{x_1,x_2,x_3,\ldots,x_N\}$ of $\mathcal{X}$.
By Theorem 1, the restriction of $Q_t$ to $K_N$ converges uniformly on $K_N$ to the correct function. Since $Q^*$ is continuous, and a continuous function is uniquely determined by its values on a dense subset, choosing the sets $K_N$ so that $\bigcup_N K_N$ is dense in $\mathcal{X}$ yields the convergence on all of $\mathcal{X}$. \end{theorem} \section{Conclusion and Future Work} As mentioned earlier, we want to prove the convergence of the distribution itself. An order estimate for the expected value should also be performed. We also want to estimate the convergence order for a specific neural network as in [13]. According to [13], in the setting of Theorem 3 on a continuous domain, with $R_{max}:=\sup r(\omega,s,a)$ and constants $C_1,C_2,\xi,\alpha$, \begin{align} ||Q^*-Q_n||_W\leq C_1\cdot (\log n)^\xi n^{-\alpha}+C_2R_{max} \end{align} holds. However, when $r$ follows a normal distribution, $R_{max}=\infty$, so this upper bound on the error is infinite and the expression carries no information. For unbounded rewards, proofs of stronger inequalities are needed.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In topology there is the famous Lefschetz-Hopf trace formula, which roughly says that if $f:X\to X$ is an endomorphism of a compact connected oriented space $X$ with isolated fixed points, then the number of fixed points of $f,$ counted with multiplicity, is equal to the alternating sum of the traces of $f^*$ on the singular cohomology groups $H^i(X,\mathbb Q).$ There is also a trace formula in algebraic geometry, for schemes over finite fields, due to Grothendieck. It says that if $X_0$ is a scheme over $\mathbb F_q,$ separated and of finite type, and $F_q$ is the $q$-geometric Frobenius map, then $$ \#X_0(\mathbb F_q)=\sum_{i=0}^{2\dim X_0}(-1)^i\text{Tr}(F_q, H^i_c(X,\overline{\mathbb Q}_{\ell})), $$ where $H^i_c(X,\overline{\mathbb Q}_{\ell})$ is the $\ell$-adic cohomology with compact support. In fact, he proved the trace formula for an arbitrary constructible sheaf. See \cite{Gro, Ver, SGA4.5}. Behrend conjectured the trace formula for smooth algebraic stacks over $\mathbb F_q$ in his thesis and \cite{Beh1}, and proved it in \cite{Beh2}. However, he used ordinary cohomology and arithmetic Frobenius (rather than compact support cohomology and geometric Frobenius) to prove the ``dual statement'', probably because at that time the theory of dualizing complexes of algebraic stacks, as well as that of compact support cohomology groups of stacks, was not yet developed. Later, Laszlo and Olsson developed the theory of the six operations for algebraic stacks \cite{LO1, LO2}, which makes it possible to reprove the trace formula and to remove the smoothness assumption in Behrend's result. Also, we will work with a fixed isomorphism of fields $\iota:\overline{\mathbb Q}_{\ell}\overset{\sim}{\to} \mathbb C,$ namely we will work with \textit{$\iota$-mixed} complexes, rather than \textit{mixed} ones, and this is a more general setting (see (\ref{Laff})). Once we have the trace formula, we get a factorization of the zeta function into a possibly infinite product of $L$-factors, and from this one can deduce the meromorphic continuation of the zeta functions, generalizing a result of Behrend (\cite{Beh1}, 3.2.4). Furthermore, to locate the zeros and poles of the zeta functions, we give a result on the weights of cohomology groups of stacks. We briefly mention the technical issues. As pointed out in \cite{Beh2}, a big difference between schemes and stacks is the following. If $f:X_0\to Y_0$ is a morphism of $\mathbb F_q$-schemes of finite type, and $K_0\in D_c^b(X_0,\overline{\mathbb Q}_{\ell}),$ then $f_*K_0$ and $f_!K_0$ are also bounded complexes. Since often we are mainly interested in the simplest case when $K_0$ is a sheaf concentrated in degree 0 (for instance, the constant sheaf $\overline{\mathbb Q}_{\ell}),$ and $D^b_c$ is stable under $f_*$ and $f_!,$ it is enough to consider $D^b_c$ only. But for a morphism $f:\mathscr X_0\to\mathscr Y_0$ of $\mathbb F_q$-algebraic stacks of finite type, $f_*$ and $f_!$ do not necessarily preserve boundedness. For instance, the cohomology ring $H^*(B\mathbb G_m,\overline{\mathbb Q}_{\ell})$ is the polynomial ring $\overline{\mathbb Q}_{\ell}[T]$ with $\deg(T)=2.$ So for stacks we have to consider unbounded complexes, even if we are only interested in the constant sheaf $\overline{\mathbb Q}_{\ell}.$ In order to define the trace of the Frobenius on cohomology groups, we need to consider the convergence of the complex series of traces. This leads to the notion of an $\iota$-convergent complex of sheaves (see (\ref{D4.1})).
Another issue is the following. In the scheme case one considers bounded complexes, and for any bounded complex $K_0$ on a scheme $X_0,$ there exists a stratification of $X_0$ that ``trivializes the complex $K_0$'' (i.e. the restrictions of all cohomology sheaves $\mathscr H^iK_0$ to each stratum are lisse). But in the stack case we have to consider unbounded complexes, and in general there might be no stratification of the stack that trivializes every cohomology sheaf. This leads to the notion of a stratifiable complex of sheaves (see (\ref{D3.1})). We need the stratifiability condition to control the dimensions of cohomology groups (see (\ref{L3.11})). All bounded complexes are stratifiable (\ref{L3.3}v). Also we will have to impose the condition of $\iota$-mixedness, due to unboundedness. For bounded complexes on schemes, the trace formula can be proved without using this assumption, although the conjecture of Deligne (\cite{Del2}, 1.2.9) that all sheaves are $\iota$-mixed is proved by Laurent Lafforgue. See the remark (\ref{Laff}). We briefly introduce the main results of this paper. \begin{flushleft} \textbf{Fixed point formula.} \end{flushleft} \begin{theorem}\label{T1.1} Let $\mathscr X_0$ be an Artin stack of finite type over $\mathbb F_q,$ and let $[\mathscr X_0(\mathbb F_q)]$ be the set of isomorphism classes of the groupoid of $\mathbb F_q$-points of $\mathscr X_0.$ Then the series $$ \sum_{n\in\mathbb Z}(-1)^n\emph{Tr}(F_q,H^n_c(\mathscr X,\overline{\mathbb Q}_{\ell})), $$ regarded as a complex series via $\iota,$ is absolutely convergent, and its limit is ``the number of $\mathbb F_q$-points of $\mathscr X_0$'', namely $$ \#\mathscr X_0(\mathbb F_q):=\sum_{x\in[\mathscr X_0(\mathbb F_q)]}\frac{1}{\#\emph{Aut}_x(\mathbb F_q)}. $$ \end{theorem} Here $F_q$ denotes the $q$-geometric Frobenius. To generalize, one wants to impose some condition (P) on complexes $K_0\in D_c^-(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ such that (1) (P) is preserved by $f_!;$ (2) if a complex $K_0$ satisfies (P), then the ``naive local terms'' are well-defined, and (3) the trace formula holds in this case. The condition (P) on $K_0$ turns out to be a combination of three parts: $\iota$-convergence (which implies (2) for $K_0$), $\iota$-mixedness and stratifiability (which, together with the first part, implies (2) for $f_!K_0$). See (\ref{T4.3}) for the general statement of the theorem. These conditions are due to Behrend \cite{Beh2}. \vskip.5truecm \begin{flushleft} \textbf{Meromorphic continuation.} \end{flushleft} The rationality part of the Weil conjectures was first proved by Dwork: the zeta function $Z(X_0,t)$ of every variety $X_0$ over $\mathbb F_q$ is a rational function in $t.$ Later, this was reproved using the fixed point formula \cite{Gro, SGA5}. Following (\cite{Beh1}, 3.2.3), we define the zeta functions of stacks as follows. \begin{definition}\label{D1.2} For an $\mathbb F_q$-algebraic stack $\mathscr X_0$ of finite type, define the \emph{zeta function} $$ Z(\mathscr X_0,t)=\exp\Big(\sum_{v\ge1}\frac{t^v}{v}\sum _{x\in[\mathscr X_0(\mathbb F_{q^v})]}\frac{1}{\#\emph{Aut}_x(\mathbb F_{q^v})}\Big), $$ as a formal power series in the variable $t.$ \end{definition} Notice that in general, the zeta function is not rational (cf. $\S7$).
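For instance (a standard computation, in the spirit of the examples given in $\S7$), take $\mathscr X_0=B\mathbb G_m.$ By Hilbert 90 (or Lang's theorem), every $\mathbb F_{q^v}$-point is isomorphic to the trivial torsor, with automorphism group $\mathbb G_m(\mathbb F_{q^v})$ of order $q^v-1,$ so $\#B\mathbb G_m(\mathbb F_{q^v})=1/(q^v-1)$ and $$ Z(B\mathbb G_m,t)=\exp\Big(\sum_{v\ge1}\frac{t^v}{v}\cdot\frac{1}{q^v-1}\Big)=\exp\Big(\sum_{v\ge1}\frac{t^v}{v}\sum_{i\ge1}q^{-iv}\Big)=\prod_{i\ge1}\big(1-q^{-i}t\big)^{-1}, $$ an infinite product which is not a rational function of $t,$ but which does converge to a meromorphic function on the whole complex $t$-plane, as predicted by (\ref{T1.3}).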
Behrend proved that, if $\mathscr X_0$ is a smooth algebraic stack, and it is a quotient of an algebraic space by a linear algebraic group, then its zeta function $Z(\mathscr X_0,t)$ is a meromorphic function in the complex $t$-plane; if $\mathscr X_0$ is a smooth Deligne-Mumford stack, then $Z(\mathscr X_0,t)$ is a rational function (\cite{Beh1}, 3.2.4, 3.2.5). These results can be generalized as follows. \begin{theorem}\label{T1.3} For every $\mathbb F_q$-algebraic stack $\mathscr X_0$ of finite type, its zeta function $Z(\mathscr X_0,t)$ defines a meromorphic function in the whole complex $t$-plane. If $\mathscr X_0$ is Deligne-Mumford, then $Z(\mathscr X_0,t)$ is a rational function. \end{theorem} See (\ref{C7.2}) and (\ref{T8.1}) for the general statement. \vskip.5truecm \begin{flushleft} \textbf{A theorem of weights.} \end{flushleft} One of the main results in \cite{Del2} is that, if $X_0$ is an $\mathbb F_q$-scheme, separated and of finite type, and $\mathscr F_0$ is an $\iota$-mixed sheaf on $X_0$ of punctual $\iota$-weights $\le w\in\mathbb R,$ then for every $n,$ the punctual $\iota$-weights of $H^n_c(X,\mathscr F)$ are $\le w+n.$ The cohomology groups are zero unless $0\le n\le2\dim X_0.$ We will see later (\ref{R7.1}) that the upper bound $w+n$ for the punctual $\iota$-weights does not work in general for algebraic stacks. We will give an upper bound that applies to all algebraic stacks. Deligne's upper bound of weights still applies to stacks for which all the automorphism groups are affine. \begin{theorem}\label{T1.4} Let $\mathscr X_0$ be an $\mathbb F_q$-algebraic stack of finite type, and let $\mathscr F_0$ be an $\iota$-mixed sheaf on $\mathscr X_0,$ with punctual $\iota$-weights $\le w,$ for some $w\in\mathbb R.$ Then the $\iota$-weights of $H^n_c(\mathscr{X,F})$ are $\le\dim\mathscr X_0+\frac{n}{2}+w,$ and they are congruent mod $\mathbb Z$ to weights that appear in $\mathscr F_0.$ If $n>2\dim\mathscr X_0,$ then $H^n_c(\mathscr X,-)=0$ on sheaves. If for all points $\overline{x}\in\mathscr X(\mathbb F)$ in the support of $\mathscr F,$ the automorphism group schemes $\emph{Aut}_{\overline{x}}$ are affine, then the $\iota$-weights of $H^n_c(\mathscr{X,F})$ are $\le n+w.$ \end{theorem} \textbf{Organization.} In $\S2$ we review some preliminaries on derived categories of $\ell$-adic sheaves on algebraic stacks over $\mathbb F_q$ and $\iota$-mixed complexes, and show that $\iota$-mixedness is stable under the six operations. In $\S3$ we develop the notion of stratifiable complexes in the context of Laszlo and Olsson's $\ell$-adic derived categories, and prove its stability under the six operations. In $\S4$ we discuss convergent complexes, and show that they are preserved by $f_!.$ In $\S5$ we prove the trace formula for stacks. These two theorems are stated and proved in \cite{Beh2} in terms of ordinary cohomology and arithmetic Frobenius, and the proof we give here uses geometric Frobenius. In $\S6$ we discuss convergence of infinite products of formal power series, which will be used in the proof of the meromorphic continuation. In $\S7$ we give some examples of zeta functions of stacks, and give the functional equation of the zeta functions and independence of $\ell$ of Frobenius eigenvalues for proper varieties with quotient singularities (\ref{T7.3}). In $\S8$ and $\S9,$ we prove the meromorphic continuation and the weight theorem respectively. 
Finally in $\S10$ we discuss ``independence of $\ell"$ for stacks, and prove (\ref{P10.6}) that for the quotient stack $[X_0/G_0],$ where $X_0$ is a proper smooth variety and $G_0$ is a linear algebraic group acting on $X_0,$ the Frobenius eigenvalues on its cohomology groups are independent of $\ell.$ \begin{notation-convention}\label{NC} \begin{anitem} We fix a prime power $q=p^a$ and an algebraic closure $\mathbb F$ of the finite field $\mathbb F_q$ with $q$ elements. Let $F$ or $F_q$ be the $q$-geometric Frobenius, namely the $q$-th root automorphism on $\mathbb F.$ Let $\ell$ be a prime number, $\ell\ne p,$ and fix an isomorphism of fields $\overline{\mathbb Q}_{\ell}\overset{\iota}{\to}\mathbb C.$ For simplicity, let $|\alpha|$ denote the complex absolute value $|\iota\alpha|,$ for $\alpha\in\overline{\mathbb Q}_{\ell}.$ \end{anitem} \begin{anitem} In this paper, by an Artin stack (or an algebraic stack) over a base scheme $S,$ we mean an $S$-algebraic stack in the sense of M. Artin (\cite{LMB}, 4.1) \textit{of finite type}. When we want the more general setting of Artin stacks \textit{locally of finite type}, we will mention that explicitly. \end{anitem} \begin{anitem} Objects over $\mathbb F_q$ will be denoted with an index $_0.$ For instance, $\mathscr X_0$ will denote an $\mathbb F_q$-Artin stack; if $\mathscr F_0$ is a lisse-\'etale sheaf (or more generally, a Weil sheaf (\ref{Weil-cplx})) on $\mathscr X_0,$ then $\mathscr F$ denotes its inverse image on $\mathscr X:=\mathscr X_0\otimes_{\mathbb F_q}\mathbb F.$ \end{anitem} \begin{anitem} For a field $k,$ let $\text{Gal}(k)$ denote its absolute Galois group $\text{Gal}(k^\text{sep}/k).$ By a variety over $k$ we mean a separated reduced $k$-scheme of finite type. Let $W(\mathbb F_q)$ be the \textit{Weil group} $F_q^{\mathbb Z}$ of $\mathbb F_q.$ \end{anitem} \begin{anitem} For a profinite group $H,$ by $\overline{\mathbb Q}_{\ell}$-representations of $H$ we always mean finite-dimensional continuous representations (\cite{Del2}, 1.1.6), and denote by $\text{Rep}_{\overline{\mathbb Q}_{\ell}}(H)$ the category of such representations. \end{anitem} \begin{anitem} For a scheme $X,$ we denote by $|X|$ the set of its closed points. For a category $\mathscr C$ we write $[\mathscr C]$ for the collection of isomorphism classes of objects in $\mathscr C.$ For example, if $v\ge1$ is an integer, then $[\mathscr X_0(\mathbb F_{q^v})]$ denotes the set of isomorphism classes of $\mathbb F_{q^v}$-points of the stack $\mathscr X_0.$ This is a finite set. For $x\in\mathscr X_0(\mathbb F_{q^v})$ we will write $k(x)$ for the field $\mathbb F_{q^v}.$ For an $\mathbb F_q$-scheme $X_0$ (always of finite type) and $x\in|X_0|,$ we denote by $k(x)$ the residue field of $x.$ In both cases, let $d(x)$ be the degree of the field extension $[k(x):\mathbb F_q],$ and $N(x)=q^{d(x)}=\#k(x).$ Also in both cases let $x:\text{Spec }\mathbb F_{q^v}\to\mathscr X_0$ (or $X_0$) be the natural map ($v=d(x)$), although in the second case the map is defined only up to an automorphism in $\text{Gal}(k(x)/\mathbb F_q).$ Given a $K_0\in D_c(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ (cf. $\S2$), the pullback $x^*K_0\in D_c(\text{Spec }k(x),\overline{\mathbb Q}_{\ell})=D_c(\text{Rep}_{\overline{\mathbb Q}_{\ell}}(\text{Gal}(k(x))))$ gives a complex $K_{\overline{x}}$ of representations of $\text{Gal}(k(x)),$ and we let $F_x$ be the geometric Frobenius generator $F_{q^{d(x)}}$ of this group, called ``the local Frobenius". 
\end{anitem} \begin{anitem} Let $V$ be a finite dimensional $\overline{\mathbb Q}_{\ell}$-vector space and $\varphi$ an endomorphism of $V.$ For a function $f:\overline{\mathbb Q}_{\ell}\to\mathbb C,$ we denote by $\sum_{V,\varphi}f(\alpha)$ the sum of values of $f$ in $\alpha,$ with $\alpha$ ranging over all the eigenvalues of $\varphi$ on $V$ with multiplicities. For instance, $\sum_{V,\varphi}\alpha=\text{Tr}(\varphi,V).$ A $0\times0$-matrix has trace 0 and determinant 1. For $K\in D^b_c(\overline{\mathbb Q}_{\ell})$ and an endomorphism $\varphi$ of $K,$ we define (following \cite{SGA4.5}) $$ \text{Tr}(\varphi,K):=\sum_{n\in\mathbb Z}(-1)^n\text{Tr} (H^n(\varphi),H^n(K)) $$ and $$ \det(1-\varphi t,K):=\prod_{n\in\mathbb Z}\det(1-H^n(\varphi) t,H^n(K))^{(-1)^n}. $$ For unbounded complexes $K$ we use similar notations, if the series (resp. the infinite product) converges (resp. converges term by term; cf. (\ref{D6.1})). \end{anitem} \begin{anitem} For a map $f:X\to Y$ and a sheaf $\mathscr F$ on $Y,$ we sometimes write $H^n(X,\mathscr F)$ for $H^n(X,f^*\mathscr F).$ We will write $H^n(\mathscr X)$ for $H^n(\mathscr X,\overline{\mathbb Q}_{\ell}),$ and $h^n(\mathscr{X,F})$ for $\dim H^n(\mathscr{X,F}),$ and ditto for $H^n_c(\mathscr X)$ and $h^n_c(\mathscr{X,F}).$ \end{anitem} \begin{anitem} For an $\mathbb F_q$-algebraic stack $\mathscr X_0$ and a Weil complex $K_0$ on $\mathscr X_0,$ by $R\Gamma(\mathscr X_0,K_0)$ (resp. $R\Gamma_c(\mathscr X_0,K_0)$) we mean $Ra_*K_0$ (resp. $Ra_!K_0$), where $a:\mathscr X_0\to \text{Spec }\mathbb F_q$ is the structural map. The derived functors $Rf_*,Rf_!,Lf^*$ and $Rf^!$ are usually abbreviated as $f_*,f_!,f^*,f^!.$ But we reserve $\otimes,\mathscr Hom$ and $Hom$ for the ordinary sheaf tensor product, sheaf Hom and Hom group respectively, and use $\otimes^L,R\mathscr Hom$ and $RHom$ for their derived functors. \end{anitem} \end{notation-convention} \begin{flushleft} \textbf{Acknowledgment.} \end{flushleft} I would like to thank my advisor Martin Olsson for introducing this topic to me, and giving so many suggestions during the writing. Weizhe Zheng sent me many helpful comments, and Yves Laszlo helped me to correct some inaccuracies in the first version of the paper. Many people, especially Brian Conrad and Matthew Emerton, have helped me on mathoverflow during the writing. The revision of the paper was done during the stay in Ecole polytechnique CMLS and Universit\'e Paris-Sud, while I was supported by ANR grant G-FIB. \section{Derived category of $\ell$-adic sheaves and mixedness} We briefly review the definition in \cite{LO1, LO2} for derived category of $\ell$-adic sheaves on stacks. Then we show that $\iota$-mixedness is stable under the six operations. As a consequence of Lafforgue's result (\ref{Laff}), this is automatic, but we still want to give a much more elementary argument. The proof works for \textit{mixed} complexes as well (\ref{mixed-variant}). One can also generalize the structure theorem of $\iota$-mixed sheaves in \cite{Del2} to algebraic stacks (\ref{R2.7}). 
\begin{blank}\label{adic-setting} Let $\Lambda$ be a complete discrete valuation ring with maximal ideal $\mathfrak m$ and residual characteristic $\ell.$ Let $\Lambda_n=\Lambda/\mathfrak m^{n+1},$ and let $\Lambda_{\bullet}$ be the pro-ring $(\Lambda_n)_n.$ We take the base scheme $S$ to be a scheme that satisfies the following condition denoted (LO): it is noetherian affine excellent finite-dimensional, in which $\ell$ is invertible, and all $S$-schemes of finite type have finite $\ell$-cohomological dimension. We denote by $\cal{X,Y}\cdots$ Artin stacks locally of finite type over $S.$ Consider the ringed topos $\mathscr A=\mathscr A(\mathcal X):=\text{Mod}(\mathcal X_{\text{lis-\'et}}^{\mathbb N},\Lambda_{\bullet})$ of projective systems $(M_n)_n$ of $\text{Ab}(\mathcal X_{\text{lis-\'et}})$ such that $M_n$ is a $\Lambda_n$-module for each $n,$ and the transition maps are $\Lambda$-linear. An object $M\in\mathscr A$ is said to be \textit{AR-null}, if there exists an integer $r>0$ such that for every integer $n,$ the composed map $M_{n+r}\to M_n$ is the zero map. A complex $K$ in $\mathscr A$ is called \textit{AR-null}, if all cohomology systems $\mathscr H^i(K)$ are AR-null; it is called \textit{almost AR-null}, if for every $U$ in $\text{Lis-\'et}(\mathcal X)$ (assumed to be of finite type over $S$), the restriction of $\mathscr H^i(K)$ to $\text{\'Et}(U)$ is AR-null. Let $\mathscr{D(A)}$ be the ordinary derived category of $\mathscr A.$ See (\cite{LMB}, 18.1.4) for the definition of constructible sheaves on $\mathcal X_{\text{lis-\'et}}.$ \end{blank} \begin{definition}\label{D2.1} An object $M=(M_n)_n\in\mathscr A$ is \emph{adic} if all the $M_n$'s are constructible, and for every $n,$ the natural map $$ \Lambda_n\otimes_{\Lambda_{n+1}}M_{n+1}\to M_n $$ is an isomorphism. It is called \emph{almost adic} if all the $M_n$'s are constructible, and for every object $U$ in $\emph{Lis-\'et}(\mathcal X),$ the restriction $M|_U$ is AR-adic, i.e. there exists an adic $N_U\in\emph{Mod}(U_{\emph {\'et}}^{\mathbb N},\Lambda_{\bullet})$ and a morphism $N_U\to M|_U$ with AR-null kernel and cokernel. A complex $K$ in $\mathscr A$ is a $\lambda$-\emph{complex} if $\mathscr H^i(K)\in\mathscr A$ are almost adic, for all $i.$ Let $\mathscr D_c(\mathscr A)$ be the full triangulated subcategory of $\mathscr{D(A)}$ consisting of $\lambda$-complexes, and let $D_c(\mathcal X,\Lambda)$ be the quotient of $\mathscr D_c(\mathscr A)$ by the thick full subcategory of those which are almost AR-null. This is called the \emph{derived category of $\Lambda$-adic sheaves on} $\mathcal X.$ \end{definition} \begin{subremark}\label{R2.2} (i) $D_c(\mathcal X,\Lambda)$ is a triangulated category with a natural $t$-structure, and its heart is the quotient of the category of almost adic systems in $\mathscr A$ by the thick full subcategory of almost AR-null systems. One can use this $t$-structure to define the subcategories $D_c^{\dagger}(\mathcal X,\Lambda)$ for $\dagger=\pm, b.$ If $\mathcal X/S$ is of finite type (in particular, quasi-compact), it is clear that $K\in\mathscr D_{\text{cart}}(\mathscr A)$ is AR-null if it is almost AR-null. 
Also if $M\in\mathscr A$ is almost adic, the adic system $N_U$ and the map $N_U\to M|_U$ in the definition above are unique up to unique isomorphism, for each $U,$ so by (\cite{LMB}, 12.2.1) they give an adic system $N$ of Cartesian sheaves on $\mathcal X_{\text{lis-\'et}},$ and an AR-isomorphism $N\to M.$ This shows that an almost adic system is AR-adic, and it is clear (\cite{SGA5}, p.234) that the natural functor $$ \Lambda\text{-Sh}(\mathcal X)\to\text{heart }D_c(\mathcal X,\Lambda) $$ is an equivalence of categories, where $\Lambda$-Sh$(\mathcal X)$ denotes the category of $\Lambda$-adic (in particular, constructible) systems. (ii) $D_c(\mathcal X,\Lambda)$ is different from the ordinary derived category of $\text{Mod}(\mathcal X_{\text{lis-\'et}},\Lambda)$ with constructible cohomology; the latter can be denoted by $\mathscr D_c(\mathcal X,\Lambda).$ Here $\text{Mod}(\mathcal X_{\text{lis-\'et}},\Lambda)$ denotes the abelian category of $\Lambda_{\mathcal X}$-modules, not adic sheaves $\Lambda\text{-Sh}(\mathcal X).$ (iii) When $S=\text{Spec }k$ for $k$ a finite field or an algebraically closed field, and $\mathcal X=X$ is a separated $S$-scheme, (\cite{LO2}, 3.1.6) gives a natural equivalence of triangulated categories between $D^b_c(X,\Lambda)$ and Deligne's definition $\mathscr D_c^b(X,\Lambda)$ in (\cite{Del2}, 1.1.2). \end{subremark} \begin{blank}\label{normalization} Let $\pi:\mathcal X^{\mathbb N}_{\text{lis-\'et}}\to\mathcal X_{\text{lis-\'et}}$ be the morphism of topoi, where $\pi^{-1}$ takes a sheaf $F$ to the constant projective system $(F)_n,$ and $\pi_*$ takes a projective system to the inverse limit. This morphism induces a morphism of ringed topoi $(\pi^*,\pi_*):(\mathcal X^{\mathbb N}_{\text{lis-\'et}},\Lambda_{\bullet})\to(\mathcal X_{\text{lis-\'et}},\Lambda).$ The functor $R\pi_*:\mathscr D_c(\mathscr A)\to\mathscr D(\mathcal X,\Lambda)$ vanishes on almost AR-null objects (\cite{LO2}, 2.2.2), hence factors through $D_c(\mathcal X,\Lambda).$ In (\cite{LO2}, 3.0.8), the normalization functor is defined to be $$ K\mapsto\widehat{K}:=L\pi^*R\pi_*K:\ D_c(\mathcal X,\Lambda) \to\mathscr{D(A)}. $$ This functor plays an important role in defining the six operations \cite{LO2}. 
For instance: $\bullet$ For $F\in D_c^-(\mathcal X,\Lambda)$ and $G\in D_c^+(\mathcal X,\Lambda),\ R\mathscr Hom(F,G)$ is defined to be the image of $R\mathscr Hom_{\Lambda_{\bullet}}(\widehat {F},\widehat{G})$ in $D_c(\mathcal X,\Lambda).$ $\bullet$ For $F,G\in D_c^-(\mathcal X,\Lambda),$ the derived tensor product $F\otimes^LG$ is defined to be the image of $\widehat{F}\otimes^L_{\Lambda_{\bullet}}\widehat{G}.$ $\bullet$ For a morphism $f:\mathcal X\to\mathcal Y$ and $F\in D_c^+(\mathcal X,\Lambda),$ the derived direct image $f_*F$ is defined to be the image of $f^{\mathbb N}_*\widehat{F}.$ Let $E_{\lambda}$ be a finite extension of $\mathbb Q_{\ell}$ with ring of integers $\mathscr O_{\lambda}.$ Following \cite{LO2} we define $D_c(\mathcal X,E_{\lambda})$ to be the quotient of $D_c(\mathcal X,\mathscr O_{\lambda})$ by the full subcategory consisting of complexes $K$ such that, for every integer $i,$ there exists an integer $n_i\ge1$ such that $\mathscr H^i(K)$ is annihilated by $\lambda^{n_i}.$ Then we define $$ D_c(\mathcal X,\overline{\mathbb Q}_{\ell})=\text{2-colim} _{E_{\lambda}}D_c(\mathcal X,E_{\lambda}), $$ where $E_{\lambda}$ ranges over all finite subextensions of $\overline{\mathbb Q}_{\ell}/\mathbb Q_{\ell},$ and the transition functors are $$ E_{\lambda'}\otimes_{E_{\lambda}}-:D_c(\mathcal X,E_{\lambda})\to D_c(\mathcal X,E_{\lambda'}) $$ for $E_{\lambda}\subset E_{\lambda'}.$ \end{blank} \begin{blank}\label{Weil-cplx} From now on in this section, $S=\text{Spec }\mathbb F_q.$ We recall some notions of weights and mixedness from \cite{Del2}, generalized to $\mathbb F_q$-algebraic stacks. \begin{anitem} \textbf{Frobenius endomorphism.} For an $\mathbb F_q$-scheme $X_0,$ let $F_{X_0}:X_0\to X_0$ be the morphism that is identity on the underlying topological space and $q$-th power on the structure sheaf $\mathscr O_{X_0};$ this is an $\mathbb F_q$-morphism. Let $F_X:X\to X$ be the induced $\mathbb F$-morphism $F_{X_0}\times\text{id}_{\mathbb F}$ on $X=X_0\otimes\mathbb F.$ By functoriality of the maps $F_{X_0},$ we can extend it to stacks. For an $\mathbb F_q$-algebraic stack $\mathscr X_0,$ define $F_{\mathscr X_0}:\mathscr X_0\to\mathscr X_0$ to be such that for every $\mathbb F_q$-scheme $X_0,$ the map $$ \xymatrix@C=.7cm{ F_{\mathscr X_0}(X_0):\mathscr X_0(X_0) \ar[r] & \mathscr X_0(X_0)} $$ sends $x$ to $x\circ F_{X_0}.$ We also define $F_{\mathscr X}:\mathscr X\to\mathscr X$ to be $F_{\mathscr X_0}\times \text{id}_{\mathbb F}.$ This morphism is a universal homeomorphism, hence $F_{\mathscr X}^*$ and $F_{\mathscr X*}$ are quasi-inverse to each other, and $F_{\mathscr X}^*\simeq F_{\mathscr X}^!,F_{\mathscr X*}\simeq F_{\mathscr X!}.$ \end{anitem} \begin{anitem} \textbf{Weil complexes.} A \textit{Weil complex $K_0$ on $\mathscr X_0$} is a pair $(K,\varphi),$ where $K\in D_c(\mathscr X,\overline{\mathbb Q}_{\ell})$ and $\varphi:F^*_{\mathscr X}K\to K$ is an isomorphism. A morphism of Weil complexes on $\mathscr X_0$ is a morphism of complexes on $\mathscr X$ commuting with $\varphi.$ We also call $K_0$ a \textit{Weil sheaf} if $K$ is a sheaf. Let $W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ be the category of Weil complexes on $\mathscr X_0;$ it is a triangulated category with the standard $t$-structure, and its core is the category of Weil sheaves. There is a natural fully faithful triangulated functor $$ D_c(\mathscr X_0,\overline{\mathbb Q}_{\ell})\to W(\mathscr X_0,\overline{\mathbb Q}_{\ell}). $$ The usual six operations are well-defined on Weil complexes. 
$\bullet$ Verdier duality. The Weil complex structure on $D_{\mathscr X}K$ is given by the inverse of the isomorphism
$$
\xymatrix@C=.6cm{ D_{\mathscr X}K \ar[r]^-{D\varphi} & D_{\mathscr X} F_{\mathscr X}^*K \ar[r]^-{\sim} & F_{\mathscr X}^*D_{\mathscr X}K.}
$$

$\bullet$ Tensor product. Let $K_0$ and $L_0$ be two Weil complexes such that $K\otimes^LL$ (which is $K\otimes L$ since we work with $\overline{\mathbb Q}_{\ell}$-coefficients) is constructible. This is the case when they are both bounded above. The Weil complex structure on $K\otimes L$ is given by
$$
\xymatrix@C=.6cm{ F_{\mathscr X}^*(K\otimes L) \ar[r]^-{\sim} & F_{\mathscr X}^*K\otimes F_{\mathscr X}^*L \ar[rr]^-{\varphi_K\otimes \varphi_L} && K\otimes L.}
$$

$\bullet$ Pullback. This is clear:
$$
\xymatrix@C=.6cm{ F_{\mathscr X}^*f^*K \ar[r]^-{\sim} & f^*F_{\mathscr Y}^*K \ar[r]^-{f^*\varphi} & f^*K.}
$$
Here $f:\mathscr X_0\to\mathscr Y_0$ is an $\mathbb F_q$-morphism and $(K,\varphi)$ is a Weil complex on $\mathscr Y_0.$

$\bullet$ Pushforward. Let $f:\mathscr X_0\to\mathscr Y_0$ and $K_0\in W^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}).$ The Weil complex structure on $f_*K$ is given by
$$
\xymatrix@C=.6cm{ F_{\mathscr Y}^*f_*K \ar[r] & f_*F_{\mathscr X}^*K \ar[r]^-{f_*\varphi} & f_*K,}
$$
where the first arrow is an isomorphism: it is adjoint to the map
$$
f_*K\to F_{\mathscr Y*}f_*F_{\mathscr X}^*K\simeq f_* F_{\mathscr X*}F_{\mathscr X}^*K
$$
obtained by applying $f_*$ to the adjunction morphism $K\to F_{\mathscr X*}F_{\mathscr X}^*K,$ and the latter is an isomorphism because $F_{\mathscr X}$ is a universal homeomorphism.

$\bullet$ The remaining cases $f^!,f_!$ and $R\mathscr Hom$ follow from the previous cases.

In this article, when discussing stacks over $\mathbb F_q,$ by a ``sheaf" or ``complex of sheaves" we usually mean a ``Weil sheaf" or ``Weil complex", whereas a ``lisse-\'etale sheaf or complex" will be an ordinary constructible $\overline{\mathbb Q}_{\ell}$-sheaf or complex on the lisse-\'etale site of $\mathscr X_0.$

Any $x\in\mathscr X_0(\mathbb F_{q^v})$ is a fixed point of $F_{\mathscr X}^v,$ hence there is a ``local Frobenius automorphism" $F_x:K_{\overline{x}}\to K_{\overline{x}}$ for every Weil complex $K_0,$ defined to be the composite
$$
K_{\overline{x}}\simeq K_{F^v_{\mathscr X}(\overline{x})} =((F^v_{\mathscr X})^*K)_{\overline{x}}\overset{\varphi^v}{\to} K_{\overline{x}},
$$
where $\varphi^v$ denotes the $v$-fold iterate of $\varphi.$ \end{anitem}

\begin{anitem} \textbf{$\iota$-Weights and $\iota$-mixedness.} Recall that $\iota$ is a fixed isomorphism $\overline{\mathbb Q}_{\ell}\to\mathbb C.$ For $\alpha\in\overline{\mathbb Q}_{\ell}^*,$ let $w_q(\alpha):=2\log_q|\iota\alpha|,$ called the $\iota$-weight of $\alpha$ relative to $q.$ For a real number $\beta,$ a sheaf $\mathscr F_0$ on $\mathscr X_0$ is said to be \textit{punctually $\iota$-pure of weight} $\beta,$ if for every integer $v\ge1,$ every $x\in\mathscr X_0(\mathbb F_{q^v})$ and every eigenvalue $\alpha$ of $F_x$ acting on $\mathscr F_{\overline{x}},$ we have $w_{N(x)}(\alpha)=\beta,$ where $N(x):=q^v.$ We say $\mathscr F_0$ is \textit{$\iota$-mixed} if it has a finite filtration with successive quotients punctually $\iota$-pure, and the weights of these quotients are called the punctual $\iota$-weights of $\mathscr F_0.$ A complex $K_0\in W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ is said to be $\iota$-mixed if all the cohomology sheaves $\mathscr H^iK_0$ are $\iota$-mixed. Let $W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ (resp. $D_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$) be the full subcategory of $\iota$-mixed complexes in $W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ (resp.
$D_c(\mathscr X_0,\overline{\mathbb Q}_{\ell})$). One can also define \textit{punctually pure sheaves, mixed sheaves} and \textit{mixed complexes} for algebraic stacks. \end{anitem}

\begin{anitem} \textbf{Twists.} For $b\in\overline{\mathbb Q}_{\ell}^*,$ let $\overline{\mathbb Q}_{\ell}^{(b)}$ be the Weil sheaf on $\text{Spec }\mathbb F_q$ of rank one, where $F$ acts by multiplication by $b.$ This is an \'etale sheaf if and only if $b$ is an $\ell$-adic unit (\cite{Del2}, 1.2.7). For an algebraic stack $\mathscr X_0/\mathbb F_q,$ we also denote by $\overline{\mathbb Q}_{\ell}^{(b)}$ the inverse image on $\mathscr X_0$ of the above Weil sheaf via the structural map. If $\mathscr F_0$ is a sheaf on $\mathscr X_0,$ we denote by $\mathscr F_0^{(b)}$ the tensor product $\mathscr F_0\otimes\overline{\mathbb Q}_{\ell}^{(b)},$ and say that $\mathscr F_0^{(b)}$ is deduced from $\mathscr F_0$ by a generalized Tate twist. Note that the operation $\mathscr F_0\mapsto\mathscr F_0^{(b)}$ shifts the punctual $\iota$-weights by $w_q(b).$ For an integer $d,$ the usual Tate twist $\overline{\mathbb Q}_{\ell}(d)$ is $\overline{\mathbb Q}_{\ell}^{(q^{-d})};$ since $w_q(q^{-d})=-2d,$ the twist $(d)$ lowers the weights by $2d.$ We denote by $\langle d\rangle$ the operation $(d)[2d]$ on complexes of sheaves, where $[2d]$ means shifting the complex $2d$ places to the left. Note that $\iota$-mixedness is stable under the operation $\langle d\rangle.$ \end{anitem} \end{blank}

\begin{lemma}\label{L2.3} Let $\mathscr X_0$ be an $\mathbb F_q$-algebraic stack.

(i) If $\mathscr F_0$ is an $\iota$-mixed sheaf on $\mathscr X_0,$ then so is every sub-quotient of $\mathscr F_0.$

(ii) If $0\to\mathscr F_0'\to\mathscr F_0\to\mathscr F_0''\to0$ is an exact sequence of sheaves on $\mathscr X_0,$ and $\mathscr F_0'$ and $\mathscr F_0''$ are $\iota$-mixed, then so is $\mathscr F_0.$

(iii) The full subcategory $W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ (resp. $D_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$) of $W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ (resp. $D_c(\mathscr X_0,\overline{\mathbb Q}_{\ell})$) is a triangulated subcategory with the induced standard $t$-structure.

(iv) If $f$ is a morphism of $\mathbb F_q$-algebraic stacks, then $f^*$ on complexes of sheaves preserves $\iota$-mixedness.

(v) If $j:\mathscr U_0\hookrightarrow\mathscr X_0$ is an open immersion and $i:\mathscr Z_0\hookrightarrow\mathscr X_0$ is its complement, then $K_0\in W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ is $\iota$-mixed if and only if $j^*K_0$ and $i^*K_0$ are $\iota$-mixed. \end{lemma}

\begin{proof} (i) If $\mathscr F_0$ is punctually $\iota$-pure of weight $\beta,$ then so is every sub-quotient of it. Now suppose $\mathscr F_0$ is $\iota$-mixed and $\mathscr F_0'$ is a subsheaf of $\mathscr F_0.$ Let $W$ be a finite filtration
$$
0\subset\cdots\subset\mathscr F_0^{i-1}\subset\mathscr F_0^i \subset\cdots\subset\mathscr F_0
$$
of $\mathscr F_0$ such that $\text{Gr}^W_i(\mathscr F_0):= \mathscr F_0^i/\mathscr F_0^{i-1}$ is punctually $\iota$-pure for every $i.$ Let $W'$ be the induced filtration $W\cap\mathscr F_0'$ of $\mathscr F_0'.$ Then $\text{Gr}^{W'}_i(\mathscr F_0')$ is the image of
$$
\mathscr F_0^i\cap\mathscr F_0'\subset\mathscr F_0^i \twoheadrightarrow\text{Gr}^W_i(\mathscr F_0),
$$
so it is punctually $\iota$-pure.
Let $\mathscr F_0''=\mathscr F_0/\mathscr F_0'$ be a quotient of $\mathscr F_0,$ and let $W''$ be the induced filtration of $\mathscr F_0'',$ namely $(\mathscr F_0'')^i:=\mathscr F_0^i/(\mathscr F_0^i\cap\mathscr F_0').$ Then $\text{Gr}^{W''}_i(\mathscr F_0'')=\mathscr F_0^i/(\mathscr F_0^{i-1}+\mathscr F_0^i\cap\mathscr F_0'),$ which is a quotient of $\mathscr F_0^i/\mathscr F_0^{i-1}=\text{Gr}^W_i(\mathscr F_0),$ so it is punctually $\iota$-pure. Hence every sub-quotient of $\mathscr F_0$ is $\iota$-mixed.

(ii) Let $W'$ and $W''$ be finite filtrations of $\mathscr F_0'$ and $\mathscr F_0''$ respectively, such that $\text{Gr}^{W'}_i(\mathscr F_0')$ and $\text{Gr}^{W''}_i(\mathscr F_0'')$ are punctually $\iota$-pure for every $i.$ Then $W'$ can be regarded as a finite filtration of $\mathscr F_0$ in which every member is contained in $\mathscr F_0',$ and $W''$ can be regarded as a finite filtration of $\mathscr F_0$ in which every member contains $\mathscr F_0'.$ Concatenating these two filtrations, we get the desired filtration for $\mathscr F_0.$

(iii) Being a triangulated subcategory means (\cite{SGA4.5}, p.271) that if $K_0'\to K_0\to K_0''\to K_0'[1]$ is an exact triangle in $W(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ and two of the three complexes are $\iota$-mixed, then so is the third. By the rotation axiom of a triangulated category, we can assume $K_0'$ and $K_0''$ are $\iota$-mixed. We have the exact sequence
$$
\xymatrix@C=.5cm{ \cdots \ar[r] & \mathscr H^nK_0' \ar[r] & \mathscr H^nK_0 \ar[r] & \mathscr H^nK_0'' \ar[r] & \cdots,}
$$
and by (i) and (ii) we see that $\mathscr H^nK_0$ is $\iota$-mixed. $W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ has the induced $t$-structure because if $K_0$ is $\iota$-mixed, then its truncations $\tau_{\le n}K_0$ and $\tau_{\ge n}K_0$ are $\iota$-mixed.

(iv) On sheaves, $f^*$ preserves stalks, so it is exact and preserves punctual $\iota$-purity. Let $f:\mathscr X_0\to\mathscr Y_0.$ Given an $\iota$-mixed sheaf $\mathscr F_0$ on $\mathscr Y_0,$ let $W$ be a finite filtration of $\mathscr F_0$ such that each $\text{Gr}^W_i(\mathscr F_0)$ is punctually $\iota$-pure. Then $f^*W$ gives a finite filtration of $f^*\mathscr F_0,$ and each $\text{Gr}^{f^*W}_i(f^*\mathscr F_0)=f^*\text{Gr}^W_i(\mathscr F_0)$ is punctually $\iota$-pure. So $f^*\mathscr F_0$ is $\iota$-mixed. For an $\iota$-mixed complex $K_0$ on $\mathscr Y_0,$ note that $\mathscr H^n(f^*K_0)=f^*\mathscr H^n(K_0),$ hence $f^*K_0$ is $\iota$-mixed.

(v) One direction follows from (iv). For the other direction, note that $j_!$ and $i_*$ are exact and preserve punctual $\iota$-purity on sheaves. If $\mathscr F_0$ is an $\iota$-mixed sheaf on $\mathscr U_0,$ with a finite filtration $W$ such that each $\text{Gr}^W_i(\mathscr F_0)$ is punctually $\iota$-pure, then for the induced filtration $j_!W$ of $j_!\mathscr F_0,$ we see that $\text{Gr}^{j_!W}_i(j_!\mathscr F_0)=j_!\text{Gr}^W_i(\mathscr F_0)$ is punctually $\iota$-pure, so $j_!\mathscr F_0$ is $\iota$-mixed. For an $\iota$-mixed complex $K_0$ on $\mathscr U_0,$ use $\mathscr H^n(j_!K_0)=j_!\mathscr H^n(K_0).$ Similarly, $i_*$ preserves $\iota$-mixedness on complexes. Finally the result follows from (iii) and the exact triangle
$$
\xymatrix@C=.5cm{ j_!j^*K_0 \ar[r] & K_0 \ar[r] & i_*i^*K_0 \ar[r] &.}
$$
\end{proof}

To show that $\iota$-mixedness is stable under the six operations, we need to show that $\iota$-mixedness of complexes on stacks can be checked locally on their presentations.
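Recall that a \textit{presentation} of an algebraic stack is a smooth surjection from a scheme. A standard example to keep in mind (included only as an illustration; it is not needed in the proofs below): if $\mathscr X_0=[X_0/G]$ is a quotient stack, with $G$ a smooth affine group scheme over $\mathbb F_q$ acting on a scheme $X_0,$ then the projection $P:X_0\to[X_0/G]$ is a presentation, and
$$
X_0\times_{[X_0/G]}X_0\simeq X_0\times_{\mathbb F_q}G,
$$
so that checking a property on the presentation and its double fiber product amounts to checking it $G$-equivariantly on $X_0.$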
To descend a filtration on a presentation to the stack, we generalize the structure theorem of $\iota$-mixed sheaves to algebraic spaces. Recall the following theorem of Deligne (\cite{Del2}, 3.4.1).

\begin{theorem}\label{T2.4} Let $\mathscr F_0$ be an $\iota$-mixed sheaf on a scheme $X_0$ over $\mathbb F_q.$

(i) $\mathscr F_0$ has a unique decomposition $\mathscr F_0=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0(b),$ called the \emph{decomposition according to the weights mod $\mathbb Z,$} such that the punctual $\iota$-weights of $\mathscr F_0(b)$ are all in the coset $b.$ This decomposition, in which almost all the $\mathscr F_0(b)$ are zero, is functorial in $\mathscr F_0.$ Note that each $\mathscr F_0(b)$ is deduced by twist from an $\iota$-mixed sheaf with integer punctual weights.

(ii) If the punctual weights of $\mathscr F_0$ are integers and $\mathscr F_0$ is lisse, $\mathscr F_0$ has a unique finite increasing filtration $W$ by lisse subsheaves, called the \emph{filtration by punctual weights,} such that $\emph{Gr}_i^W(\mathscr F_0)$ is punctually $\iota$-pure of weight $i.$ This filtration is functorial in $\mathscr F_0.$ More precisely, any morphism between $\iota$-mixed lisse sheaves of integer punctual weights is strictly compatible with their filtrations by punctual weights.

(iii) If $\mathscr F_0$ is lisse and punctually $\iota$-pure, and $X_0$ is normal, then the sheaf $\mathscr F$ on $X$ is semi-simple. \end{theorem}

\begin{subremark}\label{R2.5} (i) If $\mathscr C$ is an abelian category and $\mathscr D$ is an abelian full subcategory of $\mathscr C,$ and $C$ is an object in $\mathscr D,$ then every direct summand of $C$ in $\mathscr C$ lies in $\mathscr D$ (or rather, is isomorphic to an object of $\mathscr D$). This is because the kernel of the composition
$$
\xymatrix@C=.7cm{ A\oplus B \ar @{->>}[r]^-{\emph{pr}_A} & A \ar @{^{(}->}[r]^-{i_A} & A\oplus B}
$$
is $B.$ So direct summands of a lisse sheaf are lisse. If $\mathscr F_0$ in (\ref{T2.4}i) is lisse, then each $\mathscr F_0(b)$ is lisse.

(ii) If the $\overline{\mathbb Q}_{\ell}$-sheaf $\mathscr F_0$ is defined over some finite subextension $E_{\lambda}$ of $\overline{\mathbb Q}_{\ell}/\mathbb Q_{\ell},$ then its decomposition in (\ref{T2.4}i) and filtration in (\ref{T2.4}ii) are defined over $E_{\lambda}.$ This is because the $E_{\lambda}$-action commutes with the Galois action.

(iii) In \cite{Del2} Deligne made the assumption that all schemes are separated, at least in order to use Nagata compactification to define $f_!.$ After the work of Laszlo and Olsson \cite{LO1, LO2}, one can remove this assumption, and many results in \cite{Del2}, for instance this one and (3.3.1), remain valid. For (\cite{Del2}, 3.4.1) one can take a cover of a not necessarily separated scheme $X_0$ by open affines (which are separated), and use the functoriality to glue the decomposition or filtration on intersections. \end{subremark}

\begin{lemma}\label{L2.6} Let $X_0$ be an $\mathbb F_q$-algebraic space, and $\mathscr F_0$ an $\iota$-mixed sheaf on $X_0.$

(i) $\mathscr F_0$ has a unique decomposition $\mathscr F_0=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0(b),$ the \emph{decomposition according to the weights mod $\mathbb Z,$} with the same property as in (\ref{T2.4}i).
This decomposition is functorial in $\mathscr F_0.$

(ii) If the punctual $\iota$-weights of $\mathscr F_0$ are integers and $\mathscr F_0$ is lisse, $\mathscr F_0$ has a unique finite increasing filtration $W$ by lisse subsheaves, called the \emph{filtration by punctual weights,} with the same property as in (\ref{T2.4}ii). This filtration is functorial in $\mathscr F_0.$ \end{lemma}

\begin{proof} Let $P:X_0'\to X_0$ be an \'etale presentation, and let $\mathscr F_0'=P^*\mathscr F_0,$ which is also $\iota$-mixed (\ref{L2.3}iv). Let $X_0''$ be the fiber product
$$
X_0''=\xymatrix@C=.7cm{ X_0'\times_{X_0}X_0' \ar[r]^-{p_1} \ar[d]_-{p_2} & X_0' \ar[d]^-P \\ X_0' \ar[r]_-P & X_0.}
$$
Then $X_0''$ is an $\mathbb F_q$-scheme of finite type.

(i) Applying (\ref{T2.4}i) to $\mathscr F_0'$ we get a decomposition $\mathscr F_0'=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0'(b).$ For $j=1,2,$ applying $p_j^*$ we get a decomposition
$$
p_j^*\mathscr F_0'=\bigoplus_{b\in\mathbb R/\mathbb Z}p_j^*\mathscr F_0'(b).
$$
Since $p_j^*$ preserves weights, by the uniqueness in (\ref{T2.4}i), this decomposition is the decomposition of $p_j^*\mathscr F_0'$ according to the weights mod $\mathbb Z.$ By the functoriality in (\ref{T2.4}i), the canonical isomorphism $\mu:p_1^*\mathscr F_0'\to p_2^*\mathscr F_0'$ takes the form $\bigoplus_{b\in\mathbb R/\mathbb Z}\mu_b,$ where $\mu_b:p_1^*\mathscr F_0'(b)\to p_2^*\mathscr F_0'(b)$ is an isomorphism satisfying the cocycle condition, as $\mu$ does. Therefore the decomposition $\mathscr F_0'=\bigoplus_{b\in \mathbb R/\mathbb Z}\mathscr F_0'(b)$ descends to a decomposition $\mathscr F_0=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0(b).$

We still need to show that each direct summand $\mathscr F_0(b)$ is $\iota$-mixed. Fix a coset $b$ and consider the summand $\mathscr F_0(b).$ Twisting it appropriately, we can assume that its inverse image $\mathscr F_0'(b)$ is $\iota$-mixed with integer punctual $\iota$-weights. By (\ref{L2.3}v) and noetherian induction, we can shrink $X_0$ to a nonempty open subspace and assume $\mathscr F_0(b)$ is lisse. Then $\mathscr F_0'(b)$ is also lisse, and applying (\ref{T2.4}ii) we get a finite increasing filtration $W'$ of $\mathscr F_0'(b)$ by lisse subsheaves $\mathscr F_0'(b)^i,$ such that each $\text{Gr}^{W'}_i(\mathscr F_0'(b))$ is punctually $\iota$-pure of weight $i.$ Pulling back this filtration via $p_j,$ we get a finite increasing filtration $p_j^*W'$ of $p_j^*\mathscr F_0'(b),$ and since $\text{Gr}_i^{p_j^*W'}(p_j^*\mathscr F_0'(b))=p_j^*\text{Gr}_i^{W'}(\mathscr F_0'(b))$ is punctually $\iota$-pure of weight $i,$ it is the filtration by punctual weights given by (\ref{T2.4}ii), hence functorial. So the canonical isomorphism $\mu_b:p_1^*\mathscr F_0'(b)\to p_2^*\mathscr F_0'(b)$ maps $p_1^*\mathscr F_0'(b)^i$ isomorphically onto $p_2^*\mathscr F_0'(b)^i,$ satisfying the cocycle condition. Therefore the filtration $W'$ of $\mathscr F_0'(b)$ descends to a filtration $W$ of $\mathscr F_0(b),$ and $P^*\text{Gr}_i^W(\mathscr F_0(b))=\text{Gr}_i^{W'}(\mathscr F_0'(b))$ is punctually $\iota$-pure of weight $i.$ Note that $P$ is surjective, so every point $x\in X_0(\mathbb F_{q^v})$ can be lifted to a point $x'\in X_0'(\mathbb F_{q^{nv}})$ after some finite base extension $\mathbb F_{q^{nv}}$ of $\mathbb F_{q^v}.$ This shows $\text{Gr}_i^W(\mathscr F_0(b))$ is punctually $\iota$-pure of weight $i,$ and therefore $\mathscr F_0(b)$ is $\iota$-mixed. This proves the existence of the decomposition in (i).
For uniqueness, let $\mathscr F_0=\bigoplus\widetilde{\mathscr F}_0(b)$ be another decomposition with the desired property. Then their restrictions to $X_0'$ are both equal to the decomposition of $\mathscr F_0',$ which is unique (\ref{T2.4}i), so they are both obtained by descending this decomposition, and hence they are isomorphic, i.e. for every coset $b$ there exists an isomorphism making the diagram commute:
$$
\xymatrix@C=.8cm{ \mathscr F_0(b) \ar[rr]^-{\sim} \ar@{^{(}->}[rd] && \widetilde{\mathscr F}_0(b) \ar@{^{(}->}[ld] \\ & \mathscr F_0. &}
$$

For functoriality, let $\mathscr G_0=\bigoplus\mathscr G_0(b)$ be another $\iota$-mixed sheaf on $X_0$ with its decomposition, and let $\varphi:\mathscr F_0\to\mathscr G_0$ be a morphism of sheaves. Pulling $\varphi$ back via $P$ we get a morphism $\varphi':\mathscr F_0'\to\mathscr G_0'$ on $X_0',$ and the diagram
$$
\xymatrix@C=.9cm{ p_1^*\mathscr F_0' \ar[r]^-{\mu_{\mathscr F_0}} \ar[d]_-{p_1^*\varphi'} & p_2^*\mathscr F_0' \ar[d]^-{p_2^*\varphi'} \\ p_1^*\mathscr G_0' \ar[r]_-{\mu_{\mathscr G_0}} & p_2^*\mathscr G_0'}
$$
commutes. By (\ref{T2.4}i) $\varphi'=\bigoplus\varphi'(b)$ for morphisms $\varphi'(b):\mathscr F_0'(b)\to\mathscr G_0'(b),$ and the diagram
$$
\xymatrix@C=.9cm{ p_1^*\mathscr F_0'(b) \ar[r]^-{\text{can}} \ar[d]_-{p_1^*\varphi'} & p_2^*\mathscr F_0'(b) \ar[d]^-{p_2^*\varphi'} \\ p_1^*\mathscr G_0'(b) \ar[r]_-{\text{can}} & p_2^*\mathscr G_0'(b)}
$$
commutes for each $b.$ Then the morphisms $\varphi'(b)$ descend to morphisms $\varphi(b):\mathscr F_0(b)\to\mathscr G_0(b)$ such that $\varphi=\bigoplus\varphi(b).$

(ii) The proof is similar to that of part (i). Applying (\ref{T2.4}ii) to $\mathscr F_0'$ on $X_0'$ we get a finite increasing filtration $W'$ of $\mathscr F_0'$ by lisse subsheaves $\mathscr F_0'^i$ with the desired property. Pulling back this filtration via $p_j:X_0''\to X_0'$ we get the filtration by punctual weights of $p_j^*\mathscr F_0'.$ By the functoriality in (\ref{T2.4}ii), the canonical isomorphism $\mu:p_1^*\mathscr F_0'\to p_2^*\mathscr F_0'$ maps $p_1^*\mathscr F_0'^i$ isomorphically onto $p_2^*\mathscr F_0'^i,$ satisfying the cocycle condition, and therefore the filtration $W'$ descends to a finite increasing filtration $W$ of $\mathscr F_0$ by certain subsheaves $\mathscr F_0^i.$ By (\cite{Ols3}, 9.1) they are lisse subsheaves.

For uniqueness, if $\widetilde{W}$ is another filtration of $\mathscr F_0$ by certain subsheaves $\widetilde{\mathscr F}_0^i$ with the desired property, then their restrictions to $X_0'$ are both equal to the filtration $W'$ by punctual weights, which is unique (\ref{T2.4}ii), so they are both obtained by descending this filtration $W',$ and therefore they are isomorphic. For functoriality, let $\mathscr G_0$ be another lisse $\iota$-mixed sheaf with integer punctual $\iota$-weights, let $V$ be its filtration by punctual weights, and let $\varphi:\mathscr F_0\to\mathscr G_0$ be a morphism. Pulling $\varphi$ back via $P$ we get a morphism $\varphi':\mathscr F_0'\to\mathscr G_0'$ on $X_0',$ and the diagram
$$
\xymatrix@C=.9cm{ p_1^*\mathscr F_0' \ar[r]^-{\mu_{\mathscr F_0}} \ar[d]_-{p_1^*\varphi'} & p_2^*\mathscr F_0' \ar[d]^-{p_2^*\varphi'} \\ p_1^*\mathscr G_0' \ar[r]_-{\mu_{\mathscr G_0}} & p_2^*\mathscr G_0'}
$$
commutes.
By (\ref{T2.4}ii) we have $\varphi'(\mathscr F_0'^i)\subset\mathscr G_0'^i,$ and the diagram
$$
\xymatrix@C=.9cm{ p_1^*\mathscr F_0'^i \ar[r]^-{\mu_{\mathscr F_0}} \ar[d]_-{p_1^*\varphi'} & p_2^*\mathscr F_0'^i \ar[d]^-{p_2^*\varphi'} \\ p_1^*\mathscr G_0'^i \ar[r]_-{\mu_{\mathscr G_0}} & p_2^*\mathscr G_0'^i}
$$
commutes for each $i.$ Let $\varphi'^i:\mathscr F_0'^i\to\mathscr G_0'^i$ be the restriction of $\varphi'.$ Then the $\varphi'^i$ descend to morphisms $\varphi^i:\mathscr F_0^i\to\mathscr G_0^i,$ which are restrictions of $\varphi.$ \end{proof}

\begin{subremark}\label{R2.7} One can prove a similar structure theorem for $\iota$-mixed sheaves on algebraic stacks over $\mathbb F_q:$ the proof of (\ref{L2.6}) carries over verbatim to the case of algebraic stacks, except that for a presentation $X_0'\to\mathscr X_0,$ the fiber product $X_0''=X_0'\times_{\mathscr X_0}X_0'$ may not be a scheme, so we use the case of algebraic spaces and replace every ``(\ref{T2.4})" in the proof by ``(\ref{L2.6})". It turns out that (\ref{T2.4}iii) also holds for algebraic stacks, as a consequence of the proof of (\ref{T1.4}). As we will not use these results in this paper, we do not give the proofs in detail here. See (\cite{Decom}, 2.1). \end{subremark}

\begin{proposition}\label{L2.8} Let $\mathscr X_0$ be an $\mathbb F_q$-algebraic stack, and let $P:X_0\to\mathscr X_0$ be a presentation (i.e. a smooth surjection with $X_0$ a scheme). Then a complex $K_0\in W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ is $\iota$-mixed if and only if $P^*K_0$ (resp. $P^!K_0$) is $\iota$-mixed. \end{proposition}

\begin{proof} We consider $P^*K_0$ first. The ``only if" part follows from (\ref{L2.3}iv). For the ``if" part, since $P^*$ is exact on sheaves and so $\mathscr H^i(P^*K_0)=P^*\mathscr H^i(K_0),$ we reduce to the case when $K_0=\mathscr F_0$ is a sheaf. So we assume the sheaf $\mathscr F_0':=P^*\mathscr F_0$ on $X_0$ is $\iota$-mixed, and want to show $\mathscr F_0$ is also $\iota$-mixed. The proof is similar to the argument in (\ref{L2.6}). Let $X_0''$ be the fiber product
$$
X_0''=\xymatrix@C=.7cm{ X_0\times_{\mathscr X_0}X_0 \ar[r]^-{p_1} \ar[d]_-{p_2} & X_0 \ar[d]^-P \\ X_0 \ar[r]_-P & \mathscr X_0.}
$$
Then $X_0''$ is an algebraic space of finite type. Applying (\ref{T2.4}i) to $\mathscr F_0'$ we get a decomposition $\mathscr F_0'=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0'(b).$ For $j=1,2,$ applying $p_j^*$ we get a decomposition
$$
p_j^*\mathscr F_0'=\bigoplus_{b\in\mathbb R/\mathbb Z}p_j^*\mathscr F_0'(b),
$$
which is the decomposition of $p_j^*\mathscr F_0'$ according to the weights mod $\mathbb Z.$ By the functoriality in (\ref{L2.6}i), the canonical isomorphism $\mu:p_1^*\mathscr F_0'\to p_2^*\mathscr F_0'$ takes the form $\bigoplus_{b\in \mathbb R/\mathbb Z}\mu_b,$ where $\mu_b:p_1^*\mathscr F_0'(b)\to p_2^*\mathscr F_0'(b)$ is an isomorphism satisfying the cocycle condition, as $\mu$ does. Therefore the decomposition of $\mathscr F_0'$ descends to a decomposition $\mathscr F_0=\bigoplus_{b\in\mathbb R/\mathbb Z}\mathscr F_0(b).$ The $\iota$-weights of the local Frobenius eigenvalues of $\mathscr F_0(b)$ at each point of $\mathscr X_0$ lie in the coset $b.$

Next we show that the $\mathscr F_0(b)$'s are $\iota$-mixed. Replacing $\mathscr F_0$ by a direct summand $\mathscr F_0(b)$ and then twisting it appropriately, we may assume its inverse image $\mathscr F_0'$ is $\iota$-mixed with integer punctual $\iota$-weights.
By (\ref{L2.3}v) we can shrink $\mathscr X_0$ to a nonempty open substack and assume $\mathscr F_0$ is lisse. Then $\mathscr F_0'$ is also lisse, and applying (\ref{T2.4}ii) we get a finite increasing filtration $W'$ of $\mathscr F_0'$ by lisse subsheaves $\mathscr F_0'^i,$ such that each $\text{Gr}^{W'}_i(\mathscr F_0')$ is punctually $\iota$-pure of weight $i.$ Pulling back this filtration via $p_j,$ we get a finite increasing filtration $p_j^*W'$ of $p_j^*\mathscr F_0',$ and since $\text{Gr}_i^{p_j^*W'}(p_j^*\mathscr F_0')=p_j^*\text{Gr}_i^{W'}(\mathscr F_0')$ is punctually $\iota$-pure of weight $i,$ it is the filtration by punctual weights given by (\ref{L2.6}ii). By functoriality, the canonical isomorphism $\mu:p_1^*\mathscr F_0'\to p_2^*\mathscr F_0'$ maps $p_1^*\mathscr F_0'^i$ isomorphically onto $p_2^*\mathscr F_0'^i,$ satisfying the cocycle condition. Therefore the filtration $W'$ of $\mathscr F_0'$ descends to a filtration $W$ of $\mathscr F_0,$ and $P^*\text{Gr}_i^W(\mathscr F_0)=\text{Gr}_i^{W'}(\mathscr F_0')$ is punctually $\iota$-pure of weight $i.$ Since $P$ is surjective, $\text{Gr}_i^W(\mathscr F_0)$ is also punctually $\iota$-pure of weight $i,$ and therefore $\mathscr F_0$ is $\iota$-mixed.

Next we consider $P^!K_0.$ We know that $P$ is smooth of relative dimension $d,$ for some function $d:\pi_0(X_0)\to \mathbb N.$ Let $X_0^0$ be a connected component of $X_0.$ Since $\pi_0(X_0)$ is finite, $X_0^0$ is both open and closed in $X_0,$ so $f:X_0^0\overset{j}{\to}X_0\overset{P}{\to}\mathscr X_0$ is smooth of relative dimension $d(X_0^0).$ Then $P^*K_0$ is $\iota$-mixed if and only if $f^*K_0=j^*P^*K_0$ is $\iota$-mixed for the inclusion $j$ of every connected component, if and only if $f^!K_0=f^*K_0\langle d(X_0^0)\rangle$ is $\iota$-mixed, if and only if $P^!K_0$ is $\iota$-mixed, since $f^!=j^!P^!=j^*P^!.$ \end{proof}

\begin{subremark}\label{Laff} As a consequence of Lafforgue's theorem on the Langlands correspondence for function fields and a result of Ramanujan--Petersson type, one deduces that every complex on any $\mathbb F_q$-algebraic stack is $\iota$-mixed, for any $\iota.$ To see this, by (\ref{L2.8}, \ref{L2.3}v,ii), we reduce to the case of an irreducible lisse sheaf on a smooth (in particular, normal) $\mathbb F_q$-scheme. By (\cite{Del2}, 1.3.6) we reduce to the case where the determinant of the lisse sheaf has finite order, and Lafforgue's result applies (\cite{Lau}, 1.3). In the following, when we want to emphasize the assumption of $\iota$-mixedness, we will still write $``W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})",$ although it equals the full category $W(\mathscr X_0,\overline{\mathbb Q}_{\ell}).$ \end{subremark}

Next we show the stability of $\iota$-mixedness, first for a few operations on complexes on algebraic spaces, and then for all six operations on stacks. Denote by $D_{\mathscr X_0}$ or just $D$ the dualizing functor $R\mathscr Hom(-,K_{\mathscr X_0}),$ where $K_{\mathscr X_0}$ is a dualizing complex on $\mathscr X_0$ (\cite{LO2}, $\S7$).

\begin{blank}\label{R2.9} Recall (\cite{KW}, II 12.2) that, for $\mathbb F_q$-schemes and bounded complexes of sheaves on them, the operations $f_*,f_!,f^*,f^!,D$ and $-\otimes^L-$ all preserve $\iota$-mixedness. Since we are working with $\overline{\mathbb Q}_{\ell}$-coefficients, $\otimes^L=\otimes.$ \end{blank}

\begin{lemma}\label{L2.10} Let $f:X_0\to Y_0$ be a morphism of $\mathbb F_q$-algebraic spaces.
Then the operations $-\otimes-, D_{X_0}, f_*$ and $f_!$ all preserve $\iota$-mixedness, namely, they induce functors
\begin{gather*} -\otimes-:W^-_m(X_0,\overline{\mathbb Q}_{\ell})\times W^-_m(X_0,\overline{\mathbb Q}_{\ell})\longrightarrow W^-_m(X_0,\overline{\mathbb Q}_{\ell}), \\ D:W_m(X_0,\overline{\mathbb Q}_{\ell})\longrightarrow W_m(X_0,\overline{\mathbb Q}_{\ell})^{\emph{op}}, \\ f_*:W^+_m(X_0,\overline{\mathbb Q}_{\ell})\longrightarrow W^+_m(Y_0,\overline{\mathbb Q}_{\ell})\quad\emph{and}\quad f_!:W^-_m(X_0,\overline{\mathbb Q}_{\ell})\longrightarrow W^-_m(Y_0,\overline{\mathbb Q}_{\ell}). \end{gather*} \end{lemma}

\begin{proof} We will reduce to the case of unbounded complexes on schemes, and then prove the scheme case. Let $P:X_0'\to X_0$ be an \'etale presentation.

\textbf{Reduction for $\otimes$.} For $K_0,L_0\in W^-_m(X_0,\overline{\mathbb Q}_{\ell}),$ we have $P^*(K_0\otimes L_0)=(P^*K_0)\otimes(P^*L_0),$ and the reduction follows from (\ref{L2.8}).

\textbf{Reduction for $D$.} For $K_0\in W_m(X_0,\overline{\mathbb Q}_{\ell}),$ we have $P^*DK_0=DP^!K_0,$ so the reduction follows from (\ref{L2.8}).

\textbf{Reduction for $f_*$ and $f_!$.} By definition (\cite{LO2}, 9.1) we have $f_*=Df_!D,$ so it suffices to prove the case for $f_!.$ Let $K_0\in W^-_m(X_0,\overline{\mathbb Q}_{\ell}),$ and let $P':Y_0'\to Y_0$ and $X_0'\to X_0\times_{Y_0}Y_0'$ be \'etale presentations:
$$
\xymatrix@C=1cm{ X_0' \ar @/^2pc/[rr]^-g \ar[r] \ar[rd]_-P & (X_0)_{Y_0'} \ar[r]_-{f'} \ar[d]^-h & Y_0' \ar[d]^-{P'} \\ & X_0 \ar[r]^-f & Y_0.}
$$
By smooth base change (\cite{LO2}, 12.1) we have $P'^*f_!K_0=f'_!h^*K_0.$ Replacing $f$ by $f'$ we can assume $Y_0$ is a scheme. Let $j:U_0\to X_0$ be an open dense subscheme (\cite{Knu}, II 6.7), with complement $i:Z_0\to X_0.$ Applying $f_!$ to the exact triangle
$$
\xymatrix@C=.5cm{ j_!j^*K_0 \ar[r] & K_0 \ar[r] & i_*i^*K_0 \ar[r] &}
$$
we get
$$
\xymatrix@C=.5cm{ (fj)_!j^*K_0 \ar[r] & f_!K_0 \ar[r] & (fi)_!i^*K_0 \ar[r] &.}
$$
By (\ref{L2.3}iii) and noetherian induction, we can replace $X_0$ by $U_0,$ and reduce to the case where $f$ is a morphism between schemes. This finishes the reduction to the case of unbounded complexes on schemes, and now we prove this case.

For the Verdier dual $D_{X_0},$ since the dualizing complex $K_{X_0}$ has finite quasi-injective dimension, for every $K_0\in W_m(X_0,\overline{\mathbb Q}_{\ell})$ and every integer $i,$ there exist integers $a$ and $b$ such that
$$
\mathscr H^i(D_{X_0}K_0)\simeq\mathscr H^i(D_{X_0}\tau_{[a,b]}K_0),
$$
and by (\ref{R2.9}), we see that $D_{X_0}K_0$ is $\iota$-mixed.

Next we prove the case of $\otimes.$ For $K_0$ and $L_0\in W^-_m(X_0,\overline{\mathbb Q}_{\ell}),$ we have
$$
\mathscr H^r(K_0\otimes L_0)= \bigoplus_{i+j=r}\mathscr H^i(K_0)\otimes\mathscr H^j(L_0).
$$
The result follows from (\ref{R2.9}).

Finally we prove the case of $f_*$ and $f_!.$ Let $K_0\in W^+_m(X_0,\overline{\mathbb Q}_{\ell}).$ Then we have the spectral sequence
$$
E_2^{ij}=R^if_*(\mathscr H^jK_0)\Longrightarrow R^{i+j}f_*K_0,
$$
and the result follows from (\ref{R2.9}) and (\ref{L2.3}i, ii). The case for $f_!=Df_*D$ also follows. \end{proof}

Finally we prove the main result of this section. This generalizes (\cite{Beh2}, 6.3.7).

\begin{theorem}\label{T2.11} Let $f:\mathscr X_0\to\mathscr Y_0$ be a morphism of $\mathbb F_q$-algebraic stacks.
Then the operations $f_*,f_!,f^*,f^!,D_{\mathscr X_0},-\otimes-$ and $R\mathscr Hom(-,-)$ all preserve $\iota$-mixedness, namely, they induce functors
\begin{gather*} f_*:W_m^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m^+(\mathscr Y_0,\overline{\mathbb Q}_{\ell}), \qquad f_!:W_m^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m^-(\mathscr Y_0,\overline{\mathbb Q}_{\ell}), \\ f^*:W_m(\mathscr Y_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell}), \qquad f^!:W_m(\mathscr Y_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell}), \\ D:W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})^{\emph{op}}, \\ \otimes:W_m^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \times W_m^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m^-(\mathscr X_0,\overline{\mathbb Q}_{\ell})\quad\emph{and} \\ R\mathscr Hom(-,-):W_m^-(\mathscr X_0,\overline{\mathbb Q}_{\ell})^{\emph{op}}\times W_m^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}) \longrightarrow W_m^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}). \end{gather*} \end{theorem}

\begin{proof} Recall from (\cite{LO2}, 9.1) that $f_!:=Df_*D$ and $f^!:=Df^*D.$ By (\cite{LO2}, 6.0.12, 7.3.1), for $K_0\in W^-(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ and $L_0\in W^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ we have
\begin{equation*} \begin{split} D(K_0\otimes DL_0) &=R\mathscr Hom(K_0\otimes DL_0,K_{\mathscr X_0})=R\mathscr Hom(K_0,R\mathscr Hom(DL_0,K_{\mathscr X_0})) \\ &=R\mathscr Hom(K_0,DDL_0)=R\mathscr Hom(K_0,L_0). \end{split} \end{equation*}
Therefore it suffices to prove the result for $f_*,f^*,D$ and $-\otimes-.$ The case of $f^*$ is proved in (\ref{L2.3}iv).

For $D:$ let $P:X_0\to\mathscr X_0$ be a presentation. Since $P^*D=DP^!,$ the result follows from (\ref{L2.8}) and (\ref{L2.10}).

For $\otimes:$ since $P^*(K_0\otimes L_0)=P^*K_0\otimes P^*L_0,$ the result follows from (\ref{L2.8}) and (\ref{L2.10}).

For $f_*$ and $f_!:$ we will start with $f_!,$ in order to use smooth base change to reduce to the case when $\mathscr Y_0$ is a scheme, and then turn to $f_*$ in order to use cohomological descent. Let $K_0\in W^-_m(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ let $P:Y_0\to\mathscr Y_0$ be a presentation, and form the 2-Cartesian diagram
$$
\xymatrix@C=.8cm{ (\mathscr X_0)_{Y_0} \ar[r]^-{f'} \ar[d]_-{P'} & Y_0 \ar[d]^P \\ \mathscr X_0 \ar[r]^-f & \mathscr Y_0.}
$$
We have (\cite{LO2}, 12.1) that $P^*f_!K_0=f'_!P'^*K_0,$ so by (\ref{L2.8}) we can assume $\mathscr Y_0=Y_0$ is a scheme.

Now we switch to $f_*,$ where $f:\mathscr X_0\to Y_0$ and $K_0\in W_m^+(\mathscr X_0,\overline{\mathbb Q}_{\ell}).$ Let $X_0\to\mathscr X_0$ be a presentation. It gives a strictly simplicial smooth hypercover $X_{0,\bullet}$ of $\mathscr X_0:$
$$
X_{0,n}:=\underbrace{X_0\times_{\mathscr X_0}\cdots\times_{\mathscr X_0}X_0}_{n+1\text{\ factors}},
$$
where each $X_{0,n}$ is an $\mathbb F_q$-algebraic space of finite type. Let $f_n:X_{0,n}\to Y_0$ be the restriction of $f$ to $X_{0,n}.$ Then we have the spectral sequence (\cite{LO2}, 10.0.9)
$$
E_1^{ij}=R^jf_{i*}(K_0|_{X_{0,i}})\Longrightarrow R^{i+j}f_*K_0.
$$
Since the $f_i$'s are morphisms of algebraic spaces, the result follows from (\ref{L2.10}) and (\ref{L2.3}i, ii).
\end{proof}

\begin{remark}\label{mixed-variant} In fact, we can take the dualizing complex $K_{\mathscr X_0}$ to be \textit{mixed}, and the results in this section hold (and can be proved verbatim) for \textit{mixed} complexes. In particular, mixedness is preserved by the six operations and the Verdier dualizing functor for stacks (if we take a mixed dualizing complex). \end{remark}

\section{Stratifiable complexes}

In this section, we use the same notations and hypotheses as in (\ref{adic-setting}). For the purpose of this article, it suffices to take $S$ to be $\text{Spec }k$ for an algebraically closed field $k$ of characteristic not equal to $\ell,$ but we want to work in the general setting (namely, any scheme that satisfies (LO)) for future applications; for instance see \cite{Decom}. Let $\mathcal X,\mathcal Y,\ldots$ be $S$-algebraic stacks of finite type. By ``sheaves" we mean ``lisse-\'etale sheaves". ``Jordan-H\"older" and ``locally constant constructible" are abbreviated as ``JH" and ``lcc" respectively.

A \textit{stratification} $\mathscr S$ of an $S$-algebraic stack $\mathcal X$ is a finite set of disjoint locally closed substacks that cover $\mathcal X.$ If $\mathscr F$ is a lcc $(\Lambda_n)_{\mathcal X}$-module, a \textit{decomposition series} of $\mathscr F$ is a filtration by lcc $\Lambda_{\mathcal X}$-subsheaves, such that the successive quotients are simple $\Lambda_{\mathcal X}$-modules. Note that the filtration is always finite, and the simple successive quotients, which are $(\Lambda_0)_{\mathcal X}$-modules, are independent (up to order) of the decomposition series chosen. They are called the \textit{JH components of $\mathscr F.$}

\begin{definition}\label{D3.1} (i) A complex $K=(K_n)_n\in\mathscr D_c(\mathscr A)$ is said to be \emph{stratifiable}, if there exists a pair $(\mathscr S,\mathcal L),$ where $\mathscr S$ is a stratification of $\mathcal X,$ and $\mathcal L$ is a function that assigns to every stratum $\mathcal U\in\mathscr S$ a finite set $\mathcal{L(U)}$ of isomorphism classes of simple (i.e.
irreducible) lcc $\Lambda_0$-modules on $\mathcal U_{\emph{lis-\'et}},$ such that for each pair $(i,n)$ of integers, the restriction of the sheaf $\mathscr H^i(K_n)\in \emph{Mod}_c(\mathcal X_{\emph{lis-\'et}},\Lambda_n)$ to each stratum $\mathcal U\in\mathscr S$ is lcc, with JH components (as a $\Lambda_{\mathcal U}$-module) contained in $\mathcal{L(U)}.$ We say that the pair $(\mathscr S,\mathcal L)$ \emph{trivializes} $K$ (or $K$ is $(\mathscr S,\mathcal L)$\emph{-stratifiable}), and denote the full subcategory of $(\mathscr S,\mathcal L)$-stratifiable complexes by $\mathscr D_{\mathscr S,\mathcal L}(\mathscr A).$ The full subcategory of stratifiable complexes in $\mathscr D_c(\mathscr A)$ is denoted by $\mathscr D_c^{\emph{stra}}(\mathscr A).$

(ii) Let $D_c^{\emph{stra}}(\mathcal X,\Lambda)$ be the essential image of $\mathscr D_c^{\emph{stra}}(\mathscr A)$ in $D_c(\mathcal X,\Lambda);$ we call the objects of $D_c^{\emph{stra}}(\mathcal X,\Lambda)$ \emph{stratifiable complexes of sheaves.}

(iii) Let $E_{\lambda}$ be a finite extension of $\mathbb Q_{\ell}$ with ring of integers $\mathscr O_{\lambda}.$ Then the definition above applies to $\Lambda=\mathscr O_{\lambda}.$ Let $D_c^{\emph{stra}}(\mathcal X,E_{\lambda})$ be the essential image of $D_c^{\emph{stra}}(\mathcal X,\mathscr O_{\lambda})$ in $D_c(\mathcal X,E_{\lambda}).$ Finally we define
$$
D_c^{\emph{stra}}(\mathcal X,\overline{\mathbb Q}_{\ell})= \emph{2-colim}_{E_{\lambda}}D_c^{\emph{stra}}(\mathcal X, E_{\lambda}).
$$
\end{definition}

\begin{subremark}\label{R3.2} (i) This notion is due to Beilinson, Bernstein and Deligne \cite{BBD}, and Behrend \cite{Beh2} used it to define his derived category for stacks. Many results in this section are borrowed from \cite{Beh2}, but reformulated and reproved in terms of the derived categories defined in \cite{LO2}.

(ii) Let $\mathscr F$ be a $\Lambda_n$-sheaf trivialized by a pair $(\mathscr S,\mathcal L),$ and let $\mathscr G$ be a sub-quotient sheaf of $\mathscr F.$ Then $\mathscr G$ is not necessarily trivialized by $(\mathscr S,\mathcal L).$ But if $\mathscr G$ is lcc on each stratum in $\mathscr S,$ then it is necessarily trivialized by $(\mathscr S,\mathcal L).$ \end{subremark}

\begin{blank}\label{refinement} We say that the pair $(\mathscr S',\mathcal L')$ \textit{refines the pair} $(\mathscr S,\mathcal L),$ if $\mathscr S'$ refines $\mathscr S,$ and for every $\mathcal V\in\mathscr S',\ \mathcal U\in\mathscr S$ and $L\in\mathcal{L(U)}$ such that $\mathcal V\subset\mathcal U,$ the restriction $L|_{\mathcal V}$ is trivialized by $\mathcal{L'(V)}.$ Given a pair $(\mathscr S,\mathcal L)$ and a stratification $\mathscr S'$ refining $\mathscr S,$ there is a canonical way to define $\mathcal L'$ such that $(\mathscr S',\mathcal L')$ refines $(\mathscr S,\mathcal L):$ for every $\mathcal V\in\mathscr S',$ we take $\mathcal{L'(V)}$ to be the set of isomorphism classes of JH components of the lcc sheaves $L|_{\mathcal V}$ for $L\in\mathcal{L(U)},$ where $\mathcal U$ ranges over all strata in $\mathscr S$ that contain $\mathcal V.$ It is clear that the set of all pairs $(\mathscr S,\mathcal L)$ forms a filtered direct system.

A pair $(\mathscr S,\mathcal L)$ is said to be \textit{tensor closed} if for every $\mathcal U\in\mathscr S$ and $L,M\in\mathcal{L(U)},$ the sheaf tensor product $L\otimes_{\Lambda_0}M$ has JH components in $\mathcal{L(U)}.$ For a pair $(\mathscr S,\mathcal L),$ a \textit{tensor closed hull} of this pair is a tensor closed refinement.
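For instance (a trivial example, recorded only for orientation): if every stratum $\mathcal U\in\mathscr S$ is connected and every $\mathcal{L(U)}=\{\Lambda_0\}$ consists of the constant simple module alone, then the pair $(\mathscr S,\mathcal L)$ is already tensor closed, since $\Lambda_0\otimes_{\Lambda_0}\Lambda_0\simeq\Lambda_0.$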
\end{blank}

\begin{lemma}\label{L3.2.5} Every pair $(\mathscr S,\mathcal L)$ can be refined to a tensor closed pair $(\mathscr S',\mathcal L').$ \end{lemma}

\begin{proof} First we show that, for a lcc sheaf of sets $\mathscr F$ on $\mathcal X_{\text{lis-\'et}},$ there exists a finite \'etale morphism $f:\mathcal Y\to\mathcal X$ of algebraic $S$-stacks such that $f^{-1}\mathscr F$ is constant. Consider the total space $[\mathscr F]$ of the sheaf $\mathscr F.$ Precisely, this is the category fibered in groupoids over $(\text{Aff}/S)$ with the underlying category described as follows. Its objects are triples $(U\in\text{obj(Aff}/S),u\in\text{obj }\mathcal X(U),s\in(u^{-1}\mathscr F)(U)),$ and morphisms from $(U,u,s)$ to $(V,v,t)$ are pairs $(f:U\to V,\alpha:vf\Rightarrow u)$ such that $t$ is mapped to $s$ under the identification $\alpha:f^{-1}v^{-1}\mathscr F\cong u^{-1}\mathscr F.$ The map $(U,u,s)\mapsto(U,u)$ gives a map $g:[\mathscr F]\to\mathcal X,$ which is representable finite \'etale (because it is so locally). The pullback sheaf $g^{-1}\mathscr F$ on $[\mathscr F]$ has a global section, so its total space breaks up into two parts, one part being mapped isomorphically onto the base $[\mathscr F].$ By induction on the degree of $g$ we are done.

Next we show that, for a fixed representable finite \'etale morphism $\mathcal Y\to\mathcal X,$ there are only finitely many isomorphism classes of simple lcc $\Lambda_0$-sheaves on $\mathcal X$ that become constant when pulled back to $\mathcal Y.$ We can assume that both $\mathcal X$ and $\mathcal Y$ are connected. By the following lemma (\ref{Gal}), we reduce to the case where $\mathcal Y\to\mathcal X$ is Galois with group $G,$ for some finite group $G.$ Then simple lcc $\Lambda_0$-sheaves on $\mathcal X$ that become constant on $\mathcal Y$ correspond to simple left $\Lambda_0[G]$-modules, which are cyclic and hence isomorphic to $\Lambda_0[G]/I$ for maximal left ideals $I$ of $\Lambda_0[G].$ There are only finitely many such ideals, since $\Lambda_0[G]$ is a finite ring.

Also note that a lcc subsheaf of a constant constructible sheaf on a connected stack is also constant. Let $L$ be a lcc subsheaf on $\mathcal X$ of the constant sheaf associated to a finite set $M.$ Consider their total spaces. We have an inclusion of substacks $i:[L]\hookrightarrow\coprod_{m\in M}\mathcal X_m,$ where each part $\mathcal X_m$ is identified with $\mathcal X.$ Then $i^{-1}(\mathcal X_m)\to\mathcal X_m$ is finite \'etale, and is the inclusion of a substack, hence is either an equivalence or the inclusion of the empty substack, since $\mathcal X$ is connected. It follows that $L$ is also constant, associated to the subset of those $m\in M$ for which $i^{-1}(\mathcal X_m)\ne\emptyset.$

Finally we prove the lemma. Refining $\mathscr S$ if necessary, we assume all strata are connected stacks.
For each stratum $\mathcal U\in\mathscr S,$ let $\mathcal Y\to\mathcal U$ be a representable finite \'etale morphism such that all sheaves in $\mathcal{L(U)}$ become constant on $\mathcal Y.$ Then define $\mathcal{L'(U)}$ to be the set of isomorphism classes of simple lcc $\Lambda_0$-sheaves on $\mathcal U_{\text{lis-\'et}}$ which become constant on $\mathcal Y.$ For any $L$ and $M\in\mathcal{L'(U)},$ since all lcc subsheaves of $L\otimes_{\Lambda_0}M$ are constant on $\mathcal Y,$ we see that $L\otimes_{\Lambda_0}M$ has JH components in $\mathcal{L'(U)},$ and hence $(\mathscr S,\mathcal L')$ is a tensor closed refinement of $(\mathscr S,\mathcal L).$ \end{proof}

\begin{sublemma}\label{Gal} Let $\mathcal Y\to\mathcal X$ be a representable finite \'etale morphism between connected $S$-algebraic stacks. Then there exists a morphism $\mathcal Z\to\mathcal Y,$ such that $\mathcal Z$ is \emph{Galois} over $\mathcal X,$ i.e. it is a $G$-torsor for some finite group $G.$ \end{sublemma}

\begin{proof} Assume $\mathcal X$ is non-empty, and take a geometric point $\overline{x}\to\mathcal X.$ Let $\mathscr C$ be the category $\text{F\'Et}(\mathcal X)$ of representable finite \'etale morphisms to $\mathcal X,$ and let
$$
F:\mathscr C\to\text{FSet}
$$
be the fiber functor to the category of finite sets, namely $F(\mathcal Y)=Hom_{\mathcal X}(\overline{x},\mathcal Y).$ Note that this Hom, which is a priori a category, is a finite set, since $\mathcal Y\to\mathcal X$ is representable and finite. Then one can verify that $(\mathscr C,F:\mathscr C\to\text{FSet})$ satisfies the axioms of the Galois formalism in (\cite{SGA1}, Exp. V, 4), and one can use consequence g) on p. 121 of \textit{loc. cit.} For the reader's convenience, we follow Olsson's suggestion and explain the proof briefly. Basically, we will verify certain of the axioms (G1)--(G6), and deduce the conclusion as in \textit{loc. cit.}

First note that $\mathscr C,$ which is a priori a 2-category, is a 1-category. This is because for any 2-commutative diagram
$$
\xymatrix@C=.6cm @R=.5cm{ \mathcal Y \ar[dr] \ar[rr]^-f && \mathcal Z \ar[dl] \\ & \mathcal X &}
$$
where $\mathcal Y,\mathcal Z\in\mathscr C,$ the morphism $f$ is also representable (and finite \'etale), so $Hom_{\mathcal X}(\mathcal Y,\mathcal Z)$ is discrete. By definition, the functor $F$ preserves fiber-products, and $F(\mathcal X)$ is a one-point set. Every morphism $f:\mathcal Y\to\mathcal Z$ in $\mathscr C$ is finite \'etale, so if the degree of $f$ is 1, then $f$ is an isomorphism. This implies that the functor $F$ is \textit{conservative}, i.e. $f$ is an isomorphism if $F(f)$ is. In particular, $f$ is a monomorphism if and only if $F(f)$ is. This is because $f$ is a monomorphism if and only if $p_1:\mathcal Y\times_{\mathcal Z}\mathcal Y\to\mathcal Y$ is an isomorphism, and $F$ preserves fiber-products.

Since $f:\mathcal Y\to\mathcal Z$ is finite \'etale, its image stack $\mathcal Y'\subset\mathcal Z$ is both open and closed, hence $\mathcal Y'\to\mathcal Z$ is a monomorphism that is an isomorphism onto a direct summand of $\mathcal Z$ (i.e. $\mathcal Z=\mathcal Y'\coprod\mathcal Y''$ for some other open and closed substack $\mathcal Y''\subset \mathcal Z$). Also, since $\mathcal Y\to\mathcal Y'$ is epic and finite \'etale, it is \textit{strictly epic}, i.e. for every $\mathcal Z\in\mathscr C,$ the diagram
$$
Hom(\mathcal Y',\mathcal Z)\to Hom(\mathcal Y,\mathcal Z)\rightrightarrows Hom(\mathcal Y\times_{\mathcal Y'}\mathcal Y,\mathcal Z)
$$
is an equalizer.
Every object $\mathcal Y$ in $\mathscr C$ is artinian: for a chain of monomorphisms
$$
\cdots\to\mathcal Y_n\to\cdots\to\mathcal Y_2\to\mathcal Y_1\to\mathcal Y,
$$
we get a chain of injections
$$
\cdots\to F(\mathcal Y_n)\to\cdots\to F(\mathcal Y_1)\to F(\mathcal Y),
$$
which is stationary since $F(\mathcal Y)$ is a finite set, and so the first chain is also stationary since $F$ is conservative. Since $F$ is left exact and every object in $\mathscr C$ is artinian, by (\cite{Gro2}, 3.1) the functor $F$ is \textit{strictly pro-representable}, i.e. there exists a projective system $P=\{P_i;i\in I\}$ of objects in $\mathscr C$ indexed by a filtered partially ordered set $I,$ with epic transition morphisms $\varphi_{ij}:P_j\to P_i\ (i\le j),$ such that there is a natural isomorphism of functors
$$
F\overset{\sim}{\longrightarrow}Hom(P,-):=\text{colim}_IHom(P_i,-).
$$
Let $\psi_i:P\to P_i$ be the canonical projection in the category $\text{Pro}(\mathscr C)$ of pro-objects of $\mathscr C.$ We may assume that every epimorphism $P_j\to\mathcal Z$ in $\mathscr C$ is isomorphic to $P_j\overset{\varphi_{ij}}{\to}P_i$ for some $i\le j.$ This is because one can add $P_j\to\mathcal Z$ into the projective system $P$ without changing the functor it represents. Also one can show that the $P_i$'s are connected (cf. \textit{loc. cit.}), and morphisms in $\mathscr C$ between connected stacks are strictly epic.

Given $\mathcal Y\in\mathscr C,$ we now show that there exists an object $\mathcal Z\to\mathcal X$ that is Galois and factors through $\mathcal Y.$ Since $F(\mathcal Y)$ is a finite set, there exists an index $j\in I$ such that all maps $P\to\mathcal Y$ factor through $P\overset{\psi_j}{\to}P_j.$ This means that the canonical map
$$
P\to\mathcal Y^J:=\underbrace{\mathcal Y\times_{\mathcal X}\cdots\times_{\mathcal X}\mathcal Y}_{\#J\text{ factors}}, \quad\text{where }J:=F(\mathcal Y)=Hom_{\text{Pro}(\mathscr C)}(P,\mathcal Y),
$$
factors as
$$
\xymatrix@C=.7cm{ P \ar[r]^-{\psi_j} & P_j \ar[r]^-A & \mathcal Y^J.}
$$
Let $P_j\to P_i\overset{B}{\to}\mathcal Y^J$ be the factorization of $A$ into a composition of an epimorphism and a monomorphism $B.$ We claim that $P_i$ is Galois over $\mathcal X.$

Since $F(P_i)$ is a finite set, there exists an index $k\in I$ such that all maps $P\to P_i$ factor through $P\overset{\psi_k}{\to}P_k.$ Fix any $v:P_k\to P_i.$ To show $P_i$ is Galois, it suffices to show that $\text{Aut}(P_i)$ acts transitively on $F(P_i)=Hom(P_k,P_i),$ i.e. there exists a $\sigma\in\text{Aut}(P_i)$ making the triangle commute:
$$
\xymatrix@C=1cm @R=.7cm{ P_k \ar[dr]_-{\varphi_{ik}} \ar[r]^-v & P_i \ar[d]^-{\sigma} \\ & P_i.}
$$
For every $u\in J=Hom(P_i,\mathcal Y),$ we have $u\circ v\in Hom(P_k,\mathcal Y),$ so there exists a $u'\in Hom(P_i,\mathcal Y)$ making the diagram commute:
$$
\xymatrix@C=1cm @R=.7cm{ P_k \ar[r]^-v \ar[d]_-{\varphi_{ik}} & P_i \ar[d]^-u \\ P_i \ar[r]_-{u'} & \mathcal Y.}
$$
Since $v$ is epic, the function $u\mapsto u':J\to J$ is injective, hence a bijection. Let $\alpha:\mathcal Y^J\to \mathcal Y^J$ be the isomorphism induced by the map $u\mapsto u'.$ Then the diagram
$$
\xymatrix@C=1cm @R=.7cm{ P_k \ar[r]^-v \ar[dr]_-{\varphi_{ik}} & P_i \ar[r]^-B & \mathcal Y^J \ar[d]^-{\alpha} \\ & P_i \ar[r]_-B & \mathcal Y^J}
$$
commutes. By the uniqueness of the factorization of the map $P_k\to\mathcal Y^J$ into the composition of an epimorphism and a monomorphism, there exists a $\sigma\in\text{Aut}(P_i)$ such that $\sigma\circ v=\varphi_{ik}.$ This finishes the proof.
\end{proof}

We give some basic properties of stratifiable complexes.

\begin{lemma}\label{L3.3} (i) $\mathscr D_c^{\emph{stra}}(\mathscr A)$ (resp. $D_c^{\emph{stra}}(\mathcal X,\Lambda)$) is a triangulated subcategory of $\mathscr D_c(\mathscr A)$ (resp. $D_c(\mathcal X,\Lambda)$) with the induced standard $t$-structure.

(ii) If $f:\mathcal X\to\mathcal Y$ is an $S$-morphism, then $f^*:\mathscr D_c(\mathscr A(\mathcal Y))\to\mathscr D_c(\mathscr A(\mathcal X))$ (resp. $f^*:D_c(\mathcal Y,\Lambda)\to D_c(\mathcal X,\Lambda)$) preserves stratifiability.

(iii) If $\mathscr S$ is a stratification of $\mathcal X,$ then $K\in\mathscr D_c(\mathscr A(\mathcal X))$ is stratifiable if and only if $K|_V$ is stratifiable for every $V\in\mathscr S.$

(iv) Let $P:X\to\mathcal X$ be a presentation, and let $K=(K_n)_n\in\mathscr D_c(\mathscr A(\mathcal X)).$ Then $K$ is stratifiable if and only if $P^*K$ is stratifiable.

(v) $D_c^{\emph{stra}}(\mathcal X,\Lambda)$ contains $D_c^b(\mathcal X,\Lambda),$ and the heart of $D_c^{\emph{stra}}(\mathcal X,\Lambda)$ is the same as that of $D_c(\mathcal X,\Lambda)$ (\ref{R2.2}i).

(vi) Let $K\in\mathscr D_c(\mathscr A)$ be a normalized complex (\cite{LO2}, 3.0.8). Then $K$ is trivialized by a pair $(\mathscr S,\mathcal L)$ if and only if $K_0$ is trivialized by this pair.

(vii) Let $K\in\mathscr D_c^{\emph{stra}}(\mathscr A).$ Then its Tate twist $K(1)$ is also stratifiable. \end{lemma}

\begin{proof} (i) To show $\mathscr D_c^{\text{stra}}(\mathscr A)$ is a triangulated subcategory, it suffices to show (\cite{SGA4.5}, p.271) that for every exact triangle $K'\to K\to K''\to K'[1]$ in $\mathscr D_c(\mathscr A),$ if $K'$ and $K''$ are stratifiable, so also is $K.$ Using refinement we may assume that $K'$ and $K''$ are trivialized by the same pair $(\mathscr S,\mathcal L).$ Consider the cohomology sequence of this exact triangle at level $n,$ restricted to a stratum $\mathcal U\in\mathscr S.$ By (\cite{Ols3}, 9.1), to show that a sheaf is lcc on $\mathcal U,$ one can pass to a presentation $U$ of the stack $\mathcal U.$ Then by (\cite{Mil1}, 20.3) and the 5-lemma, we see that the $\mathscr H^i(K_n)$'s are lcc on $\mathcal U,$ with JH components contained in $\mathcal{L(U)}.$ Therefore $\mathscr D_c^{\text{stra}}(\mathscr A)$ (and hence $D_c^{\text{stra}}(\mathcal X,\Lambda)$) is a triangulated subcategory. The $t$-structure is inherited by $\mathscr D_c^{\text{stra}}(\mathscr A)$ (and hence by $D_c^{\text{stra}}(\mathcal X,\Lambda)$) because, if $K\in\mathscr D_c(\mathscr A)$ is stratifiable, so also are its truncations $\tau_{\le r}K$ and $\tau_{\ge r}K.$

(ii) $f^*$ is exact on the level of sheaves, and takes a lcc sheaf to a lcc sheaf. If $(K_n)_n\in\mathscr D_c(\mathscr A(\mathcal Y))$ is trivialized by $(\mathscr S,\mathcal L),$ then $(f^*K_n)_n$ is trivialized by $(f^*\mathscr S,f^*\mathcal L),$ where $f^*\mathscr S=\{f^{-1}(V)|V\in\mathscr S\}$ and $(f^*\mathcal L)(f^{-1}(V))$ is the set of isomorphism classes of JH components of $f^*L,\ L\in\mathcal L(V).$ The case of $D_c(-,\Lambda)$ follows easily.

(iii) The ``only if" part follows from (ii). The ``if" part is clear: if $(\mathscr S_V,\mathcal L_V)$ is a pair on $V$ that trivializes $(K_n|_V)_n,$ then the pair $(\mathscr S_{\mathcal X},\mathcal L)$ on $\mathcal X,$ where $\mathscr S_{\mathcal X}=\cup\mathscr S_V$ and $\mathcal L=\{\mathcal L_V\}_{V\in\mathscr S},$ trivializes $(K_n)_n.$

(iv) The ``only if" part follows from (ii).
For the ``if" part, assume $P^*K$ is trivialized by a pair $(\mathscr S_X,\mathcal L_X)$ on $X.$ Let $U\in\mathscr S_X$ be an open stratum, and let $V\subset\mathcal X$ be the image of $U$ (\cite{LMB}, 3.7). Recall that for every $T\in\text{Aff}/S,\ V(T)$ is the full subcategory of $\mathcal X(T)$ consisting of objects $x$ that are locally in the essential image of $U(T),$ i.e. such that there exists an \'etale surjection $T'\to T$ in $\text{Aff}/S$ and $u'\in U(T'),$ such that the image of $u'$ in $\mathcal X(T')$ and $x|_{T'}$ are isomorphic. Then $V$ is an open substack of $\mathcal X$ (hence also an algebraic stack) and $P|_U:U\to V$ is a presentation. Replacing $P:X\to\mathcal X$ by $P|_U:U\to V$ and using noetherian induction and (iii), we may assume $\mathscr S_X=\{X\}.$

It follows from a theorem of Gabber \cite{Gab} that $P_*$ takes a bounded complex to a bounded complex. In fact, using base change by $P,$ we may assume that $P:Y\to X$ is a morphism from an $S$-algebraic space $Y$ to an $S$-scheme $X.$ Let $j:U\to Y$ be an open dense subscheme of $Y$ with complement $i:Z\to Y.$ For a bounded complex $L$ of $\Lambda_n$-sheaves on $Y,$ we have the exact triangle
$$
\xymatrix@C=.5cm{ (Pi)_*i^!L \ar[r] & P_*L \ar[r] & (Pj)_*j^*L \ar[r] &.}
$$
Gabber's theorem implies that $(Pj)_*j^*L$ is bounded, since $Pj:U\to X$ is a morphism between schemes. Note that the dualizing functor preserves boundedness, hence so does $i^!=D_Zi^*D_Y;$ therefore we may assume that $(Pi)_*i^!L$ is bounded, by noetherian induction. It follows that $P_*L$ is bounded.

Now take a pair $(\mathscr S,\mathcal L)$ on $\mathcal X$ that trivializes all the $P_*L$'s, for $L\in\mathcal L_X;$ this is possible since each $P_*L$ is bounded and $\mathcal L_X$ is a finite set. We claim that $K$ is trivialized by $(\mathscr S,\mathcal L).$

For each sheaf $\mathscr F$ on $\mathcal X,$ the natural map $\mathscr F\to R^0P_*P^*\mathscr F$ is injective. This follows from the sheaf axioms for the lisse-lisse topology, and the fact that the lisse-\'etale topos and the lisse-lisse topos are the same. Explicitly, to verify the injectivity on $X_U\to U$ for any $u\in\mathcal X(U),$ since the question is \'etale local on $U,$ one can assume $P:X_U\to U$ has a section $s:U\to X_U.$ Then the composition $\mathscr F_U\to R^0P_*P^*\mathscr F_U\to R^0P_*R^0s_*s^*P^*\mathscr F_U=\mathscr F_U$ of the two adjunctions is the adjunction for $P\circ s=\text{id},$ so the composite is an isomorphism, and the first map is injective.

We take $\mathscr F$ to be the cohomology sheaves $\mathscr H^i(K_n).$ Since $P^*\mathscr H^i(K_n)$ is an iterated extension of sheaves in $\mathcal L_X,$ we see that $P_*P^*\mathscr H^i(K_n),$ and in particular $R^0P_*P^*\mathscr H^i(K_n),$ are trivialized by $(\mathscr S,\mathcal L)$ by (i). Since $\mathscr H^i(K_n)$ is lcc (\cite{Ols3}, 9.1), by (\ref{R3.2}ii) we see that $\mathscr H^i(K_n)$ (hence $K$) is trivialized by $(\mathscr S,\mathcal L).$

(v) It suffices to show, by (i) and (\ref{R2.2}i), that all adic systems $M=(M_n)_n\in\mathscr A$ are stratifiable. By (iv) we may assume $\mathcal X=X$ is an $S$-scheme. Since $X$ is noetherian, there exists a stratification (\cite{SGA5}, VI, 1.2.6) of $X$ such that $M$ is lisse on each stratum.
By (iii) we may assume $M$ is lisse on $X.$ Let $\mathcal L$ be the set of isomorphism classes of JH components of the $\Lambda_0$-sheaf $M_0.$ We claim that $\mathcal L$ trivializes $M_n$ for all $n.$ Suppose it trivializes $M_{n-1}$ for some $n\ge1.$ Consider the sub-$\Lambda_n$-modules $\lambda M_n\subset M_n[\lambda^n]\subset M_n,$ where $M_n[\lambda^n]$ is the kernel of the map $\lambda^n:M_n\to M_n.$ Since $M$ is adic, we have exact sequences of $\Lambda_X$-modules
\begin{gather*} \xymatrix@C=.5cm{ 0 \ar[r] & \lambda M_n \ar[r] & M_n \ar[r] & M_0 \ar[r] & 0,} \\ \xymatrix@C=.5cm{ 0 \ar[r] & M_n[\lambda^n] \ar[r] & M_n \ar[r] & \lambda^nM_n \ar[r] & 0,}\quad\text{and} \\ \xymatrix@C=.5cm{ 0 \ar[r] & \lambda^nM_n \ar[r] & M_n \ar[r] & M_{n-1} \ar[r] & 0.} \end{gather*}
The natural surjection $M_n/\lambda M_n\to M_n/M_n[\lambda^n]$ implies that $\mathcal L$ trivializes $\lambda^nM_n,$ and therefore it also trivializes $M_n.$ By induction on $n$ we are done. Since $D_c^b\subset D_c^{\text{stra}}\subset D_c,$ and $D_c^b$ and $D_c$ have the same heart, it is clear that $D_c^{\text{stra}}$ has the same heart as them.

(vi) Applying $-\otimes^L_{\Lambda_n}K_n$ to the following exact sequence, viewed as an exact triangle in $\mathscr D(\mathcal X,\Lambda_n),$
$$
\xymatrix@C=.8cm{ 0 \ar[r] & \Lambda_{n-1} \ar[r]^-{1\mapsto\lambda} & \Lambda_n \ar[r] & \Lambda_0 \ar[r] & 0,}
$$
we get an exact triangle by (\cite{LO2}, 3.0.10)
$$
\xymatrix@C=.5cm{ K_{n-1} \ar[r] & K_n \ar[r] & K_0 \ar[r] &.}
$$
By induction on $n$ and (\ref{R3.4}) below, we see that $K$ is trivialized by $(\mathscr S,\mathcal L)$ if $K_0$ is.

(vii) Let $K=(K_n)_n.$ By definition $K(1)=(K_n(1))_n,$ where $K_n(1)=K_n\otimes^L_{\Lambda_n}\Lambda_n(1).$ Note that the sheaf $\Lambda_n(1)$ is a flat $\Lambda_n$-module: to show that $-\otimes_{\Lambda_n}\Lambda_n(1)$ preserves injections, one can pass to stalks at geometric points, over which we have a trivialization $\Lambda_n\simeq\Lambda_n(1).$ Suppose $K$ is $(\mathscr S,\mathcal L)$-stratifiable. Using the isomorphism
$$
\mathscr H^i(K_n)\otimes_{\Lambda_n}\Lambda_n(1)=\mathscr H^i(K_n\otimes^L_{\Lambda_n}\Lambda_n(1)),
$$
it suffices to show the existence of a pair $(\mathscr S,\mathcal L')$ such that for each $\mathcal U\in\mathscr S,$ the JH components of the lcc sheaves $L\otimes_{\Lambda_n}\Lambda_n(1)$ lie in $\mathcal L'(\mathcal U),$ for all $L\in\mathcal{L(U)}.$ Since $L$ is a $\Lambda_0$-module, we have
$$
L\otimes_{\Lambda_n}\Lambda_n(1)=(L\otimes_{\Lambda_n}\Lambda_0)\otimes_{\Lambda_n}\Lambda_n(1)=L\otimes_{\Lambda_n}(\Lambda_0\otimes_{\Lambda_n}\Lambda_n(1))=L\otimes_{\Lambda_n}\Lambda_0(1)=L\otimes_{\Lambda_0}\Lambda_0(1),
$$
so we can take $(\mathscr S,\mathcal L')$ to be a tensor closed hull of the pair obtained from $(\mathscr S,\mathcal L)$ by adjoining $\Lambda_0(1)$ to each $\mathcal{L(U)}.$ \end{proof}

\begin{subremark}\label{R3.4} In fact the proof of (\ref{L3.3}i) shows that $\mathscr D_{\mathscr S,\mathcal L}(\mathscr A)$ is a triangulated subcategory with the induced standard $t$-structure, for each fixed pair $(\mathscr S,\mathcal L).$ Let $D_{\mathscr S,\mathcal L}(\mathcal X,\Lambda)$ be the essential image of $\mathscr D_{\mathscr S,\mathcal L}(\mathscr A)$ in $D_c(\mathcal X,\Lambda);$ this is also a triangulated subcategory with the induced standard $t$-structure.
Also, if $E^{ij}_r\Longrightarrow E^n$ is a spectral sequence in the category $\mathscr A(\mathcal X),$ and the $E_r^{ij}$'s are trivialized by $(\mathscr S,\mathcal L)$ for all $i,j,$ then all the $E^n$'s are trivialized by $(\mathscr S,\mathcal L).$ \end{subremark} We denote by $D_c^{\dagger,\text{stra}}(\mathcal X,\Lambda),$ for $\dagger=\pm,b,$ the full subcategory of $\dagger$-bounded stratifiable complexes, equipped with the induced $t$-structure. The following is a key result, which will be used later to show the stability of stratifiability under the six operations. Recall that $M\mapsto\widehat{M}=L\pi^*R\pi_*M$ is the normalization functor, where $\pi:\mathcal X^{\mathbb N}\to\mathcal X$ is the morphism of topoi in (\cite{LO2}, 2.1), mentioned in (\ref{normalization}). \begin{proposition}\label{L3.4.5} For a pair $(\mathscr S,\mathcal L)$ on $\mathcal X,$ if $M\in\mathscr D_{\mathscr S,\mathcal L}(\mathscr A),$ then $\widehat{M}\in\mathscr D_{\mathscr S,\mathcal L}(\mathscr A),$ too. In particular, if $K\in D_c(\mathcal X,\Lambda),$ then $K$ is stratifiable if and only if its normalization $\widehat{K}\in\mathscr D_c(\mathscr A)$ is stratifiable. \end{proposition} \begin{proof} First, we will reduce to the case where $M$ is essentially bounded (i.e. $\mathscr H^iM$ is AR-null for $|i|\gg0$). Let $P:X\to\mathcal X$ be a presentation. The $\ell$-cohomological dimension of $X_{\text{\'et}}$ is finite, by the assumption (LO) on $S.$ Therefore, by (\cite{LO2}, 2.1.i), the normalization functor for $X$ has finite cohomological dimension, and the same is true for $\mathcal X$ since $P^*\widehat{M}=\widehat{P^*M},$ by (\cite{LO2}, 2.2.1, 3.0.11). This implies that for each integer $i,$ there exist integers $a$ and $b$ with $a\le b,$ such that $\mathscr H^i(\widehat{M})=\mathscr H^i(\widehat{\tau_{[a,b]}M}).$ Since $\tau_{[a,b]}M$ is also trivialized by $(\mathscr S,\mathcal L),$ we may assume $M\in\mathscr D_{\mathscr S,\mathcal L}^b(\mathscr A(\mathcal X)).$ Since $\widehat{M}$ is normalized, by (\ref{L3.3}vi), it suffices to show that $(\widehat{M})_0$ is trivialized by $(\mathscr S,\mathcal L).$ Using the projection formula and the flat resolution of $\Lambda_0$ $$ \xymatrix@C=.7cm{ 0 \ar[r] & \Lambda \ar[r]^-{\lambda} & \Lambda \ar[r]^-{\epsilon} & \Lambda_0 \ar[r] & 0,} $$ we have (\cite{LO2}, p.176) $$ (\widehat{M})_0=\Lambda_0\otimes^L_{\Lambda}R\pi_*M= R\pi_*(\pi^*\Lambda_0\otimes^L_{\Lambda_{\bullet}}M), $$ where $\pi^*\Lambda_0$ is the constant projective system defined by $\Lambda_0.$ Let $C\in\mathscr{D(A)}$ be the complex of projective systems $\pi^*\Lambda_0\otimes^L_{\Lambda_{\bullet}}M;$ it is a $\lambda$-complex, and $C_n=\Lambda_0\otimes^L_{\Lambda_n}M_n\in\mathscr D_c(\mathcal X,\Lambda_0).$ Recall (\cite{SGA5}, V, 3.2.3) that a projective system $(K_n)_n$ ringed by $\Lambda_{\bullet}$ in an abelian category is AR-adic if and only if

$\bullet$ it satisfies the condition (MLAR) (\cite{SGA5}, V, 2.1.1), hence (ML); in this case denote by $(N_n)_n$ the projective system of the universal images of $(K_n)_n;$ and

$\bullet$ there exists an integer $k\ge0$ such that the projective system $(L_n)_n:=(N_{n+k}\otimes\Lambda_n)_n$ is adic.

Moreover, $(K_n)_n$ is AR-isomorphic to $(L_n)_n.$ Now for each $i,$ the projective system $\mathscr H^i(C)$ is AR-adic (\ref{R2.2}i). Let $N^i=(N^i_n)_n$ be the projective system of the universal images of $\mathscr H^i(C),$ and choose an integer $k\ge0$ such that the system $L^i=(L^i_n)_n=(N^i_{n+k}\otimes\Lambda_n)_n$ is adic.
Since $N^i_{n+k}\subset\mathscr H^i(C_{n+k})$ is annihilated by $\lambda,$ we have $L^i_n=N^i_{n+k},$ and the transition morphism gives an isomorphism $$ \xymatrix@C=.6cm{ L^i_n\simeq L^i_n\otimes_{\Lambda_n}\Lambda_{n-1} \ar[r]^-{\sim} & L^i_{n-1}.} $$ This means the projective system $L^i$ is the constant system $\pi^*L^i_0.$ By (\cite{LO2}, 2.2.2) we have $R\pi_*\mathscr H^i(C)\simeq R\pi_*L^i,$ which is just $L^i_0$ by (\cite{LO2}, 2.2.3). The spectral sequence $$ R^j\pi_*\mathscr H^i(C)\Longrightarrow\mathscr H^{i+j}((\widehat{M})_0) $$ degenerates to isomorphisms $L^i_0\simeq\mathscr H^i((\widehat{M})_0),$ so we only need to show that $L^i_0$ is trivialized by $(\mathscr S,\mathcal L).$ Using the periodic $\Lambda_n$-flat resolution of $\Lambda_0$ $$ \xymatrix@C=.5cm{ \cdots \ar[r] & \Lambda_n \ar[r]^-{\lambda} & \Lambda_n \ar[r]^-{\lambda^n} & \Lambda_n \ar[r]^-{\lambda} & \Lambda_n \ar[r]^-{\epsilon} & \Lambda_0 \ar[r] & 0,} $$ we see that $\Lambda_0\otimes^L_{\Lambda_n}\mathscr H^j(M_n)$ is represented by the complex $$ \xymatrix@C=.5cm{ \cdots \ar[r] & \mathscr H^j(M_n) \ar[r]^-{\lambda^n} & \mathscr H^j(M_n) \ar[r]^-{\lambda} & \mathscr H^j(M_n) \ar[r] & 0,} $$ so $\mathscr H^i(\Lambda_0\otimes^L_{\Lambda_n}\mathscr H^j(M_n))$ are trivialized by $(\mathscr S,\mathcal L),$ for all $i,j.$ Since $M$ is essentially bounded, we have the spectral sequence $$ \mathscr H^i(\Lambda_0\otimes^L_{\Lambda_n}\mathscr H^j(M_n))\Longrightarrow\mathscr H^{i+j}(C_n), $$ from which we deduce (by (\ref{R3.4})) that the $\mathscr H^i(C_n)$'s are trivialized by $(\mathscr S, \mathcal L).$ The universal image $N^i_n$ is the image of $\mathscr H^i(C_{n+r})\to\mathscr H^i(C_n)$ for some $r\gg0,$ therefore the $N^i_n$'s (and hence the $L^i_n$'s) are trivialized by $(\mathscr S,\mathcal L).$ For the second claim, let $K\in D_c(\mathcal X,\Lambda).$ Since $K$ is isomorphic to the image of $\widehat{K}$ under the localization $\mathscr D_c(\mathscr A)\to D_c(\mathcal X,\Lambda)$ (\cite{LO2}, 3.0.14), we see that $K$ is stratifiable if $\widehat{K}$ is. Conversely, if $K$ is stratifiable, which means that it is isomorphic to the image of some $M\in\mathscr D_c^{\text{stra}} (\mathscr A),$ then $\widehat{K}=\widehat{M}$ is also stratifiable. \end{proof} \begin{anitem}\label{D-stra} For $K\in D_c(\mathcal X,\Lambda),$ we say that $K$ is $(\mathscr S,\mathcal L)$\textit{-stratifiable} if $\widehat{K}$ is, and (\ref{L3.4.5}) implies that $K\in D_{\mathscr S,\mathcal L}(\mathcal X,\Lambda)$ (cf. (\ref{R3.4})) if and only if $K$ is $(\mathscr S,\mathcal L)$-stratifiable. \end{anitem} \begin{corollary}\label{C3.5} (i) If $\mathscr S$ is a stratification of $\mathcal X,$ then $K\in D_c(\mathcal X,\Lambda)$ is stratifiable if and only if $K|_V$ is stratifiable for every $V\in\mathscr S.$ (ii) Let $K\in D_c(\mathcal X,\Lambda).$ Then $K$ is stratifiable if and only if its Tate twist $K(1)$ is. (iii) Let $P:X\to\mathcal X$ be a presentation, and let $K\in D_c(\mathcal X,\Lambda).$ Then $K$ is stratifiable if and only if $P^*K$ (resp. $P^!K$) is stratifiable. \end{corollary} \begin{proof} (i) The ``only if" part follows from (\ref{L3.3}ii). For the ``if" part, we first prove the following. \begin{sublemma}\label{pull-normalized} For an $S$-algebraic stack $\mathcal X$ locally of finite type, let $N\overset{u}{\to}M\to C\to N[1]$ be an exact triangle in $\mathscr D_c(\mathscr A),$ where $N$ is a normalized complex and $C$ is almost AR-null. 
Then the morphism $u$ is isomorphic to the natural map $\widehat{M}\to M.$ \end{sublemma} \begin{proof} Consider the following diagram $$ \xymatrix@C=1cm{ \widehat{N} \ar[r]^-{\widehat{u}} \ar[d]_-{\simeq} & \widehat{M} \ar[r] \ar[d] & \widehat{C} \ar[r] \ar[d] & \widehat{N}[1] \ar[d] \\ N \ar[r]^-u & M \ar[r] & C \ar[r] & N[1].} $$ Since $C$ is almost AR-null, we have $\widehat{C}=0$ by (\cite{LO2}, 2.2.2), and so $\widehat{u}$ is an isomorphism. \end{proof} Now let $f:\mathcal V\to\mathcal X$ be a morphism of $S$-algebraic stacks, and let $M\in\mathscr D_c(\mathscr A(\mathcal X)).$ We claim that $f^*\widehat{M}\simeq \widehat{f^*M}.$ Applying $f^*$ to the exact triangle $$ \xymatrix@C=.7cm{ \widehat{M} \ar[r] & M \ar[r] & C \ar[r] &} $$ we get $$ \xymatrix@C=.7cm{ f^*\widehat{M} \ar[r] & f^*M \ar[r] & f^*C \ar[r] &.} $$ By (\cite{LO1}, 4.3.2), $\widehat{M}_n=\text{hocolim}_N\tau_{\le N} \widehat{M}_n,$ and $-\otimes^L_{\Lambda_n}\Lambda_{n-1}$ and $f^*$ preserve homotopy colimit because they preserve infinite direct sums. Now that $\tau_{\le N} \widehat{M}_n$ and $\Lambda_{n-1}$ are bounded above complexes, we have $f^*(\tau_{\le N}\widehat{M}_n \otimes^L_{\Lambda_n}\Lambda_{n-1})\simeq f^*\tau_{\le N}\widehat{M}_n\otimes^L_{\Lambda_n} \Lambda_{n-1}$ (cf. the proof of (\cite{LO1}, 4.5.3)). Hence applying $f^*$ to the isomorphism $$ \xymatrix@C=.7cm{ \widehat{M}_n\otimes^L_{\Lambda_n}\Lambda_{n-1} \ar[r] & \widehat{M}_{n-1}} $$ we get an isomorphism $$ \xymatrix@C=.7cm{ f^*\widehat{M}_n\otimes^L_{\Lambda_n}\Lambda_{n-1} \ar[r] & f^*\widehat{M}_{n-1},} $$ and by (\cite{LO2}, 3.0.10), $f^*\widehat{M}$ is normalized. Also it is clear that $f^*C$ is AR-null. By (\ref{pull-normalized}) we have $f^*\widehat{M}\simeq\widehat{f^*M}.$ Therefore, the ``if" part follows from (\ref{L3.3}iii) and (\ref{L3.4.5}), since $\widehat{K}|_V\simeq\widehat{(K|_V)}.$ (ii) This follows from (\ref{L3.3}vii), since $\widehat{K}(1)=\widehat{K(1)}.$ (iii) For $P^*K,$ the ``only if" part follows from (\ref{L3.3}ii), and the ``if" part follows from (\ref{L3.3}iv) and (\ref{L3.4.5}), since $P^*\widehat{K}=\widehat{(P^*K)}$ (\cite{LO2}, 2.2.1, 3.0.11). Since $P$ is smooth of relative dimension $d,$ for some function $d:\pi_0(X)\to\mathbb N,$ we have $P^!K\simeq P^*K(d)[2d],$ so by (ii), $P^*K$ is stratifiable if and only if $P^!K$ is. \end{proof} Before proving the main result of this section, we prove some special cases. \begin{blank}\label{pushforward-unbounded} Let $f:X\to Y$ be a morphism of $S$-schemes. Then the $\Lambda_n$-dualizing complexes $K_{X,n}$ and $K_{Y,n}$ of $X$ and $Y$ respectively have finite quasi-injective dimensions, and are bounded by some integer independent of $n.$ Together with the base change theorem for $f_!,$ we see that there exists an integer $N>0$ depending only on $X,Y$ and $f,$ such that for any integers $a,b$ and $n$ with $n\ge0$ and any $M\in\mathscr D_c^{[a,b]}(X,\Lambda_n),$ we have $f_*M\in\mathscr D_c^{[a,b+N]}(Y,\Lambda_n).$ This implies that for each $n,$ the functor (defined using $K$-injective resolutions; cf. (\cite{Spa}, 6.7)) $$ f_*:\mathscr D(X,\Lambda_n)\to\mathscr D(Y,\Lambda_n) $$ restricts to $$ f_*:\mathscr D_c(X,\Lambda_n)\to\mathscr D_c(Y,\Lambda_n). $$ Moreover, for $M\in\mathscr{D(A}(X))$ with constructible $\mathscr H^j(M_n)$'s (for all $j$ and $n$) and for each $i\in\mathbb Z,$ there exist integers $a<b$ such that $$ R^if_*M\simeq R^if_*\tau_{[a,b]}M. 
$$ In particular, if $M$ is a $\lambda$-complex on $X,$ then $R^if_*M$ is AR-adic for each $i,$ and hence $f_*M= (f_*M_n)_n$ is a $\lambda$-complex on $Y.$ This enables us to define $$ f_*:D_c(X,\Lambda)\to D_c(Y,\Lambda) $$ to be $K\mapsto Qf_*\widehat{K},$ where $Q:\mathscr D_c (\mathscr A(Y))\to D_c(Y,\Lambda)$ is the localization functor. It agrees with the definition in (\cite{LO2}, 8) when restricted to $D_c^+(X,\Lambda),$ and for each $i\in \mathbb Z$ and $K\in D_c(X,\Lambda),$ there exist integers $a<b$ such that $R^if_*K\simeq R^if_*\tau_{[a,b]}K.$ \end{blank} \begin{lemma}\label{L3.8} (i) If $f:X\to Y$ is a morphism of $S$-schemes, and $K\in D_c(X,\Lambda)$ is trivialized by $(\{X\},\mathcal L)$ for some $\mathcal L,$ then $f_*K$ is stratifiable. (ii) Let $\mathcal X$ be an $S$-algebraic stack that has a connected presentation (i.e. there exists a presentation $P:X\to\mathcal X$ with $X$ a connected $S$-scheme). Let $K_{\mathcal X}$ and $K'_{\mathcal X}$ be two $\Lambda$-dualizing complexes on $\mathcal X,$ and let $D$ and $D'$ be the two associated dualizing functors, respectively. Let $K\in D_c(\mathcal X,\Lambda).$ If $DK$ is trivialized by a pair $(\mathscr S,\mathcal L),$ where all strata in $\mathscr S$ are connected, then $D'K$ is trivialized by $(\mathscr S,\mathcal L')$ for some other $\mathcal L'.$ In particular, for stacks with connected presentation, the property of the Verdier dual of $K$ being stratifiable is independent of the choice of the dualizing complex. (iii) Let $\mathcal X$ be an $S$-algebraic stack that has a connected presentation, and assume that the constant sheaf $\Lambda$ on $\mathcal X$ is a dualizing complex. If $K\in D_c(\mathcal X,\Lambda)$ is trivialized by a pair $(\{\mathcal X\},\mathcal L),$ then $D_{\mathcal X}K$ is trivialized by $(\{\mathcal X\},\mathcal L')$ for some $\mathcal L'.$ \end{lemma} \begin{proof} (i) Since $f_*K$ is the image of $f_*\widehat{K},$ it suffices to show that $f_*\widehat{K}$ is stratifiable. Since $f_*L$ is bounded for each $L\in\mathcal L,$ there exists a pair $(\mathscr S_Y, \mathcal L_Y)$ on $Y$ that trivializes $f_*L,$ for all $L\in\mathcal L.$ We claim that this pair trivializes $R^if_*\widehat{K}_n,$ for each $i$ and $n.$ Since $R^if_*\widehat{K}_n=R^if_*\tau_{[a,b]}\widehat{K}_n$ for some $a<b,$ and $\tau_{[a,b]}\widehat{K}_n$ is trivialized by $(\{X\},\mathcal L),$ we may assume $\widehat{K}_n$ is bounded. The claim then follows from the spectral sequence $$ R^pf_*\mathscr H^q((\widehat{K})_n)\Longrightarrow R^{p+q} f_*((\widehat{K})_n) $$ and (\ref{R3.4}). (ii) Recall that the dualizing complex $K_{\mathcal X}$ (resp. $K'_{\mathcal X}$) is defined to be the image of a normalized complex $K_{\mathcal X,\bullet}$ (resp. $K'_{\mathcal X,\bullet}$), where each $K_{\mathcal X,n}$ (resp. $K'_{\mathcal X,n}$) is a $\Lambda_n$-dualizing complex. See (\cite{LO2}, 7.2.3, 7.2.4). Let $P:X\to\mathcal X$ be a presentation where $X$ is a connected scheme. Then we have $$ P^*R\mathscr Hom(K_{\mathcal X,n},K'_{\mathcal X,n})= R\mathscr Hom(P^*K_{\mathcal X,n},P^*K'_{\mathcal X,n}) =R\mathscr Hom(P^!K_{\mathcal X,n},P^!K'_{\mathcal X,n}). $$ Since $P^!K_{\mathcal X,n}$ and $P^!K'_{\mathcal X,n}$ are $\Lambda_n$-dualizing complexes on $X,$ by (\cite{SGA5}, Exp. 
I, 2) we see that $P^*R\mathscr Hom(K_{\mathcal X,n},K'_{\mathcal X,n})$ (and hence $R\mathscr Hom(K_{\mathcal X,n},K'_{\mathcal X,n})$) is cohomologically concentrated in a single degree; therefore it is quasi-isomorphic to its unique non-vanishing cohomology sheaf, appropriately shifted. So let $R\mathscr Hom(K_{\mathcal X,n},K'_{\mathcal X,n}) \simeq L_n[r_n]$ for some sheaf $L_n$ and integer $r_n.$ Since $P^*L_n$ is invertible and hence lcc (cf. (\cite{SGA5}, p.19)), the sheaf $L_n$ is lcc (\cite{Ols3}, 9.1). For every stratum $\mathcal U\in\mathscr S,$ let $\mathcal L_0(\mathcal U)$ be the union of $\mathcal{L(U)}$ and the set of isomorphism classes of the JH components of the lcc sheaf $L_0|_{\mathcal U}.$ Since all strata in $\mathscr S$ are connected, there exists a tensor closed hull of $(\mathscr S,\mathcal L_0)$ of the form $(\mathscr S,\mathcal L'),$ i.e. one with the same stratification $\mathscr S.$ By (\cite{LO2}, 4.0.8), the system $(L_n[r_n])_n=R\mathscr Hom((K_{\mathcal X,n})_n, (K'_{\mathcal X,n})_n)$ is normalized, so by (\ref{pull-normalized}), $\widehat{D'K_{\mathcal X}}=(L_n[r_n])_n,$ and by (\ref{L3.3}vi), it is trivialized by $(\mathscr S,\mathcal L').$ Since $DK$ is trivialized by $(\mathscr S,\mathcal L'),$ so also is $D'K,$ because $\widehat{D'K}\simeq\widehat{DK}\otimes^L\widehat{D'K_{\mathcal X}}.$ (iii) The assumption implies in particular that $\mathcal X$ is connected, so by (ii), the question is independent of the choice of the dualizing complex. By definition, $\widehat{K}$ is trivialized by $(\{\mathcal X\},\mathcal L),$ and hence so are the truncations of $\widehat{K}.$ The essential image of $R\mathscr Hom(\widehat{K},\Lambda_{\bullet})$ in $D_c(\mathcal X,\Lambda)$ is $DK,$ so by (\ref{D-stra}) it suffices to show that $R\mathscr Hom(\widehat{K},\Lambda_{\bullet}) \in\mathscr D_{\{\mathcal X\},\mathcal L'}(\mathscr A)$ for some $\mathcal L'.$ Since $\mathcal X$ is quasi-compact, each $\Lambda_n$-dualizing complex is of finite quasi-injective dimension, so for each integer $i,$ there exist integers $a$ and $b$ such that $$ \mathscr H^iR\mathscr Hom(\widehat{K}_n, \Lambda_n)=\mathscr H^i R\mathscr Hom(\tau_{[a,b]}\widehat{K}_n,\Lambda_n). $$ Using truncation triangles, we may further replace $\tau_{[a,b]}\widehat{K}_n$ by the cohomology sheaves $\mathscr H^j\widehat{K}_n,$ and hence by their JH components. Therefore, it suffices to find an $\mathcal L'$ that trivializes $\mathscr H^iR\mathscr Hom(L,\Lambda_0),$ for all $i\in\mathbb Z$ and $L\in\mathcal L.$ Note that $R\mathscr Hom(L,\Lambda_0)=\mathscr Hom(L, \Lambda_0)=L^{\vee}$ is a simple $\Lambda_0$-sheaf, so one can take $\mathcal L'=\{L^{\vee}|L\in\mathcal L\}.$ \end{proof} \begin{subremark} For any $S$-algebraic stack $\mathcal X,$ whether or not the Verdier dual of a complex $K\in D_c(\mathcal X,\Lambda)$ is stratifiable is independent of the choice of the dualizing complex. Let $K_{\mathcal X}$ and $K'_{\mathcal X}$ be two dualizing complexes on $\mathcal X,$ defining dualizing functors $D$ and $D',$ respectively. Let $P:X\to\mathcal X$ be a presentation, and let $K_X=P^!K_{\mathcal X}$ and $K'_X=P^!K'_{\mathcal X},$ defining dualizing functors $D_X$ and $D'_X$ on $X,$ respectively. Suppose $DK$ is stratifiable. To show $D'K$ is also stratifiable, by (\ref{C3.5}iii) it suffices to show $P^!D'K=D'_XP^*K$ is stratifiable. Since $D_XP^*K=P^!DK$ is stratifiable by assumption, we may assume $\mathcal X=X$ is a scheme.
Since $X$ is noetherian, it has finitely many connected components, each of which is both open and closed. Then the result follows from (\ref{C3.5}i) and (\ref{L3.8}ii). \end{subremark} Next we prove the main result of this section. \begin{theorem}\label{T3.10} Let $f:\mathcal X\to\mathcal Y$ be a morphism of $S$-algebraic stacks. Then the operations $f_*,f_!,f^*,f^!,D_{\mathcal X},-\otimes^L-$ and $R\mathscr Hom(-,-)$ all preserve stratifiability, namely, they induce functors \begin{gather*} f_*:D_c^{+,\emph{stra}}(\mathcal X,\Lambda) \longrightarrow D_c^{+,\emph{stra}}(\mathcal Y,\Lambda), \qquad f_!:D_c^{-,\emph{stra}}(\mathcal X,\Lambda) \longrightarrow D_c^{-,\emph{stra}}(\mathcal Y,\Lambda), \\ f^*:D_c^{\emph{stra}}(\mathcal Y,\Lambda) \longrightarrow D_c^{\emph{stra}}(\mathcal X,\Lambda), \qquad f^!:D_c^{\emph{stra}}(\mathcal Y,\Lambda) \longrightarrow D_c^{\emph{stra}}(\mathcal X,\Lambda), \\ D:D_c^{\emph{stra}}(\mathcal X,\Lambda) \longrightarrow D_c^{\emph{stra}}(\mathcal X,\Lambda)^{\emph{op}}, \\ \otimes^L:D_c^{-,\emph{stra}}(\mathcal X,\Lambda)\times D_c^{-,\emph{stra}}(\mathcal X, \Lambda)\longrightarrow D_c^{-,\emph{stra}}(\mathcal X,\Lambda)\quad\emph{and} \\ R\mathscr Hom(-,-):D_c^{-,\emph{stra}}(\mathcal X,\Lambda)^{\emph{op}}\times D_c^{+,\emph{stra}} (\mathcal X,\Lambda)\longrightarrow D_c^{+,\emph{stra}} (\mathcal X,\Lambda). \end{gather*} \end{theorem} \begin{proof} We may assume all stacks involved are reduced. We consider the Verdier dual functor $D$ first. Let $P:X\to\mathcal X$ be a presentation. Since $P^*D=DP^!,$ by (\ref{C3.5}iii) we can assume $\mathcal X=X$ is a scheme. Let $K$ be a complex on $X$ trivialized by $(\mathscr S,\mathcal L).$ Refining if necessary, we may assume all strata in $\mathscr S$ are connected and regular. Let $j:U\to X$ be the immersion of an open stratum in $\mathscr S$ with complement $i:Z\to X.$ Shrinking $U$ if necessary, we may assume there is a dimension function on $U$ (\cite{Rio}, D\'efinition 2.1), hence by a result of Gabber (loc. cit., Th\'eor\`eme 0.2), the constant sheaf $\Lambda$ on $U$ is a dualizing complex. Consider the exact triangle $$ \xymatrix@C=.5cm{ i_*D_Z(K|_Z) \ar[r] & D_XK \ar[r] & j_*D_U(K|_U) \ar[r] &.} $$ By (\ref{L3.8}iii) we see that $D_U(K|_U)$ is trivialized by $(\{U\},\mathcal L')$ for some $\mathcal L',$ so $j_*D_U(K|_U)$ is stratifiable (\ref{L3.8}i). By noetherian induction we may assume $D_Z(K|_Z)$ is stratifiable, and it is clear that $i_*$ preserves stratifiability. Therefore by (\ref{L3.3}i), $D_XK$ is stratifiable. The case of $f^*$ (and hence $f^!$) is proved in (\ref{L3.3}ii). 
Next we prove the case of $\otimes^L.$ For $i=1,2,$ let $K_i\in D_c^-(\mathcal X,\Lambda),$ trivialized by $(\mathscr S_i,\mathcal L_i).$ Let $(\mathscr S,\mathcal L)$ be a common tensor closed refinement (which exists by (\ref{L3.2.5})) of $(\mathscr S_i,\mathcal L_i),\ i=1,2.$ The total tensor product $K_1\otimes^LK_2$ is defined to be the image in $D_c(\mathcal X,\Lambda)$ of $\widehat{K}_1\otimes^L_{\Lambda_{\bullet}} \widehat{K}_2,$ which by (\cite{LO2}, 3.0.10) is normalized, so it suffices to show (by (\ref{L3.3}vi)) that $$ \widehat{K}_{1,0}\otimes^L_{\Lambda_0}\widehat{K}_{2,0}= \widehat{K}_{1,0}\otimes_{\Lambda_0}\widehat{K}_{2,0} $$ is trivialized by $(\mathscr S,\mathcal L).$ This follows from $$ \mathscr H^r(\widehat{K}_{1,0}\otimes_{\Lambda_0}\widehat{K}_{2,0})=\bigoplus_{i+j=r}\mathscr H^i(\widehat{K}_{1,0}) \otimes_{\Lambda_0}\mathscr H^j(\widehat{K}_{2,0}) $$ (which holds since $\Lambda_0$ is a field, so there are no higher Tor terms) and the assumption that $(\mathscr S,\mathcal L)$ is tensor closed. The case of $R\mathscr Hom(K_1,K_2)=D(K_1\otimes^LDK_2)$ follows. Finally, we prove the cases of $f_*$ and $f_!.$ Let $f:\mathcal X\to\mathcal Y$ be a morphism of $S$-algebraic stacks, and let $K\in D^-_{\mathscr S,\mathcal L}(\mathcal X,\Lambda)$ for some pair $(\mathscr S,\mathcal L).$ We want to show $f_!K$ is stratifiable. Let $j:\mathcal U\to\mathcal X$ be the immersion of an open stratum in $\mathscr S,$ with complement $i:\mathcal Z\to\mathcal X.$ From the exact triangle $$ \xymatrix@C=.5cm{ (fj)_!j^*K \ar[r] & f_!K \ar[r] & (fi)_!i^*K \ar[r] &} $$ we see that it suffices to prove the claim for $fj$ and $fi.$ By noetherian induction we can replace $\mathcal X$ by $\mathcal U.$ By (\ref{C3.5}iii) and smooth base change (\cite{LO2}, 12.1), we can replace $\mathcal Y$ by a presentation $Y,$ and by (\ref{C3.5}i) and (\cite{LO2}, 12.3) we can shrink $Y$ to an open subscheme. After these reductions, we assume that $\mathcal Y=Y$ is a regular irreducible affine $S$-scheme that has a dimension function on it, that $K$ is trivialized by $(\{\mathcal X\},\mathcal L),$ and that the relative inertia stack $\mathcal I_{f}:=\mathcal X\times_{\Delta,\mathcal X\times_Y\mathcal X,\Delta}\mathcal X$ is flat and has components over $\mathcal X$ (\cite{Beh2}, 5.1.14). Therefore by (\cite{Beh2}, 5.1.13), $f$ factors as $\mathcal X\overset{g}{\to}\mathcal Z\overset{h}{\to}Y,$ where $g$ is gerbe-like and $h$ is representable (cf. (\cite{Beh2}, 5.1.3-5.1.6) for relevant notions). So we reduce to two cases: $f$ is representable, or $f$ is gerbe-like. \textbf{Case when $f$ is representable}. By shrinking the $S$-algebraic space $\mathcal X$ we can assume $\mathcal X=X$ is a regular connected scheme that has a dimension function, so that the constant sheaf $\Lambda$ on $X$ is a dualizing complex. By (\ref{L3.8}iii) we see that $DK$ is trivialized by some $(\{X\},\mathcal L'),$ and by (\ref{L3.8}i), $f_*DK$ is stratifiable. Therefore $f_!K=Df_*DK$ is also stratifiable. \textbf{Case when $f$ is gerbe-like}. In this case $f$ is smooth (\cite{Beh2}, 5.1.5), hence \'etale locally on $Y$ it has a section. Replacing $Y$ by an \'etale cover, we may assume that $f$ is a neutral gerbe, so $f:B(G/Y)\to Y$ is the structural map, for some flat group space $G$ of finite type over $Y$ (\cite{LMB}, 3.21). By (\cite{Beh2}, 5.1.1) and (\ref{C3.5}i) we may assume $G$ is a $Y$-group scheme. Next we reduce to the case when $G$ is smooth over $Y.$ By assumption $Y$ is integral. Let $k(Y)$ be the function field of $Y$ and $\overline{k(Y)}$ an algebraic closure.
Then $G_{\overline{k(Y)},\text{red}}$ is smooth over $\overline{k(Y)},$ so there exists a finite extension $L$ of $k(Y)$ such that $G_{L,\text{red}}$ is smooth over $L.$ Let $Y'$ be the normalization of $Y$ in $L,$ which is a scheme of finite type over $S,$ and the natural map $Y'\to Y$ is finite surjective. It factors through $Y'\to Z\to Y,$ where $Z$ is the normalization of $Y$ in the separable closure of $k(Y)$ in $L=k(Y').$ So $Z\to Y$ is generically \'etale, and $Y'\to Z$ is purely inseparable, hence a universal homeomorphism, so $Y'$ and $Z$ have equivalent \'etale sites. Replacing $Y'$ by $Z$ and shrinking $Y$ we can assume $Y'\to Y$ is finite \'etale. Replacing $Y$ by $Y'$ (by (\ref{C3.5}iii)) we assume $G_{\text{red}}$ over $Y$ has smooth generic fiber, and by shrinking $Y$ we assume $G_{\text{red}}$ is smooth over $Y.$ $G_{\text{red}}$ is a subgroup scheme of $G$ (\cite{SGA3}, VI$_{\text{A}},$ 0.2); let $h:G_{\text{red}}\hookrightarrow G$ be the closed immersion. Then $Bh:B(G_{\text{red}}/Y)\to B(G/Y)$ is faithful and hence representable. It is also radicial: consider the diagram where the square is 2-Cartesian $$ \xymatrix@C=.8cm{ Y \ar@{^{(}->} [r]^-i & G/G_{\text{red}} \ar[r]^-g \ar[d] & Y \ar[d]^-P \\ & B(G_{\text{red}}/Y) \ar[r]_-{Bh} & B(G/Y).} $$ The map $i$ is a nilpotent closed embedding, so $g$ is radicial. Since $P$ is faithfully flat, $Bh$ is also radicial. This shows that $$ (Bh)^*:D_c^-(B(G/Y),\Lambda)\to D_c^-(B(G_{\text{red}}/Y), \Lambda) $$ is an equivalence of categories. Replacing $G$ by $G_{\text{red}}$ we assume $G$ is smooth over $Y,$ and hence $P:Y\to B(G/Y)$ is a connected presentation. Let $d$ be the relative dimension of $G$ over $Y.$ By assumption, the constant sheaf $\Lambda$ on $Y$ is a dualizing complex, so $f^!\Lambda=\Lambda\langle-d\rangle$ is a dualizing complex on $\mathcal X,$ and hence so is the constant sheaf $\Lambda$ on $\mathcal X.$ By (\ref{L3.8}iii), we see that $DK$ is trivialized by a pair of the form $(\{\mathcal X\},\mathcal L').$ To show $f_!K$ is stratifiable is equivalent to showing that $Df_!K=f_*DK$ is stratifiable. So replacing $K$ by $DK,$ it suffices to show that $f_*K$ is stratifiable, where $K\in D^+_{\{\mathcal X\},\mathcal L}(\mathcal X, \Lambda)$ for some $\mathcal L.$ Consider the strictly simplicial smooth hypercover associated to the presentation $P:Y\to B(G/Y),$ and let $f_i:G^i\to Y$ be the structural maps. As in the proof of (\ref{L3.8}i), it suffices to show the existence of a pair $(\mathscr S_Y,\mathcal L_Y)$ on $Y$ that trivializes $R^nf_*L,$ for all $L\in\mathcal L$ and $n\in\mathbb Z.$ From the spectral sequence (\cite{LO2}, 10.0.9) $$ E_1^{ij}=R^jf_{i*}f_i^*P^*L\Longrightarrow R^{i+j}f_*L, $$ we see that it suffices for the pair $(\mathscr S_Y,\mathcal L_Y)$ to trivialize all the $E_1^{ij}$-terms. Assume $i\ge1.$ If we regard the map $f_i:G^i\to Y$ as the product map $$ \prod_if_1:\prod_iG\to\prod_iY, $$ where the products are fiber products over $Y,$ then we can write $f_i^*P^*L$ as $$ f_1^*P^*L\boxtimes_{\Lambda_0}\Lambda_0\boxtimes_{\Lambda_0}\cdots\boxtimes_{\Lambda_0}\Lambda_0. $$ Note that the scheme $Y$ satisfies the condition (LO). By the K\"unneth formula (\cite{LO2}, 11.0.14) we have $$ f_{i*}f_i^*P^*L=f_{1*}f_1^*P^*L\otimes_{\Lambda_0} f_{1*}\Lambda_0\otimes_{\Lambda_0}\cdots\otimes_{\Lambda_0} f_{1*}\Lambda_0.
$$ Since $f_{1*}f_1^*P^*L$ and $f_{1*}\Lambda_0$ are bounded complexes (by a theorem of Gabber \cite{Gab}), there exists a tensor closed pair $(\mathscr S_Y,\mathcal L_Y)$ that trivializes them, for all $L\in\mathcal L.$ The proof is finished. \end{proof} Consequently, the theorem also holds for $\overline {\mathbb Q}_{\ell}$-coefficients. Finally we give a lemma which will be used in the next section. This will play the same role as (\cite{Beh2}, 6.3.16). \begin{lemma}\label{L3.11} Let $X$ be a connected variety over an algebraically closed field $k$ of characteristic not equal to $\ell,$ and let $\mathcal L$ be a finite set of isomorphism classes of simple lcc $\Lambda_0$-sheaves on $X.$ Then there exists an integer $d$ (depending only on $\mathcal L$) such that, for every lisse $\Lambda$-adic sheaf $\mathscr F$ on $X$ trivialized by $\mathcal L,$ and for every integer $i,$ we have $$ \dim_EH^i_c(X,\mathscr F\otimes_{\Lambda}E)\le d\cdot\emph{rank}_E(\mathscr F\otimes_{\Lambda}E), $$ where $E$ is the fraction field of $\Lambda.$ \end{lemma} \begin{proof} Since $\mathcal L$ is finite and $0\le i\le2\dim X,$ there exists an integer $d>0$ such that $\dim_{\Lambda_0}H^i_c(X, L)\le d\cdot\text{rank}_{\Lambda_0}L,$ for every $i$ and every $L\in\mathcal L.$ For a short exact sequence of lcc $\Lambda_0$-sheaves $$ \xymatrix@C=.5cm{ 0 \ar[r] & \mathscr G' \ar[r] & \mathscr G \ar[r] & \mathscr G'' \ar[r] & 0} $$ on $X,$ the cohomological sequence $$ \xymatrix@C=.5cm{ \cdots \ar[r] & H^i_c(X,\mathscr G') \ar[r] & H^i_c(X,\mathscr G) \ar[r] & H^i_c(X,\mathscr G'') \ar[r] & \cdots} $$ implies that $\dim_{\Lambda_0}H^i_c(X,\mathscr G)\le\dim _{\Lambda_0}H^i_c(X,\mathscr G')+\dim_{\Lambda_0}H^i_c (X,\mathscr G'').$ So it is clear that if $\mathscr G$ is trivialized by $\mathcal L,$ then $\dim_{\Lambda_0}H^i_c(X,\mathscr G)\le d\cdot \text{rank}_{\Lambda_0}\mathscr G,$ for every $i.$ Since we only consider $\mathscr F\otimes_{\Lambda}E,$ we may assume $\mathscr F=(\mathscr F_n)_n$ is flat, of some constant rank over $\Lambda$ (since $X$ is connected), and this $\Lambda$-rank is equal to $$ \text{rank}_{\Lambda_0}\mathscr F_0=\text{rank}_E(\mathscr F\otimes_{\Lambda}E). $$ $H^i_c(X,\mathscr F)$ is a finitely generated $\Lambda$-module (\cite{SGA5}, VI, 2.2.2), so by Nakayama's lemma, the minimal number of generators is at most $\dim_{\Lambda_0}(\Lambda_0\otimes_{\Lambda}H^i_c(X,\mathscr F)).$ Similar to ordinary cohomology groups (\cite{Mil1}, 19.2), we have an injection $$ \Lambda_0\otimes_{\Lambda}H^i_c(X,\mathscr F)\hookrightarrow H^i_c(X,\mathscr F_0) $$ of $\Lambda_0$-vector spaces. Therefore, $\dim_EH^i_c(X, \mathscr F\otimes_{\Lambda}E)$ is less than or equal to the minimal number of generators of $H^i_c(X,\mathscr F)$ over $\Lambda,$ which is at most $$ \dim_{\Lambda_0}(\Lambda_0\otimes_{\Lambda}H^i_c(X,\mathscr F))\le\dim_{\Lambda_0}H^i_c(X,\mathscr F_0)\le d\cdot\text {rank}_{\Lambda_0}\mathscr F_0=d\cdot\text{rank}_E(\mathscr F\otimes_{\Lambda}E). $$ \end{proof} \section{Convergent complexes and finiteness} We return to $\mathbb F_q$-algebraic stacks $\mathscr X_0,\mathscr Y_0,\cdots$ of finite type. A complex $K_0\in W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ is said to be \textit{stratifiable} if $K$ on $\mathscr X$ is stratifiable, and we denote by $W^{\text{stra}}(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ the full subcategory of such complexes. Note that if $K_0$ is a lisse-\'etale complex, and it is stratifiable on $\mathscr X_0,$ then it is geometrically stratifiable (i.e. 
$K$ on $\mathscr X$ is stratifiable). It turns out that in order for the trace formula to hold, it suffices to make this weaker assumption of geometric stratifiability. So we will only discuss stratifiable Weil complexes. Again, by a sheaf we mean a Weil $\overline{\mathbb Q}_{\ell}$-sheaf. \begin{definition}\label{D4.1} (i) Let $K\in D_c(\overline{\mathbb Q}_{\ell})$ and $\varphi:K\to K$ an endomorphism. The pair $(K,\varphi)$ is said to be an \emph{$\iota$-convergent complex} (or just a convergent complex, since we fixed $\iota$) if the double series $$ \sum_{n\in\mathbb Z}\quad\sum_{H^n(K),H^n(\varphi)}|\alpha|^s $$ is convergent for every real number $s>0;$ here the inner sum runs over the eigenvalues $\alpha$ of $H^n(\varphi)$ on $H^n(K),$ counted with multiplicity, and $|\alpha|$ stands for $|\iota\alpha|.$ In this case let $\emph{Tr}(\varphi,K)$ be the absolutely convergent series $$ \sum_n(-1)^n\iota\emph{Tr}(H^n(\varphi),H^n(K)) $$ or its limit. (ii) Let $K_0\in W^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}).$ We call $K_0$ an \emph{$\iota$-convergent complex of sheaves} (or just a convergent complex of sheaves), if for every integer $v\ge1$ and every point $x\in\mathscr X_0(\mathbb F_{q^v}),$ the pair $(K_{\overline{x}},F_x)$ is a convergent complex. In particular, all bounded complexes are convergent. (iii) Let $K_0\in W^-(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ be a convergent complex of sheaves. Define $$ c_v(\mathscr X_0,K_0)=\sum_{x\in[\mathscr X_0(\mathbb F_{q^v})]}\frac{1}{\#\emph{Aut}_x(\mathbb F_{q^v})}\emph{Tr}(F_x,K_{\overline{x}})\in\mathbb C, $$ and define the \emph{$L$-series} of $K_0$ to be the formal power series $$ L(\mathscr X_0,K_0,t)=\exp\Big(\sum_{v\ge1}c_v(\mathscr X_0,K_0)\frac{t^v}{v}\Big)\in\mathbb C[[t]]. $$ \end{definition} The zeta function $Z(\mathscr X_0,t)$ in (\ref{D1.2}) is a special case: $Z(\mathscr X_0,t)=L(\mathscr X_0,\overline{\mathbb Q}_{\ell},t).$ It has rational coefficients. \begin{notation} We sometimes write $c_v(K_0)$ for $c_v(\mathscr X_0,K_0),$ if it is clear that $K_0$ is on $\mathscr X_0.$ We also write $c_v(\mathscr X_0)$ for $c_v(\mathscr X_0,\overline{\mathbb Q}_{\ell}).$ \end{notation} \begin{subremark}\label{R4.2} (i) Behrend defined convergent complexes with respect to arithmetic Frobenius elements (\cite{Beh2}, 6.2.3), whereas our definition is in terms of geometric Frobenius. The two definitions are essentially the same, except that we work with $\iota$-mixed Weil complexes (which means \textit{all} Weil complexes, by (\ref{Laff})) for an arbitrary isomorphism $\iota:\overline{\mathbb Q}_{\ell}\to\mathbb C,$ while \cite{Beh2} works with pure or mixed lisse-\'etale sheaves with integer weights. Also our definition is a little different from that in \cite{Ols1}; the condition there is weaker. (ii) Recall that $\text{Aut}_x$ is defined to be the fiber over $x$ of the inertia stack $\mathscr I_0\to\mathscr X_0.$ It is a group scheme of finite type (\cite{LMB}, 4.2) over $k(x),$ so $\text{Aut}_x(k(x))$ is a finite group. (iii) If we have the following commutative diagram $$ \xymatrix@C=.5cm{ \text{Spec }\mathbb F_{q^{vd}} \ar[r] \ar[rd]_{x'} & \text{Spec }\mathbb F_{q^v} \ar[d]^x \\ & \mathscr X_0,} $$ then $(K_{\overline{x}},F_x)$ is convergent if and only if $(K_{\overline{x'}},F_{x'})$ is convergent, because $F_{x'}=F_x^d$ and $s\mapsto sd:\mathbb R^{>0}\to\mathbb R^{>0}$ is a bijection.
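Explicitly, if $\alpha_1,\cdots,\alpha_m$ are the eigenvalues (counted with multiplicity) of $F_x$ on $H^n(K_{\overline{x}}),$ then the eigenvalues of $F_{x'}=F_x^d$ on the same space are $\alpha_1^d,\cdots,\alpha_m^d,$ so for every real $s>0$ $$ \sum_{H^n(K_{\overline{x'}}),F_{x'}}|\alpha|^s=\sum_{i=1}^m|\alpha_i|^{ds}=\sum_{H^n(K_{\overline{x}}),F_x}|\alpha|^{ds}, $$ and the convergence of these series for all $s>0$ for $x$ is equivalent to that for $x'.$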
In particular, for a lisse-\'etale complex of sheaves, the property of being a convergent complex is independent of $q$ and the structural morphism $\mathscr X_0\to\text{Spec }\mathbb F_q.$ Also note that, for every integer $v\ge1,$ a complex $K_0$ on $\mathscr X_0$ is convergent if and only if $K_0\otimes\mathbb F_{q^v}$ on $\mathscr X_0\otimes\mathbb F_{q^v}$ is convergent. \end{subremark} We restate the main theorem in \cite{Beh2} using compactly supported cohomology as follows. It generalizes (\ref{T1.1}). We will prove it in this section and the next. \begin{theorem}\label{T4.3} Let $f:\mathscr X_0\to\mathscr Y_0$ be a morphism of $\mathbb F_q$-algebraic stacks, and let $K_0\in W^{-,\emph{stra}}_m(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ be a convergent complex of sheaves. Then (i) \emph{(Finiteness)} $f_!K_0$ is a convergent complex of sheaves on $\mathscr Y_0,$ and (ii) \emph{(Trace formula)} $c_v(\mathscr X_0,K_0)=c_v(\mathscr Y_0,f_!K_0)$ for every integer $v\ge1.$ \end{theorem} First we give a few lemmas. \begin{lemma}\label{L4.5} Let $$ \xymatrix@C=.7cm @R=.5cm{ K' \ar[r] \ar[d]_{\varphi'} & K \ar[r] \ar[d]_{\varphi} & K'' \ar[r] \ar[d]^{\varphi''} & K'[1] \ar[d]^{\varphi'[1]} \\ K' \ar[r] & K \ar[r] & K'' \ar[r] & K'[1]} $$ be an endomorphism of an exact triangle $K'\to K\to K''\to K'[1]$ in $D^-_c(\overline{\mathbb Q}_{\ell}).$ If any two of the three pairs $(K',\varphi'), (K'',\varphi'')$ and $(K,\varphi)$ are convergent, then so is the third, and $$ \emph{Tr}(\varphi,K)=\emph{Tr}(\varphi',K')+\emph{Tr} (\varphi'',K''). $$ \end{lemma} \begin{proof} By the rotation axiom we can assume $(K',\varphi')$ and $(K'',\varphi'')$ are convergent. We have the exact sequence $$ \xymatrix@C=.5cm{ \cdots \ar[r] & H^n(K') \ar[r] & H^n(K) \ar[r] & H^n(K'') \ar[r] & H^{n+1}(K') \ar[r] & \cdots.} $$ Since $H^n(K)$ is an extension of a sub-object of $H^n(K'')$ by a quotient object of $H^n(K'),$ we have $$ \sum_{H^n(K),\varphi}|\alpha|^s\le\sum_{H^n(K'),\varphi'} |\alpha|^s+\sum_{H^n(K''),\varphi''}|\alpha|^s $$ for every real $s>0,$ so $(K,\varphi)$ is convergent. Since the series $\sum_{n\in\mathbb Z}(-1)^n\sum_{H^n(K), \varphi}\iota\alpha$ converges absolutely, we can change the order of summation, and the second assertion follows if we split the long exact sequence above into short exact sequences. \end{proof} \begin{corollary}\label{C4.6} If $K_0'\to K_0\to K_0''\to K_0'[1]$ is an exact triangle in $W^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ and two of the three complexes $K_0',K_0''$ and $K_0$ are convergent complexes, then so is the third, and $c_v(K_0)=c_v(K_0')+c_v(K_0'').$ \end{corollary} \begin{proof} For every $x\in\mathscr X_0(\mathbb F_{q^v}),$ we have an exact triangle $$ \xymatrix@C=.5cm{ K'_{\overline{x}} \ar[r] & K_{\overline{x}} \ar[r] & K''_ {\overline{x}} \ar[r] &} $$ in $D^-_c(\overline{\mathbb Q}_{\ell}),$ equivariant under the action of $F_x.$ Then apply (\ref{L4.5}). 
\end{proof} \begin{lemma}\label{L4.9} (\ref{T4.3}) holds for $f:\emph{Spec }\mathbb F_{q^d}\to\emph{Spec }\mathbb F_q.$ \end{lemma} \begin{proof} We have an equivalence of triangulated categories $$ \xymatrix@C=.7cm{ W^-(\text{Spec }\mathbb F_q,\overline{\mathbb Q}_{\ell}) \ar[r]^-{\sim} & D^-_c(\text{Rep}_{ \overline{\mathbb Q}_{\ell}}(G)),} $$ where $G$ is the Weil group $F^{\mathbb Z}$ of $\mathbb F_q.$ Let $H$ be the subgroup $F^{d\mathbb Z},$ the Weil group of $\mathbb F_{q^d}.$ Since $f:\text{Spec }\mathbb F_{q^d}\to\text{Spec }\mathbb F_q$ is finite, we have $f_!=f_*,$ and it is the induced-module functor $$ \xymatrix@C=.5cm{ \text{Hom}_{\overline{\mathbb Q}_{\ell}[H]}\big(\overline{ \mathbb Q}_{\ell}[G],-\big):D_c^-\big(\text{Rep}_{\overline {\mathbb Q}_{\ell}}(H)\big) \ar[r] & D_c^-\big(\text{Rep}_{\overline{\mathbb Q}_{\ell}}(G)\big),} $$ which is isomorphic to the coinduced-module functor $\overline{\mathbb Q}_{\ell}[G]\otimes_{\overline{\mathbb Q}_{\ell}[H]}-.$ In particular, $f_!$ is exact on the level of sheaves. Let $A$ be a $\overline{\mathbb Q}_{\ell}$-representation of $H,$ and $B=\overline{\mathbb Q}_{\ell}[G]\otimes_ {\overline{\mathbb Q}_{\ell}[H]}A.$ Let $x_1,\cdots,x_m$ be an ordered basis for $A$ with respect to which $F^d$ is an upper triangular matrix $$ \begin{bmatrix}\alpha_1 & * & * \\ & \ddots & * \\ && \alpha_m \end{bmatrix} $$ with eigenvalues $\alpha_1,\cdots,\alpha_m.$ Then $B$ has an ordered basis \begin{gather*} 1\otimes x_1,\ F\otimes x_1,\ \cdots,\ F^{d-1}\otimes x_1, \\ 1\otimes x_2,\ F\otimes x_2,\ \cdots,\ F^{d-1}\otimes x_2, \\ \cdots\ \cdots \\ 1\otimes x_m,\ F\otimes x_m,\ \cdots,\ F^{d-1}\otimes x_m, \end{gather*} with respect to which $F$ is the matrix $ \begin{bmatrix} M_1 & * & * \\ & \ddots & * \\ && M_m\end{bmatrix}, $ where $M_i=\begin{bmatrix}0 & \cdots & 0 & \alpha_i \\ 1 & \ddots & 0 & 0 \\ & \ddots & \ddots & \vdots \\ && 1 & 0\end{bmatrix}.$ The characteristic polynomial of $F$ on $B$ is $\prod_{i=1}^m(t^d-\alpha_i).$ Let $K_0$ be a complex of sheaves on $\text{Spec }\mathbb F_{q^d}.$ The eigenvalues of $F$ on $\mathscr H^n(f_!K)=f_!\mathscr H^n(K)$ are all the $d$-th roots of the eigenvalues of $F^d$ on $\mathscr H^n(K),$ so for every $s>0$ we have $$ \sum_n\quad\sum_{\mathscr H^n(f_!K),F}|\alpha|^s=d\sum_n\quad \sum_{\mathscr H^n(K),F^d}|\alpha|^{s/d}. $$ This shows that $f_!K_0$ is a convergent complex on $\text{Spec }\mathbb F_q$ if and only if $K_0$ is a convergent complex on $\text{Spec }\mathbb F_{q^d}.$ Next we prove $$ c_v(\text{Spec }\mathbb F_{q^d},K_0)=c_v(\text{Spec }\mathbb F_q,f_!K_0) $$ for every $v\ge1.$ Since $H^n(f_!K)=f_!H^n(K),$ and both sides are absolutely convergent series so that one can change the order of summation without changing the limit, it suffices to prove it when $K=A$ is a single representation concentrated in degree 0. Let us review this classical calculation. Use the notation as above. For each $i,$ fix a $d$-th root $\alpha_i^{1/d}$ of $\alpha_i,$ and let $\zeta_d$ be a primitive $d$-th root of unity. Then the eigenvalues of $F$ on $B$ are $\zeta_d^k\alpha_i^{1/d},$ for $i=1,\cdots,m$ and $k=0,\cdots,d-1.$ If $d\nmid v,$ then $\text{Hom}_{\mathbb F_q}(\mathbb F_{q^d},\mathbb F_{q^v})=\emptyset,$ so $c_v(\text{Spec }\mathbb F_{q^d},A)=0.$ On the other hand, $$ c_v(\text{Spec }\mathbb F_q,f_!A)=\text{Tr}(F^v,B)=\sum_{i,k}\zeta_d ^{vk}\alpha_i^{v/d}=\sum_i\alpha_i^{v/d}\sum_{k=0}^{d-1} \zeta_d^{vk}=0. 
$$ If $d|v,$ then $\text{Hom}_{\mathbb F_q}(\mathbb F_{q^d}, \mathbb F_{q^v})=\text{Hom}_{\mathbb F_q}(\mathbb F_{q^d}, \mathbb F_{q^d})=\mathbb Z/d\mathbb Z.$ So $$ c_v(\mathbb F_{q^d},A)=d\text{Tr}(F^v,A)=d\sum_i\alpha_i^{v/d}. $$ On the other hand, $$ c_v(\mathbb F_q,B)=\text{Tr}(F^v,B)=\sum_{i,k}\zeta_d^{vk} \alpha_i^{v/d}=\sum_{i,k}\alpha_i^{v/d}=d\sum_i\alpha_i^{v/d}. $$ \end{proof} Next, we consider $BG_0,$ for a finite group scheme $G_0$ over $\mathbb F_q.$ \begin{lemma}\label{L4.8} Let $G_0$ be a finite $\mathbb F_q$-group scheme, and let $\mathscr F_0$ be a sheaf on $BG_0.$ Then $H^r_c(BG,\mathscr F)=0$ for all $r\ne0,$ and $H^0_c(BG,\mathscr F)\simeq H^0(BG,\mathscr F)$ has dimension at most $\emph{rank}(\mathscr F).$ Moreover, the set of $\iota$-weights of $H^0_c(BG,\mathscr F)$ is a subset of the $\iota$-weights of $\mathscr F_0.$ \end{lemma} \begin{proof} By (\cite{Ols1}, 7.12-7.14) we can replace $G_0$ by its maximal reduced closed subscheme, and assume $G_0$ is reduced, hence \'etale. Then $G_0$ is the same as a finite group $G(\mathbb F)$ with a continuous action of $\text{Gal}(\mathbb F_q)$ (\cite{Mil2}, 7.8). We will also write $G$ for the group $G(\mathbb F),$ if there is no confusion. Since $\text{Spec }\mathbb F\to BG$ is surjective, we see that there is no non-trivial stratification on $BG.$ In particular, all sheaves on $BG$ are lisse, and they are just $\overline{\mathbb Q}_{\ell}$-representations of $G.$ $BG$ is quasi-finite and proper over $\mathbb F,$ with finite diagonal, so by (\cite{Ols1}, 5.8), $H^r_c(BG,\mathscr F)=0$ for all $r\ne0.$ From (\cite{Ols1}, 5.1), we see that if $\mathscr F$ is a sheaf on $BG$ corresponding to the representation $V$ of $G,$ then $H^0_c(BG,\mathscr F)=V_G$ and $H^0(BG,\mathscr F)=V^G,$ and there is a natural isomorphism $$ v\mapsto\sum_{g\in G}gv: V_G\longrightarrow V^G. $$ Therefore $$ h^0_c(BG,\mathscr F)=\dim V_G\le\dim V=\text{rank}(\mathscr F), $$ and the weights of $V_G$ form a subset of the weights of $V$ (counted with multiplicities). \end{proof} \begin{blank}\label{alg-gp} (i) If $k$ is a field, by a \textit{$k$-algebraic group} $G$ we mean a smooth $k$-group scheme of finite type. If $G$ is connected, then it is geometrically connected (\cite{SGA3}, VI$_{\text{A}},$ 2.1.1). (ii) For a connected $k$-algebraic group $G,$ let $a:BG\to\text{Spec }k$ be the structural map. Then $$ a^*:\Lambda\text{-Sh(Spec }k)\longrightarrow \Lambda\text{-Sh}(BG) $$ is an equivalence of categories. This is because

$\bullet$ $BG$ has no non-trivial stratifications (it is covered by $\text{Spec }k$ which has no non-trivial stratifications), and therefore

$\bullet$ any constructible $\Lambda$-adic sheaf on $BG$ is lisse, given by an adic system $(M_n)_n$ of sheaves on $\text{Spec }k$ with $G$-actions, and these actions are trivial since $G$ is connected.

See (\cite{Beh2}, 5.2.9). (iii) Let $G_0$ be a connected $\mathbb F_q$-algebraic group. By a theorem of Serge Lang (\cite{Lan}, Th. 2), every $G_0$-torsor over $\text{Spec }\mathbb F_q$ is trivial, with automorphism group $G_0,$ therefore $$ c_v(BG_0)=\frac{1}{c_v(G_0)}=\frac{1}{\#G_0(\mathbb F_{q^v})}. $$ \end{blank} Recall the following theorem of Borel as in (\cite{Beh2}, 6.1.6). \begin{theorem}\label{Borel} Let $k$ be a field and $G$ a connected $k$-algebraic group. Consider the Leray spectral sequence given by the projection $f:\emph{Spec }k\to BG,$ $$ E_2^{rs}=H^r(BG_{\overline{k}})\otimes H^s(G_{\overline{k}})\Longrightarrow\overline{\mathbb Q}_{\ell}.
$$ Let $N^s=E_{s+1}^{0,s}\subset H^s(G_{\overline{k}})$ be the transgressive subspaces, for $s\ge1,$ and let $N$ be the graded $\overline{\mathbb Q}_{\ell}$-vector space $\bigoplus_{s\ge1}N^s.$ We have (a). $N^s=0$ if $s$ is even, (b). the canonical map $\xymatrix@C=.5cm{\bigwedge N \ar[r] & H^*(G_{\overline{k}})}$ is an isomorphism of graded $\overline{\mathbb Q}_{\ell}$-algebras. (c). The spectral sequence above induces an epimorphism of graded $\overline{\mathbb Q}_{\ell}$-vector spaces $ \xymatrix@C=.5cm{ H^*(BG_{\overline{k}}) \ar@{->>}[r] & N[-1].} $ Any section induces an isomorphism $$ \xymatrix@C=.5cm{ \emph{Sym}^*(N[-1]) \ar[r]^-{\sim} & H^*(BG_{\overline{k}}).} $$ \end{theorem} \begin{subremark} (i) The $E_2^{rs}$-term in (\ref{Borel}) should be $H^r(BG_{\overline{k}},R^sf_*\overline{\mathbb Q}_{\ell}),$ and $R^sf_*\overline{\mathbb Q}_{\ell}$ is a constructible sheaf on $BG.$ By (\ref{alg-gp}ii), the sheaf $R^sf_*\overline{\mathbb Q}_{\ell}$ is isomorphic to $a^*f^*R^sf_*\overline{\mathbb Q}_{\ell}=a^*H^s(G_{\overline{k}}),$ where $a:BG\to\text{Spec }k$ is the structural map and $H^s(G_{\overline{k}})$ is the $\text{Gal}(k)$-module regarded as a sheaf on $\text{Spec }k.$ Therefore by projection formula, $E_2^{rs}=H^r(BG_{\overline{k}}) \otimes H^s(G_{\overline{k}}).$ (ii) Since the spectral sequence converges to $\overline{\mathbb Q}_{\ell}$ sitting in degree 0, all $E_{\infty}^{rs}$ are zero, except $E_{\infty}^{00}.$ For each $s\ge1,$ consider the differential map $d_{s+1}^{0,s}:E_{s+1}^{0,s}\to E_{s+1}^{s+1,0}$ on the $(s+1)$st page. This map must be injective (resp. surjective) because it is the last possibly non-zero map from $E_*^{0,s}$ (resp. into $E_*^{s+1,0}$). Therefore, it is an isomorphism. Note that $N^s=E_{s+1}^{0,s}$ is a subspace of $E_2^{0,s}=H^s(G_{\overline{k}}),$ and $E_{s+1}^{s+1,0}$ is a quotient of $E_2^{s+1,0}=H^{s+1}(BG_{\overline{k}}).$ Using the isomorphism $d_{s+1}^{0,s}$ we get the surjection $H^{s+1}(BG_{\overline{k}})\to N^s.$ \end{subremark} \begin{anitem}\label{4.6.1} Let $G_0$ be a connected $\mathbb F_q$-algebraic group of dimension $d.$ We apply (\ref{Borel}) to investigate the compact support cohomology groups $H^*_c(BG).$ We have graded Galois-invariant subspaces $N=\bigoplus_{r\ge1}N^r\subset\bigoplus_{r\ge0}H^r(G)$ concentrated in odd degrees, such that the induced map $$ \xymatrix@C=.5cm{ \bigwedge N \ar[r] & H^*(G)} $$ is an isomorphism, and $H^*(BG)\cong\text{Sym}^*N[-1].$ Let $n_r=\dim N^r,$ and let $v_{r1},\cdots,v_{rn_r}$ be a basis for $N^r$ with respect to which the Frobenius acting on $N^r$ is upper triangular $$ \begin{bmatrix}\alpha_{r1} & * & * \\ & \ddots & * \\ & & \alpha_{rn_r}\end{bmatrix} $$ with eigenvalues $\alpha_{r1},\cdots,\alpha_{rn_r}.$ By (\cite{Del2}, 3.3.5), the weights of $H^r(G)$ are $\ge r,$ so $|\alpha_{ri}|\ge q^{r/2}>1.$ We have $$ H^*(BG)=\text{Sym}^*\overline{\mathbb Q}_{\ell}\langle v_{ij}|\text{for all }i,j\rangle=\overline{\mathbb Q}_{\ell}[v_{ij}], $$ with $\deg(v_{ij})=i+1.$ Note that all $i+1$ are even. In particular, $H^{2r-1}(BG)=0$ and \begin{equation*} \begin{split} H^{2r}(BG) &=\{\text{homogeneous polynomials of degree }2r \text{ in }v_{ij}\} \\ &=\overline{\mathbb Q}_{\ell}\langle \prod_{i,j}v_{ij}^{m_{ij}};\sum_{i,j}m_{ij}(i+1)=2r\rangle. \end{split} \end{equation*} With respect to an appropriate order of the basis, the matrix representing $F$ acting on $H^{2r}(BG)$ is upper triangular, with eigenvalues $$ \prod_{i,j}\alpha_{ij}^{m_{ij}},\quad\text{ for }\sum _{i,j}m_{ij}(i+1)=2r. 
$$ By Poincar\'e duality, the eigenvalues of $F$ acting on $H^{-2r-2d}_c(BG)$ are $q^{-d}\prod_{i,j}\alpha_{ij}^{-m_{ij}},$ for tuples of non-negative integers $(m_{ij})_{i,j}$ such that $\sum_{i,j}m_{ij}(i+1)=2r.$ Therefore the reciprocal characteristic polynomial of $F$ on $H^{-2r-2d}_c(BG)$ is $$ P_{-2r-2d}(BG_0,t)=\prod_{\substack{m_{ij}\ge0 \\ \sum_{i,j}m_{ij}(i+1)=2r}}\Big(1-q^{-d}\prod_{i,j}\alpha_{ij}^{-m_{ij}}\cdot t\Big). $$ \end{anitem} In the following two lemmas we prove two key cases of (\ref{T4.3}i). \begin{lemma}\label{L4.10} Let $G_0$ be an $\mathbb F_q$-group scheme of finite type. Then (\ref{T4.3}i) holds for the structural map $f:BG_0\to\emph{Spec }\mathbb F_q$ and any convergent complex $K_0\in W^-(BG_0,\overline{\mathbb Q}_{\ell}).$ \end{lemma} \begin{proof} By (\cite{Ols1}, 7.12-7.14) we may assume $G_0$ is reduced (hence smooth), so that the natural projection $\text{Spec }\mathbb F_q\to BG_0$ is a presentation. Note that then the assumptions of $\iota$-mixedness and stratifiability on $K_0$ are verified automatically, by (\ref{L2.8}, \ref{C3.5}iii), even though we will not use them explicitly in the proof. Let $G_0^0$ be the identity component of $G_0$ and consider the exact sequence of algebraic groups $$ \xymatrix@C=.5cm{ 1 \ar[r] & G_0^0 \ar[r] & G_0 \ar[r] & \pi_0(G_0) \ar[r] & 1.} $$ The fibers of the induced map $BG_0\to B\pi_0(G_0)$ are isomorphic to $BG_0^0,$ so we reduce to proving two cases: $G_0$ is finite \'etale (or even a finite constant group scheme, by (\ref{R4.2}iii)), or $G_0$ is connected and smooth. \textbf{Case of $G_0$ finite constant.} Let $G_0/\mathbb F_q$ be the finite constant group scheme associated with a finite group $G,$ and let $K_0\in W^-(BG_0,\overline{\mathbb Q}_{\ell}).$ Again we denote by $G$ both the group scheme $G_0\otimes\mathbb F$ and the finite group $G_0(\mathbb F),$ if no confusion arises. Let $y$ be the unique point of $\text{Spec }\mathbb F_q,$ and form the 2-Cartesian diagram $$ \xymatrix@C=.5cm{ BG \ar[r] \ar[d]_{f_{\overline{y}}} & BG_0 \ar[d]^f \\ \text{Spec }\mathbb F \ar[r]^-{\overline{y}} & \text{Spec }\mathbb F_q.} $$ Then $D^-_c(BG,\overline{\mathbb Q}_{\ell})$ is equivalent to $D^-_c(\text{Rep}_{\overline{\mathbb Q}_{\ell}}(G)),$ and the functor $$ (f_{\overline{y}})_!:D^-_c(BG/\mathbb F,\overline{\mathbb Q}_{\ell})\longrightarrow D^-_c(\text{Spec }\mathbb F, \overline{\mathbb Q}_{\ell}) $$ is identified with the coinvariance functor $$ (\ )_G:D^-_c(\text{Rep}_{\overline{\mathbb Q}_{\ell}}(G)) \longrightarrow D^-_c(\overline{\mathbb Q}_{\ell}), $$ which is exact on the level of modules, since the category $\text{Rep}_{\overline{\mathbb Q}_{\ell}}(G)$ is semisimple. So $(f_!K_0)_{\overline{y}}=(f_{\overline{y}})_!K=K_G$ and $\mathscr H^n(K_G)=\mathscr H^n(K)_G.$ Hence $$ \sum_{\mathscr H^n((f_{\overline{y}})_!K),F}|\alpha|^s\le \sum_{\mathscr H^n(K),F}|\alpha|^s $$ for every $n\in\mathbb Z$ and $s>0.$ Therefore, if $K_0$ is a convergent complex, so is $f_!K_0.$ \textbf{Case of $G_0$ smooth and connected.} In this case $$ f^*:\overline{\mathbb Q}_{\ell}\text{-Sh}(\text{Spec }\mathbb F_q)\longrightarrow\overline{\mathbb Q}_{\ell}\text{-Sh}(BG_0) $$ is an equivalence of categories (\ref{alg-gp}ii).
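For orientation, consider the simplest example $G_0=\mathbb G_m,$ so $d=1$: here $N=N^1=H^1(G)$ is one-dimensional with Frobenius eigenvalue $q$ (recall $H^1(\mathbb G_{m,\mathbb F})=\overline{\mathbb Q}_{\ell}(-1)$), so by (\ref{4.6.1}) the only eigenvalue of $F$ on $H^{-2r-2}_c(BG)$ is $q^{-r-1},$ and for every $s>0$ $$ \sum_{n\in\mathbb Z}\ \sum_{H^n_c(BG),F}|\alpha|^s=\sum_{r\ge0}q^{-(r+1)s}=\frac{q^{-s}}{1-q^{-s}}<\infty. $$ The general argument below is an elaboration of this computation.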
Let $d=\dim G_0,$ and let $\mathscr F_0$ be a sheaf on $BG_0,$ corresponding to a representation $V$ of the Weil group $W(\mathbb F_q),$ with $\beta_1,\cdots,\beta_m$ as eigenvalues of $F.$ By the projection formula (\cite{LO2}, 9.1.i) we have $H^n_c(BG,\mathscr F)\simeq H^n_c(BG)\otimes V,$ and by (\ref{4.6.1}) the eigenvalues of $F$ on $H_c^{-2r-2d}(BG)\otimes V$ are (using the notation in (\ref{4.6.1})) $$ q^{-d}\beta_k\prod_{i,j}\alpha_{ij}^{-m_{ij}}, $$ for $k=1,\cdots,m$ and tuples $(m_{ij})$ such that $\sum_{i,j}m_{ij}(i+1)=2r.$ For every $s>0,$ $$ \sum_{n\in\mathbb Z}\ \sum_{H^n_c(BG)\otimes V,F}|\alpha|^s= \sum_{m_{ij},k}q^{-ds}|\beta_k|^s\prod_{i,j}|\alpha_{ij} ^{-m_{ij}}|^s=\big(\sum_{k=1}^m |\beta_k|^s\big)\big(\sum_{m_{ij}} q^{-ds}\prod_{i,j}|\alpha_{ij}|^{-m_{ij}s}\big), $$ which converges to $$ q^{-ds}\Big(\sum_{k=1}^m |\beta_k|^s\Big)\prod_{i,j}\frac{1}{1-|\alpha_{ij}|^{-s}}, $$ since $|\alpha_{ij}|^{-s}<1$ and the product above is taken over finitely many indices. Let $K_0$ be a convergent complex on $BG_0,$ and for each $k\in\mathbb Z,$ let $V_k$ be a $W(\mathbb F_q)$-module corresponding to $\mathscr H^kK_0.$ For every $x\in BG_0(\mathbb F_q)$ (for instance the trivial $G_0$-torsor), the pair $(\mathscr H^k(K)_{\overline{x}},F_x)$ is isomorphic to $(V_k,F).$ Consider the $W(\mathbb F_q)$-equivariant spectral sequence $$ H^r_c(BG,\mathscr H^k(K))\Longrightarrow H^{r+k}_c(BG,K). $$ We have \begin{equation*} \begin{split} \sum_{n\in\mathbb Z}\quad\sum_{H^n_c(BG,K),F}|\alpha|^s &\le\sum_{n\in\mathbb Z}\ \sum_{r+k=n}\quad \sum_{H^r_c(BG,\mathscr H^kK),F}|\alpha|^s =\sum_{r,k\in\mathbb Z}\quad\sum_{H^r_c(BG)\otimes V_k,F}|\alpha|^s \\ &=\sum_{k\in\mathbb Z}\ \sum_{r\in\mathbb Z}\quad\sum_{H^r_c(BG)\otimes V_k,F}|\alpha|^s=\sum_{k\in\mathbb Z}q^{-ds}\big( \sum_{V_k,F}|\alpha|^s\big)\prod_{i,j}\frac{1} {1-|\alpha_{ij}|^{-s}} \\ &=\big(\sum_{k\in\mathbb Z}\quad\sum_{V_k,F}|\alpha|^s\big) \big(q^{-ds}\prod_{i,j}\frac{1}{1-|\alpha_{ij}|^{-s}}\big), \end{split} \end{equation*} where the first factor is convergent by assumption, and the product in the second factor is taken over finitely many indices. This shows that $f_!K_0$ is a convergent complex. \end{proof} Let $E_{\lambda}$ be a finite subextension of $\overline{\mathbb Q}_{\ell}/\mathbb Q_{\ell}$ with ring of integers $\Lambda$ and residue field $\Lambda_0,$ and let $(\mathscr S,\mathcal L)$ be a pair on $\mathscr X$ defined by simple lcc $\Lambda_0$-sheaves on strata. 
A complex $K_0\in W(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ is said to be \textit{$(\mathscr S,\mathcal L)$-stratifiable} (or \textit{trivialized by $(\mathscr S,\mathcal L)$}), if $K$ is defined over $E_{\lambda},$ with an integral model over $\Lambda$ trivialized by $(\mathscr S,\mathcal L).$ \begin{lemma}\label{L4.11} Let $X_0/\mathbb F_q$ be a geometrically connected variety, $E_{\lambda}$ a finite subextension of $\overline{\mathbb Q}_{\ell}/\mathbb Q_{\ell}$ with ring of integers $\Lambda,$ and let $\mathcal L$ be a finite set of simple lcc $\Lambda_0$-sheaves on $X.$ Then (\ref{T4.3}i) holds for the structural map $f:X_0\to\emph{Spec }\mathbb F_q$ and all lisse $\iota$-mixed convergent complexes $K_0$ on $X_0$ that are trivialized by $(\{X\},\mathcal L).$ \end{lemma} \begin{proof} Let $N=\dim X_0.$ From the spectral sequence $$ E_2^{rk}=H^r_c(X,\mathscr H^kK)\Longrightarrow H^{r+k}_c(X,K) $$ we see that $$ \sum_{n\in\mathbb Z}\quad\sum_{H^n_c(X,K),F}|\alpha|^s\le \sum_{n\in\mathbb Z} \quad\sum_{r+k=n}\quad\sum_{H^r_c(X,\mathscr H^kK),F} |\alpha|^s=\sum_{\substack{0\le r\le2N \\ k\in\mathbb Z}}\quad\sum_{H^r_c(X,\mathscr H^kK),F}|\alpha|^s, $$ therefore it suffices to show that, for each $0\le r\le2N,$ the series $\sum_{k\in\mathbb Z}\sum_{H^r_c(X,\mathscr H^kK),F}|\alpha|^s$ converges. Let $d$ be the number in (\ref{L3.11}) for $\mathcal L.$ Each cohomology sheaf $\mathscr H^nK_0$ is $\iota$-mixed and lisse on $X_0,$ so by (\ref{T2.4}i) we have the decomposition $$ \mathscr H^nK_0=\bigoplus_{b\in\mathbb R/\mathbb Z}(\mathscr H^nK_0)(b) $$ according to the weights mod $\mathbb Z,$ defined over $E_{\lambda}$ (\ref{R2.5}ii). For each coset $b,$ we choose a representative $b_0\in b,$ and take a $b_1\in\overline{\mathbb Q}_{\ell}^*$ such that $w_q(b_1)=-b_0.$ Then the sheaf $(\mathscr H^nK_0)(b)^{(b_1)}$ deduced by twist is lisse with integer punctual weights. Let $W$ be the filtration by punctual weights (\ref{T2.4}ii) of $(\mathscr H^nK_0)(b)^{(b_1)}.$ For every $v\ge1$ and $x\in X_0(\mathbb F_{q^v}),$ and every real $s>0,$ we have \begin{equation*} \begin{split} \sum_{n\in\mathbb Z}\quad\sum_{(\mathscr H^nK_0)_{\overline{x}},F_x}|\alpha|^{s/v} &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}\quad\sum_{(\mathscr H^nK_0)(b)_{\overline{x}},F_x} |\alpha|^{s/v} \\ &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}\quad\sum_{(\mathscr H^nK_0)(b)^{(b_1)}_{\overline{x}},F_x}|\alpha /b_1^v|^{s/v} \\ &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/ \mathbb Z}}q^{b_0s/2}\sum_{(\mathscr H^nK_0)(b)^{(b_1)} _{\overline{x}},F_x}|\alpha|^{s/v} \\ &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}q^{b_0s/2}\sum_{i\in\mathbb Z}\qquad\sum_{\text{Gr}_i^W((\mathscr H^nK_0)(b)^{(b_1)})_{\overline{x}},F_x}|\alpha|^{s/v} \\ &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/ \mathbb Z}}q^{b_0s/2}\sum_{i\in\mathbb Z}q^{is/2}\cdot\text {rank(Gr}_i^W((\mathscr H^nK_0)(b)^{(b_1)})). \end{split} \end{equation*} Since $K_0$ is a convergent complex, this series is convergent. 
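In the chain of equalities above, the second holds because the eigenvalues of $F_x$ on the twisted sheaf $(\mathscr H^nK_0)(b)^{(b_1)}_{\overline{x}}$ are the numbers $b_1^v\alpha,$ with $\alpha$ running over the eigenvalues on $(\mathscr H^nK_0)(b)_{\overline{x}},$ and the third holds because $w_q(b_1)=-b_0$ means $|\iota b_1|=q^{-b_0/2},$ so that $$ |\alpha/b_1^v|^{s/v}=|b_1|^{-s}\cdot|\alpha|^{s/v}=q^{b_0s/2}|\alpha|^{s/v}. $$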
For each $n\in\mathbb Z,$ every direct summand $(\mathscr H^nK_0)(b)$ of $\mathscr H^nK_0$ is trivialized by $(\{X\},\mathcal L).$ The filtration $W$ of each $(\mathscr H^nK_0)(b)^{(b_1)}$ gives a filtration of $(\mathscr H^nK_0)(b)$ (also denoted $W$) by twisting back, and it is clear that this latter filtration is defined over $E_{\lambda}.$ We have $\text{Gr}^W_i((\mathscr H^nK_0)(b)^{(b_1)})=(\text{Gr}^W_i(\mathscr H^nK_0)(b))^{(b_1)},$ and each $\text{Gr}^W_i((\mathscr H^nK_0)(b))$ is trivialized by $(\{X\},\mathcal L).$ By (\ref{L3.11}), \begin{equation*} \begin{split} h^r_c(X,\text{Gr}^W_i((\mathscr H^nK)(b)^{(b_1)})) &=h^r_c(X,\text{Gr}^W_i((\mathscr H^nK)(b)))\qquad\qquad\text{(\cite{LO2}, 9.1.i)} \\ &\le d\cdot\text{rank}(\text{Gr}^W_i((\mathscr H^nK)(b))) \\ &=d\cdot\text{rank}(\text{Gr}^W_i((\mathscr H^nK)(b)^{(b_1)})). \end{split} \end{equation*} Therefore \begin{equation*} \begin{split} \sum_{n\in\mathbb Z}\quad\sum_{H^r_c(X,\mathscr H^nK),F}|\alpha|^s &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}\quad\sum_{H^r_c(X,(\mathscr H^nK)(b)),F}|\alpha|^s \\ &=\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}\quad\sum_{H^r_c(X,(\mathscr H^nK)(b)^{(b_1)}), F}|b_1^{-1}\alpha|^s \\ &\le\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}q^{b_0s/2}\sum_{i\in\mathbb Z}\quad\sum_{H^r_c(X,\text{Gr}_i^W((\mathscr H^nK)(b)^{(b_1)})),F}|\alpha|^s \\ &\le\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}q^{b_0s/2}\sum_{i\in\mathbb Z}q^{(i+r)s/2}\cdot h^r_c(X,\text{Gr}_i^W((\mathscr H^nK)(b)^{(b_1)})) \\ &\le q^{rs/2}d\sum_{\substack{n\in\mathbb Z \\ b\in\mathbb R/\mathbb Z}}q^{b_0s/2}\sum_{i\in\mathbb Z}q^{is/2}\cdot\text{rank}(\text{Gr}_i^W((\mathscr H^nK)(b)^{(b_1)})), \end{split} \end{equation*} and it converges. \end{proof} Now we prove (\ref{T4.3}i) in general. \begin{proof} We may assume all stacks involved are reduced. From (\ref{T2.11}) and (\ref{T3.10}) we know that $f_!K_0\in W^{-,\text{stra}}_m(\mathscr Y_0,\overline{\mathbb Q}_{\ell}).$ Let $y\in\mathscr Y_0(\mathbb F_{q^v});$ we want to show that $((f_!K_0)_{\overline{y}},F_y)$ is a convergent complex. Since the property of being convergent depends only on the cohomology sheaves, by base change (\cite{LO2}, 12.5.3) we reduce to the case when $\mathscr Y_0=\text{Spec }\mathbb F_{q^v}.$ Replacing $q$ by $q^v,$ we may assume $v=1.$ By (\ref{R4.2}iii) we only need to show that $(R\Gamma_c(\mathscr X,K),F)$ is convergent. If $j:\mathscr U_0\hookrightarrow\mathscr X_0$ is an open substack with complement $i:\mathscr Z_0\hookrightarrow\mathscr X_0,$ then we have an exact triangle $$ \xymatrix@C=.5cm{ j_!j^*K_0 \ar[r] & K_0 \ar[r] & i_*i^*K_0 \ar[r] &} $$ in $W^-(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ which induces an exact triangle $$ \xymatrix@C=.5cm{ R\Gamma_c(\mathscr U_0,j^*K_0) \ar[r] & R\Gamma_c(\mathscr X_0, K_0) \ar[r] & R\Gamma_c(\mathscr Z_0,i^*K_0) \ar[r] &} $$ in $W^-(\text{Spec }\mathbb F_q,\overline{\mathbb Q}_{\ell}).$ So by (\ref{C4.6}) and noetherian induction, it suffices to prove (\ref{T4.3}i) for a nonempty open substack. By (\cite{Beh2}, 5.1.14) we may assume that the inertia stack $\mathscr I_0$ is flat over $\mathscr X_0.$ Then we can form the rigidification $\pi:\mathscr X_0\to X_0$ with respect to $\mathscr I_0$ (\cite{Ols2}, \S1.5), where $X_0$ is an $\mathbb F_q$-algebraic space with quasi-compact diagonal. $X_0$ contains an open dense subscheme (\cite{Knu}, II, 6.7). Replacing $\mathscr X_0$ by the inverse image of this scheme, we can assume $X_0$ is a scheme.
If (\ref{T4.3}i) holds for two composable morphisms $f$ and $g,$ then it holds for their composition $g\circ f.$ Since $R\Gamma_c(\mathscr X_0,-)=R\Gamma_c(X_0,-)\circ\pi_!,$ we are reduced to proving (\ref{T4.3}i) for these two morphisms. For every $x\in X_0(\mathbb F_{q^v}),$ the fiber of $\pi$ over $x$ is a gerbe over $\text{Spec }k(x).$ Extending the base field $k(x)$ (\ref{R4.2}iii), one can assume it is a neutral gerbe (in fact all gerbes over a finite field are neutral; see (\cite{Beh2}, 6.4.2)). This means the following diagram is 2-Cartesian:
$$
\xymatrix@C=.5cm{ B\text{Aut}_x \ar[r] \ar[d] & \mathscr X_0 \ar[d]^-{\pi} \\ \text{Spec }\mathbb F_{q^v} \ar[r]^-x & X_0.}
$$
So we reduce to two cases: $\mathscr X_0=BG_0$ for an $\mathbb F_q$-algebraic group $G_0,$ or $\mathscr X_0=X_0$ is an $\mathbb F_q$-scheme.

The first case is proved in (\ref{L4.10}). For the second case, given a convergent complex $K_0\in W_m^{-,\text{stra}}(X_0,\overline{\mathbb Q}_{\ell}),$ defined over some $E_{\lambda}$ with ring of integers $\Lambda,$ and trivialized by a pair $(\mathscr S,\mathcal L)$ ($\mathcal L$ being defined over $\Lambda_0$) on $X,$ we can refine this pair so that every stratum is connected, and replace $X_0$ by models of the strata defined over some finite extension of $\mathbb F_q$ (\ref{R4.2}iii). This case is proved in (\ref{L4.11}).
\end{proof}

\section{Trace formula for stacks}

We prove two special cases of (\ref{T4.3}ii) in the following two propositions.

\begin{proposition}\label{L5.3}
Let $G_0$ be a finite \'etale group scheme over $\mathbb F_q,$ and $\mathscr F_0$ a sheaf on $BG_0.$ Then
$$
c_1(BG_0,\mathscr F_0)=c_1(\emph{Spec }\mathbb F_q,R\Gamma_c(BG_0,\mathscr F_0)).
$$
\end{proposition}

\begin{proof}
This is a special case of a result of Olsson on correspondences given by group homomorphisms (\cite{Ols1}, 8.6).
\end{proof}

\begin{proposition}\label{L5.4}
Let $G_0$ be a connected $\mathbb F_q$-algebraic group, and let $\mathscr F_0$ be a sheaf on $BG_0.$ Then
$$
c_1(BG_0,\mathscr F_0)=c_1(\emph{Spec }\mathbb F_q,R\Gamma_c(BG_0,\mathscr F_0)).
$$
\end{proposition}

\begin{proof}
Let $f:BG_0\to\text{Spec }\mathbb F_q$ be the structural map and $d=\dim G_0.$ By (\ref{alg-gp}ii), the sheaf $\mathscr F_0$ on $BG_0$ takes the form $f^*V,$ for some sheaf $V$ on $\text{Spec }\mathbb F_q.$ By (\ref{alg-gp}iii), we have
$$
c_1(BG_0,\mathscr F_0)=\frac{1}{\#G_0(\mathbb F_q)}\text {Tr}(F_x,\mathscr F_{\overline{x}})=\frac{\text{Tr}(F,V)} {\#G_0(\mathbb F_q)}.
$$
By the projection formula we have $H^n_c(BG,\mathscr F)\simeq H^n_c(BG)\otimes V,$ so
$$
\text{Tr}(F,H^n_c(BG,\mathscr F))=\text{Tr}(F,H^n_c(BG))\cdot\text{Tr}(F,V).
$$
Then
\begin{equation*}
\begin{split}
c_1(\text{Spec }\mathbb F_q,R\Gamma_c(BG_0,\mathscr F_0)) &=\sum_n(-1)^n\text{Tr}(F,H^n_c(BG,\mathscr F)) \\
&=\text{Tr}(F,V)\sum_n(-1)^n\text{Tr}(F,H^n_c(BG)),
\end{split}
\end{equation*}
so we can assume $\mathscr F_0=\overline{\mathbb Q}_{\ell}.$ Using the notation of (\ref{4.6.1}), we have
\begin{equation*}
\begin{split}
\sum_n(-1)^n\text{Tr}(F,H^n_c(BG)) &=\sum_{r\ge0}\text{Tr} (F,H^{-2r-2d}_c(BG))=\sum_{r\ge0}\quad\sum_{\substack{\sum m_{ij}(i+1)=2r \\ m_{ij}\ge0}}q^{-d}\prod_{i,j}\alpha_{ij}^{-m_{ij}} \\
&=q^{-d}\sum_{m_{ij}\ge0}\quad\prod_{i,j}\alpha_{ij}^{-m_{ij}} =q^{-d}\prod_{i,j}(1+\alpha_{ij}^{-1}+\alpha_{ij}^{-2}+\cdots) \\
&=q^{-d}\prod_{i,j}\frac{1}{1-\alpha_{ij}^{-1}}.
\end{split}
\end{equation*}
It remains to show
$$
\#G_0(\mathbb F_q)=q^d\prod_{i,j}(1-\alpha_{ij}^{-1}).
$$
In (\ref{4.6.1}), we saw that if each $N^i$ has an ordered basis $v_{i1},\cdots,v_{in_i}$ with respect to which $F$ is upper triangular, then, since $H^*(G)=\bigwedge N,$ the space $H^i(G)$ has a basis
$$
v_{i_1j_1}\wedge v_{i_2j_2}\wedge\cdots\wedge v_{i_mj_m},
$$
such that $\sum_{r=1}^mi_r=i,\ i_r\le i_{r+1},$ and if $i_r=i_{r+1},$ then $j_r<j_{r+1}.$ The eigenvalues of $F$ on $H^i(G)$ are $\alpha_{i_1j_1}\cdots\alpha_{i_mj_m}$ for such indices. By Poincar\'e duality, the eigenvalues of $F$ on $H^{2d-i}_c(G)$ are $q^d(\alpha_{i_1j_1}\cdots\alpha_{i_mj_m})^{-1}.$ Note that all the $i_r$ are odd, so
$$
2d-i\equiv i=\sum_{r=1}^mi_r\equiv m\mod2.
$$
Applying the classical trace formula to $G_0,$ we have
$$
\#G_0(\mathbb F_q)=\sum(-1)^mq^d\alpha_{i_1j_1}^{-1}\cdots \alpha_{i_mj_m}^{-1}=q^d\prod_{i,j}(1-\alpha_{ij}^{-1}).
$$
This finishes the proof.
\end{proof}
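As a quick sanity check (purely illustrative, and no part of the argument), the identity $\#G_0(\mathbb F_q)=q^d\prod_{i,j}(1-\alpha_{ij}^{-1})$ can be tested numerically for $G_0=GL_2,$ assuming the standard fact that the generators of $H^*(GL_2)$ sit in degrees 1 and 3 with Frobenius eigenvalues $q$ and $q^2:$
\begin{verbatim}
# Sanity check of #G_0(F_q) = q^d * prod_{i,j}(1 - alpha_ij^{-1}) for
# G_0 = GL_2: here d = 4, and we take alpha_{1,1} = q, alpha_{3,1} = q^2.
from fractions import Fraction

def gl2_count(q):                      # #GL_2(F_q) counted directly
    return (q**2 - 1) * (q**2 - q)

for q in (2, 3, 4, 5, 7, 8, 9):
    rhs = Fraction(q**4)
    for a in (q, q**2):
        rhs *= 1 - Fraction(1, a)
    assert gl2_count(q) == rhs         # q^4 (1 - 1/q)(1 - 1/q^2)
\end{verbatim}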
\begin{anitem}\label{5.5}
Note that in (\ref{L5.3}) and (\ref{L5.4}) we did not make explicit use of the fact that $\mathscr F_0$ is $\iota$-mixed.
\end{anitem}

Now we prove (\ref{T4.3}ii) in general.

\begin{proof}
Since $c_v(\mathscr X_0,K_0)=c_1(\mathscr X_0\otimes\mathbb F_{q^v},K_0\otimes\mathbb F_{q^v}),$ we can assume $v=1.$ We shall reduce to proving (\ref{T4.3}ii) for all fibers of $f$ over $\mathbb F_q$-points of $\mathscr Y_0,$ following the approach of Behrend (\cite{Beh2}, 6.4.9). Let $y\in\mathscr Y_0(\mathbb F_q)$ and $(\mathscr X_0)_y$ be the fiber over $y.$ Then $(\mathscr X_0)_y(\mathbb F_q)$ is the groupoid of pairs $(x,\alpha),$ where $x\in\mathscr X_0(\mathbb F_q)$ and $\alpha:f(x)\to y$ is an isomorphism in $\mathscr Y_0(\mathbb F_q).$

Suppose $(\mathscr X_0)_y(\mathbb F_q)\ne\emptyset,$ and fix an $x\in(\mathscr X_0)_y(\mathbb F_q).$ Then $\text{Isom}(f(x),y)(\mathbb F_q)$ is a trivial left $\text{Aut}_y(\mathbb F_q)$-torsor. There is also a natural right action of $\text{Aut}_x(\mathbb F_q)$ on $\text{Isom}(f(x),y)(\mathbb F_q),$ namely $\varphi\in\text{Aut}_x(\mathbb F_q)$ takes $\alpha$ to $\alpha\circ f(\varphi).$ Under this action, for $\alpha$ and $\alpha'$ to be in the same orbit, there must exist a $\varphi\in\text{Aut}_x(\mathbb F_q)$ making the diagram
$$
\xymatrix@C=.6cm{ f(x) \ar[rr]^-{f(\varphi)} \ar[rd]_-{\alpha'} && f(x) \ar[ld]^-{\alpha} \\ & y &}
$$
commute, and this is precisely the condition for $(x,\alpha)$ to be isomorphic to $(x,\alpha')$ in $(\mathscr X_0)_y(\mathbb F_q).$ So the set of orbits $\text{Isom}(f(x),y)(\mathbb F_q)/\text{Aut}_x(\mathbb F_q)$ is identified with the inverse image of the class of $x$ along the map $[(\mathscr X_0)_y(\mathbb F_q)]\to[\mathscr X_0(\mathbb F_q)].$ The stabilizer group of $\alpha\in\text{Isom}(f(x),y)(\mathbb F_q)$ is $\text{Aut}_{(x,\alpha)}(\mathbb F_q),$ the automorphism group of $(x,\alpha)$ in $(\mathscr X_0)_y(\mathbb F_q).$

In general, if a finite group $G$ acts on a finite set $S,$ then we have
$$
\sum_{[x]\in S/G}\frac{\#G}{\#\text{Stab}_G(x)}=\sum _{[x]\in S/G}\#\text{Orb}_G(x)=\#S.
$$
Now for $S=\text{Isom}(f(x),y)(\mathbb F_q)$ and $G=\text{Aut}_x(\mathbb F_q),$ we have
$$
\sum_{\substack{(x,\alpha)\in[(\mathscr X_0)_y(\mathbb F_q)] \\ (x,\alpha)\mapsto x}}\frac{\#\text{Aut}_x(\mathbb F_q)}{\#\text{Aut}_{(x,\alpha)}(\mathbb F_q)}=\#\text{Isom} (f(x),y)(\mathbb F_q)=\#\text{Aut}_y(\mathbb F_q);
$$
the last equality follows from the fact that $S$ is a trivial $\text{Aut}_y(\mathbb F_q)$-torsor.
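The orbit-stabilizer count above is elementary and easy to test by machine; the following Python sketch (an illustration only) verifies it for the conjugation action of the symmetric group $S_3$ on itself, which is also the action $\rho^{(v)}$ (with $\sigma=\mathrm{id},$ $v=1$) appearing in (\ref{example3}) below:
\begin{verbatim}
# Verify sum over orbits of #G/#Stab_G(x) = #S for G = S_3 acting on
# S = G by conjugation, g |-> h^{-1} g h.
from itertools import permutations

G = list(permutations(range(3)))            # S_3 as permutation tuples

def mul(g, h):                              # composition: (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    r = [0, 0, 0]
    for i, gi in enumerate(g):
        r[gi] = i
    return tuple(r)

def act(h, g):                              # h sends g to h^{-1} g h
    return mul(inv(h), mul(g, h))

reps, seen = [], set()
for g in G:                                 # one representative per orbit
    if g not in seen:
        reps.append(g)
        seen |= {act(h, g) for h in G}

total = sum(len(G) // sum(1 for h in G if act(h, g) == g) for g in reps)
assert total == len(G)                      # 1 + 3 + 2 = 6 = #S
\end{verbatim}
Dividing each summand by $\#G$ instead gives $\sum_{[g]}1/\#\text{Stab}=\#S/\#G=1,$ which is exactly the computation of $c_v(BG_0)$ in (\ref{example3}).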
If we assume (\ref{T4.3}ii) holds for the fibers $f_y:(\mathscr X_0)_y\to\text{Spec }\mathbb F_q$ of $f,$ for all $y\in\mathscr Y_0(\mathbb F_q),$ then \begin{equation*} \begin{split} c_1(\mathscr Y_0,f_!K_0) &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{\text{Tr}(F_y, (f_!K)_{\overline{y}})}{\#\text{Aut}_y(\mathbb F_q)} \\ &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{\text{Tr}(F_y, (f_{y!}K)_{\overline{y}})}{\#\text{Aut}_y(\mathbb F_q)} \qquad\qquad(\cite{LO2}, 12.5.3) \\ &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{1}{\#\text{Aut} _y(\mathbb F_q)}\sum_{(x,\alpha)\in[(\mathscr X_0)_y(\mathbb F_q)]}\frac{\text{Tr}(F_x,K_{\overline{x}})}{\#\text {Aut}_{(x,\alpha)}(\mathbb F_q)} \\ &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{1}{\#\text {Aut}_y(\mathbb F_q)}\sum_{\substack{x\in[\mathscr X_0(\mathbb F_q)] \\ x\mapsto y}}\Big(\sum_{\substack{(x,\alpha)\in[(\mathscr X_0)_y(\mathbb F_q)] \\ (x,\alpha)\mapsto x}}\frac{\text {Tr}(F_x,K_{\overline{x}})}{\#\text{Aut}_{(x,\alpha)}(\mathbb F_q)}\Big) \\ &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{1}{\#\text{Aut}_y (\mathbb F_q)}\sum_{\substack{x\in[\mathscr X_0(\mathbb F_q)] \\ x\mapsto y}}\frac{1}{\#\text{Aut}_x(\mathbb F_q)} \\ & \qquad\qquad\qquad\qquad\qquad\Big(\sum _{\substack{(x,\alpha)\in[(\mathscr X_0)_y(\mathbb F_q)] \\ (x,\alpha)\mapsto x}}\frac{\#\text{Aut}_x(\mathbb F_q)}{\# \text{Aut}_{(x,\alpha)}(\mathbb F_q)}\Big)\text{Tr}(F_x,K _{\overline{x}}) \\ &=\sum_{y\in[\mathscr Y_0(\mathbb F_q)]}\frac{1}{\#\text{Aut} _y(\mathbb F_q)}\sum_{\substack{x\in[\mathscr X_0(\mathbb F_q)] \\ x\mapsto y}}\frac{\text{Tr}(F_x,K_{\overline{x}})} {\#\text{Aut}_x(\mathbb F_q)}\#\text{Aut}_y(\mathbb F_q) \\ &=\sum_{x\in[\mathscr X_0(\mathbb F_q)]}\frac{\text{Tr}(F_x, K_{\overline{x}})}{\#\text{Aut}_x(\mathbb F_q)}=:c_1(\mathscr X_0,K_0). \end{split} \end{equation*} So we reduce to the case when $\mathscr Y_0=\text{Spec }\mathbb F_q.$ If $K_0'\to K_0\to K_0''\to K_0'[1]$ is an exact triangle of convergent complexes in $W_m^{-,\text{stra}}(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ then by (\ref{C4.6}) and (\ref{T4.3}i) we have $$ c_1(\mathscr X_0,K_0)=c_1(\mathscr X_0,K_0')+c_1(\mathscr X_0,K_0'') $$ and $$ c_1(\mathscr Y_0,f_!K_0)=c_1(\mathscr Y_0,f_!K_0')+c_1 (\mathscr Y_0,f_!K_0''). $$ If $j:\mathscr U_0\to\mathscr X_0$ is an open substack with complement $i:\mathscr Z_0\to\mathscr X_0,$ then $$ c_1(\mathscr X_0,j_!j^*K_0)=c_1(\mathscr U_0,j^*K_0)\ \text{ and }\ c_1(\mathscr X_0,i_*i^*K_0)=c_1(\mathscr Z_0,i^*K_0). $$ By noetherian induction we can shrink $\mathscr X_0$ to a nonempty open substack. So as before we may assume the inertia stack $\mathscr I_0$ is flat over $\mathscr X_0,$ with rigidification $\pi:\mathscr X_0\to X_0,$ where $X_0$ is a scheme. If (\ref{T4.3}ii) holds for two composable morphisms $f$ and $g,$ then it holds for $g\circ f.$ So we reduce to two cases as before: $\mathscr X_0=X_0$ is a scheme, or $\mathscr X_0=BG_0,$ where $G_0$ is either a connected algebraic group, or a finite \'etale algebraic group over $\mathbb F_q.$ We may assume $X_0$ is separated, by further shrinking (for instance to an affine open subscheme). For a complex of sheaves $K_0$ and an integer $n,$ we have an exact triangle $$ \xymatrix@C=.5cm{ \tau_{<n}K_0 \ar[r] & \tau_{<n+1}K_0 \ar[r] & \mathscr H^n(K_0)[-n] \ar[r] &,} $$ so \begin{equation*} \begin{split} c_1(\tau_{<n+1}K_0) &=c_1(\tau_{<n}K_0)+c_1(\mathscr H^n(K_0)[-n]) \\ &=c_1(\tau_{<n}K_0)+(-1)^nc_1(\mathscr H^n(K_0)). 
\end{split} \end{equation*} Since $K_0$ is bounded above, $\tau_{<N}K_0\simeq K_0$ for $N\gg0.$ Since $K_0$ is convergent, $c_1(\tau_{< n}K_0)\to0$ absolutely as $n\to-\infty,$ so the series $\sum_{n\in\mathbb Z}(-1)^nc_1(\mathscr H^n(K_0))$ converges absolutely to $c_1(K_0).$ Applying $R\Gamma_c$ we get an exact triangle $$ \xymatrix@C=.5cm{ R\Gamma_c(\mathscr X_0,\tau_{<n}K_0) \ar[r] & R\Gamma_c(\mathscr X_0,\tau_{<n+1}K_0) \ar[r] & R\Gamma_c(\mathscr X_0,\mathscr H^nK_0)[-n] \ar[r] &} $$ in $W^-(\text{Spec }\mathbb F_q,\overline{\mathbb Q}_{\ell}).$ We claim that, for $\mathscr X_0=X_0$ a scheme, or $BG_0,$ we have $$ \lim_{n\to-\infty}c_1(\text{Spec }\mathbb F_q,R\Gamma_c(\mathscr X_0,\tau_{<n}K_0))=0 $$ absolutely. Recall that $c_1(R\Gamma_c(\tau_{<n}K_0))= \sum_{i\in \mathbb Z}(-1)^i\iota\text{Tr}(F,H^i_c(\mathscr X,\tau_{<n}K)),$ so we need to show $$ \sum_{i\in\mathbb Z}\quad\sum_{H^i_c(\mathscr X,\tau_{< n}K),F}|\alpha|\to0\qquad\text{as}\qquad n\to-\infty. $$ From the spectral sequence $$ H^r_c(\mathscr X,\mathscr H^k\tau_{<n}K) \Longrightarrow H^{r+k}_c(\mathscr X,\tau_{<n}K) $$ we see that \begin{equation*} \begin{split} \sum_{i\in\mathbb Z}\quad\sum_{H^i_c(\mathscr X,\tau_{< n}K),F}|\alpha| &\le\sum_{i\in\mathbb Z}\quad\sum_{r+k=i}\quad\sum_{H^r_c(\mathscr X,\mathscr H^k\tau_{<n}K),F}|\alpha| \\ &=\sum_{i\in\mathbb Z}\quad\sum_{\substack{r+k=i \\ k< n}}\quad\sum_{H^r_c(\mathscr X,\mathscr H^kK),F}|\alpha|. \end{split} \end{equation*} Let $d=\dim\mathscr X_0$ (cf. \ref{dim}). In the cases where $\mathscr X_0$ is a scheme or $BG_0,$ we have $H^r_c(\mathscr X,\mathscr F)=0$ for every sheaf $\mathscr F$ unless $r\le2d$ (cf. (\ref{4.6.1}) and (\ref{L4.8})). Therefore $$ \sum_{i\in\mathbb Z}\quad\sum_{\substack{r+k=i \\ k<n}}\quad \sum_{H^r_c(\mathscr X,\mathscr H^kK),F}|\alpha|\le \sum_{i<n+2d}\quad\sum_{r+k=i}\quad\sum_{H^r_c(\mathscr X,\mathscr H^kK),F}|\alpha|, $$ and it suffices to show that the series $$ \sum_{i\in\mathbb Z}\quad\sum_{r+k=i}\qquad\sum_{H^r_c (\mathscr X,\mathscr H^kK),F}|\alpha| $$ converges. This is proved for $BG_0$ in (\ref{L4.10}), and for schemes $X_0$ in (\ref{L4.11}) (we may shrink $X_0$ so that the assumption in (\ref{L4.11}) is satisfied). Note that in the two cases $\mathscr X_0=X_0$ or $BG_0,$ (\ref{T4.3}ii) holds when $K_0$ is a sheaf concentrated in degree 0. For separated schemes $X_0,$ this is a classical result of Grothendieck and Verdier \cite{Gro, Ver}; for $BG_0,$ this is done in (\ref{L5.3}) and (\ref{L5.4}). Therefore, for a general convergent complex $K_0,$ we have \begin{equation*} \begin{split} c_1(R\Gamma_c(\tau_{<n+1}K_0)) &=c_1(R\Gamma_c(\tau_{< n}K_0))+c_1(R\Gamma_c(\mathscr H^nK_0)[-n]) \\ &=c_1(R\Gamma_c(\tau_{<n}K_0))+(-1)^nc_1(\mathscr H^nK_0), \end{split} \end{equation*} and so $$ c_1(R\Gamma_c(K_0))=\sum_{n\in\mathbb Z}(-1)^nc_1(\mathscr H^nK_0)+\lim_{n\to-\infty}c_1(R\Gamma_c(\tau_{<n}K_0))= c_1(K_0). $$ \end{proof} \begin{corollary}\label{C5.5} Let $f:\mathscr X_0\to\mathscr Y_0$ be a morphism of $\mathbb F_q$-algebraic stacks, and let $K_0\in W_m^{-,\emph{stra}}(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ be a convergent complex of sheaves. Then $$ L(\mathscr X_0,K_0,t)=L(\mathscr Y_0,f_!K_0,t). $$ \end{corollary} \section{Infinite products} For a convergent complex $K_0$ on $\mathscr X_0,$ the series $\sum_{v\ge1}c_v(K_0)t^v/v$ (and hence the $L$-series $L(\mathscr X_0,K_0,t)$) usually has a finite radius of convergence. For instance, we have the following lemma. 
\begin{lemma}\label{radius}
Let $X_0/\mathbb F_q$ be a variety of dimension $d.$ Then the radius of convergence of $\sum_{v\ge1}c_v(X_0)t^v/v$ is $1/q^d.$
\end{lemma}

\begin{proof}
Let $f_{X_0}(t)=\sum_{v\ge1}c_v(X_0)t^v/v.$ Let $Y_0$ be an irreducible component of $X_0$ with complement $U_0.$ Then $c_v(X_0)=c_v(Y_0)+c_v(U_0),$ and since all the $c_v$-terms are non-negative, we see that the radius of convergence of $f_{X_0}(t)$ is the minimum of that of $f_{Y_0}(t)$ and that of $f_{U_0}(t).$ Since $\max\{\dim(Y_0),\dim(U_0)\}=d,$ and $U_0$ has fewer irreducible components than $X_0,$ by induction we can assume $X_0$ is irreducible. Then there exists an open dense subscheme $U_0\subset X_0$ that is smooth over $\text{Spec }\mathbb F_q.$ Let $Z_0=X_0-U_0,$ then $\dim(Z_0)<\dim(X_0)=d.$ From the cohomology sequence
$$
\xymatrix@C=.5cm{ H^{2d-1}_c(Z) \ar[r] & H^{2d}_c(U) \ar[r] & H^{2d}_c(X) \ar[r] & H^{2d}_c(Z)}
$$
we see that $H^{2d}_c(X)=H^{2d}_c(U)=\overline{\mathbb Q}_{\ell}(-d).$ The Frobenius eigenvalues $\{\alpha_{ij}\}_j$ on $H^i_c(X)$ have $\iota$-weights $\le i,$ for $0\le i<2d$ (\cite{Del2}, 3.3.4). By the fixed point formula,
$$
\frac{c_v(X_0)}{c_{v+1}(X_0)}=\frac{q^{vd}+\sum_{0\le i<2d}(-1)^i\sum_j\alpha_{ij}^v}{q^{(v+1)d}+\sum_{0\le i<2d}(-1)^i\sum_j\alpha_{ij}^{v+1}}=\frac{\frac{1}{q^d}+ \frac{1}{q^d}\sum_{0\le i<2d}(-1)^i\sum_j(\frac{\alpha _{ij}}{q^d})^v} {1+\sum_{0\le i<2d}(-1)^i\sum_j(\frac {\alpha_{ij}}{q^d})^{v+1}},
$$
which converges to $1/q^d$ as $v\to\infty,$ therefore the radius of convergence of $f_{X_0}(t)$ is
$$
\lim_{v\to\infty}\frac{c_v(X_0)/v}{c_{v+1}(X_0)/(v+1)}= \frac{1}{q^d}.
$$
\end{proof}
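For instance, for $X_0=\mathbb G_{m,\mathbb F_q}$ we have $d=1$ and $c_v=q^v-1,$ and the ratio test above is easy to watch numerically (a minimal sketch):
\begin{verbatim}
# c_v(G_m) = q^v - 1 over F_q; the ratios c_v/c_{v+1} tend to 1/q^d, d = 1.
q, d = 3, 1
c = lambda v: q**v - 1
ratios = [c(v) / c(v + 1) for v in range(1, 21)]
assert abs(ratios[-1] - 1 / q**d) < 1e-9
\end{verbatim}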
In order to prove the meromorphic continuation (\ref{T8.1}), we want to express the $L$-series as a possibly infinite product. For schemes, if we consider only bounded complexes, the $L$-series can be expressed as a finite alternating product of polynomials $P_n(X_0,K_0,t),$ so it is rational \cite{Gro}. In the stack case, even for the sheaf $\overline{\mathbb Q}_{\ell},$ there might be infinitely many nonzero compactly supported cohomology groups, and we need to consider the issue of convergence of the coefficients in an infinite product.

\begin{definition}\label{D6.1}
Let $f_n(t)=\sum_{k\ge0}a_{nk}t^k\in\mathbb C[[t]]$ be a sequence of power series over $\mathbb C.$ The sequence is said to be \emph{convergent term by term} if for each $k,$ the sequence $(a_{nk})_n$ converges, and the series
$$
\lim_{n\to\infty}f_n(t):=\sum_{k\ge0}t^k\lim_{n\to\infty}a_{nk}
$$
is called the limit of the sequence $\big(f_n(t)\big)_n.$
\end{definition}

\begin{anitem}
Strictly speaking, a series (resp. infinite product) is defined to be a sequence $(a_n)_n,$ usually written as an ``infinite sum" (resp. ``infinite product") so that $(a_n)_n$ is the sequence of finite partial sums (resp. finite partial products) of it. So the definition above applies to series and infinite products.
\end{anitem}

Recall that $\log(1+g)=\sum_{m\ge1}(-1)^{m+1}g^m/m$ for $g\in t\mathbb C[[t]].$

\begin{lemma}\label{L6.2}
(i) Let $f_n(t)=1+\sum_{k\ge1}a_{nk}t^k\in\mathbb C[[t]]$ be a sequence of power series. Then $\big(f_n(t)\big)_n$ is convergent term by term if and only if $\big(\log f_n(t)\big)_n$ is convergent term by term, and
$$
\lim_{n\to\infty}\log f_n(t)=\log\lim_{n\to\infty}f_n(t).
$$

(ii) Let $f$ and $g$ be two power series with constant term 1. Then
$$
\log(fg)=\log(f)+\log(g).
$$

(iii) Let $f_n(t)\in1+t\mathbb C[[t]]$ be a sequence as in (i). Then the infinite product $\prod_{n\ge1}f_n(t)$ converges term by term if and only if the series $\sum_{n\ge1}\log f_n(t)$ converges term by term, and
$$
\sum_{n\ge1}\log f_n(t)=\log\prod_{n\ge1}f_n(t).
$$
\end{lemma}

\begin{proof}
(i) We have
\begin{equation*}
\begin{split}
\log f_n(t) &=\sum_{m\ge1}(-1)^{m+1} \big(\sum_{k\ge1}a_{nk}t^k\big)^m/m \\
&=t\cdot a_{n1}+t^2(a_{n2}-\frac{a_{n1}^2}{2})+t^3(a_{n3} -a_{n1}a_{n2}+\frac{a_{n1}^3}{3}) \\
& \qquad+t^4(a_{n4}-a_{n1}a_{n3} -\frac{a_{n2}^2}{2}+a_{n1}^2a_{n2}-\frac{a_{n1}^4}{4})+\cdots \\
&=:\sum_{k\ge1}A_{nk}t^k.
\end{split}
\end{equation*}
In particular, for each $k,\ A_{nk}-a_{nk}=h(a_{n1},\cdots, a_{n,k-1})$ is a polynomial in $a_{n1},\cdots,a_{n,k-1}$ with rational coefficients. So if $(a_{nk})_n$ converges for each $k,$ then $(A_{nk})_n$ also converges, and by induction the converse also holds. If $\lim_{n\to\infty}a_{nk}=a_k,$ then $\lim_{n\to\infty}A_{nk}=a_k+h(a_1,\cdots,a_{k-1}),$ and
$$
\log\lim_{n\to\infty}f_n(t)=\log(1+\sum_{k\ge1}a_kt^k)= \sum_{k\ge1}(a_k+h(a_1,\cdots,a_{k-1}))t^k=\lim_{n\to\infty} \log f_n(t).
$$

(ii) $\log$ and $\exp$ are inverse to each other on power series, so it suffices to prove that for $f$ and $g\in t\mathbb C[[t]],$ we have
$$
\exp(f+g)=\exp(f)\exp(g).
$$
This follows from the binomial formula:
\begin{equation*}
\begin{split}
\exp(f+g) &=\sum_{n\ge0}(f+g)^n/n!=\sum_{n\ge0}\frac{1}{n!} \sum_{k=0}^n\binom{n}{k}f^kg^{n-k}=\sum_{n\ge0}\quad \sum_{k=0}^n\frac{f^k}{k!}\cdot\frac{g^{n-k}}{(n-k)!} \\
&=\sum_{i,j\ge0} \frac{f^i}{i!}\cdot\frac{g^j}{j!}=\big(\sum_{i\ge0}f^i/i! \big)\big(\sum_{j\ge0}g^j/j!\big)=\exp(f)\exp(g).
\end{split}
\end{equation*}

(iii) Let $F_N(t)=\prod_{n=1}^Nf_n(t).$ Applying (i) to the sequence $(F_N(t))_N,$ we see that the infinite product $\prod_{n\ge1}f_n(t)$ converges term by term if and only if (by definition) $\big(F_N(t)\big)_N$ converges term by term, if and only if the sequence $\big(\log F_N(t)\big)_N$ converges term by term, if and only if (by definition) the series $\sum_{n\ge1}\log f_n(t)$ converges term by term, since by (ii)
$$
\log\prod_{n=1}^Nf_n(t)=\sum_{n=1}^N\log f_n(t).
$$
Also
$$
\log\prod_{n\ge1}f_n(t)=\log\lim_{N\to\infty}F_N(t)= \lim_{N\to\infty}\log F_N(t)=\lim_{N\to\infty}\sum_{n=1} ^N\log f_n(t)=:\sum_{n\ge1}\log f_n(t).
$$
\end{proof}

\begin{blank}
For a complex of sheaves $K_0$ on $\mathscr X_0$ and $n\in\mathbb Z,$ define
$$
P_n(\mathscr X_0,K_0,t):=\det(1-Ft,H^n_c(\mathscr X,K)).
$$
We regard $P_n(\mathscr X_0,K_0,t)^{\pm1}$ as a complex power series with constant term 1 via $\iota.$
\end{blank}

\begin{proposition}\label{P6.5}
For every convergent complex of sheaves $K_0\in W_m^{-,\emph{stra}}(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ the infinite product
$$
\prod_{n\in\mathbb Z}P_n(\mathscr X_0,K_0,t)^{(-1)^{n+1}}
$$
is convergent term by term to the $L$-series $L(\mathscr X_0,K_0,t).$
\end{proposition}

\begin{proof}
The complex $R\Gamma_c(\mathscr X,K)$ is bounded above, so $P_n(\mathscr X_0,K_0,t)=1$ for $n\gg0,$ and the infinite product is a limit in one direction only, namely $n\to-\infty.$ Let $\alpha_{n1},\cdots,\alpha_{nm_n}$ be the eigenvalues (counted with multiplicity) of $F$ on $H^n_c(\mathscr X,K),$ regarded as complex numbers via $\iota,$ so that
$$
P_n(t)=P_n(\mathscr X_0,K_0,t)=(1-\alpha_{n1}t)\cdots(1-\alpha_{nm_n}t).
$$
By (\ref{L6.2}iii) it suffices to show that the series
$$
\sum_{n\in\mathbb Z}(-1)^{n+1}\log P_n(t)
$$
converges term by term to $\sum_{v\ge1}c_v(K_0)t^v/v.$ We have
\begin{equation*}
\begin{split}
\sum_{n\in\mathbb Z}(-1)^{n+1}\log P_n(t) &=\sum_{n\in\mathbb Z}(-1)^{n+1}\log\prod_i(1-\alpha_{ni}t)=\sum_{n\in\mathbb Z}(-1)^n\sum_i\sum_{v\ge1}\frac{\alpha_{ni}^vt^v}{v} \\
&=\sum_{v\ge1}\frac{t^v}{v}\sum_{n\in\mathbb Z}(-1)^n\sum_i\alpha_{ni}^v=\sum_{v\ge1}\frac{t^v}{v}c_v (R\Gamma_c(K_0)),
\end{split}
\end{equation*}
which converges term by term by (\ref{T4.3}i), and equals $\sum_{v\ge1}c_v(K_0)t^v/v$ by (\ref{T4.3}ii).
\end{proof}

\begin{subremark}\label{R6.6}
In particular we have
$$
Z(\mathscr X_0,t)=\prod_{n\in\mathbb Z}P_n(\mathscr X_0,t)^{(-1)^{n+1}},
$$
where $P_n(\mathscr X_0,t)=P_n(\mathscr X_0,\overline {\mathbb Q}_{\ell},t).$ This generalizes the classical result for schemes (\cite{Gro}, 5.1). When we want to emphasize the dependence on the prime $\ell,$ we will write $P_{n,\ell}(\mathscr X_0,t).$

If $G_0$ is a connected $\mathbb F_q$-algebraic group, (\ref{4.6.1}) shows that the zeta function of $BG_0$ is given by
\begin{equation*}
\begin{split}
Z(BG_0,t) &=\prod_{r\ge0}\quad\prod_{\substack{m_{ij}\ge0 \\ \sum_{i,j}m_{ij}(i+1)=2r}}\Big(1-q^{-d}\prod _{i,j}\alpha_{ij}^{-m_{ij}}\cdot t\Big)^{-1} \\
&=\prod_{m_{ij}\ge0}\Big(1-q^{-d}\prod_{i,j}\alpha_{ij}^ {-m_{ij}}\cdot t\Big)^{-1}.
\end{split}
\end{equation*}
\end{subremark}

\section{Examples of zeta functions}

In this section we compute the zeta functions of some stacks, and in each example we do it in two ways: counting rational points and computing cohomology groups. We also investigate some analytic properties.

\begin{example}\label{example1}
$B\mathbb G_m.$ By (\ref{alg-gp}iii) we have $c_v(B\mathbb G_m)=1/c_v(\mathbb G_m),$ so the zeta function is
$$
Z(B\mathbb G_m,t)=\exp\Big(\sum_{v\ge1}c_v(B\mathbb G_m)\frac {t^v}{v}\Big)=\exp\Big(\sum_{v\ge1}\frac{1}{q^v-1}\frac{t^v} {v}\Big).
$$
Using Borel's theorem (\ref{Borel}) one can show (see also (\cite{LMB}, 19.3.2)) that the cohomology ring $H^*(B\mathbb G_m)$ is a polynomial ring $\overline{\mathbb Q}_{\ell}[T],$ generated by a variable $T$ of degree 2, and the global Frobenius action is given by $FT^n=q^nT^n.$ So by Poincar\'e duality, we have
\begin{equation*}
\begin{split}
\text{Tr}(F,H^{-2n-2}_c(B\mathbb G_m)) &=\text{Tr}(F,H^{-2n-2}_c(B\mathbb G_m,\overline{\mathbb Q}_{\ell}(-1)))/q \\
&=\text{Tr}(F^{-1},H^{2n}(B\mathbb G_m))/q=q^{-n-1}.
\end{split}
\end{equation*}
This gives
$$
\prod_{n\in\mathbb Z}P_n(B\mathbb G_m,t)^{(-1)^{n+1}}= \prod_{n\ge1}(1-q^{-n}t)^{-1}.
$$
It is easy to verify (\ref{R6.6}) directly:
\begin{equation*}
\begin{split}
\exp\Big(\sum_{v\ge1}\frac{1}{q^v-1}\frac{t^v}{v}\Big) &= \exp\Big(\sum_{v\ge1}\frac{1/q^v}{1-1/q^v}\frac{t^v}{v} \Big)=\exp\Big(\sum_{v\ge1}\frac{t^v}{v}\sum_{n\ge1}\frac{1} {q^{nv}}\Big) \\
&=\prod_{n\ge1}\exp\Big(\sum_{v\ge1}\frac{(t/q^n)^v}{v}\Big) =\prod_{n\ge1}(1-t/q^n)^{-1}.
\end{split}
\end{equation*}
There is also a functional equation
$$
Z(B\mathbb G_m,qt)=\frac{1}{1-t}Z(B\mathbb G_m,t),
$$
which implies that $Z(B\mathbb G_m,t)$ has a meromorphic continuation to the whole complex plane, with simple poles at $t=q^n,$ for $n\ge1.$
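The same verification is easy to carry out by machine; the following Python sketch (a floating-point check, truncating the sum over $n$ at $n\le200$) compares the coefficients of $\log Z(B\mathbb G_m,t),$ namely $1/(v(q^v-1)),$ with those of $\sum_{n\ge1}-\log(1-t/q^n):$
\begin{verbatim}
# The coefficient of t^v in log Z(BG_m, t) is 1/(v(q^v - 1)); the product
# formula predicts it also equals (1/v) * sum_{n>=1} q^{-nv}.
import math

q, V, N = 3, 8, 200
for v in range(1, V + 1):
    lhs = 1.0 / (v * (q**v - 1))
    rhs = sum(q**(-n * v) for n in range(1, N + 1)) / v
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
\end{verbatim}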
$H^{-2n-2}_c(B\mathbb G_m)$ is pure of weight $-2n-2.$ A natural question is whether Deligne's theorem of weights (\cite{Del2}, 3.3.4) still holds for algebraic stacks. Olsson told me that it does not hold in general, as the following example shows.
\end{example}

\begin{example}\label{example2}
$BE,$ where $E$ is an elliptic curve over $\mathbb F_q.$ Again by (\ref{alg-gp}iii) we have
$$
c_v(BE)=\frac{1}{\#E(\mathbb F_{q^v})}.
$$
Let $\alpha$ and $\beta$ be the roots of the reciprocal characteristic polynomial of the Frobenius on $H^1(E):$
\begin{equation}\label{E2}
x^2-(1+q-c_1(E))x+q=0.
\end{equation}
Then for every $v\ge1,$ we have $c_v(E)=1-\alpha^v-\beta^v+q^v=(1-\alpha^v)(1-\beta^v).$ So
\begin{equation*}
\begin{split}
c_v(BE) &=\frac{1}{(1-\alpha^v)(1-\beta^v)}=\frac{\alpha^{-v}} {1-\alpha^{-v}}\cdot\frac{\beta^{-v}}{1-\beta^{-v}} \\
&=\big(\sum_{n\ge1}\alpha^{-nv}\big)\big(\sum_{m\ge1}\beta^ {-mv}\big)=\sum_{n,m\ge1}\Big(\frac{1}{\alpha^n\beta^m}\Big)^v,
\end{split}
\end{equation*}
and the zeta function is
$$
Z(BE,t)=\exp\Big(\sum_{v\ge1}c_v(BE)\frac{t^v}{v}\Big)= \exp\Big(\sum_{\substack{n,m\ge1 \\ v\ge1}} \big(\frac{t}{\alpha^n\beta^m}\big)^v/v\Big)= \prod_{n,m\ge1}\big(1-\frac{t}{\alpha^n\beta^m}\big)^{-1}.
$$
To compute its cohomology, one can apply Borel's theorem (\ref{Borel}) to $E,$ and we have $N=N^1=H^1(E),$ so $N[-1]$ is a 2-dimensional vector space sitting in degree 2, on which $F$ has eigenvalues $\alpha$ and $\beta.$ Then $H^*(BE)$ is a polynomial ring $\overline{\mathbb Q}_{\ell}[a,b]$ in two variables, both sitting in degree 2, and the basis $a,b$ can be chosen so that the Frobenius action $F$ on $H^2(BE)$ is upper triangular (or even diagonal)
$$
\begin{bmatrix}\alpha & \gamma \\ & \beta\end{bmatrix}.
$$
Then $F$ acting on
$$
H^{2n}(BE)=\text{Sym}^nN[-1]=\overline{\mathbb Q}_{\ell}\langle a^n,a^{n-1}b,\cdots,b^n\rangle
$$
can be represented by
$$
\begin{bmatrix}\alpha^n & * & * & * \\ & \alpha^{n-1}\beta & * & * \\ && \ddots & * \\ &&& \beta^n\end{bmatrix},
$$
with eigenvalues $\alpha^n,\alpha^{n-1}\beta,\cdots,\beta^n.$ So the eigenvalues of $F$ on $H^{-2-2n}_c(BE)$ are
$$
q^{-1}\alpha^{-n},q^{-1}\alpha^{1-n}\beta^{-1},\cdots, q^{-1}\beta^{-n},
$$
and $\prod_{n\in\mathbb Z}P_n(BE,t)^{(-1)^{n+1}}$ is
$$
\frac{1}{(1-q^{-1}t)[(1-q^{-1}\alpha^{-1}t)(1-q^{-1} \beta^{-1}t)][(1-q^{-1}\alpha^{-2}t)(1-q^{-1}\alpha^{-1} \beta^{-1}t)(1-q^{-1}\beta^{-2}t)]\cdots},
$$
which is the same as $Z(BE,t)$ above (since $\alpha\beta=q$).

Let $Z_1(t):=Z(BE,qt).$ Its radius of convergence is 1, since by (\ref{radius})
$$
\lim_{v\to\infty}\frac{c_v(BE)}{c_{v+1}(BE)}=\lim_{v\to \infty}\frac{c_{v+1}(E)}{c_v(E)}=q.
$$
There is also a functional equation
$$
Z_1(\alpha t)=\frac{1}{1-\alpha t}Z_1(t)Z_2(t),
$$
where
$$
Z_2(t)=\frac{1}{(1-\alpha\beta^{-1}t)(1-\alpha\beta^{-2}t) (1-\alpha\beta^{-3}t)\cdots}.
$$
$Z_2(t)$ is holomorphic in the open unit disk and satisfies the functional equation
$$
Z_2(\beta t)=\frac{1}{1-\alpha t}Z_2(t).
$$
Therefore $Z_2(t),$ and hence $Z(BE,t),$ has a meromorphic continuation to the whole complex $t$-plane with the obvious poles.

\begin{subremark}\label{R7.1}
$H^{-2-2n}_c(BE)$ is pure of weight $-2-n,$ which is not $\le-2-2n$ unless $n=0.$ So the upper bound of weights for schemes fails for $BE.$ This also leads to the failure of the decomposition theorem for $BE;$ see (\cite{Decom}, $\S1$) for an example of a pure complex on $BE$ which is not geometrically semi-simple. Also note that the equation (\ref{E2}) is independent of $\ell,$ so the polynomials $P_{n,\ell}(BE,t)$ are independent of $\ell.$
\end{subremark}
\end{example}
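Numerically, the double-series expression for $c_v(BE)$ is again easy to test. The Python sketch below takes a hypothetical elliptic curve over $\mathbb F_5$ with $c_1(E)=8,$ so that $\alpha,\beta$ are the roots of $x^2+2x+5,$ and compares the truncated double sum with $1/\big((1-\alpha^v)(1-\beta^v)\big):$
\begin{verbatim}
# Check c_v(BE) = 1/((1-alpha^v)(1-beta^v)) = sum_{n,m>=1} (alpha^n beta^m)^{-v}
# for a hypothetical E/F_5 with #E(F_5) = 8: alpha, beta are the roots of
# x^2 + 2x + 5, so alpha*beta = q = 5 and |alpha| = sqrt(5).
import cmath

q, a = 5, -2                       # a = 1 + q - c_1(E)
disc = cmath.sqrt(a * a - 4 * q)
alpha, beta = (a + disc) / 2, (a - disc) / 2
assert abs(alpha * beta - q) < 1e-9

for v in (1, 2, 3):
    exact = 1 / ((1 - alpha**v) * (1 - beta**v))
    trunc = sum((alpha**n * beta**m)**(-v)
                for n in range(1, 60) for m in range(1, 60))
    assert abs(exact - trunc) < 1e-9
\end{verbatim}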
\begin{example}\label{example3}
$BG_0,$ where $G_0$ is a finite \'etale $\mathbb F_q$-group scheme, corresponding to a finite group $G$ and a Frobenius automorphism $\sigma$ on it. Then we have $BG_0(\mathbb F_{q^v})\simeq G/\rho^{(v)},$ where $\rho^{(v)}$ is the right action of $G$ on the set $G$ given by $h:g\mapsto\sigma^v(h^{-1})gh.$ So
$$
c_v(BG_0)=\sum_{[g]\in G/\rho^{(v)}}\frac{1}{\#\text {Stab}_{\rho^{(v)}}(g)}=\frac{\#G}{\#G}=1,
$$
and the zeta function is
$$
Z(BG_0,t)=\frac{1}{1-t}.
$$
Its cohomology groups are given in (\ref{L4.8}): $H^0_c(BG)=\overline{\mathbb Q}_{\ell},$ and $H^i_c(BG)=0$ for $i\ne0.$ This verifies (\ref{R6.6}). Note that $Z(BG_0,t)$ is the same as the zeta function of its coarse moduli space $\text{Spec }\mathbb F_q.$

As a consequence, for every $\mathbb F_q$-algebraic stack $\mathscr X_0,$ with finite inertia $\mathscr I_0\to\mathscr X_0$ and coarse moduli space $\pi:\mathscr X_0\to X_0$ (\cite{Crd}, 1.1), we have $Z(\mathscr X_0,t)=Z(X_0,t),$ and hence it is a rational function. This is because for every $x\in X_0(\mathbb F_{q^v}),$ the fiber $\pi^{-1}(x)$ is a neutral gerbe over $\text{Spec }k(x),$ and from the above we see that $c_v(\pi^{-1}(x))=1,$ and hence $c_v(\mathscr X_0)=c_v(X_0).$ The fact that $Z(X_0,t)$ is a rational function follows from (\cite{Knu}, II, 6.7) and noetherian induction. More generally, we have the following.

\begin{subproposition}\label{C7.2}
Let $\mathscr X_0$ be an $\mathbb F_q$-algebraic stack. Suppose that $\mathscr X_0$ either has finite inertia, or is Deligne-Mumford (not necessarily separated). Then for every $K_0\in W^b(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ the $L$-series $L(\mathscr X_0,K_0,t)$ is a rational function.
\end{subproposition}

\begin{proof}
It suffices to show that (\ref{T4.3}) holds for the structural map $\mathscr X_0\to\text{Spec }\mathbb F_q$ and $K_0\in W^b(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ in these two cases. We will not make explicit use of the fact (\ref{Laff}) that $K_0$ is $\iota$-mixed.

\textbf{Case when $\mathscr X_0$ has finite inertia.} Let $\pi:\mathscr X_0\to X_0$ be its coarse moduli space. For any sheaf $\mathscr F_0$ on $\mathscr X_0,$ by (\ref{L4.8}) we have isomorphisms $H^r_c(X,R^0\pi_!\mathscr F)\simeq H^r_c(\mathscr X,\mathscr F),$ so $R\Gamma_c(\mathscr X_0,\mathscr F_0)$ is a bounded complex, hence a convergent complex. To prove the trace formula for $\mathscr X_0\to\text {Spec }\mathbb F_q$ and the sheaf $\mathscr F_0,$ it suffices to prove it for $\mathscr X_0\to X_0$ and $X_0\to\text{Spec }\mathbb F_q.$ The first case, after passing to fibers, is reduced to $BG_0,$ and after passing to fibers again, it is reduced to the two subcases: when $G_0$ is finite, or when $G_0$ is connected. In both of these cases, as well as for an algebraic space $X_0\to\text{Spec }\mathbb F_q,$ the trace formula can be proved without using $\iota$-mixedness (\ref{5.5}). Therefore, (\ref{T4.3}) holds for $\mathscr X_0\to\text{Spec }\mathbb F_q$ and any sheaf, hence any bounded complex, on $\mathscr X_0.$

The trace formula is equivalent to the equality of power series
$$
L(\mathscr X_0,K_0,t)=\prod_{i\in\mathbb Z}P_i(\mathscr X_0,K_0,t)^{(-1)^{i+1}},
$$
and the right-hand side is a finite product, because $R\Gamma_c(\mathscr X_0,K_0)$ is bounded. Therefore, $L(\mathscr X_0,K_0,t)$ is rational.
\textbf{Case when $\mathscr X_0$ is Deligne-Mumford.} For both (i) and (ii) of (\ref{T4.3}), we may replace $\mathscr X_0$ by a non-empty open substack, hence by (\cite{LMB}, 6.1.1) we may assume $\mathscr X_0$ is the quotient stack $[X_0'/G],$ where $X_0'$ is an affine $\mathbb F_q$-scheme of finite type and $G$ is a finite group acting on $X_0'.$ This stack has finite diagonal, and hence finite inertia, so by the previous case we are done. Also, we know that $R\Gamma_c(\mathscr X_0,K_0)$ is bounded, therefore $L(\mathscr X_0,K_0,t)$ is rational. \end{proof} If one wants to use Poincar\'e duality to get a functional equation for the zeta function, (\cite{Ols1}, 5.17) and (\cite{LO2}, 9.1.2) suggest that we should assume $\mathscr X_0$ to be proper smooth and of finite diagonal. Under these assumptions, one gets the expected functional equation for the zeta function, as well as the independence of $\ell$ for the coarse moduli space, which is proper but possibly singular. Examples of such stacks include the moduli stack of pointed stable curves $\overline{\mathscr M}_{g,n}$ over $\mathbb F_q.$ \begin{subproposition}\label{T7.3} Let $\mathscr X_0$ be a proper smooth $\mathbb F_q$-algebraic stack of equidimension $d,$ with finite diagonal, and let $\pi:\mathscr X_0\to X_0$ be its coarse moduli space. Then $Z(X_0,t)$ satisfies the usual functional equation $$ Z(X_0,\frac{1}{q^dt})=\pm q^{d\chi/2}t^{\chi}Z(X_0,t), $$ where $\chi:=\sum_{i=0}^{2d}(-1)^i\deg P_{i,\ell}(X_0,t).$ Moreover, $H^i(X)$ is pure of weight $i,$ for every $0\le i\le2d,$ and the reciprocal roots of each $P_{i,\ell}(X_0,t)$ are algebraic integers independent of $\ell.$ \end{subproposition} \begin{proof} First we show that the adjunction map $\overline{\mathbb Q}_{\ell}\to\pi_*\pi^*\overline{\mathbb Q}_{\ell}=\pi_*\overline{\mathbb Q}_{\ell}$ is an isomorphism. Since $\pi$ is quasi-finite and proper (\cite{Crd}, 1.1), we have $\pi_*=\pi_!$ (\cite{Ols1}, 5.1) and $R^r\pi_!\overline{\mathbb Q}_{\ell}=0$ for $r\ne0$ (\cite{Ols1}, 5.8). The natural map $\overline{\mathbb Q}_{\ell}\to R^0\pi_*\overline{\mathbb Q}_{\ell}$ is an isomorphism, since the geometric fibers of $\pi$ are connected. Therefore $R\Gamma(\mathscr X_0,\overline{\mathbb Q}_{\ell})=R\Gamma(X_0,\pi_*\overline{\mathbb Q}_{\ell}) =R\Gamma(X_0,\overline{\mathbb Q}_{\ell}),$ and hence (\cite{Ols1}, 5.17) $H^i(\mathscr X)\simeq H^i_c(\mathscr X)\simeq H^i(X)\simeq H^i_c(X)$ for all $i.$ Let $P_i(t)= P_i(\mathscr X_0,t)=P_i(X_0,t).$ Since $X_0$ is an algebraic space of dimension $d,\ P_i(t)=1$ if $i\notin[0,2d].$ Since $\mathscr X_0$ is proper and smooth, Poincar\'e duality gives a perfect pairing $$ \xymatrix@C=.5cm{ H^i(\mathscr X)\times H^{2d-i}(\mathscr X) \ar[r] & \overline{\mathbb Q}_{\ell}(-d).} $$ Following the standard proof for proper smooth varieties (as in (\cite{Mil1}, 27.12)) we get the expected functional equation for $Z(\mathscr X_0,t)=Z(X_0,t).$ $H^i(X)$ is mixed of weights $\le i$ (\cite{Del2}, 3.3.4), so by Poincar\'e duality, it is pure of weight $i.$ Following the proof in (\cite{Del1}, p.276), this purity implies that the polynomials $P_{i,\ell}(X_0,t)$ have integer coefficients independent of $\ell.$ \end{proof} \begin{subremark}\label{R7.4} Weizhe Zheng suggested (\ref{C7.2}) to me. 
He also suggested that we give a functional equation relating $L(\mathscr X_0,DK_0,t)$ and $L(\mathscr X_0,K_0,t),$ for $K_0\in W^b(\mathscr X_0,\overline{\mathbb Q}_{\ell}),$ where $\mathscr X_0$ is a proper $\mathbb F_q$-algebraic stack with finite diagonal, of equidimension $d,$ but not necessarily smooth. Here is the functional equation:
$$
L(\mathscr X_0,K_0,t^{-1})=t^{\chi_c}\cdot Q\cdot L(\mathscr X_0,DK_0,t),
$$
where $\chi_c=\sum_{i=0}^{2d}(-1)^ih^i_c(\mathscr X,K)$ and $Q=(t^{\chi_c}L(\mathscr X_0,K_0,t))|_{t=\infty}.$ Note that the rational function $L(\mathscr X_0,K_0,t)$ has degree $-\chi_c,$ hence $Q$ is well-defined. The proof is similar to the above.
\end{subremark}
\end{example}

\begin{example}\label{example4}
$BGL_N.$ We have $\#GL_N(\mathbb F_{q^v})=(q^{vN}-1)(q^{vN}-q^v)\cdots(q^{vN}-q^{v(N-1)}),$ so one can use $c_v(BGL_N)=1/c_v(GL_N)$ to compute $Z(BGL_N,t).$ One can also compute the cohomology groups of $BGL_N$ using Borel's theorem (\ref{Borel}). We refer to (\cite{Beh1}, 2.3.2) for the result. Let us consider the case $N=2$ only; the general case is similar. We have
$$
c_v(BGL_2)=\frac{1}{q^{4v}}\bigg(1+\frac{1}{q^v}+\frac{2} {q^{2v}}+\frac{2}{q^{3v}}+\frac{3}{q^{4v}}+\frac{3}{q^{5v}} +\cdots\bigg);
$$
indeed, $1/\#GL_2(\mathbb F_{q^v})=q^{-4v}\big((1-q^{-v})(1-q^{-2v})\big)^{-1},$ and the coefficient of $q^{-nv}$ in the expansion of $\big((1-q^{-v})(1-q^{-2v})\big)^{-1}$ is the number of ways of writing $n=a+2b$ with $a,b\ge0,$ namely $\left\lfloor n/2\right\rfloor+1.$ Therefore
\begin{equation*}
\begin{split}
Z(BGL_2,t) &=\exp\Big(\sum_v\frac{(t/q^4)^v}{v}\Big)\cdot \exp\Big(\sum_v\frac{(t/q^5)^v}{v}\Big)\cdot\exp\Big(\sum_v \frac{2(t/q^6)^v}{v}\Big)\cdots \\
&=\frac{1}{1-t/q^4}\cdot\frac{1}{1-t/q^5}\cdot\Big(\frac{1} {1-t/q^6}\Big)^2\cdot\Big(\frac{1}{1-t/q^7}\Big)^2\cdot \Big(\frac{1}{1-t/q^8}\Big)^3\cdots.
\end{split}
\end{equation*}
So $Z(BGL_2,qt)=Z(BGL_2,t)\cdot Z_1(t),$ where
$$
Z_1(t)=\frac{1}{(1-t/q^3)(1-t/q^5)(1-t/q^7)(1-t/q^9)\cdots}.
$$
$Z_1(t)$ satisfies the functional equation
$$
Z_1(q^2t)=\frac{1}{1-t/q}\cdot Z_1(t),
$$
so $Z_1(t),$ and hence $Z(BGL_2,t),$ has a meromorphic continuation with the obvious poles. The non-zero compactly supported cohomology groups of $BGL_2$ are given as follows:
$$
H^{-8-2n}_c(BGL_2)=\overline{\mathbb Q}_{\ell}(n+4)^{\oplus \big(\left\lfloor\frac{n}{2}\right\rfloor+1\big)},\ n\ge0.
$$
This gives
$$
\prod_{n\in\mathbb Z}P_n(BGL_2,t)^{(-1)^{n+1}} =\frac{1}{(1-t/q^4)(1-t/q^5)(1-t/q^6)^2(1-t/q^7)^2\cdots},
$$
and (\ref{R6.6}) is verified. Note that the eigenvalues are $1/q^{n+4},$ which are independent of $\ell.$
\end{example}

\section{Analytic continuation}

We state and prove a generalized version of (\ref{T1.3}).

\begin{theorem}\label{T8.1}
Let $\mathscr X_0$ be an $\mathbb F_q$-algebraic stack, and let $K_0\in W_m^{-,\emph{stra}}(\mathscr X_0,\overline{\mathbb Q}_{\ell})$ be a convergent complex. Then $L(\mathscr X_0,K_0,t)$ has a meromorphic continuation to the whole complex $t$-plane, and its poles can only be zeros of the polynomials $P_{2n}(\mathscr X_0,K_0,t)$ for some integers $n.$
\end{theorem}

We need a preliminary lemma. For an open subset $U\subset \mathbb C,$ let $\mathscr O(U)$ be the set of analytic functions on $U.$ There exists a sequence $\{K_n\}_{n\ge1}$ of compact subsets of $U$ such that $U=\bigcup_nK_n$ and $K_n\subset(K_{n+1})^{\circ}.$ For $f$ and $g$ in $\mathscr O(U),$ define
$$
\rho_n(f,g)=\sup\{|f(z)-g(z)|;z\in K_n\}\quad\text{and} \quad\rho(f,g)=\sum_{n=1}^{\infty}\Big(\frac{1}{2}\Big)^n \frac{\rho_n(f,g)}{1+\rho_n(f,g)}.
$$
Then $\rho$ is a metric on $\mathscr O(U),$ and the induced topology is independent of the chosen subsets $\{K_n\}_n$ (cf. (\cite{Con}, VII, $\S1$)). The following lemma is (\cite{Con}, p.167, 5.9).
\begin{lemma}\label{L8.2} Let $U\subset\mathbb C$ be connected and open and let $(f_n)_n$ be a sequence in $\mathscr O(U)$ such that no $f_n$ is identically zero. If $\sum_n(f_n(z)-1)$ converges absolutely and uniformly on compact subsets of $U,$ then $\prod_{n\ge1}f_n(z)$ converges in $\mathscr O(U)$ to an analytic function $f(z).$ If $z_0$ is a zero of $f,$ then $z_0$ is a zero of only a finite number of the functions $f_n,$ and the multiplicity of the zero of $f$ at $z_0$ is the sum of the multiplicities of the zeros of the functions $f_n$ at $z_0.$ \end{lemma} Now we prove (\ref{T8.1}). \begin{proof} Factorize $P_n(\mathscr X_0,K_0,t)$ as $\prod_{j=1}^{m_n}(1-\alpha_{nj}t)$ in $\mathbb C.$ Since $R\Gamma_c(\mathscr X_0,K_0)$ is a convergent complex (\ref{T4.3}i), the series $\sum_{n,j}|\alpha_{nj}|$ converges. By (\ref{P6.5}) we have $$ L(\mathscr X_0,K_0,t)=\prod_{n\in\mathbb Z}\big(\prod_{j=1} ^{m_n}(1-\alpha_{nj}t)\big)^{(-1)^{n+1}} $$ as formal power series. To apply (\ref{L8.2}), take $U$ to be the region $\mathbb C-\{\alpha_{nj}^{-1};n\text{ even}\}.$ Take the lexicographical order on the set of all factors $$ 1-\alpha_{nj}t,\text{ for }n\text{ odd};\ \frac{1}{1-\alpha_{nj}t},\text{ for }n\text{ even}. $$ Each factor is an analytic function on $U.$ The sum $``\sum_n(f_n(z)-1)"$ here is equal to $$ \sum_{n\text{ odd},j}(-\alpha_{nj}t)+\sum_{n\text{ even},j} \frac{\alpha_{nj}t}{1-\alpha_{nj}t}. $$ Let $$ g_n(t)=\begin{cases}\sum_{j=1}^{m_n}|\alpha_{nj}t|,\ n\text{ odd,} \\ \sum_{j=1}^{m_n}\frac{|\alpha_{nj}t|} {|1-\alpha_{nj}t|},\ n\text{ even.}\end{cases} $$ We need to show that $\sum_ng_n(t)$ is pointwise convergent, uniformly on compact subsets of $U.$ Precisely, we want to show that for any compact subset $B\subset U,$ and for any $\varepsilon>0,$ there exists a constant $N_B\in\mathbb Z$ such that $$ \sum_{n\le N}g_n(t)<\varepsilon $$ for all $N\le N_B$ and $t\in B.$ Since $g_n(t)$ are non-negative, it suffices to do this for $N=N_B.$ There exists a constant $M_B$ such that $|t|<M_B$ for all $t\in B.$ Since $\sum_{n,j}|\alpha_{nj}|$ converges, $|\alpha_{nj}|\to0$ as $n\to-\infty,$ and there exists a constant $L_B\in\mathbb Z$ such that $|\alpha_{nj}|<1/(2M_B)$ for all $n<L_B.$ So $$ g_n(t)\le2\sum_{j=1}^{m_n}|\alpha_{nj}t| $$ for all $n<L_B$ and $t\in B.$ There exists a constant $N_B<L_B$ such that $$ \sum_{n\le N_B}\quad\sum_j|\alpha_{nj}|<\varepsilon/(2M_B) $$ and hence $$ \sum_{n\le N_B}g_n(t)\le2\sum_{n\le N_B}\ \sum_j|\alpha_{nj}t|\le 2M_B\sum_{n\le N_B}\ \sum_j|\alpha_{nj}|<\varepsilon. $$ By (\ref{L8.2}), $L(\mathscr X_0,K_0,t)$ extends to an analytic function on $U.$ By the second part of (\ref{L8.2}), the $\alpha_{nj}^{-1}$'s, for $n$ even, are at worst poles rather than essential singularities, therefore the $L$-series is meromorphic on $\mathbb C.$ \end{proof} Now $L(\mathscr X_0,K_0,t)$ can be called an ``$L$-function". \section{Weight theorem for algebraic stacks} \begin{blank}\label{dim} We prove (\ref{T1.4}) in this section. For the reader's convenience, we briefly review the definition of the \textit{dimension} of a locally noetherian $S$-algebraic stack $\mathcal X$ from (\cite{LMB}, chapter 11). If $X$ is a locally noetherian $S$-algebraic space and $x$ is a point of $X,$ the dimension $\dim_x(X)$ of $X$ at $x$ is defined to be $\dim_{x'}(X'),$ for any pair $(X',x')$ where $X'$ is an $S$-scheme \'etale over $X$ and $x'\in X'$ maps to $x.$ This is independent of the choice of the pair. 
If $f:X\to Y$ is a morphism of $S$-algebraic spaces, locally of finite type, and $x$ is a point of $X$ with image $y$ in $Y,$ then the relative dimension $\dim_x(f)$ of $f$ at $x$ is defined to be $\dim_x(X_y).$ Let $P:X\to\mathcal X$ be a presentation of an $S$-algebraic stack $\mathcal X,$ and let $x$ be a point of $X.$ Then the relative dimension $\dim_x(P)$ of $P$ at $x$ is defined to be the relative dimension at $(x,x)$ of the smooth morphism of $S$-algebraic spaces $\text{pr}_1:X\times_{\mathcal X}X\to X.$ If $\mathcal X$ is a locally noetherian $S$-algebraic stack and if $\xi$ is a point of $\mathcal X,$ the dimension of $\mathcal X$ at $\xi$ is defined to be $\dim_{\xi}(\mathcal X)=\dim_x(X)-\dim_x(P),$ where $P:X\to\mathcal X$ is an arbitrary presentation of $\mathcal X$ and $x$ is an arbitrary point of $X$ lying over $\xi.$ This definition is independent of all the choices made. Finally, one defines $\dim\mathcal X=\sup_{\xi}\dim_{\xi}\mathcal X.$ For quotient stacks we have $\dim[X/G]=\dim X-\dim G.$
\end{blank}

Now we prove (\ref{T1.4}).

\begin{proof}
If $j:\mathscr U_0\to\mathscr X_0$ is an open substack with complement $i:\mathscr Z_0\to\mathscr X_0,$ then we have an exact sequence
$$
\xymatrix@C=.5cm{ \cdots \ar[r] & H^n_c(\mathscr U,j^*\mathscr F) \ar[r] & H^n_c(\mathscr X,\mathscr F) \ar[r] & H^n_c(\mathscr Z,i^*\mathscr F) \ar[r] & \cdots}.
$$
If both $H^n_c(\mathscr U,j^*\mathscr F)$ and $H^n_c(\mathscr Z,i^*\mathscr F)$ are zero (resp. have all $\iota$-weights $\le m$ for some number $m$), then the same holds for $H^n_c(\mathscr X,\mathscr F).$ Since the dimensions of $\mathscr U_0$ and $\mathscr Z_0$ are no more than that of $\mathscr X_0,$ and the set of punctual $\iota$-weights of $i^*\mathscr F_0$ and of $j^*\mathscr F_0$ is the same as that of $\mathscr F_0,$ we may shrink $\mathscr X_0$ to a non-empty open substack. We can also make any finite base change on $\mathbb F_q.$ To simplify notation, we may use a twist (\ref{Weil-cplx}) and the projection formula to assume $w=0.$

As before, we reduce to the case when $\mathscr X_0$ is geometrically connected, and the inertia $f:\mathscr I_0\to\mathscr X_0$ is flat, with rigidification $\pi:\mathscr X_0\to X_0,$ where $X_0$ is a scheme. The squares in the following diagram are 2-Cartesian:
$$
\xymatrix@C=.7cm{ && \mathscr I_0 \ar[d]_f & \text{Aut}_y \ar[l] \ar[d] \\ B\text{Aut}_{\overline{x}} \ar[r] \ar[d]_{\pi_{\overline{x}}} & B\text{Aut}_x \ar[r] \ar[d]_{\pi_x} & \mathscr X_0 \ar[d]^{\pi} & \text{Spec } \mathbb F_{q^v} \ar[l]_-y \\ \text{Spec }\mathbb F \ar[r]_-{\overline{x}} & \text{Spec }\mathbb F_{q^v} \ar[r]_-x & X_0 &.}
$$
We have $(R^k\pi_!\mathscr F_0)_{\overline{x}}=H^k_c(B \text{Aut}_{\overline{x}},\mathscr F).$ Since $f$ is representable and flat, and $\mathscr X_0$ is connected, all automorphism groups $\text{Aut}_x$ have the same dimension, say $d.$

Assume (\ref{T1.4}) holds for all $BG_0,$ where $G_0$ is an $\mathbb F_q$-algebraic group. Then $R^k\pi_!\mathscr F_0=0$ for $k>-2d,$ and for $k\le-2d,$ the punctual $\iota$-weights of $R^k\pi_!\mathscr F_0$ are $\le\frac{k}{2}-d,$ hence by (\cite{Del2}, 3.3.4), the punctual $\iota$-weights of $H^r_c(X,R^k\pi_!\mathscr F)$ are $\le\frac{k}{2}-d+r.$ Consider the Leray spectral sequence
$$
E_2^{rk}=H^r_c(X,R^k\pi_!\mathscr F) \Longrightarrow H^{r+k}_c(\mathscr X,\mathscr F).
$$ If we maximize $\frac{k}{2}-d+r$ under the constraints $$ r+k=n,\ 0\le r\le2\dim X_0,\text{ and }k\le-2d, $$ we find that $H^n_c(\mathscr X,\mathscr F)=0$ for $n> 2\dim X_0-2d=2\dim\mathscr X_0,$ and for $n\le2\dim\mathscr X_0,$ the punctual $\iota$-weights of $H^n_c(\mathscr X,\mathscr F)$ are $\le\dim X_0+\frac{n}{2}-d=\dim\mathscr X_0+\frac{n}{2}.$ So we reduce to the case $\mathscr X_0=BG_0.$ The Leray spectral sequence for $h:BG_0\to B\pi_0(G_0)$ degenerates (by (\ref{L4.8})) to isomorphisms $$ H^0_c(B\pi_0(G),R^nh_!\mathscr F)\simeq H^n_c(BG,\mathscr F). $$ The fibers of $h$ are isomorphic to $BG_0^0,$ so by base change and (\ref{L4.8}) we reduce to the case when $G_0$ is connected. Let $d=\dim G_0$ and $f:BG_0\to\text{Spec }\mathbb F_q$ be the structural map. In this case, $\mathscr F_0\cong f^*V$ for some $\overline{\mathbb Q}_{\ell}$-representation $V$ of $W(\mathbb F_q),$ and hence $\mathscr F_0$ and $V$ have the same punctual $\iota$-weights. Using the natural isomorphism $H^n_c(BG)\otimes V\simeq H^n_c(BG,\mathscr F),$ we reduce to the case when $\mathscr F_0=\overline{\mathbb Q}_{\ell}.$ In (\ref{4.6.1}) we see that, if $\alpha_{i1},\cdots, \alpha_{in_i}$ are the eigenvalues of $F$ on $N^i,\ i\ge1$ odd, then the eigenvalues of $F$ on $H^{-2k-2d}_c(BG)$ are $$ q^{-d}\prod_{i,j}\alpha_{ij}^{-m_{ij}},\ \text{where } \sum_{i,j}m_{ij}(i+1)=2k. $$ Since $i\ge1,$ we have $\sum im_{ij}\ge k;$ together with $|\alpha_{ij}|\ge q^{i/2}$ one deduces $$ |q^{-d}\prod_{i,j}\alpha_{ij}^{-m_{ij}}|\le q^{\frac{-k-2d}{2}}, $$ so the punctual $\iota$-weights of $H^{-2k-2d}_c(BG)$ are $\le-k-2d$ for $k\ge0,$ and the other compactly supported cohomology groups are zero. It is clear from the proof and (\cite{Del2}, 3.3.10) that the weights of $H^n_c(\mathscr X,\mathscr F)$ differ from the weights of $\mathscr F_0$ by integers. Recall that $H^{2k}(BG)$ is pure of weight $2k,$ for a linear algebraic group $G_0$ over $\mathbb F_q$ (\cite{Del3}, 9.1.4). Therefore, $H^{-2k-2d}_c(BG)$ is pure of weight $-2k-2d,$ and following the same proof as above, we are done. \end{proof} \begin{remark}\label{R9.1} When $\mathscr X_0=X_0$ is a scheme, and $n\le2\dim X_0,$ we have $\dim X_0+\frac{n}{2}+w\ge n+w,$ so our bound for weights is worse than the bound in (\cite{Del2}, 3.3.4). For an $\mathbb F_q$-abelian variety $A,$ our bound for the weights of $H^n_c(BA)$ is sharp: the weights are exactly $\dim(BA)+\frac{n}{2},$ whenever the cohomology group is non-zero. \end{remark} We hope (\ref{T1.4}) has some useful and interesting applications, for instance for generalizing the decomposition theorem of Beilinson-Bernstein-Deligne-Gabber (cf. \cite{Decom}) to stacks with affine stabilizers, and for studying the Hasse-Weil zeta functions of Artin stacks over number fields. For instance, it implies that the Hasse-Weil zeta function is analytic in some right half complex $s$-plane. Using (\ref{T1.4}) we can show certain stacks have $\mathbb F_q$-points. 
\begin{example}\label{example9.2} Let $\mathscr X_0$ be a form of $B\mathbb G_m,$ i.e., $\mathscr X\cong B\mathbb G_{m,\mathbb F}$ over $\mathbb F.$ Then all the automorphism group schemes in $\mathscr X_0$ are affine, and $h^{-2-2n}_c(\mathscr X)=h^{-2-2n}_c(B\mathbb G_m)=1,$ for all $n\ge0.$ Let $\alpha_{-2-2n}$ be the eigenvalue of $F$ on $H^{-2-2n}_c(\mathscr X).$ Then by (\ref{T1.4}) we have $|\alpha_{-2-2n}|\le q^{-1-n}.$ Smoothness is fppf local on the base, so $\mathscr X_0$ is smooth and connected, hence $H^{-2}_c(\mathscr X)=\overline{\mathbb Q}_{\ell}(1)$ and $\alpha_{-2}=q^{-1}.$ So \begin{equation*} \begin{split} \#\mathscr X_0(\mathbb F_q) &=\sum_{n\ge0}\text{Tr}(F, H^{-2-2n}_c(\mathscr X))=q^{-1}+\alpha_{-4}+\alpha_{-6}+ \cdots \\ &\ge q^{-1}-q^{-2}-q^{-3}+\cdots=q^{-1}-\frac{q^{-1}}{q-1}>0 \end{split} \end{equation*} when $q\ne2.$ In fact, since there exists an integer $r\ge1$ such that $\mathscr X_0\otimes\mathbb F_{q^r}\cong B\mathbb G_{m,\mathbb F_{q^r}},$ we see that all cohomology groups $H^{-2-2n}_c(\mathscr X)$ are pure, i.e. $|\alpha_{-2-2n}|=q^{-1-n}.$ In fact, one can classify the forms of $B\mathbb G_{m,\mathbb F_q}$ as follows. If $\mathscr X_0$ is a form, then it is also a gerbe over $\text{Spec }\mathbb F_q,$ hence a neutral gerbe $BG_0$ for some algebraic group $G_0$ by (\cite{Beh2}, 6.4.2). By comparing the automorphism groups, we see that $G_0$ is a form of $\mathbb G_{m,\mathbb F_q}.$ There is only one nontrivial form of $\mathbb G_{m,\mathbb F_q},$ because $$ H^1(\mathbb F_q,\text{Aut}(\mathbb G_m))=H^1(\mathbb F_q,\mathbb Z/2\mathbb Z)=\mathbb Z/2\mathbb Z, $$ and this form is the kernel $R^1_{\mathbb F_{q^2}/\mathbb F_q}\mathbb G_{m,\mathbb F_{q^2}}$ of the norm map $$ \xymatrix@C=1cm{ R_{\mathbb F_{q^2}/\mathbb F_q}\mathbb G_{m,\mathbb F_{q^2}} \ar[r]^-{\text{Nm}} & \mathbb G_{m,\mathbb F_q},} $$ where $R_{\mathbb F_{q^2}/\mathbb F_q}$ is the operation of Weil's restriction of scalars. Therefore, the only non-trivial form of $B\mathbb G_{m,\mathbb F_q}$ is $B(R^1_{\mathbb F_{q^2}/\mathbb F_q}\mathbb G_{m,\mathbb F_{q^2}}).$ In particular, they all have $\mathbb F_q$-points, even when $q=2.$ \end{example} \begin{example}\label{example9.3} Consider the projective line $\mathbb P^1$ with the following action of $\mathbb G_m:$ it acts by multiplication on the open part $\mathbb A^1\subset\mathbb P^1,$ and leaves the point $\infty$ fixed. So we get a quotient stack $[\mathbb P^1/\mathbb G_m]$ over $\mathbb F_q.$ Let $\mathscr X_0$ be a form of $[\mathbb P^1/\mathbb G_m].$ We want to find an $\mathbb F_q$-point on $\mathscr X_0,$ or even better, an $\mathbb F_q$-point on $\mathscr X_0$ which, when considered as a point in $\mathscr X(\mathbb F)\cong[\mathbb P^1/\mathbb G_m](\mathbb F),$ lies in the open dense orbit $[\mathbb G_m/\mathbb G_m](\mathbb F).$ \begin{anitem} Consider the following general situation. Let $G_0$ be a connected $\mathbb F_q$-algebraic group, and let $X_0$ be a proper smooth variety with a $G_0$-action over $\mathbb F_q.$ Let $$ \xymatrix@C=.5cm{[X_0/G_0] \ar[r]^-f & BG_0 \ar[r]^-g & \text{Spec }\mathbb F_q} $$ be the natural maps, and let $\mathscr X_0$ be a form of $[X_0/G_0].$ Then $f$ is representable and proper. For every $k,\ R^kf_*\overline{\mathbb Q}_{\ell}$ is a lisse sheaf, and takes the form $g^*V_k$ for some sheaf $V_k$ on $\text{Spec }\mathbb F_q.$ Consider the Leray spectral sequence $$ E^{rk}_2=R^rg_!R^kf_*\overline{\mathbb Q}_{\ell}\Longrightarrow R^{r+k}(gf)_!\overline{\mathbb Q}_{\ell}. 
$$
Since $R^rg_!R^kf_*\overline{\mathbb Q}_{\ell}=R^rg_!(g^* V_k)=(R^rg_!\overline{\mathbb Q}_{\ell})\otimes V_k,$ we have
$$
h^n_c(\mathscr X)=h^n_c([X/G])\le\sum_{r+k=n}h^r_c(BG)\cdot \dim V_k=\sum_{r+k=n}h^r_c(BG)\cdot h^k(X).
$$
\end{anitem}

Now we return to $[\mathbb P^1/\mathbb G_m].$ Since $h^0(\mathbb P^1)=h^2(\mathbb P^1)=1$ and $h_c^{-2i}(B\mathbb G_m)=1$ for $i\ge1,$ we see that $h_c^n(\mathscr X)=0$ for $n$ odd and
$$
h^{2n}_c(\mathscr X)\le h^0(\mathbb P^1)h^{2n}_c(B\mathbb G_m)+h^2(\mathbb P^1)h^{2n-2}_c(B\mathbb G_m)=\begin{cases} 0,\ n\ge1, \\ 1,\ n=0, \\ 2,\ n<0.\end{cases}
$$
Since $\mathscr X_0$ is connected and smooth of dimension 0, we have $H^0_c(\mathscr X)=\overline{\mathbb Q}_{\ell}.$ By (\ref{T1.4}), the $\iota$-weights of $H^{2n}_c(\mathscr X)$ are $\le2n.$ The trace formula gives
\begin{equation*}
\begin{split}
\#\mathscr X_0(\mathbb F_q) &=\sum_{n\le0}\text{Tr}(F,H^{2n}_c(\mathscr X))=1+\sum_{n<0}\text{Tr}(F,H^{2n}_c(\mathscr X)) \\
&\ge1-2\sum_{n<0}q^n=1-\frac{2}{q-1}>0
\end{split}
\end{equation*}
when $q\ge4.$

In order for the rational point to be in the open dense orbit, we need an upper bound for the number of $\mathbb F_q$-points on the closed orbits. Over $\mathbb F,$ there are 2 closed orbits, both having stabilizer $\mathbb G_{m,\mathbb F}.$ So in $[\mathscr X_0(\mathbb F_q)]$ there are at most 2 points whose automorphism groups are forms of the algebraic group $\mathbb G_{m,\mathbb F_q}.$ From the cohomology sequence
$$
\xymatrix@C=.7cm{ 1 \ar[r] & (R^1_{\mathbb F_{q^2}/\mathbb F_q}\mathbb G_{m,\mathbb F_{q^2}})(\mathbb F_q) \ar[r] & \mathbb F_{q^2}^* \ar[r]^-{\text{Nm}} & \mathbb F_q^*}
$$
we see that $\#(R^1_{\mathbb F_{q^2}/\mathbb F_q}\mathbb G_{m,\mathbb F_{q^2}})(\mathbb F_q)=q+1.$ Since $\frac{1}{q+1} \le\frac{1}{q-1},$ the closed orbits contribute at most $\frac{2}{q-1}$ to $\#\mathscr X_0(\mathbb F_q),$ and equality holds only when the two closed orbits are both defined over $\mathbb F_q$ with stabilizer $\mathbb G_m.$ In order for there to exist an $\mathbb F_q$-point in the open dense orbit, we need
$$
1-\frac{2}{q-1}>\frac{2}{q-1},
$$
and this is so when $q\ge7.$
\end{example}

\section{About independence of $\ell$}\label{sec-indep}

The coefficients of the expansion of the infinite product
$$
Z(\mathscr X_0,t)=\prod_{i\in\mathbb Z}P_{i,\ell}(\mathscr X_0,t)^{(-1)^{i+1}}
$$
are rational numbers and are independent of $\ell,$ because the $c_v(\mathscr X_0)$'s are rational numbers independent of $\ell.$ A famous conjecture is that this is also true for each $P_{i,\ell}(\mathscr X_0,t).$ First we show that the roots of $P_{i,\ell}(\mathscr X_0,t)$ are Weil $q$-numbers. Note that $P_{i,\ell}(\mathscr X_0,t)\in\mathbb Q_{\ell}[t].$

\begin{definition}\label{D10.1}
An algebraic number is called a \emph{Weil $q$-number} if all of its conjugates have the same weight relative to $q,$ and this weight is a rational integer. It is called a \emph{Weil $q$-integer} if in addition it is an algebraic integer. A number in $\overline{\mathbb Q}_{\ell}$ is called a \emph{Weil $q$-number} if it is a Weil $q$-number via $\iota.$
\end{definition}

For $\alpha\in\overline{\mathbb Q}_{\ell},$ being a Weil $q$-number or not is independent of $\iota;$ in fact the images in $\mathbb C$ under various $\iota$'s are conjugate. For an $\mathbb F_q$-variety $X_0,$ not necessarily smooth or proper, (\cite{Del2}, 3.3.4) implies all Frobenius eigenvalues of $H^i_c(X)$ are Weil $q$-integers.
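For instance (a standard computation, recorded here only as an illustration), for the non-proper variety $X_0=\mathbb G_m,$ the compactification $\mathbb G_m\subset\mathbb P^1$ gives $H^1_c(X)=\overline{\mathbb Q}_{\ell}$ and $H^2_c(X)=\overline{\mathbb Q}_{\ell}(-1),$ so the Frobenius eigenvalues are $1$ and $q:$ Weil $q$-integers of weights $0$ and $2,$ the first of which is strictly smaller than the cohomological degree.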
The following lemma generalizes this.

\begin{lemma}\label{L10.2}
For every $\mathbb F_q$-algebraic stack $\mathscr X_0,$ and a prime number $\ell\ne p,$ the roots of each $P_{i,\ell}(\mathscr X_0,t)$ are Weil $q$-numbers. In particular, the coefficients of $P_{i,\ell}(\mathscr X_0,t)$ are algebraic numbers in $\mathbb Q_{\ell}$ (i.e. algebraic over $\mathbb Q$).
\end{lemma}

\begin{proof}
For an open immersion $j:\mathscr U_0\to\mathscr X_0$ with complement $i:\mathscr Z_0\to\mathscr X_0,$ we have an exact sequence
$$
\xymatrix@C=.6cm{ \cdots \ar[r] & H^i_c(\mathscr U) \ar[r] & H^i_c(\mathscr X) \ar[r] & H^i_c(\mathscr Z) \ar[r] & \cdots,}
$$
thus we may shrink to a non-empty open substack. In particular, (\ref{L10.2}) holds for algebraic spaces, by (\cite{Knu}, II 6.7) and (\cite{Del2}, 3.3.4). We may assume $\mathscr X_0$ is smooth and connected. By Poincar\'e duality, it suffices to show that the Frobenius eigenvalues of $H^i(\mathscr X)$ are Weil $q$-numbers, for all $i.$ Take a presentation $X_0\to\mathscr X_0$ and consider the associated strictly simplicial smooth covering $X^{\bullet}_0\to\mathscr X_0$ by algebraic spaces. Then there is a spectral sequence (\cite{LO2}, 10.0.9)
$$
E_1^{rk}=H^k(X^r)\Longrightarrow H^{r+k}(\mathscr X),
$$
and the assertion for $\mathscr X_0$ follows from the assertion for algebraic spaces.
\end{proof}

\begin{problem}\label{Q10.3}
Is each
$$
P_{i,\ell}(\mathscr X_0,t)=\det(1-Ft,H^i_c(\mathscr X,\mathbb Q_{\ell}))
$$
a polynomial with coefficients in $\mathbb Q,$ and are the coefficients independent of $\ell?$
\end{problem}

\begin{subremark}\label{R10.3.1}
(i) Note that, unlike the case for varieties, we cannot expect the coefficients to be integers (for instance, for $B\mathbb G_m,$ the coefficients are $1/q^i$).

(ii) (\ref{Q10.3}) is known to be true for smooth proper varieties (\cite{Del2}, 3.3.9), and for (coarse moduli spaces of) proper smooth algebraic stacks of finite diagonal (\ref{T7.3}). It remains open for general varieties. Even the Betti numbers are not known to be independent of $\ell$ for a general variety. See \cite{Ill}.
\end{subremark}

Let us give a positive answer to (\ref{Q10.3}) in some special cases of algebraic stacks. In $\S7$ we saw that it holds for $BE$ and $BGL_N.$ We can generalize these two cases as follows.

\begin{lemma}\label{L10.4}
(\ref{Q10.3}) has a positive answer for

(i) $BA,$ where $A$ is an $\mathbb F_q$-abelian variety;

(ii) $BG_0,$ where $G_0$ is a linear algebraic group over $\mathbb F_q.$
\end{lemma}

\begin{proof}
(i) Let $g=\dim A.$ Then $N=H^1(A)$ is a $2g$-dimensional vector space, with eigenvalues $\alpha_1,\cdots,\alpha _{2g}$ for the Frobenius action $F,$ and $N$ is pure of weight 1. Let $a_1,\cdots,a_{2g}$ be a basis for $N$ so that $F$ is upper-triangular
$$
\begin{bmatrix}\alpha_1 & * & * \\ & \ddots & * \\ && \alpha_{2g}\end{bmatrix}.
$$
Then $H^*(BA)=\text{Sym}^*N[-1]=\overline{\mathbb Q}_{\ell}[a_1,\cdots,a_{2g}],$ where each $a_i$ sits in degree 2. In degree $2n,\ H^{2n}(BA)=\overline{ \mathbb Q}_{\ell}\langle a_{i_1}\cdots a_{i_n}|1\le i_1\le \cdots\le i_n\le2g\rangle,$ and the eigenvalues are $\alpha_{i_1}\cdots\alpha_{i_n}.$ By Poincar\'e duality
$$
H_c^{-2n-2g}(BA)=H^{2n}(BA)^{\vee}\otimes\overline{\mathbb Q}_{\ell}(g)
$$
we see that the eigenvalues of $F$ on $H^{-2g-2n}_c(BA)$ are
$$
q^{-g}\cdot\alpha_{i_1}^{-1}\cdots\alpha_{i_n}^{-1}.
$$ Each factor $$ P_{-2g-2n}(q^gt)=\prod_{1\le i_1,\cdots,i_n\le2g}\big(1-(\alpha_{i_1}\cdots\alpha_{i_n})^{-1}t\big) $$ stays unchanged if we permute the $\alpha_i$'s arbitrarily, so the coefficients are symmetric polynomials in the $\alpha_i^{-1}$'s with integer coefficients, hence are polynomials in the elementary symmetric functions, which are the coefficients of $\prod_{i=1}^{2g}(t-\alpha_i^{-1}).$ The polynomial $$ \prod_{i=1}^{2g}(1-\alpha_it)=\det\big(1-Ft,H^1(A,\mathbb Q_{\ell})\big) $$ also has roots $\alpha_i^{-1},$ and this is a polynomial with integer coefficients, independent of $\ell,$ since $A$ is smooth and proper. Let $m=\pm q^g$ be its leading coefficient. Then $$ \prod_{i=1}^{2g}(t-\alpha_i^{-1})=\frac{1}{m}\prod_{i=1}^{2g}(1-\alpha_it). $$ This verifies (\ref{Q10.3}) for $BA.$ (ii) Let $d=\dim G_0.$ For every $k\ge0,\ H^{2k}(BG)$ is pure of weight $2k$ (\cite{Del3}, 9.1.4), hence by Poincar\'e duality, $H^{-2d-2k}_c(BG)$ is pure of weight $-2d-2k.$ The entire function $$ \frac{1}{Z(BG_0,t)}=\prod_{k\ge0}P_{-2d-2k}(BG_0,t)\in \mathbb Q[[t]] $$ is independent of $\ell,$ and invariant under the action of $\text{Gal}(\mathbb Q)$ on the coefficients of the Taylor expansion. Therefore the roots of $P_{-2d-2k}(BG_0,t)$ can be described as $$ \text{``zeros of }\frac{1}{Z(BG_0,t)}\text{ that have weight }2d+2k\text{ relative to }q", $$ which is a description independent of $\ell,$ and these roots (which are algebraic numbers) are permuted under $\text{Gal}(\mathbb Q).$ Hence $P_{-2d-2k}(BG_0,t)$ has rational coefficients. \end{proof} The following proposition generalizes both (\ref{T7.3}) and (\ref{L10.4}ii). \begin{proposition}\label{P10.6} Let $X_0$ be the coarse moduli space of a proper smooth $\mathbb F_q$-algebraic stack of finite diagonal, let $G_0$ be a linear $\mathbb F_q$-algebraic group that acts on $X_0,$ and let $\mathscr X_0$ be a form of the quotient stack $[X_0/G_0].$ Then (\ref{Q10.3}) is verified for $\mathscr X_0.$ \end{proposition} \begin{proof} It suffices to show that $H^n_c(\mathscr X)$ is pure of weight $n,$ for every $n.$ To show this, we can make a finite extension of the base field $\mathbb F_q,$ so we may assume $\mathscr X_0=[X_0/G_0].$ Let $$ \xymatrix@C=.5cm{ \mathscr X_0 \ar[r]^-f & BG_0 \ar[r]^-h & B\pi_0(G_0)} $$ be the natural maps. Let $d=\dim G_0.$ Consider the spectral sequence $$ H^{-2d-2r}_c(BG,R^kf_!\overline{\mathbb Q}_{\ell}) \Longrightarrow H^{-2d-2r+k}_c(\mathscr X). $$ The $E_2$-terms can be computed from the degenerate Leray spectral sequence for $h:$ $$ H^{-2d-2r}_c(BG,R^kf_!\overline{\mathbb Q}_{\ell})\simeq H^0_c(B\pi_0(G),R^{-2d-2r}h_!R^kf_!\overline{\mathbb Q}_{\ell}).
$$ The restriction of $R^{-2d-2r}h_!R^kf_!\overline{\mathbb Q}_{\ell}$ along the natural projection $\text{Spec }\mathbb F_q\to B\pi_0(G_0)$ is isomorphic to the Galois module $H^{-2d-2r}_c(BG^0,R^kf_!\overline{\mathbb Q}_{\ell}),$ and since $G^0_0$ is connected, $(R^kf_!\overline{\mathbb Q}_{\ell})|_{BG_0^0}$ is the inverse image of some sheaf $V_k$ via the structural map $BG^0_0\to\text{Spec }\mathbb F_q.$ By base change, we see that the sheaf $V_k,$ regarded as a $\text{Gal}(\mathbb F_q)$-module, is $H^k(X).$ By the projection formula we have $$ H^{-2d-2r}_c(BG^0,R^kf_!\overline{\mathbb Q}_{\ell})\simeq H^{-2d-2r}_c(BG^0)\otimes H^k(X) $$ as representations of $\text{Gal}(\mathbb F_q),$ and by (\ref{T7.3}), the right hand side is pure of weight $-2d-2r+k.$ By (\ref{L4.8}), $H^{-2d-2r}_c(BG,R^kf_!\overline{\mathbb Q}_{\ell})$ is also pure of weight $-2d-2r+k,$ therefore $H^n_c(\mathscr X)$ is pure of weight $n,$ for every $n.$ \end{proof} \begin{blank}\label{10.7} Finally, let us consider the following much weaker version of independence of $\ell.$ For $\mathscr X_0$ and $i\in\mathbb Z,$ let $\Psi(\mathscr X_0,i)$ be the following property: the Frobenius eigenvalues of $H^i_c(\mathscr X,\overline{\mathbb Q}_{\ell}),$ counted with multiplicity, for all $\ell\ne p,$ are contained in a finite set of algebraic numbers with multiplicities assigned, and this set, together with the assignment of multiplicities, depends only on $\mathscr X_0$ and $i.$ In particular it is independent of $\ell.$ In other words, there is a finite decomposition of the set of all prime numbers $\ell\ne p$ into a disjoint union of subsets, such that the Frobenius eigenvalues of $H^i_c(\mathscr X,\overline{\mathbb Q}_{\ell})$ depend only on the subset to which $\ell$ belongs. If this property holds, we also denote such a finite set of algebraic numbers (which is not unique) by $\Psi(\mathscr X_0,i),$ if there is no confusion. \begin{subproposition}\label{P10.7} The property $\Psi(\mathscr X_0,i)$ holds for every $\mathscr X_0$ and $i.$ \end{subproposition} \begin{proof} If $\mathscr U_0$ is an open substack of $\mathscr X_0$ with complement $\mathscr Z_0,$ and the properties $\Psi(\mathscr U_0,i)$ and $\Psi(\mathscr Z_0,i)$ hold, then $\Psi(\mathscr X_0,i)$ also holds, and the finite set $\Psi(\mathscr X_0,i)$ is a subset of $\Psi(\mathscr U_0,i)\cup\Psi(\mathscr Z_0,i).$ Firstly we prove this for schemes $X_0.$ By shrinking $X_0$ we can assume it is a connected smooth variety. By Poincar\'e duality it suffices to prove the similar statement $\Psi^*(X_0,i)$ for ordinary cohomology, i.e. with $H^i_c$ replaced by $H^i,$ for all $i.$ This follows from \cite{deJ} and (\cite{Del2}, 3.3.9). Therefore it also holds for all algebraic spaces. For a general algebraic stack $\mathscr X_0,$ by shrinking it we can assume it is connected and smooth. By Poincar\'e duality, it suffices to prove $\Psi^*(\mathscr X_0,i)$ for all $i.$ This can be done by taking a hypercover by simplicial algebraic spaces, and considering the associated spectral sequence. \end{proof} \end{blank}
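For readers who wish to verify the $[\mathbb P^1/\mathbb G_m]$ example of the previous section numerically, the groupoid count $\#[\mathbb P^1/\mathbb G_m](\mathbb F_q)=\sum_x 1/\#\mathrm{Aut}(x)$ can be computed by brute force for small prime $q.$ The following Python sketch is our illustration only (it assumes the standard action $\lambda\cdot[x:y]=[\lambda x:y]$); since $H^1(\mathbb F_q,\mathbb G_m)$ is trivial, the count equals $\sum_{\text{orbits}}1/\#\mathrm{Stab}=\frac{q+1}{q-1}=1+\frac{2}{q-1},$ which is indeed positive and consistent with the trace-formula lower bound $1-\frac{2}{q-1}$ obtained above.
\begin{verbatim}
from fractions import Fraction

def groupoid_count(q):
    """Sum of 1/|Stab| over the G_m(F_q)-orbits on P^1(F_q); q prime."""
    points = [(x, 1) for x in range(q)] + [(1, 0)]   # [x:1] and [1:0]
    def act(lam, p):                                 # lambda.[x:y] = [lambda x : y]
        x, y = p
        return ((lam * x) % q, 1) if y == 1 else (1, 0)
    seen, total = set(), Fraction(0)
    for p in points:
        if p in seen:
            continue
        orbit = {act(lam, p) for lam in range(1, q)}
        seen |= orbit
        total += Fraction(1, (q - 1) // len(orbit))  # |orbit| * |Stab| = q - 1
    return total

for q in (2, 3, 5, 7, 11):
    assert groupoid_count(q) == Fraction(q + 1, q - 1)
    print(q, groupoid_count(q), 1 - Fraction(2, q - 1))
\end{verbatim}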
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{1. Introduction} Due to its impact on human health [e.g., {\it K\"{u}nzli et al.}, 2000] and terrestrial vegetation [e.g., {\it Finnan et al.}, 1996], and its role as a greenhouse gas, ozone has long been intensively studied via surface and ozonesonde measurements, aircraft observations, satellite total ozone mapping, and numerical modelling. Many long-term and intensive surface ozone measurements in the remote marine environment have been conducted over the Atlantic [e.g., {\it Winkler}, 1988; {\it Penkett et al.}, 1998; {\it Junkermann and Stockwell}, 1999], the Pacific [e.g., {\it Piotrowicz et al.}, 1986; {\it Piotrowicz et al.}, 1991; {\it Jaffe et al.}, 1996; {\it Crawford et al.}, 1997; {\it Kajii et al.}, 1997; {\it Pochanart et al.}, 1999; {\it Monks et al.}, 2000], and the Indian Ocean [e.g., {\it Lal et al.}, 1998]. Since surface observations of ozone and other constituents are inherently limited, measurements from ozonesondes [e.g., {\it Moody et al.}, 1995; {\it Oltmans et al.}, 1996; {\it Logan}, 1999, and references therein; {\it Latt et al.}, 1999] and airplanes [e.g., {\it Kawa and Pearson}, 1989; {\it Murphy and Fahey}, 1994] were used to provide the necessary profile information for understanding the chemistry and dynamics controlling the variations of ozone in the troposphere. {\it Monks} [2000] gave a comprehensive review of the surface observations and the springtime ozone maximum, while {\it Logan} [1999] documented ozonesonde measurements and derived a climatology for tropospheric ozone based on these sonde data. Two prominent features consistently emerge from these measurements: first, the appearance of a spring maximum in the troposphere; and second, the existence of an interhemispheric asymmetry between the northern and southern hemispheres [e.g., {\it Winkler}, 1988; {\it Johnson et al.}, 1990]. These interhemispheric asymmetries also featured in three-dimensional (3D) modelling studies [e.g., {\it M\"{u}ller and Brasseur}, 1995]. Broadly, there are two theories regarding the origins of elevated ozone in the remote troposphere. The first holds that transport from the stratosphere to the troposphere is the dominant source of ozone in the troposphere [e.g., {\it Moody et al.}, 1995; {\it Oltmans et al.}, 1996; {\it Roelofs and Lelieveld}, 1997]. The second holds that tropospheric ozone originates mainly from emissions (e.g., biomass burning emissions of $O_3$ precursors, and NOx sources from soil and lightning in the continents), photochemistry (photochemical oxidation of CO and hydrocarbons catalysed by HOx and NOx [e.g., see {\it Monks}, 2000, and references therein]), and transport processes (e.g., large-scale long-range transport, cloud convection, and vertical mixing between the atmospheric boundary layer and free troposphere) within the troposphere [{\it Roelofs et al.}, 1997; {\it Yinger et al.}, 1999; {\it Logan}, 1999]. The general consensus is that the elevated ozone concentrations in the remote marine boundary layer (MBL) are non-indigenous, and that the transport of elevated ozone and/or its precursors from elsewhere is the major contributing factor for the observed high ozone levels in the remote MBL. In addition, very low ozone concentrations have also been observed in the tropical MBL [{\it Kley et al.}, 1996; {\it Singh et al.}, 1996], indicating that halogens may play an important role in oxidation processes and the ozone budget in parts of the remote MBL [{\it Ariya et al.}, 1998; {\it Dickerson et al.}, 1999; {\it Nagao et al.}, 1999].
Hence modelling the spring ozone maximum and interhemispheric asymmetry remains one of the most critical tests of our current understanding of tropospheric chemistry in the remote MBL [{\it Winkler}, 1988; {\it Logan}, 1999; {\it Monks}, 2000]. The recently available ozonesonde measurements provide a much higher global coverage of the ozone vertical profiles from the surface to the lower stratosphere; however, these ozonesondes have not yet been widely used to address issues such as the spring ozone maximum and interhemispheric asymmetry [e.g., {\it Logan}, 1999]. In this first part of a three-part series of papers, concerning the sources of the spring ozone maximum and its interhemispheric asymmetry in the remote MBL, we present results from a 3D chemistry transport model (CTM) and a detailed annual comparison with surface and ozonesonde measurements at several locations in remote marine environments. A modelling test of the {\it stratosphere-dominated theory} is discussed in the second part of the paper. Finally, the test of the {\it self-contained theory} is discussed in the final part of the paper. \section{2. The IMS Model} A detailed description of the formulation and evaluation of the model emission inventories, transport processes, chemistry, and the simulated $O_3$, $CH_4$, and CO distributions in the troposphere was given in {\it Wang et al.} [2001], {\it Wang et al.} [1999], and {\it Wang and Shallcross} [2000]. Briefly, the model uses a semi-Lagrangian approach for the large-scale advection of long-lived species. Vertical mixing of species in the atmospheric boundary layer is modelled following the radiatively driven diurnal variation of the boundary-layer height. Vertical redistribution of chemical species through cloud convection is achieved using a mass-flux cloud scheme. The model uses a comprehensive gas-phase reaction mechanism for NOx, methane, NMHC, biogenic VOCs, and other sulfur and halogen compounds. Specified emissions for anthropogenic sources are those of the EDGAR and GEIA inventories. Geographical distributions of the sinks of important tropospheric species ($O_3$, $NO_2$, PAN, CO, etc) are considered via dry and wet deposition processes. The model uses analyzed winds (1992) from ECMWF, and it contains 19 vertical layers which extend from the surface to about 10 hPa. The model horizontal resolution is about $7.5^{\circ}$ in longitude and $4.5^{\circ}$ in latitude. \section{3. Results} The ozone simulations were performed using the IMS model with analyzed data of zonal wind, meridional wind, temperature, specific humidity, and surface pressure from the European Centre for Medium Range Weather Forecasts (ECMWF), which are updated every 6 hours. The IMS model was multitasked and run in parallel on the shared-memory CRAY J90 [{\it Wang et al.}, 2000]. The model was run for two years with full tropospheric chemistry, and the results from the second year, composed from 6-hourly model output, were used for the following discussion. We note that the meteorology used here does not directly correspond to the years when the ozonesonde data were available. Hence, some variability in the distribution of tropospheric ozone can be caused by the interannual variability in the tropics (e.g., {\it Peters et al.}, 2001). \subsection{3.1.
Surface Ozone Distribution} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=2.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{o3_jan.eps}} (b) \epsfxsize=2.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{o3_jul.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=3.0in \epsfysize=6.0in \rotatebox{-90.}{\epsfbox{o3_sep_oct.eps}} } } \caption{ \label{fig.8.1} Monthly mean ozone distributions (ppbv) calculated at the surface for (a) January, (b) July, and (c) September to October.} \end{figure*} Figure~\ref{fig.8.1} shows modelled surface ozone distributions over most of the marine boundary layer in the Atlantic and the Pacific (from $120^{\circ}E$ to $0^{\circ}W$, and from $60^{\circ}N$ to $60^{\circ}S$). The model surface ozone in the MBL clearly shows an interhemispheric asymmetry when comparing ozone concentrations at SH latitudes with those at the corresponding NH latitudes over the Atlantic and the Pacific. For example, the averaged January ozone concentration over the Atlantic and the Pacific in the NH is higher than in the SH (Figure~\ref{fig.8.1}(a)). This northward meridional gradient in ozone concentration in January is reversed to a southward meridional gradient in July (Figure~\ref{fig.8.1}(b)), indicating that the model meridional ozone gradients in the MBL always point toward the wintertime hemisphere. The change in the direction of the concentration gradients closely follows the progression of the seasons. The model predicts an area of high surface ozone concentrations over the tropical south Atlantic, which extends across southern Africa, the Indian Ocean, and Australia through September to October (Figure~\ref{fig.8.1}(c)). Based on satellite and ozonesonde measurements, {\it Jenkins et al.} [1997] reported that this area shows the highest tropospheric ozone concentrations through September to October. This indicates that the model-calculated surface ozone is consistent with the measurements over the southern MBL. Notice that the seasonal tropospheric ozone maximum in the tropical south Atlantic was first recognised from satellite observations by {\it Fishman et al.} [1986, 1991], followed by many subsequent studies [e.g., {\it Thompson et al.}, 1996; {\it Jacob et al.}, 1996; {\it Diab et al.}, 1996; {\it Browell et al.}, 1996]. \subsection{3.2. Comparison of Model with Surface Measurements} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode \epsfxsize=6.0in \epsfysize=3.0in \epsfbox{sites.eps} } } \caption{ \label{fig.sites} Distribution of surface and ozonesonde measurement sites used for this study.} \end{figure*} The results from the IMS annual simulations were first compared with surface measurements in the remote MBL environment. Figure~\ref{fig.sites} shows a global distribution of the surface and ozonesonde measurement sites used for the following comparison. The annual surface ozone measurements at Westman Is., Bermuda, Mauna Loa, and Samoa were taken from the NOAA CMDL Surface Ozone Data [{\it S.J. Oltmans}, 2001, personal communication; see also {\it Oltmans and Levy}, 1994]. In addition to these sites, ozonesonde measurements were taken from the same CMDL source [{\it S.J. Oltmans}, 2001, personal communication], and the NASA SHADOZ data [{\it Thompson and Witte}, 1999].
\begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=3.0in \epsfysize=1.5in \epsfbox{vmi.eps} (b) \epsfxsize=3.0in \epsfysize=1.5in \epsfbox{ber.eps} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=3.0in \epsfysize=1.5in \epsfbox{mlo.eps} (d) \epsfxsize=3.0in \epsfysize=1.5in \epsfbox{smo.eps} } } \caption{ \label{fig.8.14.1} Comparison of modelled (solid thick lines) seasonal cycles of $O_3$ (ppbv) at (a) Westman, Iceland, (b) Bermuda, (c) Mauna Loa, and (d) Samoa with the measurements (thin dashed lines). Two measured $O_3$ levels (for 1988-1992, except at Westman where the 1992-1997 data were used) are shown here, one for the daily maximum and the other for the daily minimum.} \end{figure*} Figure~\ref{fig.8.14.1} shows time-series plots of modelled and observed $O_3$ levels at four sites located in the remote MBL. We compare model results with two measured $O_3$ levels (for 1988-1992), one for the daily maximum and the other for the daily minimum. While the observed daily minima represent the indigenous local background 'clean' conditions, the observed daily maxima clearly indicate nonindigenous influences, such as elevated $O_3$ from the upper troposphere or the long-range transport of high $O_3$ and anthropogenic $O_3$ precursors from industrial and biomass burning areas. These time-series plots show very distinct and easily identifiable spring ozone maxima at these MBL sites. While it has long been observed that the spring maximum is a NH phenomenon [e.g., {\it Monks}, 2000], the SH observations at Samoa also show a very distinct springtime maximum comparable to those at the NH sites. The observed spring ozone behaviour is generally well reproduced by the model at these sites, and the modelled ozone levels generally fall within the observed ranges at Westman, Bermuda, and Mauna Loa, and are at the upper bound of the observed ozone concentrations at Samoa. Though the model overestimates ozone at some tropical MBL locations, the observed seasonal cycles are closely reproduced by the model. This indicates that the model is capturing the correct sense of the processes controlling ozone variation in remote MBL environments. Notice that the time of the observed minimum (JJA) is not reproduced by the model at Westman. This indicates that the model underestimates the processes contributing to the ozone levels at this location in other seasons. For example, in the simulation without considering tropospheric emissions (see part 3), the model can only produce ozone close to the daily minimum measurements. Hence, the background ozone concentration at higher latitudes is likely to be too low in the seasons from autumn to late winter. The model also overestimates ozone at Samoa, indicating that too much ozone has been transported/produced in the southern MBL. We note that the observed daily maxima are 2 to 4 times those of the daily minima, indicating that the MBL environment is actually very sensitive to air from elsewhere, such as the upper troposphere and industrial and biomass burning areas. These transport-driven sensitivities are much higher than those driven by the photochemical loss process, which is on the order of no more than a few ppbv per day [e.g., {\it Paluch et al.}, 1994; {\it Monks et al.}, 2000].
This indicates that ozone in the MBL at these sites is largely dominated by processes such as long-range transport of ozone-rich air, cloud convective transport, mixing and dry deposition in the MBL, and cross-tropopause transport. \subsection{3.3. Comparison of Model with Ozonesondes} While the previous surface comparisons show pronounced seasonal cycles and a distinctive interhemispheric asymmetry in $O_3$ in the low-latitude MBL, these comparisons were limited to the surface [e.g., {\it Oltmans et al.}, 1996]. Many components, such as the distributions of free tropospheric $O_3$ and other $O_3$ precursors, $O_3$ exchange in the upper troposphere and lower stratosphere, and atmospheric transport processes, are crucial for understanding the chemical behaviour over the remote marine troposphere. In this section we compare modelled ozone vertical profiles with ozonesonde measurements taken from the NASA SHADOZ data and the NOAA CMDL ozone data for the period 1998-1999. \subsubsection{3.3.1. The Atlantic} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{bermuda_cmdl.eps}} (b) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{bermuda.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{ascen_shadoz.eps}} (d) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{ascen.eps}} } } \caption{ \label{fig.ozonesonde.at} Time-height cross sections of $O_3$ (ppbv) from measurements at Bermuda ($32^{\circ}N$, $65^{\circ}W$) (a) and Ascension Island ($8^{\circ}S$, $14^{\circ}W$) (c). The model calculations for these locations are shown in (b) and (d), respectively.} \end{figure*} Figure~\ref{fig.ozonesonde.at} shows time-height cross sections of measured and modelled $O_3$ at northern (Bermuda) and southern (Ascension Island) Atlantic sites. For the northern Atlantic site (Figure~\ref{fig.ozonesonde.at}(a)), analyses of one year of vertical soundings of ozone show that high values of ozone extend from the upper troposphere to the middle and lower troposphere during the NH spring (March-May). High levels of ozone are also seen in the lower stratosphere during this period. The model-calculated annual variation of ozone in the troposphere at this location is shown in Figure~\ref{fig.ozonesonde.at}(b). The observed high ozone concentrations from the upper to the middle and lower troposphere during the NH spring are generally well reproduced by the model. Both model and ozonesondes show a consistent picture of the ozone distribution in the troposphere during the NH spring compared with other seasons: larger ozone concentrations extend downward from the tropopause to near the surface, while smaller ozone concentrations extend upward from the surface to near the tropopause. These characteristics are consistent with other analyses [{\it Oltmans et al.}, 1996]. {\it Moody et al.} [1995] suggested that the elevated ozone concentrations in the midtroposphere at this location during this period are associated with downward transport of ozone from the upper troposphere and lower stratosphere. Based on the analyses of summer and spring ozonesondes at five locations over the North Atlantic, {\it Oltmans et al.} [1996] found a connection between large ozone mixing ratios and dry air in the middle and upper troposphere and large ozone values in the tropopause region.
They suggested that the stratosphere plays a major role in loading the troposphere with ozone, and that high ozone events usually extend downward from the tropopause region. From trajectory analyses showing the history of the transport paths, {\it Oltmans et al.} [1996] concluded that the upper troposphere and lower stratosphere was the source of elevated ozone in the troposphere. For the southern Atlantic site (Figure~\ref{fig.ozonesonde.at}(c)), high ozone concentrations are seen in the troposphere during the SH spring (September-November). The spread of high ozone levels extends from the upper troposphere downward to near the surface during this season. Model calculations (Figure~\ref{fig.ozonesonde.at}(d)) at this southern Atlantic site show similar spring ozone maxima extending from the upper troposphere downward, in accordance with the measurements, though the modelled high ozone in the upper troposphere does not extend as far down as in the measurements. The enhanced tropospheric ozone, which occurs between July and October and is observed at Ascension Island, is linked to dry-season biomass burning [{\it Diab et al.}, 1996]. During September-October, gases from extensive fires in Brazil were transported by convective storms into the upper troposphere, where tropospheric ozone was photochemically produced and advected eastward over the south Atlantic, while emissions from the widespread fires in the deep-convection-free central Africa were advected at low altitudes over the Atlantic [{\it Browell et al.}, 1996; {\it Jacob et al.}, 1996; {\it Thompson et al.}, 1996]. The lower modelled ozone concentrations in the troposphere compared with the ozonesondes indicate that the biomass burning emissions in equatorial South America and central Africa are too low during the biomass burning season. The above analyses show an interhemispheric asymmetry in the tropospheric ozone distribution over the Atlantic basin. More persistent and widespread ozone maxima were observed and modelled during the hemispheric spring months, from March to May for the NH, and from September to November for the SH, than in any other season of the year. Though the season (hemispheric spring) is the same, the sources of elevated ozone in the troposphere are different when comparing the NH with the SH. For the NH Atlantic basin, the downward transport of high ozone from the upper troposphere and lower stratosphere is likely to be the major source. For the SH Atlantic basin, biomass burning emissions from the continents, together with subsequent cloud convective transport and/or large-scale transport, are likely to be the major contributors (see the following part 2 and part 3 papers for further discussion). \subsubsection{3.3.2. Western Indian Ocean} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{reunion_shadoz.eps}} (b) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{reunion.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{irene_shadoz.eps}} (d) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{irene.eps}} } } \caption{ \label{fig.ozonesonde.in} Time-height cross sections of $O_3$ (ppbv) from measurements at La Reunion ($21^{\circ}S$, $56^{\circ}E$) (a) and Irene ($26^{\circ}S$, $28^{\circ}E$) (c).
The model calculations for these locations are shown in (b) and (d), respectively.} \end{figure*} Figure~\ref{fig.ozonesonde.in} shows comparisons of ozone profiles from ozonesondes with model calculations at two locations over the SH western Indian Ocean. Extensive high ozone concentrations from the lower to the upper troposphere are seen at these sites. Model calculations show similar SH spring maxima in the troposphere compared with the measurements. Both locations experience the same sources of elevated spring ozone in the troposphere as the SH Atlantic sites of Ascension Island and Natal [{\it Diab et al.}, 1996; {\it Baldy et al.}, 1996]. Analysis of one year of ozonesondes at Reunion Island (Figure~\ref{fig.ozonesonde.in}(a)) shows that high levels of ozone are observed in the lower to upper troposphere during the SH spring (September-November). {\it Baldy et al.} [1996] reported that the elevated ozone in the free troposphere during this period at this island is concomitant with active biomass burning in the southeastern African continent and Madagascar. {\it Thompson et al.} [1996] reported that features of elevated tropospheric $O_3$ ($\ge$ 90 ppbv) extend in a band from $0^{\circ}$ to $25^{\circ}S$, over the SE Indian Ocean, Africa, the Atlantic, and eastern South America during September-October. They showed a strong connection between regions of high ozone and concentrated biomass burning. \subsubsection{3.3.3. Western Pacific Ocean} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{taiwan_shadoz.eps}} (b) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{taiwan.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{fiji_shadoz.eps}} (d) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{fiji.eps}} } } \caption{ \label{fig.ozonesonde.wpa} Time-height cross sections of $O_3$ (ppbv) from measurements at Taiwan ($25^{\circ}N$, $121^{\circ}E$) (a) and Fiji ($17^{\circ}S$, $179^{\circ}E$) (c). The model calculations for these locations are shown in (b) and (d), respectively.} \end{figure*} The observations and our previous modelling at sites in the southern Atlantic basin showed that large-scale biomass burning emissions provide sources of elevated ozone over the tropical south Atlantic. Long-range transport of biomass burning pollution could affect ozone on a hemispheric scale [{\it Fishman et al.}, 1991; {\it Schultz et al.}, 1999]. Here we examine the extent of seasonal biomass burning influences over the Pacific basin. Figure~\ref{fig.ozonesonde.wpa} shows time-height cross sections of vertical ozone profiles from ozonesondes and model calculations at Taiwan (in the subtropical western North Pacific) and Fiji (in the subtropical western South Pacific). Observed ozone profiles at Taiwan show spring ozone maxima in the troposphere similar to those at the sites in the northern Atlantic and at Hawaii in the eastern North Pacific (see next section). The model shows a similar spring ozone maximum, though the elevated ozone concentrations do not extend as far down to near the surface as in the measurements. The major differences at altitudes below 4 km ($\sim$ 600 hPa) coincide with the altitude range most influenced by continental outflow [e.g., {\it Kajii et al.}, 1997; {\it Crawford et al.}, 1997].
This indicates that the model underestimates the impact of continental outflow of $O_3$ precursors such as NO and NMHC [{\it Crawford et al.}, 1997]. The model calculations at Fiji show that elevated tropospheric ozone occurs in August, and from October onward. The elevated ozone ($\ge$ 40 ppbv) calculated by the model at Java is seen from May to August, instead of the measured maxima from October to November. {\it Schultz et al.} [1999] reported the importance of biomass burning emissions in South America and Africa for the ozone budget at higher altitudes, and of the NOx decomposed from transported PAN at altitudes below 4 km over the tropical South Pacific. This indicates that the model underestimates the impact of biomass burning emissions and the long-range transport of reactive nitrogen at Java, while overestimating the impact of biomass burning emissions at Fiji. \subsubsection{3.3.4. Central to Eastern Pacific} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{hilo_cmdl.eps}} (b) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{hawaii.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{tahiti_shadoz.eps}} (d) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{tahiti.eps}} } } \caption{ \label{fig.ozonesonde.epa} Time-height cross sections of $O_3$ (ppbv) from measurements at Hawaii ($20^{\circ}N$, $155^{\circ}W$) (a) and Tahiti ($18^{\circ}S$, $149^{\circ}W$) (c). The model calculations for these locations are shown in (b) and (d), respectively.} \end{figure*} The South Pacific is the region of the tropical troposphere most remote from human activity [{\it Schultz et al.}, 1999]. Figure~\ref{fig.ozonesonde.epa}(c) shows ozone vertical profiles from ozonesondes at Tahiti. At this location, an identifiable SH spring ozone maximum is seen, as elevated ozone concentrations extend from the upper troposphere downward to the middle troposphere. The timing of this ozone maximum, from September to November, coincides with the intensive SH biomass burning activities taking place in South America, Africa, Southeast Asia, and Oceania [{\it Schultz et al.}, 1999]. Elevated ozone ($\ge$ 40 ppbv) levels extend from the upper to the middle troposphere at Samoa in October. The measurements also exhibit another period of elevated ozone ($\ge$ 40 ppbv) in the upper troposphere in June-July. The model calculations show that elevated ozone extends from the upper to the middle troposphere in this period, indicating the contribution of biomass burning emissions. However, the model does not show the elevated ozone as observed in September-October. Modelled ozone vertical profiles at South Pacific sites (Java, Fiji, Samoa, and Tahiti) are persistently higher than the observations in June-July, indicating that the model contains too much local biomass burning emission, or that too much biomass burning pollution is transported into this area from Africa or South America. On the other hand, too little biomass burning emission is produced or transported into the South Pacific during September-October. For the NH site of Hawaii, both observed and modelled ozone profiles show a very distinctive spring maximum. Elevated ozone levels extend from the upper troposphere to near the surface. Seasonal minima appear during the summer months. {\it Wang et al.} [1998] attributed this strong spring maximum to the long-range transport of Asian pollution over the North Pacific.
Analyses of anthropogenic aerosols [{\it Perry et al.}, 1999] and CO [{\it Jaffe et al.}, 1997] at this site also show a seasonal maximum in spring, with sources from the Asian continent [{\it Perry et al.}, 1999]. In summary, the comparison of time-series ozone vertical distributions over the Pacific basin shows an interhemispheric asymmetry in ozone concentrations between the NH and SH subtropics. While the NH spring maxima are maintained by the continental outflow and long-range transport of continental anthropogenic pollutants, the SH spring maxima are likely due to the biomass burning emissions that take place in Southeast Asia, Oceania, southern Africa, and South America, and to the transport of $O_3$ precursors such as PAN and NOx from soil and lightning [{\it Schultz et al.}, 1999]. \subsubsection{3.3.5. Northern Higher Latitudes} \begin{figure*}[hp] \vbox{ \vskip -0.0in \centerline{ \leavevmode (a) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{trinidadhead_cmdl.eps}} (b) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{trinidadhead.eps}} } \vskip -0.0in \centerline{ \leavevmode (c) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{boulder_cmdl.eps}} (d) \epsfxsize=1.5in \epsfysize=3.0in \rotatebox{-90.}{\epsfbox{boulder.eps}} } } \caption{ \label{fig.ozonesonde.ohl} Time-height cross sections of $O_3$ (ppbv) from measurements at Trinidadhead ($41^{\circ}N$, $124^{\circ}W$) (a) and Boulder ($40^{\circ}N$, $105^{\circ}W$) (c). The model calculations for these locations are shown in (b) and (d), respectively.} \end{figure*} Figure~\ref{fig.ozonesonde.ohl} extends the comparison of ozone vertical distributions from ozonesonde measurements with the model to higher-latitude locations in the NH. Both model and measurements at Trinidadhead and Boulder show the highest ozone in spring in the lower stratosphere (200 hPa), and from April to August in the middle troposphere and near the surface. Limited ozonesonde measurements and model calculations at Fairbanks show similar timing for the occurrence of the spring ozone maximum compared with Trinidadhead and Boulder. Model calculations at the Azores show elevated ozone in the lower stratosphere in spring, in the middle troposphere from spring to summer, and in the lower troposphere in spring. These characteristics, high ozone extending downward from the tropopause region, are close to the available measurements [{\it Oltmans et al.}, 1996]. \section{4. Summary} In this paper we present modelling results from a 3D CTM for the global troposphere. The modelled results are examined at the surface and on a series of time-height cross sections at several locations spread over the Atlantic, Indian, and Pacific Oceans. Comparison of the model with surface measurements at remote MBL stations indicates close agreement. The most striking feature, the hemispheric spring ozone maximum in the MBL, can be most easily identified from these NOAA CMDL surface ozone measurements, at the NH sites of Westman Island, Bermuda, and Mauna Loa, and at the SH site of Samoa. Modelled ozone vertical distributions in the troposphere are compared with ozone profiles from NOAA CMDL and NASA SHADOZ ozonesonde measurements. For the Atlantic and Indian Ocean sites, the model generally produces a hemispheric spring ozone maximum close to that of the measurements. The model also produces a spring ozone maximum close to the measurements in the northeastern and tropical north Pacific, and at sites in the NH high latitudes.
The close agreement between the model and the measurements indicates that the model can reproduce the proposed processes responsible for producing the spring ozone maximum in these regions of the MBL, lending confidence in the use of the model to investigate MBL ozone chemistry. Overall, the model appears to perform better at sites where stratospheric input and biomass burning emissions are the major contributors, for example, at sites in the Atlantic basin, the western Indian Ocean (except Nairobi), the central North Pacific, and the NH high latitudes. The model performance degrades as the distance between the site and the continental source increases, or when the site is located close to the equator (e.g., San Cristobal and Nairobi), where the complex tropical circulations (e.g., the Walker and Hadley circulations) and deep cloud convection are more difficult for a model to handle well. Other factors such as dry deposition, heterogeneous chemistry, halogen chemistry, model resolution, and cross-tropopause transport can all affect model results and need further investigation. In the following two papers (Parts 2 and 3) we investigate the impact of stratosphere-troposphere exchange and biomass burning emissions on the simulated ozone distribution, respectively. \acknowledgments The authors would like to thank BADC, ECMWF, Central Weather Bureau (Taiwan), S.J. Oltmans, W.-S. Kau, G. Carver, and Brian Doty for their support of this work. We are very grateful to the NASA SHADOZ project for the ozonesonde data archive ($http://hyperion.gsfc.nasa.gov/Data services/shadoz/Sites2.html$). This research was supported by the NSC grant NSC-89-2119-M-008-007. The Centre for Atmospheric Science is a joint initiative of the Department of Chemistry and the Department of Applied Mathematics and Theoretical Physics. This work forms part of the NERC U.K. Universities Global Atmospheric Modeling Programme. \balance
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{s:intro} In the context of vibration attenuation and elastic wave control, metamaterials are mechanical systems featuring a wave-carrying medium decorated with arrays of resonating units. Effectively, each resonator acts as a tuned mass damper. When tuned all at the same frequency, these resonators give rise to frequency regions of strong wave attenuation called locally-resonant bandgaps~\cite{Liu2000, Huang2009, Hussein2014}. What makes metamaterials appealing from an application standpoint is their capability of attenuating waves with wavelengths much larger than the size of the resonators or their spacing~\cite{Oudich2011, Rupin2014, Celli2015}. A wide variety of elastic media across different length-scales can be turned into metamaterial systems by adding or embedding resonating units, arranged according to desired periodic or non-periodic patterns; examples are metamaterial bars and beams~\cite{Bergamini2014, Attarzadeh2020}, plates~\cite{Oudich2011, Rupin2014}, solids~\cite{Liu2000, Bilal2018} and half-spaces~\cite{Garova1999, Schwan2013}. In addition, the resonating units can be engineered to interact with all types of waves propagating in these media, from flexural to longitudinal, shear and surface-type. In this work, we are interested in attenuating waves in half-spaces. Depending on the location and nature of the excitation, several types of waves can develop in a half-space. If the source is on or near the surface, surface waves (e.g., of the Rayleigh type) will travel through the medium. Metamaterial systems that can interact with these waves feature above-surface or sub-surface resonators~\cite{Garova1999, Palermo2016, Colombi2016, colombi2016wedge, Miniaci2016, Colquitt2017, Muhammad2019, Palermo2019}. When the source is far below the surface, the waves that travel in the medium are predominantly longitudinal- and shear-polarized bulk waves. If the goal is to shield a location on the surface from these waves, the most logical metamaterial design would feature a number of resonating units located directly below the target location~\cite{Achaoui2016}. In the seismic realm, where this problem is most relevant, this amounts to creating metamaterial-like foundations~\cite{Mitchell2014, Lasalandra2017, Casablanca2018, Basone2019, Colombi2020}. Here, we are interested in the attenuation of shear waves that approach the surface of a half-space from the depth~\cite{Finocchio2014, Sun2020}. In particular, we propose the concept of \emph{metapiles}: one-dimensional arrays of resonators buried near the half-space surface and located around -- rather than underneath -- a target location to be isolated. This concept is illustrated in the schematic of Fig.~\ref{f:intro}(a). \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_Intro} \caption{The concept of metapiles. (a) Sketch that illustrates the proposed idea of attenuating shear waves with sparse metapiles. (b) Experimental setup for proof of concept. To provide an idea of its scale, we highlight the distance between the centers of two resonators.} \label{f:intro} \end{figure} We demonstrate that, when properly designed, sparse arrangements of metapiles can significantly attenuate shear waves that impinge on the target location from the depth direction. The key idea behind our concept is that, owing to their subwavelength response, metapiles need not be adjacent to produce wave attenuation effects.
In other words, by engineering the distance between piles to be narrower than the wavelength, we ensure that waves cannot be transmitted along the paths between piles and are instead attenuated. In a way, the behavior of metapiles is opposite to that of resonant waveguides designed for wave transmission~\cite{Khelif2004, Oudich2010, Celli2015}, which feature instead paths that are comparable in size to the wavelength. We first study this concept and the underlying physics by embedding 3D printed arrays of resonators in acrylic plates, as illustrated in Fig.~\ref{f:intro}(b). Numerical results on this system are validated via experiments. Then, we extend our idea to the case study of a semi-infinite soil medium and resort to numerical simulations to quantify the performance of metapiles and the influence of various design parameters, such as the characteristics of the resonators, the number of resonators in a pile, the number of piles and the distance between piles. We demonstrate that even a small number of piles and a limited number of resonators can have significant effects on wave attenuation. These considerations are here validated assuming linear elasticity and small-amplitude waves. In the future, the performance of metapiles should be further validated in more realistic scenarios, involving soil-structure interactions and broadband, transient signals. This work is organized as follows. In Section~\ref{s:table}, we show our idea via numerical simulations and experiments on a tabletop model. In Section~\ref{s:seismic}, we translate this idea to wave attenuation in an elastic half-space representing a soil medium. Conclusions and future outlook are discussed in Section~\ref{s:conc}. \section{Proof of concept} \label{s:table} To understand the physics behind the behavior of our metapiles and study their performance, we develop a two-dimensional experimental setup. The centerpiece of the setup, illustrated in Fig.~\ref{f:intro}(b), is a large acrylic plate in which we embed metapiles, here represented by arrays of 3D printed resonators (ca. 1.8 cm in diameter). As discussed in this section, the design is guided by limitations imposed by the maximum size of the plate and by the minimum feature size allowed by our 3D printer. Leveraging this setup and finite element (FE) simulations performed in COMSOL Multiphysics, we demonstrate the wave attenuation performance of metapiles in an idealized setting, and draw preliminary information on their behavior. \subsection{Resonator design, fabrication and testing} \label{s:1D} Our experimental setup is based on the idea that an acrylic plate can be representative of a semi-infinite medium. This choice is inspired by our previous work on surface wave control~\cite{Palermo2019}. We create metapiles by carving through-holes in the acrylic plate and filling them with composite resonators that feature polymeric springs and metallic masses. Such resonators are common in the metamaterials literature~\cite{Liu2000, Bonanomi2015, Matlack2016, Barnhart2019}. In particular, our objective is to design these resonators for shear wave attenuation. As a first step, we set bounds on acceptable resonant frequencies. Considering the standard properties of polymethyl methacrylate (PMMA), Young's modulus $E = 5.5\,\mathrm{GPa}$, Poisson's ratio $\nu=0.35$ and density $\rho=1190\,\mathrm{kg\,m^{-3}}$, we can readily calculate the wave speed for shear waves: $v_s=\sqrt{E/(2\rho+2\rho\nu)}\approx1300\,\mathrm{ms^{-1}}$.
Assuming nondispersive propagation of shear waves in the plate, the wavelength as a function of frequency $f$ is $\lambda_s=v_s/f$. Since the size of the plate is limited to a maximum of $1219\times610\,\mathrm{mm}$, and since this size needs to be large enough to accommodate at least a couple of wavelengths along the shortest dimension, our frequency of operation has a lower bound of approximately $4\,\mathrm{kHz}$. There are two ways of making polymer-metal composite resonators. One way relies on surrounding the metallic mass with a soft elastomeric layer~\cite{Liu2000}; this design is known to feature large damping~\cite{Barnhart2019}. The other avenue, chosen in this work, features 3D printed compliant springs made of a stiff polymer~\cite{Bonanomi2015, Matlack2016}; this combination is a better choice if the damping within a resonator needs to be minimized. In this work, the 3D printed springs are made of Shapeways' polyamide (PA2200) and fabricated via selective laser sintering. The properties of this material are: Young's modulus $E_p = 1.7\,\mathrm{GPa}$, Poisson's ratio $\nu_p=0.34$ and density $\rho_p=930\,\mathrm{kg\,m^{-3}}$~\cite{Pajunen2019}. Our resonator of choice is illustrated in Fig.~\ref{f:1D}(a), and it features a circular polyamide casing with circular springs and a heavy mass at its center. The geometry of the resonator is chosen via trial-and-error, with the objective of obtaining a resonance in the 6\,kHz range. \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_1DNum} \caption{3D printed resonator design. (a) Unit cell for the one-dimensional tests, with its relevant dimensions. Its out-of-plane thickness is $b=6.35\,\mathrm{mm}$. Bloch boundary conditions are applied to the long edges, and free boundary conditions to the short edges. The inset illustrates the polyamide-steel resonator. (b) Numerical dispersion relation of the unit cell in (a). The insets illustrate three distinct mode shapes (from lowest to highest frequency: shear, longitudinal and out-of-plane) at $k=\pi/a$.} \label{f:1D} \end{figure} The thickness of the polyamide walls is $t=0.85\,\mathrm{mm}$, the total diameter of the casing is $d_o=18\,\mathrm{mm}$, the diameter of the heavy mass is $d_i=8.4\,\mathrm{mm}$ and the distance between the center of the heavy mass and the center of one of the circular springs is $d_c=5.7\,\mathrm{mm}$. For the heavy metallic mass, we choose Grade 304 Stainless Steel, whose nominal properties are: Young's modulus $E_s = 193\,\mathrm{GPa}$, Poisson's ratio $\nu_s=0.27$ and density $\rho_s=8000\,\mathrm{kg\,m^{-3}}$. To test this design, we consider as unit cell a strip of acrylic of height $h=60\,\mathrm{mm}$, width $a=20\,\mathrm{mm}$ and out-of-plane thickness $b=6.35\,\mathrm{mm}$, which features a circular hole of diameter $d_o$. The resonator is press-fit into the acrylic and the two parts are bonded via cyanoacrylate. This unit cell is tessellated along the $x$ direction so as to form a 1D array. To simulate wave propagation in an infinite array, we use finite element simulations with Bloch periodic boundary conditions. The $h$ dimension of the strip, perpendicular to the direction of wave propagation, is chosen to be significantly larger than $a$ to simulate wave speeds that are comparable to those we expect when the resonators are embedded in a large plate and are not in proximity of the plate's boundaries.
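To build intuition for how a Bloch analysis reveals a locally-resonant bandgap, consider first a minimal lumped analogue (our sketch, not the FE model of our unit cell): a 1D chain of host masses $M$ connected by springs $K$, each carrying an internal resonator of mass $m$ and stiffness $k$. The parameter values below are purely illustrative.
\begin{verbatim}
import numpy as np

# Minimal 1D "mass-in-mass" chain: a sketch of a Bloch analysis.
# A locally-resonant gap opens near f_r = sqrt(k/m)/(2*pi).
M, K = 1.0, 1.0e6    # host mass [kg] and host spring [N/m] (illustrative)
m, k = 0.5, 6.9e5    # resonator mass and spring, f_r ~ 187 Hz (illustrative)

qa = np.linspace(1e-3, np.pi, 300)  # Bloch wavenumber times lattice constant
branches = []
for q in qa:
    A = 2 * K * (1 - np.cos(q)) + k
    # det([[A - M*w2, -k], [-k, k - m*w2]]) = 0  ->  quadratic in w2
    w2 = np.roots([M * m, -(A * m + M * k), 2 * K * (1 - np.cos(q)) * k])
    branches.append(np.sort(np.sqrt(w2.real)) / (2 * np.pi))  # branch f [Hz]
branches = np.array(branches)

print(f"resonance f_r = {np.sqrt(k / m) / (2 * np.pi):.0f} Hz")
print(f"bandgap ~ [{branches[:, 0].max():.0f}, {branches[:, 1].min():.0f}] Hz")
\end{verbatim}
The same mechanism is at play in the FE Bloch analysis of our actual unit cell, to whose results we now turn.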
The result of this analysis is the band diagram shown in Fig.~\ref{f:1D}(b), where each circular marker is automatically color-coded depending on the characteristics of the corresponding mode shape; this allows us to identify specific modes of wave propagation. The blue markers correspond to a longitudinal mode, which features a bandgap starting at $6.1\,\mathrm{kHz}$, where the mode flattens; as highlighted by the corresponding mode shape at the edge of the Brillouin zone (corresponding to the maximum $k$ value), this gap is caused by the resonant dynamics of the composite resonator. The red markers represent a shear mode, which also features a resonant bandgap starting at $5.9\,\mathrm{kHz}$. This is the mode we are mostly interested in. All gray points in Fig.~\ref{f:1D}(b) correspond to mixed modes. Of interest to our discussion is the mode featuring out-of-plane resonant dynamics at $8\,\mathrm{kHz}$. Now that we have evidence of the presence of a shear wave bandgap at acceptable frequencies, we can validate these predictions experimentally using a finite strip of 18 resonators. Our experimental setup is illustrated in Fig.~\ref{f:1Dexp}(a). \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_1DExp} \caption{Experiments on 1D specimens. (a) Experimental setup to test a one-dimensional array of resonators embedded in acrylic. (b) Numerical (circular markers) and experimentally-reconstructed (colormap) dispersion relations. (c) Numerical and experimental transmission curves.} \label{f:1Dexp} \end{figure} The strip is glued to a piezoelectric actuator capable of generating shear waves (Panametrics Videoscan V150-RM), which imparts a wideband Ricker signal centered at $10\,\mathrm{kHz}$. The signal is created in MATLAB, fed to a signal generator (Agilent 33220A), and amplified by a piezo amplifier. To measure the shear component of the traveling wave at various points of the structure, we place a Laser Vibrometer (Polytec) on a motorized linear stage, and program the stage so that the laser can acquire data at each resonator and, in particular, at points where the resonator wall is directly in contact with the acrylic material. Velocity signals are then recorded by an oscilloscope (Tektronix DPO3034) and postprocessed in MATLAB. To provide a complete characterization of the measured mode of wave propagation, we take the data at all measurement points and use it to reconstruct a dispersion relation for the medium via a 2D discrete Fourier transform of the space-time data. The reconstructed dispersion is shown as a colormap in Fig.~\ref{f:1Dexp}(b). We can see that the dark regions match the markers corresponding to the numerical dispersion relation (same markers as in Fig.~\ref{f:1D}(b)), especially around the point where the lower shear branch flattens (near resonance). Predicting the full extent of the bandgap from this plot is more challenging. To better extract this information, we consider only the measured data at the input (near the transducer), and at the output (the point of the specimen farthest from the transducer). We then plot the transmission (TR, output velocity divided by input velocity), and we compare it to a numerical prediction obtained via harmonic analysis in COMSOL. Note that no damping is used in these simulations. This comparison is illustrated in Fig.~\ref{f:1Dexp}(c).
We can see that the numerical model captures the experimental response for a wide range of frequencies, as highlighted by the proximity of peaks in the two sets of results. Additionally, both numerical and experimental results show a dip in the transmission between 6 and $7.3\,\mathrm{kHz}$, which can be ascribed to the bandgap. \subsection{Experiments on shear wave attenuation via metapiles} After validating the behavior of our composite resonators, we can proceed to provide an experimental demonstration of the attenuation behavior of metapiles embedded in a larger acrylic domain. While this idea could be extended to 3D domains, we choose to validate it for a 2D medium for simplicity. A schematic of our experimental setup, with all its relevant dimensions, is illustrated in Fig.~\ref{f:2D}(a); recall that a photo of this same setup is shown in Fig.~\ref{f:intro}(b). \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_2D} \caption{Experiments on shear wave attenuation via metapiles. (a) Experimental setup, featuring a large acrylic plate with two metapiles. (b) Numerical and experimental transmission curves. The averaged numerical curve is obtained from a moving average of the raw data. The experimental results, due to the limitations of our setup, account for both in-plane and out-of-plane motion.} \label{f:2D} \end{figure} The acrylic domain has width $L=1219\,\mathrm{mm}$, height $H=610\,\mathrm{mm}$ and thickness $b=6.35\,\mathrm{mm}$. Two regions of the bottom edge of the plate, located 30\,cm away from the vertical edges, are clamped to an optical table via angle brackets. The same transducer used for the 1D experiments is also glued at the center point of the bottom edge of the plate, so as to simulate a source of shear waves. Near the transducer, we also attach a small acrylic block onto the plate. This is needed to provide a measurement point for in-plane shear waves that is accessible to the laser vibrometer. This measurement point is used to record an input signal. At the top edge of the plate, representing the free surface of our domain, we also glue an acrylic block that is used to record the output response. To properly record in-plane shear waves, the laser should be parallel to the plate itself (i.e., it should be parallel to the direction of vibration that needs to be measured). However, this is not possible due to space limitations in our setup. As a consequence, the laser is oriented at a $\approx30^{\mathrm{o}}$ angle with respect to the plate and is therefore bound to record some out-of-plane dynamics together with the desired in-plane vibration. In order to create and test metapiles, we carve holes in the acrylic plate by means of a CNC router, and press-fit composite resonators identical to those introduced in Section~\ref{s:1D}. Resonators are located right below the top edge (the center of the first resonator is located at $a/2$ from the top edge), and the edge of the resonator of each pile is located at a distance $D$ from the vertical mid-line of the plate, so that the center of the resonator is at a distance $D+d_o/2$ from that same line. For our experiment, given $d_o=18\,\mathrm{mm}$, we choose $D=21\,\mathrm{mm}$. This $D$ value is chosen so that the distance between piles $2D=42\,\mathrm{mm}$ is much smaller than the wavelength we expect at the resonance frequency of the resonators, $223\,\mathrm{mm}$. Our specimen features $N_r=5$ resonators for each pile.
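This subwavelength criterion can be double-checked with the quantities introduced earlier; the short Python sketch below (our illustration, using only numbers stated in the text) confirms that the gap between piles is roughly one fifth of the shear wavelength at the resonator frequency.
\begin{verbatim}
# Subwavelength check for the metapile spacing (illustrative sketch).
E, rho, nu = 5.5e9, 1190.0, 0.35          # PMMA properties from above
v_s = (E / (2 * rho * (1 + nu))) ** 0.5   # shear wave speed, ~1300 m/s
f_r = 5.9e3                               # shear resonance of the unit cell [Hz]
lam = v_s / f_r                           # shear wavelength, ~0.22 m
D = 0.021                                 # pile-to-midline distance [m]
print(f"lambda_s = {lam * 1e3:.0f} mm, gap between piles 2D = {2*D*1e3:.0f} mm")
print(f"2D / lambda_s = {2 * D / lam:.2f}")   # ~0.19, well below 1
\end{verbatim}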
This setup is replicated in COMSOL, where we perform a harmonic analysis to also determine the transmission of this medium to in-plane shear waves. A comparison between experimental and numerical transmission curves is shown in Fig.~\ref{f:2D}(b). Our numerical simulations do not feature any damping. To smooth out the numerical frequency response and to make it resemble the response one would obtain with moderate values of damping, we apply a moving average procedure. The response before and after the application of the moving average is shown as gray and black curves in Fig.~\ref{f:2D}(b). The red curve is the experimental transmission curve. We can see that there is good qualitative agreement between numerics and experiments up to 7 kHz. In particular, we can see that both sets of data capture the small dip around 4 kHz, the amplitude increase around 5 kHz and the large dip that starts right below 6 kHz. Based on our knowledge of the dynamics of these resonators (Section~\ref{s:1D}), we understand that this large dip is the onset of the bandgap induced by the metapiles. Above 7 kHz, we can see that the numerical response increases again, while the experimental one experiences a second large dip. We claim that this second dip is due to undesired out-of-plane dynamics of the plate that are picked up due to the inclination of the laser. This conjecture is corroborated by the fact that the band diagram in Fig.~\ref{f:1D}(b) shows that our resonators present an out-of-plane resonance in the neighborhood of 8 kHz, the frequency at which we see the second dip in Fig.~\ref{f:2D}(b). In conclusion, this preliminary experiment demonstrates that shear wave attenuation in a half-space can take place even if we use spaced-out arrays of resonators. Next, we will use numerical simulations to better understand this attenuation phenomenon. \subsection{Numerical generalization and parametric study} In order to probe the effects of various metapile parameters on the attenuation performance, it would be necessary to test many different spatial resonator configurations. This is very impractical to do experimentally. However, since we have validated our numerical simulations with experimental results for a specific choice of parameters, we now resort to numerical simulations to perform a limited parametric study. In particular, we keep the same resonators as in the previous sections, and vary $D$, the distance between the edge of a metapile and the location to be shielded, and $N_r$, the number of resonators in each metapile. To compare the performance of various configurations, we define a metric of wave attenuation performance as illustrated in Fig.~\ref{f:num}(a,b). \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_Num} \caption{Numerical parametric study. (a) Transmission curve of the bare plate (no resonators). TR$_b$ is the baseline transmission in the 4--10$\,\mathrm{kHz}$ range of interest, obtained by fitting the transmission with a first order polynomial. (b) Extraction of the attenuation measure from the transmission of a configuration with resonators. This specific configuration, featuring 20 adjacent piles with 5 resonators per pile, is used as reference to benchmark attenuation performance. The shaded area is obtained by intersecting the transmission with the TR$_b$ line. (c) Evolution of the attenuation performance with $N_r$, the number of resonators in a metapile, and $D$, the distance between the edge of one metapile and the output measurement point.
First, we consider a frequency range around the expected bandgap, here chosen to be from $4\,\mathrm{kHz}$ to $10\,\mathrm{kHz}$. We then compute the transmission of a bare acrylic plate without resonators and extract a baseline curve (linear fit of the response in the desired frequency range). This procedure, shown in Fig.~\ref{f:num}(a), is done to account for the slope of the response curve in the range of interest. When we consider a system with resonators, we first smooth out the numerical transmission with a moving average filter. Then, we consider as bandgap a continuous region that includes the expected resonance at $6\,\mathrm{kHz}$ and that remains below the baseline. The area of that region is our measure of attenuation. For each metapile configuration of interest, we extract the attenuation area and normalize it by the area obtained for a compact array of $20\times5$ resonators located right below the location to be shielded. The transmission plot and attenuation area for this reference configuration are shown in Fig.~\ref{f:num}(b). By considering the normalized areas for combinations of $D=0.25\text{--}4\,\mathrm{cm}$ and $N_r=1\text{--}9$, we can extract information to compile the design map shown in Fig.~\ref{f:num}(c). To better appreciate the extent of the attenuation area for each metapile configuration, one can see all transmission plots in Fig.~\ref{f:numall}. \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_NumAll} \caption{Complete map of the transmission results that yield the parametric colormap in Fig.~\ref{f:num}(c). All simulations feature a line wave source.} \label{f:numall} \end{figure} From this data set, we can conclude that configurations with more resonators in the metapile and with a smaller distance between piles perform better in terms of wave attenuation. In particular, from the map in Fig.~\ref{f:num}(c), we can see that the performance of the best metapile configurations (with low $D$ and high $N_r$) is approximately 30\% of the performance of the reference configuration, a metafoundation featuring a number of resonators an order of magnitude larger than any metapile configuration. From these plots, we can also appreciate that $D$ has a much larger effect than $N_r$, as highlighted by the fact that configurations with metapiles close to each other, but featuring a few resonators, perform better than configurations with many resonators per pile but with far-away piles. This is not surprising since, for $D=4\,\mathrm{cm}$, the distance between piles of $2D=8\,\mathrm{cm}$ is much closer in magnitude to the shear wavelength in the acrylic plate.
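For concreteness, the following Python sketch shows one possible implementation of this attenuation measure: it fits the first-order baseline on the bare-plate transmission, grows the continuous below-baseline region containing the expected $6\,\mathrm{kHz}$ resonance, and integrates the area between baseline and transmission. Variable names and the uniform frequency sampling are assumptions.
\begin{verbatim}
import numpy as np

def attenuation_area(f, tr_smooth, f_bare, tr_bare, f_res=6.0e3):
    # Baseline TR_b: first-order fit of the bare-plate transmission
    # over the 4--10 kHz range of interest.
    mask = (f_bare >= 4.0e3) & (f_bare <= 10.0e3)
    tr_b = np.polyval(np.polyfit(f_bare[mask], tr_bare[mask], 1), f)
    # Bandgap: continuous below-baseline region containing f_res.
    below = tr_smooth < tr_b
    i = int(np.argmin(np.abs(f - f_res)))
    if not below[i]:
        return 0.0
    lo, hi = i, i
    while lo > 0 and below[lo - 1]:
        lo -= 1
    while hi < len(f) - 1 and below[hi + 1]:
        hi += 1
    sl = slice(lo, hi + 1)
    # Attenuation measure: area between baseline and transmission,
    # to be normalized by the area of the 20x5 benchmark array.
    return np.trapz(tr_b[sl] - tr_smooth[sl], f[sl])
\end{verbatim}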
\section{Application to a soil half-space} \label{s:seismic} The results illustrated in Section~\ref{s:table} give us some preliminary information on the wave attenuation performance of metapiles. However, two aspects make vast parametric studies based on the model used in Section~\ref{s:table} impractical: (1) the size limitations of our experimental setup, which do not allow us to increase the distance between piles without incurring significant boundary effects; and (2) the fact that our COMSOL models are based on an accurate rendering of each resonator, and are therefore computationally expensive. Thus, we build a new model where resonators are simplified as spring-mass systems connected at single nodes of the half-space. We also take advantage of this new model to concentrate on a case study that more closely resembles a potential application of our system. We therefore consider an elastic half-space with soil-like properties. Using the software SAP 2000, a structural engineering oriented FE platform, we investigate the effects of different layouts of metapiles and their characteristics on shear wave attenuation. \subsection{The numerical model} The half-space model, illustrated in Fig.~\ref{f:sapmod}(a), has a base of $400\,\mathrm{m}$, a depth of $20\,\mathrm{m}$ and a mesh of four-node, $1\,\mathrm{m}\times1\,\mathrm{m}$ elements. \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_SapModel} \caption{(a) Elastic half-space model in SAP2000. (b) Top panel: comparison between the response of the soil and the response of the soil with 5 layers of resonators located below the surface. The resonators are tuned at the first peak of the soil response. $P_{s}$ is the amplitude of the soil response at 3.1\,Hz. Bottom panel: difference between the transmission of the soil and that of the soil with resonators.} \label{f:sapmod} \end{figure} We assume plane strain conditions. The half-space is assumed to be linear elastic, isotropic, and homogeneous. Soil conditions are modeled assuming a material density of $1700\,\mathrm{kg\,m^{-3}}$, an elastic modulus of $7.07\,\mathrm{GPa}$ and a Poisson's ratio of 0.3. We consider a damping ratio of 5\%. Frequency domain analyses are performed by applying a horizontal (shear) harmonic base displacement of $0.05\,\mathrm{m}$ to the baseline of the model. Frequencies of excitation are selected in the range $0\text{--}25\,\mathrm{Hz}$. We introduce absorbing boundary conditions using dampers, as proposed by Lysmer and Kuhlemeyer~\cite{lysmer1969finite}. Along the boundaries, we choose damping constants $c_1=1\,\mathrm{N\,s\,m^{-1}}$ and $c_2=0.25\,\mathrm{N\,s\,m^{-1}}$ along the horizontal and vertical directions, respectively, as studies show that this assumption leads to reasonable wave absorption~\cite{wang2009numerical, vermeer1998plaxis}. To reduce wave reflections at the lateral boundaries of the domain, we also choose a lateral extension of the domain that is more than eight times its height~\cite{amorosi2009numerical}. The size of the mesh is selected to meet the requirements proposed by Lysmer and Kuhlemeyer, as each element has dimensions much smaller than $\lambda/8$, where $\lambda$ is the wavelength corresponding to the frequency of interest $f$. For the analyses discussed in this work, $\lambda/8=v_s/(8f)=16\,\mathrm{m}$, where $v_s=396\,\mathrm{m\,s^{-1}}$ is the shear wave velocity and $f=3.1\,\mathrm{Hz}$. Finally, to perform a transmission analysis, we record the lateral displacement at an input point located at the midpoint of the baseline of the model, and at an output point (also called location to be isolated or control node) located at the midpoint of the upper boundary. For the half-space without resonators, we compare the peaks of the numerical transmission, shown as a black curve in Fig.~\ref{f:sapmod}(b), to the theoretical solution available in the literature~\cite{kramer1996geotechnical}. The numerical peaks match the theoretical values. Our frequency of interest is $3.1\,\mathrm{Hz}$, corresponding to the first shear resonance of the half-space.
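The element-size requirement quoted above follows from simple arithmetic, sketched below for reference using only the values stated in the text.
\begin{verbatim}
# Sketch of the element-size check behind the chosen mesh.
v_s = 396.0       # m/s, shear wave velocity of the modeled soil
f = 3.1           # Hz, frequency of interest (first shear resonance)
lam = v_s / f     # ~128 m, shear wavelength at f
print(lam / 8.0)  # ~16 m; the 1 m x 1 m elements are much smaller
\end{verbatim}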
Metapiles are modeled as arrays of resonators located at prescribed nodes of the mesh; thus, we assume that each resonator occupies an area of $1\,\mathrm{m}\times1\,\mathrm{m}$. Since we are interested in shear wave attenuation, the resonators are only capable of lateral motion. An example of transmission for a configuration featuring 5 rows of resonators located below the whole upper boundary of the domain is shown as a red line in the top panel of Fig.~\ref{f:sapmod}(b). These resonators are tuned to resonate at $3.1\,\mathrm{Hz}$, acting as a single tuned mass damper: the original peak of the half-space is replaced by an anti-resonance accompanied by two adjacent and distinct peaks. To quantify the effects of the resonators on the response of the model, we plot the difference between the transmission of the soil and that of the soil with resonators, as shown in the bottom panel of Fig.~\ref{f:sapmod}(b). A difference greater than 0 corresponds to attenuation regions, while a difference smaller than 0 means that the resonators have amplified the response. This plot clearly highlights that attenuation at 3.1\,Hz comes at a cost: the response is amplified at other frequencies. We then define the following metrics of attenuation: (i) the peak attenuation, defined as $P_{att}/P_s$, where $P_{att}$ is the maximum attenuation and $P_{s}$ is the soil response at the resonance peak of $3.1\,\mathrm{Hz}$; (ii) the effective attenuation, which is meant to account for the presence of amplification regions and is defined as $(P_{att}-P_{amp})/P_s$, where $P_{amp}$ is the maximum amplification; (iii) the bandwidth BW, i.e., the frequency range where an attenuation greater than 5\% is detected. \subsection{Parametric study} We begin our parametric study by investigating the influence of the characteristics of the resonators on the attenuation. This comparison is carried out by considering 5 rows of resonators homogeneously distributed below the free surface. This configuration is illustrated schematically in Fig.~\ref{f:sapmap}(a). \begin{figure}[!htb] \centering \includegraphics[scale=1.1]{Fig_SWM_SapMaps} \caption{Parametric study on the soil half-space model. Various configurations of resonators are here compared as a function of peak attenuation $P_{att}/P_s$, effective attenuation $(P_{att}-P_{amp})/P_s$, and bandwidth BW. (a) Resonator arrangement, featuring 5 rows of resonators homogeneously distributed below the free surface, used to study the influence of the resonators parameters. (b) Design maps obtained from (a), detailing the influence of the resonators' mass $m$ and damping factor $\xi$. (c) Generic metapile configuration with two arrays of resonators that are equidistant from the control node. (d) Design maps obtained from (c), detailing the influence of the distance between metapile arrays and control node $D$, and the influence of $N_r$, the number of resonators in each array, with parameters $m=2\,\mathrm{t}$ and $\xi=5$\% fixed. (e) Configuration with more than two metapiles. (f) Design maps obtained from (e), detailing the influence of the number of metapiles located on the same side of the control node $N_p$, and their distance $s$, with $m=2\,\mathrm{t}$, $\xi=5$\%, $D=10\,\mathrm{m}$, $N_r=5$ fixed.} \label{f:sapmap} \end{figure} We let the mass of each resonator range from $0.25\,\mathrm{t}$ to $2.5\,\mathrm{t}$, and the equivalent viscous damping factor from 2 to 16\%. We keep the natural frequency of the resonator constant at $3.1\,\mathrm{Hz}$ and therefore vary the stiffness of the resonator to tune its response.
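For concreteness, the sketch below shows the stiffness retuning and one possible computation of the three attenuation metrics defined above; variable names and the interpretation of the 5\% bandwidth threshold (here taken relative to $P_s$) are assumptions.
\begin{verbatim}
import numpy as np

# Stiffness retuning: keep f_n = 3.1 Hz fixed while sweeping the mass.
f_n = 3.1                          # Hz
m = 2000.0                         # kg (2 t)
k = m * (2.0 * np.pi * f_n) ** 2   # ~7.6e5 N/m

def attenuation_metrics(f, diff, P_s, thresh=0.05):
    # `diff` = transmission(soil) - transmission(soil + resonators):
    # diff > 0 means attenuation, diff < 0 means amplification.
    P_att = diff.max()                 # maximum attenuation
    P_amp = max(-diff.min(), 0.0)      # maximum amplification
    peak_att = P_att / P_s             # metric (i)
    eff_att = (P_att - P_amp) / P_s    # metric (ii)
    mask = diff > thresh * P_s         # metric (iii): bandwidth BW
    BW = (f[mask].max() - f[mask].min()) if mask.any() else 0.0
    return peak_att, eff_att, BW
\end{verbatim}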
From the charts of Fig.~\ref{f:sapmap}(b), it is interesting to observe that the peak attenuation is generally better for higher mass and lower damping. We can see that, in this configuration featuring a large number of resonators, even resonators with low masses and high values of damping yield a $\approx95\%$ attenuation of the peak. Similar trends are observed for the bandwidth, which peaks at about $0.45\,\mathrm{Hz}$ for large masses. It is less straightforward to draw meaningful conclusions for the effective attenuation; the only interesting feature of this map is the presence of an outlier for $\xi=8\%$ and $m=1\,\mathrm{t}$. For this combination of parameters, the effective attenuation is smaller than 0, indicating amplification of the signal. This highlights the importance of also considering amplification effects in addition to attenuation, an aspect that is discussed in other works~\cite{Xiao2020} and whose detailed investigation in the case of metapiles deserves a separate treatment. Since a mass of $2\,\mathrm{t}$ and a damping of 5\% allow us to obtain a significant response attenuation, we fix these parameters in the following analyses. To study the effects of varying the number of resonators $N_r$ in a pile and of their distance from the control node $D$, we build the model illustrated in Fig.~\ref{f:sapmap}(c). Fig.~\ref{f:sapmap}(b) shows that, when resonators are homogeneously distributed below the surface, the peak attenuation for a mass of $2\,\mathrm{t}$ and damping of 5\% amounts to $99\%$. Fig.~\ref{f:sapmap}(d) shows that the peak attenuation we obtain with only two metapiles located at any distance from $10$ to $30\,\mathrm{m}$ is still significant. This amounts to 86\% for $N_r=5$ and increases up to $97\%$ for $N_r=10$. Not surprisingly, deeper metapiles yield better peak attenuation. As $D$ increases, we can see that the peak attenuation decreases since the distance between piles $2D$ is now approaching $129\,\mathrm{m}$, the wavelength in the soil at $3.1\,\mathrm{Hz}$. Note that values of $D$ larger than $50\,\mathrm{m}$ would compromise the validity of the model since the piles would be too close to the left and right boundaries of our domain. The effective attenuation map yields less intuitive results, and features a minimum for $D=30\,\mathrm{m}$ and $N_r=8$. Finally, we can see that the bandwidth decreases as we increase $D$ and decrease $N_r$. We also analyze the influence of the number of adjacent metapiles by fixing $m=2\,\mathrm{t}$, $\xi=5\%$, $N_r=5$ and $D=10\,\mathrm{m}$. We call $N_p$ the number of adjacent piles per side of the control node, as shown in Fig.~\ref{f:sapmap}(e). We call $s$ the distance between piles. The wave attenuation performance of these configurations is shown in Fig.~\ref{f:sapmap}(f). Increasing $N_p$, while keeping $D$ constant, increases the attenuation performance of the system in terms of peak attenuation, effective attenuation and bandwidth. The distance between piles $s$ does not have a significant influence on the attenuation performance. \section{Conclusions} \label{s:conc} In this article, we have shown the shear wave attenuation properties of metapiles buried in an elastic half-space. First, we demonstrated via numerical simulations and experiments that cm-scale metapiles embedded in an acrylic plate can attenuate waves, when their distance is smaller than the wavelength in the medium of interest.
Then, we numerically extended this idea to waves propagating in soil, showing that metapiles have the potential to yield significant wave attenuation with a minimal number of resonators. Future work might be directed towards the realization of metapiles with spatially-varying distributions of resonance frequencies, to widen the frequency bandwidth of the wave attenuation regime~\cite{Colombi2020}. For seismic applications, it will be important to evaluate the performance of metapiles in real soil media, known to hinder some of the wave attenuation effects exhibited by metabarriers~\cite{Palermo2018, Zaccherini2020}, and to consider soil-structure interaction effects~\cite{Sun2020}. Another important aspect that deserves to be investigated is the coupling between metapiles and the structure to be isolated~\cite{Xiao2020}. In particular, the effects of undesired vibration amplification should be carefully considered. \section*{Acknowledgments} We wish to thank Antonio Palermo for fruitful discussions, and Sai Sharan Injeti, Alex Ogren and Semih Taniker for helpful COMSOL-related assistance. \bibliographystyle{elsarticle-num}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A smart city integrates technology with humanity, through the incorporation of the Internet of Things (IoT), for the well-being of its citizens and the improvement of its resources \cite{Eternal-thing}. In an IoT network, the sensor node is an important part, which has four main components: the sensor itself for data collection, the transceiver for data transfer to/from a local system or the Cloud, the microcontroller for controlling both the sensor and the transceiver, and the battery for powering the entire unit. The data constantly collected and processed support various monitoring tasks in our environment. The real-world deployment of IoT has given rise to the challenge of monitoring the health and performance of the sensor nodes themselves \cite{zhang2007health}. If one or more components of a sensor node malfunction, the erroneous data may falsely indicate a fault in the related system. Like any other component, sensors fail due to drift, bias, and precision degradation. Microprocessors usually last longer, but their memory \cite{Microprocessor_degradation_2} and other electronic components can fail with time. In this paper, however, we focus on battery health, as the death of the battery leads to the complete failure of the sensor node. For efficient and sustainable service, a resilient IoT-based solution has been conceived to communicate both sensed data and health data, as shown in Fig. \ref{fig:overview}. \begin{figure*}[htbp] \centering \includegraphics[width=0.9\textwidth]{Overview} \captionsetup{justification=centering} \caption{Overview of the proposed resilient IoT-based solution for Smart City} \label{fig:overview} \end{figure*} Efficient and sustainable computing for IoT has been an active area of research, focusing on self-sustaining IoT nodes requiring the least possible maintenance in the long run \cite{Eternal-thing}. For IoT devices to work with the highest efficiency, energy usage must be accurately predicted \cite{Radio_power_practical_issues}. Energy consumption can be reduced using a lightweight mutual authentication protocol for safe data transmission in smart cities \cite{Lightweight_authentication_IoT}, or using a more efficient A/D Converter for sampling sensor data \cite{ADC_efficiency}. Efficiency can also be improved by prioritizing more delay-sensitive machine-to-machine requests for a sustainable system \cite{sustainable_m2m}. All these developments have improved IoT devices, but they do not address accurate and efficient ways to predict the battery Remaining Useful Life (RUL), which is the main focus of this paper. Rechargeable Lithium-ion batteries are extensively used in Electric Vehicles \cite{future_of_EV}, Mobile Phones and Laptops \cite{super_capacitors}, IoT devices, and many other applications as the power source due to their high energy density, low self-discharge, and prolonged lifespan compared to other battery types \cite{GPR}. Battery-powered IoT devices are mainly used in remote applications, whose uptime and functioning are critical. Since Li-ion batteries degrade over time due to the formation of a Solid Electrolyte Interphase film at the carbonaceous anode material \cite{safari2008multimodal}, the battery loses capacity and gains internal resistance. Hence, the in-built system should be able to accurately and reliably monitor the State of Health (SOH) and predict the RUL of a battery to prevent over-discharge and overcharge \cite{intellbatt}, and to help in the timely replacement of the battery to avoid the failure of vital IoT instruments.
In general, the SOH of a battery is the measure of the battery's ability to store and deliver electrical energy. The RUL is defined as the remaining cycles or time before the battery reaches its End of Life (EOL), i.e., the point at which the SOH drops to 70\% or 80\% of its initial value. The SOH and RUL help in the short-term and long-term prediction of battery life, respectively.\par For battery health prediction, data generation is also challenging: a Li-ion battery lasts for thousands of cycles, so reliability researchers may need several months or years before failure testing completes. The experimental methods \cite{noura2020review} and model-based methods \cite{ICA}\cite{kalman_fliter} are complex and unsuitable for IoT applications. Several Machine Learning models, such as Neural Networks, Support Vector Machines (SVM), and Gaussian Process Regression (GPR), have also been used for battery health prediction, but they were not developed considering remotely located IoT applications with low computational capacity. In most of these existing works, the raw data is transmitted to the Cloud for health prediction, which increases the power needed by the transmitter. Since existing health prediction processes do not work for IoT devices deployed in the wild \cite{fafoutis2018predicting}, we have proposed a unique \textit{iThing} architecture capable of performing the SOH estimation and RUL prediction on-board using a peak extraction method with minimal computational load and storage requirements. The remainder of the paper is organized as follows: The related prior works and research gaps are mentioned in Section \ref{literature}. The contributions and novelty of the paper are stated in Section \ref{novelty}. The \textit{iThing} architecture, the peak extraction method, the methodology used, and the correlation analysis are explained in Section \ref{method}. The setup used for the experimental data extracted and the dataset used are discussed in Section \ref{experiment}. The accuracy and efficiency of the proposed peak extraction model and the prediction results are summarized in Section \ref{results}. Finally, the complete paper is summarized in Section \ref{conclusion}. \section{Related Works and Research Gap} \label{literature} In general, there are three different methods for battery health estimation: Experimental, Model-based, and Data-driven methods. Experimental methods include measuring direct Health Indicators (HIs) like internal resistance, impedance, and capacitance to estimate the battery SOH. The internal resistance of Li-ion batteries during charge-discharge cycles can be measured by an Ohm's Law-based method \cite{wang2018internal}. Electrochemical Impedance Spectroscopy (EIS) may be used for battery SOH derivation \cite{impedeance_based1}. Though accurate and straightforward, these experimental methods are laboratory-based, time-consuming, and need vast knowledge of the complex internal battery chemistry for precise prediction. Hence, they are not feasible for implementation in battery health estimation for IoT devices. \par Model-based methods, such as Kalman filter, least square-based methods, and simplified electrochemical models, are also used for battery health estimation. The influence of external resistance was investigated using a first-order equivalent battery model in \cite{resistance_based}. It is possible to estimate the battery SOH by a Kalman filter and a Thevenin model \cite{kalman_fliter}, or by Incremental Capacity Analysis and a Lorentzian function-based model \cite{ICA}.
These methods are also accurate and robust, but have very complex mathematical structures. Hence, they are not feasible for low-power IoT applications with low computational power. \par With the rapid development of Machine Learning and Artificial Intelligence, data-driven methods are becoming very popular, as they are non-parametric, rely little on electrochemical principles, and require neither a deep understanding of the internal working of the battery nor complex mathematical equations for battery health prediction. A SOH prediction method utilizing Prior Knowledge-based Neural Networks and the Markov chain was proposed in \cite{PKNN}. Battery SOH may also be estimated using the Relevance Vector Machine (RVM) and the Gray Model (GM) \cite{zhao2019hybrid}, or using SVM \cite{feng2019online}. Due to its several advantages, like less overfitting and fewer training data requirements, GPR has recently become more popular than other ML and AI techniques to estimate SOH and simultaneously train the RUL model \cite{GPR}. In this paper, the GPR technique predicts the RUL for maximum accuracy with a minimal computational load. \par \begin{table*}[htbp] \centering \caption{Related works on sustainable computing for IoT} \label{tab:prior_work_TSUCS} \begin{tabular}{p{2cm} p{4.5cm} p{7.7cm}} \toprule \textbf{Author} & \textbf{Topic discussed} & \textbf{Salient features}\\ \hline Ram \textit{et al}. \cite{Eternal-thing} & Self-sustaining IoT node &{ ~~\llap{$\bullet$}~~ Combined security, solar-energy harvesting, and aging detection in a unified system for self-sustaining SEHS } \\ \hline Luo \textit{et al}. \cite{Radio_power_practical_issues} & Practical issues in radio powered IoT & { ~~\llap{$\bullet$}~~ Proposed a model that very accurately describes the energy harvesting process and the power consumption of a sustainable IoT device }\\ \hline Li \textit{et al}. \cite{Lightweight_authentication_IoT} & Lightweight mutual authentication methods for IoT & { ~~\llap{$\bullet$}~~ A lightweight mutual authentication protocol is proposed based on a novel public key encryption method to make IoT more secure and efficient at the same time }\\ \hline Huang \textit{et al}. \cite{sustainable_m2m} & Sustainable machine to machine communication & { ~~\llap{$\bullet$}~~ An admission control model is used to split the priority of the requests made by the devices in order to make machine-to-machine communication more sustainable. }\\ \hline Klingensmith \textit{et al}. \cite{ADC_efficiency} & More efficient analog to digital converters & { ~~\llap{$\bullet$}~~ A unique data acquisition technique using a non-uniform FFT algorithm is proposed to increase power efficiency for sampling data from sensors }\\ \hline Guha \textit{et al}. \cite{Resistance_capacitance} & Resistance and capacitance-based battery health prediction & { \makecell[l]{~~\llap{$\bullet$}~~ A capacitance and resistance-based particle filtering\\model is combined, and a fused degradation model is\\proposed to predict battery degradation. \\ ~~\llap{$\bullet$}~~ Model is complex and challenging to implement in a\\resource-strained IoT device} }\\ \hline Lyu \textit{et al}. \cite{EIS} & SOH prediction based on Electrochemical Impedance Spectroscopy (EIS)& { \makecell[l]{~~\llap{$\bullet$}~~ It uses a neural network for RUL prediction\\ ~~\llap{$\bullet$}~~ The proposed method is fast and economical\\ ~~\llap{$\bullet$}~~ Neural network prediction error increases as cycles\\increase} }\\ \hline Liu \textit{et al}.
\cite{Electrochemical} & RUL prediction using electrochemical model of battery & { \makecell[l]{~~\llap{$\bullet$}~~ An electrochemical-based particle filter model is used\\to predict RUL\\ ~~\llap{$\bullet$}~~ State variables of the new PF algorithm are selected\\from the battery's health characteristics rather than\\meaningless fitting coefficients\\ ~~\llap{$\bullet$}~~ The model was too complex to implement in low-\\power IoT devices} } \\ \hline Mo \textit{et al}. \cite{Kalman_filter} & RUL prediction using Kalman filter and improved particle filter & { \makecell[l]{~~\llap{$\bullet$}~~ A new particle filter method is used, which combines\\Kalman filter and particle swarm optimization\\ ~~\llap{$\bullet$}~~ The combined model is not affected much by noise} } \\ \hline Hu \textit{et al}. \cite{HI_extraction} & RUL prediction based on GPR model & { \makecell[l]{~~\llap{$\bullet$}~~ Uses a unique health indicator extraction method to\\train the GPR model for accurate RUL prediction\\ ~~\llap{$\bullet$}~~ It uses 12 different parameters to train the model; the\\battery must fully charge every time} } \\ \hline \textbf{Current paper} & \textbf{Peak Extraction method} & { \makecell[l]{~~\llap{$\bullet$}~~ Uses a unique peak extraction method that represents\\all factors causing battery degradation to train the model\\ ~~\llap{$\bullet$}~~ Since only one variable is involved, it is efficient and\\suitable for low-powered and remote IoT applications} } \\ \bottomrule \end{tabular} \end{table*} A summary of the related works, with the methodologies used and a brief discussion, is given in Table \ref{tab:prior_work_TSUCS}. Although all these methods have their merits, none of them is designed with IoT devices in mind: they rely on complex mathematical models, require high computational power, offer no correction for local variations of individual battery cells, and suffer from the other shortcomings mentioned previously. To overcome these shortcomings, a novel peak extraction method has been proposed to estimate SOH and predict RUL efficiently. \section{Contributions of the Current Paper} \label{novelty} \subsection{Problem Addressed in the Current Paper} A smart city can improve the performance of resources and human life by the integration of intelligent sensors. These sensors need an uninterrupted power supply to monitor and transmit the required data continuously. The deployment of smart sensors in the wild often faces several problems, of which continuous power supply is one of the main issues. Most IoT sensor nodes use rechargeable Li-ion batteries as the source of power. However, the batteries degrade with time. Although some methods exist to predict the remaining battery life, they involve complex calculations, require more storage space, and are not self-sustaining. Therefore, such methods are not suitable for deployment in low-power IoT sensor nodes, which need to run for a long time without external interference. Hence, this paper addresses the challenge of battery health self-monitoring for IoT nodes in smart cities. \subsection{Solution Proposed in the Current Paper} In this work, the battery's Charging Capacity is considered as the Health Indicator of the battery. The State of Charge (SoC) is extracted from the Charging Capacity by a simple formula. As the battery degrades, the Charge Capacity and the SoC values decrease. The State of Health (SOH) is taken as the collection of the peak values of SoC from each charge-discharge cycle of the battery.
This process reduces the computational complexity of the method, making it suitable for IoT sensor nodes. \par The GPR model is used to predict the RUL of the battery. Since the GPR model can work well with a small training dataset, the storage requirement problem is also addressed. \subsection{Novelty of the Current Work} The main contributions of this paper are as follows: \begin{itemize} \item This paper proposes a straightforward method for battery health prediction, optimized for low-power IoT applications, using a novel technique of peak extraction. A small training dataset is used, which reduces the storage requirement. \item The proposed \textit{iThing} architecture enables the entire battery life prediction to be done on-board. Only the predicted health data is transmitted instead of the raw data, reducing the transmitter power requirement and making the architecture suitable for low-power IoT applications. \item The peak SoC of each charge-discharge cycle of the battery is extracted. This peak is then defined as the SOH of the battery cell; it closely tracks the change in internal factors of the battery, like resistance and impedance, while reducing the computational complexity of the process. \item The extracted SOH is then used to train a GPR model to predict the RUL; the predicted values closely match the actual values. The data from different battery cells are compared to establish the accuracy and robustness of the prediction model. \end{itemize} \section{The Proposed Novel Next-Generation Sensors for Sustainable IoT} \label{method} \subsection{iThing Architecture} \begin{figure}[b] \centering \includegraphics[width=0.8\textwidth]{Architecture} \captionsetup{justification=centering} \caption{Our Vision of iThing - Battery Health Self-monitoring for Sustainable IoT} \label{fig:architecture} \end{figure} A unique hardware component called \textit{iThing} has been introduced for battery health self-monitoring in sensor nodes for Sustainable IoT. The proposed architecture for \textit{iThing} is shown in Fig. \ref{fig:architecture}. The battery is responsible for the power supply to the various components of the sensor node, like the sensors, the controller, and the transmitter. The Battery Management System is responsible for recharging the battery in every cycle. The current sensor collects the time and current information and sends it to the Maximum Charge Point Tracking Unit, which is responsible for Charging Capacity calculation, SoC derivation, and SOH extraction. The extracted SOH data is then sent to the RUL Prediction Unit, where the GPR algorithm is used to predict the RUL. The entire calculation is done in the sensor node itself with minimal external interference, and the internal health data is then transmitted to the Cloud. This process flow is shown in Fig. \ref{fig:flowchart}. The readings of the external sensors measuring the environmental parameters are also collected and sent to the Cloud for further processing and calculation. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Flow_chart} \captionsetup{justification=centering} \caption{A simple flowchart representing the process flow for RUL model training and prediction} \label{fig:flowchart} \end{figure} In the following sub-sections, we will discuss the techniques used for the feature extraction method required to train the RUL prediction model.
\subsection{Extraction of SoC and SOH data from measured data} Since remotely operated IoT devices must operate at low power, the training model must not use too much computational power and memory. To reduce the computational complexity, the number of variables measured and the number of features used for model training are kept to two and one, respectively. The variables measured are the charge capacity of the battery and the total time elapsed since the first charge-discharge cycle. The charge capacity of the battery is not measured directly; it is calculated by integrating the current flowing into/out of the battery with respect to time. This can be done either with a physical integrator circuit or computationally; the latter, however, introduces approximation errors. \begin{equation} Charge\ capacity = \int I\,dt \end{equation} The measured charge capacity of the battery ($C_{curr}$) is divided by the battery's nominal charge capacity ($C_{nomi}$) in Ah. The obtained value is considered as the SoC of the battery, as given in equation \ref{SoC_eq}: \begin{equation} SoC = \frac{C_{curr}}{C_{nomi}} \label{SoC_eq} \end{equation} When a battery undergoes multiple charge-discharge cycles, the maximum measured Charge Capacity at the end of each charging cycle invariably decreases compared to the previous cycle as the battery degrades. Due to the decrease in Charge Capacity, the calculated SoC value also decreases proportionally; this collection of peak SoCs is defined here as the SOH of the battery, as shown in Fig. \ref{fig:SOH_from_SOC}. \begin{figure}[h] \centering \includegraphics[width = 0.9\textwidth]{SOH_from_SOC} \caption{SOH from SOC by peak extraction} \label{fig:SOH_from_SOC} \end{figure} \subsection{Battery Life Prediction Algorithm} \label{GPR} Before going into the RUL prediction process itself, in this section we briefly review the Gaussian Process Regression model used in this paper for training the RUL model. Several machine learning algorithms are popular; some of them include SVM, RVM, Na\"{i}ve-Bayes, and GPR. Among these models, GPR is more efficient and flexible than the others, and accurate enough for RUL prediction \cite{GPR}. Hence, we will be using the GPR method for RUL prediction. Below is the mathematics of Gaussian Process Regression.\\ Let us assume \textit{x} is the feature set (SOH) and \textit{y} is the target set (RUL) of the data. The regression model is defined as: \begin{equation} y = f(x) + \epsilon \end{equation} and \begin{equation} f(x) = x^Tw, \end{equation} where \(\epsilon\) represents the Gaussian noise of the model and $w$ represents the vector of weights of the regression model. The likelihood of the model can be defined as: \begin{equation} p(y|X,w) = \frac{1}{(2\pi\sigma^2_n)^{\frac{n}{2}}}\exp\left(-\frac{1}{2\sigma^2_n}|y-X^Tw|^2\right). \end{equation} Assuming the mean $\mu$ as 0, and the covariance matrix as \(\Sigma_p\) for the weights: \begin{equation} w \sim N(0,\Sigma_p). \end{equation} The posterior distribution is defined as: \begin{equation} p(w|y,X) = \frac{p(y|X,w)p(w)}{p(y|X)}. \end{equation} Keeping only the terms that depend on the weights, i.e., the prior and the likelihood, we obtain: \begin{equation} p(w|X,y) \sim N(\Bar{w},A^{-1}) \end{equation} where \(\Bar{w}\) is defined as \(\sigma^{-2}_n(\sigma^{-2}_nXX^T + \Sigma^{-1}_p)^{-1}Xy\).
This posterior distribution is Gaussian with mean \(\Bar{w} = \frac{1}{\sigma^2_n}A^{-1}Xy\) and covariance \(A^{-1}\), where \(A = \sigma^{-2}_nXX^T + \Sigma^{-1}_p\). Let us assume that \(x_*\) is the SOH input whose RUL output \(y_*\) is to be predicted; then the Gaussian posterior can be written as: \begin{equation} p(f_*|x_*,X,y) = N\left(\frac{1}{\sigma^2_n}x_*^TA^{-1}Xy,\; x_*^TA^{-1}x_*\right) \end{equation} Here, the kernel function, which is nothing but the covariance function, is defined as: \begin{equation} k(x,x') = \phi(x)^T\Sigma_p\phi(x') \end{equation} In this paper, we will be using the squared-exponential function, also known as the radial basis function, as the kernel function; it is defined as: \begin{equation} k(x,x') = \sigma^2\exp\left(-\frac{1}{2l^2}(x-x')^T(x-x')\right) \end{equation} where \(\sigma^2\) is the signal variance, and \(l\) represents the characteristic length scale.\\ The general pseudo-code of GPR is given in Algorithm I. \begin{algorithm} \caption{Battery Life Prediction Algorithm} \begin{algorithmic}[1] \Function{fit}{self, length\_scale=1, X, y} \State kernel\_ = rbf\_kernel(X, None, length\_scale) \State lower = True \State L = cholesky(kernel\_, lower = lower) \State self.alpha\_ = cho\_solve((L, lower), y) \State self.X\_train\_ = X \State self.L\_ = L \EndFunction \Function{predict}{self, length\_scale, X} \State K\_star = rbf\_kernel(X, self.X\_train\_, length\_scale) \State y\_mean = K\_star.dot(self.alpha\_) \State lower = True \State v = cho\_solve((self.L\_, lower), K\_star.T) \State y\_cov = rbf\_kernel(X, None, length\_scale) - K\_star.dot(v) \State \textbf{return} y\_mean, y\_cov \EndFunction \Function{rbf\_kernel}{X, Y = None, length\_scale = 1} \If{Y is NULL} \State dists = pdist(X / length\_scale, metric = \say{sqeuclidean}) \State K = exp(-.5 * dists) \State K = squareform(K) \State fill\_diagonal(K, 1) \Else \State dists = cdist(X / length\_scale, Y / length\_scale, metric = \say{sqeuclidean}) \State K = exp(-.5 * dists) \EndIf \State \Return{K} \EndFunction \end{algorithmic} \end{algorithm} In the code mentioned in Algorithm I, the function \say{pdist} calculates the pair-wise squared Euclidean distance of the given matrix. The function \say{cdist} calculates the squared Euclidean distance between the two given matrices. The function \say{fill\_diagonal} is used to fill the diagonal of the given matrix with the value provided. The function \say{squareform} converts the condensed distance vector to a square-form distance matrix. The function \say{cholesky} returns the Cholesky decomposition of the matrix. The function \say{cho\_solve} solves the linear equation Ax = B when the Cholesky decomposition of A is provided. This completes the description of the Gaussian process. \subsection{Training of the RUL model} Once the SOH is calculated, the collected data is used to train a GPR model to predict the RUL of the battery. Since this paper targets low-powered IoT devices, sampling is done at a low frequency to save computational costs. Also, since these batteries last a long time under light usage, a substantial amount of data is collected even at low sampling rates, making the ML model accurate at prediction. Before beginning the model training, the practitioner fixes an SOH value at which to stop the model training. Hence, until the calculated SOH reaches the fixed setpoint, the model is continuously trained with the acquired data. Once the SOH setpoint is crossed, the model training is halted, and the prediction of RUL for the battery starts.
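To make the on-board pipeline concrete, the following Python sketch combines the peak extraction of the previous subsection with a single-feature GPR fit; it relies on scikit-learn for brevity, and all variable names, units, and the charging-phase criterion are assumptions.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def extract_soh(t, current, cycle_id, c_nominal):
    # For each charge-discharge cycle, integrate the charging current
    # over time (t in hours, current in A) to obtain the charge
    # capacity in Ah, and divide by the nominal capacity: the
    # resulting per-cycle peak SoC is the SOH sample for that cycle.
    soh = []
    for c in np.unique(cycle_id):
        m = (cycle_id == c) & (current > 0)   # charging phase only
        soh.append(np.trapz(current[m], t[m]) / c_nominal)
    return np.asarray(soh)

# Single-feature GPR (SOH -> RUL) with an RBF kernel, trained until
# the SOH setpoint is crossed; `soh_train`, `rul_train`, `soh_test`
# are hypothetical arrays.
# gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
# gpr.fit(soh_train.reshape(-1, 1), rul_train)
# rul_mean, rul_std = gpr.predict(soh_test.reshape(-1, 1),
#                                 return_std=True)
\end{verbatim}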
\subsection{Correlation analysis} The SOH extracted and the RUL predicted from this model are dependent on measured values. Still, to quantify their dependency, we perform a correlation analysis of the two variables (SOH and RUL) with the measured variable (Charge Capacity). This is necessary for analyzing the prediction performance of the GPR model. There are different correlation analysis methods, like Spearman's correlation coefficient, Kendall's correlation coefficient, and Pearson's product-moment correlation coefficient. Out of the aforementioned methods, only Pearson's correlation coefficient \cite{correlationmath} accepts non-ordinal data. However, Pearson's coefficient calculation assumes a linear relationship between the variables, which is not the case in this paper. Hence, the data is divided, and a piecewise analysis is performed. Let us assume \(x_i\) is the \(i^{th}\) feature of the feature set (SOH in this paper) with size n, and \(y_i\) is the \(i^{th}\) target of the target set (RUL in this paper) with size n. Pearson's correlation coefficient is then calculated using the following formula \cite{correlationmath}: \begin{equation} r = \frac{\Sigma_{i=1}^n(x_i-\Bar{x})(y_i-\Bar{y})}{\sqrt{\Sigma_{i=1}^n(x_i-\Bar{x})^2\Sigma_{i=1}^n(y_i-\Bar{y})^2}}, \end{equation} where \(\Bar{x}\) is \begin{equation} \Bar{x} = \frac{1}{n}\Sigma_{i=1}^nx_i \end{equation} and \(\Bar{y}\) is \begin{equation} \Bar{y} = \frac{1}{n}\Sigma_{i=1}^ny_i \end{equation} \section{Experimental Study} \label{experiment} \subsection{Dataset description} In this section, we will discuss the dataset used for the battery aging test. This dataset is drawn from experiments performed by Severson \textit{et al.}\cite{dataset} and is as follows: Lithium Iron Phosphate (LFP)/Graphite cells manufactured by A123 Systems (APR18650M1A) are used. These batteries were cycled in horizontal cylindrical fixtures on a 48-channel Arbin LBT potentiostat in a forced convection temperature chamber whose temperature was set at 30 \degree C. The battery specifications are given in Table \ref{tab:battery_specifications}. \begin{table}[h] \centering \caption{Specifications of the battery} \label{tab:battery_specifications} \begin{tabular}{l|l} \toprule \textbf{Parameter} & \textbf{Value}\\ \hline Nominal Capacity & 1.1 Ah\\ Nominal Voltage & 3.3 V\\ Battery Cathode & \(\text{LiFePO}_4\)\\ Battery Anode & Graphite\\ \bottomrule \end{tabular} \end{table} The cells have been charged under a two-step fast-charging condition. The battery is initially charged in Constant Current (CC) mode at a 5C rate until the cell reaches 67\% SoC; after that, it is charged in CC mode at 4C until the cell reaches 80\% SoC. The total charging time from 0\% to 80\% SoC is thereby fixed at 10 minutes. After that, the battery is charged at 1C in Constant Current-Constant Voltage (CC-CV) mode. The upper and lower cutoff potentials are 3.6 V and 2.0 V respectively, which are consistent with the manufacturer's specifications. These cutoff potentials are fixed for all current steps, including fast charging. After some cycling, the cells may hit the upper cutoff potential during fast charging, leading to significant constant-voltage charging. The battery is discharged at 4C in CC mode.
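As a simple consistency check, the stated 10-minute fast-charge time follows directly from the C-rates; the sketch below is plain arithmetic with no additional assumptions.
\begin{verbatim}
# At a C-rate of n, charging a fraction s of the capacity takes
# (s / n) hours.
t1 = (0.67 - 0.00) / 5.0 * 60.0   # min at 5C: ~8.0
t2 = (0.80 - 0.67) / 4.0 * 60.0   # min at 4C: ~2.0
print(t1 + t2)                    # ~10 minutes, as stated
\end{verbatim}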
The current and voltage graphs for one charge-discharge cycle are shown in Fig. \ref{fig:Dataset_graph_image}. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{Dataset_graph_image} \captionsetup{justification=centering} \caption{The (a) Current versus time, and (b) Voltage versus time for one charge-discharge cycle for the dataset used} \label{fig:Dataset_graph_image} \end{figure} The computing platform used for the model training and correlation analysis is configured with Windows 10, 16 GB of RAM, and an AMD Ryzen 7 3750H CPU @ 2.3 GHz. The training and testing of the model were done using the scikit-learn and Pandas libraries in Python. The correlation analysis was done in R with the nlcor library. Data extraction and handling were done in Matlab. \subsection{Experimental Setup} We have also extracted data for one cycle from a different battery, whose experimental setup, and the voltage and current versus time plots, are depicted in Fig. \ref{fig:exp_dataset_graph}. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{Experiment_setup} \captionsetup{justification=centering} \caption{The (a) Experimental Setup, (b) Current versus time, and (c) Voltage versus time for one charge-discharge cycle for the dataset used} \label{fig:exp_dataset_graph} \end{figure} The predicted RUL of this battery using the novel peak extraction model comes out to be 1141 cycles. The specifications of the battery used for this experiment are mentioned in Table \ref{tab:battery_specifications_exp}. \begin{table}[h] \centering \caption{Specifications of the battery used in the experimental setup} \label{tab:battery_specifications_exp} \begin{tabular}{l|l} \toprule \textbf{Parameter} & \textbf{Value}\\ \hline Nominal Capacity & 2.2 Ah\\ Nominal Voltage & 3.7 V\\ Model Number & ICR 18650\\ \bottomrule \end{tabular} \end{table} \section{Results and Discussion} \label{results} \subsection{RUL prediction} The battery cell RUL prediction model proposed in this paper predicts the remaining cycles of the battery life. To check and verify the accuracy and prediction performance of the trained model, three different error measures are used in this paper: the Absolute Error (AE), the Relative Error (RE), and the Root Mean Square Error (RMSE). \begin{equation} AE = \vert RUL_{actual} - RUL_{predicted}\vert \end{equation} \begin{equation} RE = \frac{\vert RUL_{actual} - RUL_{predicted}\vert}{RUL_{actual}} \times 100 \end{equation} \begin{equation} RMSE = \sqrt{\frac{\sum_{i=1}^n (RUL_{actual_i} - RUL_{predicted_i})^2}{n}} \end{equation} In this paper, we focus more on the computational efficiency than on the accuracy of the prediction model. The RUL prediction results are shown in Fig. \ref{fig:RULact_vs_predicted}. The exact values of the three errors for four different batteries are compared in Table \ref{tab:battery_compare}. \begin{table}[h] \centering \caption{Comparison of results from different cells used in the dataset} \label{tab:battery_compare} \begin{tabular}{l|l|l|l} \toprule \textbf{Battery barcode} & \textbf{Maximum AE} & \textbf{Maximum RE} & \textbf{RMSE} \\ \hline el150800737313 & 22 cycles& 14.24\% & 9 cycles\\ el150800737280 & 24 cycles& 15.04\% & 8 cycles\\ el150800737378 & 28 cycles& 21.20\% & 12 cycles\\ el150800737274 & 25 cycles& 20.93\% & 7 cycles\\ \bottomrule \end{tabular} \end{table}
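A minimal sketch of these error measures follows; the array names are assumptions.
\begin{verbatim}
import numpy as np

def rul_errors(rul_actual, rul_pred):
    # Maximum Absolute Error, maximum Relative Error (%), and RMSE,
    # matching the three measures defined above.
    ae = np.abs(rul_actual - rul_pred)
    re = ae / rul_actual * 100.0
    rmse = np.sqrt(np.mean((rul_actual - rul_pred) ** 2))
    return ae.max(), re.max(), rmse
\end{verbatim}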
To verify its effectiveness, the proposed method is tested at different starting prediction points. As shown in Fig. \ref{fig:err550_600}, the starting prediction points are set as the 550$^{th}$ cycle and the 600$^{th}$ cycle. The results from training the model are given in Table \ref{tab:battery_compare550} and Table \ref{tab:battery_compare600}, respectively. These results show that the proposed method can predict the RUL during the entire battery life, and that the accuracy increases if the prediction starts at a later point. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{RUL_PH_error} \captionsetup{justification=centering} \caption{These graphs show (a) the closeness of the actual RUL to the predicted RUL along with the PH margin, and (b) the Absolute Error in predicted RUL} \label{fig:RULact_vs_predicted} \end{figure} \begin{table}[h] \centering \caption{Comparison of results for four batteries with model trained for first 550 cycles} \label{tab:battery_compare550} \begin{tabular}{c|c|>{\centering\arraybackslash}m{0.15\linewidth}|c|c} \toprule \textbf{Battery ID} & \textbf{Actual RUL} & \textbf{Predicted RUL} & \textbf{AE} & \textbf{RE} \\ \hline el150800737313& 157 cycles& 175 cycles& 18 cycles & 11.69\% \\ el150800737280& 144 cycles& 162 cycles& 18 cycles & 12.68\% \\ el150800737378& 223 cycles& 198 cycles& 25 cycles & 11.25\% \\ el150800737274& 155 cycles& 161 cycles& 6 cycles & 4.04\% \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \centering \caption{Comparison of results for four batteries with model trained for first 600 cycles} \label{tab:battery_compare600} \begin{tabular}{c|c|>{\centering\arraybackslash}m{0.15\linewidth}|c|c} \toprule \textbf{Battery ID} & \textbf{Actual RUL} & \textbf{Predicted RUL} & \textbf{AE} & \textbf{RE} \\ \hline el150800737313& 107 cycles& 117 cycles& 10 cycles & 8.99\% \\ el150800737280& 94 cycles& 103 cycles& 9 cycles & 9.37\% \\ el150800737378& 173 cycles& 159 cycles& 14 cycles & 8.08\% \\ el150800737274& 105 cycles& 108 cycles& 3 cycles & 2.66\% \\ \bottomrule \end{tabular} \end{table} The Prognostic Horizon (PH) is also calculated to determine how fast the model\rq{s} predicted RUL reaches the correct RUL within the margin of error \cite{gou2020state}. \begin{equation} PH = Cycle_{EOL} - Cycle_{i} \end{equation} where $Cycle_i$ is the cycle at which \(RUL_{actual}*(1 - \alpha) \leq RUL_{predicted} \leq RUL_{actual}*(1 + \alpha) \) and \(Cycle_{EOL}\) is the cycle at which the battery reaches its End of Life. Here \(\alpha\) is an error value between 0 and 1. As seen in Fig. \ref{fig:RULact_vs_predicted}, the proposed method can predict the RUL consistently within the desired accuracy cone specified by \(\alpha\) (where \(\alpha\) = 0.1). The PH is used to determine how fast the RUL prediction model converges; the higher the PH value, the faster the prediction model reaches the required error specifications. Hence, the higher the PH value, the better the prediction model. The PH for battery \say{el150800737280} comes out to be 641 cycles, which means the model reaches the acceptable error quickly. This does not mean the error constantly remains under the required specification, but the prediction is accurate nonetheless. These results show that the proposed method for RUL prediction is robust, straightforward, and accurate, so that it can be used for battery health monitoring in the \textit{iThing} sensor node. This enables the \textit{iThing} to operate independently for a long time and ensures that no critical monitoring data is lost due to the death of the battery.
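A sketch of the PH computation under the above definition (taking the first entry into the accuracy cone; names are assumptions):
\begin{verbatim}
import numpy as np

def prognostic_horizon(cycles, rul_actual, rul_pred, alpha=0.1):
    # PH = Cycle_EOL - Cycle_i, where Cycle_i is the first cycle at
    # which the prediction enters the +/- alpha cone around the
    # actual RUL.
    inside = (rul_pred >= rul_actual * (1.0 - alpha)) & \
             (rul_pred <= rul_actual * (1.0 + alpha))
    if not inside.any():
        return 0
    cycle_i = cycles[np.argmax(inside)]   # first True index
    cycle_eol = cycles[-1]                # End of Life cycle
    return cycle_eol - cycle_i
\end{verbatim}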
\begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{RUL_startpoints} \captionsetup{justification=centering} \caption{The RUL prediction result of batteries starting from (a) 550$^{th}$ cycle, and (b) 600$^{th}$ cycle} \label{fig:err550_600} \end{figure} \subsection{Computational efficiency} Since extracting peaks greatly reduces the training data, only 0.14\% of the collected data was used in training the model. The GPR model cannot be trained with all the collected data, as the measured charge capacity is an oscillating data set; hence, there is no point in storing all the collected data except for testing or diagnostic purposes. The SOH and RUL data extracted and stored for training the RUL prediction model amount to only 15 KB, which can be discarded after the model's training if necessary, saving even more space. The time taken for the GPR model to train is less than 2 seconds; this estimation was done on an off-the-shelf computing device with a CPU clock speed of 900 MHz and a RAM capacity of 1 GB. \subsection{Correlation between RUL and SOH} The correlation between RUL and SOH comes out to be 0.97, which means the calculated SOH is very closely related to the RUL; hence, the SOH closely represents the battery degradation, as seen in Fig. \ref{fig:SOH_vs_time}. This analysis was done in R using the \say{nlcor} library. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{SOH_graphs} \captionsetup{justification=centering} \caption{These graphs represent the decrease in SOH with (a) time and (b) number of cycles, which reflects the decrease in the maximum charge capacity of the battery due to battery degradation} \label{fig:SOH_vs_time} \end{figure} Hence, the error analysis results show that it is feasible to implement the model in IoT devices, including devices with low computational power and memory, with acceptable levels of accuracy for general applications. This is also reflected in Table \ref{tab:result_compare}, where the novel peak extraction technique is compared with other methods. \begin{table}[h] \centering \caption{Comparison of results of different techniques} \label{tab:result_compare} \begin{tabular}{llll} \toprule \textbf{Method} & \textbf{Input Parameters} & \textbf{Output Parameters} & \textbf{Error} \\ \hline Resistance \& Capacity-based models \cite{Resistance_capacitance}, 2018 & 2 & 1 & Average AE of 3 cycles \\ EIS \cite{impedeance_based1}, 2019 & 5 & 1 & AE within 10\% of 240 cycles \\ Electrochemical model \cite{Electrochemical}, 2020 & 5 & 1 & Maximum AE of 20 cycles \\ Kalman Filter \cite{kalman_fliter}, 2016 & 2 & 1 & Maximum AE of 7 cycles \\ \textbf{Peak extraction} (proposed method) & 1 & 1 & RMSE of 8 cycles \\ \bottomrule \end{tabular} \end{table} \subsection{Sustainability of the model} As seen from the efficiency results, this RUL prediction model can be used for low-powered applications; this allows remote and/or low-powered IoT nodes to monitor their battery health locally and to alert the user only for important actions like battery replacement. Since the RUL prediction is done locally, there is no need to transmit large amounts of training and testing data to the cloud for RUL prediction, which saves energy costs. The device is integrated with clean and environmentally friendly power sources, like solar cells, to recharge the battery and power the device; this allows the device to operate for a prolonged period of time without external interference.
These properties are compared with other works in Table \ref{tab:result_compare_2}. \begin{table}[h] \centering \caption{Comparison of different IoT nodes} \label{tab:result_compare_2} \begin{tabular}{>{\centering\arraybackslash}m{0.2\linewidth}|>{\centering\arraybackslash}m{0.25\linewidth}|>{\centering\arraybackslash}m{0.25\linewidth}|>{\centering\arraybackslash}m{0.2\linewidth}} \toprule \textbf{Concept that uses IoT} & \textbf{Self Sustaining} & \textbf{Fully integrated} & \textbf{Self Monitoring} \\ \hline IoT based energy saving devices \cite{IoT_energy_saving} & The nodes are powered by an external power source & The data is sent from the node to a master controller and hence no data is processed locally & The node only monitors the power use of the device and not the node itself \\ Monitoring pollution using IoT nodes \cite{IoT_pollution_monitor} & The node is battery powered and the battery will have to be replaced & The node has a microcontroller which is used to perform calculations and classify the air pollution levels & The sensor node only monitors itself passively by approximating input data to avoid sensor errors \\ Water management using WSN \cite{WSN_water_management} & The sensor nodes are battery powered and have to be replaced from time to time & The sensor node does not process data locally to control the system & The sensor node does not monitor its own health \\ Self sustaining IoT based greenhouse \cite{IoT_agriculture} & Greenhouse and IoT device are powered by renewable energy & The device is integrated, including its solar power source & The device only monitors the greenhouse; it does not monitor itself \\ \textbf{Peak extraction} (our proposed method) & \textbf{The device is integrated with green sources for power and can last long without external interference} & \textbf{The device performs all required calculations and connects with the cloud only for user intervention} & \textbf{The device can monitor its own health and inform the user in case of battery replacement} \\ \bottomrule \end{tabular} \end{table} \section{Conclusion and Future Scope} \label{conclusion} It is highly important for a battery health prediction system to be able to predict the RUL of the battery accurately and efficiently without consuming too much of the computational capability of the IoT device; this applies especially to low-powered and remotely operated IoT devices. A novel peak extraction method is proposed in this paper to counter these problems. The main conclusions are summarized as follows: (1) A novel and simple peak extraction method is used to estimate the SOH efficiently. (2) A single-feature GPR-based prediction model enables efficient RUL prediction with a low computational burden; the Pearson's correlation coefficient calculation also confirms the correlation between the extracted SOH and the predicted RUL. (3) The accuracy of the model is estimated with the absolute error and the RMSE; the error values come out to be 22 cycles and 8 cycles, respectively. With this, we confirm that the proposed model is accurate and efficient enough to be implemented in IoT devices and can benefit the user by enabling the timely and planned replacement of the battery cell to avoid any crucial failures. Although this paper provides a novel method for battery health estimation, there are still some areas of improvement for this model.
For example, battery temperature variations were not considered, and only one type of battery (\(\text{LiFePO}_4\)) was used to train and test the peak extraction method; both aspects can be considered for future work. \bibliographystyle{unsrtnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction \& Motivation} \label{sec:Intro} Driven by the availability of massive amounts of data, image classification today achieves accuracy of over 90\% with thousands of classes \cite{yalniz}. To a large extent, the performance of modern deep learning models is driven by the volume of data and the availability of accurate labels for training. Popular datasets such as ImageNet \cite{imagenet}, COCO \cite{coco}, and MNIST \cite{mnist} serve as benchmarks for assessing machine learning model performance. These benchmarks were generated by visual inspection of images to manually craft labels. Extension to new labels or classes requires additional effort: the same procedure---as previously executed for the collection of existing labels---needs application to the new, unlabeled set of data. While image classification achieved tremendous success for photographs, a straightforward adoption of established machine learning techniques for satellite, drone, and remote sensing images has had limited success \cite{penatti,remote-labels,maggiori,wang1}. In general, geospatial image classification \cite{dlearth} lags behind machine learning precision from social media \cite{mahajan2018}---due to more sparsely labeled data and due to increased heterogeneity in data. Remote sensing signals often demand far more sophisticated processing compared with conventional photography. For instance, Light Detection and Ranging (LiDAR) \cite{LiDAR}---acquired as a 3D point cloud---requires processing and conversion of the data into a commonly usable machine learning 2D data input format, such as a multi-dimensional array of numbers \cite{paszke2017automatic,tensorflow2015-whitepaper}. Geospatial data also come with varying spatial resolution acquired across multiple seasons. Real-time integration of satellite data into multidimensional models like weather or climate requires automation of label generation. For example, identification of extreme weather events such as hurricanes, earthquakes, and wildfires demands specific training data, with the need to generate labels on the fly as those events are forming. Here we propose \textit{AutoGeoLabel}, a framework to create labels for geospatial imagery. It extracts features from raw and unlabeled data using simple statistical methods. The approach allows class labels to be identified from a combination of rasterized layers. We demonstrate the utility of such an approach taking LiDAR measurements to exemplify how simple combinations of statistical features can help to automatically classify a geospatial scene. As a result, such rapidly generated labels may get exploited to identify land cover as (noisy) input for machine learning models. The framework is developed to label large volumes of data, and to create labels in near real time for any image type. Moreover, the method is not limited to LiDAR, but may get adopted for any other high-quality data available for the area of interest such as hyper- and multi-spectral imaging, synthetic aperture radar (SAR), or microwave radiometry. \section{Previous Work \& Applications} \label{sec:PrevWorkAndApp} Labeled geospatial data for machine learning benchmarks is hard to come by, and what exists is sensor-platform specific. Commonly, land classification labels are rooted in a combination of pixel- and object-based classifications \cite{corine}. Generated land cover data is often corrected through a data quality process where manual reclassification is carried out.
Additional maps and land use datasets may get exploited in the process. Given the (human) effort involved in generating land cover data, such data are limited in spatial coverage and temporal updates. Typical refresh rates are once in about 3--5 years \cite{corine, mrlc}. Standard classified data like \textit{CORINE Land Cover} \cite{corine} and \textit{Multi-Resolution Land Characteristics} (MRLC) \cite{mrlc} products have a spatial resolution of tens of meters, limited spatial coverage, and a number of classes restricted to the most common land covers---like forest, water bodies, and agricultural lands. Standard geospatial benchmark datasets like \textit{Spacenet} \cite{spaenet6}, \textit{BigEarthNet} \cite{BigEarthNetAL}, and \textit{DeepSat} \cite{deepsat} have an even more limited number of labeled classes. In contrast, \textit{OpenStreetMap} (OSM) is one of the most complete, crowd-sourced community efforts with hundreds of labels collected by volunteers \cite{openstreetmap}. OSM labels are represented in vector shape format (points, lines, polygons) from which rasterized maps can get generated where roads, houses, buildings, and vegetation-covered areas are color-coded. It has been demonstrated that such OSM-based land classification labels may train deep neural network models for semantic segmentation of high-resolution, multi-spectral imagery \cite{albrecht}. In particular, uneven OSM-label completeness across different geographies added noise to the segmentation task. Technically, semantic segmentation was handled by unsupervised image-style translation employing a modified CycleGAN \cite{zhang2020map} architecture. Concerning remote sensing image classification in general, standard machine learning tools previously applied include e.g.\ \textit{Random Forest} \cite{randomf}, \textit{Support Vector Machine} \cite{svm}, \textit{XGBoost} \cite{sclassification}, and a plethora of deep learning models \cite{zhu2017deep}. Geospatial data is among the most prevalent data in Energy \& Utility, Oil \& Gas, Forestry, Agriculture, Transportation, Navigation, and Disaster Emergency Response \cite{bolstadgis}. Many of the above industries do have massive amounts of data collected through conventional observation, sensing, and various other measurement methods. However, the absence of labels impedes most automation and predictive methods based on machine learning. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{AutoGeoLabel.pdf} \caption{\label{fig:AutoGeoLabelFlowChart} Schematic flow chart of the \textit{AutoGeoLabel} framework to generate labeled data for machine learning. Motivated by the shortage of geospatial labels (Sec.\ \ref{sec:Intro}) where at the same time there exist vast amounts and a plurality of unstructured geospatial information (Sec.\ \ref{sec:BigGeoDataPlatform}), \textit{AutoGeoLabel} automatically generates labels for weakly supervised learning. } \end{figure} For many industries, the large number of features of interest requires dedicated efforts to create labeled datasets based on domain expertise. \textit{AutoGeoLabel} may address such situations where labels and images are generated on the fly and computational resources are limited. \textit{AutoGeoLabel} bears potential for lightweight computational techniques ready for deployment on e.g.\ \textit{Internet-of-Things} (IoT) \cite{madakam2015internet} edge devices \cite{chen2019deep}.
\section{Datasets and Geospatial Platform} \label{sec:BigGeoDataPlatform} \subsection{LiDAR Data} \label{sec:LiDAR} LiDAR data generate a dense point cloud from laser pulses bouncing back from the Earth's surface. Massive amounts of LiDAR data are acquired mainly to map topography. In addition to elevation mapping, LiDAR carries rich embedded information on land cover such as vegetation, roads, buildings, etc. However, extracting such features may require reclassification of the point cloud, which is prohibitively expensive. \textit{AutoGeoLabel} can enable rapid label generation for existing petabytes of data \cite{opentopography}. For our test case, point cloud data was collected in 2017, with a density of approx.\ 10 points per square meter \cite{nycLiDAR}. A small subset of the point cloud data was classified into broad classes associated with water, ground, and bridges. However, the majority of points fell into the unclassified category. The data volume is in excess of $1$ terabyte, and the data is made available as about $1900$ \texttt{LAS} 3D point cloud files \cite{LAS2019}. \subsection{Land Cover Data} \label{sec:LandCoverData} The LiDAR data of Sec.\ \ref{sec:LiDAR} was further processed by NYC in combination with high-resolution, multi-spectral imagery to generate an $8$-class land cover dataset at $0.5$ meter resolution \cite{nyc-landcover}. The $8$ classes read: \textit{Tree Canopy}, \textit{Grass/Shrub}, \textit{Bare Soil}, \textit{Water}, \textit{Buildings}, \textit{Roads}, \textit{Other Impervious}, and \textit{Railroads}. Each bears a classification accuracy above 90\%. The data was also adjusted based on previous city surveys on roads, rails, and building footprints to further improve classification accuracy. We employ the \textit{Land Cover} data as a ground truth validation set for \textit{AutoGeoLabel}-generated classes. Such land cover classes, as obtained for NYC, are not readily available for most parts of the world. In many cases classification from the \textit{OpenStreetMap} (OSM) project \cite{openstreetmap} is the best at hand. \subsection{Geospatial Platforms} Open-source geospatial data volume exceeds petabytes, making it comparable in volume to data generated by social media \cite{social}. Multiple geospatial platforms exist where images are stored either as objects \cite{google-ee,aws,planetary} or as indexed pixels ready for computation \cite{whitby2017geowave,pairs}. IBM PAIRS indexes all pixels, exploiting a set of predefined spatial grids to align data layers. This approach is ideal for searching for similarities across large geographies. Also, the framework proves efficient for applying similar processing across multiple datasets. Current efforts to enable widespread analytics on geospatial data focus on automating machine learning \cite{autogeo,auto-geos} to lower the user's effort in creating training data, training models, and aggregating generated results. If label data is sparse or missing, \textit{AutoGeoLabel} can quickly generate the required classes. \section{Methods} \label{sec:methods} \subsection{Feature Extraction from Point Cloud Data} \label{sec:FeatureExtractPointCloud} Open-source earth observation data libraries such as the \textit{Geospatial Data Abstraction Library} (GDAL) \cite{gdal} and the \textit{Point Data Abstraction Library} (PDAL) \cite{pdal} enable handling and manipulation of a plurality of geospatial data formats.
Python application programming interfaces (API) wrapping these libraries provide data scientists low-barrier access to G/PDAL functionalities. The libraries also offer an easy way to filter and sort information based on a set of statistical attributes. \textit{AutoGeoLabel} constructs processing pipelines built on GDAL, PDAL, and Python's NumPy module \cite{2020NumPy-Array}: Raw point cloud data is rasterized as detailed below with the aid of PDAL. Those rasters get reprojected and aligned into a nested grid utilizing GDAL. Once curated with IBM PAIRS, fusion with other data layers such as multi-spectral imagery allows queries to pull data cubes as NumPy arrays through the \textit{PAIRS Python API wrapper} (PAW) \cite{PAW2019} for machine\slash deep learning consumption (cf.\ e.g.\ \textit{PyTorch tensors} \cite{paszke2017automatic}). \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{LiDAR_layers.pdf} \caption{\label{fig:RasterizedStatFeaturesSample} A--M: Statistical LiDAR features, cf.\ Tab.\ \ref{tab:PointCloudStats}, as extracted from the original point cloud for a sample area in NYC. Each layer is stored as individual raster grid with spatial resolution of $\sim0.5$ meters. N: depicts OSM labels of the same area as reference; with corresponding orthophoto (O) to the right.} \end{figure} \begin{table}[tb] \centering \caption{Point Cloud Statistics} \label{tab:PointCloudStats} \begin{tabular}{l c r} \hline \bf attribute & \bf statistics & \bf Fig.\ \ref{fig:RasterizedStatFeaturesSample} index \\ \hline & minimum $r_-$ & A \\ reflectance $r$ & maximum $r_+$ & B \\ & mean $\overline{r}$ & C \\ & standard deviation $r_\Delta$ & D \\ \hline & minimum $c_-$& E \\ & maximum $c_+$& F \\ count $c$ & mean $\overline{c}$& G \\ & standard deviation $c_\Delta$& H \\ & sum $\sum$& I \\ \hline & minimum $e_-$& J \\ elevation $e$ & maximum $e_+$& K \\ & mean $\overline{e}$& L \\ & standard deviation $e_\Delta$& M \\ \hline \end{tabular} \end{table} For LiDAR data, attributes such as \textit{Intensity}, \textit{Number of Returns}, and \textit{Count} are recorded to quantify the laser light reflected by the probing pulse sent from the airborne LiDAR device. While the point cloud may get automatically classified by the data provider, this typically requires specialized, commercial, and compute-intensive software. Rather than using proprietary data processing, in general, \textit{AutoGeoLabel} employs simple statistical features extracted from high spatial resolution, high quality data. As a demonstration, here, we present the approach for LiDAR point clouds. The spatial distribution of the point cloud is determined given a local neighborhood parametrized by the user. For the case presented, an aggregation area of approx.\ $2.5$ square meters is defined. It hosts roughly $25$ data points for the evaluation of the statistical features listed in Tab.\ \ref{tab:PointCloudStats}. As the aggregation area slides across the coverage area at a grid size of $0.3$ meters, it determines the features from all data points falling within that area. In this processing step, the three-dimensional point cloud data is converted to two-dimensional images where each statistical feature is stored as a separate, curated, and indexed raster layer in IBM PAIRS. An example of the extracted features for an area in NYC is presented in Fig.\ \ref{fig:RasterizedStatFeaturesSample}.
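As an illustration of the rasterization step, the following is a minimal sketch assuming the \texttt{pdal} Python bindings; the file names, grid resolution, and the choice of two writers are illustrative assumptions rather than the production \textit{AutoGeoLabel} pipeline:
\begin{verbatim}
# Hedged sketch: per-cell min/max/mean/stdev rasters of elevation (Z)
# and reflectance (Intensity) from one LAS tile via PDAL's GDAL writer.
import json
import pdal

spec = {"pipeline": [
    {"type": "readers.las", "filename": "nyc_tile.las"},  # hypothetical tile
    {"type": "writers.gdal", "filename": "z_stats.tif",
     "dimension": "Z", "output_type": "min,max,mean,stdev",
     "resolution": 0.5},
    {"type": "writers.gdal", "filename": "intensity_stats.tif",
     "dimension": "Intensity", "output_type": "min,max,mean,stdev",
     "resolution": 0.5},
]}

pipeline = pdal.Pipeline(json.dumps(spec))
pipeline.execute()  # each writer emits one multi-band GeoTIFF raster
\end{verbatim}
Point counts per cell (attribute $c$ in Tab.\ \ref{tab:PointCloudStats}) are obtained analogously via e.g.\ a \texttt{count} output type.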
While the use case presented here exploits simple statistical features like the minimum, maximum, mean, and standard deviation, higher moments of the distribution like kurtosis, skewness, etc.\ may get calculated to add further (non-linear) statistical raster layers serving as additional features. As the point cloud data is converted to $13$ distinct raster layers where each one stores a statistical feature, visual inspection indicates that vegetation, buildings, roads, and bare land are the dominant land cover features comprising most of the information in the data layers. \subsection{Rule-Based Labeling from LiDAR surveys} \label{sec:RuleBasedLabeling} In order to distill classification information from the rasterized LiDAR statistics generated in the previous Sec.\ \ref{sec:FeatureExtractPointCloud}, we exploit rules drawn from the physical characteristics of the classification objects when probed by the LiDAR laser pulse: \begin{itemize} \item \textit{buildings}: The firm surface of rooftops is most likely to (partially) reflect the laser pulse by a single return. Moreover, compared to the overall size of the building, flat roofs bear little variation in elevation. Thus, pseudo-RGB imagery with channels encoding minimum $e_-$, maximum $e_+\approx e_-$, and standard deviation $e_\Delta\approx0$ of elevation measurements will most prominently discriminate buildings in top-down airborne LiDAR survey data. \item \textit{vegetation}: In contrast to rooftops, vegetation allows for strong variation in elevation measurements from LiDAR laser pulses, $e_\Delta\gg0$: As the laser penetrates a tree's canopy it might get reflected multiple times by branches and foliage at various elevation levels. Moreover, in contrast to a single return $c_-=c_+=1,~c_\Delta=0$ with rooftops, multiple pulses will bounce back to the detector, i.e.\ $c_+\gg1$ and $c_\Delta\gg0$. \item \textit{roads}: Given global terrain slopes have been leveled to zero for elevation statistics\footnote{as easily performed in preprocessing of LiDAR point clouds, for an application cf.\ e.g.\ \cite{albrecht2019learning}; alternatively, fusion with existing elevation models is an option}, $e_-\approx0$. In addition, lane markers typically contain reflective particles with mirror like properties when illuminated by a laser, $\overline{r}\gg0$. In contrast, the black surface of asphalt absorbs a major portion of the laser pulse, $r_-\approx0$. \item \textit{water body}: We mention the option of no-data areas in rasterized LiDAR data statistics. While the projection of the irregular point cloud onto Earth's surface may yield areas of varying point cloud density, larger patches of void area typically stem from full absorption of the laser pulse. Water is a prominent land class where close-to-zero laser signal is returned, $c_+=0$. \end{itemize} Generating pseudo-RGB images $(R,G,B)$, Tab.\ \ref{tab:LabelRules} summarizes the rules applied to infer classification maps for buildings, roads, and vegetation, with $\langle\cdot\rangle$ denoting spatial averaging over a scene\slash tile. $r_{\max}$ and $e_{\max}$ denote maximum reflectance and maximum (local, global terrain removed) elevation, respectively. While the thresholding rules for buildings and vegetation are intrinsically defined by averaging over a given (pseudo-)image patch, the rules for road labeling are statically defined on laser reflectance values. Typically, building height and vegetation types significantly vary from one geo-location to another. However, in a crude approximation, we consider constant laser reflectance of e.g.\ road lane markers and road surface ($r_->.1r_{\max}\land\overline{r}<.6r_{\max}$) on local elevation ground zero ($e_-<.1e_{\max}$) for various geo-locations, as sketched in code below.
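In code, the rules of Tab.\ \ref{tab:LabelRules} reduce to boolean masks over the aligned raster layers; the following is a minimal NumPy sketch (variable names mirror the notation above; the overwrite order of the classes is an illustrative choice, not prescribed by the paper):
\begin{verbatim}
# Hedged sketch of the labeling rules; inputs are aligned 2-D NumPy
# arrays of the per-cell LiDAR statistics of Tab. 1.
import numpy as np

def rule_based_labels(e_min, e_max, e_std, c_max, c_std, r_min, r_mean):
    r_cap, e_cap = r_mean.max(), e_max.max()  # tile-wide r_max, e_max
    buildings = (e_min > e_min.mean()) & (e_std < e_std.mean()) \
                & (e_max > e_max.mean())
    vegetation = (c_max > c_max.mean()) & (e_std > e_std.mean()) \
                 & (c_std > c_std.mean())
    roads = (r_min > 0.1 * r_cap) & (r_mean < 0.6 * r_cap) \
            & (e_min < 0.1 * e_cap)
    labels = np.zeros(e_min.shape, dtype=np.uint8)  # 0 = bare land
    labels[roads] = 1        # later assignments take precedence
    labels[vegetation] = 2
    labels[buildings] = 3
    return labels
\end{verbatim}
The spatial averages $\langle\cdot\rangle$ are realized here as tile-wide means, matching the per-patch averaging described above.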
The top of Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling} exemplifies the rule-based label generation map. Depicted are the classes \textit{vegetation} (dark madder purple), \textit{roads} (lime green), \textit{buildings} (dark green), and \textit{bare land} (yellow). Bare land serves as an auxiliary class for all geo-locations identified as neither road, building, nor vegetation. Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling} (bottom) plots the ground truth labels of the corresponding area. Evidently, a qualitative reconstruction of the scene is possible. \begin{table}[tb] \centering \caption{\label{tab:LabelRules}Labeling Rules from LiDAR Statistics} \tiny \begin{tabular}{lcl} \hline \bf class & \bf pseudo (R,G,B) & \bf binary classification rule \\ \hline buildings & $\left(e_-, e_\Delta, e_+\right)$ & $e_->\langle e_-\rangle\land e_\Delta<\langle e_\Delta\rangle\land e_+>\langle e_+\rangle$\\ vegetation & $\left(c_+, e_\Delta, c_\Delta\right)$ & $c_+>\langle c_+\rangle\land e_\Delta>\langle e_\Delta\rangle\land c_\Delta>\langle c_\Delta\rangle$\\ roads & $\left(r_-, \overline{r}, e_-\right)$ & $r_->.1r_{\max}\land\overline{r}<.6r_{\max}\land e_-<.1e_{\max}$ \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{Figure_2021-09_Albrecht_FullMapLouisArmstrongHouseCorona.png}\\ \includegraphics[width=0.95\columnwidth]{Figure_2021-09_Albrecht_FullMapGroundTruthLouisArmstrongHouseCorona.png}\\ \caption{Rule-based land cover classification utilizing rasterized LiDAR statistics (top, cf.\ Secs.\ \ref{sec:FeatureExtractPointCloud}, \ref{sec:RuleBasedLabeling}) compared to the ground truth (bottom, cf.\ Sec.\ \ref{sec:LandCoverData}) exemplified by a block in the neighborhood Corona, Queens, NYC. Qualitatively, the classification by LiDAR data statistics reproduces the scene containing buildings, vegetation, and roads. Quantitative measures, cf.\ Tab.\ \ref{tab:AccuracyAssessmentRuleBasedLabeling} and Fig.\ \ref{fig:RuleBasedLabelsAccuracyAssessment}, indicate noise whose root cause is detailed in Sec.\ \ref{sec:ValAutoLabel} and Fig.\ \ref{fig:tSNEVisualization}. } \label{fig:RuleBasedLiDARStatsLabeling} \end{figure} \subsection{Validation of Automated Label Generation} \label{sec:ValAutoLabel} A quantitative analysis of accuracy is provided by the combination of Fig.\ \ref{fig:RuleBasedLabelsAccuracyAssessment} and Tab.\ \ref{tab:AccuracyAssessmentRuleBasedLabeling}. In particular, for each class detected, we compute the binary classification accuracy measures \textit{precision} $P$ and \textit{recall} $R$ according to \begin{align} P=\frac{TP}{TP+FP} \quad\text{and}\quad R=\frac{TP}{TP+FN} \quad. \end{align} Given the \textit{Land Cover} ground truth labels (cf.\ Sec.\ \ref{sec:LandCoverData}) and a class to evaluate (here: roads, buildings, or vegetation), precision $P$ quantifies the number of rule-based labeled pixels correctly identified (\textit{true positive}: $TP$) in proportion to all pixels labeled as the given class (including type I errors, \textit{false positives}: $FP$). Similarly, recall $R$ encodes type II errors (\textit{false negative}: $FN$) by forming the ratio of all true positive pixels relative to all ground truth pixels of the class.
Accordingly, overall (class-specific) accuracy $acc$ is defined by all the truly classified pixels ($TP+TN$, with \textit{true negative}: $TN$) in proportion to the total number of pixels: \begin{equation} acc=\frac{TP+TN}{TP+TN+FP+FN}\quad. \end{equation} \begin{figure} \centering \includegraphics[width=0.45\columnwidth]{test-area-corona-LouisArmstrongMuseum-lq.jpg} \includegraphics[width=0.45\columnwidth]{Figure_2021-09_Albrecht_BuildingLabelAccuracyLouisArmstrongHouseCorona.png}\\ \includegraphics[width=0.45\columnwidth]{Figure_2021-09_Albrecht_VegetationLabelAccuracyLouisArmstrongHouseCorona.png} \includegraphics[width=0.45\columnwidth]{Figure_2021-09_Albrecht_RoadsLabelAccuracyLouisArmstrongHouseCorona.png}\\ \caption{Visual accuracy evaluation of rule-based labeling from LiDAR measurements of the urban block exemplified in Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling}. The top left orthophoto provides a visual impression of the scene. The remaining sub-figures separately investigate the accuracy of buildings (top right), vegetation (bottom left), and roads (bottom right). White color coding marks correct labeling, blue marks missed labels (false negative, $FN$), and red marks incorrect class labeling (false positive, $FP$). Tab.\ \ref{tab:AccuracyAssessmentRuleBasedLabeling} quantifies the assessment.} \label{fig:RuleBasedLabelsAccuracyAssessment} \end{figure} As visually depicted in Fig.\ \ref{fig:RuleBasedLabelsAccuracyAssessment}, identification of buildings and roads is dominated by false negatives (blue). Conversely, false positives (red) govern identification of vegetation. For labeling of buildings and vegetation such an inverse relation of $FP$ and $FN$ is rooted in the physical reflectance properties of the LiDAR measurements at the edges of buildings: As the laser partially hits the outline of a building, multiple pulses get reflected back to the detector from the rooftop, the building's face, and the ground---a scenario the rule-based labeling for vegetation in Tab.\ \ref{tab:LabelRules} is sensitive to. In fact, this source of label noise might get exploited for building footprint extraction utilizing traditional computer vision (post-)processing steps such as morphological filtering \cite{sigmund2007morphology} or a straight line detector \cite{kiryati1991probabilistic}. Except for the building edge labeling artefact, low false positive/negative rates on bulk objects such as buildings and vegetation allow for a relatively high $F_1$ score determined by the harmonic mean of precision and recall: \begin{equation} F_1 = \frac{1}{\frac{1}{2}(1/P + 1/R)}=\frac{2PR}{P+R} \quad. \end{equation} Nevertheless, a fourth quantity, the \textit{Intersection over Union} ($IoU$), is required to complement the accuracy evaluation: \begin{equation} IoU = \frac{\vert C\cap T\vert}{\vert C\cup T\vert} \quad, \end{equation} with $C$ the set of rule-based labels of a given class, and $T$ the corresponding ground truth. $\vert\cdot\vert$ denotes the set size operator which returns the geospatial area covered. In this way, the degree of spatial misalignment of rule-based labels w.r.t.\ the ground truth is measured. Since label noise dominates at the boundary of classification objects, $IoU$ scores low for buildings and vegetation. Results for road labeling suffer from overhanging vegetation and \textit{curb noise} such as parked vehicles, power lines, traffic lights, light poles, etc.
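For completeness, the accuracy measures above amount to a few lines of NumPy when evaluated on binary per-class masks (a sketch, assuming non-empty classes so that no division by zero occurs):
\begin{verbatim}
import numpy as np

def class_scores(pred, truth):
    """pred, truth: boolean masks of one class (rule-based vs. ground truth)."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    acc = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn)  # intersection over union on pixel sets
    return precision, recall, f1, acc, iou
\end{verbatim}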
However, as Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling} visually demonstrates, the bulk of objects gets correctly labeled by the rule-based approach, a fact imprinted in the $acc$-measure for each class. Hence, we propose a challenge to the large-scale data mining remote sensing community: employ rule-based rasterized LiDAR statistics as automatically generated, noisy labels to benchmark weakly supervised semantic segmentation methodologies. In particular, the NAIP orthoimages \cite{naip-orthophoto} in combination with the NYC LiDAR data \cite{nycLiDAR} provide model input. \begin{table}[tb] \centering \caption{Accuracy Assessment of rule-based Labeling} \label{tab:AccuracyAssessmentRuleBasedLabeling} \scalebox{.9}{ \begin{tabular}{l|ccccc} \hline \bf class & \bf precision $P$ & \bf recall $R$ & \bf $F_1$-score &\bf acc & \bf Intersection \\ & & & & & \bf over Union $IoU$ \\ \hline buildings & .98 & .62 & .76 & .88 & .61\\ vegetation & .52 & .60 & .55 & .90 & .38\\ roads & .91 & .44 & .59 & .93 & .42\\ \hline \end{tabular} } \end{table} \subsection{LiDAR Statistics Clustering for Label Generation} To further investigate the label noise, we visualize data clustering in the multi-dimensional space of rasterized LiDAR statistics. Specifically, we utilize the $13$ raw statistics features $X=(A,B,\dots,M)$ of Tab.\ \ref{tab:PointCloudStats} and non-linearly project them into two dimensions for plotting through \textit{t-Distributed Stochastic Neighbor Embedding} (t-SNE) \cite{vandermaaten08a}: $(x,y)=tSNE(X)$. Random sampling of geo-locations generates a ground-truth $l\in\{\text{building},\text{road},\text{vegetation}\}$ labeled set of data points $\{(X_1,l_1),(X_2,l_2),(X_3,l_3),\dots\}$. The corresponding t-SNE projection $\{(x_1, y_1),(x_2, y_2),(x_3,y_3),\dots\}$ is presented in Fig.\ \ref{fig:tSNEVisualization} with color coding $\{l_1,l_2,l_3,\dots\}$. While cluster sizes and the distances between them strongly depend on t-SNE initial conditions and parameter settings, the qualitative cluster structure in high-dimensional space may get read off from the two-dimensional embedding. In particular, we ran t-SNE experiments with various settings and differing data samplings. These repeatedly yielded qualitatively similar results, as exemplified by Fig.\ \ref{fig:tSNEVisualization}: The three classes associated with vegetation, buildings, and roads are well separated. Thus, distinct sets of classes can be defined through a combination of LiDAR statistics layers in order to identify them. However, standard clustering methods like $k$-means---even when adapted to a variable number of clusters \cite{hamerly2004learning}---are likely to fail due to the highly nonlinear separation of classes: As apparent from Fig.\ \ref{fig:tSNEVisualization}, a single class like vegetation is associated with a number of well separated clusters. Highly non-linear functions can get modelled by artificial neural networks. Research in self-supervised learning recently demonstrated the generation of expressive feature vectors without the need for any labels \cite{jing2020self}. Performance of self-supervised learning is typically measured by the accuracy of \textit{downstream tasks} that employ the learnt feature representation $z$. Specifically, object classification is evaluated by training a single layer of fully connected neurons attached to $z$ on a small set of ground truth labels. Hence, a future research direction in \textit{AutoGeoLabel} may exploit self-supervised learning to improve upon the noisy, rule-based label generation process with the aid of a small set of ground truth labels.
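The embedding experiment itself is a one-liner with scikit-learn; a minimal sketch (the sampled feature matrix and labels below are placeholders, and the perplexity and initialization are illustrative settings):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))    # placeholder: sampled 13-dim LiDAR statistics
l = rng.integers(0, 3, size=300)  # placeholder: vegetation/building/road labels

xy = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
plt.scatter(xy[:, 0], xy[:, 1], c=l, s=4)
plt.show()
\end{verbatim}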
\begin{figure} \centering \includegraphics[width=0.95\columnwidth]{t-sne2.pdf}\\ \caption{t-SNE embedding of the $13$-dimensional LiDAR raster statistics, cf.\ Tab.\ \ref{tab:PointCloudStats}. Randomly picked land classification samples of \textit{vegetation} (green, $\boldsymbol{\cdot}$), \textit{buildings} (red, $\cdot$), and \textit{roads} (blue, $+$) are plotted. It becomes apparent that near-perfect class separation requires a strongly non-linear function, such as one modelled by artificial neural networks.} \label{fig:tSNEVisualization} \end{figure} \section{Application} We demonstrate an industrial application of labels generated by \textit{AutoGeoLabel} to identify trees and quantify carbon sequestration \cite{klein2019,carbon}. LiDAR statistics can identify tree locations, extract canopy diameters, and quantify the total carbon sequestered in trees. \textit{AutoGeoLabel}-generated vegetation labels are converted to polygons. Subsequently, the mask is used to crop the maximum elevation LiDAR statistics data, $e_+$. The obtained image with vegetation height is segmented using a watershed method to delineate the tree crown diameter \cite{watershed} (see the sketch at the end of this section). Then, the tree crown polygons can be used as another simple filter layer where the eccentricity \cite{eccentricity} of polygons helps to eliminate features that appear elongated, preserving only rounded\slash circular features. In addition, filtering is employed to remove features that are too small or too large to be associated with a tree. Manually labeled tree species data is used to create a classifier assigning tree species labels \cite{treespecies}. Four dominant tree species are used to reclassify all trees within NYC \cite{carbon}. Tree height is extracted from the LiDAR elevation, adjusted for the ground elevation, resulting in absolute tree height. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{CarbonStorage.pdf} \caption{Application: Total carbon stored in trees (top) calculated based on tree species and tree dimensions derived from LiDAR measurements for geospatial sample area of Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling}. The corresponding ground truth image (bottom) depicts the distribution of the urban forest, cf.\ Fig.\ \ref{fig:RuleBasedLiDARStatsLabeling} (bottom).} \label{fig:CarbonStorage} \end{figure} Quantification of carbon stored in trees follows a procedure outlined before \cite{carbon}. The method presented here differs from previous works as it exclusively relies on LiDAR data to identify vegetation using automatically created labels, delineate tree canopy, and calculate tree heights. The only additional data required is tree species information \cite{treespecies, carbon}. Tree identification and carbon sequestration quantification offer a way for city management to better plan tree replanting, and to efficiently quantify the total carbon stored in the urban forest.
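The tree-crown delineation step may be realized, e.g., with scikit-image's watershed transform on the inverted canopy height model; the following is a hedged sketch (function and parameter names are illustrative assumptions, not the production code):
\begin{verbatim}
# Hedged sketch: watershed crown delineation from the masked maximum
# elevation raster e_plus; each local treetop seeds one crown segment.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def delineate_crowns(e_plus, veg_mask, min_distance=5):
    chm = np.where(veg_mask, e_plus, 0.0)        # canopy height model
    peaks = peak_local_max(chm, min_distance=min_distance)
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # flood the inverted surface so each treetop grows into one crown
    return watershed(-chm, markers, mask=veg_mask)
\end{verbatim}
Elongated segments can subsequently be filtered by eccentricity, e.g.\ via \texttt{skimage.measure.regionprops}.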
\section{Conclusion \& Perspectives} In this paper we presented \textit{AutoGeoLabel}, a framework to address automatic data labeling for geospatial applications. Many industrial and scientific solutions can benefit from automating label generation to overcome the challenge of manual image annotation---a labor- and time-consuming effort. Based on an airborne LiDAR survey for New York City, we demonstrated and explored a novel approach that uses simple statistical features of remote sensing data to create data classes. Weakly supervised and self-supervised learning have been argued to be promising approaches. Utilizing automatically generated labels for vegetation, we demonstrated an application to quantify carbon sequestration in urban forests with no need for explicit tree segmentation from e.g.\ orthophotos. In the perspective of industry applications, there is great potential for \textit{AutoGeoLabel} to contribute. E.g., for emergency response during a natural disaster event, many of the existing labeled data acquired under \textit{normal conditions} may not be representative of what is observed on the ground. In such extreme situations, new labeled data may need to be generated on the fly. One such example is recognizing flooded areas utilizing aerial or drone images where training data may not exist. Detecting the impact of extreme weather can drive rescue missions, where assessment of \textit{change} from normal conditions like water extent and potential damage (estimates of the depth of water) requires creation of data features that can be used to delineate the boundary of flood water. Then, information related to the number of flooded houses and roads may help to coordinate the best routing for rescue missions. { \small \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Recent years have witnessed an increasing interest in utilizing methods of algebraic topology for statistical data analysis. In terms of algebraic topology, conventional clustering methods are regarded as characterizing $0$-dimensional topological features, namely the connected components of data. Furthermore, higher dimensional topological features also represent informative shapes of data, such as rings ($1$-dimension) and cavities ($2$-dimension). The research analyzing these topological features in data is called {\em topological data analysis} (TDA) \cite{Ca09}, which has been successfully applied to various areas including information science \cite{CIdSZ08,dSG07}, biology \cite{KZPSGP07,XW14}, brain science \cite{LCKKL11,PETCNHV14,SMISCR08}, biochemistry \cite{GHIKMN13}, material science \cite{HNHEMN16, NHHEN15, STRFH17}, and so on. In many of these applications, data have complicated geometric structures, and thus it is important to extract informative topological features from the data. {\em Persistent homology} \cite{ELZ02}, which is a key mathematical tool in TDA, extracts robust topological information from data, and it has a compact expression called a {\em persistence diagram}. While it is applied to various problems such as the ones listed above, statistical or machine learning methods for analysis on persistence diagrams are still limited. In TDA, analysts often elaborate only a single persistence diagram and, in particular, methods for handling many persistence diagrams, which can contain randomness from the data, are still at the beginning stage (see the end of this section for related works). Hence, developing a framework of statistical data analysis on persistence diagrams is a significant issue for the further success of TDA and, to this goal, this paper discusses kernel methods for persistence diagrams. \subsection{Topological descriptor} \label{subsec:persistent_homology} In order to provide some intuition for persistent homology, let us consider a typical way of constructing persistent homology from data points in a Euclidean space, assuming that the point set lies on a submanifold. The aim is to make inference on the topology of the underlying manifold from finite data points. We consider the $r$-balls (balls with radius $r$) to recover the topology of the manifold, as popularly employed in constructing an $r$-neighbor graph in many manifold learning algorithms. While it is expected that, with an appropriate choice of $r$, the $r$-ball model can represent the underlying topological structures of the manifold, it is also known that the result is sensitive to the choice of $r$. If $r$ is too small (resp.\ large), the union of $r$-balls consists simply of the disjoint $r$-balls (resp.\ becomes a contractible space). Then, by considering not one specific $r$ but all $r$, persistent homology gives robust topological features of the point set. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\hsize]{filtration.pdf} \caption{Unions of $r$-balls at data points (left) and its $1$-st persistence diagram (right). The point $(b_{1},d_{1})$ in the persistence diagram represents the ring $\alpha_{1}$, which appears at $r=b_1$ and disappears at $r=d_1$. The noisy rings are plotted as the points close to the diagonal.} \label{fig:filtration} \end{center} \end{figure} As a useful representation of persistent homology, a persistence diagram is often used in topological data analysis.
The persistence diagram is given in the form of a multiset $D=\{(b_i,d_i) \in \lR^{2} \mid i\in I, \ b_i < d_i\}$ (Figure \ref{fig:filtration}). Every point $(b_i,d_i)\in D$, called a {\em generator} of the persistent homology, represents a topological property (e.g., connected components, rings, cavities, etc.) which appears at $r=b_i$ and disappears at $r=d_i$ in the ball model. Then, the {\em persistence} $d_i-b_i$ of the generator shows the robustness of the topological property under the radius parameter. A generator with large persistence can be regarded as a reliable structure, while that with small persistence (points close to the diagonal) is likely to be a structure caused by noise. In this way, persistence diagrams encode topological and geometric information of data points. See Section \ref{sec:background} and Appendix \ref{sec:topology} for more information. \subsection{Contribution} \label{subsec:contribution} Since a persistence diagram is a point set of variable size, it is not straightforward to apply standard methods of statistical data analysis, which typically assume vectorial data. To vectorize persistence diagrams, we employ the framework of kernel embedding of (probability and more general) measures into reproducing kernel Hilbert spaces (RKHS). This framework has recently been developed, leading to various new methods for nonparametric inference \cite{MFSS17,SGSS07,SFG13}. It is known \cite{SFL11} that, with an appropriate choice of kernels, a signed Radon measure can be uniquely represented by the Bochner integral of the feature vectors with respect to the measure. Since a persistence diagram can be regarded as a sum of Dirac delta measures, it can be embedded into an RKHS by the Bochner integral. Once such a vector representation is obtained, we can introduce any kernel methods for persistence diagrams systematically (see Figure \ref{fig:overview}). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.95\hsize]{overview.pdf} \caption{ (1) A data set $X$ is transformed into a persistence diagram $D_q(X)$ (Section \ref{subsec:persistence_diagram}). (2) The persistence diagram $D_q(X)$ is mapped to an RKHS vector $E_{k}(\mu_{D_{q}(X)}^{w})$, where $k$ is a positive definite kernel and $w$ is a weight function controlling the effect of persistence (Section \ref{subsec:vectorization}). (3) Statistical methods are applied to those vector representations of persistence diagrams (Section \ref{sec:experiment}).} \label{fig:overview} \end{center} \end{figure} Furthermore, since each generator in a persistence diagram is equipped with a persistence which indicates the robustness of the corresponding topological feature, we will utilize it as a weight on the generator. For embedding persistence diagrams in an RKHS, we propose a useful class of positive definite kernels, called the {\em persistence weighted Gaussian kernel} (PWGK). The advantages of this kernel are as follows: (i) We can explicitly control the effect of persistence by a weight function, and hence discount the noisy generators appropriately in statistical analysis. (ii) As a theoretical contribution, the distance defined by the RKHS norm for the PWGK satisfies the stability property, which ensures the continuity from data to the vector representation of the persistence diagram. (iii) The PWGK allows efficient computation by using the random Fourier features \cite{RR07}, and thus it is applicable to persistence diagrams with a large number of generators.
We demonstrate the performance of the proposed kernel method with synthesized and real-world data, including granular systems (taken by X-ray Computed Tomography on granular experiments), oxide glasses (taken by molecular dynamics simulations) and protein datasets (taken by NMR and X-ray crystallography experiments). We remark that these real-world problems have physical and biochemical significance in their own right, as detailed in Section \ref{sec:experiment}. \subsection{Related works} \label{subsec:related_work} There are already some relevant works on statistical approaches to persistence diagrams. Some studies discuss how to transform a persistence diagram to a vector \cite{AEKNPSCHMZ17, Bu15, CMWOXW15, COO15, RHBK15, RT16}. In these methods, a transformed vector is typically expressed in a Euclidean space $\lR^{k}$ or a function space $L^{p}$, and simple and ad-hoc summary statistics like means and variances are used for data analysis such as principal component analysis (PCA) and support vector machines (SVMs). In this paper, we will compare the performance among the PWGK, the persistence scale-space kernel \cite{RHBK15}, the persistence landscape \cite{Bu15}, the persistence image \cite{AEKNPSCHMZ17}, and the molecular topological fingerprint \cite{CMWOXW15} in several machine learning tasks. Furthermore, we show that our vectorization is a generalization of the persistence scale-space kernel and the persistence image, although the constructions are different. We also remark that there are some works discussing statistical properties of persistence diagrams for random data points: \cite{CGLM15} show convergence rates of persistence diagram estimation, and \cite{FLRWBS14} discuss confidence sets in a persistence diagram. These works consider a different but important direction for statistical methods on persistence diagrams. The remainder of this paper is organized as follows: In Section \ref{sec:background}, we review some basics on persistence diagrams and kernel embedding methods. In Section \ref{sec:pdkernel}, the PWGK is proposed, and some theoretical and computational issues are discussed. Section \ref{sec:experiment} shows experimental results and compares the proposed kernel method with other methods. This paper is an extended version of our ICML paper \cite{KFH16}. The difference from this conference version is as follows: (i) Comparisons with other relevant methods, in particular, persistence landscapes and persistence images, have been added to this version. (ii) New experimental results in comparison with other relevant methods have been added. (iii) Detailed proofs of the stability theorem have been added. \section{Backgrounds} \label{sec:background} We review the concepts of persistence diagrams and kernel methods. For readers who are not familiar with algebraic topology, we give a brief summary in Appendix \ref{sec:topology}. See also \cite{Ha02} as an accessible introduction to algebraic topology. \subsection{Persistence diagram} \label{subsec:persistence_diagram} In order to define a persistence diagram, we transform a data set $X$ into a filtration ${\rm {\lF}ilt}(X)$ and compute its persistent homology $H_{q}({\rm {\lF}ilt}(X))$. In this section, we will first introduce this mathematical framework of persistence diagrams. Then, by using a ball model filtration, we will intuitively explain the geometrical meaning of persistence diagrams. The ball model filtration can be generalized toward two constructions, using ${\rm \check{C}ech}$ complexes and sub-level sets.
The former construction is useful for computations of persistence diagrams and the latter is useful to discuss theoretical properties. \subsubsection{Mathematical framework of persistence diagrams} Let $K$ be a coefficient field of homology\footnote{In this setting, all homology groups are $K$-vector spaces. You may simply consider the case $K=\lR$, but the theory is built with an arbitrary field.}. Let ${\rm {\lF}ilt}=\{F_a\mid a\in\mathbb{R}\}$ be a (right continuous) {\em filtration} of simplicial complexes (resp. topological spaces), i.e., $F_a$ is a subcomplex (resp. subspace) of $F_b$ for $a\leq b$ and $F_{a}=\bigcap_{a<b}F_{b}$. For $a\leq b$, the $K$-linear map induced from the inclusion $F_a\hookrightarrow F_b$ is denoted by $\rho^b_a: H_q(F_a)\rightarrow H_q(F_b)$, where $H_{q}(F_{a})$ is the $q$-th homology of $F_{a}$. The $q$-th {\em persistent homology} $H_q({\rm {\lF}ilt})=(H_q(F_a),\rho^b_a)$ of ${\rm {\lF}ilt}$ is defined by the family of homology $\{H_q(F_a)\mid a\in\mathbb{R}\}$ and the induced linear maps $\{ \rho^b_a \mid a\leq b \}$. A {\em homological critical value} of $H_q({\rm {\lF}ilt})$ is the number $a\in\mathbb{R}$ such that the linear map $\rho^{a+{\varepsilon}}_{a-{\varepsilon}}: H_q(F_{a-{\varepsilon}})\rightarrow H_q(F_{a+{\varepsilon}})$ is not isomorphic for any sufficiently small ${\varepsilon}>0$. The persistent homology $H_q({\rm {\lF}ilt})$ is called {\em tame} if $\dim H_q(F_a)<\infty$ for any $a\in\mathbb{R}$ and the number of homological critical values is finite. A tame persistent homology $H_q({\rm {\lF}ilt})$ has a nice decomposition property: \begin{thm}[\cite{ZC05}]\label{thm:decomposition} A tame persistent homology can be uniquely expressed by \begin{align}\label{eq:decom} H_q({\rm {\lF}ilt})\simeq\bigoplus_{i\in I} \lI[b_i,d_i], \end{align} where $\lI[b_i,d_i]=(U_a,\iota^b_a)$ consists of a family of $K$-vector spaces \begin{align*} U_a=\left\{\begin{array}{ll} K,&b_i\leq a < d_i\\ 0,&{\rm otherwise} \end{array}\right., \end{align*} and $\iota^b_a={\rm id}_{K}$ for $b_i\leq a \leq b<d_i$. \end{thm} Each summand $\lI[b_i,d_i]$ represents a topological feature in ${\rm {\lF}ilt}$ that appears at $a=b_{i}$ and disappears at $a=d_{i}$. The birth-death pair $x=(b_i,d_i)$ is called a {\em generator} of the persistent homology, and ${\rm pers}(x):=d_{i}-b_{i}$ a {\em persistence} of $x$. We note that, when $\dim H_q(F_a) \neq 0$ for any $a<0$ (resp. for any $a>0$), the decomposition \eqref{eq:decom} should be understood in the sense that some $b_i$ takes the value $-\infty$ (resp. $d_i=\infty$), where $-\infty,\infty$ are the elements in the extended real $\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,\infty\}$. From the decomposition \eqref{eq:decom}, we define the {\em persistence diagram} of ${\rm {\lF}ilt}$ as the multi-set\footnote{A {\em multi-set} is a set with multiplicity of each point. We regard a persistence diagram as a multi-set, since several generators can have the same birth-death pairs.} \begin{align*} D_q({\rm {\lF}ilt})=\rl{ (b_i,d_i)\in\overline{\mathbb{R}}^2 \ \middle| \ i\in I}. \end{align*} In this paper, we assume that all persistence diagrams have finite cardinality because a tame persistent homology defines a finite persistence diagram. Moreover, we also assume that all birth-death pairs are bounded\footnote{This assumption will be justified in Section \ref{subsec:geometrical}.}, that is, all elements in a persistence diagram take neither $\infty$ nor $-\infty$.
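As a concrete illustration of the decomposition \eqref{eq:decom} (an example added here for the reader, verifiable by hand), take $q=0$ and the ball model filtration, formalized in the next subsection, of $X=\{0,4,10\}\subset\lR$, so that $F_{a}=[-a,a]\cup[4-a,4+a]\cup[10-a,10+a]$ for $a\geq0$. Three connected components are born at $a=0$; two of them merge at $a=2$ (half the distance between $0$ and $4$), and the remaining two components merge at $a=3$. Hence
\[
H_{0}\simeq\lI[0,2]\oplus\lI[0,3]\oplus\lI[0,\infty],
\]
and removing the infinite-lifetime generator leaves the birth-death pairs $\{(0,2),(0,3)\}$.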
Here, we define the (abstract) persistence diagram $D$ as a finite multi-set above the diagonal $\lR^{2}_{{\rm ad}}:=\{(b,d) \in \lR^{2} \mid b < d\}$. \subsubsection{Ball model filtrations} \label{subsec:geometrical} The example used in Figure \ref{fig:filtration} can be expressed as follows. Let $X = \{\bm{x}_1, \ldots, \bm{x}_n \}$ be a finite subset in a metric space $(M,d_{M})$ and $X_{a}:=\bigcup_{i=1}^{n} B(\bm{x}_{i};a)$ be a union of balls $B(\bm{x}_i;a)=\{ \bm{x} \in M \mid d_{M}(\bm{x}_i,\bm{x}) \leq a\}$ with radius $a \geq 0$. For convenience, let $X_{a}:=\emptyset \ (a < 0)$. Since $\lX=\{X_{a} \mid a \in \lR \}$ is a right-continuous filtration of topological spaces and $X$ is a finite set, $H_{q}(\lX)$ is tame and the persistence diagram $D_{q}(\lX)$ is well-defined. For notational simplicity, the persistence diagram of this ball model filtration is denoted by $D_q(X)$. We remark that, in this model, there is only one generator in $D_0(X)$ that does not disappear in the filtration; its lifetime is $\infty$. From now on, we deal with $D_0(X)$ by removing this infinite lifetime generator\footnote{This is called the {\em reduced persistence diagram}.}. Let ${\rm diam}(X)$ be the diameter of $X$ defined by $\max_{\bm{x}_{i},\bm{x}_{j} \in X} d_{M}(\bm{x}_{i},\bm{x}_{j})$. Then, all generators appear after $a=0$ and disappear before $a={\rm diam}(X)$ because $X_{{\rm diam}(X)}$ becomes a contractible space. Thus, for any dimension $q$, all birth-death pairs of $D_{q}(X)$ have finite values. \subsubsection{Geometric complexes} We review some standard methods of constructing a filtration from finite sets in a metric space. See also \cite{CdSO14} for more details. Let $(M,d_M)$ be a metric space and $X= \{\bm{x}_1, \ldots, \bm{x}_n \}$ be a finite subset in $M$. For a fixed $a \geq 0$, we form a $q$-simplex $[\bm{x}_{i_0} \cdots \bm{x}_{i_q}]$ as a subset $\{ \bm{x}_{i_0}, \ldots, \bm{x}_{i_q} \}$ of $X$ whenever there exists $\bar{\bm{x}} \in M$ such that $d_M(\bm{x}_{i_j},\bar{\bm{x}}) \leq a$ for all $j=0,\ldots,q$, or equivalently, $\cap^{q}_{j = 0} B(\bm{x}_{i_{j}}; a) \neq \emptyset$. The set of these simplices forms a simplicial complex, called the $\check{C}${\em ech complex} of $X$ with parameter $a$, denoted by ${\rm \check{C}ech}(X;a)$. For $a < 0$, we define ${\rm \check{C}ech}(X;a)$ as an empty set. Since there is a natural inclusion ${\rm \check{C}ech}(X;a) {\hookrightarrow} {\rm \check{C}ech}(X;b)$ whenever $a \leq b$, ${\rm \check{\lC}ech}(X)=\left\{{\rm \check{C}ech}(X;a) \ \middle| \ a \in \lR \right\}$ is a filtration. When $M$ is a subspace of $\lR^{d}$, from the nerve lemma \cite{Ha02}, it is known that the topology of ${\rm \check{C}ech}(X;a)$ is the same\footnote{Precisely, they are {\em homotopy equivalent}.} as $X_{a}$ (Figure \ref{fig:cech}), and hence $D_{q}({\rm \check{\lC}ech}(X))=D_{q}(X)$. The Rips complex (or Vietoris-Rips complex) is also often used in TDA and it can give a different topology from the ${\rm \check{C}ech}$ complex. For a fixed $a \geq 0$, we form a $q$-simplex $[\bm{x}_{i_0} \cdots \bm{x}_{i_q}]$ as a subset $\left\{ \bm{x}_{i_0}, \ldots , \bm{x}_{i_q} \right\}$ of $X$ that satisfies $d_M(\bm{x}_{i_j},\bm{x}_{i_k}) \leq 2a$ for all $j,k=0,\ldots ,q$. The set of these simplices forms a simplicial complex, called the {\em Rips complex} of $X$ with parameter $a$, denoted by ${\rm Rips}(X;a)$. Similarly, the Rips complex also forms a filtration ${\rm {\lR}ips}(X)$. In general, $D_{q}({\rm {\lR}ips}(X))$ is not the same as $D_{q}(X)$ (see Figure \ref{fig:cech}).
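In practice, such persistence diagrams are computed by software. The following is a minimal sketch (assuming the GUDHI Python interface; note that GUDHI indexes the Rips filtration by the pairwise distance threshold, i.e., $2a$ in the convention above) computing the $1$-st persistence diagram of noisy samples of a circle:
\begin{verbatim}
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

rips = gudhi.RipsComplex(points=X, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
st.compute_persistence()
D1 = st.persistence_intervals_in_dimension(1)  # birth-death pairs (b_i, d_i)
# one long interval for the ring of the circle; short ones are noise
\end{verbatim}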
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\hsize]{cech.pdf} \end{center} \caption{A point set $X$, the union of balls $X_{a}$, the ${\rm \check{C}ech}$ complex ${\rm \check{C}ech}(X;a)$ and the Rips complex ${\rm Rips}(X;a)$. There are two rings in $X_{a}$ and ${\rm \check{C}ech}(X;a)$. However, ${\rm Rips}(X;a)$ has only one ring because there is a $2$-simplex.} \label{fig:cech} \end{figure} \subsubsection{Sub-level sets} Let $M$ be a topological space and $f:M {\rightarrow} \lR$ be a continuous map. Then, we define a {\em sub-level set} by ${\rm Sub}(f;a):=f^{-1}((-\infty,a])$ for $a \in \lR$ and its filtration by ${\rm {\lS}ub}(f):=\{{\rm Sub}(f;a) \mid a \in \lR \}$. Here, $f:M {\rightarrow} \lR$ is said to be {\em tame} if $H_{q}({\rm {\lS}ub}(f))$ is tame. For a finite set $X=\{\bm{x}_1, \ldots, \bm{x}_n \}$ in a metric space $(M, d_{M})$, we define the distance function ${\rm dist}_{X}:M {\rightarrow} \lR$ by \[ {\rm dist}_{X}(\bm{x}):=\min_{\bm{x}_{i} \in X} d_{M} (\bm{x},\bm{x}_{i}). \] Then, we can see ${\rm Sub}({\rm dist}_{X};a)=\bigcup_{\bm{x}_{i} \in X} B(\bm{x}_{i};a)$ and $D_{q}({\rm {\lS}ub}({\rm dist}_{X}))=D_{q}(X)$. This means that the ball model is a special case of the sub-level set, and the ${\rm \check{C}ech}$ complex and the sub-level set with the distance function ${\rm dist}_{X}$ give the same persistence diagram. \subsection{Stability of persistence diagrams} \label{sec:bottleneck_stability} When we consider data analysis based on persistence diagrams, it is useful to introduce a distance measure among persistence diagrams for describing their relations. In introducing a distance measure, it is desirable that, as a representation of data, the mapping from data to a persistence diagram is continuous with respect to the distance. In many cases, data involve noise or stochasticity, and thus the persistence diagrams should be stable under perturbation of data. The {\em bottleneck distance} $d_{{\rm B}}$ between two persistence diagrams $D$ and $E$ is defined by \[ d_{{\rm B}}(D,E):=\inf_{\gamma} \sup_{x \in D \cup {\Delta}} \norm{x-\gamma(x)}_{\infty}, \] where ${\Delta}:=\{(a,a) \mid a \in \lR \}$ is the diagonal set with infinite multiplicity and $\gamma$ ranges over all multi-bijections\footnote{A {\em multi-bijection} is a bijective map between two multi-sets counted with their multiplicity.} from $D \cup {\Delta}$ to $E \cup {\Delta}$. Here, for $z=(z_1,z_2)\in\lR^2$, $\Vert z \Vert_\infty$ denotes $\max \{|z_1|,|z_2| \}$. We note that there always exists such a multi-bijection $\gamma$ because the cardinalities of $D \cup {\Delta}$ and $E \cup {\Delta}$ are equal by considering the diagonal set $\Delta$ with infinite multiplicity. For sets $X$ and $Y$ in a metric space $(M, d_{M})$, let us recall the {\em Hausdorff distance} $d_{{\rm H}}$ given by \begin{align*} d_{{\rm H}}(X,Y):= \max \left\{\sup_{\bm{x} \in X} \inf_{\bm{y} \in Y} d_{M}(\bm{x},\bm{y}), \sup_{\bm{y} \in Y} \inf_{\bm{x} \in X} d_{M}(\bm{x},\bm{y}) \right\}. \end{align*} Then, the bottleneck distance satisfies the following stability property. \begin{prop}[\cite{CdSO14,CEH07}] \label{prop:point_stability} Let $X$ and $Y$ be finite subsets in a metric space $(M,d_{M})$. Then the persistence diagrams satisfy \[ d_{{\rm B}}(D_{q}(X),D_{q}(Y)) \leq d_{{\rm H}}(X,Y). \] \end{prop} Proposition \ref{prop:point_stability} provides a geometric intuition of the stability of persistence diagrams.
Assume that two point sets $X$ and $Y$ are close to each other with ${\varepsilon}=d_{{\rm H}}(X,Y)$. If there is a generator $(b,d) \in D_{q}(Y)$, then we can find at least one generator in $D_{q}(X)$ which is born in $(b-{\varepsilon},b+{\varepsilon})$ and dies in $(d-{\varepsilon},d+{\varepsilon})$ (see Figure \ref{fig:stability}). Thus, the stability guarantees the similarity of two persistence diagrams, and hence we can infer the true topological features from persistence diagrams given by noisy observations (see also \cite{FLRWBS14}). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\hsize]{stability.pdf} \vspace{-3mm} \caption{Two point sets $X$ and $Y$ (left) and their persistence diagrams (right). The green region is an ${\varepsilon}$-neighborhood of $D_{q}(Y)$ and all generators in $D_{q}(X)$ are in the ${\varepsilon}$-neighborhood.} \label{fig:stability} \end{center} \end{figure} For $1 \leq p < \infty$, the {\em $p$-Wasserstein distance} $d_{{\rm W}_{p}}$, which is also used as a distance between two persistence diagrams $D$ and $E$, is defined by \[ d_{ {\rm W}_{p}}(D,E)=\inf_{\gamma}\pare{\sum_{x \in D \cup {\Delta}} \norm{x-\gamma(x)}^{p}_{\infty}}^{\frac{1}{p}}, \] where $\gamma$ ranges over all multi-bijections from $D \cup {\Delta}$ to $E \cup {\Delta}$. The $\infty$-Wasserstein distance $d_{{\rm W}_{\infty}}$ is defined to be the bottleneck distance $d_{{\rm B}}$. Here, we define the {\em degree-$p$ total persistence} of $D$ by ${\rm Pers}_{p}(D):=\sum_{x \in D} {\rm pers}(x)^{p}$ for $1 \leq p < \infty$. \begin{prop}[\cite{CEHM10}] \label{prop:wasserstein_stability} Let $1 \leq p' \leq p < \infty$, and $D$ and $E$ be persistence diagrams whose degree-$p'$ total persistences are bounded from above. Then, \[ d_{ {\rm W}_{p}}(D, E) \leq \pare{\frac{{\rm Pers}_{p'}(D)+{\rm Pers}_{p'}(E)}{2}}^{\frac{1}{p}} d_{{\rm B}}(D,E)^{1-\frac{p'}{p}}. \] \end{prop} For a persistence diagram $D$, its degree-$p$ total persistence is bounded from above by $\card{D} \times \max_{x \in D} {\rm pers}(x)^{p}$, where $\card{D}$ denotes the number of generators in $D$. However, this bound may be weak because, in general, $\card{D}$ cannot be bounded from above. In particular, if the data set has noise, the persistence diagram often has many generators close to the diagonal. Thus, it is desirable that the total persistence is bounded from above independently of $\card{D}$. In the case of persistence diagrams obtained from a ball model filtration, we have the following upper bound (see Appendix \ref{sec:total} for the proof): \begin{lem} \label{lem:point_total} Let $M$ be a triangulable compact subspace in $\lR^{d}$, $X$ be a finite subset of $M$, and $p>d$. Then, \[ {\rm Pers}_{p}(D_{q}(X)) \leq \frac{p}{p-d}C_{M}{\rm diam}(M)^{p-d}, \] where $C_{M}$ is a constant depending only on $M$. \end{lem} \begin{cor} \label{cor:point_wasserstein} Let $M$ be a triangulable compact subspace in $\lR^{d}$, $X,Y$ be finite subsets of $M$, and $p \geq p' > d$. Then \begin{align*} d_{{\rm W}_{p}} (D_{q}(X), D_{q}(Y) ) &\leq \pare{ \frac{p'}{p'-d}C_{M}{\rm diam}(M)^{p'-d} }^{\frac{1}{p}} d_{{\rm B}}(D_{q}(X),D_{q}(Y))^{1-\frac{p'}{p}} \\ &\leq \pare{ \frac{p'}{p'-d}C_{M}{\rm diam}(M)^{p'-d} }^{\frac{1}{p}} d_{{\rm H}}(X,Y)^{1-\frac{p'}{p}} \end{align*} where $C_{M}$ is a constant depending only on $M$. \end{cor}
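Numerically, the stability of Proposition \ref{prop:point_stability} can be illustrated as follows (a sketch assuming a GUDHI build with bottleneck-distance support; since the Rips filtration is used here instead of the ${\rm \check{C}ech}$ filtration, the bound holds only up to a convention-dependent constant):
\begin{verbatim}
import numpy as np
import gudhi

def diagram_dim1(points):
    st = gudhi.RipsComplex(points=points, max_edge_length=2.0) \
             .create_simplex_tree(max_dimension=2)
    st.compute_persistence()
    return st.persistence_intervals_in_dimension(1)

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)]
Y = X + 0.01 * rng.normal(size=X.shape)  # Hausdorff-close perturbation

d_B = gudhi.bottleneck_distance(diagram_dim1(X), diagram_dim1(Y))
print(d_B)  # small, in line with d_B <= d_H(X, Y)
\end{verbatim}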
\end{cor} \subsection{Kernel methods for representing signed measures} \label{subsec:universal} As a preliminary to our proposal of a vector representation for persistence diagrams, we briefly summarize a method for embedding signed measures with a positive definite kernel. Let $\Omega$ be a set and $k:\Omega \times \Omega {\rightarrow} \lR$ be a {\em positive definite kernel} on $\Omega$, i.e., $k$ is symmetric, and for any number of points $x_{1},\ldots,x_{n}$ in $\Omega$, the Gram matrix $\pare{k(x_{i},x_{j})}_{i,j=1,\ldots,n}$ is nonnegative definite. A popular example of a positive definite kernel on $\lR^{d}$ is the Gaussian kernel $k_{{\rm G}}(x,y)=e^{-\frac{\norm{x-y}^{2}}{2 \sigma^{2}}} \ (\sigma>0)$, where $\norm{\cdot}$ is the Euclidean norm in $\lR^{d}$. It is also known, by the Moore--Aronszajn theorem, that every positive definite kernel $k$ on $\Omega$ is uniquely associated with a reproducing kernel Hilbert space $\cH_{k}$ (RKHS). We use a positive definite kernel to represent persistence diagrams by following the idea of the kernel mean embedding of distributions \cite{MFSS17, SGSS07,SFL11}. Let $\Omega$ be a locally compact Hausdorff space, $M_{{\rm b}}(\Omega)$ be the space of all finite signed Radon measures\footnote{A {\em Radon measure} $\mu$ on $\Omega$ is a Borel measure on $\Omega$ satisfying (i) $\mu(C) < \infty$ for any compact subset $C \subset \Omega$, and (ii) $\mu(B)=\sup \{ \mu(C) \mid C \subset B, ~ C \mbox{:compact}\}$ for any $B$ in the Borel $\sigma$-algebra of $\Omega$.} on $\Omega$, and $k$ be a bounded measurable kernel on $\Omega$. Since $k$ is bounded, $\int \norm{k(\cdot,x)}_{\cH_{k}} d |\mu|(x)$ is finite, and hence the integral $\int k(\cdot, x) d \mu(x)$ is well-defined as a Bochner integral \cite{DU77}. Here, we define a mapping from $M_{{\rm b}}(\Omega)$ to $\cH_{k}$ by \begin{equation}\label{eq:E_k} E_{k}:M_{{\rm b}}(\Omega) {\rightarrow} \cH_{k}, ~~ \mu \mapsto \int k(\cdot, x) d \mu(x). \end{equation} For a locally compact Hausdorff space $\Omega$, let $C_{0}(\Omega)$ denote the space of continuous functions vanishing at infinity\footnote{A function $f$ is said to {\em vanish at infinity} if for any ${\varepsilon} >0$ there is a compact set $K \subset \Omega$ such that $\sup_{x \in K^{c}} |f(x)| \leq {\varepsilon}$.}. A kernel $k$ on $\Omega$ is said to be a $C_{0}$-kernel if $k(\cdot,x) \in C_{0}(\Omega)$ for any $x \in \Omega$. If $k$ is a $C_{0}$-kernel, the associated RKHS $\cH_{k}$ is a subspace of $C_{0}(\Omega)$. A $C_{0}$-kernel $k$ is called {\em $C_{0}$-universal} if $\cH_{k}$ is dense in $C_{0}(\Omega)$. It is known that the Gaussian kernel $k_{{\rm G}}$ is $C_{0}$-universal on $\lR^{d}$ \cite{SFL11}. When $k$ is $C_{0}$-universal, the vector $E_k(\mu)$ in the RKHS uniquely determines the finite signed measure $\mu$, and thus serves as a representation of $\mu$. We summarize this property as follows: \begin{prop}[\cite{SFL11}] \label{prop:C0_distance} Let $\Omega$ be a locally compact Hausdorff space. If $k$ is $C_{0}$-universal on $\Omega$, the mapping $E_{k}$ is injective. Thus, \[ d_{k}(\mu,\nu)=\norm{E_{k}(\mu)-E_{k}(\nu)}_{\cH_{k}} \] defines a distance on $M_{{\rm b}}(\Omega)$. \end{prop} \section{Kernel methods for persistence diagrams} \label{sec:pdkernel} We propose a positive definite kernel for persistence diagrams, called the persistence weighted Gaussian kernel (PWGK), to embed persistence diagrams into an RKHS.
This vectorization of persistence diagrams enables us to apply any kernel method to persistence diagrams and to explicitly control the effect of persistence. We show a stability theorem with respect to the distance defined by the embedding and discuss the efficient and accurate approximate computation of the PWGK. \subsection{Vectorization of persistence diagrams} \label{subsec:vectorization} We propose a method for vectorizing persistence diagrams using the kernel embedding \eqref{eq:E_k} by regarding a persistence diagram as a discrete measure. In vectorizing persistence diagrams, it is desirable to have the flexibility to discount the effect of generators close to the diagonal, since they tend to be caused by noise. To this end, we explain two slightly different embeddings, which turn out to give the same inner product between two persistence diagrams. First, for a persistence diagram $D$, we introduce a measure $\mu^{w}_{D}:=\sum_{x \in D} w(x){\delta}_{x}$ with a weight $w(x) >0$ for each generator $x \in D$ (Figure \ref{fig:weighted}), where ${\delta}_x$ is the Dirac delta measure at $x$. By appropriately choosing $w(x)$, the measure $\mu^{w}_{D}$ can discount the effect of generators close to the diagonal. A concrete choice of $w(x)$ will be discussed later. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\hsize]{weighted_delta.pdf} \caption{Unweighted (left) and weighted (right) measures.} \vspace{-3mm} \label{fig:weighted} \end{center} \end{figure} As discussed in Section \ref{subsec:universal}, given a $C_0$-universal kernel $k$ on the region above the diagonal $\lR^{2}_{{\rm ad}}=\{(b,d) \in \lR^{2} \mid b < d\}$, the measure $\mu^{w}_{D}$ can be embedded as an element of the RKHS $\cH_{k}$ via \begin{equation}\label{E_k:embed} \mu^{w}_{D} \mapsto E_{k}(\mu^{w}_{D}):=\sum_{x\in D}w(x)k(\cdot,x). \end{equation} By the injectivity in Proposition \ref{prop:C0_distance}, this mapping uniquely determines the persistence diagram; in other words, it does not lose any information about the persistence diagram. Hence, $E_k(\mu^w_D)\in\cH_k$ serves as a vector representation of the persistence diagram. As the second construction, let \[ k^w(x,y):=w(x)w(y)k(x,y) \] be the weighted kernel with the same weight function as above. Then the mapping \begin{equation}\label{E_k^w:embed} E_{k^w}: \mu_D\mapsto \sum_{x\in D}w(x)w(\cdot)k(\cdot,x)\in \cH_{k^w} \end{equation} also defines a vectorization of persistence diagrams. The first construction may be more intuitive, as it directly weights the measure, while the second is also practically useful since all parameter tuning is reduced to the choice of kernel. We note that the inner products introduced by the two RKHS vectors \eqref{E_k:embed} and \eqref{E_k^w:embed} are the same: \[ \langle E_k(\mu_D^w), E_k(\mu_E^w)\rangle_{\cH_k} = \langle E_{k^w}(\mu_D), E_{k^w}(\mu_E)\rangle_{\cH_{k^w}}. \] In addition, these two RKHS vectors \eqref{E_k:embed} and \eqref{E_k^w:embed} are essentially equivalent, as seen from the next proposition: \begin{prop}\label{prop:iso} Let $k$ be $C_{0}$-universal on $\lR^{2}_{{\rm ad}}$ and $w$ be a positive function on $\lR^{2}_{{\rm ad}}$. Then the following mapping \[ \cH_k\to \cH_{k^w}, \quad f\mapsto wf \] defines an isomorphism between the RKHSs. Under this isomorphism, $E_k(\mu_D^w)$ and $E_{k^w}(\mu_D)$ are identified. \end{prop} \begin{proof} Let $\tilde{\cH}:=\{wf:\lR^{2}_{{\rm ad}}\to\lR\mid f\in \cH_k\}$ and define its inner product by \[ \langle wf,wg\rangle_{\tilde{\cH}}:=\langle f, g\rangle_{\cH_k}.
\] Then, it is easy to see that $\tilde{\cH}$ is a Hilbert space and the mapping $f\mapsto wf$ gives a Hilbert space isomorphism between $\cH_{k}$ and $\tilde{\cH}$. In fact, we can show that $\tilde{\cH}$ is the same as $\cH_{k^w}$. To see this, by the uniqueness of the reproducing kernel of an RKHS, it is sufficient to check that $k^w$ is a reproducing kernel of $\tilde{\cH}$. The reproducing property follows from \[ \langle wf, k^w(\cdot,x)\rangle_{\tilde{\cH}}=\langle f,w(x)k(\cdot,x)\rangle_{\cH_k} = w(x)f(x) = (wf)(x). \] The second assertion is obvious from Equations \eqref{E_k:embed} and \eqref{E_k^w:embed}. \end{proof} \subsection{Stability with respect to the kernel embedding} \label{subsec:stability} Given a data set $X$, we compute the persistence diagram $D_{q}(X)$ and vectorize it as an element $E_{k}(\mu_{D_{q}(X)}^{w})$ of the RKHS. Then, for practical applications, this map $X \mapsto E_{k}(\mu_{D_{q}(X)}^{w})$ should be stable with respect to perturbations of the data, as discussed in Section \ref{sec:bottleneck_stability}. Let $D$ and $E$ be persistence diagrams and $\gamma:D \cup {\Delta} {\rightarrow} E \cup {\Delta}$ be any multi-bijection. Here, we partition $D$ (resp. ${\Delta}$) into $D_{1}$ and $D_{2}$ (resp. ${\Delta}_{1}$ and ${\Delta}_{2}$) such that \[ \gamma(D_{1}) \subset \lR^{2}_{{\rm ad}}, \ \gamma(D_{2}) \subset {\Delta}, \ \gamma({\Delta}_{1}) \subset \lR^{2}_{{\rm ad}}, \ \gamma({\Delta}_{2}) \subset {\Delta}. \] Then $\gamma$ restricts to a multi-bijection from $D_{1} \cup {\Delta}_{1}$ onto $E$. Now, assume that the weight function $w$ is zero on the diagonal ${\Delta}$. Then, the norm of the difference between the RKHS vectors is calculated as follows: \begin{align*} &\dk{k}{w}{D}{E} \nonumber \\ &=\norm{ \sum_{x \in D} w(x)k(\cdot,x) -\sum_{y \in E} w(y) k(\cdot,y)}_{\cH_{k}} \nonumber \\ &=\norm{ \sum_{x \in D} w(x)k(\cdot,x) -\sum_{x \in D_{1} \cup {\Delta}_{1}} w(\gamma(x)) k(\cdot,\gamma(x))}_{\cH_{k}} \nonumber \\ &=\norm{ \sum_{x \in D \cup {\Delta}_{1}} \biggl(w(x)k(\cdot,x)-w(\gamma(x))k(\cdot,\gamma(x)) \biggr) + \sum_{x \in D_{2}} w(\gamma(x))k(\cdot,\gamma(x))}_{\cH_{k}} \nonumber \\ & = \norm{ \sum_{x \in D \cup {\Delta}_{1} } \biggl(w(x)k(\cdot,x)-w(\gamma(x))k(\cdot,\gamma(x)) \biggr) }_{\cH_{k}} \nonumber \\ & = \norm{ \sum_{x \in D } w(x) \biggl(k(\cdot,x)-k(\cdot,\gamma(x)) \biggr) + \sum_{x \in D \cup {\Delta}_{1} } \biggl(w(x)-w(\gamma(x)) \biggr) k(\cdot,\gamma(x)) }_{\cH_{k}} \nonumber \\ &\leq \sum_{x \in D} w(x) \norm{k(\cdot,x)-k(\cdot,\gamma(x))}_{\cH_{k}}+ \sum_{x \in D \cup {\Delta}_{1}} \abs{w(x)-w(\gamma(x))} \norm{k(\cdot,\gamma(x))}_{\cH_{k}}, \end{align*} where we have repeatedly used the fact that $w$ vanishes on ${\Delta}$ (in particular, $w(\gamma(x))=0$ for $x \in D_{2}$ and $w(x)=0$ for $x \in {\Delta}_{1}$). In the sequel, we consider the Gaussian kernel $k_{{\rm G}}(x,y)=e^{-\frac{\norm{x-y}^{2}}{2 \sigma^{2}}} \ (\sigma>0)$ as the $C_{0}$-universal kernel. Since $\norm{k_{{\rm G}}(\cdot,x)-k_{{\rm G}}(\cdot,y)}_{\cH_{k_{{\rm G}}}} \leq \frac{\sqrt{2}}{\sigma} \norm{x-y}_{\infty}$ (Lemma \ref{lemm:Lip_k} in Appendix \ref{sec:stability}) and $\norm{k_{{\rm G}}(\cdot,x)}_{\cH_{k_{{\rm G}}}}=\sqrt{k_{{\rm G}}(x,x)} \equiv 1$ for any $x,y \in \lR^{2}$, we have \begin{align} \dk{k_{{\rm G}}}{w}{D}{E} \leq \frac{\sqrt{2}}{\sigma}\sum_{x \in D} w(x) \norm{x-\gamma(x)}_{\infty}+ \sum_{x \in D \cup {\Delta}_{1}} \abs{w(x)-w(\gamma(x))}. \label{eq:dk_halfway} \end{align} In this paper, we propose to use the weight function \[ w_{{\rm arc}}(x) = \arctan (C {\rm pers}(x)^{p}) ~~~ (C>0, \ p \in \lZ_{>0}). \] This is a bounded and increasing function of ${\rm pers}(x)$.
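To make the roles of $C$ and $p$ concrete, here is a minimal Python sketch (an illustration of ours; the parameter values are arbitrary, not prescribed by this paper) evaluating $w_{\rm arc}$ on diagram points:
\begin{verbatim}
import numpy as np

def w_arc(points, C=1.0, p=5):
    """Weight w_arc(x) = arctan(C * pers(x)^p) for points x = (b, d)."""
    pers = points[:, 1] - points[:, 0]   # pers(x) = d - b
    return np.arctan(C * pers**p)

D = np.array([[0.1, 0.15],   # noisy generator:     pers = 0.05
              [0.2, 1.2]])   # essential generator: pers = 1.0
print(w_arc(D))              # ~[3e-7, 0.785]: noise is heavily discounted
\end{verbatim}
Generators near the diagonal receive weights close to $0$, while persistent generators saturate near $\pi/2$.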
The corresponding positive definite kernel is \begin{equation} k_{{\rm PWG}}(x,y)=w_{{\rm arc}}(x)w_{{\rm arc}}(y)e^{-\frac{\norm{x-y}^{2}}{2 \sigma^{2}}}. \end{equation} We call it the {\em persistence weighted Gaussian kernel} (PWGK). The function $w_{{\rm arc}}$ gives a small (resp. large) weight to a noisy (resp. essential) generator. In addition, by appropriately adjusting the parameters $C$ and $p$ in $w_{{\rm arc}}$, we can control the effect of the persistence. Furthermore, we show that the PWGK has the following property: \begin{prop} \label{prop:general_stability} Let $p > 2$, and $D$ and $E$ be finite persistence diagrams whose degree-$(p-1)$ total persistences are bounded from above. Then, \begin{align*} \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D}{E} \leq L(D,E;C,p,\sigma) d_{{\rm B}}(D,E), \end{align*} where $L(D,E;C,p,\sigma)$ is a constant bounded from above by \begin{align*} \biggl\{ \frac{\sqrt{2}}{\sigma} {\rm Pers}_{p}(D)+ 2p \biggl( {\rm Pers}_{p-1}(D) + {\rm Pers}_{p-1}(E) \biggr) \biggr\} C . \end{align*} \end{prop} \begin{proof} Let $d_{{\rm B}}(D,E) = {\varepsilon}$ and $\gamma:D \cup {\Delta} {\rightarrow} E \cup {\Delta}$ be a multi-bijection achieving the bottleneck distance, i.e., $\sup_{x \in D \cup {\Delta}} \norm{x- \gamma(x)}_{\infty} = {\varepsilon}$. We have already observed \begin{align*} \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D}{E} \leq \frac{\sqrt{2}}{\sigma}\sum_{x \in D} w_{{\rm arc}}(x) \norm{x-\gamma(x)}_{\infty}+ \sum_{x \in D \cup {\Delta}_{1}} \abs{w_{{\rm arc}}(x)-w_{{\rm arc}}(\gamma(x))} \end{align*} in Equation \eqref{eq:dk_halfway}. From Lemma \ref{lemm:w_continuous} in Appendix \ref{sec:stability}, the right-hand side of the above inequality is bounded from above by \begin{align} & \frac{\sqrt{2}}{\sigma} \sum_{x \in D} w_{{\rm arc}}(x) \norm{x-\gamma(x)}_{\infty} + 2pC \sum_{x \in D \cup {\Delta}_{1}} \max \{{\rm pers}(x)^{p-1}, {\rm pers}(\gamma(x))^{p-1}\} \norm{x-\gamma(x)}_{\infty} \nonumber \\ & \leq \frac{\sqrt{2}}{\sigma} C {\varepsilon} \sum_{x \in D} {\rm pers}(x)^{p} + 2pC {\varepsilon} \sum_{x \in D \cup {\Delta}_{1}} \max \{ {\rm pers}(x)^{p-1}, {\rm pers}(\gamma(x))^{p-1} \} \label{eq:warc_bound} \\ & \leq \biggl\{ \frac{\sqrt{2}}{\sigma} {\rm Pers}_{p}(D)+ 2p \biggl( {\rm Pers}_{p-1}(D) + {\rm Pers}_{p-1}(\gamma(D \cup {\Delta}_{1})) \biggr) \biggr\} C {\varepsilon} \nonumber \\ &= \biggl\{ \frac{\sqrt{2}}{\sigma} {\rm Pers}_{p}(D)+ 2p \biggl( {\rm Pers}_{p-1}(D) + {\rm Pers}_{p-1}(E) \biggr) \biggr\} C {\varepsilon} \label{eq:total_last}. \end{align} We have used the fact that $w_{\rm arc}(x)\leq C {\rm pers}(x)^p$ in \eqref{eq:warc_bound} and ${\rm Pers}_{p-1}(\gamma(D \cup {\Delta}_{1})) = {\rm Pers}_{p-1}(E)$ in \eqref{eq:total_last}. Thus, if the degree-$(p-1)$ total persistences of $D$ and $E$ are bounded from above, then the degree-$p$ total persistence of $D$ is also bounded from above by Proposition \ref{prop:persistence_inequality}, and hence the coefficient of ${\varepsilon}$ in \eqref{eq:total_last} is bounded from above. \end{proof} The constant $L(D,E;C,p,\sigma)$ depends on $D$ and $E$, and hence Proposition \ref{prop:general_stability} alone does not imply that the map $D \mapsto E_{k_{{\rm G}}}(\mu^{w_{\rm arc}}_{D})$ is continuous. In the case of persistence diagrams obtained from ball model filtrations, from Lemma \ref{lem:point_total}, the PWGK satisfies the following stability property.
Recall that $D_{q}(X)$ denotes the persistence diagram of the ball model for $X$: \begin{thm} \label{thm:kernel_stability} Let $M$ be a triangulable compact subspace in $\lR^{d}$, $X,Y \subset M$ be finite subsets, and $p>d+1$. Then, \[ \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D_{q}(X)}{D_{q}(Y)} \leq L(M,d;C,p,\sigma) d_{{\rm B}}(D_{q}(X),D_{q}(Y)), \] where $L(M,d;C,p,\sigma)$ is a constant depending on $M,d,C,p,\sigma$. \end{thm} \begin{proof} For any finite set $X \subset M$, from Lemma \ref{lem:point_total}, there exists a constant $C_{M}>0$ such that \[ {\rm Pers}_{p}(D_{q}(X)) \leq \frac{p}{p-d} C_{M}{\rm diam}(M)^{p-d}. \] By replacing $D$ and $E$ with $D_{q}(X)$ and $D_{q}(Y)$ in \eqref{eq:total_last}, respectively, we have \begin{align*} & \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D_{q}(X)}{D_{q}(Y)} \\ & \leq \biggl\{ \frac{\sqrt{2}}{\sigma} {\rm Pers}_{p}(D_{q}(X))+ 2p \biggl( {\rm Pers}_{p-1}(D_{q}(X)) + {\rm Pers}_{p-1}(D_{q}(Y)) \biggr) \biggr\} C d_{{\rm B}}(D_{q}(X),D_{q}(Y)) \\ & \leq \biggl( \frac{\sqrt{2}}{\sigma} \frac{p}{p-d} {\rm diam}(M) + \frac{4p(p-1)}{p-1-d} \biggr) C_{M}{\rm diam}(M)^{p-1-d}C d_{{\rm B}}(D_{q}(X),D_{q}(Y)). \end{align*} Then, $L(M,d;C,p,\sigma):=\biggl( \frac{\sqrt{2}}{\sigma} \frac{p}{p-d} {\rm diam}(M) + \frac{4p(p-1)}{p-1-d} \biggr) C_{M}{\rm diam}(M)^{p-1-d}C$ is a constant independent of $X$ and $Y$. \end{proof} Let $\cP_{{\rm finite}}(M)$ be the set of finite subsets in a triangulable compact subspace $M \subset \lR^{d}$. Since the constant $L(M,d;C,p,\sigma)$ is independent of $X$ and $Y$, Proposition \ref{prop:point_stability} and Theorem \ref{thm:kernel_stability} imply that the map \[ \cP_{{\rm finite}}(M) {\rightarrow} \cH_{k_{{\rm G}}} , ~~ X \mapsto E_{k_{{\rm G}}}(\mu_{D_{q}(X)}^{w_{{\rm arc}}}) \] is Lipschitz continuous. Note again that this is a desirable stability property of the PWGK with the ball model: a small perturbation of the data points in terms of the Hausdorff distance causes only a small perturbation of the persistence diagrams in terms of the RKHS distance of the PWGK. The work most relevant to the PWGK is the persistence scale-space kernel (PSSK, \cite{RHBK15})\footnote{See Section \ref{subsubsec:pssk}.}, which provides another kernel method for the vectorization of persistence diagrams; its stability is shown with respect to the $1$-Wasserstein distance. However, to the best of our knowledge, $1$-Wasserstein stability with respect to the Hausdorff distance is not shown, that is, for point sets $X$ and $Y$, $d_{ {\rm W}_{1}}(D_{q}(X),D_{q}(Y))$ is not bounded in terms of $d_{{\rm H}}(X,Y)$ as in Proposition \ref{prop:point_stability} or Corollary \ref{cor:point_wasserstein}. Furthermore, it is shown in \cite{RHBK15} that the PSSK does not satisfy stability with respect to the $p$-Wasserstein distance for $p>1$, including the bottleneck distance $d_{{\rm B}} = d_{{\rm W}_{\infty}}$, and hence it is not ensured that results obtained from the PSSK are stable under perturbation of data points in terms of the Hausdorff distance. On the other hand, the PWGK does have the desirable stability (Theorem \ref{thm:kernel_stability}), which is one of the advantages of our method over the previous research\footnote{In fact, if we applied Theorem 3 in \cite{RHBK15} to the PWGK directly, it would conclude that the PWGK also does not satisfy the bottleneck stability. However, by using Proposition \ref{prop:general_stability}, we can avoid this difficulty, and Theorem \ref{thm:kernel_stability} holds. For more details, see Appendix \ref{sec:additive}.}.
In addition, in a way similar to \cite{RHBK15}, we show the stability of our kernel vectorization with respect to the $1$-Wasserstein distance. \begin{prop} \label{prop:pwgk_wasserstein} Let $D$ and $E$ be persistence diagrams. If a weight function $w$ is zero on the diagonal and there exist constants $c_{1},c_{2}>0$ such that \[ \abs{w(x)} \leq c_{1}, ~~ \abs{w(x)-w(y)} \leq c_{2} \norm{x-y}_{\infty} \] for any $x,y \in \lR^{2}$, then \[ \dk{k_{{\rm G}}}{w}{D}{E} \leq \pare{ \frac{\sqrt{2}}{\sigma} c_{1}+c_{2}} d_{ {\rm W}_{1}}(D,E). \] \end{prop} \begin{proof} From Equation \eqref{eq:dk_halfway}, we have \begin{align} \dk{k_{{\rm G}}}{w}{D}{E} & \leq \frac{\sqrt{2}}{\sigma}\sum_{x \in D} w(x) \norm{x-\gamma(x)}_{\infty}+ \sum_{x \in D \cup {\Delta}_{1}} \abs{w(x)-w(\gamma(x))} \label{eq:pwgk_inequality} \\ &\leq \frac{\sqrt{2}}{\sigma} c_{1} \sum_{x \in D} \norm{x-\gamma(x)}_{\infty} + c_{2} \sum_{x \in D \cup {\Delta}_{1}} \norm{x-\gamma(x)}_{\infty}. \nonumber \end{align} Since this inequality holds for any multi-bijection $\gamma$, we obtain the $1$-Wasserstein stability. \end{proof} The weight function $w_{{\rm arc}}$ is bounded from above by $\frac{\pi}{2}$, and for $p=1$, from Lemma \ref{lemm:w_continuous}, we have \[ \abs{w_{{\rm arc}}(x)-w_{{\rm arc}}(y)} \leq 2C \norm{x-y}_{\infty} ~~~ (x,y \in \lR^{2}_{{\rm ad}}) . \] Therefore, from Proposition \ref{prop:pwgk_wasserstein}, the PWGK also has $1$-Wasserstein stability: \begin{cor} Let $p=1$, and $D$ and $E$ be persistence diagrams. Then \[ \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D}{E} \leq \pare{ \frac{\pi}{\sqrt{2}\sigma} +2C} d_{ {\rm W}_{1}}(D,E). \] \end{cor} For $p>1$, we have \begin{align*} \sum_{x \in D \cup {\Delta}_{1}} \abs{w_{{\rm arc}}(x)-w_{{\rm arc}}(\gamma(x))} & \leq 2pC \sum_{x \in D \cup {\Delta}_{1}} \max \{ {\rm pers}(x)^{p-1}, {\rm pers}(\gamma(x))^{p-1} \} \norm{x-\gamma(x)}_{\infty} \\ & \leq 2pC \biggl( {\rm Pers}_{p-1}(D) + {\rm Pers}_{p-1}(E) \biggr) \sum_{x \in D \cup {\Delta}_{1}} \norm{x-\gamma(x)}_{\infty}, \end{align*} from Lemma \ref{lemm:w_continuous} and, hence, from Equation \eqref{eq:pwgk_inequality}, we have \[ \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D}{E} \leq \biggl\{ \frac{\pi}{\sqrt{2}\sigma} +2pC \biggl( {\rm Pers}_{p-1}(D) + {\rm Pers}_{p-1}(E) \biggr) \biggr\} d_{ {\rm W}_{1}}(D,E). \] Although the above inequality does not directly imply the Lipschitz continuity of the PWGK for $p>1$ with respect to the $1$-Wasserstein distance, combining it with Lemma \ref{lem:point_total}, we have the following $1$-Wasserstein stability: \begin{cor} Let $M$ be a triangulable compact subspace in $\lR^{d}$, $X,Y \subset M$ be finite subsets, and $p>d+1$. Then, \[ \dk{k_{{\rm G}}}{w_{{\rm arc}}}{D_{q}(X)}{D_{q}(Y)} \leq \pare{ \frac{\pi}{\sqrt{2}\sigma} + \frac{4p(p-1)}{p-1-d} C_{M}{\rm diam}(M)^{p-1-d} C} d_{ {\rm W}_{1}}(D_{q}(X),D_{q}(Y)), \] for some constant $C_{M} > 0$. \end{cor} \subsection{Kernel methods on RKHS} Once persistence diagrams are represented as RKHS vectors, we can apply any kernel method to those vectors by defining a kernel over the vector representation. As with standard vectors, the simplest choice is the inner product, i.e., the linear kernel \begin{align} K_{{\rm L}}(D,E;k,w):= \inn{\Ek{k}{w}{D}}{\Ek{k}{w}{E}}_{\cH_{k}} =\sum_{x \in D} \sum_{y \in E} w(x)w(y)k(x,y) \label{eq:linear_rkhs} \end{align} on the RKHS, and we call it the {\em $(k,w)$-linear kernel}.
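For concreteness, the following minimal Python sketch (ours; it assumes the Gaussian kernel $k_{\rm G}$ and the weight $w_{\rm arc}$, with a diagram stored as an array of $(b,d)$ rows, and illustrative default parameters) computes the $(k,w)$-linear kernel \eqref{eq:linear_rkhs} directly in $O(|D||E|)$ time, together with the induced RKHS distance used below:
\begin{verbatim}
import numpy as np

def k_gauss(X, Y, sigma):
    """Gaussian kernel matrix (k_G(x, y))_{x in X, y in Y}."""
    sq = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def K_L(D, E, sigma, C=1.0, p=5):
    """(k_G, w_arc)-linear kernel: sum_{x, y} w(x) w(y) k_G(x, y)."""
    wD = np.arctan(C * (D[:, 1] - D[:, 0])**p)
    wE = np.arctan(C * (E[:, 1] - E[:, 0])**p)
    return wD @ k_gauss(D, E, sigma) @ wE

def d_k(D, E, sigma, C=1.0, p=5):
    """RKHS distance between the embedded diagrams."""
    return np.sqrt(K_L(D, D, sigma, C, p) + K_L(E, E, sigma, C, p)
                   - 2 * K_L(D, E, sigma, C, p))
\end{verbatim}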
If $k$ is a $C_{0}$-universal kernel and $w$ is strictly positive on $\lR^{2}_{{\rm ad}}$, then, by Proposition \ref{prop:C0_distance}, $\dk{k}{w}{D}{E}$ defines a distance on the persistence diagrams, and it is computed as \[ \sqrt{K_{{\rm L}}(D,D;k,w)+K_{{\rm L}}(E,E;k,w)-2K_{{\rm L}}(D,E;k,w)}. \] Then, we can also consider a nonlinear kernel \begin{align} K_{{\rm G}}(D, E; k,w) = \exp \pare{- \frac{1 }{ 2 \tau^{2} } \dk{k}{w}{D}{E}^{2} } \ (\tau >0) \label{eq:gauss_rkhs} \end{align} on the RKHS, and we call it the {\em $(k,w)$-Gaussian kernel}. In this paper, if there is no confusion, we also refer to the $(k_{{\rm G}},w_{{\rm arc}})$-Gaussian kernel as the PWGK. \cite{MFDS12} observed better performance with nonlinear kernels for some complex tasks, and this is one of the reasons that we will use the Gaussian kernel on the RKHS. \subsection{Computation of Gram matrix} \label{subsec:calculation} Let $\cD=\{D_{\ell} \mid \ell=1,\ldots,n\}$ be a collection of persistence diagrams. In many practical applications, the number of generators in a persistence diagram can be large, while $n$ is often relatively small; in Section \ref{subsec:glass}, for example, the number of generators is about 30000, while $n=80$. If the persistence diagrams contain at most $m$ points, each element of the Gram matrix $(K_{{\rm G}}(D_{i},D_{j};k_{{\rm G}},w))_{i,j=1,\ldots,n}$ involves $O(m^2)$ evaluations of $e^{-\frac{\norm{x-y}^{2}}{2\sigma^{2}}}$, resulting in a complexity of $O(m^{2}n^{2})$ for obtaining the Gram matrix. Hence, reducing the computational cost with respect to $m$ is an important issue. We solve this computational issue by using the random Fourier features \cite{RR07}. To be more precise, let $z_{1},\ldots,z_{M_{{\rm rff}}}$ be random variables from the $2$-dimensional normal distribution $N( (0,0), \sigma^{-2} I)$, where $I$ is the identity matrix. This method approximates $e^{-\frac{\norm{x-y}^{2}}{2\sigma^{2}}}$ by $\frac{1}{M_{{\rm rff}}}\sum_{a=1}^{M_{{\rm rff}}} e^{\sqrt{-1}z_{a}\cdot x} (e^{\sqrt{-1}z_{a}\cdot y})^{*}$, where $*$ denotes the complex conjugate. Then, $\sum_{x \in D_{i}} \sum_{y \in D_{j}} w(x)w(y)k_{{\rm G}}(x,y)$ is approximated by $\frac{1}{M_{{\rm rff}}}\sum_{a=1}^{M_{{\rm rff}}}B^{a}_{i} (B^{a}_{j})^{*}$, where $B^{a}_{\ell}=\sum_{x \in D_{\ell}} w(x) e^{\sqrt{-1}z_{a}\cdot x}$. As a result, the computational complexity of the approximated Gram matrix is $O(mnM_{{\rm rff}}+n^2M_{{\rm rff}})$, which is linear in $m$. We note that the approximation by the random Fourier features can be sensitive to the choice of $\sigma$. If $\sigma$ is much smaller than $\norm{x-y}$, the relative error can be large. For example, in the case of $x=(1,2),y=(1,2.1)$ and $\sigma=0.01$, $e^{-\frac{\norm{x-y}^{2}}{2\sigma^{2}}}$ is about $10^{-22}$, while we observed that the approximated value can be about $10^{-3}$ with $M_{{\rm rff}}=10^{3}$. Taken together, these $m^{2}$ errors may critically affect the statistical analysis. Moreover, if $\sigma$ deviates greatly from the typical scale of $\norm{x-y}$ for $x \in D_{i},y \in D_{j}$, then most values $e^{-\frac{\norm{x-y}^{2}}{2 \sigma^{2}}}$ become close to $0$ or $1$. In order to obtain a good approximation and extract meaningful values, the choice of parameters is important. For supervised learning such as SVM, we use the cross-validation (CV) approach.
For the unsupervised case, we follow the heuristics proposed in \cite{GFTSSS07} and set \[ \sigma ={\rm median} \{ \sigma(D_{\ell}) \mid \ell=1,\ldots,n\}, \mbox{ where } \sigma(D)={\rm median} \{ \norm{x_{i}-x_{j}} \mid x_{i},x_{j} \in D, \ i<j \}, \] so that $\sigma$ is close to the typical values of $\norm{x-y}$. For the parameter $C$, we also set \[ C=( {\rm median} \{{\rm pers}(D_{\ell}) \mid \ell=1,\ldots,n\} )^{-p}, \mbox{ where } {\rm pers}(D)={\rm median} \{ {\rm pers}(x_{i}) \mid x_{i} \in D \}. \] Similarly, the parameter $\tau$ in the $(k,w)$-Gaussian kernel is defined by \begin{align} {\rm median} \rl{ \dk{k}{w}{D_{i}}{D_{j}} \ \middle| \ 1 \leq i < j \leq n}. \label{eq:tau} \end{align} \section{Experiments} \label{sec:experiment} In this section, we apply the kernel method of the PWGK to synthesized and real data, and compare the performance of the PWGK with that of other statistical methods for persistence diagrams. All persistence diagrams are obtained from the ball model filtrations and computed by CGAL \cite{DLY15} and PHAT \cite{BKRW14}. With respect to the dimension of the persistence diagrams, we use $2$-dimensional persistence diagrams in Section \ref{subsec:granular} and $1$-dimensional ones in the other parts. \subsection{Comparison to previous works} \label{subsec:comparison} \subsubsection{Persistence scale-space kernel} \label{subsubsec:pssk} The work most relevant to our method is the one proposed by \cite{RHBK15}. Inspired by the heat equation, they propose a positive definite kernel called the {\em persistence scale-space kernel} (PSSK) $K_{{\rm PSS}}$ on the persistence diagrams: \begin{align} K_{{\rm PSS}}(D,E)=\inn{\Phi_{t}(D)}{\Phi_{t}(E)}_{L^{2}(\lR^{2})} =\frac{1}{8 \pi t} \sum_{x \in D} \sum_{y \in E} \pare{ e^{-\frac{ \norm{x-y}^{2} }{8 t}} - e^{-\frac{ \norm{x-\bar{y}}^{2} }{8 t}} }, \label{eq:pssk} \end{align} where $\Phi_{t}(D)(x)=\frac{1}{4\pi t} \sum_{y \in D} \pare{ e^{-\frac{\norm{x-y}^{2}}{4 t}} - e^{-\frac{\norm{x-\bar{y}}^{2}}{4 t}} }$ and $\bar{y}:=(y^{2},y^{1})$ for $y=(y^{1},y^{2})$. We note that $\Phi_{t}(D)$ also vanishes on the diagonal because the Gaussian terms for $y$ and $\bar{y}$ cancel there. In fact, we can verify that the $(k,w)$-linear kernel contains the PSSK. Let $\tilde{D}:=D \cup D^{*}$ where $D^{*}=\{(d,b) \in \lR^{2} \mid (b,d) \in D\}$. Then, $\Phi_{t}(D)$ can also be expressed as \begin{align*} \Phi_{t}(D)=\frac{1}{4\pi t}\sum_{y \in \tilde{D}} w_{{\rm PSS}}(y) k_{{\rm G}}(\cdot,y) \ \mbox{ where } \ w_{{\rm PSS}}(y)= \begin{cases} 1, & y^{2}>y^{1} \\ 0, & y \in {\Delta} \\ -1, & y^{2} <y^{1} \end{cases}, \end{align*} which is equal to $\frac{1}{4\pi t} E_{k_{{\rm G}}}(\mu^{w_{{\rm PSS}}}_{\tilde{D}})$. Furthermore, the inner product in $\cH_{k_{{\rm G}}}$ is \begin{align} K_{{\rm L}}(\tilde{D},\tilde{E};k_{{\rm G}},w_{{\rm PSS}})=\inn{E_{k_{{\rm G}}}(\mu^{w_{{\rm PSS}}}_{\tilde{D}})}{E_{k_{{\rm G}}}(\mu^{w_{{\rm PSS}}}_{\tilde{E}})}_{\cH_{k_{{\rm G}}}}=2\sum_{x \in D} \sum_{y \in E} \pare{ k_{{\rm G}}(x,y)-k_{{\rm G}}(x,\bar{y}) }. \label{eq:pssk_embedding} \end{align} By scaling the variance parameter $\sigma$ in the Gaussian kernel $k_{{\rm G}}$ and multiplying by an appropriate scalar, Equation \eqref{eq:pssk} is the same as Equation \eqref{eq:pssk_embedding}. Thus, the PSSK can also be approximated by the random Fourier features. When we apply the random Fourier features to the PSSK, we set $\tilde{\sigma}={\rm median}\{\sigma(\tilde{D}_{\ell}) \mid \ell =1,\cdots, n\}$ as before and $t=\frac{\tilde{\sigma}^{2}}{4}$.
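As an illustration of the approximation in Section \ref{subsec:calculation}, here is a minimal Python sketch (ours; the weight $w$ is passed as a function, e.g. $w_{\rm arc}$, and the value of $M_{\rm rff}$ is illustrative) of the random Fourier feature computation of the Gram matrix of $(k_{\rm G},w)$-linear kernel values:
\begin{verbatim}
import numpy as np

def rff_features(D, w, zs):
    """B_ell^a = sum_{x in D} w(x) exp(i <z_a, x>), for all a."""
    return (w(D)[None, :] * np.exp(1j * zs @ D.T)).sum(axis=1)

def approx_gram(diagrams, w, sigma, M_rff=1000, seed=0):
    """Approximate (K_L(D_i, D_j))_{i,j} via random Fourier features."""
    rng = np.random.default_rng(seed)
    # z_a ~ N(0, sigma^{-2} I), so E[exp(i z.(x-y))] = k_G(x, y)
    zs = rng.normal(scale=1.0 / sigma, size=(M_rff, 2))
    B = np.stack([rff_features(D, w, zs) for D in diagrams])  # (n, M_rff)
    return np.real(B @ B.conj().T) / M_rff
\end{verbatim}
The same feature vectors can be reused for the PSSK by replacing the weight with $w_{\rm PSS}$ on $\tilde{D}$, as described above.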
While both methods discount noisy generators, the PWGK has the following advantages over the PSSK. (i) The PWGK can control the effect of the persistence by $C$ and $p$ in $w_{{\rm arc}}$ independently of the bandwidth parameter $\sigma$ in the Gaussian factor, while the PSSK has only the single parameter $t$, which cannot adjust the global bandwidth and the effect of persistence simultaneously. (ii) The PSSK does not satisfy the stability with respect to the bottleneck distance (see also the remarks after Theorem \ref{thm:kernel_stability}). \subsubsection{Persistence landscape} \label{subsubsec:pl} The {\em persistence landscape} \cite{Bu15} is a well-known approach in TDA for the vectorization of persistence diagrams. For a persistence diagram $D$, the persistence landscape $\lambda_{D}$ is defined by \[ \lambda_{D}(k,t) = k \mbox{-th largest value of } \min \{ t-b_{i},d_{i}-t\}_{+} \mbox{ over } (b_{i},d_{i}) \in D, \] where $c_{+}$ denotes $\max \{c,0\}$, and it is a vector in the Hilbert space $L^{2}(\lN \times \lR)$. Here, we define a positive definite kernel of persistence landscapes as a linear kernel on $L^{2}(\lN \times \lR)$: \begin{align} K_{{\rm PL}}(D,E):=\inn{\lambda_{D}}{\lambda_{E}}_{L^{2}(\lN \times \lR)}=\int_{\lR} \sum_{k=1}^{\infty} \lambda_{D}(k,t)\lambda_{E}(k,t) dt. \label{eq:landscape} \end{align} Since the persistence landscape has no parameters, no parameter tuning is needed. However, computing the integral is required, and this takes much computational time. Let $\cD=\{D_{\ell} \mid \ell=1,\ldots,n\}$ be a collection of persistence diagrams which contain at most $m$ points. Since $\lambda_{D_{i}}(k,t) \equiv 0$ for any $k>m, ~ t \in \lR, ~ i=1,\cdots,n$, calculating $\{\lambda_{D_{i}}(k,t) \mid k \in \lZ_{\geq 0}\}$, which needs sorting, takes $O(m \log m)$ time (see also \cite{BD17}). For a fixed $t$, we can calculate $( \sum_{k=1}^{m} \lambda_{D_{i}}(k,t)\lambda_{D_{j}}(k,t) )_{i,j=1,\cdots, n}$ in $O(nm \log m + n^{2}m)$, and the Gram matrix $(K_{{\rm PL}}(D_{i},D_{j}))_{i,j = 1,\cdots,n}$ in $O(M_{{\rm int}} (nm \log m +n^{2}m))$, where $M_{{\rm int}}$ is the number of partitions in the integral interval. Theoretically speaking, this implies that it takes more time to calculate the Gram matrix of $K_{{\rm PL}}$ than those of the PWGK and the PSSK computed with the random Fourier features. \subsubsection{Persistence image} \label{subsubsec:pi} As a finite-dimensional vector representation of a persistence diagram, the {\em persistence image} is proposed in \cite{AEKNPSCHMZ17}. First, we prepare a differentiable probability density function $\phi_{x}:\lR^{2} {\rightarrow} \lR$ with mean $x$ and a weight function $w:\lR^{2}_{{\rm ad}} {\rightarrow} \lR$. For a persistence diagram $D$, the corresponding {\em persistence surface} is defined by \begin{align} \rho_{D}(z) := \sum_{x \in D} w(x)\phi_{x}(z). \label{eq:pi} \end{align} Then, for fixed points $a_{0} < \cdots < a_{M} ~ (a_{i} \in \lR)$, the {\em persistence image} ${\rm PI}(D)$ is defined by an $M \times M$ matrix whose $(i,j)$-element is the integral of $\rho_{D}$ over the pixel $P_{i,j}:=(a_{i-1},a_{i}] \times (a_{j-1},a_{j}]$, i.e., \[ {\rm PI}(D)_{i,j}: = \int _{ P_{i,j}} \rho_{D}(z) dz. \] Since the persistence image can be regarded as an $M^{2}$-dimensional vector, we define a vector ${\rm PIV}(D) \in \lR^{M^{2}}$ by \[ {\rm PIV}(D)_{i+M (j -1)}: = {\rm PI}(D)_{i, j}, \] and, in this paper, call it the persistence image vector.
In \cite{AEKNPSCHMZ17}, they use the $2$-dimensional Gaussian distribution $\frac{1}{2\pi \sigma^{2}}k_{{\rm G}}(x,z)$ as $\phi_{x}(z)$ and a piecewise linear weighting function $w_{{\rm pers}}(x)$ defined by \begin{align*} w_{{\rm pers}}(x):= \begin{cases} 0 & ({\rm pers}(x) < 0) \\ \frac{1}{L} {\rm pers}(x)& (0 \leq {\rm pers}(x) \leq L) \\ 1 & ({\rm pers}(x)>L) \end{cases} ~~, \end{align*} where $L$ is a parameter. In this paper, for a collection of persistence diagrams $\cD=\{D_{\ell} \mid \ell=1,\ldots,n\}$, we set $L$ as \[ L=\max \{ L(D_{\ell}) \mid \ell=1,\cdots,n \}, \mbox{ where } L(D)=\max \{ d_{i} \mid (b_{i},d_{i}) \in D\}. \] For the grid points $a_{0} < \cdots < a_{M}$ defining the pixels $P_{i,j}=(a_{i-1},a_{i}] \times (a_{j-1},a_{j}]$, we set $a_{M}=L$ and $a_{i}=\frac{i}{M}a_{M}$ for $0 \leq i \leq M$\footnote{Here, we set $a_{0}=0$ because all generators in the ball model filtrations are born after $b=0$.}. Here, by choosing $\phi_{x}$ and $w$ in the proposed way, we define a positive definite kernel on persistence image vectors as the linear kernel on $\lR^{M^{2}}$: \begin{align} K_{{\rm PI}}(D,E)&:=\inn{{\rm PIV}(D)}{{\rm PIV}(E)}_{\lR^{M^{2}}} \nonumber \\ &=\sum_{i,j=1}^{M}{\rm PI}(D)_{i,j}{\rm PI}(E)_{i,j} \nonumber \\ &=\frac{1}{(2\pi\sigma^{2})^{2}} \sum_{x \in D} \sum_{y \in E} w_{{\rm pers}}(x)w_{{\rm pers}}(y) \sum_{i,j=1}^{M} \int_{P_{i,j}} k_{{\rm G}}(x,z) dz \int_{P_{i,j}} k_{{\rm G}}(y,z) dz . \label{eq:inner_pi} \end{align} If we choose $\phi_{x}(z)$ as a (normalized) positive definite kernel $k(x,z)$, the corresponding persistence surface \eqref{eq:pi} is the same as the RKHS vector $E_{k}(\mu^{w}_{D})$\footnote{\cite{AEKNPSCHMZ17} use a persistence diagram in birth-persistence coordinates. That is, by a linear transformation $T(b,d)=(b,d-b)$, a persistence diagram $D$ is transformed into $T(D)$. In this paper, in order to compare the persistence image and the PWGK, we use birth-death coordinates.}. Thus, it may be expected that the persistence image and the PWGK show similar performance for data analysis. However, there are several differences between the persistence image and the PWGK. (i) The mapping from a persistence diagram to the persistence image is not injective due to the discretization by the integral; on the other hand, the injectivity of the RKHS vector $E_{k}(\mu^{w}_{D})$ is ensured by Proposition \ref{prop:C0_distance}. (ii) It is also shown that the persistence image has a stability result with respect to the $1$-Wasserstein distance, but it does not satisfy the bottleneck stability (Remark 1 in \cite{AEKNPSCHMZ17}) or the Hausdorff stability, as noted after Theorem \ref{thm:kernel_stability}. (iii) The computational complexity of a persistence image does not depend on the number of generators in a persistence diagram, but instead, it depends on the number of pixels. We can reduce the computational time of the persistence image by choosing a small mesh size $M$. However, as with the data in Section \ref{subsec:Synthesized}, some situations need a fine mesh (i.e., a large mesh size). Thus, we have to be careful with the choice of mesh size. \subsection{Classification with synthesized data} \label{subsec:Synthesized} We compare the performance among the PWGK, the PSSK, the persistence landscape, and the persistence image for a simple binary classification task with SVMs. \subsubsection{Synthesized data} In this experiment, we design data sets so that important generators close to the diagonal must be taken into account to solve the classification task.
Let $S^{1}(x,y,r,N)$ be a set of $N$ equally spaced points sampled from the circle of radius $r$ centered at $(x,y)$ in $2$-dimensional Euclidean space. When we compute the persistence diagram of $S^{1}(x,y,r,N)$ for $N>3$, there always exists a generator whose birth time is approximately $\frac{\pi r}{N}$ (here we use $\sin \theta \approx \theta$ for small $\theta$) and whose death time is $r$ (Figure \ref{fig:birth-death}). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\hsize]{birth-death.pdf} \end{center} \caption{Birth and death of the generator for $S^{1}(x,y,r,N)$.} \label{fig:birth-death} \end{figure} In order to add randomness to $S^{1}(x,y,r,N)$, we extend it into $\lR^{3}$ and change $S^{1}(x,y,r,N)$ to $S_{z}^{1}(x,y,r,N)$ and $\tilde{S}_{z}^{1}(x,y,r,N)$ as follows: \begin{align*} S_{z}^{1}(x,y,r,N) &:=\{(z_{1},z_{2},z_{3}) \mid (z_{1},z_{2}) \in S^{1}(x,y,r,N), \ z_{3} \mbox{ is uniformly sampled from }[0, 0.01] \}\\ \tilde{S}_{z}^{1}(x,y,r,N) &:=S_{z}^{1}(x+W_{x}^{2},y+W_{y}^{2},r+W_{r}^{2}, \lceil N+2 W_{N} \rceil), \end{align*} where $W_{x},W_{y} \sim N(0,2)$\footnote{$N(\mu,\sigma^{2})$ is the $1$-dimensional normal distribution with mean $\mu$ and variance $\sigma^{2}$.}, $W_{r},W_{N} \sim N(0,1)$ and $\lceil c \rceil$ is the smallest integer greater than or equal to $c$. Then, we add $S_{2}:=S_{z}^{1}(x_{2},y_{2},r_{2},N_{2})$ to $S_{1}:=\tilde{S}_{z}^{1}(x_{1},y_{1},r_{1},N_{1})$ with probability $0.5$ and use it as the synthesized data. In this paper, we choose parameters by \begin{align*} r_{1}&=1+8 W^{2} ~~ (W\sim N(0,1)), \\ x_{1}=y_{1}&=1.5 r_{1}, \\ N_{1}& ~:~ \mbox{a random integer with equal probability in } (\lceil \frac{\pi r_{1}}{2} \rceil, 4\pi r_{1}), \end{align*} and set $(x_{2},y_{2},r_{2},N_{2})$ as $(0,0,0.2,10)$ (Figure \ref{fig:synthesized}). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\hsize]{combi_example.pdf} \end{center} \caption{Examples of synthesized data. Left: $S_2$ exists. Right: $S_2$ does not exist. } \label{fig:synthesized} \end{figure} For the binary classification, we introduce the following labels: \begin{align*} z_{0}=1 &~~ \mbox{if a generator for $S_{1}$ is born before $1$ and dies after $4$,} \\ z_{1}=1 &~~ \mbox{if $S_2$ exists.} \end{align*} The class label of the data set is then given by ${\bf XOR}(z_0,z_1)$. By this construction, identifying $z_{0}$ requires a relatively smooth function in the region of long lifetimes, while detecting the existence of $z_{1}$ needs delicate control of the resolution around the diagonal. \subsubsection{SVM results} SVMs are trained from persistence diagrams given by 100 data sets, and evaluated with 100 independent test data sets. As a positive definite kernel $k$, we choose the Gaussian kernel $k_{{\rm G}}$ and the linear kernel $k_{{\rm L}}(x,y):=\inn{x}{y}_{\lR^{2}}$. For a weight function $w$, we use the proposed function $w_{{\rm arc}}(x)=\arctan (C {\rm pers}(x)^{p})$, the piecewise linear weighting function $w_{{\rm pers}}(x)$ defined in Section \ref{subsubsec:pi}, and the unweighted function $w_{{\rm one}}(x) \equiv 1$. The hyper-parameters $(\sigma, C)$ in the PWGK and $t$ in the PSSK are chosen by 10-fold cross-validation, and the degree $p$ in $w_{{\rm arc}}(x)$ is set to $1, 5, 10$. While $K_{{\rm PSS}}$ and $K_{{\rm PL}}$ were originally proposed only as inner products, we also apply the Gaussian kernel on the RKHS to them following Equation \eqref{eq:gauss_rkhs}.
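For reproducibility, the following minimal Python sketch (ours; the RNG seed is arbitrary, and it mirrors the construction of $S_{z}^{1}$ and $\tilde{S}_{z}^{1}$ described above) generates the circle samples:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def S1(x, y, r, N):
    """N equally spaced points on the circle of radius r centered at (x, y),
    lifted to R^3 with a thin uniform jitter in the third coordinate."""
    th = 2 * np.pi * np.arange(N) / N
    z3 = rng.uniform(0.0, 0.01, size=N)
    return np.column_stack([x + r * np.cos(th), y + r * np.sin(th), z3])

def S1_noisy(x, y, r, N):
    """Randomly perturbed copy: center, radius, and N jittered as in the text."""
    wx, wy = rng.normal(0, np.sqrt(2), size=2)   # W_x, W_y ~ N(0, 2)
    wr, wn = rng.normal(0, 1, size=2)            # W_r, W_N ~ N(0, 1)
    return S1(x + wx**2, y + wy**2, r + wr**2, int(np.ceil(N + 2 * wn)))
\end{verbatim}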
Since $K_{{\rm PI}}$ can be seen as a discretization of the $(k_{{\rm G}},w_{{\rm pers}})$-linear kernel, we also construct another persistence image kernel by replacing $w_{{\rm pers}}$ with $w_{{\rm arc}}$, which can be considered a discretization of the PWGK. In order to check whether the persistence image with $w_{{\rm arc}}$ is an appropriate discretization of the PWGK, we try several mesh sizes $M=20,50,100$. \begin{table}[htbp] \caption{Results of SVMs with the $(k,w)$-linear/Gaussian kernel, the PSSK, the persistence landscape, and the persistence image. Average classification rates ($\%$) and standard deviations for 100 test data sets are shown.}\label{table:Synth_results} \centering \begin{tabular}{ c c | c | c } \hline \multicolumn{2}{c|}{} & Linear & Gaussian \\ \hline \multicolumn{2}{c|}{\textbf{PWGK}} & & \\ \textbf{kernel} & \textbf{weight} & & \\ & $w_{{\rm arc}} \ (p=1)$ & 75.7 $\pm$ 2.31 & 85.8 $\pm$ 5.19 (PWGK) \\ & $w_{{\rm arc}} \ (p=5)$ & 75.8 $\pm$ 2.47 ($\triangle$) & 85.6 $\pm$ 5.01 (PWGK, $\square$) \\ $k_{{\rm G}}$ & $w_{{\rm arc}} \ (p=10)$ & 76.0 $\pm$ 2.39 & 86.0 $\pm$ 4.98 (PWGK) \\ & $w_{{\rm pers}}$ & 49.3 $\pm$ 2.72 & 52.3 $\pm$ 6.60 \\ & $w_{{\rm one}}$ & 53.8 $\pm$ 4.76 & 55.1 $\pm$ 8.42 \\ \hline & $w_{{\rm arc}} \ (p=5)$ & 49.3 $\pm$ 6.92 & 51.8 $\pm$ 3.52 \\ $k_{{\rm L}}$ & $w_{{\rm pers}}$ & 51.0 $\pm$ 6.84 & 55.7 $\pm$ 8.68 \\ & $w_{{\rm one}}$ & 50.5 $\pm$ 6.90 & 53.0 $\pm$ 4.89 \\ \hline \multicolumn{2}{c|}{\textbf{PWGK with Persistence image}} & & \\ $M=20$ & $w_{{\rm arc}} \ (p=5)$ & 48.8 $\pm$ 3.75 ($\triangle_{20}$) & 52.0 $\pm$ 5.65 ($\square_{20}$) \\ $M=50$ & $w_{{\rm arc}} \ (p=5)$ & 49.2 $\pm$ 5.77 ($\triangle_{50}$) & 51.8 $\pm$ 7.23 ($\square_{50}$) \\ $M=100$ & $w_{{\rm arc}} \ (p=5)$ & 75.0 $\pm$ 2.20 ($\triangle_{100}$) & 85.8 $\pm$ 4.15 ($\square_{100}$) \\ \hline \multicolumn{2}{c|}{\textbf{PSSK} } & 50.5 $\pm$ 5.60 ($K_{{\rm PSS}}$) & 53.6 $\pm$ 6.69 \\ \hline \multicolumn{2}{c|}{\textbf{Persistence landscape}} & 50.6 $\pm$ 5.92 ($K_{{\rm PL}}$) & 48.8 $\pm$ 4.25 \\ \hline \multicolumn{2}{c|}{\textbf{Persistence image}} & & \\ $M=20$ & $w_{{\rm pers}}$ & 51.1 $\pm$ 4.38 ($K_{{\rm PI}}$) & 51.7 $\pm$ 6.86 \\ $M=50$ & $w_{{\rm pers}}$ & 49.0 $\pm$ 6.14 ($K_{{\rm PI}}$) & 52.3 $\pm$ 7.21 \\ $M=100$ & $w_{{\rm pers}}$ & 54.5 $\pm$ 8.76 ($K_{{\rm PI}}$) & 52.1 $\pm$ 6.70 \\ \hline \end{tabular} \end{table} In Table \ref{table:Synth_results}, we can see that the PWGK ($\square$) and the Gaussian kernel on the persistence image with $w_{{\rm arc}}$ and large mesh size ($\square_{100}$) show higher classification rates (about $85\%$ accuracy) than the other methods ($K_{{\rm PSS}}: 50\%$, $K_{{\rm PL}}: 50\%$, and $K_{{\rm PI}}: 55\%$). Although the $(k_{{\rm G}},w_{{\rm pers}})$-Gaussian kernel and the persistence image with the original weight $w_{{\rm pers}}$ discount noisy generators, their classification rates are close to the chance level. These unfavorable results must be caused by the difficulty in handling the local and global locations of generators simultaneously. While the result of the persistence image with a large mesh size is similar to that of the PWGK (e.g., $\square$ and $\square_{100}$), a small mesh size gives bad approximation results (e.g., $\square$ and $\square_{50}$). The reason is that a small mesh size makes coarse pixels, so the generator of $S_{2}$ and the noisy generators fall into the same coarse pixel.
On the other hand, we remark that a large mesh size $M$ requires much computational time, since the computational complexity of the persistence image is $O(M^{2})$. We observe that the classification accuracies are not sensitive to $p$. Thus, in the rest of this paper, we set $p=5$ because the assumption $p>d+1$ in Theorem \ref{thm:kernel_stability} ensures the continuity of the kernel embedding of persistence diagrams and all data points are obtained from $\lR^{3}$. \subsection{Analysis of granular system} \label{subsec:granular} We apply the PWGK, the PSSK, the persistence landscape, and the persistence image to persistence diagrams obtained from experimental data in a granular packing system \cite{FSCS11}. In this example, a partially crystallized packing with $150,000$ monosized beads (diameter $=1$mm, polydispersity $=0.025$mm) in a container is obtained by experiments, where the configuration of the beads is imaged by means of X-ray Computed Tomography. One of the fundamental interests in the study of granular packings is to understand the transition from random packings to crystallized packings. In particular, the maximum packing density $\phi_*$ that random packings can attain is still a controversial issue (e.g., see \cite{TTD00}). Here, we apply change point analysis to detect $\phi_*$. In order to observe configurations of various densities, we divide the original full system into $35$ cubical subsets containing approximately $4000$ beads. The data are provided by the authors of the paper \cite{FSCS11}. The packing densities of the subsets range from $\phi=0.590$ to $\phi=0.730$. \cite{STRFH17} computed a persistence diagram for each subset by taking the beads configuration as a finite subset in $\lR^{3}$, and found that the persistence diagrams characterize different configurations in random packings (small $\phi$) and crystallized packings (large $\phi$). Hence, it is expected that the change point analysis applied to these persistence diagrams can detect the maximum packing density $\phi_*$ as a transition from the random to crystallized packings. Our strategy is to regard the maximum packing density as the change point and detect it from a collection $\cD=\{D_{\ell} \mid \ell=1,\ldots,n\} \ (n=35)$ of persistence diagrams made from beads configurations of granular systems, where $\ell$ is the index of the packing densities listed in increasing order. As a statistical quantity for the change point detection, we use the kernel Fisher discriminant ratio \cite{HMB09} defined by \begin{equation} \label{eq:kfdr} {\rm KFDR}_{n,\ell,\gamma}(\cD)=\frac{\ell(n- \ell )}{n} \norm{ \pare{\frac{ \ell }{n} \hat{\Sigma}_{1: \ell }+\frac{n- \ell }{n} \hat{\Sigma}_{ \ell+1:n} +\gamma I}^{-\frac{1}{2}} \pare{\hat{\mu}_{ \ell+1:n}-\hat{\mu}_{1: \ell }} }_{\cH_{K}}, \end{equation} where the empirical mean element $\hat{\mu}_{i:j}$ and the empirical covariance operator $\hat{\Sigma}_{i:j}$ with data $D_{i}$ through $D_{j} \ (i<j)$ are given by \begin{align*} &\hat{\mu}_{i:j}=\frac{1}{j-i+1} \sum^{j}_{\ell=i} K(\cdot,D_{\ell}), \\ &\hat{\Sigma}_{i:j}=\frac{1}{j-i+1}\sum^{j}_{\ell=i} \pare{K(\cdot,D_{\ell})-\hat{\mu}_{i:j} } \otimes \pare{K(\cdot,D_{\ell})-\hat{\mu}_{i:j} } \end{align*} respectively, and $\gamma$ is a regularization parameter (in this paper we set $\gamma=10^{-3}$). The index $\ell$ achieving the maximum of ${\rm KFDR}_{n,\ell,\gamma}(\cD)$ corresponds to the estimated change point.
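To make \eqref{eq:kfdr} concrete, the following Python sketch (ours) computes the KFDR curve in a finite-dimensional feature space; the explicit feature vectors $\Phi(D_{\ell})$ (e.g., random Fourier features of the embedded diagrams) stand in for $K(\cdot,D_{\ell})$, so this is an approximation rather than the exact RKHS computation:
\begin{verbatim}
import numpy as np

def kfdr_curve(Phi, gamma=1e-3):
    """KFDR_{n,ell,gamma} for ell = 1..n-1, with feature vectors Phi of
    shape (n, M) standing in for the RKHS elements K(., D_ell)."""
    n, M = Phi.shape
    scores = []
    for ell in range(1, n):
        mu1, mu2 = Phi[:ell].mean(0), Phi[ell:].mean(0)
        S1 = np.cov(Phi[:ell].T, bias=True) if ell > 1 else np.zeros((M, M))
        S2 = np.cov(Phi[ell:].T, bias=True) if n - ell > 1 else np.zeros((M, M))
        A = (ell / n) * S1 + ((n - ell) / n) * S2 + gamma * np.eye(M)
        v = mu2 - mu1
        # ||A^{-1/2} v|| = sqrt(v^T A^{-1} v) since A is symmetric positive definite
        scores.append(ell * (n - ell) / n * np.sqrt(v @ np.linalg.solve(A, v)))
    return np.array(scores)  # argmax + 1 gives the estimated change point
\end{verbatim}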
In Figure \ref{fig:packing_KFDR}, all four methods detect $\ell=23$ as the sharp maximizer of the KFDR. This result indicates that the maximum packing density $\phi_*$ exists in the interval $[0.604,0.653]$ and supports the traditional observation $\phi_* \approx 0.636$ \cite{An72}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\hsize]{granular_kfdr.pdf} \end{center} \vspace{-3mm} \caption{The ${\rm KFDR}$ graphs of the PWGK, the PSSK, the persistence landscape, and the persistence image.} \label{fig:packing_KFDR} \end{figure} We also apply kernel principal component analysis (KPCA) to the same collection of the 35 persistence diagrams. Figure \ref{fig:packing_kpca} shows the $2$-dimensional KPCA plots, where each green triangle (resp. red circle) indicates the persistence diagram of a random packing (resp. a crystallized packing). We can see a clear two-cluster structure corresponding to the two physical states. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\hsize]{granular_kpca.pdf} \end{center} \vspace{-3mm} \caption{The KPCA plots of the PWGK (contribution rate: 92.9\%), the PSSK (99.7\%), the persistence landscape (83.8\%), and the persistence image (98.7\%).} \label{fig:packing_kpca} \end{figure} \subsection{Analysis of ${\rm {\bf SiO_2}}$} \label{subsec:glass} When we rapidly cool down the liquid state of ${\rm SiO}_2$, it avoids the usual crystallization and changes into a glass state. Understanding the liquid-glass transition is an important issue for current physics and industrial applications \cite{GS07}. Glass is an amorphous solid, which does not have a clear structure in the configuration of molecules, but it is also known that medium-range structures such as rings have an important influence on the physical properties of the material. It is thus promising to apply persistent homology to express the topological and geometrical structure of the glass configuration. For estimating the glass transition temperature by simulations, a traditional physical method is to prepare atomic configurations of ${\rm SiO_{2}}$ for a certain range of temperatures by molecular dynamics simulations, and then draw the temperature-enthalpy graph. The graph consists of two lines at high and low temperatures with slightly different slopes, which correspond to the liquid and the glass states, respectively, and the glass transition temperature is conventionally estimated as an interval in the transient region connecting these two lines (e.g., see \cite{El90}). However, since the slopes of the two lines are close to each other, determining the interval is a subtle problem. Usually only a rough estimate of the interval is available. Hence, we apply our framework of topological data analysis with kernels to detect the glass transition temperature. Let $\{D_\ell\mid \ell=1,\dots,80\}$ be a collection of the persistence diagrams made from atomic configurations of ${\rm SiO}_2$, sorted in decreasing order of temperature. The same data were used in the previous works \cite{HNHEMN16,NHHEN15}. The interval of the glass transition temperature $T$ estimated by the conventional method explained above is $2000K\leq T\leq 3500K$, which corresponds to $35\leq \ell \leq 50$.
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\hsize]{SiO2_kfdr.pdf} \end{center} \vspace{-3mm} \caption{The ${\rm KFDR}$ graphs of the PWGK (left), the PSSK (center) and the persistence image (right).} \label{fig:glass_KFDR} \end{figure} In Figure \ref{fig:glass_KFDR}, the KFDR plots show that the change point is estimated as $\ell=39$ by the PWGK, $\ell=33$ by the PSSK, and $\ell=33$ by the persistence image. For the persistence landscape, we could not obtain the KFDR or KPCA results within reasonable computational time. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.9\hsize]{SiO2_kpca.pdf} \end{center} \vspace{-3mm} \caption{The $2$-dimensional and $3$-dimensional KPCA plots of the PWGK (contribution rates for $2$-dimension: 81.7\%, $3$-dimension: 92.1\%), the PSSK (97.2\%, 99.3\%) and the persistence image (99.9\%, 99.9\%).} \label{fig:glass_kpca} \end{figure} As we see from the $2$-dimensional plots given by KPCA (Figure \ref{fig:glass_kpca}), the PWGK presents a clear phase change between before (green triangle) and after (red circle) the change point determined by the KFDR. This strongly suggests that the glass transition occurs at the detected change point. On the other hand, we cannot observe a clear two-cluster structure in the KPCA plots of the PSSK and the persistence image. We also remark that a more detailed cluster structure is observed in the $3$-dimensional KPCA plots of the PWGK. \subsection{Protein classification} \label{subsec:Protein} We apply the PWGK to two classification tasks studied in \cite{CMWOXW15}. They introduced the molecular topological fingerprint (MTF) as a feature vector constructed from persistent homology, and used it as the input to the SVM. The MTF is given by the $13$-dimensional vector whose elements consist of the persistences of some specific generators\footnote{The MTF method is not a general method for persistence diagrams because some elements of the MTF vector are specialized for protein data, e.g., the ninth element of the MTF vector is defined by the number of Betti $1$ bars that lie in $[4.5, 5.5]$\AA, divided by the number of atoms. For the details, see \cite{CMWOXW15}.} in persistence diagrams. We compare the performance between the PWGK and the MTF method under the same setting of the SVM reported in \cite{CMWOXW15}. The first task is a protein-drug binding problem, where the binding and non-binding of drug to the M2 channel protein of the influenza A virus is to be classified. For each of the two forms, 15 samples were obtained by NMR experiments; 10 samples are used for training and the remaining ones for testing. We randomly generate 100 such partitions and calculate the average classification rates. In the second problem, the taut and relaxed forms of hemoglobin are to be classified. For each form, 9 samples were collected by X-ray crystallography. We select one sample from each class for testing and use the remaining ones for training. All 81 combinations are evaluated to calculate the CV classification rates. The results of the two problems are shown in Table \ref{table:Protein_results}. We can see that the PWGK achieves better performance than the MTF in both problems.
\begin{table}[htbp] \caption{CV classification rates ($\%$) of SVMs with the PWGK and the MTF (cited from \cite{CMWOXW15}).} \label{table:Protein_results} \centering \begin{tabular}{c|c c} \hline & Protein-Drug & Hemoglobin \\ \hline PWGK & 100 & 88.90 \\ MTF & (nbd) 93.91 / (bd) 98.31 & 84.50 \\ \hline \end{tabular} \vspace{-3mm} \end{table} \section{Conclusion and Discussions} One of the contributions of this paper is to introduce a kernel framework for topological data analysis with persistence diagrams. We applied the kernel embedding approach to vectorize the persistence diagrams, which enables us to utilize any standard kernel method for data analysis. Another contribution is to propose a kernel specific to persistence diagrams, called the persistence weighted Gaussian kernel (PWGK). As a significant advantage, our kernel enables one to control the effect of persistence in data analysis. We have also proven a stability property with respect to the distance in the Hilbert space. Furthermore, we have analyzed synthesized and real data by using the proposed kernel; the change point detection, the principal component analysis, and the support vector machine derived meaningful results for the tasks. From the computational viewpoint, our kernel admits an efficient approximation for computing the Gram matrix. One of the main theoretical results of this paper is the stability of the PWGK (Theorem \ref{thm:kernel_stability}). It is obtained as a corollary of Proposition \ref{prop:general_stability} by restricting the class of persistence diagrams to that obtained from ball model filtrations. The reason for this restriction is that the total persistence can then be bounded from above independently of the persistence diagram. Thus, one direction to extend this work is to examine the boundedness of the total persistence for other classes of persistence diagrams, for example those obtained from sub-level sets or Rips complexes. Another direction to extend this work is to generalize the class of weight functions. The reason for the choice of $w_{{\rm arc}}$ is mainly the stability property, but in principle, we can apply any weight function to data analysis. Then, the question is what types of weight functions have a stability property with respect to the bottleneck or $p$-Wasserstein distance. Even if we are not concerned about stability properties, which weight functions are practically good for data analysis? In some applications, generators close to the diagonal may represent important features. Our statistical framework can treat such small generators as significant by using a weight function that is large close to the diagonal, while other statistical methods for persistence diagrams always treat small generators as noise. \section*{Acknowledgement} We thank Ulrich Bauer for giving us useful comments on Section \ref{subsubsec:pssk}, and Mohammad Saadatfar and Takenobu Nakamura for providing the experimental and simulation data used in Sections \ref{subsec:granular} and \ref{subsec:glass}. This work is partially supported by JST CREST Mathematics (15656429), JSPS KAKENHI Grant Number 26540016, Structural Materials for Innovation Strategic Innovation Promotion Program D72, the ``Materials research by Information Integration'' Initiative (MI$^{2}$I) project of the Support Program for Starting Up Innovation Hub from JST, and JSPS Research Fellow (17J02401). \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and examples} Consider the one-dimensional Schr\"odinger (or Sturm-Liouville) operator \begin{equation} L=\frac{\partial^2}{\partial x^2}+u(x). \end{equation} Its heat kernel $H(x,y,t)$ is the fundamental solution of the heat equation \begin{equation}\label{1.2} \left(\frac{\partial}{\partial t}-L\right)f=0. \end{equation} It is well known that $H(x,y,t)$ has an asymptotic expansion of the form \begin{equation} H(x,y,t) \sim \frac {e^{-\frac{(x-y)^2}{4t}}}{\sqrt{4\pi t}} \left( 1 + \sum_{n=1}^{\infty} H_n(x,y)t^n\right) \text{ as }t\rightarrow 0+. \end{equation} The differential equation \eqref{1.2} for $H(x,y,t)$ implies the recursion-differential equations for the coefficients $H_n=H_n(x,y)$: \begin{align} &H_0=1 \label{1.4}\\ &(x-y)\frac{\partial H_n}{\partial x}+nH_n=LH_{n-1} \text{ for }n\geq 1. \label{1.5} \end{align} This system is known to admit unique smooth solutions $H_n=H_n(x,y)$ in some neighborhood of the diagonal $x=y$. The coefficients $H_n$ are named after J. Hadamard \cite{Hadamard}, who first constructed them. Computation of heat invariants of self-adjoint elliptic operators is a well-known problem in spectral theory which has many applications, in particular to geometry and theoretical physics \cite{BGV,Berger,Fulling,Gilkey,Kac,McKS}. The asymptotics of the one-dimensional Schr\"odinger operator are of particular interest due to their relations to the Korteweg-de Vries (KdV) hierarchy. More precisely, it is known that the restriction of the heat coefficients to the diagonal gives the right-hand sides of the KdV hierarchy, see \cite{McKvM,Sch}. In the present paper we show that there are simple formulas for Hadamard's coefficients $H_n(x,y)$ in terms of the $\tau$-function of the KdV hierarchy (see the next section for a precise definition of the $\tau$-function). Remarkable explicit formulas for the coefficients of the Taylor expansion of $H_n(x,y)$ around the diagonal $x=y$ were previously constructed in \cite[Theorem 1.3]{ASch}. However, these formulas have a rather complicated combinatorial structure and it is practically impossible to write a closed formula for the coefficients even for simple potentials $u$. One advantage of the formulas derived in this paper is that they give finite expressions for the heat coefficients if the $\tau$-function is known (e.g., for solitons, or the more general algebro-geometric solutions of KdV). To see the importance of the KdV equations, let us compute the first few coefficients using the defining relations \eqref{1.4}-\eqref{1.5}. Anticipating the appearance of the $\tau$-function, let us write $u(x)$ as \begin{equation*} u(x)=2\frac{\partial^2 \log(\tau(x))}{\partial x^2}. \end{equation*} From \eqref{1.5} one can easily obtain simple formulas for $H_1$ and $H_2$. Indeed, for $n=1$ equation \eqref{1.5} gives $\partial_x\big((x-y)H_1\big)=u(x)$, hence $(x-y)H_1(x,y)=\int_{y}^{x} u(\xi)d\xi$, leading to \begin{equation} H_1(x,y)=\frac{2}{x-y}\left(\frac{\tau'(x)}{\tau(x)}- \frac{\tau'(y)}{\tau(y)}\right), \end{equation} and similarly \begin{equation} H_2(x,y)=\frac{2}{(x-y)^2}\left(\left(\frac{\tau''(x)}{\tau(x)}+ \frac{\tau''(y)}{\tau(y)}\right)-H_1(x,y) -2\frac{\tau'(x)\tau'(y)}{\tau(x)\tau(y)}\right).
\end{equation} For the third coefficient we have the following formula: \begin{equation}\label{1.8} \begin{split} &(x-y)^3H_3(x,y)=-6(x-y)H_2(x,y) +2\left(\frac{\tau'''(x)}{\tau(x)}-\frac{\tau'''(y)}{\tau(y)}\right)\\ &\quad -2\left(\frac{\tau''(x)\tau'(x)}{\tau^2(x)} -\frac{\tau''(y)\tau'(y)}{\tau^2(y)}\right) +4\left(\frac{\tau'(x)}{\tau(x)}\frac{\tau''(y)}{\tau(y)} -\frac{\tau'(y)}{\tau(y)}\frac{\tau''(x)}{\tau(x)}\right)\\ &\quad +\frac{4}{3}\left(\left(\frac{\tau'(x)}{\tau(x)}\right)^3- \left(\frac{\tau'(y)}{\tau(y)}\right)^3\right)+\int_{y}^x u^2(\xi) d \xi. \end{split} \end{equation} Notice that the integral cannot be computed explicitly, unless something remarkable happens. This is where the KdV equation comes in. Assume that $u(x)$ depends on an additional parameter $s_3$ and satisfies the KdV equation \begin{equation}\label{1.9} 4\partial_3 u = u'''+6uu', \end{equation} where $\partial_3=\partial/\partial s_3$ stands for the partial derivative with respect to $s_3$, and $u'$ is the derivative with respect to $x$. Then one can easily see that \begin{equation} \begin{split} &\int_{y}^x u^2(\xi) d \xi = -\frac{2}{3}\left(\frac{\tau'''(x)}{\tau(x)} -\frac{\tau'''(y)}{\tau(y)}\right) +2\left(\frac{\tau''(x)\tau'(x)}{\tau^2(x)} -\frac{\tau''(y)\tau'(y)}{\tau^2(y)}\right)\\ &\quad -\frac{4}{3}\left(\left(\frac{\tau'(x)}{\tau(x)}\right)^3- \left(\frac{\tau'(y)}{\tau(y)}\right)^3\right) +\frac{8}{3}\partial_{3}\log \frac{\tau(x)}{\tau(y)}, \end{split} \end{equation} which combined with \eqref{1.8} leads to a simple formula for $H_3(x,y)$. We extend these computations by showing that if $u$ is a solution of the KdV hierarchy and $\tau$ is the corresponding $\tau$-function, then there are simple explicit formulas for $H_n(x,y)$ in terms of $\tau$. The paper is organized as follows. In the next section we recall some basic facts about the KdV hierarchy and Sato theory, which are needed for the formulation and the proof of the main result. In Section 3, we prove a general formula for $H_n(x,y)$. It is interesting that the smoothness of the coefficients $H_n(x,y)$ on the diagonal is related to the Gegenbauer polynomials. As a corollary of the main theorem, we obtain the symmetry of the coefficients about the diagonal $x=y$ as well as the connection between $H_n(x,x)$ and the KdV equations. As another application of the explicit formula, we show in \cite{I} that the expansion is finite if and only if the potential $u(x)$ is a rational solution of the KdV hierarchy decaying at infinity, as studied in \cite{AM,AMM}. Equivalently, one can characterize the corresponding operators as the rank one bispectral family of \cite{DG}. For related results concerning the finiteness property of the heat kernel expansion on the integers and rational solutions of the Toda lattice hierarchy, see \cite{GI}. For solitons of the Toda lattice and purely discrete versions of the heat kernel, see \cite{Haine}. \section{Korteweg-de Vries hierarchy and Sato theory} In this section we recall some basic facts about the KdV hierarchy and Sato theory. For more details on this and the more general Kadomtsev-Petviashvili hierarchy, we refer the reader to the papers \cite{SS,DJKM} or the more detailed expositions \cite{Dickey,vM}. Let $$L=\frac{\partial^2}{\partial x^2}+u(x)$$ be a second order differential operator.
The KdV hierarchy is defined by the Lax equations \begin{equation}\label{2.1} \frac{\partial L}{\partial s_j}=[(L^{j/2})_+\;,L], \end{equation} where $j=1,3,5,\dots$ is an odd positive integer and $(L^{j/2})_+$ is the differential part of the pseudo-differential operator $L^{j/2}$. The first equation (for $j=1$) simply means that $u(x,s_1,s_3,s_5,\dots)=u(x+s_1,s_3,s_5,\dots)$, which allows us to occasionally identify $x$ and $s_1$. The next equation (for $j=3$) is exactly the KdV equation \eqref{1.9}. Let us represent $L$ in a dressing form \begin{equation}\label{2.2} L=W\partial^2 W^{-1}, \end{equation} where $W$ is a pseudo-differential operator of the form \begin{equation}\label{2.3} W=\sum_{k=0}^{\infty}\psi_k\partial^{-k}, \quad \psi_0=1. \end{equation} The wave (Baker) function $\Psi(x,s,z)$ and the adjoint wave function $\Psi^*(x,s,z)$ are defined as \begin{equation}\label{2.4} \begin{split} \Psi(x,s,z)&= W \exp\left(xz+\sum_{i=1}^{\infty}s_{2i-1}z^{2i-1}\right) \\ & =\left(\sum_{k=0}^{\infty}\psi_k z^{-k}\right)\exp\left(xz+\sum_{i=1}^{\infty}s_{2i-1}z^{2i-1}\right) \end{split} \end{equation} and \begin{equation}\label{2.5} \begin{split} \Psi^*(x,s,z)&= (W^*)^{-1} \exp\left(-xz-\sum_{i=1}^{\infty}s_{2i-1}z^{2i-1} \right) \\ & =\left(\sum_{k=0}^{\infty}\psi^*_k z^{-k}\right)\exp\left(-xz-\sum_{i=1}^{\infty}s_{2i-1}z^{2i-1} \right), \end{split} \end{equation} where $W^*$ is the formal adjoint of the pseudo-differential operator $W$. Using \eqref{2.2} one can easily see that \begin{equation}\label{2.6} L\Psi(x,s,z)=z^2\Psi(x,s,z)\text{ and }L\Psi^*(x,s,z)=z^2\Psi^*(x,s,z). \end{equation} We shall also use the reduced wave function $\bar{\Psi}$ and the reduced adjoint wave function $\bar{\Psi}^*$ obtained from $\Psi$ and $\Psi^*$, respectively, by omitting the exponential factor, i.e. \begin{equation} \bar{\Psi}(x,s,z)=\sum_{k=0}^{\infty}\psi_k z^{-k} \end{equation} and \begin{equation} \bar{\Psi}^*(x,s,z)=\sum_{k=0}^{\infty}\psi^*_k z^{-k}. \end{equation} Equations \eqref{2.6} imply \begin{equation}\label{2.9} L\bar{\Psi}(x,s,z)+2z\partial_x \bar{\Psi}(x,s,z)=0\text{ and } L\bar{\Psi}^*(x,s,z)-2z\partial_x\bar{\Psi}^*(x,s,z)=0. \end{equation} Using equations \eqref{2.2}-\eqref{2.5} one can show that the wave and the adjoint wave functions satisfy the following bilinear identities \begin{equation}\label{2.10} \mathrm{res}_z\left(z^{2n}\Psi^{(l)}(x,s,z)\Psi^*(x,s,z)\right)=0, \end{equation} for all nonnegative integers $n$ and $l$, where $\Psi^{(l)}(x,s,z)$ is the $l$th derivative of $\Psi$ with respect to $x$, and the residue is taken around $z=\infty$. The remarkable discovery of the Kyoto school was that the KdV hierarchy \eqref{2.1} can be described by a single function $\tau(x,s)$. The reduced wave and the reduced adjoint wave functions can be expressed in terms of $\tau(x,s)$ by the following formulas \begin{equation}\label{2.11} \bar{\Psi}(x,s,z)=\frac{\tau(x,s-[z^{-1}])}{\tau(x,s)} \text{ and } \bar{\Psi}^*(x,s,z)=\frac{\tau(x,s+[z^{-1}])}{\tau(x,s)}, \end{equation} where $[z]=(z,z^3/3,z^5/5,\dots)$. Finally, let us denote by $W_n(x,y)$ the coefficients of the function\footnote{This function is closely related to the Green function for $L$.} $\bar{\Psi}(x,s,z)\bar{\Psi}^*(y,s,z)$, i.e. \begin{equation}\label{2.12} \bar{\Psi}(x,s,z)\bar{\Psi}^*(y,s,z)=\sum_{n=0}^{\infty}W_n(x,y)z^{-n}. \end{equation} Using \eqref{2.11} we can easily write an explicit formula for $W_n$ in terms of the $\tau$-function.
If we denote by $\mathfrak{S}_k(s)$ the elementary Schur polynomials defined by \begin{equation} \sum_{k=0}^{\infty}\mathfrak{S}_k(s)z^k= \exp\left(\sum_{k=1}^{\infty}s_{2k-1}z^{2k-1} \right), \end{equation} then we have \begin{equation}\label{2.14} W_n(x,y)=\frac{\sum_{k=0}^n [\mathfrak{S}_k(-\tilde{\partial})\tau(x,s)]\;[\mathfrak{S}_{n-k}(\tilde{\partial})\tau(y,s)]}{\tau(x,s)\tau(y,s)}, \end{equation} where \begin{equation} \tilde{\partial}=(\partial_1,\partial_3/3,\dots,\partial_{2k-1}/(2k-1),\dots). \end{equation} \section{Explicit formulas for Hadamard's coefficients} The main result of the paper is the following theorem. \begin{Theorem} The Hadamard coefficients can be computed from the following relation \begin{equation}\label{3.1} H_n(x,y)=(-1)^n\sum_{k=0}^{n-1}\frac{2^{n-k}(n-k)_{2k}}{k!} \frac{W_{n-k}(x,y)}{(x-y)^{n+k}}, \end{equation} where $(\alpha)_k=\alpha(\alpha+1)\dots(\alpha+k-1)$ denotes the Pochhammer symbol, and $W_n(x,y)$ are defined by \eqref{2.14}. \end{Theorem} \begin{proof} To prove that the Hadamard coefficients are given by \eqref{3.1}, we need to check that \eqref{1.5} holds and that $H_n(x,y)$ are smooth on the diagonal $x=y$. To see that \eqref{1.5} holds, let us denote \begin{equation} f_n(x,y,z)=(-1)^n\sum_{k=0}^{n-1} \frac{2^{n-k}(n-k)_{2k}}{k!(x-y)^{n+k}}z^{n-k-1}. \end{equation} Then \eqref{3.1} can be rewritten as \begin{equation} H_n(x,y)=\mathrm{res}_z\left[f_n(x,y,z)\bar{\Psi}(x,s,z)\bar{\Psi}^*(y,s,z)\right]. \end{equation} Using the last equation together with \eqref{2.9} one can easily see that \begin{align*} &\left[(x-y)\partial_x+n\right]H_n(x,y)-L(x,\partial_x)H_{n-1}(x,y)\\ &\quad =\mathrm{res}_z\Big[ \big((x-y)\partial_x f_n(x,y,z)+nf_n(x,y,z)-\partial_x^2f_{n-1}(x,y,z)\big) \bar{\Psi}(x,s,z)\bar{\Psi}^*(y,s,z)\\ &\qquad + \big((x-y)f_n(x,y,z)+2zf_{n-1}(x,y,z)-2\partial_xf_{n-1}(x,y,z)\big) \partial_x\bar{\Psi}(x,s,z)\bar{\Psi}^*(y,s,z) \Big]. \end{align*} A direct computation now shows that \begin{align*} & (x-y)\partial_x f_n(x,y,z)+nf_n(x,y,z)-\partial_x^2f_{n-1}(x,y,z)=0\\ & (x-y)f_n(x,y,z)+2zf_{n-1}(x,y,z)-2\partial_xf_{n-1}(x,y,z)=0, \end{align*} which proves \eqref{1.5}. Next we need to show that $H_n(x,y)$ is well defined on the diagonal. Writing $H_n(x,y)$ as \begin{equation} H_n(x,y)=\frac{2(-1)^n}{(x-y)^{2n-1}} \sum_{k=0}^{n-1}\frac{2^k(x-y)^{k}(k+1)_{2n-2k-2}}{(n-k-1)!}W_{k+1}(x,y), \end{equation} and applying L'H\^opital's rule, we see that we need to prove that for $j=0,1,\dots,2n-2$ we have \begin{equation}\label{3.5} \sum_{k=0}^{n-1}2^k\binom{j}{k}\frac{(2n-2-k)!}{(n-k-1)!}\,\partial_x^{j-k} W_{k+1}(x,y)|_{x=y}=0. \end{equation} Using \eqref{2.12} and $$\partial_x^j\bar{\Psi}(x,s,z)=\exp\left(-xz-\sum_{i=1}^{\infty}s_{2i-1}z^{2i-1} \right) (\partial_x-z)^j\Psi(x,s,z)$$ we see that \eqref{3.5} is equivalent to the following identities \begin{equation}\label{3.6} \mathrm{res}_z\left[\left(\sum_{k=0}^{n-1}2^k\binom{j}{k}\frac{(2n-2-k)!}{(n-k-1)!}\, z^k(\partial_x-z)^{j-k}\Psi(x,s,z)\right)\Psi^*(x,s,z)\right]=0. \end{equation} Equation \eqref{3.6} will follow from the bilinear identities \eqref{2.10} if we can prove that the polynomial \begin{equation}\label{3.7} P_{n,j}(w)= \sum_{k=0}^{n-1}2^k\binom{j}{k}\frac{(2n-2-k)!}{(n-k-1)!}\,(w-1)^{j-k}, \end{equation} is an even/odd function when $j$ is an even/odd number, respectively. It is a pleasant surprise to see that these polynomials are closely related to well-known classical orthogonal polynomials, the so-called Gegenbauer polynomials.
The Gegenbauer (or ultraspherical) polynomials are defined by \begin{equation}\label{3.8} {\mathcal C}_n^{\lambda}(w)=\sum_{k=0}^n2^k\binom{\lambda+k-1}{k} \binom{2\lambda+n+k-1}{n-k}(w-1)^k, \end{equation} see for example \cite[pages 302--303]{AAR}. Notice that this definition can be used for arbitrary $\lambda$. If $\lambda>-1/2$ and $\lambda\neq 0$, these polynomials are orthogonal on the interval $(-1,1)$ with respect to the weight $(1-x^2)^{\lambda-\frac{1}{2}}$, which, in particular, implies that ${\mathcal C}_n^{\lambda}(w)$ is an even/odd function when $n$ is even/odd, respectively. However, we need these polynomials also for negative values of $\lambda$. In this case, we can use the three-term recurrence relation \begin{equation} 2(n+\lambda)w{\mathcal C}_n^{\lambda}(w)=(n+1){\mathcal C}_{n+1}^{\lambda}(w) +(n+2\lambda-1){\mathcal C}_{n-1}^{\lambda}(w), \end{equation} together with ${\mathcal C}_0^{\lambda}=1$ and ${\mathcal C}_1^{\lambda}=2\lambda w$, to deduce that ${\mathcal C}_n^{\lambda}(w)$ is an even/odd polynomial when $n$ is even/odd, respectively. Changing the summation index in \eqref{3.7} we can rewrite $P_{n,j}(w)$ as \begin{equation}\label{3.10} P_{n,j}(w)= \sum_{k=\max(0,j-n+1)}^{j}2^{j-k}\binom{j}{k}\frac{(2n-j-2+k)!}{(n-j+k-1)!}\, (w-1)^{k}. \end{equation} From the last equation and the defining relation \eqref{3.8} for the Gegenbauer polynomials one can see that\footnote{$(-1)!!=1$ and $(2k-1)!!=1\cdot 3\cdots(2k-1)$ for $k\geq 1$.} \begin{align} P_{n,j}(w) &= j!2^{n-1}(2n-2j-3)!!\;{\mathcal C}_{j}^{n-j-\frac{1}{2}}(w) &\text{ for } 0\leq j\leq n-1,\\ P_{n,j}(w) &= \frac{(-1)^{j-n+1}j!2^{n-1}}{(2j-2n+1)!!}\; {\mathcal C}_{j}^{n-j-\frac{1}{2}}(w) &\text{ for }n\leq j \leq 2n-2, \end{align} which completes the proof. \end{proof} From \eqref{2.14} and \eqref{3.1} we obtain the following \begin{Corollary} The Hadamard coefficients $H_n(x,y)$ are symmetric functions of $x$ and $y$, i.e. we have $$H_n(x,y)=H_n(y,x).$$ \end{Corollary} Finally, we show that the heat coefficients $\{H_n(x,x)\}$ determine the right-hand sides of the KdV equations \eqref{2.1}. \begin{Corollary} We have \begin{equation}\label{3.13} H_n(x,x)=\frac{2^n}{(2n-1)!!}W_{2n}(x,x) \end{equation} and \begin{equation}\label{3.14} [(L^{\frac{2n-1}{2}})_+,L]=2\partial_x W_{2n}(x,x). \end{equation} Thus, the KdV hierarchy \eqref{2.1} is equivalent to the following equations \begin{equation} \partial_{2n-1}u=\frac{(2n-1)!!}{2^{n-1}}\partial_x H_n(x,x), \text{ for } n=1,2,\dots. \end{equation} \end{Corollary} \begin{proof} Using \eqref{3.1} and applying L'H\^opital's rule $2n-1$ times, we see that \begin{equation}\label{3.16} \begin{split} H_n(x,x)&=(-1)^n\frac{2}{(2n-1)!}\\ &\qquad\times\sum_{k=0}^{n-1}2^k\binom{2n-1}{k}\frac{(2n-k-2)!}{(n-k-1)!} \partial_x^{2n-1-k}W_{k+1}(y,x)|_{y=x}\\ & = (-1)^n\frac{2}{(2n-1)!} \mathrm{res}_z\left[z^{2n-1}\left(P_{n,2n-1}(z^{-1}\partial_x)\Psi(x,s,z)\right)\; \Psi^*(x,s,z)\right], \end{split} \end{equation} where $P_{n,2n-1}(w)$ is the polynomial defined by \eqref{3.10} for $j=2n-1$: \begin{equation} P_{n,2n-1}(w)= \sum_{k=n}^{2n-1}2^{2n-1-k}\binom{2n-1}{k}\frac{(k-1)!}{(k-n)!}\, (w-1)^{k}. \end{equation} Notice that this time we have \begin{equation} P_{n,2n-1}(w)= (-1)^n2^{n-1}(2n-2)!!\left({\mathcal C}^{-n+\frac{1}{2}}_{2n-1}(w)+1\right), \end{equation} and using the same argument (the bilinear identity and the fact that ${\mathcal C}_{2n-1}^{-n+\frac{1}{2}}(w)$ is an odd polynomial) we obtain \eqref{3.13} from \eqref{3.16}.
From \eqref{2.2}, \eqref{2.4}, \eqref{2.5} and \eqref{2.11} it follows that \begin{equation*} L^{\frac{2n-1}{2}} = W \partial_x^{2n-1} W^{-1}= \sum_{i,j=0}^{\infty} \frac{\mathfrak{S}_i(-\tilde{\partial})\tau(x,s)}{\tau(x,s)}\; \partial_x^{2n-1-i-j}\cdot \frac{\mathfrak{S}_j(\tilde{\partial})\tau(x,s)}{\tau(x,s)}. \end{equation*} Combining this formula with \eqref{2.14} we see that the coefficient of $\partial_x^{-1}$ in $L^{\frac{2n-1}{2}}$ is $W_{2n}(x,x)$. If we denote by $(L^{\frac{2n-1}{2}})_-$ the integral (Volterra) part of the pseudo-differential operator $L^{\frac{2n-1}{2}}$ we obtain \begin{align*} &[(L^{\frac{2n-1}{2}})_+,L]=[(L^{\frac{2n-1}{2}})_+,L]_+ =[L^{\frac{2n-1}{2}}-(L^{\frac{2n-1}{2}})_-,L]_+=[L,(L^{\frac{2n-1}{2}})_-]_+\\ &= [\partial_x^2+u(x),W_{2n}(x,x)\partial_x^{-1}+O(\partial_x^{-2})]_+ =2\partial_x(W_{2n}(x,x)), \end{align*} which gives \eqref{3.14} and completes the proof. \end{proof}
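As a concrete illustration of the defining relations \eqref{1.4}-\eqref{1.5} and of the $\tau$-function formulas, the following minimal symbolic check (a sketch; the one-soliton $\tau$-function $\tau(x)=1+e^{2x}$ is chosen purely for illustration) verifies that the closed form for $H_1$ satisfies the recursion $(x-y)\partial_x H_1+H_1=LH_0=u(x)$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative choice: the one-soliton tau-function.
tau = lambda s: 1 + sp.exp(2*s)
f   = lambda s: sp.diff(tau(s), s) / tau(s)        # tau'/tau
u   = lambda s: 2*sp.diff(sp.log(tau(s)), s, 2)    # u = 2 (log tau)''

# Closed form for H_1 in terms of the tau-function.
H1 = 2/(x - y) * (f(x) - f(y))

# Recursion check: (x - y) dH1/dx + H1 should equal L H_0 = u(x).
residual = (x - y)*sp.diff(H1, x) + H1 - u(x)
print(sp.simplify(residual))   # prints 0
\end{verbatim}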
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:introduction} Deep neural networks have shown remarkable success in a wide variety of machine learning (ML) applications, ranging from biometric authentication (e.g., facial image recognition) and medical diagnosis (e.g., CT lung cancer detection) to autonomous driving systems (e.g., traffic sign classification). However, while these models can achieve outstanding performance on benign data points, recent research has shown that state-of-the-art models can be easily fooled by malicious data points intentionally crafted with adversarial perturbations~\cite{szegedy2013intriguing}. To date, the most effective defense mechanism is to incorporate adversarial examples during model training, known as adversarial training (AT)~\cite{madry2017towards, zhang2019theoretically}. Nonetheless, current adversarial training approaches primarily consider a single perturbation type (or threat model) quantified in a specific distance metric (e.g., an $\ell_{p}$-ball). In this regard, the lack of exploration of compositional adversarial robustness against a combination of several threat models can lead to impractical conclusions and undesirable bias in robustness evaluation. For example, a model that is robust to perturbations within an $\ell_{p}$-ball is not necessarily simultaneously robust to other realistic semantic perturbations (e.g., hue, saturation, rotation, brightness, and contrast). To tackle this issue, in this paper we propose \textbf{generalized adversarial training (GAT)}, which can harden models against a wide range of threat models, from a single $\ell_{\infty}$-norm or semantic perturbation to a combination of them. Notably, extending standard adversarial training to composite adversarial perturbations is a challenging and non-trivial task: the perturbation types are applied sequentially, so the attack order affects the effectiveness of the composite adversarial example. To bridge this gap, we propose an efficient attack-order scheduling algorithm to learn the optimal ordering of the various perturbation types, which is then incorporated into the GAT framework. \begin{figure*}[t] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=.94\textwidth, trim=1.4cm 10.3cm 1.27cm 1.2cm, clip]{preprint/assets/figs/cvpr-header-new.pdf} \caption{} \label{fig:composite_attack_exp} \end{subfigure} \hfill \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=.94\textwidth, trim=1.9cm 0.5cm 2cm 0.4cm, clip]{preprint/assets/figs/header-chart-3.pdf} \caption{} \label{fig:composite_attack_eval} \end{subfigure} \caption{(a) \textbf{Qualitative study:} illustration of perturbed examples generated by different attack combinations and their predictions by different ResNet50 models \cite{He2015DeepRL} on ImageNet, including standard training, Madry's $\ell_{\infty}$ robust training~\cite{madry2017towards}, and our proposed \textbf{GAT}. The results show that our proposed GAT maintains robust accuracy under a variety of composite adversarial attacks, even as the number of attacks increases. (b) \textbf{Quantitative study:} the attack success rate (ASR, \%) of the above models under multiple composite attacks (a higher ASR means less robust), computed on all correctly classified test samples for each model.
The corresponding robust accuracy (RA) is listed in Table \ref{tab:ra_with_diff_attack_num}.} \label{fig:demo_fig} \end{figure*} Different from existing works, this paper aims to address the following fundamental questions: (a) How can adversarial training be generalized from a single threat model to multiple ones? (b) How can the perturbation order be optimized over a set of semantic and $\ell_{p}$-norm perturbations? (c) Can GAT outperform other adversarial training baselines against composite perturbations? Our main contributions provide affirmative answers to these questions: \newline \begin{enumerate}[leftmargin=*] \item We propose the composite adversarial attack (CAA), a novel and unified approach to generating adversarial examples across multiple perturbation types with attack-order scheduling, covering semantic perturbations (\textit{Hue, Saturation, Rotation, Brightness, and Contrast}) and the $\ell_{p}$-norm space. To the best of our knowledge, this paper is the first work that leverages a scheduling algorithm for finding the optimal attack order in composite adversarial attacks. \item Building upon our composite adversarial attack framework, we propose generalized adversarial training (\textbf{GAT}) toward achieving compositional adversarial robustness, which enables the training of neural networks robust to composite adversarial attacks. \item For the attack part, our proposed composite adversarial attack exhibits a high attack success rate (ASR) against standard and $\ell_{\infty}$-norm robust models. Moreover, our method with a learned attack order significantly outperforms random attack ordering, giving an average $9$\% and $7$\% increase in ASR on CIFAR-10 and ImageNet, respectively. \item For the defense part, comparing our GAT to other adversarial training baselines \cite{madry2017towards, zhang2019theoretically, laidlaw2021perceptual, zhang2020fat, wu2020adversarial, wong2020fast}, the results show that the robust accuracy of GAT outperforms them by an average of 30\% $\sim$ 60\% on semantic attacks and 15\% $\sim$ 22\% on full attacks. \end{enumerate} To further motivate the effectiveness of our proposed GAT framework, Fig.~\ref{fig:demo_fig} compares the performance of different models under selected attacks, ranging from a single threat to composite threats. The models include standard training, $\ell_\infty$-robust training, and our proposed GAT. The results show the limitation of the $\ell_\infty$-robust model \cite{madry2017towards}, which is robust against the same-type adversarial attack but becomes fragile against semantic adversarial attacks and their compositions. Our proposed GAT addresses this limitation by providing a novel training approach that is robust to combinations of multiple adversarial threats. \section{Related Work}\label{sec:relatedwork} \paragraph{Adversarial Semantic Perturbations} Most studies on adversarial machine learning concentrate on generating examples that can trick a model into making wrong predictions~\cite{biggio2018wild}. Several works have primarily focused on the vulnerability of deep neural networks against $\ell_{p}$-norm adversarial threats~\cite{goodfellow2014explaining, carlini2017towards, chen2017ead,croce2020reliable}.
Some works consider adversarial threats beyond the $\ell_{p}$-norm, which generally arise from natural transformations such as geometry, color, and brightness, and are classified as semantic perturbations~\cite{hosseini2018semantic, joshi2019semantic, shamsabadi2020colorfool, wang2021demiguise, wang2020generating, bhattad2019unrestricted, kang2019testing, qiu2020semanticadv}. In contrast to $\ell_{p}$-norm perturbations, semantic perturbations normally lead to semantically similar or natural-looking adversarial examples that are nevertheless far from the original in the $\ell_{p}$-norm sense. For color translation, \cite{hosseini2018semantic} shows that randomly shifting the Hue and Saturation components in the Hue-Saturation-Value (HSV) color space of images can dramatically decrease the accuracy of a neural network by 88\%. A similar idea is proposed by \cite{bhattad2019unrestricted}, including colorization and texture-transfer attacks, which can either perturb a gray-scale image with natural colorization or infuse the texture of one image into another. For geometric transformations,~\cite{xiao2018spatially, engstrom2019exploring} target rotation. The former uses coordinate-wise optimization for each pixel, which is computationally expensive; the latter proposes a simple approach that parametrizes the spatial transformation with a set of tunable parameters. \cite{wong2019wasserstein} utilizes the Wasserstein distance for generating adversarial examples beyond the $\ell_{p}$-norm. \cite{Mohapatra_2020_CVPR} studies certified robustness against semantic perturbations but does not discuss adversarial training. Prior works~\cite{qiu2020semanticadv, dunn2020evaluating, Zhou2021TowardsDA} exploit context-sensitive changes to the features of the input and perturb images with the corresponding feature-map interpolation. \paragraph{Composite Adversarial Perturbations} As the previous literature shows~\cite{laidlaw2019functional, bhattad2019unrestricted, jordan2019quantifying}, combining different adversarial threats can harden the resulting adversarial examples. The experimental results of these prior works show how to expand the perturbation space of an image and further increase the misclassification rate of neural networks. Laidlaw and Feizi~\cite{laidlaw2019functional} propose the ReColorAdv attack, which admits multi-functional threats to perturb every input pixel and can be combined with an additional $\ell_{p}$-norm threat. Instead of changing the input by adding perturbations functionally, Mao et al. utilize genetic algorithms to search for the best combination of multiple attacks, which is stronger than a single attack~\cite{mao2020composite}. However, they merely consider searching the order of attack combinations in particular norm spaces (i.e., $\ell_{2}$, $\ell_{\infty}$, and the corruption semantic space) and cannot cope with all attacks simultaneously. In this paper, we consider the scheduling problem for multiple attack types, which can be easily extended to support different attack combinations. Kang et al.~\cite{kang2019testing} propose to measure model robustness with an ensemble of unforeseen attacks from broader threat models, including JPEG, Fog, Snow, Gabor, etc. They consider the worst case over all attacks and attempt to improve model performance against these unforeseen adversarial threats.
\paragraph{Adversarial Training} Adversarial training (AT) is one of the most effective ways to derive a robust model that can defend against the corresponding adversarial attacks~\cite{kurakin2016adversarial_ICLR, madry2017towards, zhang2019theoretically}. Madry et al.~\cite{madry2017towards} propose to minimize the worst-case loss in a region around the input. On top of that, Zhang et al.~\cite{zhang2019theoretically} make the robust decision boundary smoother by considering both the natural input and the perturbed input in computing the loss, along with a parameter $\beta$ that balances the two terms. Furthermore, Laidlaw et al.~\cite{laidlaw2021perceptual} expand the adversarial attack from a single threat to broader threats via a neural perceptual distance measure, generalizing AT with perceptual adversarial examples. Recently, Mao et al. proposed combining robust components as building blocks of vision transformers, which helps obtain a state-of-the-art robust vision transformer~\cite{mao2022towards}. AT with adversarial transformations is also studied in~\cite{stutz2020confidence, engstrom2019exploring}. However, most of these works only target a single adversarial threat model. Specifically, as shown in Fig.~\ref{fig:demo_fig}, a robust classifier that can defend against $\ell_{\infty}$-norm perturbations still has low robustness to composite semantic attacks or other $\ell_{q}$ threats ($q \neq \infty$) \cite{sharma2017attacking}. Adversarial robustness under multiple adversarial threats has been discussed in~\cite{tramer2019adversarial,wang2019towards, pmlr-v119-maini20a}. These works propose multiple-norm adversarial training, which yields models simultaneously robust against multiple $\ell_{p}$-norm (e.g., $\ell_{1}$, $\ell_{2}$, and $\ell_{\infty}$) attacks. In particular, although Tramer et al.~\cite{tramer2019adversarial} considered alternately optimizing perturbation types \textit{given a fixed attack order}, the search for the strongest possible attack order was left open. Moreover, the considered perturbations are added to the same data sample simultaneously rather than sequentially. In contrast to their work, we consider composite adversarial perturbations involving the design of the attack order and extend beyond $\ell_p$-norm attacks by considering semantic perturbations. \section{Composite Adversarial Attack and Generalized Adversarial Training}\label{sec:method} In this section, we first propose the composite adversarial attack (CAA) framework (Fig.~\ref{fig:caa_implementation}) and elucidate the details of our attack-order scheduling algorithm. We then adopt CAA for adversarial training, which we call generalized adversarial training (GAT). \subsection{Composite Attack Formulation}\label{subsec:composite_attack_formulation} \begin{figure} \centering \includegraphics[width=\linewidth]{preprint/assets/figs/CAA_Flow.pdf} \caption{A pipeline of the proposed \textit{composite adversarial attack} method with the ability to dynamically optimize the attack order and harden adversarial examples.} \label{fig:caa_implementation} \end{figure} \paragraph{Composite Adversarial Attacks with Order Scheduling.} Let $\mathcal{F}: \mathcal{X}\rightarrow\mathbb{R}^d$ be an image classifier that takes an image $x\in\mathcal{X}$ as input and generates a $d$-dimensional vector of prediction scores (e.g., softmax outputs) for $d$ classes, and let $\Omega=\{A_1,\ldots, A_n\}$ denote an attack set that contains $n$ attack types.
For each attack $A_k$, we define a corresponding perturbation interval (boundary) $\epsilon_k = [\alpha_k,\beta_k]$ to govern the attack power of $A_k$. We then denote the corresponding perturbation intervals of $\Omega$ by $E=\{\epsilon_{k}|k\in\{1,\ldots,n\}\}$. In CAA, we optimize not only the power of each attack component in $\Omega$, but also the attack order applied to the image $x$. That is, considering $\mathcal{I}_n=\{i\}_{i=1}^{n}$, we use an assignment function $\pi_{i}: \mathcal{I}_n\to\mathcal{I}_n$ to determine the attack order to be used under the $i$-th schedule. As shown in Fig.~\ref{fig:caa_implementation}, after the $i$-th scheduling, a composite adversarial example $x_\text{c-adv}$ can be formulated as: $$x_\text{c-adv} = A_{\pi_{i}(n)}(A_{\pi_{i}(n-1)}(\cdots A_{\pi_{i}(1)}(x))).$$ Note that the input $x$ is perturbed in the order $A_{\pi_{i}(1)} \to A_{\pi_{i}(2)} \to \cdots \to A_{\pi_{i}(n)}$. For each attack operation $A_{k} \in \Omega$, an input $x$ is transformed into a perturbed sample with a specific perturbation level $\delta_{k}$, where $\delta_{k}\in\epsilon_{k}$ is optimized via projected gradient descent to maximize the classification error (e.g., the cross-entropy loss $\mathcal{L}$). Therefore, the operation $A_k(x;\delta_k)$ amounts to optimizing $\delta_k$, that is: \begin{align} \label{eqn:adv} \mathop{\arg\max}\limits_{ \delta_k\in\epsilon_k} \mathcal{L} (\mathcal{F}(A_{k}(x;\delta_{k})),y), \end{align} where $y$ is the ground-truth label of $x$. We name this procedure component-wise PGD (Comp-PGD) and explain the details in Sec. \ref{subsec:comp_pgd}. Since the assignment function $\pi_{i}(\cdot)$ is essentially a permutation matrix, we can optimize it over its convex relaxation (the Birkhoff polytope of doubly stochastic matrices) by treating it as a relaxed \textit{scheduling matrix} $Z^{i}$, where $Z^{i}=\big[\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\big]^\top$ is doubly stochastic, i.e. $\mathbf{z}_j\in\mathbb{R}^{n}$, $\sum_{i}{z_{ij}}=\sum_{j}{z_{ij}}=1$, $\forall~i,j\in\{1,\ldots,n\}$. Furthermore, we can utilize the Hungarian algorithm~\cite{kuhn1955hungarian, Munkres1957AlgorithmsFT} to obtain an optimal attack-order assignment. In sum, we formalize CAA's attack-order auto-scheduling as a constrained optimization problem, where the attack order with maximum classification error is obtained by solving: \begin{gather} \label{eqn:advorder} \mathop{\max}\limits_{ \pi} \mathcal{L} (\mathcal{F}(A_{\pi(n)} (\cdots A_{\pi(1)}(x;\delta_{\pi(1)}) \cdots;\delta_{\pi(n)}) ),y)\text{.} \end{gather} \paragraph{The Surrogate Image for Scheduling Optimization.} Since $x_\text{c-adv}$ contains only one attack perturbation at each iteration, using it alone makes it difficult to optimize the likelihood of the other attacks in the relaxed scheduling matrix. To manage this issue, we adopt a surrogate composite adversarial image $x_\text{surr}$ to relax the restriction and compute the loss for updating the scheduling matrix $Z$, i.e., by weighting each type of attack perturbation with its corresponding probability at each iteration. We can then optimize the scheduling matrix $Z$ by maximizing the corresponding loss $\mathcal{L}(\mathcal{F}(x_\text{surr}),y)$. Given the attack pool $\Omega$ of $n$ attacks, the surrogate image is computed over $n$ iterations. For each iteration $i$, the surrogate image is defined as: $${x_\text{surr}^i}=\sum_{j=1}^{n}{z_{ij}}\cdot{A_{j}(x_\text{surr}^{i-1};\delta_{j})}\text{, }\forall i\in{\{1,\ldots,n\}}\text{,}$$ and $x_\text{surr}^0 = x$.
Let $\mathbf{A}^{\top}=\big(A_{1},\ldots,A_{n}\big)$ denote a vector of all attack types in $\Omega$. Consequently, after $n$ iterations, the resulting surrogate image $x_\text{surr}^n$ can be written in the following compositional form: \begin{equation} \label{eqn:surrogate_image} \begin{aligned} x_\text{surr}^n &= \mathbf{z}_n^{\top}\mathbf{A}(\cdots(\mathbf{z}_{2}^{\top}\mathbf{A}(\mathbf{z}_{1}^{\top}\mathbf{A}(x)))) \\ & = \mathbf{z}_n^{\top}\mathbf{A}(\cdots(\mathbf{z}_{2}^{\top}\mathbf{A}(\sum_{j=1}^{n}{z_{1j}}\cdot{A_{j}(x;\delta_{j})}))) \\ & = \mathbf{z}_n^{\top}\mathbf{A}(\cdots(\mathbf{z}_{2}^{\top}\mathbf{A}(x_\text{surr}^1))) \\[3pt] & = \mathbf{z}_n^{\top}\mathbf{A}(\cdots(x_\text{surr}^2)). \end{aligned} \end{equation} \paragraph{How to Learn the Optimal Attack Order?} Learning an optimal attack order, expressed by the scheduling matrix $Z^\star$, is originally a combinatorial optimization problem of solving for the best column and row permutation of a scheduling matrix. Sinkhorn and Knopp proved that any positive square matrix can be turned into a doubly stochastic matrix by alternately normalizing its rows and columns~\cite{sinkhorn_1966}. Furthermore, Mena et al. showed theoretically how to extend Sinkhorn normalization to learn the optimal permutation matrix \cite{Mena2018Sinkhorn}. Similarly, in our problem, optimizing the attack order over a doubly stochastic matrix $Z$ can be cast as a maximization problem whose feasible solution set is convex. With the surrogate composite adversarial example $x_\text{surr}$, the update of the scheduling matrix $Z^t$ at iteration $t$ can be formulated as: \begin{align} \label{eqn:dsm_update} Z^{t} &= \mathcal{S}\big(\exp(Z^{t-1}+\frac{\partial \mathcal{L}(\mathcal{F}(x_\text{surr}),y)}{\partial Z^{t-1}})\big) \text{,} \end{align} where $\mathcal{S}$ (Sinkhorn normalization) can be done in a limited number of iterations~\cite{sinkhorn1967concerning}. Here, we fix the number of Sinkhorn iterations to 20. After deriving an updated scheduling matrix, we utilize the Hungarian assignment algorithm to obtain the updated order-assignment function $\pi_{t}(\cdot)$, as shown in Eq.~\ref{eqn:hungarian}: \begin{align} \label{eqn:hungarian} \pi_{t}(j) := \arg \max \mathbf{z}_{j} \text{, } \forall j \in \{1,\ldots,n\}. \end{align} \subsection{The Component-wise PGD (Comp-PGD)}\label{subsec:comp_pgd} Having addressed the attack-scheduling issue, we now elucidate the design of the adversarial perturbation in each attack type (component) of our composite adversarial attacks. The parameters of most semantic perturbations are continuous. Therefore, we propose to search the parameters of semantic attacks by a gradient-based algorithm within each continuous semantic space. In particular, we show how to optimize the parameters of the following five semantic perturbations: (i) hue, (ii) saturation, (iii) brightness, (iv) contrast, and (v) rotation.
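Before detailing the individual component updates, the order-scheduling loop of Eqs.~\ref{eqn:dsm_update}-\ref{eqn:hungarian} can be summarized in a short sketch (a sketch under our own naming; SciPy's \texttt{linear\_sum\_assignment} plays the role of the Hungarian algorithm, and the gradient of the surrogate loss with respect to $Z$ is assumed to be supplied by an autodiff framework):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(M, iters=20):
    # Alternately normalize rows and columns so that M becomes
    # (approximately) doubly stochastic.
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)
        M = M / M.sum(axis=0, keepdims=True)
    return M

def update_schedule(Z, grad_Z):
    # One update of the relaxed scheduling matrix: a gradient
    # ascent step on the surrogate loss, then Sinkhorn projection.
    return sinkhorn(np.exp(Z + grad_Z))

def round_to_order(Z):
    # Hungarian rounding of the doubly stochastic Z to a hard
    # attack order pi; negate Z because SciPy minimizes cost.
    rows, cols = linear_sum_assignment(-Z)
    return cols

# Toy usage with n = 3 attack types and a made-up gradient.
n = 3
Z = np.full((n, n), 1.0 / n)
Z = update_schedule(Z, grad_Z=0.1 * np.random.randn(n, n))
print(round_to_order(Z))   # e.g. [2 0 1]
\end{verbatim}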
We extend the iterative gradient sign method~\cite{Kurakin2016Adversarial} to optimize our semantic perturbations over $T$ iterations, defined as: \begin{align} \label{eqn:attack_pgd} \delta_{k}^{t+1} = \text{clip}_{\epsilon_{k}} \big( \delta_{k}^{t} + \alpha\cdot\text{sign}(\nabla_{\delta_{k}^{t}}\mathcal{L}(\mathcal{F}(A_{k}(x;\delta_{k}^{t})),y))\big)\text{,} \end{align} where $t$ denotes the iteration index, $\alpha$ is the step size of each iteration, and $\nabla_{\delta_{k}^{t}}\mathcal{L}(\cdot)$ is the gradient of the loss function $\mathcal{L}$ with respect to the perturbation variable $\delta_{k}^{t}$. For $\epsilon_k=[\alpha_k,\beta_k]$, the element-wise clipping operation $\text{clip}_{\epsilon_k}(z)$ is defined as: \begin{align*} \text{clip}_{\epsilon_k}(z) = \text{clip}_{[\alpha_k,\beta_k]}(z) = \left\{ \begin{array}{rl} \alpha_{k} & \mbox{if } z < \alpha_{k} \text{,} \\ z & \mbox{if } \alpha_{k} \leq z \leq \beta_{k} \text{,} \\ \beta_{k} & \mbox{if } \beta_{k} < z \text{.} \end{array}\right. \end{align*} Next, we elucidate each semantic attack. Concrete examples of each attack are shown in Appendix \ref*{sec:appendix_attack_levels}, and the loss-trace analysis of Comp-PGD is shown in Appendix \ref*{sec:appendix_loss_landscape}. \paragraph{Hue.} The hue value is defined on a color wheel in HSV color space, ranging from $0$ to $2\pi$. In the hue attack ($A_{H}$), we define the perturbation interval of hue as $\epsilon_{H}=[\alpha_{H},\beta_{H}]$, $-\pi\leq\alpha_{H}\leq\beta_{H}\leq\pi$. Let $x_{H}=\text{Hue}(x)$ denote the hue value of an image $x$; the hue perturbation at step $t$ is $\delta_{H}^{t}$, and the initial value $\delta_{H}^{0}$ is chosen uniformly from $\epsilon_{H}$. Then $\delta_{H}^t$ is updated iteratively via Eq.~\ref{eqn:attack_pgd}, and the hue value of the perturbed image $x_\text{c-adv}^t=A_{H}(x;\delta_{H}^t)$ is: \begin{align*} x_{H}^t = \text{Hue}(x_\text{c-adv}^t) = \text{clip}_{[0,2\pi]}(x_{H}+\delta_{H}^t)\text{.} \end{align*} \paragraph{Saturation.} Similar to the hue value, the saturation value determines the colorfulness of an image, ranging from $0$ to $1$. Let $x_{S}=\text{Sat}(x)$ denote the saturation value of an image $x$. If $x_{S}\to0$, the image becomes more colorless, resulting in a gray-scale image when $x_{S}=0$. The perturbation interval of saturation is defined as $\epsilon_{S}=[\alpha_{S},\beta_{S}]$, $0\leq\alpha_{S}\leq\beta_{S}<\infty$. Let the perturbation factor of the saturation value at step $t$ be $\delta_{S}^{t}$, with the initial factor $\delta_{S}^{0}$ chosen uniformly from $\epsilon_{S}$. The saturation attack updates the perturbation factor $\delta_{S}$ via Eq.~\ref{eqn:attack_pgd}, and the saturation value of the perturbed image $x_\text{c-adv}^t=A_{S}(x;\delta_{S}^t)$ is: \begin{align*} x_{S}^t = \text{Sat}(x_\text{c-adv}^t) = \text{clip}_{[0,1]}( x_{S}\cdot\delta_{S}^{t})\text{.} \end{align*} \paragraph{Brightness and Contrast.} Unlike hue and saturation, these values are defined in RGB (pixel) space; they determine the lightness, darkness, and brightness differences of images. In our implementation, we convert the images from the $[0,255]$ scale to $[0,1]$.
The perturbation intervals of brightness and contrast are defined as $\epsilon_{B}=[\alpha_{B},\beta_{B}]$, $-1\leq\alpha_{B}\leq\beta_{B}\leq1$, and $\epsilon_{C}=[\alpha_{C},\beta_{C}]$, $0\leq\alpha_{C}\leq\beta_{C}<\infty$, respectively; likewise, the initial perturbations $\delta_{B}^{0}$ and $\delta_{C}^{0}$ are chosen uniformly from $\epsilon_{B}$ and $\epsilon_{C}$ and are updated via Eq.~\ref{eqn:attack_pgd}. The perturbed image $x_\text{c-adv}^t$ under the brightness attack ($A_{B}$) and the contrast attack ($A_{C}$) can be formulated as: \begin{align*} x_\text{c-adv}^{t} = \text{clip}_{[0,1]}(x+\delta_{B}^{t}) \text{~~and~~} x_\text{c-adv}^{t} = \text{clip}_{[0,1]}(x\cdot\delta_{C}^{t})\text{.} \end{align*} \paragraph{Rotation.} This transformation aims to find a rotation angle such that the rotated image has maximum loss. The rotation is implemented with \cite{riba2020kornia}. Given a square image $x$, let $(i,j)$ denote a pixel position and $(c,c)$ denote the center position of $x$. Then the position $(i^{\prime},j^{\prime})$ obtained by rotating $(i,j)$ by $\theta$ degrees can be formulated as: \begin{align*} \begin{bmatrix} {i}^{\prime}\\ {j}^{\prime}\\ \end{bmatrix} & = \begin{bmatrix} \cos\theta\cdot{i} + \sin\theta\cdot{j} + (1-\cos\theta) \cdot c - \sin\theta\cdot c \\ -\sin\theta\cdot{i} + \cos\theta\cdot{j} + \sin\theta\cdot c + (1-\cos\theta)\cdot c \\ \end{bmatrix}\text{.} \end{align*} Here, we define the perturbation interval of the rotation degree as $\epsilon_{R}=[\alpha_{R}\degree,\beta_{R}\degree]$, $\alpha_{R}\leq\beta_{R}$, $\alpha_{R},\beta_{R}\in\mathbb{R}$. The perturbation degree at step $t$ is $\delta_{R}^{t}$, and the initial degree $\delta_{R}^{0}$ is chosen uniformly from $\epsilon_{R}$. As with the previous attacks, the perturbation $\delta_{R}$ is updated via Eq.~\ref{eqn:attack_pgd}. \subsection{Generalized Adversarial Training (GAT)} To harden the classifier against composite perturbations, we generalize the standard adversarial training approach with the composite adversarial attack proposed in Section \ref{subsec:composite_attack_formulation}. Our goal is to train a robust model $\mathcal{F}(\cdot)$ over a data distribution $(x,y) \sim \mathcal{D}$ and make it robust against composite perturbations within the perturbation boundary $E$. Existing adversarial training objectives such as the $\min$-$\max$ loss \cite{madry2017towards} or the TRADES loss \cite{zhang2019theoretically} can be utilized in GAT. Here we use the $\min$-$\max$ training loss (Madry's loss) for illustration. The inner maximization in Eq.~\ref{eqn:adv_training} generates $x_{\text{c-adv}}$ optimized using CAA within the boundary $E$, and the outer minimization optimizes the model parameters $\theta_{\mathcal{F}}$. \begin{align} \label{eqn:adv_training} \min_{\theta_{\mathcal{F}}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}} \bigg[ \max_{x_{\text{c-adv}}\in \mathcal{B}(x; \Omega; E)} \mathcal{L}(\mathcal{F}(x_{\text{c-adv}}),y)\bigg] \end{align} For completeness, Appendix \ref*{sec:appendix_gat_algo} summarizes the flow of our proposed composite adversarial attacks with order scheduling and attack-component optimization. In addition, an ablation study showing that order scheduling and Comp-PGD are essential can be found in Appendix \ref*{sec:appendix_random_training}.
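To make the interplay between Comp-PGD (Eq.~\ref{eqn:attack_pgd}) and the $\min$-$\max$ objective (Eq.~\ref{eqn:adv_training}) concrete, the following PyTorch-style sketch outlines one GAT training step (function and variable names are ours; \texttt{apply\_attack} stands for any differentiable component $A_k$, e.g., the hue shift of Sec.~\ref{subsec:comp_pgd}):
\begin{verbatim}
import torch

def comp_pgd_step(model, loss_fn, apply_attack, x, y, delta, eps, alpha):
    # One Comp-PGD update of a single component's parameter delta:
    # a signed-gradient ascent step, clipped to [eps[0], eps[1]].
    delta = delta.detach().requires_grad_(True)
    loss = loss_fn(model(apply_attack(x, delta)), y)
    grad, = torch.autograd.grad(loss, delta)
    return torch.clamp(delta.detach() + alpha * grad.sign(),
                       eps[0], eps[1])

def gat_step(model, opt, loss_fn, attacks, order, x, y, T=7):
    # Inner maximization: build a composite adversarial example by
    # applying the scheduled attacks in sequence, each optimized by
    # T Comp-PGD iterations; outer minimization: update the model.
    x_adv = x
    for k in order:
        apply_attack, eps, alpha = attacks[k]
        delta = torch.empty(x.size(0), device=x.device).uniform_(*eps)
        for _ in range(T):
            delta = comp_pgd_step(model, loss_fn, apply_attack,
                                  x_adv, y, delta, eps, alpha)
        x_adv = apply_attack(x_adv, delta).detach()
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()
    opt.step()
\end{verbatim}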
\section{Experiments}\label{sec:experiment} In this section, we first describe the experimental settings and then present the performance evaluation and analysis against multiple composite attacks on two datasets: CIFAR-10~\cite{Krizhevsky09learningmultiple} and ImageNet~\cite{ILSVRC15}. Additional experimental results and implementation details are given in Appendix \ref*{sec:appendix_additional_exp}. \subsection{Experiment Setups}\label{sub_sec:experiment_setup} \paragraph{Datasets.} We evaluated GAT on two datasets: CIFAR-10~\cite{Krizhevsky09learningmultiple} and ImageNet~\cite{ILSVRC15}. CIFAR-10 consists of 60{,}000 $32\times32$ images, with 6{,}000 images per class; there are 50{,}000 training images and 10{,}000 test images. ImageNet is a benchmark for image classification and object detection with 10 million images spanning 1000 classes. \vspace{-1mm} \paragraph{Attack Composition.} Many feasible combinations of threats can be utilized in the evaluation; we discuss two attack combinations here, \textit{semantic attacks} and \textit{full attacks}, with two scheduling strategies. Semantic attacks consist of a combination of five semantic perturbations: the \textit{Hue}, \textit{Saturation}, \textit{Rotation}, \textit{Brightness}, and \textit{Contrast} attacks. Full attacks combine \textit{all five semantic attacks} with the $\ell_{\infty}$ \textit{attack}. We consider two order-scheduling strategies, \textit{scheduled} and \textit{random}: we either schedule the order with the scheduling algorithm of Sec. \ref{subsec:composite_attack_formulation}, or randomly shuffle the attack order when launching attacks to generate the corresponding composite adversarial examples. Furthermore, we present results for a variety of attack compositions (see Appendix \ref*{sec:appendix_sen_analysis_order}) and discuss the difference between separately and jointly optimizing the attack parameters in Appendix \ref*{sec:appendix_sep_opt}. \vspace{-1mm} \paragraph{Comparative Training Methods.}\label{subsec:model_detail} We compare our GAT with several baseline adversarial training models on both datasets using two model backbones: ResNet50~\cite{He2015DeepRL} and WideResNet34~\cite{Zagoruyko16WRN}. The comparative methods are summarized under \textbf{Baseline Model Details} below. For GAT, we train our models by fine-tuning an $\ell_{\infty}$-robust pretrained model for both CIFAR-10 and ImageNet and use the min-max loss in Eq.~\ref{eqn:adv_training} \cite{madry2017towards}. Two ordering modes are adopted in GAT: random order (\textit{GAT-f}) and scheduled order (\textit{GAT-fs}). We also found that training from scratch with GAT is unstable due to the presence of multiple perturbation threats (see Appendix \ref*{sec:appendix_implementation_details}). \paragraph{Baseline Model Details.} In the summary below, we use symbols to mark the model backbones: $\dagger$ denotes models with the ResNet50~\cite{He2015DeepRL} architecture and $\ast$ denotes models with WideResNet34~\cite{Zagoruyko16WRN}. The baseline models are the top-ranked models of the same architecture in RobustBench~\cite{croce2021robustbench}. \begin{itemize}[leftmargin=*,noitemsep,topsep=6pt] \item \textbf{Normal$^\dagger$/Normal$^\ast$}: Standard training. \item \textbf{Madry$_{\infty}^{\dagger}$}: $\ell_{\infty}$ adversarial training in \cite{madry2017towards}.
\item \textbf{Trades$_{\infty}^{\ast}$}: $\ell_{\infty}$ adversarial training in \cite{zhang2019theoretically}. \item \textbf{FAT$_{\infty}^{\ast}$}: \cite{zhang2020fat} uses friendly adversarial data that are confidently misclassified for adversarial training. \item \textbf{AWP$_{\infty}^{\ast}$}: \cite{wu2020adversarial} injects the worst-case weight perturbation during adversarial training to flatten the weight loss landscape. \item \textbf{PAT$_{self}^\dagger$}, \textbf{PAT$_{alex}^\dagger$}: Two adversarial training models based on the perceptual distance (LPIPS); the two differ in the network used to compute LPIPS: ResNet50 (\textit{self}) and AlexNet (\textit{alex})~\cite{laidlaw2021perceptual}. \item \textbf{Fast-AT$^\dagger$}: Computationally efficient $\ell_{\infty}$ adversarial training in \cite{wong2020fast}. \end{itemize} \paragraph{Training \& Evaluation Settings.} We use the whole training set of both CIFAR-10 and ImageNet for model training. In every training step, the images in each batch share the same attack order. Comp-PGD is applied to each image, with the number of update iterations $T$ per attack component set to ten for evaluation and seven for GAT. During GAT training, we apply every attack component to the input without the \textit{early-stop} option, to ensure that the model learns from all launched attack components. Furthermore, we evaluate two order-scheduling settings, \textit{random} and \textit{scheduled}, for GAT on CIFAR-10. Since both ordering mechanisms yield competitive robust models, we only use random ordering when training GAT on ImageNet, for training efficiency. As mentioned in Sec.~\ref{sub_sec:experiment_setup}, GAT fine-tunes a pre-trained model to make composite adversarial training more efficient than training from scratch. Unlike in the training phase, during evaluation we allow CAA to trigger the \textit{early-stop} option once the attack succeeds, which improves the attack success rate and reduces the computational cost. Further discussion and comparison of different GAT training settings, including TRADES/Madry loss and fine-tuning versus training from scratch, are given in Appendix \ref*{sec:appendix_implementation_details}. \input{preprint/assets/tables/cifar10_tables} \input{preprint/assets/tables/imagenet_tables} \paragraph{Computing Resources and Code.} For CIFAR-10, we train ResNet50 and WideResNet34 models with SGD for 150 epochs. Training GAT-f takes about 16 hours (ResNet50) and 28 hours (WideResNet34), and training GAT-fs takes about 28 hours (ResNet50) and 55 hours (WideResNet34) on 8 Nvidia Tesla V100 GPUs. For ImageNet, we train ResNet50 with SGD for 100 epochs, which takes about three days on 64 Nvidia Tesla V100 GPUs. The implementation is built with PyTorch~\cite{Paszke19PyTorch}. \paragraph{Evaluation Metrics.} We report the models' clean accuracy and robust accuracy (RA, \%) against multiple composite adversarial attacks. The RA measures the fraction of perturbed test examples that are correctly classified. We also provide the attack success rate (ASR, \%) in the appendices, where a higher value indicates a stronger attack. \subsection{Performance Evaluation}\label{subsec:exp} The experimental results are shown in Table~\ref{tab:cifar10_ra} (CIFAR-10) and Table~\ref{tab:imagenet_ra} (ImageNet). On CIFAR-10, \textit{GAT-fs} and \textit{GAT-f} show competitive results.
Both outperform all other baselines by a significant margin. For semantic attacks, the RA increases by 45\% $\sim$ 60\% on CIFAR-10 and 28\% $\sim$ 37\% on ImageNet. For full attacks, the RA increases by 15\% $\sim$ 27\% on CIFAR-10 and 9\% $\sim$ 15\% on ImageNet. Moreover, in terms of RA against multiple threats under three different combinations, our proposed GAT keeps outperforming the other baselines and shows the highest robustness. The comparison between GAT-f and GAT-fs demonstrates that GAT-fs obtains a higher RA against full attacks. However, the result also suggests a trade-off between robustness to $\ell_{\infty}$ attacks and robustness to semantic attacks. Beyond adversarially trained models, we empirically observe that the RA of models with standard training degrades by 20\% $\sim$ 31\% on ImageNet under semantic attacks (without the $\ell_{\infty}$ attack). However, once the $\ell_{\infty}$ attack is involved, as in the full attacks or other multiple threats (e.g., three attacks in Tables \ref{tab:cifar10_ra} and \ref{tab:imagenet_ra}), models with standard training are unable to resist these composite semantic perturbations, and the RA drops dramatically to 0\%. \subsection{Analysis, Discussion, and Visualization}\label{sub_sec:analysis} \paragraph{Robust Accuracy vs. Number of Attacks and Their Combinations.} We conduct an ablation study to show that the number of attacks and their combinations can greatly affect robust accuracy, illustrating the importance of attack ordering and the new insights into robustness offered by our proposed composite adversarial examples. Fig.~\ref{fig:composite_attack_eval} already demonstrates that our model is the most resilient to composite adversarial examples consisting of different numbers of attacks, attaining the lowest attack success rate on the test samples that each model initially classified correctly. Furthermore, Table~\ref{tab:ra_with_diff_attack_num} shows that as the number of attacks increases (\textit{CAA}$_{1}$ to \textit{CAA}$_{6}$), the RA of our proposed GAT consistently outperforms that of all other models, by up to 35\%. Although the standard model (Normal$^\dagger$) has the advantage of higher clean accuracy, it is still not resistant to semantic and various composite adversarial perturbations. The results for \textit{three attacks} in Tables \ref{tab:cifar10_ra} and \ref{tab:imagenet_ra} demonstrate the effect of different combinations when the number of attacks is fixed. On CIFAR-10, \textit{GAT-f} is more robust than all baselines under the three different attack combinations by 9\% $\sim$ 23\%; on ImageNet, \textit{GAT-f} also outperforms those baselines. For more experimental results, including single attacks, Auto-Attack, two-component attacks, and results on other datasets (e.g., SVHN), please refer to Appendix \ref*{sec:appendix_additional_exp}. \input{preprint/assets/tables/figure1b_ra} \vspace{-2mm} \paragraph{Effectiveness of Random/Scheduled Ordering.} We conducted a pairwise t-test to compare the effectiveness of random and scheduled ordering. Over ten experiments with different initializations on CIFAR-10 under full attacks, the robust accuracy under \textit{scheduled} ordering is statistically significantly lower than under \textit{random} ordering ($p$-value $<.001$ for all models), i.e., scheduled ordering yields stronger attacks.
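For reference, this comparison can be reproduced along the following lines (a sketch with made-up accuracy values; \texttt{scipy.stats.ttest\_rel} implements the paired $t$-test):
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical robust accuracies (%) over ten runs with
# different initializations, one pair per run.
ra_random    = np.array([18.4, 17.9, 18.1, 18.6, 18.2,
                         18.0, 18.5, 17.8, 18.3, 18.1])
ra_scheduled = np.array([16.1, 15.8, 16.0, 16.4, 15.9,
                         15.7, 16.2, 15.6, 16.0, 15.9])

# Paired t-test; the one-sided p-value for "scheduled < random"
# is half the two-sided p-value when t < 0.
t, p_two_sided = ttest_rel(ra_scheduled, ra_random)
p_one_sided = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2
print(t, p_one_sided)
\end{verbatim}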
\paragraph{Current Adversarial Robustness Assessments May Not Be Comprehensive.} To gain more insight, we compare the rankings of the top ten models on RobustBench (CIFAR-10, $\ell_{\infty}$) \cite{croce2021robustbench}. We found that the rankings under Auto-Attack and under CAA have a low correlation, suggesting that robustness evaluations considering only perturbations within an $\ell_{p}$-ball could be biased and incomplete. Specifically, the Spearman's rank correlation coefficients are 0.16 (rand. \emph{vs.} sched.) for semantic attacks, and 0.36 (rand. \emph{vs.} Auto) and 0.38 (sched. \emph{vs.} Auto) for full attacks. \paragraph{Visualization of Loss Landscape.} To better understand why GAT leads to such improvements, we visualize the loss landscape of each single semantic attack under three different models (see Fig.~\ref{fig:loss_singles}): a standardly trained ResNet50 (Normal$^{\dagger}$), an $\ell_{\infty}$-robust ResNet50 (Madry$_{\infty}^{\dagger}$), and our proposed GAT (\textit{GAT-f}$^{\dagger}$). We visualize the cross-entropy loss of selected samples for each model, sweeping over the semantic perturbation space in the designated interval. We empirically observe that across the five different single semantic attacks, GAT results in much smoother, flatter, and lower curves (green) compared to the other models. We believe this phenomenon sheds light on the effectiveness of our proposed approach, which can indeed train a model robust to composite adversarial perturbations. \section{Conclusion}\label{sec:conslusions} In this paper, we proposed GAT, a generic approach to preparing deep learning for the real world by strengthening classifiers to be robust against composite semantic perturbations. The effectiveness of GAT lies in our novel design of attack-order scheduling for multiple perturbation types. Compared to existing adversarial training methods, GAT enhances robustness against a variety of adversarial perturbations, including $\ell_{p}$-norm and semantic perturbations. Evaluated on the CIFAR-10 and ImageNet datasets, our results demonstrate that GAT achieves the highest robust accuracy on most composite attacks by a large margin, providing new insights into achieving compositional adversarial robustness. We believe our work sheds new light on the frontiers of realistic adversarial attacks and defenses. \section*{Appendix}\label{sec:appendix} \section{Implementation Details}\label{sec:appendix_implementation_details} \paragraph{Training Phase.} In the implementation of generalized adversarial training (GAT), we consider two model architectures, ResNet-50~\cite{He2015DeepRL} and WideResNet-34~\cite{Zagoruyko16WRN}, on the CIFAR-10 dataset~\cite{Krizhevsky09learningmultiple}, and ResNet-50 on the ImageNet dataset~\cite{ILSVRC15}. For CIFAR-10, we set the maximum number of training epochs to 150 with batch size 2048 and select the model with the best evaluation test accuracy. The learning rate is set to 0.1 at the beginning and decays exponentially. We use learning-rate warm-up for the first ten epochs, i.e., the learning rate increases linearly from zero to the preset value (0.1) over the first ten epochs. For ImageNet, we set the maximum number of training epochs to 100 with batch size 1536 and select the model with the best evaluation test accuracy. The learning rate is set to 0.1 at the beginning and decays by a factor of 0.1 every 30 epochs.
Similarly, we use learning-rate warm-up for the first five epochs. We launch all threat models (full attacks) during training; for each batch, we use scheduled ordering for \textit{GAT-fs} and random ordering for \textit{GAT-f}. The number of Comp-PGD iterations $T$ per attack is set to 7, and the step size of attack $A_k$ is set to $2.5 \cdot (\beta_k - \alpha_k) / 2T$, where $\alpha_k$ and $\beta_k$ are the endpoints of the perturbation intervals defined in Table \ref{tab:attack_epsilons}. \paragraph{Testing Phase.} To compare our GAT approach with the other adversarial training baselines, we launch composite adversarial attacks (CAAs) with different numbers of attack types, including single attacks, two attacks, three attacks, all semantic attacks, and full attacks, on each robust model. The number of Comp-PGD iterations $T$ per attack is set to 10, with the same step size as in training. In addition, the maximum number of \textit{order scheduling} iterations is set to five, and the early-stop option is triggered at any update step once CAA succeeds. \input{preprint/assets/tables/epsilons} \vspace{-5.5mm} \paragraph{Training Strategy.} Our training process considers two training strategies, 1) training from scratch and 2) fine-tuning an $\ell_\infty$-robust model, and two learning objectives, 1) Madry's loss~\cite{madry2017towards} and 2) the TRADES loss~\cite{zhang2019theoretically}. Note that $x_{\text{c-adv}}\in{\mathcal{B}(x;\Omega;E)}$ denotes that the composite adversarial example $x_\text{c-adv}$ is perturbed by attacks from $\Omega$ within the perturbation intervals $E$. The main difference between the two objectives is shown in Eq. \ref{eqn:madry_loss} and Eq. \ref{eqn:trades_loss}: Eq. \ref{eqn:trades_loss} optimizes the natural error in the first term, while the robust error in the second (regularization) term minimizes the distance between the predictions on natural and adversarial samples. Zhang et al. theoretically proved that this design of the loss function makes the model outputs smooth~\cite{zhang2019theoretically}. \begin{align} \label{eqn:madry_loss} \min_{\theta_{\mathcal{F}}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}} \bigg[ \max_{x_{\text{c-adv}}\in{\mathcal{B}(x;\Omega;E)}} \mathcal{L}_{ce}(\mathcal{F}(x_{\text{c-adv}}),y)\bigg] \end{align} \begin{align} \label{eqn:trades_loss} \min_{\theta_{\mathcal{F}}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}\bigg[\mathcal{L}(\mathcal{F}(x), y) + \beta\cdot\max_{x_{\text{c-adv}}\in{\mathcal{B}(x;\Omega;E)}} \mathcal{L}(\mathcal{F}(x),\mathcal{F}(x_{\text{c-adv}}))\bigg] \end{align} As shown in Fig. \ref{fig:cifar10_training_records}, we evaluate the clean test accuracy of GAT models at every epoch under different training settings, covering the two architectures (ResNet-50 / WideResNet-34), the two learning objectives, and the two training strategies mentioned above. We empirically find that the fine-tuned models (solid curves) achieve higher clean test accuracy than most models trained from scratch (dotted curves). Furthermore, we evaluate the robust test accuracy of these four models (see Fig. \ref{fig:cifar_finetune_models_ra}). Under the semantic and full attacks, the model GAT-f$_M$ (fine-tuned with Madry's loss) achieves higher robust accuracy than GAT-f$_T$ (fine-tuned with the TRADES loss).
Hence, in the experimental results section, we use the GAT models trained with Madry's loss and fine-tuned on an $\ell_\infty$-robust model. \begin{figure}[ht] \begin{subfigure}[b]{.48\textwidth} \centering \includegraphics[width=\linewidth, trim=0.5cm 0.2cm 1.8cm 1.2cm, clip]{preprint/assets/figs/appendix/cifar10-training-log.pdf} \caption{} \label{fig:cifar10_training_records} \end{subfigure} \hfill \begin{subfigure}[b]{.46\textwidth} \centering \includegraphics[width=\linewidth, trim=1.2cm 0cm 1.2cm 0.1cm, clip]{preprint/assets/figs/appendix/appendix-ra-2.pdf} \caption{} \label{fig:cifar_finetune_models_ra} \end{subfigure} \caption{(a) The test accuracy during generalized adversarial training on CIFAR-10. The models differ in their training scenarios; the subscript $T$ denotes that the model uses \textit{TRADES}' loss~\cite{zhang2019theoretically} for training, and $M$ stands for \textit{Madry}'s loss~\cite{madry2017towards}. The superscript $\dagger$ denotes a ResNet50~\cite{He2015DeepRL} backbone, and $\ast$ a WideResNet34~\cite{Zagoruyko16WRN}. (b) The robust accuracy (\%) of our fine-tuned GAT models under semantic and full attacks. } \end{figure} \section{Algorithm of the Composite Adversarial Attack (CAA)}\label{sec:appendix_gat_algo} \input{preprint/assets/gat_algorithm} \clearpage \section{The Loss Trace Analysis of Component-wise PGD (Comp-PGD)}\label{sec:appendix_loss_landscape} To demonstrate the effectiveness of Comp-PGD, in Fig. \ref{fig:appendix_loss_traces} we visualize the update process of Comp-PGD when performing a single semantic attack on the WideResNet-34 model. We uniformly sample 20 starting points for each attack and update $\delta_k$ with Comp-PGD from these initial points. The red margins in each sub-figure of Fig. \ref{fig:appendix_loss_traces} indicate the region in which our samples attack successfully. The endpoints of the loss traces clearly show that Comp-PGD indeed helps search for the worst case by maximizing the loss during each attack. \input{preprint/assets/figs/loss_landscape/loss_traces} \section{Ablation Study: Attack Components' Optimization}\label{sec:appendix_sep_opt} \subsection{Why Separately Optimize the Attack Parameters? (Comp-PGD vs. Ensemble-PGD)} In this paper, we used Comp-PGD to optimize the individual attack components. Alternatively, one can optimize all attack components simultaneously for a given attack order, which we call \textit{Ensemble-PGD}. Specifically, CAA can jointly optimize the attack parameters of an attack chain \textit{at a chosen fixed attack order}. In this regard, we repeated the same experiments on CIFAR-10 but optimized the attack parameters \textit{simultaneously} instead of \textit{sequentially}. The results show that Ensemble-PGD (see Table \ref{tab:cifar10_ablation}) does not provide better attack capability than Comp-PGD (see Table \ref{tab:appendix_cifar10_multi_asr}). We report the experimental results as Attack Success Rate (ASR), as it represents the strength of the attack (higher means a stronger attack). Although the GAT approaches still outperform the other baselines in defending against all threats, the results show that Ensemble-PGD generally has \textit{lower} attack performance than Comp-PGD. This is probably because the number of variables optimized at once in Ensemble-PGD is larger than in each sequential step of Comp-PGD, making it harder for the optimization to achieve comparable results.
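To make the distinction concrete, the following minimal Python sketch contrasts one sequential Comp-PGD sweep with one joint Ensemble-PGD step. It is illustrative only: \texttt{loss\_fn}, \texttt{deltas}, and \texttt{alphas} are hypothetical placeholders, the sign-gradient ascent is borrowed from standard PGD, and the projection onto the perturbation intervals is omitted.
\begin{verbatim}
import torch

def comp_pgd_sweep(deltas, alphas, loss_fn):
    # Sequential: update one attack component at a time,
    # recomputing the loss after each component changes.
    for delta, alpha in zip(deltas, alphas):
        loss = loss_fn(deltas)
        grad, = torch.autograd.grad(loss, delta)
        delta.data.add_(alpha * grad.sign())

def ensemble_pgd_step(deltas, alphas, loss_fn):
    # Joint: a single backward pass updates all components at once.
    loss = loss_fn(deltas)
    grads = torch.autograd.grad(loss, deltas)
    for delta, alpha, grad in zip(deltas, alphas, grads):
        delta.data.add_(alpha * grad.sign())
\end{verbatim}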
\input{preprint/assets/tables/ablation1} \subsection{Why Not Optimize Attack Power by Grid Search? (Comp-PGD vs. Grid Search)} It is intuitive to optimize the attack parameters (levels) in a brute-force way, i.e., by \textit{grid search}. However, doing so would exponentially increase the computational cost as the number of attacks increases. We conduct an experiment to compare the attack success rate (ASR, \%) of the Grid-Search attack and our proposed CAA. We include all types of semantic attacks (Hue, Saturation, Rotation, Brightness, and Contrast) in this experiment. Also, since there are $N!$ possible attack orders for $N$ attacks, for simplicity we chose a single attack order here and used the same fixed order in CAA. As shown in Fig.~\ref{fig:grid_search_vs_caa}, the results demonstrate that CAA is clearly stronger than grid search, consistently achieving a higher ASR at a significantly lower computational cost. This is because grid search can only explore a discrete set of attack values; to obtain a higher attack success rate, it would need to increase the grid density. To be more specific, given the grid number $K$ (uniformly sampled points in each attack space), the attack complexity of the Grid-Search Attack is $\mathcal{O}(K^N)$, whereas the attack complexity of CAA (fixed order) is $\mathcal{O}(N\cdot T\cdot R)$, where $T$ is the number of optimization steps for Comp-PGD and $R$ is the number of restarts. That is, we allow CAA to optimize each attack from $R$ different starting points. In our experiment, since CAA searches for the optimal attack value by gradient-based optimization, only five restart points ($R=5$) and ten Comp-PGD steps ($T=10$) suffice to outperform the grid-search-based strategy. For instance, with $N=5$ attacks and $K=10$ grid points per attack, grid search requires $10^5 = 100{,}000$ evaluations, whereas CAA needs only $5\cdot 10\cdot 5 = 250$ component optimizations. In this scenario, the attack complexity of the Grid-Search Attack is higher than that of CAA (since $\mathcal{O}(K^N)>\mathcal{O}(N\cdot K^2)>\mathcal{O}(N\cdot T\cdot R)$, given $T, R\leq K$). \begin{figure}[h] \centering \includegraphics[width=\linewidth, trim=0.5cm 0.25cm 0.43cm 0.2cm, clip]{preprint/assets/figs/appendix/grid-search-comparison.pdf} \vspace{-2mm} \caption{Comparison of the attack success rate between the Grid-Search Attack and CAA.} \label{fig:grid_search_vs_caa} \end{figure} \vspace{-4mm} \section{Ablation Study: Order Scheduling and Comp-PGD Are Essential to Strengthen GAT}\label{sec:appendix_random_training} \vspace{-2mm} To further verify that our scheduling mechanism and Comp-PGD play essential roles in CAA during GAT, we removed the order scheduling feature and Comp-PGD and instead pre-generated training data by adding random semantic perturbations to the CIFAR-10 training set, which we refer to as RSP-10 (a minimal sketch of this construction is given below). That is, RSP-10 is generated with random attack orderings and random attack parameters on CIFAR-10. We then performed regular adversarial training on RSP-10 to obtain robust models~\cite{zhang2019theoretically}, both from scratch and by fine-tuning. Table \ref{tab:appendix_rsp_cifar10_ra} lists the robust accuracy of three such robust models under three attacks, semantic attacks, and full attacks. The results show that GAT still outperforms the other baselines by up to 27\%/54\%/25\% under three/semantic/full attacks, demonstrating that order scheduling and Comp-PGD are essential for hardening GAT into a robust model.
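For illustration, the RSP-10 construction can be sketched in a few lines of Python. The function name, the data layout, and the uniform sampling of attack levels are our own assumptions for this sketch; the point is only the absence of order scheduling and Comp-PGD.
\begin{verbatim}
import random

def random_semantic_perturbation(x, attacks, intervals):
    # Apply every semantic attack once, in a random order, with a
    # randomly drawn attack level: no scheduling, no Comp-PGD.
    order = random.sample(range(len(attacks)), len(attacks))
    for k in order:
        alpha_k, beta_k = intervals[k]
        x = attacks[k](x, random.uniform(alpha_k, beta_k))
    return x
\end{verbatim}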
\input{preprint/assets/tables/ablation2} \clearpage \section{Sensitivity Analysis and Additional Discussions}\label{sec:appendix_sen_analysis_order} \subsection{The Attack Order Matters! The Two-attack Experiments}\label{subsec:two_attack} We conduct an analysis of different \textit{order} types under two attacks to demonstrate the influence of the order on CAA. As shown in Table \ref{tab:appendix_imagenet_two_asr}, we list the attack success rate (ASR) of two attacks with different orders ($\ell_\infty\to$ \textit{semantic} attack / \textit{semantic} attack $\to\ell_\infty$) on GAT and other baseline models. The results show that most baselines are more vulnerable to CAA when a semantic attack is launched first than when $\ell_\infty$ comes first. Furthermore, \textit{GAT-f} shows the smallest ASR change when the order is swapped, indicating that GAT improves robustness to changes in the attack order. \input{preprint/assets/tables/imagenet/two_asr} \vspace{-2mm} \definecolor{correct}{rgb}{0, 0.615, 0} \definecolor{incorrect}{rgb}{1, 0.333, 0.333} \subsection{How Do Composite Perturbations Fool the Model? Visual Examples} In Fig.~\ref{fig:appendix_order_effect_1}, we present the inference results from an $\ell_\infty$-robust model (Madry$_{\infty}^{\dagger}$); the confidence bars are marked in \textcolor{correct}{green} (\textcolor{incorrect}{red}) if the prediction is correct (incorrect). The results show that while a robust model can resist perturbations within an $\ell_{p}$ ball, this consideration alone is not comprehensive. That is, if $\ell_{\infty}$ perturbations are computed after some semantic attacks, the model may no longer exhibit the robustness it has within the $\ell_{\infty}$ ball. \vspace{-1mm} \input{preprint/assets/figs/order-effect/order-effect} \clearpage \section{Additional Experimental Results and Adversarial Examples}\label{sec:appendix_additional_exp} We further evaluate multiple CAAs in this section; experimental results on SVHN are also provided. In particular, we present the robust accuracy (RA) and the corresponding attack success rate (ASR). Again, the ASR is the percentage of images that were initially classified correctly but were misclassified after being attacked; therefore, a lower ASR indicates a more robust model. In Sec. \ref{subsec:appendix_single_attack}, we show single attacks, each launching only one attack from the attack pool. Notably, the $\ell_{\infty}$ (20-step) attack is a regular PGD attack, and Auto-$\ell_{\infty}$ is an ensemble of four diverse attacks~\cite{croce2020reliable}. Multiple attacks (including three, semantic, and full attacks) are listed in Sec. \ref{subsec:appendix_multiple_attacks}. (For efficiency, we use $\ell_\infty$ (PGD) in the multiple-attack evaluation.) \subsection{Single Attack}\label{subsec:appendix_single_attack} \textbf{Results on CIFAR-10} \input{preprint/assets/tables/cifar10/single_ra} \input{preprint/assets/tables/cifar10/single_asr} \clearpage \textbf{Results on ImageNet} \input{preprint/assets/tables/imagenet/single_ra} \input{preprint/assets/tables/imagenet/single_asr} \textbf{Results on SVHN} \input{preprint/assets/tables/svhn/single_ra} \input{preprint/assets/tables/svhn/single_asr} \clearpage \subsection{Multiple Attacks: Three Attacks, Semantic Attacks and Full Attacks}\label{subsec:appendix_multiple_attacks} We provide only the ASR for CIFAR-10 and ImageNet here; the RA can be found in Tables \ref{tab:cifar10_ra} and \ref{tab:imagenet_ra} of the main paper.
The abbreviations used here are the same as in the main paper. \textbf{Results on CIFAR-10} \vspace{-2mm} \input{preprint/assets/tables/cifar10/multi_asr} \textbf{Results on ImageNet} \vspace{-2mm} \input{preprint/assets/tables/imagenet/multi_asr} \textbf{Results on SVHN} \vspace{-2mm} \input{preprint/assets/tables/svhn/multi_ra} \vspace{-2mm} \input{preprint/assets/tables/svhn/multi_asr} \vspace{-2mm} \section{Examples of Single Semantic Attacks at Different Levels}\label{sec:appendix_attack_levels} Fig. \ref{fig:attack_levels} shows the five single semantic attacks at their corresponding perturbation levels. Each row shows the images perturbed by a given attack $A_k$ at different perturbation values $\delta_{k}\in\epsilon_{k}$. \input{preprint/assets/figs/attack_exp/attack_exp} \section{Additional Visualization of Adversarial Examples under Different CAAs} We provide some of the adversarial examples from CIFAR-10 under the above CAAs, including single attacks, two attacks, three attacks, semantic attacks, and full attacks. For every attack in the following figures, we arrange the images into several columns. As Figs. \ref{fig:single_attack}, \ref{fig:two_attacks}, and \ref{fig:three_and_multiple} show, the left-most column contains the original images; each subsequent pair of columns shows the adversarial examples generated by one of the CAA attacks and their differences from the original images. Note that all of the differences have been multiplied by three for visualization purposes only. \input{preprint/assets/figs/attack_exp/demo/single_adv_exp} \input{preprint/assets/figs/attack_exp/demo/two_adv_exp} \input{preprint/assets/figs/attack_exp/demo/three_and_multiple}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The transition metal oxides are a large group of materials exhibiting a variety of structural, electrical and magnetic properties. Among them, the iron oxides are the most popular, owing to their abundance as ores and minerals in the Earth's crust.\citep{iron_oxides} In the family of FeOOH hydroxide polymorphs, $\alpha$-FeOOH (goethite) is the compound which is the most thermodynamically stable at ambient conditions. At room temperature goethite is an antiferromagnetic (AF) insulator and exhibits a transition to a paramagnetic (PM) state at a N\'eel temperature of $T_{\text{N}}\approx 400$~K. Systematic studies of its properties in the paramagnetic state are limited because on heating at temperatures above $600$~K it decomposes to $\alpha$-Fe$_{2}$O$_{3}$ (hematite) and water.\citep{de_Faria_2007} Moreover, at equilibrium conditions the temperature of the goethite/(hematite + water) transformation is even lower (about $400$~K).\citep{iron_oxides} Goethite is one of the most common antiferromagnetic materials in nature. Nevertheless, its intrinsic physical properties are not yet fully understood. The reason is that goethite samples, both natural and synthetic, are usually composed of nano- or micron-size crystallites, often with poor crystallinity and varying degrees of preferential orientation, stoichiometry, and impurity content. Recently, interest in $\alpha$-FeOOH has been stimulated by the theoretical prediction of a significant linear magnetoelectric effect in the AF state.\citep{Ter_Oganessian_2017} If this prediction is confirmed, goethite will be established as one of the most prominent high-temperature magnetoelectric materials. In this respect, Raman spectroscopy is a powerful tool for investigating fine changes in the crystal structure and in the electrical and magnetic properties of materials through their impact on the $\Gamma$-point optical phonons. For example, perovskite manganese oxides display significant frequency and linewidth renormalization of their Raman-active modes at temperatures near the AF phase transition,\cite{Granado1998,Granado1999} which gives a wealth of information about the mechanism of spin-lattice interaction in this class of materials.\cite{Laverdiere2006,Flores2006} So far, characteristic Raman frequencies of goethite have been documented widely in the literature, but Raman spectroscopy has been used mainly for the identification of this compound in multiphase mineral samples.\cite{Kustova_1992, de_Faria_1997, Oh_1998, LEGODI_2007, de_Faria_2007, Hanesch_2009, Nieuwoudt_2010, Kreissl_2016, Hedenstedt_2017, Liu_2019} Therefore, the main goal of the present work is to assess the importance of the spin-lattice interaction in $\alpha$-FeOOH by investigating: (i) the symmetry of the Raman-active modes observed in the Raman spectra; (ii) their assignment to specific atomic displacements; and (iii) the temperature behaviour of the Raman lines in the vicinity of the AF-PM phase transition. To the best of our knowledge, these issues have not been addressed in the literature up to now. In this paper, we first determine the symmetry of the lines observed in the polarized Raman spectra of goethite ores, exploiting the fact that, owing to the needle-like shape of the goethite microcrystals, their orientations are correlated even when their sizes are submicronic. We then refine the assignment using mineral single crystals of determined orientation that are sufficiently large for micro-Raman spectroscopy.
The experimental findings were compared with lattice-dynamical calculations. As a result, $22$ out of the $24$ Raman-active modes were assigned to lines observed in the Raman spectra. Subsequently, both unpolarized and polarized Raman spectra were obtained in the temperature interval $293$~K -- $473$~K. After fitting the observed lines, it was found that some of them show anomalies in the temperature dependences of their parameters (position, width, and intensity). Moreover, one of them, the $B_{3g}(3)$ line at $387$~cm$^{-1}$, has an asymmetric shape whose asymmetry increases as the temperature approaches $T_{\text{N}}$ and remains almost constant in the PM state. The line profile can be fitted with a Fano shape. The origin of this behavior is discussed. \section{Experimental methods and calculation details} The goethite ores originate from the Kremikovtsi mine (Bulgaria). The needle-like mineral single crystals, with lengths up to $2$~mm and diameters up to $100~\mu$m, come from the Tounfit region (Morocco). Both the ores and the single crystals were characterized using scanning electron microscopy (see Figs.~\ref{SEM1},~\ref{SEM2}). The EDX analysis shows no chemical elements other than Fe and O. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{SEM1} \caption{\label{SEM1}Electron microscope image of the surface of a goethite ore sample} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{SEM2} \caption{\label{SEM2}Electron microscope image of a goethite single crystal} \end{figure} The magnetic response was measured with a SQUID magnetometer on a sample composed of several crystals with their $b$-axes aligned. The temperature dependence of the DC magnetic susceptibility, measured in a field applied along the $b$-axis as well as in a direction perpendicular to it, is reminiscent of that of a collinear antiferromagnet (see Fig.~\ref{magnetic}). With increasing temperature, a magnetic phase transition from the antiferromagnetic to the paramagnetic phase is observed at a N\'eel temperature $T_{\text{N}} = 393$~K. We note that $T_{\text{N}}$ for the other two types of samples used in this study---the synthetic powder and the goethite ores---is $393$~K and about $347$~K, respectively. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{magnetic} \caption{\label{magnetic}Temperature dependence of the DC magnetic susceptibility measured in a field of $4$~T applied along the $[010]$ crystallographic direction (the AFM axis) and perpendicular to it.} \end{figure} Raman spectra were obtained using a LabRAM HR Visible (HORIBA Jobin Yvon) Raman spectrometer in backscattering configuration. Several laser lines were used for excitation ($633$~nm of a He-Ne laser, and $515$~nm and $458$~nm of an Ar$^{+}$ laser). For the spectra at room temperature, a $\times 100$ objective and a $600$~mm$^{-1}$ grating were used (the spectral distance between CCD pixels is $1.0$~cm$^{-1}$). For the collection of spectra at different temperatures, a LINKAM THMS600 heating cell was used in combination with a long-working-distance $\times 50$ objective and a $1800$~mm$^{-1}$ grating (the spectral distance between CCD pixels is $0.3$~cm$^{-1}$). The direction of the polarization of the incident linearly polarized laser light was changed using a $\lambda/2$ plate. The scattered light was analyzed with a polarizer. The different scattering configurations were realized by rotating the sample and/or the microscope table in the laboratory coordinate system.
The optimal laser power at the laser spot (with a diameter of about $2~\mu$m) on the sample surface, determined after power tests to ensure that there is no local laser overheating, was $0.07$~mW for the powder, $0.4$~mW for both the ore samples and the single crystals when the $\times 100$ objective was used (for room-temperature measurements), and $1.0$~mW for the temperature measurements of the single crystals (when the $\times 50$ objective was used). The power test consists of comparing the line parameters (position and linewidth) of a series of Raman spectra obtained with increasing laser power and decreasing accumulation time, such that the product of laser power and accumulation time is constant for all spectra. When overheating occurs, the position of the lines shifts to lower frequencies and their linewidth increases. Lattice-dynamical calculations (LDC) in the framework of the shell model\cite{Dick1958,Woods1960} were performed using the General Utility Lattice Program.\cite{Gale1997,Gale2003} In this model, an ion consists of a rigid \textit{shell} with charge $Y$, which represents the valence electrons and has no mass, whereas the nucleus and the inner electrons form the \textit{core}, which carries all the ion's mass. The core is bound to the shell by harmonic restoring forces of spring constant $k$; thus the ion polarizability can be introduced as $Y^{2}/k$. The short-range interactions between non-bonded atoms were modeled with a repulsive Born-Mayer potential in the Buckingham form $\displaystyle U(r) = A \exp(-r/\rho)-C/r^{6}$. In addition, the covalent O-H bond was described by the Morse potential, $\displaystyle U(r)=D_{e}\left\{1-\exp{\left[-a(r-r_{0})\right]}\right\}^{2}$, where $r_{0}= 0.9485$~\AA\ is the equilibrium O-H bond length. All other parameters $Y$, $Z$, $k$, $A$, $\rho$, $C$, $D_{e}$, and $a$ are listed in Table~\ref{tab:LDCsm}. The cell parameters and atomic positions used in the calculations are taken from Ref.~\onlinecite{Yang_2006}. The shell model parameters $k$ and $a$ were refined to achieve the best fit to the experimental data for the lattice structure and the hydrogen vibrations near $1000$ and $3000$~cm$^{-1}$. \begin{table*}[htbp] \caption{\label{tab:LDCsm}List of the initial values of the shell model parameters, taken from Ref.~\onlinecite{Lewis1985} and from the libraries provided with the GULP code.
After the refinement, we obtained the following values for $k$ and $a$: $28.86~e^2\!/\!\text{\AA}^3$ and $1.8079~\text{\AA$^{-1}$}$, respectively.} \begin{ruledtabular} \begin{tabular}{cdddcdddddc} Ion & \multicolumn{1}{c}{$Z\ (|e|)$} & \multicolumn{1}{c}{$Y\ (|e|)$} & \multicolumn{1}{c}{$k\ (e^2\!/\!\text{\AA}^3)$}& Ionic pair & \multicolumn{1}{c}{$A\ (e\text{V})$} & \multicolumn{1}{c}{$\rho\ (\text{\AA})$} & \multicolumn{1}{c}{$C\ (e\text{V\AA}^6)$}& \multicolumn{1}{c}{$D_{e}\ (e\text{V})$} & \multicolumn{1}{c}{$a\ (\text{\AA$^{-1}$})$} & \multicolumn{1}{c}{cutoffs (\AA)}\\ &&&&&&&&&& min -- max \\\hline Fe & 3.00000 & & & Fe-O1 & 1102.40 & 0.3299 & 0.000 & & & 0.0 -- 12.0 \\ O1 & 0.86902 & -2.86902 & 74.92 & Fe-O2 & 862.08 & 0.3299 & 0.000 & & & 0.0 -- 12.0 \\ O2 & -1.42600 & & & O-O & 22764.00 & 0.1490 & 27.879 & & & 0.0 -- 12.0 \\ H & 0.42600 & & & O1-H & 208.11 & 0.2500 & 0.000 & & & 0.0 -- 10.0 \\ & & & & O2-H & 311.97 & 0.2500 & 0.000 & 7.0525 & 2.1986 & 1.4 -- 10.0 \\ \end{tabular} \end{ruledtabular} \end{table*} As a crosscheck of the results obtained with the empirical shell model, first-principles density-functional theory (DFT) calculations of the phonons were performed by means of the Quantum Espresso (QE)\cite{QE} plane wave (PW) program suite. The choice of the exchange-correlation functional and the atomic pseudopotentials is essential for the accuracy of DFT calculations on transition-metal oxides, due to the significant correlations among the $d$-electrons of the transition-metal ions. Hubbard-corrected schemes like LDA+$U$ or GGA+$U$ have been shown to improve significantly the convergence of the self-consistent field (SCF) calculations and to provide a good description of the structural, electronic and magnetic properties of this class of compounds. So far, the first-principles calculations reported for $\alpha$-FeOOH have been performed within the GGA+$U$ approximation.\cite{Blanchard_2013,Ter_Oganessian_2017} In the QE software, DFT+$U$ functionals are implemented for SCF and structural optimization but not for phonon calculations, which forced us to look for alternative approaches to the lattice dynamics of $\alpha$-FeOOH. Therefore, we made use of the recently proposed Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials\cite{Hamann2013} for the PBE functional.\cite{PBE1996} For a moderate or no increase of the PW kinetic energy cutoff compared to ultrasoft pseudopotentials, the ONCV potentials show an excellent correspondence with the structural data for a variety of materials, including many transition-metal compounds.\cite{Schlipf2015} The calculations were made on a $4 \times 12 \times 8$ Monkhorst-Pack (MP) $k$-point grid with a PW kinetic energy cutoff of 80~Ry ($\approx 1100$~eV). Optimized lattice constants of $a =9.87,~b = 3.05$, and $c = 4.50$~{\AA} (in $Pnma$ notation) were obtained, which are in good agreement with the experimental lattice parameters of $\alpha$-FeOOH. The phonon frequencies calculated for the relaxed structure are listed in Table~\ref{tab1}. \section{\label{sec:3}Results and discussion} The crystal structure of $\alpha$-FeOOH is orthorhombic with space group $Pnma$ ($D_{2h}^{16}$, No.~$62$, $Z = 4$, see Fig.~\ref{structure1}).\cite{Yang_2006} All four types of atoms, Fe, O1, O2, and H, occupy positions with the same site symmetry (Wyckoff position $4c$). Each Fe atom is connected to three O1 and three O2 atoms, forming Fe(O1)$_{3}$(O2)$_{3}$ octahedra. Adjacent FeO$_{6}$ octahedra along the $[010]$ direction share an edge, forming chains along the $[010]$ direction.
The octahedra of two adjacent chains also share an edge, producing strongly bonded double chains along the $[010]$ direction, which lead to the needle-like shape of the microcrystals. Adjacent double chains share a common oxygen atom (they are corner-shared). The hydrogen atom is bonded to the oxygen O2. Each type of atom in the unit cell contributes normal vibrational modes with the irreducible representations $2A_{g} + A_{u} + B_{1g} + 2B_{1u} + 2B_{2g} + B_{2u} + B_{3g} + 2B_{3u}$.\cite{Rousseau1981,Kroumova2003} Among them, only $A_{g}$, $B_{1g}$, $B_{2g}$, and $B_{3g}$ are Raman-active. The site symmetry restricts the possible directions of the atomic vibrations in the different modes: $A_{g}$ and $B_{2g}$ modes are vibrations in the $(010)$ plane, whereas $B_{1g}$ and $B_{3g}$ modes are vibrations along the $[010]$ crystallographic direction. Therefore, $24$ lines ($8A_{g} + 4B_{1g} + 8B_{2g} + 4B_{3g}$) originating from one-phonon scattering are expected in the Raman spectra. In the simplest approximation of non-resonant Raman scattering, their intensity depends only on the directions of the polarization of the incident ($\vec{e}_{\text{i}}$) and scattered ($\vec{e}_{\text{s}}$) light ($\vec{e}_{\text{i}}$ and $\vec{e}_{\text{s}}$ are the unit vectors along these directions): $I_{\vec{e}_{\text{i}} \vec{e}_{\text{s}}} \propto \left(\vec{e}_{\text{i}}\cdot\hat{R}\cdot\vec{e}_{\text{s}}\right)^{2}$, where $\hat{R}$ is the Raman tensor. It is symmetric, and for the different types of Raman-active modes its non-zero components (in a coordinate system attached to the crystallographic axes) are as follows: $\alpha_{xx} \neq \alpha_{yy} \neq \alpha_{zz}$ (for $A_{g}$), $\alpha_{xy}$ (for $B_{1g}$), $\alpha_{xz}$ (for $B_{2g}$), and $\alpha_{yz}$ (for $B_{3g}$).\cite{Rousseau1981,Kroumova2003} From simple atomic-mass considerations, $18$ out of the $24$ Raman-active modes, involving mainly oxygen and iron vibrations, should have frequencies below $800$~cm$^{-1}$, whereas the remaining six, being purely hydrogen vibrations, must be situated near $1000$~cm$^{-1}$ (bending vibrations) and above $3000$~cm$^{-1}$ (stretching vibrations). As the direct interaction between the hydrogen atoms is weak, their six Raman-active modes are expected to be distributed into three Davydov pairs (pairs of modes with identical directions of the hydrogen vibrations, the only difference between them being the relative phase---in-phase or out-of-phase---of the vibrations of adjacent hydrogen atoms) with very close or coinciding frequencies. These three pairs must be the stretching O-H vibration pair $A_{g} + B_{2g}$, the bending (in the $(010)$ plane) O-H vibration pair $A_{g} + B_{2g}$, and the bending (along the $[010]$ direction) O-H vibration pair $B_{1g} + B_{3g}$. The lattice-dynamical calculations as well as the observed Raman spectra confirmed these expectations. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{structure1} \caption{\label{structure1} Crystal structure of $\alpha$-FeOOH (goethite). A supercell of $2 \times 2 \times 2$ unit cells is drawn.} \end{figure} A spectrum obtained from a synthetic goethite powder (commercially available ``Bayferrox910'', Merck) is shown in Fig.~\ref{powder}. It coincides with the one published in Ref.~\onlinecite{Hanesch_2009}.
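Returning to the polarization selection rules stated above, they can be checked mechanically. The following minimal Python sketch (the numerical tensor components are placeholders; only the pattern of non-zero entries is meaningful) evaluates $I \propto (\vec{e}_{\text{i}}\cdot\hat{R}\cdot\vec{e}_{\text{s}})^{2}$ for the four Raman-active symmetries in several scattering geometries:
\begin{verbatim}
import numpy as np

# Raman tensors for the Pnma Raman-active modes; numerical values
# are placeholders, only the non-zero pattern encodes the rules.
R = {"Ag":  np.diag([1.0, 2.0, 3.0]),
     "B1g": np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]]),
     "B2g": np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]]),
     "B3g": np.array([[0., 0., 0.], [0., 0., 1.], [0., 1., 0.]])}

def intensity(e_i, e_s, mode):
    # Non-resonant Raman intensity ~ (e_i . R . e_s)^2
    return np.dot(e_i, R[mode] @ e_s) ** 2

x, y, z = np.eye(3)
for name, (e_i, e_s) in {"xx": (x, x), "xy": (x, y),
                         "xz": (x, z), "yz": (y, z)}.items():
    print(name, [m for m in R if intensity(e_i, e_s, m) > 0])
# xx -> ['Ag'], xy -> ['B1g'], xz -> ['B2g'], yz -> ['B3g']
\end{verbatim}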
The attempts to identify the symmetry of the lines by comparing their relative intensities in the spectra obtained in parallel ($I_{\text{par}}$) and in crossed ($I_{\text{cr}}$) polarization (using the fact that the depolarization ratio $\rho = I_{\text{cr}}/I_{\text{par}}$ of a gas of molecules or of randomly oriented particles depends on the symmetry of the lines: $\rho = 3/4$ for B$_{1g}$, B$_{2g}$ and B$_{3g}$ lines and $0 \leq \rho \leq 3/4$ for A$_{g}$ lines) were unsuccessful, as these two spectra were identical, showing that the scattered light is completely depolarized. This observation can be explained by assuming that the penetration depth of the incident light is much larger than the submicronic size of the powder particles (their acicular shape and predominant size, $0.1~\mu$m $\times~0.6~\mu$m, were determined by SEM). \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{powder} \caption{\label{powder} (a) Raman spectrum of a synthetic goethite powder in the region $80$~cm$^{-1}$ -- $4000$~cm$^{-1}$; (b) the low-frequency part of the spectrum ($80$~cm$^{-1}$ -- $800$~cm$^{-1}$, the region where one-phonon Raman scattering by iron and oxygen vibrations is expected). The spectrum in (a) is multiplied by a factor of $15$ compared to the one in (b).} \end{figure} The polarized Raman spectra of the goethite ores are shown in Fig.~\ref{ores}. As can be seen from the electron microscope image (Fig.~\ref{SEM1}), on the scale of the laser spot the small needle-like microcrystals have nearly parallel long edges, and this direction identifies the $[010]$ crystallographic direction ($y$-axis). However, the orientation of the other two directions, $[100]$ and $[001]$, of each microcrystal in the laboratory frame is probably arbitrary. As a result, only three qualitatively different polarized Raman spectra can be obtained: $M(YY)\bar{M}$, $M(NN)\bar{M}$, and $M(YN)\bar{M}$. The scattering configurations are described using the Porto notation: the first and the last symbols denote the propagation directions of the incident and scattered light, whereas the symbols in brackets denote the polarization directions of the incident and scattered light, respectively. Here $Y$ is the $[010]$ direction, and $M$ and $N$ are two mutually perpendicular unknown directions within the $(010)$ plane. The spectra obtained with different laser excitations are similar, showing that resonance effects are weak. From Fig.~\ref{ores} it is seen that the observed Raman lines can be sorted into three groups: the lines at $244$, $300$, $389$, and $481$~cm$^{-1}$ have $A_{g}$ symmetry; those at $92$, $205$, $300$, $401$, $554$, and $686$~cm$^{-1}$ have $A_{g}$ or $B_{2g}$ symmetry; and those at $112$, $167$, $300$, and $389$~cm$^{-1}$ have $B_{1g}$ or $B_{3g}$ symmetry. However, due to the very small size of the goethite crystals in the ore and their defective nature, the lines are broad, so that only the strongest of them can be observed. The discrimination of closely positioned lines (e.g., those in the region near $400$~cm$^{-1}$) is also difficult, and the roughness of the surface can further contribute to deviations from the selection rules expected for a perfect crystal. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{ores} \caption{\label{ores}Polarized Raman spectra of goethite ores. The wavelengths of the laser excitation used are shown in the figure.
For each spectrum, the scattering configuration (in Porto notation) and the line symmetries allowed for this configuration are shown. $M$ and $N$ are two mutually perpendicular unknown directions in the $(010)$ plane.} \end{figure} The Raman spectra obtained from single crystals of goethite are shown in Figs.~\ref{polarized},~\ref{RamanOH}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{polarized} \caption{\label{polarized}Polarized Raman spectra of goethite single crystals (low-frequency part). The wavelength of the laser excitation is $\lambda_{L} = 633$~nm. For each spectrum, the scattering configuration (in Porto notation) and the line symmetry allowed for this configuration are shown.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{RamanOH} \caption{\label{RamanOH}Unpolarized Raman spectrum obtained from a $(010)$ surface of a goethite single crystal. The wavelength of the laser excitation is $\lambda_{L} = 633$~nm. The two Davydov pairs ($A_{g}$ and $B_{2g}$ modes) of hydrogen vibrations in the O-H groups are indicated.} \end{figure} Before discussing the spectra in detail, we explain how the $[100]$ ($x$-axis) and $[001]$ ($z$-axis) directions were determined. In the case of an orthorhombic crystal this is not trivial, because the selection rules predict an equal number of lines in the $xy$ ($4B_{1g}$) and $zy$ ($4B_{3g}$) spectra. Moreover, one and the same set of lines ($8A_{g}$) is observed in the $xx$ and $zz$ spectra. Therefore, the $x$ and $z$ directions are spectroscopically indistinguishable. The morphology of the needle-like crystals readily determines only the $y$ axis. The cross-section of the crystals (perpendicular to the $y$ axis) looks rather round, and it contains many edges along different crystallographic directions (not only of the $\{101\}$ type expected from structural considerations, see Fig.~\ref{structure1}). Therefore, we looked for the presence of mutually perpendicular edges on the $(010)$ faces ($xz$-planes) of different vertically aligned needle-like crystals. We then obtained two spectra in parallel polarization (parallel and perpendicular to those edges) and one in crossed polarization. In this way, we succeeded in observing one set of lines (the A$_{g}$ ones) in the spectra in parallel ($xx$ and $zz$) polarization and another set of lines ($B_{2g}$) in crossed ($xz$) polarization (see Fig.~\ref{polarized}). After that, from the photographs of these faces, we measured the angles between the $[100]$ and $[001]$ edges and the other edges. Knowing the lattice parameters, the angles of all $[h0l]$ edges can be calculated. By comparing the measured and calculated angles, the $x$ and $z$ directions, as well as all other edges (actually the ends of the vertical $(h0l)$ faces), can be identified (see Fig.~\ref{010face1}). After that, placing the needle-like crystals horizontally, we measured spectra in parallel polarization in a direction perpendicular to the long edges until the measured spectrum coincided with one of the already known $xx$ and $zz$ spectra. Finally, we succeeded in finding the $(100)$ and $(001)$ faces and measured the hitherto missing $xy$ and $zy$ spectra, in which only the lines with $B_{1g}$ and $B_{3g}$ symmetry can be observed. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{010face1} \caption{\label{010face1} Optical photograph of the $(010)$ face of a goethite crystal taken with a $\times 50$ microscope objective.
The Miller indices of the vertical planes forming the edges of the $(010)$ face are indicated.} \end{figure} On the basis of the polarized spectra shown in Fig.~\ref{polarized}, the observed Raman lines can be easily discriminated by symmetry. The experimental frequencies, including those recorded on the synthetic powder and the ores, are listed in Table~\ref{tab1}, where they are compared with the LDC values obtained with the shell model and with \textit{ab initio} DFT. The corresponding calculated amplitude vectors (from both calculation methods) for all Raman-active modes at the $\Gamma$-point of the Brillouin zone are given in the Supporting Information (SI) as xyz files and can be visualized with the Jmol software. Evidently, the proposed assignment of the experimental Raman features to specific mode symmetries corroborates, to good precision, the theoretical expectations of the shell model and DFT. It is also supported by the earlier GGA+$U$ calculations of Blanchard~{\it et~al.},\cite{Blanchard_2013} though atomic displacement vectors have not been reported there. Therefore, our further analysis will be based on the normal modes calculated in the present work. \begin{table}[htbp] \caption{\label{tab1}Comparison between the frequencies (in cm$^{-1}$) of the experimentally observed Raman lines (in the spectra of the synthetic powder, the ores, and the single crystals, respectively) and the calculated frequencies: shell model and DFT.} \begin{ruledtabular} \begin{tabular}{rcrrrrr} & & \multicolumn{3}{c}{Experimental data} & \multicolumn{2}{c}{LDC} \\ \cline{3-5}\cline{6-7} & \multicolumn{1}{c}{Line} & \multicolumn{1}{c}{Synth.} & \multicolumn{1}{c}{Ores} & \multicolumn{1}{c}{Single} & \multicolumn{1}{c}{Shell} & \multicolumn{1}{c}{DFT} \\ \multicolumn{1}{c}{No.} & \multicolumn{1}{c}{symmetry} & \multicolumn{1}{c}{powder} & & \multicolumn{1}{c}{cryst.} & \multicolumn{1}{c}{model} & \\ \hline 1 & $A_{g}(1)$ & $93$ & $92$ & $93$ & $90$ & $123$ \\ 2 & $A_{g}(2)$ & $247$ & $244$ & $247$ & $237$ & $261$\\ 3 & $A_{g}(3)$ & $301$ & $300$ & $300$ & $355$ & $309$\\ 4 & $A_{g}(4)$ & $401$ & $401$ & $401$ & $389$ & $426$ \\ 5 & $A_{g}(5)$ & $483$ & $481$ & $483$ & $418$ & $490$\\ 6 & $A_{g}(6)$ & $551$ & $554$ & $551$ & $548$ & $553$\\ 7 & $A_{g}(7)$ & $1003$ & $1000$ & $1004$ & $1005$ & $1071$\\ 8 & $A_{g}(8)$ & $3091$ & $3120$ & $3082$ & $3082$ & $3035$\\ 9 & $B_{1g}(1)$ & $167$ & $167$ & $167$ & $193$ & $169$\\ 10 & $B_{1g}(2)$ & & $300$ & $300$ & $303$ & $305$\\ 11 & $B_{1g}(3)$ & & & $423$ & $490$ & $412$\\ 12 & $B_{1g}(4)$ & & & & $744$ & $748$\\ 13 & $B_{2g}(1)$ & $206$ & $205$ & $206$ & $222$ & $210$\\ 14 & $B_{2g}(2)$ & & $251$ & $258$ & $271$ & $278$\\ 15 & $B_{2g}(3)$ & & & $356$ & $370$ & $351$\\ 16 & $B_{2g}(4)$ & $419$ & & $420$ & $442$ & $399$\\ 17 & $B_{2g}(5)$ & & & $529$ & $467$ & $574$\\ 18 & $B_{2g}(6)$ & $686$ & $686$ & $686$ & $600$ & $651$\\ 19 & $B_{2g}(7)$ & $1003$ & $1000$ & $1004$ & $1028$ & $1064$\\ 20 & $B_{2g}(8)$ & $3091$ & $3120$ & $3082$ & $3091$ & $3027$\\ 21 & $B_{3g}(1)$ & $114$ & $112$ & $114$ & $123$ & $97$\\ 22 & $B_{3g}(2)$ & & & $309$ & $314$ & $297$\\ 23 & $B_{3g}(3)$ & $387$ & $389$ & $387$ & $463$ & $422$\\ 24 & $B_{3g}(4)$ & & & & $756$ & $743$\\ \end{tabular} \end{ruledtabular} \end{table} The presence of a Raman line below $100$~cm$^{-1}$ (the $A_{g}(1)$ mode at $93$~cm$^{-1}$) is not typical for crystal structures in which the heaviest atom is iron. As shown in Fig.~\ref{modes}(a), this mode corresponds to a libration, i.e.
a rigid rotation of the double chains about the $y$-axis. The displacement patterns of most of the Raman-active vibrations are complex and will not be discussed in detail. Instead, we will focus on the strongest feature in the spectra, the $B_{3g}(3)$ mode at $387$~cm$^{-1}$. According to the shell-model and DFT calculations, this mode is dominated by an O1 vibration along $y$, as depicted in Fig.~\ref{modes}(b). Due to the specific atomic arrangement in $\alpha$-FeOOH, the O1 atoms move along the bisector of the Fe-O1-Fe angle ($\approx 124^{\circ}$). Therefore, this vibration can be characterized as a mixture of Fe-O1-Fe bond-angle bending and a symmetric Fe-O1 stretch, since both iron-oxygen bonds in the Fe-O1-Fe linkage are modulated in-phase. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{FeOOH_modes} \caption{\label{modes} Calculated atomic displacement vectors for the lowest-frequency $A_g(1)$ mode (a) and for the most intense $B_{3g}(3)$ mode (b). The experimental (Exp.) and DFT-calculated (Calc.) frequencies of the two vibrations are also given in the figure.} \end{figure} In Fig.~\ref{RamanOH}, the lines observed at $1004$~cm$^{-1}$ and $3082$~cm$^{-1}$ correspond to the two Davydov pairs of $A_{g}+B_{2g}$ modes of hydrogen vibrations. The other features with irregular shape between $1000$~cm$^{-1}$ and $2000$~cm$^{-1}$ originate from two- (and three-) phonon scattering. Figures~\ref{Tdepblackup} and \ref{Tdepwhitedown} present Raman spectra obtained at different temperatures between $303$~K and $473$~K. They were recorded from horizontally lying needle-like crystals with the polarization of the laser light perpendicular to their long edge and without an analyzer. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Tdepblackup} \caption{\label{Tdepblackup}Raman spectra obtained at different temperatures from the $(101)$ face of a goethite crystal with the polarization of the incident laser light perpendicular to the $[010]$ direction. No analyzer. In this scattering configuration, the lines with $A_{g}$ and $B_{2g}$ symmetry dominate the spectra.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Tdepwhitedown} \caption{\label{Tdepwhitedown}Raman spectra obtained at different temperatures from the $(100)$ face of a goethite crystal with the polarization of the incident laser light perpendicular to the $[010]$ direction. No analyzer. In this scattering configuration, the lines with $A_{g}$ and $B_{3g}$ symmetry are visible.} \end{figure} From the relative intensities of the observed lines it can be concluded that the spectra presented in Fig.~\ref{Tdepblackup} were obtained from a $(101)$ face ($A_{g}$ and $B_{2g}$ lines dominate the spectra); these spectra were obtained on heating. The spectra presented in Fig.~\ref{Tdepwhitedown} were obtained from a $(100)$ face ($A_{g}$ and $B_{3g}$ lines dominate the spectra); these were obtained on cooling. No sign of chemical decomposition was observed during the heating/cooling cycle, and the spectra obtained at room temperature before heating and after cooling were identical. The most intense lines in the spectra were fitted with Lorentzians. The temperature dependences of the lineshape parameters (position, half-width at half-maximum -- HWHM, and integrated intensity) of some of these modes are shown in Fig.~\ref{Tdep6lines}. It is clear that for the $A_{g}(1)$, $A_{g}(4)$, $B_{2g}(4)$, and $B_{2g}(7)$ modes the slope of the frequency-temperature curve is discontinuous at $T_{\text{N}}$.
Likewise, a discontinuity is found at $T_{\text{N}}$ in the slope of the HWHM-temperature dependence of the $A_{g}(1)$, $A_{g}(6)$, and $B_{2g}(7)$ modes. These ``anomalies'' indicate a considerable spin-lattice coupling in $\alpha$-FeOOH. The spectral changes near $T_{\text{N}}$, however, are more pronounced and more informative for the most intense $B_{3g}(3)$ mode at $387$~cm$^{-1}$. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Tdep6lines} \caption{\label{Tdep6lines} Temperature dependences of the parameters (position, HWHM (half-width at half-maximum), and integrated intensity) of some of the lines. Their assignment is given in Table~\ref{tab1}. For the $A_{g}(1)$ line, the parameters were calculated from the spectra given in Fig.~\ref{Tdepwhitedown}; for the other five lines, from the spectra given in Fig.~\ref{Tdepblackup}. The two symbols at each temperature correspond to the values of the fits from two different spectra; some of the symbols coincide. The lines tracing some dependences are just guides to the eye.} \end{figure} The fit of the $B_{3g}(3)$ line with a Lorentzian shape is unsatisfactory, leaving a strong and asymmetric spectral residuum. In order to obtain a relevant description of the $B_{3g}(3)$ lineshape, we performed additional measurements of its temperature evolution (both on heating and on cooling) in the $X(ZY)\bar{X}$ scattering configuration, in a narrower spectral window, using a higher-resolution diffraction grating. In this configuration only the $B_{3g}$ symmetry is allowed, so a possible superposition with other closely positioned lines (such as $A_{g}(4)$ at $401$~cm$^{-1}$ and $B_{2g}(4)$ at $420$~cm$^{-1}$) is excluded, and we were thus able to record a pure signal from the $B_{3g}(3)$ mode only. The corresponding spectra are shown in the left panel of Fig.~\ref{Tdep387}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Tdep387} \caption{\label{Tdep387} (left panel) Raman spectra obtained from a goethite single crystal at different temperatures in the $X(ZY)\bar{X}$ scattering configuration (only lines with $B_{3g}$ symmetry are allowed); (right panel) the temperature dependence of the parameters (position, HWHM, and asymmetry parameter) obtained from a fit of the $B_{3g}(3)$ line with a Fano profile. Up-pointing triangles correspond to the spectra obtained on heating, down-pointing triangles to the spectra obtained on cooling.} \end{figure} It is seen that with the increase of the temperature from $293$~K to $473$~K, the frequency of the weak $B_{3g}(2)$ line decreases from $308$ to $302$~cm$^{-1}$, whereas the frequency of the $B_{3g}(3)$ mode increases from $387$ to $395$~cm$^{-1}$. The asymmetric shape of the $B_{3g}(3)$ line is apparent in the spectra at high temperatures. Therefore, we fitted it with an asymmetric Fano profile, $\displaystyle I(\nu)= I_{0}\frac{\left(1+ \frac{\nu - \nu_{0}}{q \Gamma} \right)^2}{1+ \left(\frac{\nu - \nu_{0}}{\Gamma}\right)^2}$, where $q$ is the asymmetry parameter. In the limit $|q| \rightarrow \infty$, the Fano profile converges to the Lorentzian shape, with $\nu_{0}$, $\Gamma$, and $I_{0}$ being the position, HWHM, and peak intensity of the Lorentzian, respectively. The parameters obtained from the fit are given in the right panel of Fig.~\ref{Tdep387}. Evidently, the spectral shape parameters display considerable changes around $T_{\text{N}}$.
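For reference, the Fano profile used in these fits can be implemented and fitted in a few lines of Python; this is a minimal sketch, and the initial-guess values passed to \texttt{curve\_fit} are hypothetical:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fano(nu, I0, nu0, gamma, q):
    # I(nu) = I0 * (1 + eps/q)^2 / (1 + eps^2),
    # eps = (nu - nu0)/gamma; a Lorentzian for |q| -> infinity.
    eps = (nu - nu0) / gamma
    return I0 * (1.0 + eps / q) ** 2 / (1.0 + eps ** 2)

# Hypothetical usage on a measured spectrum (nu_data, counts):
# popt, _ = curve_fit(fano, nu_data, counts,
#                     p0=(1.0, 387.0, 5.0, -5.0))
\end{verbatim}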
The integrated intensity of the line (not shown in the figure) also displays an abrupt increase below $T_{\rm N}$, which is similar to the temperature behavior observed for specific Raman lines in the magnetoelectric compound Cu$_2$OSeO$_3$.\cite{gnezdilov2010} In the latter case, the intensity anomaly has been interpreted as an increase of the dynamic electric polarizability of the material due to a contribution from the magnetoelectric effect. It is therefore tempting to interpret the high intensity of the $B_{3g}(3)$ line below $T_{\rm N}$ as a signature of a magnetoelectric susceptibility of $\alpha$-FeOOH in its magnetically ordered phase, in agreement with the theoretical predictions of Ref.~\onlinecite{Ter_Oganessian_2017}. Especially interesting is the temperature evolution of the asymmetry parameter $q$ ($q<0$) of the $B_{3g}(3)$ line. With increasing temperature, $q$ decreases in modulus and changes only weakly above $T_{\text{N}}$. This results in a more pronounced Fano shape of the $B_{3g}(3)$ line above $T_{\text{N}}$, which indicates the presence of an excitation continuum whose spectral density undergoes a substantial redistribution at the AF/PM transition. It is evident from Fig.~\ref{RamanOH} that the low-frequency phonons overlay a wide scattering band, extending up to $\approx 1000$~cm$^{-1}$, with an apparent maximum at $\approx 400$~cm$^{-1}$ -- in close proximity to the spectral position of the $B_{3g}(3)$ mode. It is tempting, therefore, to assume that the Fano shape of the $B_{3g}(3)$ line is a result of an interaction between this mode and the excitations of the underlying background. The origin of this scattering continuum, however, is elusive, since systematic studies of the electronic and magnetic excitations in $\alpha$-FeOOH are still lacking. Nevertheless, two plausible hypotheses for the asymmetric shape of the $B_{3g}(3)$ mode can be inferred from the existing works on the structure, transport properties and magnetic ordering of goethite. The first, and more coherent, scenario is based upon the coupling of the $B_{3g}(3)$ phonon with magnetic exchange excitations. Ter Oganessian {\it et~al.}\cite{Ter_Oganessian_2017} have reported extensive GGA+$U$ calculations of the exchange interactions in $\alpha$-FeOOH and have established that the strongest exchange, $J_2 = 48.1$~meV$ = 388$~cm$^{-1}$, corresponds to the Fe-O1-Fe bridges connecting pairs of neighboring double chains. The exchange energy of an Fe-O1-Fe dimer is given by $H_{\rm s-s}=J_2 \vec S_i \cdot \vec S_j = \frac12 J_2 \left(K(K+1)-2S(S+1)\right)$, where $\vec S_i$ and $\vec S_j$ are the spins of the $i$-th and $j$-th Fe atoms, respectively, $S$ is the spin per Fe atom, and $K$ is the total spin of the dimer. Obviously, if the dimer were isolated, its ground state would correspond to a singlet ($K=0$) and the first excited state to a triplet ($K=1$). Notably, the energy of the singlet-triplet transition, $\Delta E = J_2$, matches precisely the spectral maximum of the background. Moreover, the singlet-triplet transition of the dimer is Raman-active through the Fleury-Elliot mechanism of exchange-assisted light scattering (usually referred to as ``two-magnon'' scattering).\cite{Fleury1968} Since the different Fe-O1-Fe dimers are not isolated, but interact with each other via exchange interactions of comparable magnitude, the singlet-triplet scattering would be manifested as a smeared band of a spectral width comparable to $J_2$, instead of a discrete line.
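For completeness, the singlet-triplet gap quoted above follows directly from the dimer energies:
\begin{equation*}
\Delta E = E_{K=1} - E_{K=0} = \tfrac{1}{2} J_2 \left[ 1(1+1) - 0 \right] = J_2 ,
\end{equation*}
independently of the value of $S$, since the term $-J_2 S(S+1)$ cancels in the difference.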
Therefore, it is tempting to assign the scattering background to magnetic excitations involving singlet-triplet excitations of the Fe-O1-Fe dimers. Since the $K=1$ spin state transforms as an axial vector, this kind of scattering is active in the $B_{1g}+B_{2g}+B_{3g}$ irreducible representations of the $Pnma$ space group and can be coupled to phonons of the corresponding symmetries, provided physical mechanisms for such a coupling are present. As discussed above, the O1 displacement in the $B_{3g}(3)$ vibration modulates the Fe-O1-Fe bond angle, as well as the Fe-O1 bond lengths (see Fig.~\ref{modes}(b)). Therefore, a linear spin-phonon coupling of the form $H_{\rm s-ph}=(\vec u \cdot \nabla_u J_2)\, \vec S_i \cdot \vec S_j$ is allowed for the $B_{3g}(3)$ mode, where $\vec u$ is the O1 displacement vector and $\nabla_u J_2$ is the gradient of $J_2$ with respect to the O1 displacement. Correspondingly, this spin-phonon interaction is likely to result in an asymmetric Fano shape of the $B_{3g}(3)$ line, since the phonon is directly coupled to the spin excitation of the dimer. It is worth noting that the mechanism of the spin-phonon coupling of the $B_{3g}(3)$ mode in $\alpha$-FeOOH is qualitatively different from the spin-phonon interaction studied previously in perovskite manganites.\cite{Granado1998,Granado1999,Laverdiere2006} The phonons displaying a significant frequency shift below $T_{\rm N}$ in manganites are associated with asymmetric Mn-O-Mn stretching, for which a linear spin-phonon coupling is forbidden by symmetry. Instead, these phonons are involved in a second-order spin-phonon interaction, given by $H_{\rm s-ph} = \frac12 J'' M^2 u^2$, where $J''$ is the second derivative of the exchange integral with respect to the oxygen displacement $u$, and $M$ is the sublattice magnetization below $T_{\rm N}$. Thus, the second-order interaction is equivalent to an additional force constant, which results in a frequency shift below $T_{\rm N}$ but does not affect the symmetric shape of the phonon line. The second-order spin-phonon coupling could be operative for the $B_{3g}(3)$ phonon in $\alpha$-FeOOH, as suggested by the pronounced frequency softening of this mode (see Fig.~\ref{Tdep387}). Due to the presence of a first-order interaction, however, the frequency shift of this vibration below $T_{\rm N}$ depends not only on $J''$, as in the manganites, but also on $J'^2/J$. Therefore, the first and second derivatives of the exchange integral with respect to the O1 displacement cannot be extracted independently from the measured values of the frequency shift. In order to validate the magnetic mechanism of the $B_{3g}(3)$ line asymmetry, one should also explain why the asymmetry of the phonon line is more pronounced (i.e., the Fano $q$-parameter decreases in modulus) above the magnetic transition temperature. Recently, an extensive Raman study of the spin excitations in the antiferromagnetic insulator Cu$_2$OSeO$_3$ was reported by Versteeg {\it et~al.}\cite{Versteeg2019} This compound consists of structurally isolated Cu$_4$ spin units, which give rise to a multitude of intra-cluster spin excitations that persist well above the N\'eel temperature $T_{\rm N}$, where the antiferromagnetic correlations between different clusters are lost. In the case of $\alpha$-FeOOH, the Fe-O1-Fe dimers are not isolated to such an extent as the Cu$_4$ clusters in Cu$_2$OSeO$_3$.
Nevertheless, above $T_{\rm N}$ the antiferromagnetic correlations between different Fe-O1-Fe bridges are lost, and the excitation spectral density shifts closer to the singlet-triplet excitation energy of an isolated dimer and, correspondingly, to the $B_{3g}(3)$ phonon line. Being superimposed on a background of larger spectral density above $T_{\rm N}$, the $B_{3g}(3)$ phonon acquires a more pronounced line asymmetry in the PM phase compared to the AF phase. The second possible mechanism of the $B_{3g}(3)$ line asymmetry -- still very qualitative -- is the interaction of this phonon with thermally activated charge carriers. $\alpha$-FeOOH is a charge-transfer insulator with a band gap of 2.5~eV,\cite{Sherman2005} whose electrical conduction is related to thermally activated hopping of small polarons.\cite{Porter_2018} The O1 atoms mediate not only the exchange interaction but also the charge transport between the Fe atoms, and the $B_{3g}(3)$ vibration could be effectively coupled to the charge-carrier hopping. Below $T_{\rm N}$, the charge transport between the antiferromagnetically ordered Fe spins is largely suppressed due to the Hund's repulsion between electrons of opposite spins. Above the transition temperature, however, the antiparallel spin arrangement is lost and, correspondingly, the carrier mobility increases, i.e., an insulator-metal transition (coinciding with the AF-PM transition) cannot be excluded. This mechanism resembles the double-exchange model describing the charge conduction near the magnetic phase transition in doped LaMnO$_3$. The validation of this scenario, however, requires additional experiments, such as magnetotransport measurements near the transition temperature, as well as a theoretical understanding of the excitation spectrum associated with the charge carriers in $\alpha$-FeOOH. \section{Conclusions} Raman spectra (both unpolarized and polarized) were obtained on different samples of $\alpha$-FeOOH (goethite): synthetic powder, ores, and mineral single crystals. The symmetry of the observed Raman features was determined on the basis of the polarization selection rules. The spectral lines were assigned to definite atomic vibrations by comparison with lattice-dynamical calculations, and 22 out of the 24 Raman-active modes were identified. The measurements of the Raman spectra in the temperature interval 293~K -- 473~K (including the temperature of the antiferromagnetic-paramagnetic transition at $T_{\text{N}} = 393$~K) reveal an anomalous temperature behavior of the lineshape parameters of specific phonons around $T_{\text{N}}$, which evidences a significant interaction between the spin and lattice degrees of freedom. In particular, the $B_{3g}(3)$ mode at 387~cm$^{-1}$ develops a strongly asymmetric Fano shape above $T_{\text{N}}$. This finding is most likely a signature of a strong coupling between this phonon and a continuum of magnetic excitations whose spectral density is substantially redistributed above the transition temperature. However, an interaction of the $B_{3g}(3)$ vibration with thermally activated charge carriers could also play a role, pointing to a possible conductivity enhancement accompanying the magnetic transition.
\begin{acknowledgments} MVA, VGI, and NDT acknowledge the support of the Bulgarian Ministry of Education and Science under contract D01-284/2019 (INFRAMAT) and of the European Regional Development Fund within the Operational Programme ``Science and Education for Smart Growth 2014 -- 2020'' under the Project CoE ``National center of mechatronics and clean technologies'' BG05M2OP001-1.001-0008-C01. MVA thanks the Alexander von Humboldt Foundation, Bonn (Germany), for the research fellowship that supported his stay at the Freie Universit\"at Berlin. This work was partially supported by the bilateral Bulgarian-Russian project KP-06-15 funded by the Ministry of Education and Science, and also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819623). The authors thank Rositsa Titorenkova (Institute of Mineralogy and Crystallography-BAS, Bulgaria) and Jordan Kortenski (University of Mining and Geology, Bulgaria) for supplying the ore samples. The helpful discussions with Milko N. Iliev are highly appreciated, as is the help of Anna Esther and Marti Gich with the electron microscopy work. \end{acknowledgments} \vspace*{-\baselineskip} \section*{References} \vspace*{-\baselineskip}
\section{Problem Formulation} Given a rooted tree $T = (V,E)$, let $L \subset V$ be the set of leaf nodes. We wish to find a set $P \subseteq L$ which comprises a ``good'' placement. Let $f(P) \in \natnum^{\repfact + 1}$ be defined as: $$f(P) := \langle p_0, p_1, ..., p_\repfact \rangle, ~~ p_i = |\{ v \in V \mid v \text{ has survival number } i \text{ w.r.t.\ } P\} |.$$ For our purposes, a good placement is one in which $f(P)$ is lexicominimum. Note that optimizing for lexicominima prioritizes minimizing the survival frequency in ascending order of survival number. Intuitively, the number of replicas having survival number $i$ will be minimized before minimizing those having survival number $i+1$. We first claim that any placement which lexicominimizes $f(P)$ must have \textit{balanced} nodes. This local property is key to obtaining a near-linear running time. Before giving the formal definition, we motivate this idea with an example. Consider Figure \ref{fig-refa}, in which $P_1$ and $P_2$ consist of the leaves labeled $1$ and $2$ respectively. Upon computing $f(P_1)$ and $f(P_2)$, we find that $f(P_1) = \langle 2, 1, 3, 7 \rangle \lgeq \langle 1, 1, 4, 7 \rangle = f(P_2)$. We invite the reader to verify that $P_2$ is an optimal solution for this tree. Consider the state of affairs at the root of the tree. In $P_1$, all replicas are placed in the subtree rooted at node $a$. This causes node $a$ to have survival number 0, while in $P_2$, 1 replica is present on $a$ and 2 are present on $b$. This causes the root to be \textit{unbalanced} when $P_1$ is considered, while in $P_2$ the root is balanced. A placement $P$ is said to be balanced if all nodes $v \in V$ are balanced. Let node $n$ have children indexed $1, ..., k$. Also, let the subtree rooted at the $i^{th}$ child of $n$ have $\ell_i$ leaves, and $r_i$ replicas placed on it in placement $P$. Node $n$ is said to be balanced iff: $$\ell_j - r_j > 0,~ \ell_i - r_i > 0 \implies |r_i - r_j| \leq 1, \text{ and } $$ $$\ell_i - r_i = 0,~ \ell_j - r_j > 0,~ r_j \neq 0 \implies r_i \leq r_j.$$ This definition makes a distinction between ``filled'' nodes (for which $\ell_i - r_i = 0$) and ``unfilled'' nodes, for which $\ell_i - r_i > 0$. Imagining the children of $n$ as bins of capacity $\ell_i$, the first condition requires all unfilled bins to be level to within one replica of each other, while the second requires that no filled bin holds more replicas than a nonempty unfilled bin. \section{Introduction} With the surge towards the cloud, our websites, services and data are increasingly being hosted by third-party data centers. These data centers are often contractually obligated to ensure that data is rarely, if ever, unavailable. One cause of unavailability is co-occurring component failures, which can result in outages that affect millions of websites \cite{Ver:2013:Blog} and can cost millions of dollars in profits \cite{Ple:2013:Blog}. An extensive one-year study of availability in Google's cloud storage infrastructure showed that such failures are relatively harmful. Their study emphasizes that ``correlation among node failure dwarfs all other contributions to unavailability in our production environment'' \cite{ForFra+:2010:OSDI}. We believe that the correlation found among failure events arises due to dependencies among system components. Much effort has been made in the literature to produce quality statistical models of this correlation. But in using such models researchers do not make use of the fact that these dependencies can be explicitly modeled, since they are known to the system designers.
In contrast, we propose a model wherein such dependencies are included, and demonstrate how an algorithm may make use of this information to optimize the placement of data replicas within the data center. To achieve high availability, data centers typically store multiple replicas of data to tolerate the potential failure of system components. This gives rise to a \emph{placement problem}, which, broadly speaking, involves determining which subset of nodes in the system should store a copy of a given file so as to maximize a given objective function (\emph{e.g.}, reliability, communication cost, response time, or access time). While our focus is on replica placements, we note that our model could also be used to place replicas of other system entities which require high availability, such as virtual machines and mission-critical tasks. \begin{figure}[b] \begin{subfigure}[b]{0.5\textwidth} \input{informal} \vspace{-0.5cm} \caption{Scenario I} \label{fig:scenarioI} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \input{informalb} \vspace{-0.5cm} \caption{Scenario II} \label{fig:scenarioII} \end{subfigure} \caption{Two scenarios represented by directed trees.} \label{fig:scenarios} \end{figure} In this work, we present a new model for causal dependencies among failures, and a novel algorithm for optimal replica placement in our model. An example model is given in Fig. \ref{fig:scenarios}, in which three identical replicas of the same block of data are distributed on servers in a data center. Each server receives power from a surge protector located on its server rack. In Scenario I, all replicas are located on nodes which share the same rack. In Scenario II, each replica is located on a separate rack. As can be seen from the diagram of Scenario I (Fig. \ref{fig:scenarioI}), a failure in the power supply unit (PSU) on a single rack could result in a situation where every replica of a data block is completely unavailable, whereas in Scenario II (Fig. \ref{fig:scenarioII}), three PSUs would need to fail in order to achieve the same result. In practice, Scenario I is avoided by ensuring that each replica is placed on a node which lies on a separate rack. This heuristic is already part of known best practices. Our observation is that this simple heuristic can be suboptimal under certain conditions. For example, consider a failure in the aggregation switch which services multiple racks. Such a failure could impact the availability of every data replica stored on the racks it services. Moreover, this toy example only represents a small fraction of the number of events that could be modeled in a large data center. While many approaches for replica placement have been proposed, our approach of modeling causal dependencies among failure events appears to be new. Other work on reliability in storage area networks has focused on objectives such as mean time to data loss \cite{DBLP:conficdcsLianCZ05,lian-chen-zhang}. These exemplify an approach towards correlated failure which we term ``measure-and-conquer''. In measure-and-conquer approaches, a measured degree of correlation is given as a parameter to the model. In contrast, we model explicit causal relations among failure events which we believe give rise to the correlation seen in practice. In \cite{DBLP:conficdcsLianCZ05} the authors consider high-availability replica placement, but are primarily focused on modeling the effects of repair time.
Later work \cite{lian-chen-zhang} begins to take into account information concerning the network topology, which is a step towards our approach. Similar measure-and-conquer approaches are taken in \cite{ForFra+:2010:OSDI,BakWyl+:2002:TR, WeaMos+:2002:SRDS, NatYu+:2006:NSDI}. More recently, Pezoa and Hayat \cite{Pezoa} have presented a model in which spatially correlated failures are explicitly modeled. However, they consider the problem of task allocation, whereas we are focused on replica placement. In the databases community, work on replica placement primarily focuses on finding optimal placements in storage-area networks with regard to a particular distributed access model or mutual exclusion protocol \cite{HuJia+:2001:JPDC, SheWu:2001:TCS, ZhaWu+:2009:JPDC}. In general, much of the work from this community focuses on specialized communication networks and minimizing communication costs --- system models and goals which are substantially different from our own. Recently, there has been a surge of interest in computer science concerning cascading failure in networks \cite{BluEas+:2011:FOCS, NieLui+:2014:IPL, KimDob:2010:TransRel, ZhuYan+:2014:TPDS}. While our model is most closely related to this work, the existing literature is primarily concerned with applications involving large graphs intended to capture the structure of the world-wide web, or power grids. The essence of all these models is captured in the \textit{threshold cascade model} \cite{BluEas+:2011:FOCS}. This model consists of a directed graph in which each node $v$ is associated with a threshold, $\ell(v) \in \natnum^+$. A node $v$ experiences a cascading failure if at least $\ell(v)$ of its incoming neighbors have failed. This model generalizes our own, wherein we pessimistically assume that $\ell(v) = 1$ for all nodes $v$. Current work in this area is focused on network design \cite{BluEas+:2011:FOCS}, exploring new models \cite{NieLui+:2014:IPL, KimDob:2010:TransRel}, and developing techniques for adversarial analysis \cite{ZhuYan+:2014:TPDS}. To our knowledge, no one has yet considered the problem of replica placement in such models. \section{Model}\label{sec:model} We model dependencies among failure events as a directed graph, where nodes represent failure events, and a directed edge from $u$ to $v$ indicates that the occurrence of failure event $u$ could trigger the occurrence of failure event $v$. We refer to this graph as the \emph{failure model}. Given such a graph as input, we consider the problem of selecting nodes on which to store data replicas. Roughly, we define a \emph{placement problem} as the problem of selecting a subset of these vertices, hereafter referred to as a \emph{placement}, from the failure model so as to satisfy some safety criterion. In our application, only those vertices which represent storage servers are candidates to be part of a placement. We refer to such vertices as \emph{placement candidates}. Note that the graph also contains vertices representing other types of failure events, which may correspond to real-world hardware unsuitable for storage (such as a ToR switch), or even to abstract events which have no associated physical component. In most applications, the set of placement candidates forms a proper subset of the set of vertices. More formally, let $E$ denote the set of failure events, and $C$ denote the set of placement candidates. We are interested in finding a \emph{placement} of size $\repfact$, which is defined to be a set $P \subseteq C$ with $|P| = \repfact$.
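To make the preceding definitions concrete, the following Python sketch (purely illustrative; the component names are invented, and the transitive reading of failure it uses is the one made formal below) stores a small failure model as an adjacency map and reports which candidates an event could compromise.
\begin{verbatim}
# Hypothetical failure model, loosely modeled on Scenario II above:
# an aggregation switch feeds three rack PSUs, each of which powers
# one storage server (the placement candidates).
edges = {
    "agg-switch": ["psu-1", "psu-2", "psu-3"],
    "psu-1": ["server-1"],
    "psu-2": ["server-2"],
    "psu-3": ["server-3"],
}
candidates = {"server-1", "server-2", "server-3"}

def compromised(event):
    """Candidates reachable from `event`, i.e. every candidate whose
    failure could be triggered, directly or transitively, by it."""
    seen, stack = set(), [event]
    while stack:
        for v in edges.get(stack.pop(), ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen & candidates

print(compromised("agg-switch"))  # all three servers are at risk
print(compromised("psu-1"))       # only server-1 is at risk
\end{verbatim}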
Throughout this paper we will use $P$ to denote a placement, and $\repfact$ to denote its size. We consistently use $C$ to denote the set of placement candidates, and $E$ to denote the set of failure events. Let $G = (V,A)$ be a directed graph with vertices in $V$ and edges in $A$. The vertices represent both events in $E$ and candidates in $C$, so let \mbox{$V = E \cup C$}. A directed edge between events $e_1$ and $e_2$ indicates that the occurrence of failure event $e_1$ can trigger the occurrence of failure event $e_2$. A directed edge between event $e$ and candidate $c$ indicates that the occurrence of event $e$ could compromise candidate $c$. We will assume failure to act transitively. That is, if a failure event occurs, all failure events reachable from it in $G$ also occur. This is a pessimistic assumption which leads to a conservative interpretation of failure. We now define the notions of \textit{failure number} and \textit{failure aggregate}. \begin{definition}\label{def-failure} Let $e \in E$. The \textit{failure number} of event $e$, denoted $f(e,P)$, for a given placement $P$, is defined as the number of candidates in $P$ whose correct operation could be compromised by occurrence of event $e$. In particular, $$f(e,P) = | \{ p \in P \mid p \text{ is reachable from } e \text{ in } G \}|.$$ \end{definition} As an example, node $u$ in Fig. \ref{fig:scenarios} has failure number $3$ in Scenario I, and failure number $1$ in Scenario II. The following property is an easy consequence of the above definition. A formal proof can be found in the appendix. \begin{property}\label{lem-desc} For any placement $P$ of replicas in a tree $T$, if node $i$ has descendant $j$, then $f(j, P) \leq f(i, P)$. \end{property} The failure number captures a conservative criterion for a safe placement. Intuitively, we consider the worst case scenario, in which every candidate which \emph{could} fail due to an occurring event \emph{does} fail. Our goal is to find a placement which does not induce large failure numbers in any event. To aggregate this idea across all events, we define \textit{failure aggregate}, a measure that accounts for the failure number of every event in the model. \begin{definition} The \emph{failure aggregate} of a placement $P$ is a vector in $\mathbb{N}^{\repfact+1}$, denoted $\vec{f}(P)$, where \mbox{$\vec{f}(P) := \langle p_\repfact, ..., p_1, p_0\rangle$}, and each $p_i := \left| \big\{ e \in E \mid f(e, P) = i\big\} \right|$. \end{definition} In Fig. \ref{fig:scenarios}, node $v$ has failure aggregate $\langle 2, 0, 0, 1 \rangle$ in Scenario I and failure aggregate $\langle 1, 0, 2, 0 \rangle$ in Scenario II. Failure aggregate is also computed in Fig. \ref{fig-balanced-survivalnums}. In all of the problems considered in this paper, we are interested in optimizing $\vec{f}(P)$. When optimizing a vector quantity, we must choose a meaningful way to totally order the vectors. In the context of our problem, we find that ordering the vectors with regard to the \emph{lexicographic order} is both meaningful and convenient. The lexicographic order $\leq_L$ between $\vec{f}(P) = \langle p_\repfact, ..., p_1, p_0\rangle$ and $\vec{f}(P') = \langle p'_\repfact, ..., p'_1, p'_0\rangle$ is defined via the following formula: $$\vec{f}(P) \leq_L \vec{f}(P') \iff \exists~ m \geq 0 ~\big[\, p_m \leq p'_m ~\wedge~ \forall~ i > m,~ p_i = p'_i \,\big]. $$ To see why this is desirable, consider a placement $P$ which lexicominimizes $\vec{f}(P)$ among all possible placements.
Such a placement is guaranteed to minimize $p_\repfact$, i.e. the number of nodes which compromise \emph{all} of the entities in our placement. Further, among all solutions minimizing $p_\repfact$, $P$ also minimizes $p_{\repfact-1}$, the number of nodes compromising \emph{all but one} of the entities in $P$, and so on for $p_{\repfact-2}, p_{\repfact-3},..., p_{0}$. Clearly, the lexicographic order prioritizes minimizing the entries of the vector in an appealing manner. Throughout the paper, any time a vector quantity is maximized or minimized, we are referring to the maximum or minimum value in the lexicographic order. We will also use $\vec{f}(P)$ to denote the failure aggregate, and $p_i$ to refer to the $i^{th}$ component of $\vec{f}(P)$, where $P$ can be inferred from context. In the most general case, we could consider the following problem. \begin{problem}\label{prob:additive-function} Given graph $G = (V,A)$ with $V = C \,\cup\, E$, and positive integer $\repfact$ with $\repfact < |C|$, find a placement $P \subseteq C$ with $|P| = \repfact$ such that $\vec{f}(P)$ is lexicominimum. \end{problem} Problem \ref{prob:additive-function} is NP-hard to solve, even in the case where $G$ is a bipartite graph. In particular, a reduction from independent set can be shown. However, the problem is tractable for special classes of graphs, one of which is the case wherein the graph forms a directed, rooted tree with leaf set $L$ and $C = L$. Our main contribution in this paper is a fast algorithm for solving Problem \ref{prob:additive-function} in such a case. We briefly mention a greedy algorithm which solves the problem in $O(n^2\repfact)$ time. However, since $n \gg \repfact$ in practice, our result of an $O(n + \repfact^2)$ algorithm is much preferred. \subsection{An $O(n^2\repfact)$ Greedy Algorithm} The greedy solution to this problem forms a partial placement $P'$, to which new replicas are added one at a time, until $\repfact$ replicas have been placed overall. $P'$ starts out empty, and at each step, the leaf $u$ which lexicominimizes $\vec{f}(P' \cup \{u\})$ is added to $P'$. This greedy algorithm correctly computes an optimal placement; however, its running time is $O(n^2\repfact)$ for a tree of unbounded degree. This running time comes about since each iteration requires visiting $O(|L|)$ leaves for inclusion. For each leaf $q$ which is checked, every node on a path from $q$ to the root must have its failure number computed. Both the length of a leaf-root path and the number of leaves can be bounded by $O(n)$ in the worst case, yielding the result. That the greedy algorithm works correctly is not immediately obvious. It can be shown via an exchange argument that each partial placement found by the greedy algorithm is a subset of some optimal placement. This is the content of Theorem \ref{thm-greedy} below. To establish the correctness of the greedy algorithm, we first introduce some notation. For a placement $P$ and $S \subseteq V$, let $\vec{f}(S,P) = \langle g_\repfact, g_{\repfact - 1}, ..., g_1, g_0 \rangle$ where \mbox{$g_i := | \{ x \in S \mid f(x,P) = i \} |$}. Intuitively, $\vec{f}(S,P)$ gives the failure aggregate for all nodes in set $S \subseteq V$. We first establish two technical lemmas before stating and proving Theorem \ref{thm-greedy}.
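For concreteness, the following minimal Python sketch (illustrative only, and deliberately unoptimized: it recomputes each failure number by graph search rather than along root paths) computes the failure aggregate of a placement and runs the greedy algorithm just described. Because the aggregate is stored with $p_\repfact$ first, Python's built-in list comparison coincides with the lexicographic order $\leq_L$ defined above.
\begin{verbatim}
def failure_aggregate(succ, events, P, rho):
    """Return f(P) = <p_rho, ..., p_1, p_0> as a Python list.
    succ:   adjacency map of the failure model G
    events: the event set we aggregate over
    P:      the placement, given as a set of candidates"""
    def failure_number(e):
        seen, stack = {e}, [e]
        while stack:
            for v in succ.get(stack.pop(), ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen & P)   # candidates of P reachable from e

    p = [0] * (rho + 1)
    for e in events:
        p[failure_number(e)] += 1
    return p[::-1]             # highest failure number first

def greedy_placement(succ, events, candidates, rho):
    """O(n^2 * rho) greedy: repeatedly add the candidate u
    that lexicominimizes f(P + {u})."""
    P = set()
    for _ in range(rho):
        best = min((u for u in candidates if u not in P),
                   key=lambda u: failure_aggregate(succ, events,
                                                   P | {u}, rho))
        P.add(best)
    return P
\end{verbatim}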
We introduce notation for the set of nodes on the path from node $u$ to node $v$: $$u \rightsquigarrow v := \{ x \in V \mid x \text{ is on the path from node } u \text{ to node } v \}.$$ \begin{lemma}\label{lem-path-ineq} Let $r$ be the root of a failure model given by a tree, and let $P \subseteq C$ and $a,b \in C - P$. If $f(r\rightsquigarrow a, P) <_L f(r\rightsquigarrow b, P)$ then $f(P \cup \{a\}) <_L f(P \cup \{b\})$. \end{lemma} \begin{proof} Suppose $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$. Let the nodes on the paths from $r$ to $a$ and from $r$ to $b$ be labeled as follows: $$r \rightarrow a_1 \rightarrow a_2 \rightarrow ... \rightarrow a_n \rightarrow a$$ $$r \rightarrow b_1 \rightarrow b_2 \rightarrow ... \rightarrow b_m \rightarrow b$$ We proceed in two cases. In the first case, there is some $1 \leq i \leq \min (m,n)$ for which $f(a_i, P) < f(b_i, P)$. Let $i$ be the minimum such index, and let $f(b_i, P) = k$. Clearly, \mbox{$f(P \cup \{a\})_k < f(P\cup \{b\})_k$}, since $P \cup \{b\}$ counts $b_i$ as having failure number $k$ and $P \cup\{a\}$ does not. Moreover, since $f(a_\ell, P) = f(b_\ell, P)$ for all $\ell < i$, we have that for all $j > k$, $f(P \cup \{a\})_j = f(P \cup \{b\})_j$ by Property \ref{lem-desc}. In the second case, $f(a_i, P) \geq f(b_i, P)$ for all $1 \leq i \leq \min(m,n)$. In this case, if $f(a_i, P) > f(b_i, P)$ for some $i$, the only way we could have $f(r \rightsquigarrow a, P) <_L f(r\rightsquigarrow b, P)$ is if there is some $j > i$ with $f(a_j, P) < f(b_j, P)$, but this is a contradiction. Therefore, $f(a_i,P) = f(b_i,P)$ for all $1 \leq i \leq \min(m,n)$. So, we must also have $n \leq m$, since if $n > m$, we would have \mbox{$f(r \rightsquigarrow a, P) >_L f(r \rightsquigarrow b, P)$}. Moreover, since $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$, we must have that $n < m$, for if $n = m$, we would have $f(r \rightsquigarrow a, P) = f(r \rightsquigarrow b, P)$, a contradiction. We have just shown the existence of some node $b_{n+1}$, for which we must have that \mbox{$f(b_{n+1}, P ) \leq f(a_n, P)$}. Notice that the path $r \rightsquigarrow a$ does not have an $(n+1)^{st}$ node, so it is clear that if $f(b_{n+1}, P) = k$, then $f(P \cup \{a\})_k < f(P \cup \{b\})_k$. Finally, since $n < m$, we have by Property \ref{lem-desc} that $f(a_i, P) \leq f(a_n,P) \leq k$ for all $1 \leq i \leq n$.
By an additional application of Property \ref{lem-desc} it is easy to see that for all $j > k$, we have $f(P \cup \{a\})_j = f(P \cup \{b\})_j$. \qed \end{proof} From Lemma \ref{lem-path-ineq}, we obtain the following result as an easy corollary. \begin{corollary}\label{coro-iff} Let $r$ be the root of a failure model given by a tree, and let $P \subseteq C$ and $a,b \in C - P$. Then $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$ if and only if $f(P \cup \{a\}) \lleq f(P \cup \{b\})$. \end{corollary} \begin{proof} Suppose $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$. If $f(r\rightsquigarrow a, P) = f(r\rightsquigarrow b, P)$, then since the only nodes which change failure number between placements $P$ and $P \cup \{a\}$ are those on the path $r \rightsquigarrow a$, and each of these nodes' failure numbers increases by $1$, we must have that $f(P \cup \{a\}) = f(P \cup \{b\})$, since the sequences of failure numbers along $r \rightsquigarrow a$ and $r \rightsquigarrow b$ are the same. If $f(r \rightsquigarrow a, P) <_L f(r \rightsquigarrow b, P)$ then the corollary follows from Lemma \ref{lem-path-ineq}. If instead $f(P \cup \{a\}) \lleq f(P\cup \{b\})$, and yet $f(r \rightsquigarrow a, P) >_L f(r \rightsquigarrow b, P)$, then by Lemma \ref{lem-path-ineq} we obtain $f(P \cup \{a\}) >_L f(P \cup \{b\})$, a contradiction. \qed \end{proof} Given a node $u$ in a tree, let $L(u)$ be the set of all leaves which are descendants of $u$. \begin{lemma}\label{lem-technical} Let $P \subseteq C$ and $a,b \in C$. Let $c$ be the least common ancestor of $a$ and $b$, and let $d$ be the child of $c$ on the path from $c$ to $a$. If \mbox{$f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$} and $X \subseteq C - \{a,b\}$ satisfies $L(d) \cap X = \emptyset$, then $$f(P \cup X \cup \{a\}) \lleq f(P \cup X \cup \{b\}).$$ \end{lemma} \begin{proof} We have that $f(r\rightsquigarrow a, P) \lleq f(r\rightsquigarrow b, P)$. Consider $f(r\rightsquigarrow a, P \cup X)$ and $f(r\rightsquigarrow b, P \cup X)$. We wish to show that $f(r\rightsquigarrow a, P \cup X) \lleq f(r\rightsquigarrow b, P \cup X)$. Since $c$ is the least common ancestor of $a$ and $b$, it is clear that nodes on $r \rightsquigarrow c$ have equal failure numbers in both cases. Therefore it suffices to show that $f(c\rightsquigarrow a, P \cup X) \lleq f(c\rightsquigarrow b, P \cup X)$. Note that since $L(d) \cap X = \emptyset$, we have that $f(c \rightsquigarrow a, P \cup X) = f(c \rightsquigarrow a, P)$. Moreover, since the addition of nodes in $X$ cannot cause failure numbers on the path $c \rightsquigarrow b$ to decrease, we must have that $f(c \rightsquigarrow b, P) \lleq f(c \rightsquigarrow b, P \cup X)$. Altogether, we have that $$f(c \rightsquigarrow a, P \cup X) = f(c \rightsquigarrow a, P) \lleq f(c \rightsquigarrow b, P) \lleq f(c \rightsquigarrow b, P \cup X).$$ By applying Corollary \ref{coro-iff}, we obtain that $f(P \cup X \cup \{a\}) \lleq f(P \cup X \cup \{b\})$. \qed \end{proof} \begin{figure}[h] \centering \input{proof-figure-thm-1} \caption{Named nodes used in Theorem \ref{thm-greedy}. The arrow labeled ``swap'' illustrates the leaf nodes between which replicas are moved, and is not an edge of the graph.}\label{fig:thm-1} \end{figure} \begin{theorem}\label{thm-greedy} Let $P_i$ be the partial placement from step $i$ of the greedy algorithm. Then there exists an optimal placement $P^\ast$, with $|P^\ast| = \repfact$, such that $P_i \subseteq P^\ast$.
\end{theorem} \begin{proof} The proof proceeds by induction on $i$. $P_0 = \emptyset$ is clearly a subset of any optimal solution. Given $P_i \subseteq P^\ast$ for some optimal solution $P^\ast$, we must show that there is an optimal solution $Q^\ast$ for which $P_{i+1} \subseteq Q^\ast$. Clearly, if $P_{i+1} \subseteq P^\ast$, then we are done, since $P^\ast$ is optimal. In the case where $P_{i+1} \not\subseteq P^\ast$ we must exhibit some optimal solution $Q^\ast$ for which $P_{i+1} \subseteq Q^\ast$. Let $u$ be the leaf which was added to $P_i$ to form $P_{i+1}$. Let $v$ be the leaf in $P^\ast - P_{i+1}$ which has the greatest-depth least common ancestor with $u$, where the depth of a node is given by its distance from the root (see Fig. \ref{fig:thm-1}), and let $a$ denote this least common ancestor. We set $Q^\ast = (P^\ast - \{v\}) \cup \{u\}$, and claim that $\vec{f}(Q^\ast) \lleq \vec{f}(P^\ast)$. Since $\vec{f}(P^\ast)$ is optimal and $P_{i+1} \subseteq Q^\ast$, this will complete our proof. Clearly, $f(a \rightsquigarrow u, P_i) \lleq f(a \rightsquigarrow v, P_i)$, since otherwise $f(r \rightsquigarrow u, P_i) >_L f(r \rightsquigarrow v, P_i)$, implying that $f(P_i \cup \{u\}) >_L f(P_i \cup \{v\})$, contradicting our use of a greedy algorithm. Note that $u,v \notin (P^\ast - P_i - \{v\})$. Moreover, by choice of $v$, we have that $L(a) \cap (P^\ast - P_i - \{v\}) = \emptyset$, since the only nodes from $P^\ast$ in $L(a)$ must also be in $P_i$. To complete the proof, we apply Lemma \ref{lem-technical}, setting $X = P^\ast - P_i - \{v\}$. This choice of $X$ is made so as to yield the following equalities: $$Q^\ast = (P^\ast - \{v\}) \cup \{u\} = P_i \cup (P^\ast - P_i - \{v\}) \cup \{u\}, $$ $$P^\ast = P_i \cup (P^\ast - P_i - \{v\}) \cup \{v\}. $$ By Lemma \ref{lem-technical}, we obtain the inequality in the following formula, $$f(Q^\ast) = f(P_i \cup (P^\ast - P_i - \{v\}) \cup \{u\}) \lleq f(P_i \cup (P^\ast - P_i - \{v\}) \cup \{v\}) = f(P^\ast),$$ thereby completing the proof.\qed \end{proof} \section{Balanced Placements} \begin{figure}[t] \begin{minipage}[t]{0.48\textwidth} \centering \includegraphics[scale=0.09]{balanced-cex} \put(0,58){$~~~2$} \put(0,40){$1,2$} \put(0,23){$1$} \put(0,6){$1,2$} \caption{Round-robin placement cannot guarantee optimality}\label{fig-cex} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \centering \input{proof-figure-thm-2} \caption{Nodes used in Theorem \ref{thm-balanced-sufficiency}.} \label{fig-proof-lca} \end{minipage}\hfill \end{figure} Consider a round-robin placement in which the set of replicas placed at each node is distributed among its children, one replica per child, until all replicas have been placed. This process is then continued recursively at the children. Throughout the process, no child is given more replicas than its subtree has leaf nodes. This method has intuitive appeal, but it does not always compute an optimal placement, as can be seen from Fig. \ref{fig-cex}. Let placements $P_1$ and $P_2$ consist of the nodes labeled by $1$ and $2$ in Fig. \ref{fig-cex} respectively. Note that both outcomes are round-robin placements. A quick computation reveals that $\vec{f}(P_1) = \langle 1, 1, 7, 0 \rangle \neq \langle 1, 3, 3, 2 \rangle = \vec{f}(P_2)$. Since the placements have different failure aggregates, round-robin placement alone cannot guarantee optimality. Key to our algorithm is the observation that any placement which lexicominimizes $\vec{f}(P)$ must be \textit{balanced}.
If we imagine each child $c_i$ of $u$ as a bin of capacity $\ell_i$, balanced nodes are those in which all unfilled children are approximately ``level'', and no child is filled while children of smaller capacity remain unfilled. These ideas are formalized in the following definitions. \begin{definition} Let node $u$ have children indexed $1, ..., k$, and let the subtree rooted at the $i^{th}$ child of node $u$ have $\ell_i$ leaves, and $r_i$ replicas placed on it in placement $P$. A node for which $\ell_i - r_i = 0$ is said to be \emph{filled}. A node for which $\ell_i - r_i > 0$ is said to be \emph{unfilled}. \end{definition} \begin{definition}\label{def-balanced} Node $u$ is said to be \emph{balanced} in placement $P$ iff: $$ \ell_i - r_i > 0 \implies ~\forall\,j \in \{1,...,k\} ~ (r_i \geq r_j - 1 ) .$$ Placement $P$ is said to be \emph{balanced} if all nodes $v \in V$ are balanced. \end{definition} To motivate a proof that lexicominimum placements must be balanced, consider Fig. \ref{fig-balanced-placement}, in which $P_1$ and $P_2$ are sets containing leaf nodes labeled $1$ and $2$ respectively. Fig. \ref{fig-balanced-survivalnums} presents two copies of the same tree, but with failure numbers labeled according to $P_1$ and $P_2$. Upon computing $\vec{f}(P_1)$ and $\vec{f}(P_2)$, we find that $\vec{f}(P_1) = \langle 2, 1, 3, 7 \rangle \lgeq \langle 1, 1, 4, 7 \rangle = \vec{f}(P_2)$. Note that for placement $P_1$, the root of the tree is unbalanced; therefore $P_1$ is unbalanced. Note also that $P_2$ is balanced, since each of its nodes is balanced. We invite the reader to verify that $P_2$ is an optimal solution for this tree. \begin{table}[tb] \begin{minipage}[b]{0.33\textwidth} \centering \includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced} \put(-88, -3){$1$} \put(-71, -3){$1$} \put(-67, 20){$1,2$} \put(-46, 20){$2$} \put(-12, 20){$2$} \vspace{0.25cm} \captionof{figure}{\strut Placements $P_1, P_2$ \label{fig-balanced-placement}} \end{minipage} \begin{minipage}[b]{0.67\textwidth} \centering \includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced} \put(-50.5, 78){$3$} \put(-71.5, 54){$3$} \put(-29.5, 54){$0$} \put(-80, 30.5){$2$} \put(-63, 30.5){$1$} \put(-46, 30.5){$0$} \put(-29, 30.5){$0$} \put(-12, 30.5){$0$} \put(-88.5, 7){$1$} \put(-71.5, 7){$1$} \put(-46, 7){$0$} \put(-29, 7){$0$} \put(-12, 7){$0$} \hspace{0.5cm} \includegraphics[scale=0.075, trim=0 0 0 0, clip=true]{balanced} \put(-50.5, 78){$3$} \put(-71.5, 54){$1$} \put(-29.5, 54){$2$} \put(-80, 30.5){$0$} \put(-63, 30.5){$1$} \put(-46, 30.5){$1$} \put(-29, 30.5){$0$} \put(-12, 30.5){$1$} \put(-88.5, 7){$0$} \put(-71.5, 7){$0$} \put(-46, 7){$0$} \put(-29, 7){$0$} \put(-12, 7){$0$} \vspace{0.25cm} \captionof{figure}{\strut Failure numbers for $P_1$ \textit{(left)} and $P_2$ \textit{(right)}.\label{fig-balanced-survivalnums}} \end{minipage} \end{table} Our main result is that it is \textit{necessary} for an optimal placement to be balanced. However, the balanced property alone is not sufficient to guarantee optimality. To see this, consider the two placements in Fig. \ref{fig-cex}. By definition, both placements are balanced, yet they have different failure aggregates, so balance alone cannot guarantee optimality. Despite this, we can use Theorem \ref{thm-balanced-sufficiency} to justify discarding unbalanced solutions as suboptimal. We exploit this property of optimal placements in our algorithm.
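The balance condition is easy to test at a single node. The following sketch (an illustrative helper; the capacities in the example are chosen only for demonstration) checks Definition \ref{def-balanced} given the capacity $\ell_i$ and replica count $r_i$ of each child.
\begin{verbatim}
def node_is_balanced(capacities, replicas):
    """A node is balanced iff every unfilled child i
    (capacities[i] - replicas[i] > 0) satisfies r_i >= r_j - 1
    for all children j, i.e. r_i >= max(replicas) - 1."""
    r_max = max(replicas)
    return all(r >= r_max - 1
               for cap, r in zip(capacities, replicas)
               if cap - r > 0)

# A root with two children (illustrative capacities):
print(node_is_balanced([5, 5], [3, 0]))  # False, as for P_1 above
print(node_is_balanced([5, 5], [1, 2]))  # True,  as for P_2 above
\end{verbatim}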
\begin{theorem}\label{thm-balanced-sufficiency} Any placement $P$ in which $\vec{f}(P)$ is lexicominimum among all placements for a given tree must be balanced. \end{theorem} \begin{proof} Suppose $P$ is not balanced, yet $\vec{f}(P)$ is lexicominimum among all placements. We proceed to a contradiction, as follows. Let $u$ be an unbalanced node in $T$. Let $v$ be an unfilled child of $u$, and let $w$ be a child of $u$ with at least one replica such that $r_v < r_w - 1$. Since $v$ is unfilled, we can take one of the replicas placed on $w$ and place it on $v$. Let $q_w$ be the leaf node from which this replica is taken, and let $q_v$ be the leaf node on which this replica is placed (see Fig. \ref{fig-proof-lca}). Let \mbox{$P^\ast := (P - \{q_w\}) \cup \{q_v\}$}. We aim to show that $P^\ast$ is strictly better than $P$, contradicting the choice of $P$ as a lexicominimum. Let $\vec{f}(P) := \langle p_\repfact, ..., p_0 \rangle$, and $\vec{f}(P^\ast) := \langle p^\ast_\repfact, ..., p^\ast_0 \rangle$. For convenience, we let \mbox{$f(w, P) = m$}. To show that $\vec{f}(P^\ast) <_L \vec{f}(P)$, we aim to prove that $p^\ast_m < p_m$, and that for any $k$ with $\repfact \geq k > m$, $p^\ast_k = p_k$. We will concentrate on proving the former, and afterwards show that the latter follows easily. To prove $p^\ast_m < p_m$, observe that as a result of the swap, some nodes change failure number. These nodes all lie on the paths $v \rightsquigarrow q_v$ and $w \rightsquigarrow q_w$. Let $S^-$ (resp. $S^+$) be the set of nodes whose failure numbers change from $m$ (resp. change to $m$) as a result of the swap. Formally, we define $$S^- := \{x \in V \mid f(x, P) = m, f(x, P^\ast) \neq m \}, $$ $$S^+ := \{x \in V \mid f(x, P) \neq m , f(x, P^\ast) = m \}.$$ By definition, $p^\ast_m = p_m - |S^-| + |S^+|$. We claim that $|S^-| \geq 1$ and $|S^+| = 0$, which yields $p^\ast_m < p_m$. To show $|S^-| \geq 1$, note that $f(w, P) = m$ by definition, and after the swap, the failure number of $w$ changes. Therefore, $|S^-| \geq 1$. To show $|S^+| = 0$, we must prove that no node whose failure number is affected by the swap has failure number $m$ after the swap has occurred. We will show the stronger result that every such node's failure number must be strictly less than $m$. Let $s_v$ be an arbitrary node on the path $v \rightsquigarrow q_v$, and consider the failure number of $s_v$. As a result of the swap, one more replica is counted as failed in each node on this path, therefore $f(s_v, P^\ast) = f(s_v, P) + 1$. Likewise, let $s_w$ be an arbitrary node on path $w \rightsquigarrow q_w$. One less replica is counted as failed in each node on this path, so $f(s_w, P^\ast) = f(s_w, P) - 1$. We will show that $f(s_w, P^\ast) < m$, and $f(s_v, P^\ast) < m$. First, note that for any $s_w$, by Property \ref{lem-desc}, $f(s_w, P^\ast) \leq f(w, P^\ast) = m-1 < m$. Therefore, $f(s_w, P^\ast) < m$, as desired. To show $f(s_v, P^\ast) < m$, note that by supposition $r_w - 1 > r_v$, and from this we immediately obtain $f(w, P) - 1 > f(v,P)$ by the definition of failure number. Now consider the nodes $s_v$, for which $$f(s_v, P) \leq f(v,P) < f(w,P) - 1 = m - 1,$$ where the first inequality is an application of Property \ref{lem-desc}. Since $f(s_v, P^\ast) = f(s_v, P) + 1$, it follows that $f(s_v, P^\ast) < m$, as desired. Therefore, among all nodes whose failure numbers change as a result of the swap, none has failure number $m$ in $P^\ast$, so $|S^+| = 0$ as claimed.
Moreover, since $f(s, P^\ast) < m$ for any node $s$ whose failure number changes as a result of the swap, we have also proven that $p_k = p^\ast_k$ for all $k$ with $\repfact \geq k > m$. This completes the proof. \qed \end{proof} \section{An $O(n\repfact)$ Algorithm} Our algorithm considers only placements which are balanced. To place $\repfact$ replicas, we start by placing $\repfact$ replicas at the root of the tree, and then proceed to assign these replicas to children of the root. We then recursively carry out the same procedure on each of the children. Before the recursive procedure begins, we obtain the values $\ell_i$ at each node by running breadth-first search as a preprocessing phase. The recursive procedure is then executed in two consecutive phases. During the \textit{divide} phase, the algorithm is tasked with allocating the $r(u)$ replicas placed on node $u$ to the children of $u$. After the divide phase, some child nodes are filled, while others remain unfilled. To achieve balance, each unfilled child $c_i$ will have either $r(c_i)$ or $r(c_i) - 1$ replicas placed upon it. The value of $r(c_i)$ is computed for each $c_i$ as part of the divide phase. The algorithm is then recursively called on each unfilled node to obtain the values of optimal solutions for their subtrees. Nodes which are filled require no further processing. The output of this call is a pair of optimal failure aggregates, one supposing $r(c_i)$ replicas are placed at $c_i$, the other supposing $r(c_i) - 1$ are placed. Given these failure aggregates obtained from each child, the \textit{conquer} phase then chooses whether to place $r(c_i)$ or $r(c_i) - 1$ replicas on each unfilled child so as to achieve a lexicominimum failure aggregate for node $u$ overall. For ease of exposition, we describe an $O(n\repfact)$ version of our algorithm in this section, and prove it correct. In Section \ref{sec-improvements} we then discuss improvements which can be used to obtain an $O(n + \repfact^2)$ algorithm. Finally, we describe some tree transformations which can be used to obtain an $O(n + \repfact \log \repfact)$ algorithm in Section \ref{sec-best}. \subsection{Divide Phase}\label{sec-divide} When node $u$ is first considered, it receives at most two possible values for the number of replicas it could be asked to accommodate. Let these be the values $r(u)$ and $r(u) - 1$. Let $u$ have a list of children indexed $1,2,..., m$, with leaf capacities $\ell_i$ where $1 \leq i \leq m$. The divide phase determines which children will be filled and which will be unfilled. Filled children will have $\ell_i$ replicas placed on them in the optimal solution, while the number of replicas on the unfilled children is determined during the conquer phase. The set of unfilled children can be determined (without sorting) in an iterative manner using an $O(m)$ time algorithm similar to that for the Fractional Knapsack problem. The main idea of the algorithm is as follows: in each iteration, at least one-half of the children whose status is currently unknown are assigned a filled/unfilled status. To determine which half, the median capacity child (with capacity $\ell_{med}$) is found using a linear-time selection algorithm. Based upon the number of replicas that have not been assigned to the filled nodes, either \begin{inparaenum}[a)]\item the set of children $c_i$ with $\ell_i \geq \ell_{med}$ are labeled as ``unfilled'' or \item the set of children $c_i$ with $\ell_i \leq \ell_{med}$ are labeled as ``filled''\end{inparaenum}.
The algorithm recurses on the remaining unlabeled children. Pseudocode for this algorithm can be found in Algorithm \ref{alg-get-filled}. We briefly sketch the correctness of Algorithm \ref{alg-get-filled}. The following invariant holds after every execution of the while loop: \begin{equation*} \max(F)\cdot(|U| + |M|) < r - \sum_{c_i \in F} \ell_i \leq \min(U) \cdot |U| + \sum_{c_i \in M} \ell_i, \end{equation*} where $\max(F)$ and $\min(U)$ denote the largest capacity in $F$ and the smallest capacity in $U$, respectively. When $U = \emptyset$ or $F = \emptyset$ the invariant is not well-defined. These conditions are easy to test for: $U = \emptyset$ if and only if $\sum \ell_i = r(u)$, and $F = \emptyset$ if and only if $\ell_i > \floorfrac{r(u)}{|M|}$ for all $i$. Hence in what follows, we will work only with cases where $U \neq \emptyset$ and $F \neq \emptyset$. At the end of the algorithm, $M = \emptyset$, and the invariant reduces to the following \begin{equation}\label{eqn-invariant-reduced} \max(F) < \frac{r - \sum_{c_i \in F} \ell_i}{|U|} \leq \min(U). \end{equation} Equation \ref{eqn-invariant-reduced} indicates that the average number of replicas placed on the unfilled nodes lies strictly above the largest capacity in $F$ and at most the smallest capacity in $U$. From this, it is easy to see that the labeling is correct. Suppose that some child $c_i \in F$ has been incorrectly classified; then it holds at most $\ell_i - 1$ replicas, and is actually unfilled. Moreover, to attain the average, some unfilled child must be assigned at least $\ceilfrac{r - \sum_{c_i \in F} \ell_i}{|U|}$ replicas. Taking the difference of the number of replicas assigned to these two unfilled nodes, we have \begin{align*} \Big\lceil\dfrac{r - \sum_{c_i \in F}\ell_i}{|U|}\Big\rceil - (\ell_i - 1) ~\geq~ \Big\lceil\dfrac{r - \sum_{c_i \in F}\ell_i}{|U|}\Big\rceil - \max(F) + 1 ~\geq~ 2, \end{align*} where the first step uses $\ell_i \leq \max(F)$ and the second uses Equation \ref{eqn-invariant-reduced}, which forces $\max(F) \leq \ceilfrac{r - \sum_{c_i \in F}\ell_i}{|U|} - 1$. A difference of two or more between two unfilled children is a violation of the balanced placement property. Therefore, all children are correctly classified. This completes the proof sketch. \begin{algorithm}[t] \SetKwProg{Fn}{Function}{begin}{end} \Fn{\getFilled{$M$, $r$}}{ $F \gets \emptyset$ ; $U \gets \emptyset$ \tcp*[r]{$F$ := filled children $U$ := unfilled children} \While{$M \neq \emptyset$}{ $\ell_{med} \gets \text{ median capacity of children in } M $ \; $M_1 \gets \{c_i \in M \mid \ell_i < \ell_{med} \} $ \; $M_2 \gets \{c_i \in M \mid \ell_i = \ell_{med} \} $ \; $M_3 \gets \{c_i \in M \mid \ell_i > \ell_{med} \} $ \; $x \gets r - \sum_{c_i \in F \cup M_1 \cup M_2} \ell_i$ \tcp*[r]{$x$ to be distributed among $M_3 \cup U$} \uIf(\tcp*[f]{$M_1 \cup M_2$ guaranteed filled}){$x \geq \ell_{med} \cdot (|U| + |M_3|)$}{ $F \gets F\cup M_1 \cup M_2$ \; $M \gets M - (M_1 \cup M_2)$ \; }\Else(\tcp*[f]{$M_2 \cup M_3$ guaranteed unfilled}) { $U \gets U\cup M_2 \cup M_3$ \; $M \gets M - (M_2 \cup M_3)$ \; } } \Return{($F$, $U$)} \tcp*[r]{return filled and unfilled children} } \caption{Determines filled and unfilled nodes}\label{alg-get-filled} \end{algorithm} Suppose we know that we only need to find placements of size $r(u)$ and $r(u) - 1$ for node $u$. Moreover, we know that in an optimal placement of size $r(u)$, each child $c_i$ only needs to accommodate either $r(c_i)$ or $r(c_i) - 1$ replicas. Suppose that optimal placements of size $r(c_i)$ and $r(c_i) - 1$ are available at each child $c_i$. Theorem \ref{thm-two-values} shows that these placements are all that is required to compute optimal placements of size $r(u)$ \emph{and also of size} $r(u) - 1$.
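Before stating that theorem, we note that the divide-phase partition of Algorithm \ref{alg-get-filled} can be prototyped compactly as follows; this illustrative rendition substitutes Python's \texttt{statistics.median\_low} for true linear-time selection, so it runs in $O(m \log m)$ rather than $O(m)$, but the classification logic is identical.
\begin{verbatim}
import statistics

def get_filled(children, r):
    """Split children into filled (F) and unfilled (U) when r
    replicas are to be placed. `children` is a list of
    (id, capacity) pairs; capacity = leaves in the subtree."""
    F, U, M = [], [], list(children)
    while M:
        med = statistics.median_low(cap for _, cap in M)
        M1 = [c for c in M if c[1] < med]
        M2 = [c for c in M if c[1] == med]
        M3 = [c for c in M if c[1] > med]
        # Replicas left over if F, M1 and M2 were all filled:
        x = r - sum(cap for _, cap in F + M1 + M2)
        if x >= med * (len(U) + len(M3)):
            F += M1 + M2          # M1 and M2 guaranteed filled
            M = M3
        else:
            U += M2 + M3          # M2 and M3 guaranteed unfilled
            M = M1
    return F, U
\end{verbatim}
For example, with children of capacities $3$, $1$ and $2$ and $r = 4$, the sketch fills the capacity-$1$ child and leaves the other two unfilled, which is the unique balanced classification.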
\begin{theorem}\label{thm-two-values} In any case where $r(u)$ or $r(u) - 1$ replicas must be balanced among $k$ unfilled children, it suffices to consider placing either $\ceilfrac{r(u) - L}{k}$ or $\floorfrac{r(u)- L -1}{k}$ replicas at each unfilled child, where $L$ denotes the total number of replicas assigned to the filled children of $u$ (so that $r(u) - L$ replicas remain for the $k$ unfilled children). \end{theorem} \begin{proof} Let $s := r(u) - L$. Suppose $s \bmod k = 0$. If $s$ replicas are placed at $u$, then all unfilled children receive exactly $\frac{s}{k} ~(= \ceilfrac{s}{k})$ replicas. If $s - 1$ replicas are placed at $u$, one child gets $\frac{s}{k} - 1 = \floorfrac{s - 1}{k}$ replicas. If instead $s \bmod k > 0$, then the average number of replicas on each unfilled child is $\frac{s}{k} \notin \ints$. To attain this average using integer values, values both above and below $\frac{s}{k}$ are needed. However, since the unfilled children must be balanced, whatever values are selected must have absolute difference at most 1. The only two integer values satisfying these requirements are $\ceilfrac{s}{k}$ and $\floorfrac{s}{k}$. But $\floorfrac{s}{k} = \floorfrac{s - 1}{k}$ when \mbox{$s \bmod k > 0$}. \qed \end{proof} \subsection{Conquer Phase}\label{sec-conquer} Once the recursive calls complete, we combine the results from each of the children to achieve the lexicographic minimum overall. Our task in this phase is to select $(r(u) - L) \bmod k$ unfilled children on which $\ceilfrac{r(u)-L}{k}$ replicas will be placed, and to place $\floorfrac{r(u) - L - 1}{k}$ replicas on the remaining unfilled children. We need to do this in such a way that the resulting placement is lexicominimum. Recall also that we must return two values, one for $r(u)$ and another for $r(u) - 1$. We show how to obtain a solution in the $r(u) - 1$ case using a greedy algorithm; a solution for $r(u)$ can easily be obtained thereafter. In this section, when two vectors are compared or summed, we are implicitly making use of an $O(\repfact)$ function for comparing two vectors of length $\repfact + 1$ in lexicographic order. Let $\vec{a}_i$ (respectively $\vec{b}_i$) represent the lexicominimum value of $\vec{f}(P)$ where $P$ is any placement of $\floorfrac{r(u)- L -1}{k}$ (respectively $\ceilfrac{r(u) - L}{k}$) replicas on child $i$. Recall that $\vec{a}_i, \vec{b}_i \in \natnum^{\repfact + 1}$, and are available as the result of the recursive call. We solve the optimization problem by encoding the decision to take $\vec{b}_i$ over $\vec{a}_i$ as a decision variable $x_i \in \{0,1\}$, for which either $x_i = 0$ if $\vec{a}_i$ is selected, or $x_i = 1$ if $\vec{b}_i$ is selected. Since we are solving the $r(u) - 1$ case, one fewer child than in the $r(u)$ case receives the larger count, and the problem can then be described as an assignment of values to $x_i$ according to the following system of constraints, in which all arithmetic operations are performed point-wise. \begin{equation}\label{eqn-greedy-constraints} \min \displaystyle\sum_{i} \vec{a}_i + (\vec{b}_i - \vec{a}_i)x_i, ~~~ \text{subj. to: } \displaystyle\sum_{i}x_i = \big( (r(u) - L) \bmod k \big) - 1. \end{equation} An assignment of $x_i$ which satisfies the requirements in (\ref{eqn-greedy-constraints}) can be found by computing $\vec{b}_i - \vec{a}_i$ for all $i$, and greedily assigning $x_i = 1$ to those $i$ which have the $\big( (r(u) - L) \bmod k \big) - 1$ smallest values of $\vec{b}_i - \vec{a}_i$. This is formally stated as \begin{theorem}\label{thm-greedy-system} Let $\pi := (\pi_1,\pi_2,...,\pi_k)$ be a permutation of $\{1,2,...,k\}$ such that: $$\vec{b}_{\pi_1} - \vec{a}_{\pi_1} \lleq \vec{b}_{\pi_2} - \vec{a}_{\pi_2} \lleq ...
\lleq \vec{b}_{\pi_k} - \vec{a}_{\pi_k}~.$$ If the vector $\vec{x} = \langle x_1, ..., x_k\rangle$ is defined according to the following rule: set $x_{\pi_i} = 1$ iff \mbox{$i < (r(u) - L) \bmod k$}, and $x_{\pi_i} = 0$ otherwise, then $\vec{x}$ is an optimal solution to (\ref{eqn-greedy-constraints}). \end{theorem} The following lemma greatly simplifies the proof of Theorem \ref{thm-greedy-system}. \begin{lemma}\label{lem-logroup} $\langle\ints^n, +\rangle$ forms a linearly-ordered group under $\lleq$. In particular, for any $\vec{x}, \vec{y}, \vec{z} \in \ints^n$, $\vec{x} \lleq \vec{y} \implies \vec{x} + \vec{z} \lleq \vec{y} + \vec{z}$. \end{lemma} A straightforward proof of Lemma \ref{lem-logroup} can be found in the appendix. \begin{proof}[Proof of Theorem \ref{thm-greedy-system}] First, notice that a solution to (\ref{eqn-greedy-constraints}) which minimizes the quantity $\sum_i (\vec{b}_i - \vec{a}_i) x_i$ also minimizes the quantity $\sum_i \vec{a}_i + (\vec{b}_i - \vec{a}_i)x_i.$ It suffices to minimize the former quantity, which can be done by considering only those values of $(\vec{b}_i - \vec{a}_i)$ for which $x_i = 1$. For convenience, we consider $\vec{x}$ to be the characteristic vector of a set $S \subseteq \{1,...,k\}$. We show that no other set $S'$ can yield a characteristic vector $\vec{x}'$ which is strictly better than $\vec{x}$, as follows. Let $\alpha := (r(u) - L) \bmod k$, and let $S := \{\pi_1, ..., \pi_{\alpha - 1} \}$ be the first $\alpha - 1$ entries of $\pi$ taken as a set. Suppose that there is some $S'$ which represents a feasible assignment of variables to $\vec{x}'$ for which $\vec{x}'$ is a strictly better solution than $\vec{x}$; that is, $S' \subseteq \{1, ..., k\}$ with $|S'| = \alpha - 1$ and $S' \neq S$. Since $S' \neq S$ and $|S'| = |S|$, we have that $S - S' \neq \emptyset$ and $S' - S \neq \emptyset$. Let $i \in S-S'$ and $j \in S' - S$. We claim that we can form a placement at least as good, $S^\ast = (S' - \{j\}) \cup \{i\}$. Specifically, \begin{equation}\label{eqn-claim01} \sum_{\ell \in S^\ast} (\vec{b}_\ell - \vec{a}_\ell) \lleq \sum_{m \in S'} (\vec{b}_m - \vec{a}_m)~, \end{equation} which implies that replacing a single element in $S'$ with one from $S$ does not cause the quantity minimized in (\ref{eqn-greedy-constraints}) to increase. To prove (\ref{eqn-claim01}) note that \mbox{$j \notin S$ and $i \in S \implies (\vec{b}_i - \vec{a}_i) \lleq (\vec{b}_j - \vec{a}_j)$.} We now apply Lemma \ref{lem-logroup}, setting $\vec{x} = (\vec{b}_i - \vec{a}_i)$, $\vec{y} = (\vec{b}_j - \vec{a}_j)$, and \mbox{$\vec{z} = \sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell)$}. This yields $$\sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_i - \vec{a}_i) \lleq \sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_j - \vec{a}_j)~.$$ But since $S^\ast - \{i\} = S' - \{j\}$, we have that \begin{equation}\label{eqn-claim02} \sum_{\ell \in (S^\ast - \{i\})} (\vec{b}_\ell - \vec{a}_\ell) + (\vec{b}_i - \vec{a}_i) \lleq \sum_{m \in (S' - \{j\})} (\vec{b}_m - \vec{a}_m) + (\vec{b}_j - \vec{a}_j)~. \end{equation} Clearly, (\ref{eqn-claim02}) $\implies$ (\ref{eqn-claim01}), thereby proving (\ref{eqn-claim01}). This shows that any solution which is not $S$ can be modified to swap in one extra member of $S$ without increasing the quantity minimized in (\ref{eqn-greedy-constraints}). By induction, it is possible to include every element from $S$, until $S$ itself is reached.
Therefore, $\vec{x}$ is an optimal solution to (\ref{eqn-greedy-constraints}). \qed \end{proof} In the algorithm, we find an optimal solution to (\ref{eqn-greedy-constraints}) by assigning $\ceilfrac{r(u) - L - 1}{k}$ replicas to those children where $i$ is such that \mbox{$1 \leq i < (r(u) - L) \bmod k$}, and $\floorfrac{r(u) - L}{k}$ replicas to those remaining. To do this, we find the unfilled child having the $((r(u) - L) \bmod k)^{th}$ smallest value of $\vec{b}_i - \vec{a}_i$ using linear-time selection, and use the partition procedure from quicksort to find those children having values below the selected child. This takes time $O(k\repfact)$ at each node. At the end of the conquer phase, we compute and return the sum\footnote{In the mentioned sum we assume, for notational convenience, that the vectors have been indexed in increasing order of $\vec{b}_i - \vec{a}_i$, although the algorithm performs no such sorting.} \begin{equation}\label{eqn-recursive-sum} \sum_{i \,<\, (r(u) - L )\bmod k}\vec{b}_i + \sum_{i \,\geq\, (r(u) - L) \bmod k} \vec{a}_i + \sum_{j \,:\, \text{filled}} \vec{f}(P_j) + \vec{1}_{r(u)-1}, \end{equation} where $P_j$ is the placement of $\ell_j$ replicas on child $j$ and $\vec{1}_{r(u)-1}$ is a vector of length $\repfact + 1$ having a one in entry $r(u)-1$ and zeroes everywhere else. The term $\vec{1}_{r(u)-1}$ accounts for the failure number of $u$. This sum gives the value of an optimal placement of size $r(u) - 1$. Note there are $k+1$ terms in the sum, each of which is a vector of length at most $\repfact + 1$. Both computing the sum and performing the selection take $O(k\repfact)$ time at each node, yielding $O(n\repfact)$ time overall. We have only focused upon computing the \textit{value} of the optimal solution. The solution itself can be recovered easily by storing the decisions made during the conquer phase at each node, and then combining them to output an optimal placement. \section{An $O(n + \repfact^2)$ Algorithm} \label{sec-improvements} An $O(n + \repfact^2)$ running time can be achieved by an $O(n)$ divide phase, and an $O(\repfact^2)$ conquer phase. The divide phase already takes at most $O(n)$ time overall, so to achieve our goal, we concern ourselves with optimizing the conquer phase. The conquer phase can be improved upon by making two changes. First, we modify the vector representation used for return values. Second, we transform the structure of the tree to avoid pathological cases. In the remainder of the paper, we will use array notation to refer to entries of vectors. For a vector $\vec{v}$, the $k^{th}$ entry of $\vec{v}$ is denoted $\vec{v}[k]$. \subsubsection{Compact Vector Representation} Observe that the maximum failure number returned from child $c_i$ is $r(c_i)$. This along with Property \ref{lem-desc} implies that the vector returned from $c_i$ will have a zero in indices $\repfact, \repfact-1, ..., r(c_i) +1$. To avoid wasting space, we modify the algorithm to return vectors of length only $r(c_i) + 1$. At each node, we then compute (\ref{eqn-recursive-sum}) by summing entries in increasing order of their index. Specifically, to compute $\vec{v}_1 + \vec{v}_2 + ... + \vec{v}_k$, where each vector $\vec{v}_j$ has length $r(c_j) + 1$, we first allocate an empty vector $\vec{w}$, of size $r(u) + 1$, to store the result of the sum. Then, for each vector $\vec{v}_j$, we set $\vec{w}[i] \gets \vec{w}[i] + \vec{v}_j[i]$ for indices $i$ from $0$ up to $r(c_j)$. After all vectors have been processed, $\vec{w} = \vec{v}_1 + ... + \vec{v}_k$.
This algorithm takes \mbox{$r(c_1) + ... + r(c_k) = O(r(u))$} time. Using smaller vectors also implies that the $((r(u) - L) \bmod k)^{th}$ best child is found in $O(r(u))$ time, since each unfilled child returns a vector of size at most $O(\frac{r(u)}{k})$, and there are only $k$ unfilled children to compare. With these modifications the conquer phase takes $O(r(u))$ time at node $u$. \subsubsection{Tree Transformations} Note that for each $i$, nodes at depth $i$ have $O(\repfact)$ replicas placed on them in total. We can therefore achieve an $O(\repfact^2)$ time conquer phase overall by ensuring that the conquer phase only needs to occur in at most $O(\repfact)$ levels of the tree. To do this, we observe that when $r(u) = 1$, any minimum-depth leaf in the subtree rooted at $u$ forms an optimal placement. Recursive calls can therefore be stopped once $r(u) = 1$. To ensure that $r(u) = 1$ after $O(\repfact)$ levels, we contract paths on which all nodes have degree two into a single pseudonode during the preprocessing phase. The length of this contracted path is stored in the pseudonode, and is accounted for when computing the sum. This suffices to ensure $r(u)$ decreases by at least one at each level, yielding an $O(n + \repfact^2)$ algorithm. \section{An $O(n + \repfact \log \repfact)$ Algorithm} \label{sec-best} In this section, we extend ideas about tree transformation from the last section to develop an algorithm in which the conquer phase only needs to occur in at most $O(\log \repfact)$ levels. We achieve this by refining the tree transformations described in Section \ref{sec-improvements}. To ensure that there are only $O(\log \repfact)$ levels in the tree, we transform the tree so as to guarantee that as the conquer phase proceeds down the tree, $r(u)$ decreases by at least a factor of two at each level. This happens automatically when there are two or more unfilled children at each node, since to balance the unfilled children, at most $\ceilfrac{r(u) - L}{2}$ replicas will be placed on each of them. Problems can therefore only arise when a tree has a path of nodes each of which has a single, unfilled child. We call such a path a \textit{degenerate chain}. By detecting and contracting all such degenerate chains, we can achieve an $O(\repfact \log \repfact)$ conquer phase. Fig. \ref{fig-degenerate-unfilled-case} illustrates a degenerate chain. In this figure, each $T_i$ with $1 \leq i \leq t - 1$ is the set of all descendant nodes of $v_i$ which are filled. Thus, $v_1, ..., v_{t-1}$ each have only a single unfilled child (since each $v_i$ has $v_{i+1}$ as a child). In contrast, node $v_t$ has at least two unfilled children. It is easy to see that if the number of leaves in each $T_i$ is $O(1)$ then $t$, the length of the chain, can be as large as $O(\repfact)$. This would imply that there can be $O(\repfact)$ levels in the tree where the entire conquer phase is required. To remove degenerate chains, we contract nodes $v_1, ..., v_{t-1}$ into a single pseudonode $w$, as in Fig. \ref{fig-contracted-nodes}. However, we must take care to ensure that the pair of vectors which pseudonode $w$ returns takes into account contributions from the entire contracted structure. We will continue to use $v_i$ and $T_i$ throughout the remainder of this section to refer to nodes in a degenerate chain. To find and contract degenerate chains, we add an additional phase, the \textit{transform} phase, which takes place between the divide and conquer phases.
Recall that after the divide phase, the sets of filled and unfilled children are available at each node. Finding nodes in a degenerate chain is therefore easily done via a breadth-first search. We next consider what information must be stored in the pseudonode to ensure that correct results are maintained. \begin{figure} \begin{subfigure}{0.60\textwidth} \centering \input{degenerate-case} \caption{A degenerate chain.}\label{fig-degenerate-unfilled-case} \end{subfigure} \begin{subfigure}{0.35\textwidth} \centering \raisebox{1cm}{ \input{contracted-nodes} } \caption{Contracted pseudonode.}\label{fig-contracted-nodes} \end{subfigure} \caption{Illustration of a degenerate chain in which each $v_i$ where $1 \leq i \leq t-1$ represents a node which has a single unfilled child. All filled descendants of node $v_i$ are collectively represented as $T_i$. In the figure on the right, nodes $v_1, ..., v_{t-1}$ have been contracted into pseudonode $w$.} \end{figure} Let $(\vec{a_w}, \vec{b_w})$ be the pair of values which will be returned by pseudonode $w$ at the end of the conquer phase. In order for the transformation to be correct, the vectors $(\vec{a_w}, \vec{b_w})$ must be the same as those which would have been returned at node $v_1$ had no transformation occurred. To ensure this, we must consider and include the contribution of each node in the set \mbox{$T_1 \cup ... \cup T_{t-1} \cup \{v_1, ..., v_{t-1}\}$}. It is easy to see that the failure numbers of nodes in $\{v_1, ..., v_{t-1}\}$ depend only upon whether $r(v_t)$ or $r(v_t) - 1$ replicas are placed on node $v_t$, while the filled nodes in sets $T_1, ..., T_{t-1}$ have no such dependency. Observe that if $r(v_t)$ replicas are placed on $v_t$, then $r(v_i)$ replicas are placed at each node $v_i$. If instead $r(v_t) - 1$ replicas are placed, then $r(v_i) - 1$ replicas are placed at each $v_i$. Since values of $r(v_i)$ are available at each node after the divide phase, enough information is present to contract the degenerate chain before the conquer phase is performed. The remainder of this section focuses on the technical details needed to support our claim that the transform phase can be implemented in time \mbox{$O(n + \repfact \log \repfact)$} overall. Let $S_w := T_1 \cup ... \cup T_{t-1} \cup \{v_1, ..., v_{t-1}\}$, and let the contribution of nodes in $S_w$ to $\vec{a_w}$ and $\vec{b_w}$ be given by vectors $\vec{a}$ and $\vec{b}$ respectively. The transform phase is then tasked with computing $\vec{a}$ and $\vec{b}$, and contracting the degenerate chain. We will show that this can be done in time $O(|S_w| + r(v_1))$ for each pseudonode $w$. Pseudocode for the transform phase is given in Algorithm \ref{alg-transf}. The transform phase is started at the root of the tree by invoking \transf{$root,false, \repfact$}. \transf is a modified recursive tree traversal. As the recursion proceeds down the tree, each node is tested to see if it is part of a degenerate chain (lines \ref{line-bottom} and \ref{line-test2}). If a node is not part of a degenerate chain, the call continues on all unfilled children (line \ref{line-pass-on}). The first node ($v_1$) in a degenerate chain is marked by passing down $chain \gets true$ at lines \ref{line-mark-ru} and \ref{line-mark-rv1}. The value of $r(v_1)$ is also passed down to the bottom of the chain at lines \ref{line-mark-ru} and \ref{line-mark-rv1}.
Once the bottom of the chain (node $v_t$) has been reached, the algorithm allocates memory for three vectors, $\vec{a}, \vec{b}$ and $\vec{f}$, each of size $r(v_1)+1$ (line \ref{line-alloc}). These vectors are then passed up through the entire degenerate chain (line \ref{line-return}), along with node $u$, whose use will be explained later. When a node $u$ in a degenerate chain receives $\vec{a}, \vec{b}$, and $\vec{f}$, $u$ adds its contribution to each vector (lines \ref{line-contribstart}-\ref{line-contribend}). The contribution of node $u$ consists of two parts. First, the contribution of the filled nodes is added to $\vec{f}$ by invoking a special \filled subroutine (see Algorithm \ref{alg-filled}) which computes the sum of the failure aggregates of each filled child of $u$ (lines \ref{line-contribstart}-\ref{line-filledend}). Note that \filled uses pass-by-reference semantics when passing in the value of $\vec{f}$. The contribution of node $u$ itself is then added, by summing the number of leaves in all of the filled children, and the number of replicas on the single unfilled child, $v$ (lines \ref{line-ustart}-\ref{line-contribend}). By the time the recursion reaches the start of the chain on the way back up (line \ref{line-chainback-to-start}), all nodes have added their contribution, and the pseudonode is created and returned (line \ref{line-pseudonodecreate}). \begin{algorithm}[t] \SetKwFunction{transf}{Transform}\SetKwFunction{mkPseudo}{Make-Pseudonode} \SetKwProg{Fn}{Function}{begin}{end} \Fn{\transf{$u, chain, r(v_1)$}}{ \If{$u$ has two or more unfilled children}{ \label{line-bottom} \ForEach{child $c_i$ unfilled}{ \label{line-pass-on}$(-, -, -, x) \gets $\transf{$c_i, false, \bot$} \; $c_i \gets x$ \label{line-update}\; } \lIf{$chain = false$} { \Return{$(\bot, \bot, \bot, u)$} \label{line-vt}} \lElse(\tcp*[f]{$3\cdot O(r(v_1))$ time}){ \Return{$(\vec{0}_{r(v_1) + 1}, \vec{0}_{r(v_1)+1}, \vec{0}_{r(v_1)+1}, u)$} \label{line-alloc} } } \If{$u$ has one unfilled child, $v$}{ \label{line-test2} \If{$chain = false$} { \tcp{pass $r(v)$ as max vector length}$(\vec{a},\vec{b},\vec{f},x) \gets$ \transf{$v, true, r(v)$} \label{line-mark-ru}\; }\Else{ $(\vec{a},\vec{b},\vec{f},x) \gets$ \transf{$v, true, r(v_1)$} \label{line-mark-rv1}\; } \ForEach{filled child $c_i$} { \label{line-contribstart} \filled{$c_i, \vec{f}$} \tcp*[r]{$O(n_i)$ time} \label{line-filledend} } $k \gets \sum_i \ell_i + r(v) - 1$\label{line-ustart}\; $\vec{a}[k+1] \gets \vec{a}[k+1] + 1$\; $\vec{b}[k] \gets \vec{b}[k] + 1$\label{line-contribend}\; \If{$chain = false$}{ \label{line-chainback-to-start} $x \gets $ \mkPseudo{$\vec{a}, \vec{b}, \vec{f}, x$} \label{line-pseudonodecreate} } \Return{$(\vec{a},\vec{b},\vec{f},x)$}\label{line-return} } } \caption{Transform phase}\label{alg-transf} \end{algorithm} The transformation takes place as the calls to \transf return back up the tree. At the end of the degenerate chain, node $v_t$ is returned (lines \ref{line-vt}-\ref{line-alloc}), and this value is passed along the length of the entire chain (line \ref{line-return}), until reaching the beginning of the chain, where the pseudonode is created and returned (line \ref{line-pseudonodecreate}). When the beginning of the chain is reached, the parent of $v_1$ updates its reference (line \ref{line-update}) to refer to the newly created pseudonode.
At line \ref{line-update} note that if $c_i$ was \textit{not} the beginning of a degenerate chain, $x = c_i$ and the assignment has no effect (see lines \ref{line-vt}-\ref{line-alloc}). We provide pseudocode for the \filled and \mkPseudo subroutines in Algorithms \ref{alg-filled} and \ref{alg-mkpseudo}. The \mkPseudo subroutine runs in $O(1)$ time. It is easy to see that the \filled routine runs in $O(n_i)$ time, where $n_i$ is the number of nodes in the subtree rooted at child $c_i$. The \transf routine therefore takes $O(|T_i|)$ time to process a single node $v_i$. The time needed for \transf to process an entire degenerate chain is therefore $O(|S_w|) + 3\cdot O(r(v_1))$, where the $3\cdot O(r(v_1))$ term arises from allocating memory for vectors $\vec{a}$, $\vec{b}$ and $\vec{f}$ at the last node of the chain. When we sum this time over all degenerate chains, we obtain a running time of $O(n + \repfact \log \repfact)$ for the transform phase. To reach this result, we examine the sum of $r(v_1)$ for all pseudonodes at level $i$. Since there are at most $\repfact$ replicas at each level $i$, this sum can be at most $O(\repfact)$ at any level. There are only $O(\log \repfact)$ levels where $r(u) > 1$ after degenerate chains have been contracted; thus, pseudonodes can only be present in the first $O(\log \repfact)$ levels of the tree. Therefore the $3\cdot O(r(v_1))$ term sums to $O(\repfact \log \repfact)$ overall. Since $|S_w|$ clearly sums to $O(n)$ overall, the transform phase takes at most $O(n + \repfact \log \repfact)$ time. Finally, after the transformation has completed, we can ensure that the value of $r(u)$ decreases by at least a factor of two at each level. This implies that there are only $O(\log \repfact)$ levels where the conquer phase needs to be run in its entirety. Therefore, the conquer phase takes $O(\repfact \log \repfact)$ time overall. When combined with the $O(n)$ divide phase and the $O(n + \repfact \log \repfact)$ transform phase, this yields an $O(n + \repfact \log \repfact)$ algorithm for solving replica placement in a tree. \begin{algorithm}[t] \SetKwProg{Fn}{Function}{begin}{end} \Fn{\filled{$u$, $\vec{f}$}}{ \eIf{$u$ is a leaf}{ $\vec{f}[0] \gets \vec{f}[0] + 1$\; \Return \; }{ \ForEach{child $c_i$}{ \filled{$c_i, \vec{f}$} } $a \gets \sum_i \ell_i$ \; $\vec{f}[a] \gets \vec{f}[a] + 1$\; \Return \; } } \caption{Computes failure aggregate of filled nodes}\label{alg-filled} \end{algorithm} \begin{algorithm}[t] \SetKwProg{Fn}{Function}{begin}{end} \Fn{\mkPseudo{$\vec{a}$, $\vec{b}$, $\vec{f}$, $x$}}{ allocate a new node $node$\; $node.\vec{a} \gets \vec{a} + \vec{f}$\; $node.\vec{b} \gets \vec{b} + \vec{f}$\; $node.child \gets x$\; \Return $node$ } \caption{Creates and returns a new pseudonode}\label{alg-mkpseudo} \end{algorithm} \section{Conclusion}\label{sec-conclusion} In this paper, we formulate the replica placement problem and show that it can be solved by a greedy algorithm in $O(n^2 \repfact)$ time. In search of a faster algorithm, we prove that any optimal placement in a tree must be balanced. We then exploit this property to give an $O(n\repfact)$ algorithm for finding such an optimal placement. The running time of this algorithm is then improved, yielding an $O(n + \repfact \log \repfact)$ algorithm. An interesting next step would consist of proving a lower bound for this problem, and seeing how our algorithm compares. In future work we plan to consider replica placement on additional classes of graphs, such as special cases of bipartite graphs.
We would like to acknowledge insightful comments from S. Venkatesan and Balaji Raghavachari during meetings about results contained in this paper, as well as comments from Conner Davis on a draft version of this paper.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{secI} The existence of molecules made of heavy baryons is a hot topic in present-day hadronic physics~\cite{Lee11,Meg11,Liz12,Oka13,Hua14,Men17,Yan20}. The observation in 2017 by the LHCb Collaboration of a doubly charmed baryon~\cite{Aai17} increased the interest in exotic states containing pairs of charmed quarks. Very recently, the LHCb Collaboration has reported two structures matching the lineshape of a resonance just above twice the $J/\Psi$ mass, which could originate from a hadron containing two charm quarks~\cite{Aai20}. Although the existence of exotic structures containing pairs of heavy quarks is a long-term challenge~\cite{Ade82}, it has recently been noticed, for example in Refs.~\cite{Kar17,Eic17,Her20}, that doubly charmed tetraquarks are the first such states to be at the edge of binding. On general grounds, the main motivation to wonder about the existence of heavy-baryon molecules is rooted in the reduction of the kinetic energy due to larger reduced masses. However, such molecular states could result from the interplay of several effects and not just from a sufficiently attractive interaction. The coupling between nearby channels, conflicts between different terms of the interaction, and non-central forces often play a significant role. Some of these contributions may be reinforced by the presence of heavy quarks while others may become weaker~\cite{Jun19,Ric20}. Behind all of this lies the understanding of the hadron-hadron interaction in terms of the underlying quark and gluon dynamics, which is a topical issue. To encourage new experiments and analyses of existing data, it is essential to have detailed theoretical investigations. Despite some uncertainty in contemporary interaction models, the possible existence of bound states or resonances is a key element, because their signals might be clear enough to be identified in the experimental data~\cite{Aai20}. Thus, it is the purpose of this work to study the possible existence of hadronic molecules or resonances in two- and three-baryon systems with two units of charm, in particular, $\Lambda_c\Lambda_c$, $\Sigma_c\Sigma_c$, $N\Lambda_c\Lambda_c$ and $N\Sigma_c\Sigma_c$ states. When tackling this problem, one has to contend with an important difficulty, namely the complete lack of experimental data. Therefore, the generalization of models describing two-hadron interactions in the light-flavor sector could offer insight into the unknown interaction of hadrons with heavy flavors. Following these ideas, we will make use of a constituent quark model (CQM) tuned on the description of the $NN$ interaction~\cite{Val05} as well as the meson~\cite{Vij05} and baryon~\cite{Vag05,Val08} spectra in all flavor sectors, to obtain parameter-free predictions that will hopefully be testable in future experiments. Let us note that the study of the interaction between charmed baryons has become an interesting subject in several contexts~\cite{Aai20,Wie11,Hoh11,Nou17,Fuj17} and it may shed light on the possible existence of exotic nuclei with heavy flavors~\cite{Dov77,Gar15,Mae16,Hos17,Kre18,Miy18}. The paper is organized as follows. In Sec.~\ref{secII} we describe and analyze particular aspects of the $S$ wave two-body subsystems: $N\Lambda_c$, $N\Sigma_c$, $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$. Section~\ref{secIII} is devoted to the study of the lightest $N\Lambda_c\Lambda_c$ and $N\Sigma_c\Sigma_c$ three-body systems. Finally, in Sec.~\ref{secIV} we summarize our main conclusions.
\section{Two-baryon systems} \label{secII} The two-body interactions that are necessary to study the charm $+2$ two- and three-baryon systems have been discussed at length in the literature~\cite{Gar19,Car15}. They are derived from the CQM~\cite{Val05,Vij05,Vag05,Val08}. The capability of the model is endorsed by its nice description of the $NN$ phase shifts, as can be seen in Figs. 2, 3 and 4 of Ref.~\cite{Gar99}. The $N\Lambda_c$ and $N\Sigma_c$ interactions have been presented and discussed in detail in Ref.~\cite{Gar19}, in comparison with the other approaches available in the literature, in particular recent lattice QCD studies~\cite{Miy18}. The $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$ interactions have been consistently derived within the CQM in Ref.~\cite{Car15}, also in comparison with the alternative approaches available in the literature. We refer the reader to Refs.~\cite{Gar19,Car15} for a thorough description of the derivation and analysis of the two-body interactions. As can be seen in Table 1 of Ref.~\cite{Gar19} and Table II of Ref.~\cite{Car15}, all two-body interactions are consistently derived with the same set of parameters. In the following we highlight some peculiarities of the two-body interactions that are relevant to the purpose of the present work. We summarize in Table~\ref{tab1} the low-energy parameters of the two-body systems in the charm $+1$ and $+2$ sectors. The scattering length becomes complex for those two-body channels with open lower-mass two-body states. The two-body interactions are in general attractive but not strong enough to produce bound states, in agreement with lattice QCD estimates~\cite{Miy18}. The singlet isospin $1/2$ and triplet isospin $3/2$ $\Sigma_c N$ interactions are the only repulsive ones. The last line of Table~\ref{tab1} presents the results for the uncoupled $\Sigma_c\Sigma_c$ isosinglet system~\footnote{Note that all other $\Sigma_c\Sigma_c$ $S$ wave states are clearly unbound, see Fig. 6 of Ref.~\cite{Car15}.}. It can be seen how the scattering length is positive and larger than the range of the interaction, see Fig.~\ref{fig1}, pointing to the existence of a bound state that will be discussed further below. \begin{table}[t] \caption{CQM results for the $^1S_0$ and $^3S_1$ scattering lengths ($\mathrm{a}_s$ and $\mathrm{a}_t$) and effective range parameters ($r_s$ and $r_t$), in fm, for the different $S$ wave $Y_c N$ and $Y_cY_c$ systems ($Y_c = \Lambda_c$ or $\Sigma_c$). The results shown in the last line, marked by a $\dagger$, correspond to the uncoupled $\Sigma_c\Sigma_c$ system.} \begin{tabular}{cp{0.5cm}cp{1cm}cp{0.35cm}cp{1cm}cp{0.35cm}c} \hline\hline $I$ && System && $\mathrm{a}_s$ && $r_s$ && $\mathrm{a}_t$ && $r_t$ \\ \hline \multirow{2}{*}{$1/2$} && $\Lambda_c N$ && $-$0.86 && 5.64 && $-$2.31 && 2.97 \\ && $\Sigma_c N$ && $0.74 - i\,0.18$ && $-$ && $-5.21 - i\,1.96$ && $-$ \\ $3/2$ && $\Sigma_c N$ && $-$1.25 && 8.70 && 0.95 && 4.89 \\ \multirow{3}{*}{$0$} && $\Lambda_c \Lambda_c$ && $-$6.45 && 2.29 && $-$ && $-$ \\ && $\Sigma_c \Sigma_c$ && $-0.014 + i\,0.26$ && $-$ && $-$ && $-$ \\ && $\Sigma_c \Sigma_c$$^\dagger$ && 1.79 && 0.44 && $-$ && $-$ \\ \hline \end{tabular} \label{tab1} \end{table} \begin{figure}[b] \vspace*{-0.5cm} \includegraphics[width=.6\columnwidth]{Fig1.eps} \vspace*{-6.cm} \caption{Charm $+2$ $(I,J^P)=(0,0^+)$ two-body interactions.} \label{fig1} \end{figure} Of particular interest are the results for the lightest charm $+2$ channel, with quantum numbers $(I,J^P)=(0,0^+)$.
We show in Fig.~\ref{fig1} the two-body potentials involved in this channel. The $\Lambda_c\Lambda_c$ interaction is slightly attractive at intermediate distances but repulsive at short range. It is decoupled from the closest two-baryon threshold, the $N\Xi_{cc}$ state~\cite{Car15}, which is relevant for the possible existence of a resonance in the strange sector~\cite{Sas20,Gar20}. There is general agreement on the overall attractive character of the $\Sigma_c\Sigma_c$ interaction~\cite{Hua14,Lee11,Meg11}. Finally, the CQM coupling between the $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$ channels is a bit stronger than in hadronic theories based solely on a one-pion exchange potential~\cite{Meg11}, due to quark-exchange effects~\cite{Car15}. All of this fits the scenario of the strange sector, as can be seen by comparing with Fig. 1(b) of Ref.~\cite{Car12}, but the absence of the one-kaon exchange potential gives rise to a less attractive interaction. \begin{figure}[t] \vspace*{-1.0cm} \includegraphics[width=.6\columnwidth]{Fig2.eps} \vspace*{-6.cm} \caption{Fredholm determinant of the two-body $(I,J^P)=(0,0^+)$ charm $+2$ channel. The dashed line corresponds to the $\Lambda_c\Lambda_c - \Sigma_c\Sigma_c$ coupled system, whereas the solid line considers only the $\Sigma_c\Sigma_c$ channel. The zero energy represents the mass of the lowest threshold, $2m_{\Lambda_c}$ for the dashed line and $2m_{\Sigma_c}$ for the solid line.} \label{fig2} \end{figure} In Fig.~\ref{fig2} we present the Fredholm determinant~\cite{Gar87} for the two-body $(I,J^P)=(0,0^+)$ charm $+2$ channel in two different cases. The dashed line corresponds to the result considering the full coupling between the $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$ states, whereas the solid line considers only the $\Sigma_c\Sigma_c$ channel. The coupled-channel calculation shows an attractive character, but the attraction is not sufficient to generate a bound state: the Fredholm determinant does not become negative for energies below threshold. This result is in agreement with other estimates in the literature~\cite{Hua14,Lee11,Meg11}, in which, in spite of the attractive character of the $\Lambda_c\Lambda_c$ interaction, the central potential alone is not enough to generate a bound state. The coupling to higher-mass channels could be important for the existence of a bound state or a resonance. However, due to the large mass difference between the two coupled channels in the $(I,J^P)=(0,0^+)$ partial wave, 338 MeV, the coupled-channel effect is weakened. Let us just note that, for example, in the strange sector the coupling to the $N\Sigma$ state is relevant for the $N\Lambda$ system~\cite{Gar07} due to a smaller mass difference, $M(\Sigma) - M(\Lambda)=$ 77 MeV. Heavier channels, such as the $\Delta\Delta$ channel (584 MeV above the $NN$ threshold) in the $NN$ system~\cite{Val95}, play a minor role. Thus, one does not expect higher channels, such as $\Sigma_c^*\Sigma_c^*$ (468 MeV above the $\Lambda_c\Lambda_c$ threshold), to play a relevant role, as has been explicitly checked in the literature~\cite{Hua14}. Due to the large mass difference between the $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$ channels, we have studied the uncoupled $\Sigma_c\Sigma_c$ system. The dynamics could be dominated by the attraction in the $\Sigma_c\Sigma_c$ channel in such a way that the $\Lambda_c\Lambda_c$ channel would mainly serve as a detection channel.
This mechanism is somewhat related to the `synchronization of resonances' proposed by D.~Bugg~\cite{Bug08}. A similar situation could be that of the $d^*(2380)$ resonance in the $\Delta\Delta$ system; see Ref.~\cite{Don18} for a recent review. The result is depicted in Fig.~\ref{fig2} by the solid line, showing a bound state 16.2 MeV below the $\Sigma_c\Sigma_c$ threshold, corresponding to the scattering length given in the last line of Table~\ref{tab1}. However, since the $\Lambda_c\Lambda_c$ channel is open, the $\Sigma_c\Sigma_c$ state would decay showing a resonance behavior. This scenario, a two-body coupled-channel problem showing a bound state in the upper channel but not in the coupled-channel calculation, has been studied in detail in Ref.~\cite{Gar18}. It was demonstrated that the width of the resonance is not determined solely by the available phase space for its decay to the detection channel, but depends greatly on the relative position of the mass of the resonance with respect to the masses of the coupled channels generating the state.~\footnote{The equivalence of the results obtained using a two-cluster interaction or a variational approach for the multiquark problem has been recently shown, see for example Ref.~\cite{Car19}. Dealing with resonances, the two-cluster interaction allows one to look for the poles of the propagator without resorting to numerical extensions of the variational approach, like the complex scaling method, that would just give an indication about the possible existence of a resonance.} Thus, making use of the interactions given in Fig.~\ref{fig1}, we have studied the width of the resonance produced in between the two thresholds, $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$. The Lippmann-Schwinger equation in the case of $S$-wave interactions is written as, \begin{eqnarray} t^{ij}(p,p';E) &=& V^{ij}(p,p')+\sum_{k=1,2} \int_0^\infty {p^{\prime\prime}}^2 dp^{\prime\prime} \nonumber \\ & \times & \!\! V^{ik}(p,p^{\prime\prime})\, \frac{1}{E-\Delta E \,\, \delta_{2,k}-{p^{\prime\prime}}^2/2\mu_k+i\epsilon}\, t^{kj}(p^{\prime\prime},p';E)\, , \,\,\,\,\,\,\,\, i,j =1,2, \label{eq1} \end{eqnarray} where $\mu_1=m_{\Lambda_c}/2$, $\mu_2=m_{\Sigma_c}/2$, and $\Delta E=2 m_{\Sigma_c} - 2 m_{\Lambda_c}$. The interactions in momentum space are given by, \begin{equation} V^{ij}(p,p')=\frac{2}{\pi}\int_0^\infty r^2dr\; j_0(pr)V^{ij}(r)j_0(p'r) \, , \label{eq2} \end{equation} where $V^{ij}(r)$ are the two-body potentials in Fig.~\ref{fig1}. The resonance exists at an energy $E=E_R$ such that the phase shift $\delta(E_R)=90^\circ$, for energies between the $\Lambda_c\Lambda_c$ and $\Sigma_c\Sigma_c$ thresholds, i.e., $0 < E_R < \Delta E$. The mass of the resonance is given by $W_R=E_R + 2 m_{\Lambda_c}$. The width of the resonance is calculated using the Breit-Wigner formula as~\cite{Bre36,Cec14,Cec08}, \begin{equation} \Gamma (E) =\lim\limits_{E \to E_R}\, \frac{2(E_R-E)}{\cot[\delta(E)]} \, . \label{eq4} \end{equation} Although the Breit-Wigner formula is not very accurate close to threshold, we have explicitly checked by analytic continuation of the $S$ matrix on the second Riemann sheet that at low energy the width follows the expected $\Gamma \sim E^{1/2}$ behavior. Using the formalism described above, we have calculated the width of the $\Sigma_c\Sigma_c$ state. We found a resonance 331.8 MeV above the $\Lambda_c\Lambda_c$ threshold, 6.2 MeV below the $\Sigma_c\Sigma_c$ threshold, with a width of 4.7 MeV.
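To make the two ingredients of this procedure concrete, the following minimal Python sketch (our own illustration, not the code used to obtain the numbers quoted above) implements the Fourier--Bessel transform of Eq.~(\ref{eq2}) and the width extraction of Eq.~(\ref{eq4}); the Gaussian potential is a hypothetical stand-in for the CQM potentials $V^{ij}(r)$ of Fig.~\ref{fig1}. Note that expanding $\cot[\delta(E)]$ around $\delta(E_R)=90^\circ$ reduces Eq.~(\ref{eq4}) to $\Gamma = 2/\delta'(E_R)$, which is what the code evaluates.

\begin{verbatim}
import numpy as np

def V_momentum(p, pp, V_r, r_max=10.0, n=4000):
    # Eq. (2): V(p,p') = (2/pi) Int_0^inf r^2 j0(p r) V(r) j0(p' r) dr,
    # with j0(x) = sin(x)/x; the integral is truncated at r_max (fm).
    r = np.linspace(1e-6, r_max, n)
    j0 = lambda x: np.sin(x) / x
    f = r**2 * j0(p * r) * V_r(r) * j0(pp * r)
    return (2.0 / np.pi) * np.sum(f) * (r[1] - r[0])

def width_from_phase_shift(E, delta):
    # Eq. (4): expanding cot[delta(E)] around delta(E_R) = 90 deg
    # gives Gamma = 2 / (d delta/dE) evaluated at E = E_R.
    i = np.argmin(np.abs(delta - np.pi / 2))
    return 2.0 / np.gradient(delta, E)[i]

V_gauss = lambda r: -50.0 * np.exp(-r**2)   # hypothetical strength (MeV)
print(V_momentum(1.0, 1.5, V_gauss))        # p, p' in fm^-1

E = np.linspace(0.1, 20.0, 400)             # energies above threshold (MeV)
delta = np.arctan2(2.0, 10.0 - E)           # toy resonance: E_R = 10, Gamma = 4
print(width_from_phase_shift(E, delta))     # recovers Gamma of about 4 MeV
\end{verbatim}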
As a consequence of the coupling to the lower $\Lambda_c\Lambda_c$ channel, the pole approaches the threshold, moving from $-16.2$ MeV on the real axis into the complex plane, $(-6.2 - i \, 4.7/2)$ MeV. The mechanism we have discussed helps in understanding the narrow width of experimental resonances found in the heavy hadron spectra with a large phase space in the decay channel. The observation of a small width for the decay to a low-lying channel could thus point to a dominant contribution of some upper channel to the formation of the resonance. \section{The three-baryon system} \label{secIII} The $\Lambda_c\Lambda_c - \Sigma_c\Sigma_c$ system in a pure $S$ wave configuration has quantum numbers $(I,J^P)=(0,0^+)$, so that, adding one more nucleon, the $N\Lambda_c\Lambda_c$ system necessarily has quantum numbers $(I,J^P)=(1/2,1/2^+)$. It is coupled to the $N\Sigma_c\Sigma_c$ channel. A detailed description of the Faddeev equations of the three-body system can be found in Ref.~\cite{Gar14}, where it is explained how to deal with coupled channels containing identical particles of various types in the upper and lower channels. We show in Table~\ref{tab2} the different two-body channels that contribute to the $N\Lambda_c\Lambda_c - N\Sigma_c\Sigma_c$ $(I,J^P)=(\frac{1}{2},\frac{1}{2}^+)$ three-body system. Notice that the charm $+2$ $S$ wave channels $\Lambda_c\Sigma_c$ and $\Sigma_c\Sigma_c$ with isospin 1 are not considered since they do not couple to the isosinglet $\Lambda_c\Lambda_c$ two-body subsystem. Therefore, the Faddeev equations of the $(I,J^P)=(1/2,1/2^+)$ three-body system are of the form, \begin{eqnarray} T_{N\Lambda_c}^{\Lambda_c} = && -t_{N\Lambda_c}^{\Lambda_c} G_0 T_{N\Lambda_c}^{\Lambda_c} +2 \, t_{N\Lambda_c}^{\Lambda_c} G_0T_{\Lambda_c\Lambda_c}^N + t_{N\Lambda_c-N\Sigma_c}^{\Lambda_c} G_0T_{N\Lambda_c}^{\Sigma_c} \nonumber \\ T_{N\Lambda_c}^{\Sigma_c} = && t_{N\Lambda_c}^{\Sigma_c} G_0T_{N\Sigma_c}^{\Lambda_c} +2 \, t_{N\Lambda_c-N\Sigma_c}^{\Sigma_c} G_0T_{\Sigma_c\Sigma_c}^N + t_{N\Lambda_c-N\Sigma_c}^{\Sigma_c} G_0T_{N\Sigma_c}^{\Sigma_c} \nonumber \\ T_{N\Sigma_c}^{\Lambda_c} = && t_{N\Sigma_c}^{\Lambda_c} G_0T_{N\Lambda_c}^{\Sigma_c} +2 \, t_{N\Sigma_c-N\Lambda_c}^{\Lambda_c} G_0T_{\Lambda_c\Lambda_c}^N + t_{N\Sigma_c-N\Lambda_c}^{\Lambda_c} G_0T_{N\Lambda_c}^{\Lambda_c} \nonumber \\ T_{N\Sigma_c}^{\Sigma_c} = && -t_{N\Sigma_c}^{\Sigma_c} G_0T_{N\Sigma_c}^{\Sigma_c} +2 \, t_{N\Sigma_c}^{\Sigma_c} G_0T_{\Sigma_c\Sigma_c}^N + t_{N\Sigma_c-N\Lambda_c}^{\Sigma_c} G_0T_{N\Sigma_c}^{\Lambda_c} \nonumber \end{eqnarray} \begin{eqnarray} T_{\Lambda_c\Lambda_c}^N = && t_{\Lambda_c\Lambda_c}^N G_0T_{N\Lambda_c}^{\Lambda_c} +t_{\Lambda_c\Lambda_c-\Sigma_c\Sigma_c}^N G_0T_{N\Sigma_c}^{\Sigma_c} \nonumber \\ T_{\Sigma_c\Sigma_c}^N = && t_{\Sigma_c\Sigma_c}^N G_0T_{N\Sigma_c}^{\Sigma_c} +t_{\Sigma_c\Sigma_c-\Lambda_c\Lambda_c}^N G_0T_{N\Lambda_c}^{\Lambda_c} \, , \label{eq71} \end{eqnarray} where $t_{ij}^k$ are the two-body $t$-matrices, which already contain the coupling among all two-body channels contributing to a given three-body state (see Table~\ref{tab2}). $G_0$ is the propagator of three free particles. The superscript of the amplitudes indicates the spectator particle and the subscript the interacting pair.
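Let us remark, schematically (this is merely our rephrasing of the standard Fredholm-determinant method of Ref.~\cite{Gar87}, not an additional ingredient of the calculation), that after partial-wave decomposition and discretization of the momentum integrals, Eqs.~(\ref{eq71}) take the form of a homogeneous linear system,
\[
T = K(E)\, T \, ,
\]
which admits a nontrivial solution only at energies where
\[
D(E)=\det\left[1-K(E)\right]=0 \, .
\]
A bound state therefore corresponds to a zero of $D(E)$ below the lowest threshold; $D(E)$ is the quantity plotted in Figs.~\ref{fig2}--\ref{fig4}, and the closer it dips toward zero, the more attractive the system.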
\begin{table}[t] \caption{$S$ wave two-body channels $(i,j)$ of the various subsystems that contribute to the $N\Lambda_c\Lambda_c - N\Sigma_c\Sigma_c$ $(I,J^P)=(\frac{1}{2},\frac{1}{2}^+)$ three-body system.} \begin{ruledtabular} \begin{tabular}{ccccc} & Two-body subsystem & Spectator & $(i,j)$ & \\ \hline & $\Lambda_c\Lambda_c$ & $N$ & (0,0) & \\ & $N\Lambda_c$ & $\Lambda_c$ & $(\frac{1}{2},0)$,$(\frac{1}{2},1)$ & \\ & $\Sigma_c\Sigma_c$ & $N$ & (0,0),(1,1) & \\ & $N\Lambda_c$ & $\Sigma_c$ & $(\frac{1}{2},0)$,$(\frac{1}{2},1)$ & \\ & $N\Sigma_c$ & $\Lambda_c$ & $(\frac{1}{2},0)$,$(\frac{1}{2},1)$ & \\ & $N\Sigma_c$ & $\Sigma_c$ & $(\frac{1}{2},0)$,$(\frac{1}{2},1)$, $(\frac{3}{2},0)$,$(\frac{3}{2},1)$ & \\ \end{tabular} \end{ruledtabular} \label{tab2} \end{table} The results are presented in Fig.~\ref{fig3}. We have performed three different calculations. First, we have included the three-body amplitudes of the first two lines of Table~\ref{tab2}, which do not contain the $\Sigma_c$ baryon; the result is depicted by the dotted line. As could have been expected, there is attraction due to the attractive character of the $N\Lambda_c$ and $\Lambda_c\Lambda_c$ interactions discussed in Sect.~\ref{secII}. However, the attraction is not sufficient to produce a bound state. Then, we have included the amplitudes containing the $\Sigma_c\Sigma_c$ two-body subsystem, third line in Table~\ref{tab2}, and all isospin 1/2 three-body amplitudes containing a $\Sigma_c$ baryon either as spectator or as a member of the interacting pair, lines 4 to 6 of Table~\ref{tab2}. The result corresponds to the dashed-dotted line. The coupled-channel effect induces some additional attraction, reducing the value of the Fredholm determinant. Finally, we added the isospin $3/2$ $N\Sigma_c$ amplitudes indicated in the last line of Table~\ref{tab2}; the result is depicted by the solid line. Although the singlet amplitudes increase the attraction, the repulsive triplet isospin $3/2$ $N\Sigma_c$ amplitude, discussed in Sect.~\ref{secII}, induces an overall repulsion in the three-body system, increasing the value of the Fredholm determinant. \begin{figure*}[t] \vspace*{-0.5cm} \includegraphics[width=.60\columnwidth]{Fig3.eps} \vspace*{-6.cm} \caption{Fredholm determinant of the $N\Lambda_c\Lambda_c - N\Sigma_c\Sigma_c$ $(I,J^P)=(\frac{1}{2},\frac{1}{2}^+)$ three-body state. See text for details.} \label{fig3} \end{figure*} \begin{figure*}[t] \vspace*{-0.5cm} \includegraphics[width=.60\columnwidth]{Fig4.eps} \vspace*{-6.cm} \caption{Fredholm determinant of the $N\Lambda_c\Lambda_c - N\Sigma_c\Sigma_c$ $(I,J^P)=(\frac{1}{2},\frac{1}{2}^+)$ three-body state for two different values of the Gaussian size parameter of the charm quark wave function, $b_c$.} \label{fig4} \end{figure*} We have studied the dependence of the results on the only free parameter of the interacting potential, the Gaussian size parameter of the charm quark wave function, $b_c$. It has been illustrated in Fig. 4 of Ref.~\cite{Gar19} how the central $Y_c N$ interactions become more attractive for smaller values of $b_c$. However, the weakening of the non-central contributions generates a slightly less attractive system in the presence of coupled-channel effects, see right panel of Fig. 5 of Ref.~\cite{Car15}.
It was argued in Ref.~\cite{Car11} that smaller values of $b_c$ are preferred in order to be consistent with calculations based on infinite expansions, such as hyperspherical harmonic expansions~\cite{Vij09}, where the quark wave function is not postulated. This also agrees with simple harmonic oscillator relations $b_c=b_n\sqrt{\frac{m_n}{m_c}}$~\footnote{$b_n$ and $m_n$ refer to the light quarks; they are given in Table 1 of Ref.~\cite{Gar19} and Table II of Ref.~\cite{Car15}.}, leading to the best-suited value $b_c=0.2$ fm for the study of the charm sector~\cite{Car11}. Thus, in Fig.~\ref{fig4} we show the Fredholm determinant for two different values of $b_c$, 0.5 and 0.2 fm. As can be seen, the attraction increases for smaller values of $b_c$, the effect not being enough to generate a bound state. Such tiny contributions might well be of importance for states at the edge of binding, as discussed in the following. Guided by the resonance found in the $\Sigma_c\Sigma_c$ system, we have finally studied the $N\Sigma_c\Sigma_c$ system without coupling to $N\Lambda_c\Lambda_c$, looking for a three-body resonance. The results are promising if the triplet isospin $3/2$ amplitude is not considered. Thus, considering only the isospin $1/2$ amplitudes, we obtain a bound state with a separation energy of 0.6 MeV. If the singlet isospin $3/2$ amplitude is included, the separation energy increases to 0.7 MeV. If the repulsive triplet isospin 3/2 $(N\Sigma_c)\Sigma_c$ amplitude is considered, the signal of the resonance is lost. However, adopting the best-suited value for the charm sector, $b_c=0.2$ fm, the $N\Sigma_c\Sigma_c$ three-body resonance survives, with a separation energy of 0.2 MeV. \section{Summary} \label{secIV} In short, we have studied two- and three-baryon systems with two units of charm looking for possible bound states or resonances. All two-baryon interactions are consistently derived from a constituent quark model tuned to the light-flavor hadron phenomenology. Our results show a narrow two-body resonance with quantum numbers $(I,J^P)=(0,0^+)$. It is located 6.2 MeV below the $\Sigma_c\Sigma_c$ threshold and has a width of 4.7 MeV. A detailed study of the coupled $N\Lambda_c\Lambda_c - N\Sigma_c\Sigma_c$ three-body system as well as the uncoupled $N\Sigma_c\Sigma_c$ system shows that the foregoing two-body state contributes to generate an $N\Sigma_c\Sigma_c$ resonance with quantum numbers $(I,J^P)=(1/2,1/2^+)$ and a separation energy of 0.2 MeV. Weakly bound states and resonances are usually very sensitive to the details of the potential, and therefore theoretical investigations with different phenomenological models are highly desirable. The existence of these states could be scrutinized in the future at the LHC, J-PARC and RHIC, providing a great opportunity to extend our knowledge to hitherto unexplored corners of the matter world. \section{Acknowledgments} This work has been partially funded by COFAA-IPN (M\'exico) and by Ministerio de Ciencia e Innovaci\'on and EU FEDER under Contracts No. FPA2016-77177-C2-2-P, PID2019-105439GB-C22 and RED2018-102572-T.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Alternate Approximation for Many Additive Agents} \label{sec:additive} For the special case of additive buyers, we show how to modify the analysis of \citet{bilw-focs14} in order to achieve a much tighter bound than that stated in Theorem~\ref{thm:main-partition}. The relaxation and stitching steps hold as before; we prove that \citeauthor{bilw-focs14}'s single-agent approximation can be made to respect {ex~ante}\ constraints with only a small penalty to the approximation factor. \begin{lemma} \label{lem:approx-additive} For any product distribution $\mathcal D$ and any ${\mathbf \eaprob} \in [0,1]^m$, \[\earev(\mathcal D,\feasi[\mathsc{Additive}]) \leq 7\,\eatrev({\mathbf \dist},\feasi[\mathsc{Additive}])\] \end{lemma} Combining Lemma~\ref{lem:approx-additive} with Lemma~\ref{lem:relaxation} and Corollary~\ref{cor:stitching} as before, we get our improved result. \begin{theorem} \label{thm:main-additive} For any product value distribution ${\mathbf \dist}$, there exists a supply-feasible sequential two-part tariff mechanism ${\mathcal M}$ such that \[ \mathsc{Rev}({\mathbf \dist}, \times_n \feasi[\mathsc{Additive}])\le 28\,\revm({\mathbf \dist}) \] \end{theorem} We devote the remainder of this section to the proof of Lemma~\ref{lem:approx-additive}; we drop the explicit dependence on the feasibility constraint, $\feasi[\mathsc{Additive}]$, for clarity. We make use of the fact that relaxing the demand constraint is unnecessary in the additive setting. This allows us to get a much tighter concentration result in the core. Instead of defining $t_j = \eapricej + \tau$, as in Section~\ref{sec:single-agent}, we define $t_j = \max\left(\eapricej, \easrev(\mathcal D)\right)$. It is straightforward to verify that our core decomposition (Lemma~\ref{lem:core-decomposition}) continues to hold under this definition. The key insight in~\citet{bilw-focs14}'s analysis is that this definition allows for a nontrivial bound on the variance of the core, leading to a strong concentration result via Chebyshev's inequality, while keeping the expected number of items in the tail small. It turns out that their analysis goes through under an {ex~ante}\ constraint, except for a small loss in the core due to enforcing the constraint for the bundle pricing. In addition to the notation from Section~\ref{sec:single-agent}, we define the following notation (see Table~\ref{tab:add-notation}). Let $r_j = \earev[\eaprobj](\distj)$ and $r = \easrev(\mathcal D)$. Note that in the additive setting $\easrev(\mathcal D) = \sum_j\earev[\eaprobj](\distj)$; in other words, $r = \sum_jr_j$. 
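As a warm-up, the single-item quantities just defined ($\eapricej$ and $r_j$; see also Table~\ref{tab:add-notation}) are easy to compute numerically. The following minimal Python sketch (ours, purely for illustration; the helper functions are hypothetical, not from any referenced library) evaluates $\eapricej = \distj^{-1}(1-\eaprobj)$ and the grid maximization $\max_{q' \le \eaprobj} q'\,\distj^{-1}(1-q')$, which equals $\earev[\eaprobj](\distj)$ whenever the single-item revenue curve is concave, as it is for the Uniform$[0,1]$ example used below.

\begin{verbatim}
import numpy as np

def ex_ante_price(quantile, q):
    # beta_j = F_j^{-1}(1 - q_j)
    return quantile(1.0 - q)

def ex_ante_revenue(quantile, q, grid=10**5):
    # max over sale probabilities q' <= q of q' * F_j^{-1}(1 - q')
    qs = np.linspace(1e-9, q, grid)
    return float(np.max(qs * quantile(1.0 - qs)))

uniform = lambda u: u                   # quantile of Uniform[0, 1]
print(ex_ante_price(uniform, 0.25))     # 0.75
print(ex_ante_revenue(uniform, 0.25))   # 0.1875 = 0.25 * 0.75
\end{verbatim}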
\begin{table}[t] \renewcommand{\arraystretch}{1.5} \caption{Notation for Section~\ref{sec:additive}.} \begin{tabular}{r l l} \hline Notation & Definition & Formula \\ \hline ${\mathbf \eaprob}$ & {Ex~ante}\ probabilities & \\ ${\mathbf \eaprice}$ & {Ex~ante}\ prices & $\eapricej = \distj^{-1}(1-\eaprobj)\,\,\forall j\in [m]$ \\ $r_j$ & Revenue from item $j$ & $\earev[\eaprobj](\distj)$ \\ $r$ & Item-pricing revenue & $\sum_jr_j$ \\ $t_j$ & Core-tail threshold for item $j$ & $\max(\eapricej, r)$ \\ $\tprobj$ & Probability item $j$ is in the tail & $\prob[\valj\sim\distj]{\valj > t_j}$ \\ ${\mathbf \dist}-{\mathbf \price}$ & Distribution ${\mathbf \dist}$ shifted to the left by ${\mathbf \price}$ & \\ \hline \end{tabular} \label{tab:add-notation} \end{table} By Lemmas~\ref{lem:core-decomposition} and \ref{lem:core-bound}, we have \[\earev(\mathcal D) \leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} + \mathsc{Val}(\core-{\mathbf \eaprice}) + \sum_{A\subseteq[m]}\tprobA\mathsc{Rev}(\dists^T_A)\] Clearly ${\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \easrev(\mathcal D)$, so it remains to bound the other two terms. Note that the {ex~ante}\ constraint has been effectively removed from these terms; we will show that \citeauthor{bilw-focs14}'s unconstrained bounds continue to apply here. We state these bounds and then show that our distributions satisfy their conditions. Recalling that $\mathsc{BRev}(\core-{\mathbf \eaprice}) \leq \eatrev(\mathcal D)$ completes the proof. \begin{lemma}[\citet{bilw-focs14}] \label{lem:add-core} If, for all $j$, $Var(\dist^C_j-\eapricej) \leq 2rr_j$, then \[\mathsc{Val}(\core - {\mathbf \eaprice}) \leq 4\,\max\left\{\mathsc{BRev}(\core-{\mathbf \eaprice}),\easrev({\mathbf \dist})\right\}\] \end{lemma} \begin{lemma}[\citet{bilw-focs14}] \label{lem:add-tail} If, for all $j$, $\mathsc{Rev}(\dist^T_j) \leq r_j/\tprobj$ and $\tprobj \leq r_j/r$, then \[\sum_{A\subseteq[m]}\tprobA\mathsc{Rev}(\dists^T_A) \leq 2\,\easrev({\mathbf \dist})\] \end{lemma} The following two lemmas capture the necessary conditions. \begin{lemma} $Var(\dist^C_j-\eapricej) \leq 2rr_j$ \end{lemma} \begin{proof} We first prove $\mathsc{Rev}(\dist^C_j-\eapricej) \leq r_j$ for all $j$. Let $\pricej^* \geq 0$ be an optimal price for selling to $\dist^C_j-\eapricej$. Selling to $\dist^C_j$ at price $\pricej^* + \eapricej$ gets at least as much revenue and sells with probability at most $\eaprobj$, so $\mathsc{Rev}(\dist^C_j-\eapricej) \leq \earev[\eaprobj](\dist^C_j)$. Now, let $q_j^*$ be the probability with which a mechanism obtaining $\earev[\eaprobj](\dist^C_j)$ sells to $\dist^C_j$. Clearly, since $\distj$ stochastically dominates $\dist^C_j$, selling to $\distj$ with probability $q_j^*$ gets at least as much revenue and satisfies the same {ex~ante}\ constraint. Given the above, we employ an argument originally due to \citet{LY-PNAS13} to bound the variance of $\dist^C_j-\eapricej$. Note that $\dist^C_j-\eapricej$ is supported on $[0,r]$, but its revenue is at most $r_j$. So $\prob[V\sim(\dist^C_j-\eapricej)]{V \geq v} \leq r_j/v$. Then, since $r_j \leq r$, \begin{align*} \operatorname{E}\expectarg[V\sim(\dist^C_j-\eapricej)]{V^2} &\leq \int_0^{r^2}\min(1, r_j/\sqrt{v})\,dv = r_j^2 + 2r_j(r - r_j) \\ &= 2rr_j - r_j^2 \leq 2rr_j \end{align*} \end{proof} \begin{lemma} $\mathsc{Rev}(\dist^T_j) \leq r_j/\tprobj$ and $\tprobj \leq r_j/r$. \end{lemma} \begin{proof} The first inequality follows from the assumption that $t_j \geq \eapricej$. Let $\pricej^*$ be an optimal price for $\mathsc{Rev}(\dist^T_j)$.
Then, by setting price $\pricej^*$, one can obtain $\tprobj\mathsc{Rev}(\dist^T_j)$ from $\distj$ while respecting the {ex~ante}\ constraint $\eaprobj$. In other words, $\tprobj\mathsc{Rev}(\dist^T_j) \le \earev[\eaprobj](\distj) = r_j$. Recall that $t_j \ge r$. So one could sell item $j$ at price $t_j$ and earn profit $\tprobj t_j \ge \tprobj r$ while respecting the {ex~ante}\ constraint $\eaprobj$. But $\earev[\eaprobj](\distj) = r_j$; therefore we must have $\tprobj r \leq \tprobj t_j \leq r_j$. \end{proof} \subsection{Bounding the Core} Recall that an item $j$ is in the core if its value $\valj$ is no more than the threshold $t_j = \eapricej + \tau$. We will bound the {ex~ante}\ constrained social welfare of the core, $\eaVal(\core,{\mathcal F})$, in two parts: the welfare obtained from values below ${\mathbf \eaprice}$ via a prophet inequality and the welfare between ${\mathbf \eaprice}$ and ${\mathbf \eaprice} + \tau$ using a concentration bound introduced by \citet{rw-15}. Recall that $\dist^C_j$ denotes the value distribution for item $j$ conditioned on being in the core. We use $\dist^C_j-\eapricej$ to denote the distribution of $\valj-\eapricej$ conditioned on $\valj$ being in the core; in other words, $\dist^C_j-\eapricej$ is the distribution $\dist^C_j$ shifted to the left by $\eapricej$. $\core-{\mathbf \eaprice}$ is defined to be the product of the distributions $\dist^C_j-\eapricej$. Observe that value vectors drawn from $\core-{\mathbf \eaprice}$ are bounded by $\tau$ in every coordinate. The following lemma breaks $\eaVal(\core,{\mathcal F})$ up into the two components, each of which can be bounded separately. \begin{lemma} \label{lem:core-bound} For any product distribution ${\mathbf \dist}$ and downwards closed feasibility constraint ${\mathcal F}$, $\eaVal(\core,{\mathcal F}) \leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} + \mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F})$. \end{lemma} \begin{proof} Let ${\mathbf \alloc}({\mathbf \val})$ be the interim allocation rule of a ${\mathbf \eaprob}$-constrained BIC mechanism which attains social welfare equal to $\eaVal(\core, {\mathcal F})$. Then \begin{align*} \eaVal(\core,{\mathcal F}) &= \sum_j\int_0^{t_j}f_j(y)\allocj(y)y\,dy \\ &\leq \sum_j\int_0^{t_j}f_j(y)\allocj(y)\eapricej \,dy + \sum_j\int_0^{t_j}f_j(y)\allocj(y)(y-\eapricej)\,dy \\ &\leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} + \mathsc{Val}(\core-{\mathbf \eaprice}, {\mathcal F}). \end{align*} \end{proof} We can recover $\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F})$ using a two-part tariff for the original distribution ${\mathbf \dist}$ by employing the following concentration result proved by \citet{rw-15}, based on a result of \citet{schechtman-99}. \begin{lemma}[\citet{rw-15}] \label{lem:schechtman} Let ${\mathbf \val}$ be a constrained additive value function with a downwards closed feasibility constraint, drawn from a distribution over support $(-\infty, \tau]$ for some $\tau\ge 0$. Let $a$ be the median of the value of the grand bundle, ${\mathbf \val}([m])$. Then, $\operatorname{E}\expectarg{{\mathbf \val}([m])} \leq 3a + 4\tau/\ln 2$.
\end{lemma} \begin{lemma} \label{lem:lipschitz-dist-bound} \begin{align*} \mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F}) & \leq 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) + \frac{8}{\ln 2} \,\easrev({\mathbf \dist},{\mathcal F}) \end{align*} \end{lemma} \begin{proof} We apply Lemma~\ref{lem:schechtman} to the distribution $\core-{\mathbf \eaprice}$ to obtain $\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F}) \le 3a+4\tau/\ln 2$, where $a$ is the median of the value of the grand bundle under the distribution $\core-{\mathbf \eaprice}$, and $\tau$ is the constant defined earlier. Consider offering the grand bundle at price $a$ to a buyer with value drawn from $\core-{\mathbf \eaprice}$; the buyer accepts with probability at least $1/2$. Therefore $\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice})\ge\mathsc{BRev}(\core-{\mathbf \eaprice})\ge a/2$. Next, suppose that $\tau>0$. Consider selling the items separately at prices $t_j$ for all $j$. Recall that $\tau>0$ implies that $\prob[{\mathbf \val}\sim{\mathbf \dist}]{\exists j \text{ s.t. } \valj > t_j} = 1-\tprobA[\emptyset]= 1/2$. So the agent buys at least one item with probability $1/2$. Noting that $t_j\geq\tau$ for all $j$, this item pricing obtains a revenue of at least $\tau/2$. Since also $t_j \geq \eapricej$ for all $j$, this item pricing respects the {ex~ante}\ constraint, and we have $\tau \leq 2\easrev({\mathbf \dist},{\mathcal F})$. \end{proof} \subsection{Putting the Pieces Together} Combining Lemmas~\ref{lem:core-decomposition}, \ref{lem:tail-bound}, \ref{lem:core-bound}, and \ref{lem:lipschitz-dist-bound}, we obtain the main result of this section: \begin{lemma} \label{lem:single-agent} For any product value distribution ${\mathbf \dist}$, downward closed feasibility constraint ${\mathcal F}$ and {ex~ante}\ constraints ${\mathbf \eaprob}$, \[\earev({\mathbf \dist},{\mathcal F}) \leq 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) + 8(1+\ln 2 + 1/\ln 2)\,\easrev({\mathbf \dist},{\mathcal F}) + {\mathbf \eaprice}\cdot{\mathbf \eaprob}.\] \end{lemma} It remains to bound the ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ term. Note that this term is the revenue that would be obtained in the absence of any demand constraint (equivalently, in the additive setting) by setting the {ex~ante}\ prices on the items. When ${\mathcal F}$ is a partition matroid and if the {ex~ante}\ constraint ${\mathbf \eaprob}$ lies in the shrunk polytope $\frac 12{\mathcal P}_{\feas}$, \citet{CHMS-STOC10} show via a prophet inequality that the term ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ is bounded by the revenue of an item pricing. \begin{lemma}[\citet{CHMS-STOC10}] \label{thm:partition-matroid} For a partition matroid ${\mathcal F}$, {ex~ante}\ constraints ${\mathbf \eaprob}\in\frac 12{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices ${\mathbf \eaprice}$, \[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq 2\,\easrev({\mathbf \dist},{\mathcal F}).\] \end{lemma} No prophet inequality based on static thresholds is known for general matroids. \citet{fsz-15} nonetheless show that, if ${\mathbf \eaprob} \in b{\mathcal P}_{\feas}$, selling at the {ex~ante}\ prices recovers a $(1-b)$ fraction of the relaxed revenue under a stronger demand constraint. This leads to the following result.
\begin{lemma}[\citet{fsz-15}] \label{thm:ocrs} For a general matroid ${\mathcal F}$, constant $b \in (0,1)$, {ex~ante}\ constraints ${\mathbf \eaprob} \in b{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices ${\mathbf \eaprice}$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \frac{1}{1-b}\easrev({\mathbf \dist},{\mathcal F}').\] Furthermore, the constraint ${\mathcal F}'$ is efficiently computable. \end{lemma} We are now ready to prove Lemma~\ref{lem:approx-partition} and Corollary~\ref{cor:general-ex-ante}, stated in Section~\ref{sec:theorems}. \begingroup \def\ref{cor:general-ex-ante}{\ref{lem:approx-partition}} \begin{lemma} Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid with feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in \frac 12{\mathcal P}_{\feas}$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}'). \] If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$. \end{lemma} \addtocounter{theorem}{-1} \endgroup \begin{proof} We first observe that $\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) \le \eatrev({\mathbf \dist},{\mathcal F})$. In particular, for any $a>0$, a two-part tariff with entry fee $a$ and item prices ${\mathbf \eaprice}$ achieves at least as much revenue over values drawn from ${\mathbf \dist}$ as does a bundle pricing with price $a$ over values drawn from ${\mathbf \dist} - {\mathbf \eaprice}$. The lemma now follows from Lemma~\ref{lem:single-agent}, together with the bounds on ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ given by Lemmas~\ref{thm:partition-matroid} and \ref{thm:ocrs}. \end{proof} \smallskip As a final remark, we note that the condition ${\mathbf \eaprob} \in \frac12{\mathcal P}_{\feas}$ in Lemma~\ref{lem:approx-partition} is necessary only to recover ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$; we can, in fact, show a slightly weaker result which holds for arbitrary ${\mathbf \eaprob}$. \begingroup \def\ref{cor:general-ex-ante}{\ref{cor:general-ex-ante}} \begin{corollary} Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid. Then for any $q\in[0,1]^m$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[\earev(\mathcal D, {\mathcal F}) \le 35.1\,\eatrev(\mathcal D,{\mathcal F}'). \] If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$. \end{corollary} \addtocounter{theorem}{-1} \endgroup \begin{proof} For any ${\mathbf \eaprob} \in [0,1]^m$, \[\earev(\mathcal D,{\mathcal F}) \leq \max_{\substack{{\mathbf \eaprob}'\leq{\mathbf \eaprob} \\ {\mathbf \eaprob}'\in{\mathcal P}_{\feas}}} \earev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}).\] Therefore, there exists ${\mathbf \eaprob}' \in {\mathcal P}_{\feas}$ and corresponding ${\mathbf \eaprice}'$ such that Lemma~\ref{lem:single-agent} gives \[\earev({\mathbf \dist},{\mathcal F}) \leq 31.1 \eatrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}) + {\mathbf \eaprice}'\cdot{\mathbf \eaprob}'.\] Furthermore, by scaling ${\mathbf \eaprob}'$ to lie in $\frac12{\mathcal P}_{\feas}$ we can only increase the corresponding {ex~ante}\ prices, so Lemma~\ref{thm:ocrs} gives ${\mathbf \eaprice}'\cdot{\mathbf \eaprob}' \leq 4\easrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}')$ for some ${\mathcal F}'\subseteq{\mathcal F}$.
The corollary now follows by noting $\eatrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}) \leq \eatrev(\mathcal D,{\mathcal F})$ and $\easrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}') \leq \easrev(\mathcal D,{\mathcal F}')$. \end{proof} \subsection{Core Decomposition with {Ex~Ante}\ Constraints} \label{sec:core-decomp} The proof of Lemma~\ref{lem:core-decomposition} makes use of the following two lemmas, which are analogous to results proved by Babaioff et al. after Li and Yao. \begin{lemma} \label{lem:subdomain-stitching} There exists a set $\{\eaprobsA \in [0,1]^m : A \subseteq [m]\}$ such that $\sum_A \tprobA \eaprobAj \leq \eaprobj$ for all $j$ and \[\earev({\mathbf \dist},{\mathcal F}) \leq \sum_{A\subseteq[m]}\tprobA\earev[\eaprobsA]({\mathbf \dist}_A,{\mathcal F})\] \end{lemma} \begin{proof} Let ${\mathcal M}$ be a BIC mechanism which is ${\mathbf \eaprob}$-constrained under ${\mathbf \dist}$ such that $\revm({\mathbf \dist}) = \earev({\mathbf \dist},{\mathcal F})$. So $\revm({\mathbf \dist}) = \sum_{A\subseteq[m]}\tprobA\revm({\mathbf \dist}_A)$. Let $\eaprobAj$ be the probability that ${\mathcal M}$ allocates item $j$ given that ${\mathbf \val}$ is drawn from ${\mathbf \dist}_A$; that is, $\eaprobAj = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}_A]{\allocj({\mathbf \val})}$. Clearly $\sum_{A}\tprobA \eaprobAj \leq \eaprobj$, by the assumption that ${\mathcal M}$ is ${\mathbf \eaprob}$-constrained. The result follows since $\revm({\mathbf \dist}_A) \leq \earev[\eaprobsA]({\mathbf \dist}_A,{\mathcal F})$ for each $A$. \end{proof} \begin{lemma} \label{lem:marginal-mechanism} For any two independent distributions ${\mathbf \dist}_S$ and ${\mathbf \dist}_T$ over disjoint sets of items $S$ and $T$ with corresponding {ex~ante}\ constraints ${\mathbf \eaprob}_S$ and ${\mathbf \eaprob}_T$ and a joint feasibility constraint ${\mathcal F}$, \[\earev[({\mathbf \eaprob}_S; {\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F}) \leq \eaVal[{\mathbf \eaprob}_S]({\mathbf \dist}_S,{\mathcal F}|_S) + \earev[{\mathbf \eaprob}_T]({\mathbf \dist}_T,{\mathcal F}|_T).\] \end{lemma} \begin{proof} Let ${\mathcal M}$ be a BIC mechanism which is $({\mathbf \eaprob}_S; {\mathbf \eaprob}_T)$-constrained under $({\mathbf \dist}_S\times{\mathbf \dist}_T)$ such that $\revm({\mathbf \dist}_S\times{\mathbf \dist}_T) = \earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F})$. We construct a mechanism ${\mathcal M}'$ for selling items in $T$ as follows. ${\mathcal M}'$ first samples ${\mathbf \val}_S\sim{\mathbf \dist}_S$, and then solicits a bid ${\mathbf \val}_T$ for items in $T$. Let $({\mathbf \alloc}_{S\cup T}({\mathbf \val}_S;{\mathbf \val}_T), p({\mathbf \val}_S;{\mathbf \val}_T))$ be the allocation returned and payment charged by ${\mathcal M}$ for the combined bid; then ${\mathcal M}'$ returns the allocation ${\mathbf \alloc}_T({\mathbf \val}_S;{\mathbf \val}_T) $ and charges $p({\mathbf \val}_S;{\mathbf \val}_T) - {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))$. We now prove that ${\mathcal M}'$ is truthful. Suppose the bidder submits a bid ${\mathbf \val}_T'$. 
His utility is ${\mathbf \val}_T({\mathbf \alloc}_T({\mathbf \val}_S; {\mathbf \val}_T')) - \big(p({\mathbf \val}_S; {\mathbf \val}_T') - {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S; {\mathbf \val}_T'))\big)$, which is the utility of a bidder participating in ${\mathcal M}$ with valuation $({\mathbf \val}_S,{\mathbf \val}_T')$. Since ${\mathcal M}$ is truthful, the bidder can do no worse by bidding ${\mathbf \val}_T$ in ${\mathcal M}'$ and receiving the utility of an agent who bids truthfully in ${\mathcal M}$. Note that ${\mathcal M}'$ allocates item $j \in T$ exactly when ${\mathcal M}$ does (conditioned on ${\mathbf \val}_S$). So ${\mathcal M}'$ is demand-feasible. Furthermore, since ${\mathcal M}'$ draws ${\mathbf \val}_S$ from ${\mathbf \dist}_S$, ${\mathcal M}'$ is also ${\mathbf \eaprob}_T$-constrained under ${\mathbf \dist}_T$. Formally, let ${\mathbf \alloc}'$ be the allocation rule of ${\mathcal M}'$; then $\operatorname{E}\expectarg[{\mathbf \val}_T\sim{\mathbf \dist}_T]{\allocj'({\mathbf \val}_T)} = \operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{\allocj({\mathbf \val}_S;{\mathbf \val}_T)} \leq \eaprobj$ for all $j \in T$. The revenue obtained by ${\mathcal M}'$ is \begin{align*} \revm[{\mathcal M}']({\mathbf \dist}_T) &= \operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{p({\mathbf \val}_S;{\mathbf \val}_T) - {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))} \\ &= \earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F}) - \operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{{\mathbf \val}_S( {\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))} \\ &\geq \earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F}) - \eaVal[{\mathbf \eaprob}_S]({\mathbf \dist}_S,{\mathcal F}|_S), \end{align*} where the inequality follows because the welfare ${\mathcal M}$ obtains from items in $S$ is a lower bound on the welfare of any ${\mathbf \eaprob}_S$-constrained mechanism for ${\mathbf \dist}_S$. \end{proof} \begin{proofof}{Lemma \ref{lem:core-decomposition}} By Lemmas \ref{lem:subdomain-stitching} and \ref{lem:marginal-mechanism}, we have \[\earev({\mathbf \dist},{\mathcal F}) \leq \sum_{A\subseteq[m]} \tprobA\left( \eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c}) + \earev[\eaprobsA](\dists^T_A,{\mathcal F}|_{A})\right).\] For each $A\subseteq [m]$, let ${\mathcal M}^A$ be a truthful $\eaprobsA$-constrained demand-feasible mechanism which obtains welfare equal to $\eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c})$. One way to allocate items when values are drawn from $\core$ is to choose to sell only items from some set $A\subseteq[m]$. Consider a mechanism which chooses from among all subsets of items, choosing $A$ with probability $\tprobA$, and then runs ${\mathcal M}^A$. The expected welfare from such a mechanism is exactly $\sum_{A\subseteq[m]}\tprobA\eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c})$. Since $\sum_{A\subseteq[m]}\tprobA\eaprobAj \leq \eaprobj$, the welfare of this mechanism also provides a lower bound on $\eaVal(\core,{\mathcal F})$. \end{proofof} \section{Introduction} Multi-parameter optimal mechanism design is challenging from both a computational and a conceptual viewpoint, even when it involves only a single buyer. 
Multi-parameter type spaces can be exponentially large, and multi-dimensional incentive constraints lack the nice structure of single-dimensional constraints that permits simplification of the optimization problem. As a result, optimal mechanisms can possess undesirable properties such as requiring randomness \citep{BCKW-SODA10, HN-EC12, HN-EC13} and displaying non-monotonicity of revenue in values \citep{HR-2013, rw-15}, and they are in many cases computationally hard to find (see, e.g., \citealp{DDT-WINE12, DDT-SODA14}). The situation is exacerbated in multi-agent settings. \citet[chap. 8]{Hartline-MDnA} identifies two further difficulties: multi-parameter agents impose multi-dimensional externalities on each other that may not be possible to capture succinctly; and multi-parameter problems are typically not revenue linear, meaning that the optimal revenue does not scale linearly with the probability of service. Designing simple near-optimal mechanisms in such settings is a primary goal of algorithmic mechanism design. In this paper we study the problem facing a monopolist with many items and many buyers, where each buyer is interested in buying one of many different subsets of items, and his value for each such subset is additive over the items in that subset. What selling mechanism should the monopolist use in such a setting to maximize his revenue? One challenge for the seller is that buyers may have heterogeneous preferences: some buyers are interested in buying a few specific items, others are indifferent between multiple items, and yet others have a long shopping list. We design the first approximation mechanism for this problem; our main result is a constant-factor approximation when buyers' values are additive up to a matroid feasibility constraint. Our approximation mechanism has a particularly simple and appealing format -- a sequential extension of standard {\em two-part tariff mechanisms}. Two-part tariffs for a single agent have the following structure. The buyer first pays a fixed entry fee and is then allowed to purchase any set of items at fixed per-item prices. The buyer may choose not to participate in the mechanism, in which case he does not pay the entry fee and does not receive any item. In our context, buyers pay (different) entry fees at the beginning, and then take turns (in an arbitrary but fixed order) to buy a subset of items at predetermined item-specific prices, subject to availability. There are many real-world examples of two-part tariffs, such as amusement park pricing; memberships for discount shopping clubs like Costco, Sam's Club, and Amazon's Prime; telephone services; and membership programs for cooperatives and CSAs. These mechanisms have long been studied in economics for their ability to effectively price discriminate among different buyers despite their relative simplicity. \citet{Arm-99} shows, for example, that for an additive-value buyer with independent item values and sufficiently many items, two-part tariffs extract nearly the entire social surplus. Our work combines and significantly extends techniques from several different lines of work in mechanism design. We use the {\em {ex~ante}\ relaxation} of \citet{Alaei-FOCS11} to break up the multi-agent revenue maximization problem into its single-agent counterparts and capture the externalities among buyers through {ex~ante}\ supply constraints.
We solve the single-agent problems with {ex~ante}\ supply constraints by adapting and extending the so-called {\em core-tail decomposition} technique of \citet{LY-PNAS13}, as well as employing the prophet inequalities of \citet{CHMS-STOC10} and \citet{fsz-15}. Finally, we use ideas from \citep{CHMS-STOC10} to combine the resulting single-agent mechanisms sequentially and obtain a multi-agent approximation mechanism that is {ex~post}\ supply~feasible. While our main result applies to buyers with values additive up to a matroid constraint, parts of our approach extend to more general value functions such as those satisfying the gross substitutes condition. \subsection{Multi-Parameter Mechanism Design: Previous Work} This paper belongs to a long line of research on finding simple and approximately optimal mechanisms for multi-parameter settings under various assumptions on the buyers' value functions and type distributions, and on the seller's supply constraint. The first breakthrough along these lines was made by \citet{CHK-07}, who showed that the revenue of an optimal mechanism for a single unit-demand buyer can be approximated within a factor of $3$ by an item pricing,\footnote{\citet{CHMS-STOC10} later improved this approximation factor to $2$.} a mechanism that allows the buyer to choose any item to buy at fixed per-item prices. More recently, \citet{bilw-focs14} developed a similar result for a single buyer with additive values.\footnote{This is the culmination of a series of papers including \cite{HN-EC12, HN-EC13, LY-PNAS13}.} They showed that the revenue of an optimal mechanism in this case is approximated within a factor of $6$ by one of two simple mechanisms: an item pricing that fixes a price for each item and allows the buyer to choose any subset of items to buy, and a bundle pricing that allows the buyer to buy the grand bundle of all items at a fixed price. Observe that item pricing and bundle pricing are both two-part tariffs (with the entry fee or the per-item prices being zero, respectively). Unit-demand and additive types are two extremes within a broader class of value functions that we call {\em constrained additive values}. A constrained additive buyer has a value (drawn independently) for each item under sale; he is interested in buying a set of items that satisfies a certain downward-closed constraint; his value is additive over any such set. We have only recently begun to understand optimal mechanism design for a single agent with constrained additive values. \citet{rw-15} proved that in this setting, as in the additive case, either item pricing or bundle pricing gives a constant-factor approximation to the optimal revenue.\footnote{\citet{rw-15}'s result holds for a much broader setting with a single subadditive value agent, but their approximation factor is rather large -- about 340.} There are many similarities between the two lines of work on unit-demand buyers and additive buyers, and \citeauthor{rw-15}'s result can be seen as a unification of the two approaches, albeit with a worse approximation factor. Multi-parameter settings with multiple buyers are less well understood. For settings with many unit-demand buyers, \citet{CHMS-STOC10, cms-10} developed a generic approach for approximation via sequential posted-price mechanisms (SPMs). SPMs approach buyers in some predetermined order and offer items for sale to each buyer at predetermined prices while supplies last.
For settings with many additive-value buyers, \citet{yao-15} showed that either running a second-price auction for each item separately or optimally selling to bidder $i$ the set of items for which he is the highest bidder\footnote{In the latter case, Yao approximates the optimal revenue via two-part tariffs.} achieves a constant-factor approximation. \citet{CDW-16} presented a new uniform framework that can be used to rederive both Yao's and Chawla et al.'s results, with a tighter analysis for the former. However, prior to our work, no approximations were known for other constrained additive settings or for settings with heterogeneous buyers. Consider, for example, a setting with some unit-demand and some additive buyers. In this case, neither of the results mentioned above provides an approximation. \citeauthor{CHMS-STOC10}'s analysis relies on a reduction from multi-dimensional incentive constraints to single-dimensional ones that applies only to the unit-demand setting, and, in particular, cannot account for revenue from bundling, which is crucial in non-unit-demand settings. \citeauthor{yao-15}'s approach, on the other hand, relies on allocating each item to the highest-value agent, and cannot provide a constant-factor approximation for subadditive~agents.\footnote{To see why Yao's approach cannot work for unit-demand agents, observe that if a single unit-demand agent has the highest value for each item, the seller must try to sell all but one item to non-highest-value buyers in order to obtain good revenue.} A different approach to optimal mechanism design due to \citet{CDW-STOC12, CDW-FOCS12, CDW-SODA13, CDW-FOCS13} uses linear programming formulations for settings with small-support type distributions, and shows that optimal mechanisms are virtual welfare maximizers. This approach is unsuitable for our setting which, even upon discretization of values, involves distributions over exponential-size supports. Moreover, mechanisms generated by this approach tend to lack the nice structure and simplicity of pricing-based~mechanisms. Finally, a new approach to mechanism design has emerged in recent years that uses duality theory to design as well as analyze optimal or approximately optimal mechanisms \citep[see, e.g.,][]{DDT-EC15, GK-14, GK-15, HH-15, CDW-16}. Designing good dual solutions in this context, however, involves more art than science, and for the most part, positive results are restricted to very special classes of value functions and value distributions. \subsection{Our Techniques and Contributions} \paragraph{{Ex~Ante}\ Relaxation.} Our work follows a generic approach developed by \citet{Alaei-FOCS11} for transforming multi-agent mechanism design problems into their single-agent counterparts via the so-called {\em {ex~ante}\ relaxation}. In a multi-agent setting, agents impose externalities upon each other through the seller's supply constraint: each item must be sold to at most one buyer {ex~post}. Alaei proposes relaxing the problem by enforcing the supply constraint {ex~ante}\ rather than {ex~post}: the probabilities with which an item is sold to the different buyers should sum up to no more than one. In other words, in expectation the item is sold at most once.
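Schematically, and anticipating notation that is set up formally in Section~\ref{sec:prelim} and Lemma~\ref{lem:relaxation}: if $\eaprobij = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocij({\mathbf \val})}$ denotes the {ex~ante}\ probability that buyer $i$ receives item $j$, the relaxation replaces the {ex~post}\ requirement that each item be allocated to at most one buyer with the single linear constraint \[\sum_{i\in[n]} \eaprobij \le 1 \quad \text{for all } j\in[m].\]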
Applying the {ex~ante}\ relaxation to a mechanism design problem with multiple buyers involves three steps: \begin{enumerate} \item {\bf Decompose into single-agent problems:} determine the {ex~ante}\ probabilities with which each item can be sold to each buyer; for each item these probabilities should sum up to no more than $1$; \item {\bf Solve single-agent problems:} for each agent, find an approximately optimal mechanism satisfying the {ex~ante}\ supply constraint determined in the first step; \item {\bf Stitch single-agent mechanisms:} combine the single-agent mechanisms developed in the second step into a single mechanism that satisfies the supply constraints {ex~post}. \end{enumerate} The first step is conceptually simple and applies in any setting where buyers have independent values. We reproduce this argument in Section~\ref{sec:relaxation} for completeness. Alaei described how to implement the second and third steps for problems involving unit-demand agents.\footnote{Alaei also presented solutions for certain additive-value settings under the assumption that the agents' type spaces are small and given explicitly.} For the third ``stitching'' step, he suggested composing the single-agent mechanisms sequentially (similar to the approach of \citet{CHMS-STOC10}). However, this does not work for arbitrary single-agent mechanisms. Once the composite mechanism has sold off a few items, fewer bundles are available to subsequent buyers, and the mechanism may obtain far less revenue than its single-agent counterparts. We show that two-part tariffs compose well without much loss in revenue when each buyer's value function is additive up to a matroid feasibility constraint (and, more generally, when the value functions satisfy the gross substitutes condition). \paragraph{Core-Tail Decomposition.} In order to bound the single-agent revenue as required in step two of the {ex~ante}\ approach, we use the core-tail decomposition of \citet{LY-PNAS13}, and its extensions due to \citet{bilw-focs14} and \citet{rw-15}. Roughly speaking, in the absence of {ex~ante}\ supply constraints, for any vector of item values, we can partition items into those with small value and those with large value. This partitioning is done in such a manner that the set of large-value items (a.k.a. the tail) contains only a few items in expectation; the revenue generated by these items behaves essentially like unit-demand revenue, and can be recovered by selling the items separately via an argument of \citet{cms-10}. The set of small-value items (a.k.a. the core), on the other hand, displays concentration of value and the revenue generated by these items can be recovered via bundling \citep{rw-15}. Under an {ex~ante}\ supply constraint the revenue generated by the tail can still be recovered via item pricing as before. Bounding the revenue from the core is trickier, however, because different items may face very different {ex~ante}\ constraints, and their total values may not concentrate well. Furthermore, selling the grand bundle allocates all items with the same probability to the buyer and consequently may not respect the given {ex~ante}\ constraint. We make a careful choice of thresholds for partitioning values into the core and the tail in such a manner that we can recover the value of the core in two parts: (1) when the {ex~ante}\ constraint is strong (i.e. the allocation probabilities are mostly small), selling separately recovers most of the core revenue; (2) when the {ex~ante}\ constraint is weak (i.e. 
the allocation probabilities are mostly large), bundling as part of a two-part tariff recovers most of the core revenue while continuing to respect the {ex~ante}\ constraint. \paragraph{Prophet Inequalities.} Observe that the {ex~ante}\ approach described above relaxes the seller's supply constraint, but continues to enforce the buyer's demand constraint\footnote{The buyer's demand constraint refers to, e.g., whether the buyer desires one item as in the unit-demand case, or all items as in the additive case.} {ex~post}. It is unclear how a relaxation of the buyer's demand constraint would capture revenue due to bundling, and whether such a relaxation is useful for mechanism design. Nevertheless, our analysis gives rise to a term which corresponds to item-pricing revenue from a common relaxation of the seller's and buyer's constraints. Roughly speaking, this term captures the total revenue that the seller can obtain from the buyer by selling each item separately subject to a bound on the probability of sale, under the condition that these bounds respect both the seller's and the buyer's feasibility constraints in an {ex~ante}\ sense. For example, for a unit-demand buyer, the probabilities of sale over the items must sum up to no more than $1$. We then employ a prophet inequality to relate this term to the optimal item-pricing revenue for that buyer. A prophet inequality in this context specifies an item pricing that, regardless of which maximal feasible set of items the buyer purchases, obtains in expectation a constant fraction of the {ex~ante}\ optimal revenue. Prophet inequalities of the above form are known to hold for several classes of feasibility constraints, such as uniform matroids, partition matroids, and their intersections (see, e.g., \citealp{CHMS-STOC10}). For general matroid constraints, it is not known whether a prophet inequality with static item prices as described above can obtain a constant approximation factor.\footnote{\citet{KW-STOC12} present a prophet inequality with adaptive prices, but this is unsuitable for our setting.} However, \citet{fsz-15} give a prophet inequality that obtains a constant approximation by restricting the buyer's demand -- in other words, by forbidding the buyer to purchase certain feasible sets. We discuss and use these results in Section~\ref{sec:single-agent}. \paragraph{The Final Mechanism.} As mentioned earlier, our final mechanism is a sequential two-part tariff mechanism. We remark that buyers in our mechanism are required to pay the entry fee before finding out whether their favorite items will be available when it is their turn to buy; therefore, our mechanism is only Bayesian incentive compatible (BIC), and not necessarily dominant strategy incentive compatible (DSIC). We leave open the question of whether it is possible to approximate the optimal revenue within a constant factor via a DSIC mechanism. In some settings, our mechanism restricts the subsets of items that a buyer is allowed to buy; we call such a mechanism a {\em demand-limiting sequential two-part tariff}. This is seen, for instance, in market-style CSA programs in which members can buy only certain quantities and combinations of~produce. \paragraph{Other Contributions.} As special cases of our general result, we also obtain improvements to the results of \citet{rw-15}. Recall that \citeauthor{rw-15} show that for a single buyer with subadditive values, either item pricing or bundle pricing obtains a constant-factor approximation. We improve this result in two ways. 
First, for constrained additive values, we improve the approximation factor from about 340 to 31.1 (Corollary~\ref{cor:true-single-agent}).\footnote{It is possible to use \citeauthor{rw-15}'s techniques to obtain a better approximation for the special case of constrained additive values; however, the resulting bound is still much weaker than ours.} Second, we show that the result holds also under an {ex~ante}\ constraint for a suitable definition of item pricings and bundle pricings that respect the same {ex~ante}\ constraint (see Corollary~\ref{cor:general-ex-ante}). Finally, for revenue maximization with multiple additive buyers, we adapt arguments from \citep{bilw-focs14} to obtain an approximation factor of 28 (Appendix~\ref{sec:additive}); this is an improvement over \citet{yao-15}'s approximation factor of 69 for the same setting, but is worse than \citet{CDW-16}'s improvement of \citeauthor{yao-15}'s analysis to an 8-approximation. Arguably, our analysis for this setting is conceptually simpler than both of those~works. \paragraph{Symmetric Settings.} In an interesting special case of our setting, the buyers are a~priori symmetric (but items are heterogeneous). That is, each buyer's value vector is drawn independently from a common distribution, and all buyers desire the same bundles of items. In this setting, our mechanism sets the same entry fee as well as item prices for all buyers. Furthermore, these fees and prices can be computed~efficiently (Section~\ref{sec:symmetric}). \paragraph{Further Directions.} For settings with asymmetric buyers, we leave open the question of efficiently solving the {ex~ante}\ relaxation. Our main result requires buyers' demand constraints to be matroids for two reasons: this allows us to use a prophet inequality for a single agent, and it also enables us to combine single-agent mechanisms sequentially without much loss in revenue. It is an interesting challenge to apply the {ex~ante}\ approach for demand constraints beyond matroids, or for more general classes of subadditive values. \section*{Acknowledgements} We are grateful to Anna Karlin for feedback on early drafts of this work, and to Jason Hartline for insights on efficiently solving the ex ante relaxation for symmetric agents. \bibliographystyle{apalike} \section{Matroid Concepts} \label{sec:matroids} A matroid $M$ is a tuple $(G, \mathcal{I})$ where $G$ is called the {\em ground set} and $\mathcal{I} \subseteq 2^G$ is a collection of {\em independent sets} satisfying the following two properties: \begin{enumerate} \item If $I \subseteq J$ and $J \in \mathcal{I}$, then $I \in \mathcal{I}$ ($\mathcal{I}$ is downward-closed); and \item If $I,J \in \mathcal{I}$ and $|J| > |I|$, then there exists $e \in J\setminus I$ such that $(I\cup\{e\}) \in \mathcal{I}$. \end{enumerate} A {\em basis} is an independent set of maximal size: $B \subseteq G$ is a basis if $B \in \mathcal{I}$ and $|I| \leq |B|$ for all $I \in \mathcal{I}$. The following lemma is a simple consequence of the fact that the greedy algorithm finds the maximum weight basis in any matroid. \begin{lemma} \label{lem:matroid-greedy} Let ${\mathcal F}$ be any matroid over ground set $G$, $I$ be any subset of $G$, and $w$ be any vector of weights defined on elements in $G$. If $j\in G$ belongs to a maximum weight basis of ${\mathcal F}$ and $j\in I$, then $j$ also belongs to a maximum weight basis of ${\mathcal F}|_{I}$. \end{lemma} Several classes of matroids are of special interest.
A {\em $k$-uniform} matroid is a matroid in which any $S \subseteq G$ with $|S| \leq k$ is an independent set; the class of uniform matroids generalizes the extensively studied additive ($k=m$) and unit-demand ($k=1$) settings. A {\em partition} matroid is the disjoint union of uniform matroids: $G = G_1\cup\ldots\cup G_N$ for disjoint $G_i$, where $(G_i,\mathcal{I}_i)$ is a $k_i$-uniform matroid, and a set $S\subseteq G$ is independent if $S\cap G_i \in \mathcal{I}_i$ for all $i$. \section{Efficient Approximation for Symmetric Agents} \label{sec:optimizing-beta-dot-q} We will now discuss how to solve the optimization problem \eqref{eq:bq-max} efficiently when ${\mathcal F}$ is a matroid. We first modify the distribution ${\mathbf \dist}$ so that for every item $j$, any value below quantile $1-1/2n$ is mapped to $0$. The problem then simplifies to the following. \begin{equation} \label{eq:bq-max-2} \begin{aligned} \text{maximize }\; & {\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}) & \text{s.t. }\; & {\mathbf \eaprob} \in {\mathcal P}_{\feas} \end{aligned} \end{equation} This problem is related to the {ex~ante}\ relaxation of the single-parameter revenue maximization problem with $m$ buyers, where buyer $j$'s value is distributed independently according to $\distj$ and the seller faces the feasibility constraint ${\mathcal F}$ (i.e., he can sell to any subset of buyers that forms an independent set in ${\mathcal F}$). When the distributions $\distj$ are all regular, the objective ${\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})$ is concave, and the above problem can be solved using standard convex optimization techniques. When the distributions $\distj$ are not all regular, \eqref{eq:bq-max-2} is not necessarily convex. In this case, allowing for a randomized solution convexifies the problem. Consider the following relaxation that maximizes the objective over all distributions over vectors ${\mathbf \eaprob}$: \begin{equation} \label{eq:bq-max-3} \begin{aligned} \text{maximize }\; & \operatorname{E}\expectarg{{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})} & \text{s.t. }\; & \operatorname{E}\expectarg{{\mathbf \eaprob}} \in {\mathcal P}_{\feas} \end{aligned} \end{equation} This problem can in turn be restated as follows: maximize the ironed virtual surplus of a BIC mechanism for the single-parameter revenue maximization problem stated above subject to the feasibility constraint ${\mathcal P}_{\feas}$ imposed {ex~ante}. \citet{Hartline-comm} describes an alternative to standard convex optimization for solving the above problem to within arbitrary accuracy. Pick a sufficiently small $\epsilon>0$. Discretize the problem by creating a new discrete distribution $\distj'$ for every $j\in [m]$ as follows: for every integer $z$ in $[0, 1/\epsilon]$, place a mass of $\epsilon$ on the ironed virtual value for distribution $\distj$ at quantile $z\epsilon$. Let $R_j$ denote the support of distribution $\distj'$. Over these discrete supports, the ironed virtual value maximization problem becomes one of selecting a subset of $\cup_j R_j$ of maximum total (ironed virtual) value subject to the constraint that the subset can be partitioned into at most $1/\epsilon$ parts each of which is independent in ${\mathcal F}$. In other words, this is the problem of finding a maximum weight basis over a matroid formed by the union of $1/\epsilon$ identical copies of ${\mathcal F}$. The standard greedy algorithm for matroids solves this problem efficiently for any constant $\epsilon$.
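To make the greedy step concrete, here is a minimal Python sketch (ours, not from the paper) of the matroid greedy algorithm; the oracle \texttt{is\_independent} and the function name are illustrative, and in the application above the oracle would answer independence queries for the union of $1/\epsilon$ copies of ${\mathcal F}$.
\begin{verbatim}
# Sketch (illustrative, not from the paper): matroid greedy.
# `is_independent` is an assumed oracle: it takes a set of elements
# and returns True iff that set is independent in the matroid.

def greedy_max_weight(elements, weight, is_independent):
    """Greedily build a maximum-weight independent set; when all
    weights are positive, the result is a maximum-weight basis."""
    chosen = set()
    # By the matroid exchange property, scanning elements in order
    # of decreasing weight is optimal.
    for e in sorted(elements, key=weight, reverse=True):
        if weight(e) <= 0:
            break  # non-positive weights cannot increase the total
        if is_independent(chosen | {e}):
            chosen.add(e)
    return chosen

# Example: 2-uniform matroid (any set of at most two elements).
w = {"a": 3.0, "b": 5.0, "c": 1.0}
print(greedy_max_weight(w, w.get, lambda s: len(s) <= 2))
# -> {'b', 'a'} (set iteration order may vary)
\end{verbatim}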
The greedy algorithm approximates \eqref{eq:bq-max-3} to within an additive error of $\epsilon\sum_j\distj^{-1}(1)$, and in the case of non-regular distributions, produces a distribution over two vectors ${\mathbf \eaprob}_1$ and ${\mathbf \eaprob}_2$ that in expectation satisfies the constraint ${\mathcal P}_{\feas}$. \section{Preliminaries} \label{sec:prelim} We consider a setting with a single seller and $n$ buyers. The seller has $m$ heterogeneous items to sell. Each buyer $i\in [n]$ has a type composed of a public downward-closed demand constraint $\feasi\subseteq 2^{[m]}$ and a private value vector $\vali=(\valij[1], \cdots, \valij[m])$ that maps items to non-negative values. Roughly speaking, the demand constraint $\feasi$ describes the maximal sets of items from which the buyer derives value. Formally, the buyer's value for a set of items is described by a {\em constrained additive} function: for $S\subseteq [m]$, \[ \vali(S) = \max_{S'\in\feasi; S'\subseteq S} \sum_{j\in S'} \valij\] It will sometimes be necessary to consider feasibility restricted to subsets of the available items. For $M' \subseteq [m]$, the {\em restriction of $\feasi$ to $M'$}, denoted $\feasi|_{M'}$, is formed by discarding the feasible sets not contained in $M'$. Formally, $\feasi|_{M'} = \feasi \cap 2^{M'}$. We will typically assume that for all $i$, $\feasi$ is a matroid; see Appendix~\ref{sec:matroids} for a review of matroid concepts. We assume that the values $\valij$ are drawn from distribution $\distij$ independently of all other values; we use $\disti=\prod_j \distij$ to denote the joint distribution of buyer $i$'s value vector and ${\mathbf \dist} = \prod_i \disti$ to denote the joint distribution over all value vectors. The demand constraints $\feasi$ may be different for different buyers. Let ${\mathcal F} = \{\feasi\}_{i\in[n]}$ denote the tuple of feasibility constraints, one for each buyer. \subsection{Incentive Compatible Mechanisms and Revenue Maximization} A mechanism ${\mathcal M}$ takes as input the value vectors ${\mathbf \val}=(\vali[1], \cdots, \vali[n])$ and returns an allocation ${\mathbf \alloc}({\mathbf \val})$ and payment vector ${\mathbf \price}({\mathbf \val})$. Here $\alloci({\mathbf \val})$ denotes the (potentially random) set of items that is allocated to buyer $i$. A mechanism ${\mathcal M}$ is {\em supply-feasible} if every item is allocated to at most one buyer; in other words, for all ${\mathbf \val}$ and $i_1\ne i_2$, $\alloci[i_1]({\mathbf \val})\cap\alloci[i_2]({\mathbf \val}) = \emptyset$ with probability $1$. We use $\allocij({\mathbf \val})$ to denote the probability with which buyer $i$ receives item $j$. Without loss of generality, we focus on mechanisms that for every value vector ${\mathbf \val}$ and every buyer $i$ satisfy $\alloci({\mathbf \val})\in\feasi$ with probability $1$; we call such mechanisms {\em demand-feasible}. Consequently, we note that the vector $(\allocij[1]({\mathbf \val}), \cdots, \allocij[m]({\mathbf \val}))$ lies in the feasibility polytope of $\feasi$, which we denote\footnote{Formally, $\ptopei$ is the convex hull of the incidence vectors of all sets in $\feasi$ in $\mathbb{R}^m$.} $\ptopei$. In the rest of the paper we will overload notation and use $\alloci({\mathbf \val})$ to denote the vector $(\allocij[1]({\mathbf \val}), \cdots, \allocij[m]({\mathbf \val}))$. We assume that buyers are risk neutral and have quasi-linear utilities.
In other words, the utility that a buyer derives from allocation $\alloci$ and payment $\pricei$ is given by $\alloci\cdot\vali - \pricei$. We consider mechanisms which are {\em Bayesian incentive compatible (BIC)}. A mechanism is BIC if truthtelling is a Bayes-Nash equilibrium; that is, if a buyer maximizes his own utility---in expectation over other buyers' values, assuming they report truthfully, as well as over the randomness inherent in the mechanism---by reporting truthfully. In contrast, a mechanism is {\em dominant-strategy incentive compatible (DSIC)} if truthtelling is a dominant strategy; that is, if a buyer maximizes his own utility by reporting truthfully, regardless of what other buyers report. We are interested in revenue maximization for the seller. The seller's revenue from a BIC mechanism ${\mathcal M}=({\mathbf \alloc},{\mathbf \price})$ at value vectors ${\mathbf \val}$ is $\sum_i\pricei({\mathbf \val})$, and the expected revenue is $\revm({\mathbf \dist}) = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\sum_i\pricei({\mathbf \val})}$. The revenue maximization problem seeks to maximize $\revm({\mathbf \dist})$ over all BIC mechanisms that are demand- and supply-feasible; we use $\mathsc{Rev}({\mathbf \dist},{\mathcal F})$ to denote this maximum revenue. \subsection{{Ex~Ante}\ Constrained Revenue Maximization} We will reduce the multiple buyer revenue maximization problem described above to single-buyer problems with {ex~ante}\ supply constraints. The following definitions are for a single agent $i$; we omit the subscript $i$ for clarity. Let ${\mathbf \eaprob} = (\eaprobj[1], \cdots, \eaprobj[m])$ be a vector of probabilities with $\eaprobj\in [0,1]$ for all $j\in [m]$. A mechanism ${\mathcal M}=({\mathbf \alloc},{\mathbf \price})$ is {\em ${\mathbf \eaprob}$-constrained under ${\mathbf \dist}$} if for all items $j\in [m]$, its {ex~ante}\ probability for selling item $j$ when values are drawn from ${\mathbf \dist}$, $\operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}$, is at most $\eaprobj$. We will consider both revenue and welfare maximization problems over ${\mathbf \eaprob}$-constrained mechanisms. Formally, we define \begin{align} \label{eq:constrained-revenue} \earev(\mathcal D,{\mathcal F}) & = \max_{{\mathcal M}=({\mathbf \alloc},{\mathbf \price}): \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}\le\eaprobj \,\,\forall j\in [m]} \revm({\mathbf \dist}) \end{align} and \begin{align*} \eaVal(\mathcal D,{\mathcal F}) & = \max_{{\mathcal M}=({\mathbf \alloc},{\mathbf \price}): \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}\le\eaprobj \,\,\forall j\in [m]} \Valm({\mathbf \dist}), \end{align*} where the maximum is taken over all BIC demand-feasible mechanisms\footnote{We need not impose the supply-feasibility constraint explicitly --- it is already implicit in the {ex~ante}\ probability constraint.} and $\Valm({\mathbf \dist}) = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{{\mathbf \alloc}({\mathbf \val})\cdot{\mathbf \val}}$. It will sometimes be convenient to express the {ex~ante}\ constraint in the form of {ex~ante}\ prices defined as: $\eapricej = \distj^{-1}(1-\eaprobj)$. In other words, for every $j\in [m]$, $\eapricej$ is defined such that the probability that $\valj$ exceeds this price is precisely $\eaprobj$. Note that there is a one-to-one correspondence between {ex~ante}\ probabilities and {ex~ante}\ prices.
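To make the correspondence concrete, the following small sketch (ours and purely illustrative; the exponential distribution is an assumption made only for this example and is not required anywhere in the paper) computes the {ex~ante}\ price $\eapricej = \distj^{-1}(1-\eaprobj)$, i.e.\ the price at which item $j$ sells with probability exactly $\eaprobj$.
\begin{verbatim}
import math

# Sketch (illustrative): the ex ante price p = F^{-1}(1 - q) is the
# price at which Pr[v > p] = q. Assume, for this example only,
# v ~ Exponential(rate): F(v) = 1 - exp(-rate * v), hence
# F^{-1}(1 - q) = -ln(q) / rate.

def ex_ante_price(q, rate=1.0):
    return -math.log(q) / rate

for q in (0.5, 0.1, 0.01):
    print(f"q = {q:<5}: ex ante price = {ex_ante_price(q):.3f}")
\end{verbatim}
Lower {ex~ante}\ probabilities translate into higher {ex~ante}\ prices, which is the monotonicity used repeatedly in what follows.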
\subsection{Special Single-Agent Mechanisms} \paragraph{Item Pricing.} An item pricing is defined by a set of prices $\pricej$, one for each item $j$. A buyer is allowed to select as many items as he pleases, up to some downward-closed constraint ${\mathcal F}$, and he pays the sum of the associated prices: if the buyer selects the set $S \subseteq [m]$, he pays $\sum_{j\in S}\pricej$. A utility-maximizing buyer thus selects the set $S \in {\mathcal F}$ which maximizes $\sum_{j\in S}(\valj - \pricej)$. We use $\mathsc{SRev}({\mathbf \dist},{\mathcal F})$ to denote the optimal revenue obtainable by any item pricing from a buyer with value distribution ${\mathbf \dist}$ and demand constraint ${\mathcal F}$. \paragraph{Bundle Pricing.} A bundle pricing is defined by a single price (a.k.a. entry fee) $\pi$. A buyer can buy any subset of items satisfying the demand constraint ${\mathcal F}$ at price $\pi$. A rational buyer chooses to participate (i.e. pay the fee) if $v([m])=\max_{S\in{\mathcal F}}v(S) \geq \pi$ and then selects a corresponding maximal set $S$. We use $\mathsc{BRev}({\mathbf \dist},{\mathcal F})$ to represent the optimal revenue obtainable by any bundle pricing from a buyer with value distribution ${\mathbf \dist}$ and demand constraint ${\mathcal F}$. \paragraph{Two-Part Tariffs.} A two-part tariff is a common generalization of both item pricings and bundle pricings. It is described by an $(m+1)$-dimensional vector of prices: $(\pi, \pricej[1], \cdots, \pricej[m])$. The mechanism offers each set $S\subseteq [m]$ of items to the buyer at a price of $\pi+\sum_{j\in S} \pricej$; the buyer can then choose to buy his favorite set at these offered prices. Informally speaking, the mechanism charges the buyer an {\em entry fee} of $\pi$ for the right to buy any set of items, with item $j$ offered at a fixed price of $\pricej$. Like other pricing-based mechanisms, two-part tariffs are deterministic, dominant strategy incentive compatible mechanisms. A utility-maximizing buyer with values ${\mathbf \val}$ and feasibility constraint ${\mathcal F}$ when offered a two-part tariff $(\pi, {\mathbf \price})$ buys the set $S\in{\mathcal F}$ of items that maximizes ${\mathbf \val}(S)-\pi-\sum_{j\in S} \pricej$, if that quantity is non-negative\footnote{This is essentially an ex-post IR condition.}; in that case, we say that the buyer participates in the mechanism. We denote the revenue of a two-part tariff $(\pi, {\mathbf \price})$ offered to a buyer with feasibility constraint ${\mathcal F}$ and value distribution ${\mathbf \dist}$ by $\revt({\mathbf \dist},{\mathcal F})$. We use $\mathsc{TRev}({\mathbf \dist},{\mathcal F})$ to denote the optimal revenue that a two-part tariff can obtain from a buyer with value distribution ${\mathbf \dist}$ and demand constraint ${\mathcal F}$. Two-part tariffs are known to be approximately optimal in certain single-agent settings. The following results\footnote{Here $\feasi[\mathsc{UnitDemand}] = \{S\subseteq [m] \mid |S|\le 1\}$ represents a unit-demand buyer, and $\feasi[\mathsc{Additive}] = 2^{[m]}$ represents a buyer with fully additive values.} are due to \citet{cms-10} and \citet{bilw-focs14} respectively. \citet{rw-15} proved a similar result for constrained additive values, but with a very large approximation factor (about 340).
\begin{align*} \mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]) & \leq 4\,\mathsc{TRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}])\\ \mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{Additive}]) & \leq 6\,\mathsc{TRev}({\mathbf \dist},\feasi[\mathsc{Additive}]) \end{align*} \paragraph{Pricings with an {Ex~Ante}\ Constraint.} Next we extend the above definitions to respect {ex~ante}\ supply constraints. We say that a two-part tariff $(\pi, {\mathbf \price})$ satisfies {ex~ante}\ constraint ${\mathbf \eaprob}$ if for all $j$, $\pricej\ge \eapricej=\distj^{-1}(1-\eaprobj)$. Note that this is a stronger condition than merely requiring that the mechanism allocates item $j$ with {ex~ante}\ probability at most $\eaprobj$. We use $\eatrev({\mathbf \dist},{\mathcal F})$ to denote the optimal revenue achieved by a demand-feasible two-part tariff that satisfies {ex~ante}\ constraint ${\mathbf \eaprob}$. Likewise, we use $\easrev({\mathbf \dist},{\mathcal F})$ to denote the optimal revenue achievable by an item pricing ${\mathbf \price}$ with $\pricej\ge \eapricej$ for all $j$. \subsection{Multi-Agent (Sequential) Two-Part Tariff Mechanisms} We now extend the definition of two-part tariffs to multi-agent settings. Consider a setting with $n$ agents and demand constraints ${\mathcal F}=\{\feasi\}_{i\in[n]}$. A {\em sequential two-part tariff} for this setting is parameterized by an ordering $\sigma$ over the agents, a set of entry fees ${\mathbf \ef}=(\efi[1], \cdots, \efi[n])$, and a set of prices ${\mathbf \price}=\{\priceij\}$. The mechanism proceeds as follows. \begin{enumerate} \item The ordering $\sigma$ and prices ${\mathbf \ef};{\mathbf \price}$ are announced. \item Each agent $i$ independently decides whether or not to participate in the mechanism. If the agent decides to participate, then he pays his corresponding entry fee~$\efi$. \item The mechanism considers agents in the order given by $\sigma$. When an agent $i$ is considered, if the agent previously declined to participate, no items are allocated and no payment is charged. Otherwise, of the items unallocated so far, the agent is allowed to purchase his favorite feasible set of items at the prices $\priceij$. \end{enumerate} Observe that agents choose whether or not to participate in the mechanism before knowing which items will be available when it is their turn to purchase. Accordingly, a sequential two-part tariff is BIC but not necessarily DSIC. The sequential two-part tariff mechanisms that we develop in this paper are {\em order oblivious} in the sense that their revenue guarantees hold regardless of the ordering $\sigma$ chosen over the agents. Accordingly, in describing these mechanisms, we need only specify the prices ${\mathbf \ef};{\mathbf \price}$. In some cases, our two-part tariff mechanisms disallow agents from buying certain sets of items. Specifically, a {\em demand-limiting sequential two-part tariff} is parameterized by an ordering $\sigma$, prices ${\mathbf \ef}; {\mathbf \price}$, as well as feasibility constraints ${\mathcal F}'=\{\feasi'\}_{i\in[n]}$ where, for every agent $i$, $\feasi'\subseteq \feasi$ is a matroid constraint stronger than the agent's original demand constraint. When it is agent $i$'s turn to buy items, the agent is allowed to buy any set of items in $\feasi'$. In particular, the agent is not allowed to buy sets of items in~$\feasi\setminus\feasi'$.
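To illustrate the mechanics, here is a simplified Python sketch (our own rendering, not the paper's; the brute-force search over feasible sets is for clarity only, and the up-front participation test is a crude stand-in for the interim expected-utility comparison a real buyer would make) of one run of a sequential two-part tariff.
\begin{verbatim}
# Sketch (illustrative): one run of a sequential two-part tariff.
# feasible[i] lists agent i's feasible sets (assumed downward-closed);
# entry_fee[i] and price[i][j] are the tariff; value[i][j] the values.

def best_feasible_set(value_i, price_i, feasible_i, available):
    """Agent i's utility-maximizing feasible set of available items."""
    best, best_u = frozenset(), 0.0
    for S in feasible_i:
        if set(S) <= available:
            u = sum(value_i[j] - price_i[j] for j in S)
            if u > best_u:
                best, best_u = frozenset(S), u
    return best, best_u

def run_sequential_tariff(order, entry_fee, price, feasible, value, items):
    available, revenue = set(items), 0.0
    for i in order:
        # Stand-in for the interim participation decision: the agent
        # compares his utility against the full item set to the fee.
        _, u = best_feasible_set(value[i], price[i], feasible[i], set(items))
        if u < entry_fee[i]:
            continue  # agent i declines to pay the entry fee
        revenue += entry_fee[i]
        S, _ = best_feasible_set(value[i], price[i], feasible[i], available)
        revenue += sum(price[i][j] for j in S)
        available -= S  # items bought by i are gone for later agents
    return revenue

# Two agents, two items; agent 0 is unit-demand, agent 1 is additive.
items = [0, 1]
feasible = {0: [(), (0,), (1,)], 1: [(), (0,), (1,), (0, 1)]}
value = {0: {0: 4.0, 1: 1.0}, 1: {0: 3.0, 1: 3.0}}
price = {0: {0: 2.0, 1: 2.0}, 1: {0: 2.0, 1: 2.0}}
entry_fee = {0: 1.0, 1: 1.5}
print(run_sequential_tariff([0, 1], entry_fee, price, feasible, value, items))
# -> 6.5
\end{verbatim}
Note that agent 1 commits to the entry fee even though item 0 may already be gone by his turn; this availability risk is what Lemma~\ref{lem:stitching-trevs} later controls by halving the entry fees.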
\section{Single-Agent Approximation for General Matroids} \label{sec:general-matroid} The proof of Lemma~\ref{lem:approx-partition} relied upon the existence of a threshold-based prophet inequality for partition matroids, which translates directly into a pricing scheme. No such prophet inequality is known for general matroids, but a recent result of \citet{fsz-15} provides a demand-limiting pricing scheme. \begin{theorem} \label{thm:ocrs} (\cite{fsz-15}) For a general matroid ${\mathcal F}$, constant $b \in (0,1)$, {ex~ante}\ constraints ${\mathbf \eaprob} \in b{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices ${\mathbf \eaprice}$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \frac{1}{1-b}\easrev({\mathbf \dist},{\mathcal F}').\] Furthermore, the prices and constraint ${\mathcal F}'$ which achieve this bound are efficiently approximable. \end{theorem} Combining Theorems~\ref{thm:single-agent} and \ref{thm:ocrs}, we obtain Lemma~\ref{lem:approx-general} for general matroids. \begin{lemma}[\bf Single-agent approximation \# 2] \label{lem:approx-general} Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be an arbitrary matroid with feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in \frac 12{\mathcal P}_{\feas}$, there exists a submatroid ${\mathcal F}'\subseteq{\mathcal F}$ such that \[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}') \] \end{lemma} \section{The {Ex~Ante}\ Relaxation and Stitching} \label{sec:relaxation} In this section we prove Lemmas~\ref{lem:relaxation} and~\ref{lem:stitching-trevs}. \begin{proofof}{Lemma~\ref{lem:relaxation}} Let ${\mathcal M}$ be the optimal mechanism for feasibility constraints ${\mathcal F}$ and value distributions ${\mathbf \dist}$ achieving revenue $\mathsc{Rev}({\mathbf \dist},{\mathcal F})$. We will now consider a buyer $i$ and construct a mechanism ${\mathcal M}_i$ for this buyer as follows. When the buyer $i$ reports a value vector $\vali$, the mechanism ${\mathcal M}_i$ draws value vectors $\tilde{\mathbf \val}_{-i}$ from the joint distribution ${\mathbf \dist}_{-i}$; it then returns the allocation and payment that ${\mathcal M}$ returns at $(\vali, \tilde{\mathbf \val}_{-i})$. It is easy to see that if ${\mathcal M}$ is BIC, then so is ${\mathcal M}_i$. Furthermore, ${\mathcal M}_i$ obtains the same revenue from buyer $i$ as ${\mathcal M}$. Therefore, we have: \[ \mathsc{Rev}({\mathbf \dist},{\mathcal F}) = \sum_i \revm[{\mathcal M}_i](\disti).\] Now let $\alloci$ denote the allocation rule of ${\mathcal M}_i$ and let $\eaprobij = \operatorname{E}\expectarg[\vali\sim\disti]{\allocij(\vali)}$. Then, recalling equation~\eqref{eq:constrained-revenue}, we have $\revm[{\mathcal M}_i](\disti) \le \earev[\eaprobi](\disti,\feasi)$, and so, \[\mathsc{Rev}({\mathbf \dist},{\mathcal F})\le \sum_i \earev[\eaprobi](\disti,\feasi).\] Finally, the demand feasibility of ${\mathcal M}$ implies that the vector $\eaprobi$ lies in the polytope $\ptopei$, while the supply feasibility of ${\mathcal M}$ implies that $\sum_i \eaprobij\le 1$ for all $j$. This completes the proof. \end{proofof} \section{Omitted Proofs} \label{sec:single-agent-proofs} \subsection{Proofs from Section \ref{sec:single-agent}} We make use of the following result of \citet{cms-10}.
\begin{lemma} \label{lem:unit-srev} (\cite{cms-10}) For any product distribution ${\mathbf \dist}$, \[\mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]) \leq 4\mathsc{SRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]).\] \end{lemma} \begin{proofof}{Claim~\ref{lem:RevSRevBound}} Let ${\mathcal M}$ be a BIC mechanism such that $\revm({\mathbf \dist}) = \mathsc{Rev}({\mathbf \dist},{\mathcal F})$. Let $({\mathbf \alloc}({\mathbf \val}), p({\mathbf \val}))$, where $\sum_{j=1}^m\allocj({\mathbf \val}) \leq m$, be the lottery offered by ${\mathcal M}$ to an agent who reports ${\mathbf \val}$. We modify ${\mathcal M}$ to get ${\mathcal M}'$, a BIC mechanism which allocates at most one item and has revenue $\revm[{\mathcal M}']({\mathbf \dist}) = \frac{1}{m}\revm({\mathbf \dist})$. For every type ${\mathbf \val}$, let ${\mathbf \alloc}'({\mathbf \val}) = \frac{1}{m}{\mathbf \alloc}({\mathbf \val})$ and $p'({\mathbf \val}) = \frac{1}{m}p({\mathbf \val})$ be the lottery offered by ${\mathcal M}'$. Since $|{\mathbf \alloc}({\mathbf \val})|_1 \leq m$, we have $|{\mathbf \alloc}'({\mathbf \val})|_1 \leq 1$, and so ${\mathcal M}'$ is feasible for the unit-demand setting. Because the buyer's utility is quasi-linear, scaling the allocation probabilities and payments by $\frac{1}{m}$ simply scales the utility of each outcome by $\frac{1}{m}$. Therefore, the buyer will select corresponding outcomes in ${\mathcal M}$ and ${\mathcal M}'$, and ${\mathcal M}'$ is BIC with revenue $\frac{1}{m}\revm({\mathbf \dist})$. Combined with Lemma~\ref{lem:unit-srev}, this completes the proof. \end{proofof} \subsection{Proofs from Section~\ref{sec:symmetric}} \begin{proofof}{Lemma~\ref{lem:symmetric-reduction}} Let ${\mathcal M}$ be a demand- and supply-feasible BIC mechanism such that $\revm({\mathbf \dist}) = \mathsc{Rev}({\mathbf \dist},{\mathcal F})$. Let $\eaprobij$ be the probability with which ${\mathcal M}$ sells item $j$ to agent $i$. By symmetry, we can permute the identities of the agents uniformly at random before running ${\mathcal M}$ without hurting the expected revenue. Under this permutation, the ex ante probability with which ${\mathcal M}$ sells $j$ to $i$ is at most $1/n$. We can therefore assume without loss of generality that $\eaprobij \leq 1/n$. Now, consider a single agent $i$; with probability $1/2$, allocate the empty set to this agent at price $0$, and with probability $1/2$, draw values for all other agents from ${\mathbf \dist}_{-i}$ and simulate mechanism ${\mathcal M}$. The resulting mechanism is a single-agent mechanism that obtains a $\tfrac{1}{2n}$ fraction of the revenue of ${\mathcal M}$ and satisfies an ex ante constraint ${\mathbf \eaprob}\in {\mathcal P}_{\feas}\cap\left[0,\tfrac{1}{2n}\right]^m$. The lemma follows. \end{proofof} \section{Two-Part Tariffs for a Single Agent} \label{sec:single-agent} We now turn to bounding the revenue from a single agent subject to an {ex~ante}\ constraint. In this section we will prove Lemma~\ref{lem:approx-partition}. In the following discussion, we assume that the buyer has a product value distribution $\mathcal D=\prod_j\distj$, and faces a demand feasibility constraint ${\mathcal F}$, while the mechanism is subject to an {ex~ante}\ supply constraint ${\mathbf \eaprob}$. Recall that we define the {ex~ante}\ prices ${\mathbf \eaprice}$ as $\eapricej = \distj^{-1}(1-\eaprobj)$ for all items $j$. \subsubsection*{Core-Tail Decomposition with {Ex~Ante}\ Constraints} We begin by defining the notation for the core-tail decomposition (see Table~\ref{tab:notation}).
Let $\tau \geq 0$ be a constant to be fixed shortly. We use $t_j = \eapricej+\tau$ to denote the threshold for classifying values into the core or the tail. Specifically, for any item $j$, if $\valj > t_j$, we say item $j$ is in the tail; otherwise it is in the core. Let $\dist^C_j$ (resp., $\dist^T_j$) denote the distribution for item $j$'s value conditioned on the item being in the core (resp., tail). For a set $A\subseteq [m]$ of items, let $\tprobA$ denote the probability that the items in $A$ are in the tail and the remaining items are in the core; that is, $\tprobA=\left(\prod_{j\in A}\prob[\valj\sim\distj]{\valj > t_j}\right) \left(\prod_{j\not\in A}\prob[\valj\sim\distj]{\valj \le t_j}\right)$. Then $\tprobA[\emptyset]$ denotes the probability that all items are in the core. Observe that as we increase the constant $\tau$ (thereby increasing the core-tail thresholds uniformly), the probability $\tprobA[\emptyset]$ increases. We pick $\tau$ to be the smallest non-negative number such that $\tprobA[\emptyset]\ge 1/2$. Observe that $\tau>0$ implies\footnote{For simplicity, we are assuming that the value distribution does not contain any point masses; it is easy to modify our argument to work in the absence of this assumption, but we omit the details.} $\tprobA[\emptyset]=1/2$. We now state our version of the core-tail decomposition, extended to respect {ex~ante}\ constraints. We defer the proof to Section~\ref{sec:core-decomp}. Note that although the sum over tail revenues does not explicitly enforce the {ex~ante}\ constraints, the tail distributions are supported only on values above the {ex~ante}\ prices ${\mathbf \eaprice}$. \begin{lemma}[\bf Core Decomposition with {Ex~Ante}\ Constraints] \label{lem:core-decomposition} For any product distribution ${\mathbf \dist}$, feasibility constraint ${\mathcal F}$, and {ex~ante}\ constraint ${\mathbf \eaprob}$, \[\earev({\mathbf \dist},{\mathcal F}) \leq \eaVal(\core,{\mathcal F}) + \sum_{A\subseteq [m]}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A)\] \end{lemma} \begin{table}[t] \renewcommand{\arraystretch}{1.5} \caption{Notation for Section~\ref{sec:single-agent}.} \begin{tabular}{r l l} \hline Notation & Definition & Formula \\ \hline ${\mathbf \eaprob}$ & {Ex~ante}\ probabilities & \\ ${\mathbf \eaprice}$ & {Ex~ante}\ prices & $\eapricej = \distj^{-1}(1-\eaprobj)\,\,\forall j\in [m]$ \\ $t_j$ & Core-tail threshold for item $j$ & $\eapricej+\tau$ \\ $\tau$ & Difference between $t_j$ and $\eapricej$; same for all items & $\min\{t \mid \prob[{\mathbf \val}\sim{\mathbf \dist}]{\valj \le t+\eapricej \,\,\forall j} \ge 1/2 \} $ \\ $\dist^C_j$ & Core distribution for item $j$ & $\distj|_{{\valj \leq t_j}}$ \\ $\dist^T_j$ & Tail distribution for item $j$ & $\distj|_{{\valj > t_j}}$ \\ $\dists^C_A$ & Core distribution for items not in $A$ & $\prod_{j\not\in A}\dist^C_j$ \\ $\dists^T_A$ & Tail distribution for items in $A$ & $\prod_{j\in A}\dist^T_j$ \\ $\tprobj$ & Probability item $j$ is in the tail & $\prob[\valj\sim\distj]{\valj > t_j}$ \\ $\tprobA$ & Probability exactly items in $A$ are in the tail & $\left(\prod_{j\in A}\tprobj\right) \left(\prod_{j\not\in A}(1-\tprobj)\right)$ \\ ${\mathbf \dist}-{\mathbf \price}$ & Distribution ${\mathbf \dist}$ shifted to the left by ${\mathbf \price}$ & \\ \hline \end{tabular} \label{tab:notation} \end{table} \subsection{Stitching Together the Single-Agent Mechanisms} \begin{proofof}{Lemma~\ref{lem:stitching-trevs}} For every buyer $i$, let $\efi$ and $(\priceij[1], \cdots, \priceij[m])$ denote the entry fee and item prices, respectively, in the mechanism ${\mathcal M}_i$. We will compose the mechanisms ${\mathcal M}_i$ to obtain a single mechanism ${\mathcal M}$ as follows. The mechanism ${\mathcal M}$ considers buyers in an arbitrary order and offers items for sale sequentially to the buyers in that order. When it is buyer $i$'s turn, some (random set of) items have already been sold to other buyers. The mechanism offers the remaining items to buyer $i$ via a two-part tariff: it charges the buyer an entry fee of $\frac 12\efi$ for the right to buy any subset of the remaining items, with item $j$ priced at $\priceij$. Importantly, buyers must make the decision of whether or not to participate (that is, whether or not to pay the entry fee) before knowing which items are left unsold. The mechanism is BIC by construction: buyers simply choose whether or not to participate and which subset of items to purchase. Let us now consider a single buyer $i$. We first claim that when the mechanism ${\mathcal M}$ considers buyer $i$, for every item $j$, the probability (taken over value vectors of other agents) that item $j$ is available to be bought by $i$ is at least $1/2$. Recall that for every pair $i,j$, $\prob[\valij\sim\distij]{\valij>\priceij}=1-\distij(\priceij)\le 1-\distij(\eapriceij)=\eaprobij$. So the probability that any given agent $i'$ buys item $j$ is at most $q_{i'j}$. Therefore, the probability (over values of agents other than $i$) that item $j$ is allocated to an agent other than $i$ is at most $\sum_{i'\ne i} q_{i'j}\le 1/2$, and this proves the claim. We will now use the above claim to argue that if, after drawing his value vector, the buyer chooses to participate (i.e. pay the entry fee) in mechanism ${\mathcal M}_i$, then he also chooses to participate in ${\mathcal M}$. If agent $i$ participates in mechanism ${\mathcal M}_i$, then for some set $S\in \feasi$ his value vector satisfies $\sum_{j\in S} (\valij-\priceij)-\efi>0$; since $\feasi$ is downward-closed, we may assume that each term $\valij-\priceij$ is non-negative. In the mechanism ${\mathcal M}$, the agent derives from the same set $S$ an expected utility of \[\left(\sum_{j\in S} \prob{j\text{ is available for } i}(\valij-\priceij) \right) -\frac 12\efi,\] which by the above claim is at least $\frac{1}{2}\left(\sum_{j\in S} (\valij-\priceij)-\efi\right)>0$. Consequently, if ${\mathcal M}_i$ obtains the entry fee $\efi$ from agent $i$, then ${\mathcal M}$ obtains the entry fee $\efi/2$. Next we claim that if agent $i$ buys item $j$ in mechanism ${\mathcal M}_i$ and item $j$ is available for him in mechanism ${\mathcal M}$, then the agent buys item $j$ in ${\mathcal M}$. This follows directly from Lemma~\ref{lem:matroid-greedy} (Appendix~\ref{sec:matroids}) by noting that ${\mathcal M}_i$ and ${\mathcal M}$ offer the same item prices to the agent and that the agent is a utility maximizer. As argued previously, item $j$ is available with probability at least $1/2$; therefore, this claim implies that if ${\mathcal M}_i$ obtains the price $\priceij$ from agent $i$, then ${\mathcal M}$ obtains the same price $\priceij$ with probability at least $1/2$. Putting this together with the above observation about the entry fee, we get that ${\mathcal M}$ obtains in expectation at least half of the total revenue obtained by the mechanisms ${\mathcal M}_i$.
\end{proofof} The proof of the lemma relies upon three facts: (1) mechanism ${\mathcal M}$ offers each item with probability at least half to each buyer, (2) under these probabilities, the buyer's expected utility from a set $S$ is at least half his utility from obtaining $S$ with certainty, and, (3) in the composed mechanism, the buyer selects those items in $S$ that are still available. Fact (2) holds more generally for a buyer with any monotone submodular value function~\citep{FMV-11}. Fact (3) follows directly from the definition of gross substitutes valuations,\footnote{A valuation $v$ satisfies the gross substitutes condition if for all price vectors ${\mathbf \price} \leq {\mathbf \price}'$ and every set $S$ maximizing $v(S) - \sum_{j\in S}\pricej$, there exists a set $T$ maximizing $v(T) - \sum_{j\in T}\pricej'$ such that $\{j \in S : \pricej = \pricej'\} \subseteq T$.} a special case of submodular value functions. So Lemma~\ref{lem:stitching-trevs} holds more generally for buyers with gross substitutes valuations. \section{Approximation for Symmetric Agents} \label{sec:symmetric} Computing the approximate mechanisms of Theorem~\ref{thm:main-partition} requires being able to efficiently solve the {ex~ante}\ optimization, $\max_{{\mathbf \eaprob}} \sum_i \earev[\eaprobi](\disti,\feasi) \; \text{s.t. } \sum_i \eaprobi\le\vec{\mathbf 1}$. This is not necessarily a convex optimization problem, and it is not clear whether it can be solved or approximated efficiently in general. In this section we show how to solve this problem in the special case where agents are a priori identical. In a {\em symmetric agents} setting, agents share a common feasibility constraint and value distribution. In particular, $\feasi = \feasi[i']={\mathcal F}$ and $\disti = \disti[i']=\mathcal D$ for all $i, i'\in [n]$. Note that the values of different items are not necessarily distributed identically, nor is ${\mathcal F}$ necessarily symmetric across items. Since the agents are identical, we can focus on maximizing the revenue obtained from a single agent, while ensuring that the {ex~ante}\ probability of selling each item is small enough that we may apply Lemma~\ref{lem:stitching-trevs}. We formalize this in the following lemma. See Appendix~\ref{sec:single-agent-proofs} for a proof. \begin{lemma} \label{lem:symmetric-reduction} In a symmetric agents setting with $n$ agents, a matroid feasibility constraint ${\mathcal F}$ and product distribution ${\mathbf \dist}$, \[\mathsc{Rev}(\times_n {\mathbf \dist}, \times_n {\mathcal F}) \leq 2n \max_{{\mathbf \eaprob}\in {\mathcal P}_{\feas} \cap\left[0,\tfrac{1}{2n}\right]^m} \earev[{\mathbf \eaprob}](\mathcal D,{\mathcal F}).\] \end{lemma} For the remainder of this section, we focus on efficiently approximately maximizing the single-agent objective $\earev[{\mathbf \eaprob}](\mathcal D,{\mathcal F})$ subject to ${\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}$, where we use $\widehat{{\mathcal P}_{\feas}}$ as shorthand for ${\mathcal P}_{\feas} \cap\left[0,\tfrac{1}{2n}\right]^m$. Lemma~\ref{lem:single-agent} bounds the revenue by three terms; we observe that $\easrev({\mathbf \dist}, {\mathcal F})$ is at most $\max_{{\mathbf \eaprob}'\le{\mathbf \eaprob}} {\mathbf \eaprob}'\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}')$.
Therefore, \begin{align} \notag \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}\earev({\mathbf \dist},{\mathcal F}) & \leq 6\,\max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}\mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}),{\mathcal F}) + 26.1\, \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})\\ &\label{eq:symmetric} \le 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n),{\mathcal F}) + 26.1\, \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}) \end{align} \noindent The first term on the RHS of Equation~\eqref{eq:symmetric} is easy to capture. We can use sampling to efficiently compute the optimal bundle price for the value distribution ${\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n)$. Call this price $a$, and let $\pricej=\distj^{-1}(1-1/2n)$ for all items $j\in [m]$. Then, by Lemma~\ref{lem:stitching-trevs}, the multi-agent sequential two-part tariff mechanism that offers each agent an entry fee of $a$ and per-item prices ${\mathbf \price}$ obtains revenue at least $\frac n2 \mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n),{\mathcal F})$. This leaves us with the following maximization problem: \begin{equation} \label{eq:bq-max} \begin{aligned} \text{maximize }\; & {\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}) & \text{s.t. }\; & {\mathbf \eaprob} \in {\mathcal P}_{\feas} \cap \left[0,\tfrac{1}{2n}\right]^m \end{aligned} \end{equation} In Appendix~\ref{sec:optimizing-beta-dot-q} we discuss how to solve (a relaxation of) this problem efficiently when ${\mathcal F}$ is a matroid. We obtain a (potentially random) vector ${\mathbf \eaprob}$ that in expectation satisfies the feasibility constraint $\widehat{{\mathcal P}_{\feas}}$ and obtains an expected objective function value no smaller than the optimum of \eqref{eq:bq-max}. Then, for partition matroids, we can employ a constructive version of Theorem~\ref{thm:partition-matroid} due to \citet{CHMS-STOC10} to obtain a (potentially random) sequential two-part tariff mechanism that obtains revenue at least $\frac n4$ times the optimum of \eqref{eq:bq-max}. For general matroids, we can likewise employ a constructive version of Theorem~\ref{thm:ocrs} due to \citet{fsz-15} to obtain a (potentially random) demand-limiting sequential two-part tariff mechanism that obtains revenue at least $\frac n4$ times the optimum of \eqref{eq:bq-max}. We obtain the following theorem. \begin{theorem} For any symmetric, matroid feasibility constraint ${\mathcal F}$ and symmetric, product distribution ${\mathbf \dist}$, there is an efficiently computable randomized demand-limiting sequential two-part tariff mechanism ${\mathcal M}$ and a constant $c$ such that \[\mathsc{Rev}({\mathbf \dist},{\mathcal F}) \leq c\, \revm({\mathbf \dist}).\] When ${\mathcal F}$ is a partition matroid, we obtain a sequential two-part tariff mechanism, and when $\distj$ is regular for all $j$, our mechanism is deterministic. \end{theorem} \subsection{Bounding the Tail} We first show that the tail revenue can be bounded by selling items separately under the given {ex~ante}\ supply constraint ${\mathbf \eaprob}$. The main result of this section is as~follows.
\begin{lemma} \label{lem:tail-bound} For any product distribution ${\mathbf \dist}$ over $m$ independent items and any ${\mathcal F}$, \[\sum_{A\subseteq [m]}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A) \leq 8(1+\ln 2) \easrev({\mathbf \dist},{\mathcal F})\] \end{lemma} \begin{proof} We make use of the following weak but general relationship between the optimal revenue and the revenue generated by selling separately for a single-agent constrained additive value setting; this follows by noting that $\mathsc{Rev}$ and $\mathsc{SRev}$ are within a factor of $4$ of each other for unit-demand agents (see Appendix~\ref{sec:single-agent-proofs} for a proof). \begin{claim} \label{lem:RevSRevBound} For any product distribution ${\mathbf \dist}$ over $m$ items and any ${\mathcal F}$, \[\mathsc{Rev}({\mathbf \dist}, {\mathcal F}) \leq 4m\mathsc{SRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]).\] \end{claim} Applying this claim to the revenues $\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A)$, we get that \[ \sum_{A}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A) \leq 4 \sum_A\tprobA|A|\mathsc{SRev}(\dists^T_A,\feasi[\mathsc{UnitDemand}]). \] We will now use the fact that the tail contains few items in expectation. Let $\tprobj$ denote the probability that item $j$ is in the tail: $\tprobj = \prob[\valj\sim\distj]{\valj > t_j}$. We can write the following series of inequalities. \begin{align} \label{eq:1} \sum_{A}\tprobA|A|\mathsc{SRev}(\dists^T_A,\feasi[\mathsc{UnitDemand}]) &\leq \sum_{A}\tprobA|A|\sum_{j\in A}\mathsc{Rev}(\dist^T_j) \\ &\notag = \sum_{j\in[m]}\mathsc{Rev}(\dist^T_j)\sum_{A\ni j}\tprobA|A| \\ &\notag = \sum_{j\in[m]}\tprobj\mathsc{Rev}(\dist^T_j)\operatorname{E}\expectarg{\lvert A\rvert \, | j \in A} \\ & \label{eq:4} \leq (1+\ln 2)\sum_{j\in[m]}\earev[\tprobj](\distj) \\ & \label{eq:5} \leq \frac{1}{\tprobA[\emptyset]}(1+\ln 2)\easrev[{\mathbf \xi}]({\mathbf \dist},{\mathcal F}) \end{align} Here inequality~\eqref{eq:1} follows by removing the demand constraint $\feasi[\mathsc{UnitDemand}]$. Inequality~\eqref{eq:4} follows from three observations: (1) the tail is non-empty with probability at most $1/2$; (2) if $\{z_i\}_{i\in[n]}$ are probabilities satisfying $\prod_i(1-z_i)\ge 1/2$, then $\sum_iz_i \leq \ln 2$; (3) a single-agent single-item mechanism for value distribution $\dist^T_j$ that achieves revenue $\mathsc{Rev}(\dist^T_j)$ would achieve $\tprobj$ times that revenue on the value distribution $\distj$ while satisfying an {ex~ante}\ supply constraint of $\tprobj$. Inequality \eqref{eq:5} follows from the standard argument that the revenue obtained by selling each item individually at prices $t_j$ (or higher) is at least $\tprobA[\emptyset]$ times the sum of the corresponding per-item revenues. Finally, the result follows by recalling that $\tprobA[\emptyset]\ge 1/2$ and relaxing the {ex~ante}\ constraint. \end{proof} \section{Main Results} \label{sec:theorems} We now state our three main results corresponding to the three parts of the {ex~ante}\ approach for approximating $\mathsc{Rev}({\mathbf \dist}, {\mathcal F})$. Lemma~\ref{lem:relaxation} corresponds to the first {\bf relaxation} step, and states that the revenue $\mathsc{Rev}({\mathbf \dist}, {\mathcal F})$ can be bounded by the sum of single-agent revenues with appropriate {ex~ante}\ constraints. While the lemma is stated here for buyers with constrained additive values, it holds for arbitrary value functions as long as values are independent across buyers.
\begin{lemma}[\bf Relaxation] \label{lem:relaxation} For any feasibility constraints ${\mathcal F}=\{\feasi\}_{i\in[n]}$ and value distributions ${\mathbf \dist}=\prod_i\disti$, there exist {ex~ante}\ probability vectors $\eaprobi[1], \cdots, \eaprobi[n]$, satisfying: (1) $\eaprobi\in\ptopei$ for all $i$, and, (2) $\sum_i \eaprobij\le 1$ for all $j$, such that \[\mathsc{Rev}({\mathbf \dist},{\mathcal F})\le \sum_i \earev[\eaprobi](\disti,\feasi).\] \end{lemma} Lemma~\ref{lem:stitching-trevs} corresponds to the last {\bf stitching} step, and shows that single-agent two-part tariff mechanisms that collectively satisfy an {ex~ante}\ constraint on every item can be stitched together into a multi-agent sequential two-part tariff mechanism without losing much revenue. \begin{lemma} \label{lem:stitching-trevs} For every agent $i$, let ${\mathcal M}_i = (\efi,\pricei)$ be any two-part tariff that is demand-feasible with respect to a matroid feasibility constraint $\feasi$ and that satisfies {ex~ante}\ supply constraints $\eaprobi$ under value distribution $\disti$. Let ${\mathcal F}=\{\feasi\}_{i\in[n]}$ and ${\mathbf \dist}=\prod_i\disti$. Then, if $\sum_i \eaprobij\le 1/2$ for all $j$, there exists a sequential two-part tariff mechanism ${\mathcal M}$ that is supply-feasible and demand-feasible with respect to ${\mathcal F}$ such that \[\revm({\mathbf \dist})\ge \frac 12\sum_i \revm[{\mathcal M}_i](\disti).\] \end{lemma} We therefore obtain the following corollary. \begin{corollary}[\bf Stitching] \label{cor:stitching} For any value distributions ${\mathbf \dist}=\prod_i\disti$ and feasibility constraints ${\mathcal F}=\{\feasi\}_{i\in[n]}$, where each $\feasi$ is a matroid, let $\eaprobi[1], \cdots, \eaprobi[n]$ be any {ex~ante}\ probability vectors satisfying $\sum_i \eaprobij\le 1/2$ for all $j$. Then, there exists a demand- and supply-feasible sequential two-part tariff mechanism ${\mathcal M}$ such that \[\revm({\mathbf \dist})\ge \frac 12\sum_i \eatrev[\eaprobi](\disti,\feasi).\] \end{corollary} In order to put together the Relaxation Lemma and the Stitching Corollary, it remains to relate $\earev$ for a single agent to $\eatrev$ for the same agent. The following lemma presents such a relationship when the buyer's demand constraint is a matroid. \begin{lemma}[\bf Single-agent approximation] \label{lem:approx-partition} Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid with feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in \frac 12{\mathcal P}_{\feas}$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}').\] If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$. \end{lemma} Putting Lemmas~\ref{lem:relaxation} and \ref{lem:approx-partition}, and Corollary~\ref{cor:stitching} together, and observing that by the concavity of the revenue objective, $\earev[\frac 12\eaprobi](\disti,\feasi)\ge \frac 12\earev[\eaprobi](\disti,\feasi)$ for all $i$, we get our main result.
\begin{theorem} \label{thm:main-partition} For any product value distribution ${\mathbf \dist}$ and feasibility constraints ${\mathcal F}=\{\feasi\}_{i\in[n]}$, where each $\feasi$ is a matroid, there exist submatroids $\feasi' \subseteq \feasi$ and a supply-feasible $\{\feasi'\}$-limited sequential two-part tariff mechanism ${\mathcal M}$ such that \[ \mathsc{Rev}({\mathbf \dist}, {\mathcal F})\le 133\,\revm({\mathbf \dist}).\] If $\feasi$ is a partition matroid, then $\feasi' = \feasi$. \end{theorem} \subsection*{Further Results} As a consequence of our single-agent approximation (Lemma~\ref{lem:single-agent} in Section~\ref{sec:single-agent}), we also obtain an improved approximation for the single-agent revenue maximization problem with constrained additive values. Specifically, taking ${\mathbf \eaprob} = \vec{\mathbf{1}}$ and noting ${\mathbf \eaprice} = \vec{\mathbf{0}}$, Lemma~\ref{lem:single-agent} gives the following bound on the optimal revenue for the single-agent setting. \begin{corollary} \label{cor:true-single-agent} For any downward closed feasibility constraint ${\mathcal F}$ and any value distribution ${\mathbf \dist}$, \[\mathsc{Rev}({\mathbf \dist},{\mathcal F}) \leq 31.1\,\max\left\{\mathsc{SRev}({\mathbf \dist},{\mathcal F}), \mathsc{BRev}({\mathbf \dist},{\mathcal F})\right\}.\] \end{corollary} Also as a consequence of Lemma~\ref{lem:single-agent}, we show the following bound for revenue maximization under an arbitrary {ex~ante}\ constraint. \begin{corollary} \label{cor:general-ex-ante} Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid. Then for any $q\in[0,1]^m$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that \[\earev(\mathcal D, {\mathcal F}) \le 35.1\,\eatrev(\mathcal D,{\mathcal F}').\] If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$. \end{corollary}
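To make the mechanism class concrete, the following is a minimal sketch of how a sequential two-part tariff mechanism operates, assuming additive values and unit supply of each item; the entry fee, the item prices, and the greedy purchase rule below are illustrative assumptions, and the mechanisms above additionally enforce demand constraints such as matroid feasibility.
\begin{verbatim}
import numpy as np

def sequential_two_part_tariff(values, entry_fee, prices):
    """Simulate a sequential two-part tariff mechanism for additive values.

    values[i, j]: agent i's value for item j.  Agents arrive in order; an
    agent who pays the entry fee may then buy any still-available items at
    the posted per-item prices.  Here each agent greedily takes every
    available item priced below her value (no demand constraint enforced).
    Returns the total revenue collected.
    """
    n_agents, n_items = values.shape
    available = np.ones(n_items, dtype=bool)
    revenue = 0.0
    for i in range(n_agents):
        want = available & (values[i] > prices)
        surplus = np.sum(values[i][want] - prices[want])
        if surplus >= entry_fee:          # agent pays the fee and buys
            revenue += entry_fee + prices[want].sum()
            available &= ~want
    return revenue

# Example: 3 agents, 4 items, uniform entry fee and item prices.
rng = np.random.default_rng(0)
vals = rng.uniform(size=(3, 4))
print(sequential_two_part_tariff(vals, entry_fee=0.2, prices=np.full(4, 0.5)))
\end{verbatim}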
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} We~\cite{Cundy:2012ee,*Cundy:2013xsa} seek to explain the emergence of the linear string tension in QCD by studying the Wilson Loop. A common approach, often achieved by fixing to a particular gauge~\cite{Kronfeld1987516,*Suzuki:1989gp,*deForcrand:2000pg,*Stack:1994wm,*Shiba:1994db,*Greensite:2003bk}, is to extract the Abelian part of the gauge link (Abelian decomposition), projecting out the coloured, off-diagonal, elements of the gauge link leaving just an Abelian, colour neutral field which is expected to dominate confinement. It is best to respect gauge invariance by using the Cho-Duan-Ge (CDG) Abelian decomposition~\cite{Cho:1980,*Cho:1981,*Duan:1979,*F-N:98,*Shabanov:1999}. The CDG decomposition extracts the components of the gauge field aligned with $N_C-1$ commuting traceless colour vectors $n_j$; our choice of $n_j$ is the main novelty of this study. Other recent studies~\cite{Kondo:2005eq,*Kondo:2010pt,*Shibata:2007pi} select $n$ (the Abelian theory must be U(1) in those works) through additional dynamical fields which allow the authors to relate the string tensions of the Abelian theory and full QCD. Instead, we maximise the Abelian symmetry so that all the possible degrees of freedom are included. It is possible to choose a specific $n_j$ which diagonalises the Wilson Loop and leaves the maximal $U(1)^{N_C-1}$ Abelian symmetry for an original SU($N_C$) gauge theory. This $U(1)^{N_C-1}$ theory can be studied numerically and modelled theoretically. We believe that certain topological objects contained within the colour fields can provide an explanation of confinement. In section \ref{sec:2}, we use a particular choice of the CDG decomposition to diagonalise the Wilson Loop, and outline how this might be used to demonstrate a linear static potential. In section \ref{sec:4}, we provide numerical results supporting our analysis, and we conclude in section \ref{sec:5}. \section{Diagonalisation of Wilson Loops}\label{sec:2} A linear static potential, $V(R)$, is a signal for confinement. For a gauge field $A_\mu$, $V(R)$ may be constructed using $V(R) = -\lim_{T\rightarrow \infty} \frac{1}{T}\log \langle \tr\; W[\{R,T\},U] /N_C\rangle$, where $W$ is the $R\times T$ Wilson Loop~\cite{wilson:1977}. Consider a Wilson loop of length $L=N\delta \sigma$ parametrised by $\sigma$ around a curve $C_s$, an $R\times T$ rectangle in the $x$-$t$ plane, with $x_\mu(\sigma = 0)= x_\mu(\sigma = L)=s_\mu$, where $P$ represents path ordering, \begin{align} W[C_s,U] = &\lim_{\delta\sigma \rightarrow 0}\prod_{\sigma = 0,\delta\sigma,2\delta\sigma,\ldots}^{(N-1)\delta\sigma} U_{\mu(\sigma)}(x(\sigma))& U_{\mu}(x) =& P[e^{-ig \int_x^{x + \delta\sigma \hat{\mu}} dx'_\mu A_\mu}]. \end{align} We now insert an identity operator $I_\sigma^r = \theta^r_\sigma (\theta^r_\sigma)^{-1}$ between each pair of gauge links, with $\theta \in U(N_C)$ and $r$ an index identifying the Wilson Loop. {We choose $\theta$ so that it diagonalises the gauge links along the Wilson Loop.} The index $j$ runs over only the diagonal Gell-Mann matrices. \begin{align} \theta_s^\dagger W[C_s]\theta_s = &\lim_{\delta\sigma \rightarrow 0}\prod_{\sigma = 0,\delta\sigma,2\delta\sigma,\ldots}^{(N-1)\delta\sigma} \theta_\sigma^\dagger U_{\mu(\sigma)}(x(\sigma))\theta_{\sigma+\delta\sigma}\nonumber\\ [\theta_\sigma^\dagger U_{\mu(\sigma)}(x(\sigma)) \theta_{\sigma+\delta\sigma},\lambda^j] = & 0,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \lambda^j \text{ diagonal Gell-Mann matrix}, \forall j,U_\mu \in C_s.
\end{align} $\theta$ is uniquely defined up to a $U(1)^{N_C}$ transformation $\chi$ (i.e. $\theta \rightarrow \theta\chi$) and the ordering of the eigenvectors. By diagonalising the gauge links on all nested Wilson Loops, we can extend this definition of $\theta$ across the entire lattice. We now introduce new $SU(N_C)$ fields, $\hat{U}$ and $\hat{X}$, so that \begin{align} {[\theta_\sigma^\dagger \hat{U}_{\mu(\sigma)}(x(\sigma)) \theta_{\sigma+\delta\sigma},\lambda^j] =} &{ 0\;\; \forall x,\mu,j}; &U_\mu(x) =& \hat{X}_\mu \hat{U}_\mu\;\; \forall x,\mu. \end{align} This allows us to express the Wilson Loop without any path ordering, \begin{align} \hat{U}_\mu(x) = &\theta_x e^{-i \int_x^{x+\delta\sigma \hat{\mu}} dx'_\mu\hat{u}^j_\mu(x') \lambda^j}\theta^\dagger_{x+\delta\sigma \hat{\mu}}; & \tr W[C_s,U] = & \tr W[C_s,\hat{U}] = \tr e^{-ig\oint_{C_s} dx^\mu \lambda^j \hat{u}^j_\mu(x)}. \end{align} We can extract the string tension from the Abelian field $\hat{u}$ (a function of $\theta$ and $A_\mu$). We use Stokes' theorem to express the line integral over the Abelian field as a surface integral. \begin{align} \oint_{C_s} dx^\mu \hat{u}_{\mu,x}^j =& \int_{x \in \Sigma,x \not{\in} \tilde{\Sigma}} d\Sigma^{\mu\nu} \hat{F}_{\mu\nu}^j {+ \sum_{n=1}^{\tilde{N}} \oint_{\tilde{C}_n} dx^\mu \hat{u}^j_{\mu,x}};& \hat{F}_{\mu\nu}^j =& \partial_\mu \hat{u}^j_\nu - \partial_\nu \hat{u}^j_\mu. \end{align} $\Sigma$ represents the planar surface bound by $C_s$. $\tilde{\Sigma}_n$ represents the $\tilde{N}$ regions within $\Sigma$ (bound by the curves $\tilde{C}_n$ $\in\Sigma$) where $\hat{u}$ is not analytic. $\hat{F}$ and $\hat{u}$ are gauge invariant. Defining $X_0 = \frac{1}{2}(\hat{X} +\hat{X}^\dagger)$, \begin{multline} i\delta\tilde{\sigma} \hat{u}^j_{\mu,x} = \frac{1}{\tr (\lambda^j)^2}\text{Im}\left(\;\tr \left[\lambda^j\theta^\dagger_x \hat{X}^\dagger_{\mu,x} \theta_x \theta_x^\dagger U_{\mu,x}\theta_{x+\delta\tilde{\sigma} \hat{\mu}}\right]\right)\\= \frac{1}{2\tr (\lambda^j)^2}\tr[\lambda^j \theta^\dagger_x (\hat{X}_{\mu,x}^\dagger - \hat{X}_{\mu,x}) \theta_x - 2i \lambda^j \delta \tilde{\sigma}\theta^\dagger_x [X_0]_{\mu,x} gA^a_{\mu,x} \lambda^a\theta_x + \\ { 2\lambda^j \theta^\dagger [X_0]_{\mu,x} \theta_x \delta \tilde{\sigma}\theta_x^\dagger\partial_{\tilde{\sigma}} \theta}]+O(\delta\sigma^2). \end{multline} We choose $\hat{X}$ so that {$\tr(\lambda^j\theta^\dagger_x(\hat{X}_\mu(x) - \hat{X}_\mu(x)^\dagger)\theta_x) = 0$} and $\tr \hat{X}_\mu(x)$ is maximised. If the singularity in $\hat{u}$ occurs over a small region where $A_\mu$ and ${X}_0$ are smooth, the $\theta^\dagger \partial_\mu \theta$ term will dominate, and \begin{align} \oint_{C_s} dx^\mu \hat{u}_\mu(x)^j =& {\sum_n \oint_{\tilde{C}_n} d\tilde{\sigma} \tr [\lambda^j \theta^\dagger [X_0]_{\mu,x} \theta_x \delta \tilde{\sigma}\theta_x^\dagger\partial_{\tilde{\sigma}} \theta]} + \ldots.\nonumber \end{align} From the above analysis, we see that $\hat{U}$ and $\hat{X}$ are uniquely defined by the equations \begin{align} {\hat{U}_{\mu,x} n_{j,x+\delta\sigma\hat{\mu} }\hat{U}^\dagger_{\mu,x} - n_{j,x}=} & {0} & {\tr\; n^j (\hat{X} - \hat{X}^\dagger) =} & {0}& n_{j,x} \equiv &\theta_x \lambda^j \theta^\dagger_x\nonumber. \end{align} This is a lattice representation of the continuum Cho-Duan-Ge gauge-invariant Abelian decomposition~\cite{Cho:1980}, which is known to contain topological singularities within the colour field $n$.
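As a numerical illustration of the diagonalisation underlying this decomposition, the following minimal sketch (Python/NumPy, not our production code; the helper name is hypothetical) builds a small Wilson loop from random SU(2) links and checks that its trace is reproduced by the Abelian phases alone, with no path ordering:
\begin{verbatim}
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix from a normalised quaternion (a, b, c, d)."""
    q = rng.normal(size=4)
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

rng = np.random.default_rng(1)
links = [random_su2(rng) for _ in range(4)]  # links around a 1x1 loop

W = np.linalg.multi_dot(links)               # path-ordered product

# theta diagonalises the loop; for a generic (non-degenerate) loop the
# eigenvector matrix is unitary up to column phases, so that
# theta^dagger W theta = diag(e^{-i g u}, e^{+i g u}).
phases, theta = np.linalg.eig(W)

# The trace is recovered from the Abelian phases alone.
assert np.isclose(np.trace(W), phases.sum())
\end{verbatim}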
Non-analyticities in the $\theta$ field occur when (a) the Wilson Loop has degenerate eigenvalues; (b) $A_\mu$ is discontinuous (in the chosen gauge); or (c) the situation described below. After gauge fixing, for a SU(2) theory, we parametrise $\theta$ using a complex Givens rotation and a U(1) term, \begin{align} \theta =& e^{ia\phi} e^{id_3 \lambda^3};& \phi = &\left(\begin{array}{cc} 0 & e^{ic}\\ e^{-ic} & 0\end{array}\right);&\overline{\phi} = &\left(\begin{array}{cc} 0 & ie^{ic}\\ -i e^{-ic} & 0\end{array}\right). \end{align} $a$, $c$ and $d_3$ are not gauge invariant; $0\le a\le \pi/2$, $c,d_3 \in \mathbb{R}$. In SU(3), we construct $\theta$ from three Givens terms and a $U(1)\times U(1)$ matrix parametrised by $d_3$ and $d_8$. The arbitrary parameters $d_3$ and $d_8$ do not affect the field $n$ (they can be chosen to be zero). {$c$ is ill-defined at $a = 0$ or $a = \pi/2$.} We parametrise space-time around one of these points as \begin{gather} { (t,x,y,z) \equiv r (\cos\psi_3,\sin\psi_3\cos\psi_2,\sin\psi_3\sin\psi_2\cos\psi_1, \sin\psi_3\sin\psi_2\sin\psi_1),}\label{eq:72} \end{gather} with $r=0$ at $a = \pi/2$. In SU(2), by writing $c = \nu_n \psi_3$ for an integer gauge-invariant winding number $\nu_n\neq 0$, and using {$ \theta^\dagger\partial_\sigma \theta = e^{-id_3\lambda^3}\big[ i \partial_\sigma a \phi + i \lambda^3 \partial_\sigma d_3 + i \sin a \cos a \overline{\phi}\partial_\sigma c - i \sin^2 a \partial_\sigma c \lambda^3 \big]e^{id_3 \lambda^3} $}, we may integrate around a curve at fixed $a = a_{0n}$ surrounding the structure in $\hat{F}$ to obtain \begin{gather} \oint_{C_s} dx^\mu \hat{u}_\mu(x)^j = {\sum_{n=1}^{\tilde{N}} 2\pi \nu_n \sin^2 a_{0n}\tr [ [X_0]_{\mu,x} ]} + \ldots. \end{gather} If the number of structures, $\tilde{N}$, within the Wilson loop is proportional to the area enclosed by the loop, as might be expected, then this leads to an area law string tension and confinement. We parametrise $a = \frac{\pi}{2} - G(r,\psi_1,\psi_2,\psi_3)$ and $c = J(\psi_3)$ for unknown gauge-dependent functions $G$ and $J$. Then, we can calculate the topological ($\theta$) contribution $H^3_{\mu\nu}$ to the field strength $\hat{F}^3_{\mu\nu}$. $H_{\mu\nu}^j = \frac{1}{8g}\tr n_j[\partial_\mu n_k,\partial_\nu n_k]$.
In SU(2), with $G\sim \partial_i G \equiv \frac{\partial G}{\partial \psi_i} \sim r^\xi$; $\partial_r G \sim r^{\xi-1}$ and $\xi > 0$, \begin{align} B_y=&\frac{1}{g}\sin 2G \bigg(\underbrace{\partial_1 G \partial_3 J \frac{yxt}{r^2 r_{xyz} r_{yz}^2}}_{\text{t-string}} + \underbrace{\partial_2 G \partial_3 J \frac{zt}{r^2 r_{xyz} r_{yz}}}_{\text{t-string}}\bigg) & B_x =&\frac{1}{g}\sin 2G \bigg( \underbrace{\partial_1G \partial_3 J \frac{t}{r_{xyz} r^2}}_{\text{t-string}} \bigg)\nonumber\\ E_x= &-\frac{1}{g}\sin2G\bigg(\underbrace{\partial_rG \partial_3 J \frac{x}{r r_{xyz}}}_{\text{point}} - \underbrace{\partial_2 G \partial_3 J \frac{r_{yz}}{r^2 r_{xyz}}}_{\text{point}} \bigg) & B_z=&-\frac{1}{g}\sin 2G \bigg( \underbrace{\partial_1 G \partial_3 J \frac{zxt}{r^2 r_{yz}^2 r_{xyz}}}_{\text{t-string}}\bigg) \nonumber \end{align}\vspace{-0.8cm} \begin{align} E_y=&-\frac{1}{g}\sin 2G \bigg(\underbrace{\partial_r G \partial_3 J \frac{y}{r_{xyz} r}}_{\text{point}} - \underbrace{\partial_1 G \partial_3 J \frac{z r_{xyz}}{r^2 r_{yz}^2}}_{\text{x-string}} + \underbrace{\partial_2 G \partial_3 J\frac{xy}{r^2 r_{xyz}r_{yz}}}_{\text{point}} \bigg)\nonumber\\ E_z=&-\frac{1}{g}\sin 2G\bigg(\underbrace{\partial_r G\partial_3 J \frac{z}{r r_{xyz}}}_{\text{point}} - \underbrace{\partial_1 G \partial_3 J \frac{y r_{xyz}}{r^2 r_{yz}^2}}_{\text{x-string}} -\underbrace{\partial_2G\partial_3J\frac{zx}{r_{yz}r_{xyz} r^2}}_{\text{point}} \bigg) . \end{align} A $\mu$-string is a one-dimensional object parallel to the $\mu$-axis; a point is a structure whose maximum falls at least as fast as $1/r$ in all directions (for $\xi = 1$). After rotating the coordinate system consistent with the overall symmetry, we find the following structures in the electromagnetic field strength: \begin{center} \begin{tabular}{l l l l l l l} \hline Parametrisation&$E_x$&$E_y$&$E_z$&$B_x$& $B_y$&$B_z$ \\ \hline Equation (\ref{eq:72})&point&x-string&x-string&t-string&t-string&t-string\\ $t\leftrightarrow x$&point & x-string&x-string&x-string&t-string&t-string\\ $y\leftrightarrow z$&point&x-string&x-string&t-string&t-string&t-string\\ $t\leftrightarrow x$,$y\leftrightarrow z$&point&x-string&x-string&x-string&t-string&t-string\\ \hline\hline \end{tabular} \end{center} If these topological structures exist, they will reveal themselves as points in the $xt$ component of the field strength, and either points, $x$-strings or $t$-strings in the other components of the field strength. \section{Numerical Results} \label{sec:4} We used quenched L\"uscher-Weisz~\cite{TILW,*TILW2,*TILW4} lattice QCD configurations. Our lattice spacings and lattice volumes are shown in table \ref{tab:1}. To preserve gauge invariance, we use a stout smeared gauge field $\tilde{U}_p$ (after a large number, $p$, of smearing steps) during our construction of the topological field, $M$, taken from the Abelian decomposition of $\tilde{M}_p = \theta^\dagger \tilde{U}_p \theta$; $\tilde{U}$ will not contribute to the string tension. In figure \ref{fig:1} and table \ref{tab:2} we show results for the string tension for the gauge fields $U$, $\hat{U}$, and $M$, the topological ($\theta$) part of $\hat{U}$. To save computer time, initial results used a single $\theta$ for each Wilson Loop on a configuration, breaking the identity between $\tr W[C_s,U]$ and $\tr W[C_s,\hat{U}]$. The $\hat{U}$ and $M$ string tensions (seen in the slope of the curves) are in good agreement.
\begin{table} \begin{center} \begin{tabular}{l| l l l l l} \hline Name&Lattice size & L (fm)&$\beta$&$a$ (fm)& $\#$\\ \hline 8.0&$16^3 \times 32$& 2.30& 8.0&0.144(2) &91 \\ 8.3&$16^3 \times 32$&1.84& 8.3 &0.114(1) &91 \\ 8.52&$16^3 \times 32$&1.58 & 8.52 &0.099(1) &82 \\ 8.3L&$20^3 \times 40$&2.30 & 8.3&0.112(5) &54\\ \hline\hline \end{tabular} \end{center}\vspace{-0.5cm} \caption{Parameters for the ensembles. $\#$ is the number of configurations. $L$ is the physical spatial extent.}\label{tab:1} \end{table} \begin{table} \begin{center} \begin{tabular}{l|llllllll} \hline &$U$&$\hat{U}$&$\tilde{{U}}_{600}$&$M_{600}$&$\tilde{{U}}_{800}$&$M_{800}$&$\tilde{{U}}_{1000}$&$M_{1000}$ \\ \hline 8.0&0.094(2)&0.116(4)&0.0273(2)&0.103(10)&0.0213(1)&0.104(8)&0.0174(1)&0.105(9) \\ 8.3&0.0590(8)&0.095(2)&0.0185(1)&0.087(5)&0.0147(1)&0.087(5)&0.0122(1)&0.087(5) \\ 8.52&0.0442(6)&0.077(1)&0.0149(2)&0.076(3)&0.0124(2)&0.076(3)&0.0106(2)&0.077(3) \\ 8.3L&0.057(5)&0.099(1)&0.0179(1)&0.099(2)&0.0144(1)&0.099(2)&0.0121(1)&0.098(2) \\ \hline\hline \end{tabular} \end{center}\vspace{-0.5cm} \caption{The string tension for the ensembles ($\theta$ fixed per configuration)} \label{tab:2} \end{table} \begin{figure} \begin{center} \begin{tabular}{cc} {\small \input{figs/b8.52p.tex} }& {\small \input{figs/b8.52realp.tex} } \end{tabular} \end{center}\vspace{-0.8cm} \caption{The string tension calculated from (R,T) Wilson Loops on the $\beta = 8.52$ ensemble, with $\theta$ fixed for each configuration (left) and early results with $\theta$ recalculated for each Wilson Loop (right). Our data where $\theta$ is recalculated are not yet good enough for a reliable extrapolation to $T=\infty$.}\label{fig:1} \end{figure} \begin{figure} \begin{center} \begin{tabular}{ccc} \input{figs/E_x_b8-52_70_xtp.tex}&\input{figs/E_y_b8-52_70_xtp.tex}&\input{ figs/E_z_b8-52_70_xtp.tex}\\ \input{figs/B_x_b8-52_70_xtp.tex}& \input{figs/B_y_b8-52_70_xtp.tex}& \input{figs/B_z_b8-52_70_xtp.tex} \end{tabular} \end{center} \vspace{-0.8cm} \caption{ Contour plots for the field strength for the $x$ (left), $y$ (middle) and $z$ (right) components of the restricted electric (top) and magnetic (bottom) fields on an X (y-axis)-T (x-axis) planar slice of the lattice. Red indicates positive field strength, green negative field strength.}\label{fig:2} \end{figure} Figure \ref{fig:2} displays contour plots showing the distribution of the various components of the field strength. The dominant structures appear to be points or lines in the expected directions. This is confirmed by a cluster analysis. We identify clusters as sign-coherent regions of field strength with $|\hat{F}_{\mu\nu}| >1$ for each $\mu,\nu$. We then compare the size, shape and orientation of the clusters with the model expectations of the topological objects in the field strength. \begin{figure} \begin{center} \begin{tabular}{cc} \input{figs/Cluster_neighbours_b8_52p.tex}& \input{figs/Cluster_f_neighbours_b8_52p.tex} \end{tabular} \end{center}\vspace{-0.8cm} \caption{ The average number of nearest neighbours within a cluster for each lattice site in the cluster (left) and the same analysis only including points within clusters extended over at least four lattice sites (right).}\label{fig:3} \end{figure} In figures \ref{fig:3} and \ref{fig:4}, we investigate whether the objects within the Abelian field strength have the shapes expected from the theory. Figure \ref{fig:3} investigates the dimensionality of the clusters by counting the number of nearest neighbours of each site in the cluster.
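Concretely, this cluster identification and neighbour count can be sketched as follows (a minimal NumPy illustration with a hypothetical field array \texttt{F}, not our analysis code; periodic lattice boundaries are assumed):
\begin{verbatim}
import numpy as np
from scipy.ndimage import label

def cluster_neighbour_counts(F, threshold=1.0):
    """Group thresholded, sign-coherent sites of one field-strength
    component F (a 4d lattice array) into connected clusters, then count
    each site's nearest neighbours lying in the same cluster."""
    pos, n_pos = label(F > threshold)        # positive sign-coherent regions
    neg, _ = label(F < -threshold)           # negative sign-coherent regions
    clusters = pos + np.where(neg > 0, neg + n_pos, 0)
    counts = np.zeros(F.shape, dtype=int)
    for axis in range(F.ndim):
        for shift in (+1, -1):               # periodic boundaries via roll
            counts += (clusters > 0) & (clusters == np.roll(clusters,
                                                            shift, axis))
    return clusters, counts

# e.g. for one component: clusters, nn = cluster_neighbour_counts(F)
\end{verbatim}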
Excluding the smallest clusters, the majority of lattice sites have two neighbouring sites within the same cluster, suggesting that these objects are one-dimensional. In figure \ref{fig:4} we investigate the orientations of these strings (excluding the smallest point-like structures from the analysis; this excludes all structures in $E_x$). As expected, $E_y$ and $E_z$ are extended along the $x$-axis, $B_y$ and $B_z$ along the $t$-axis, with $B_x$ extended along both the $x$ and $t$ axes. \begin{figure} \begin{center} \begin{tabular}{ccc} & \input{figs/Cluster_extent_E_y_b8_52p.tex}& \input{figs/Cluster_extent_E_z_b8_52p.tex}\\ \input{figs/Cluster_extent_B_x_b8_52p.tex} &\input{figs/Cluster_extent_B_y_b8_52p.tex} &\input{figs/Cluster_extent_B_z_b8_52p.tex} \end{tabular} \end{center}\vspace{-0.8cm} \caption{The spatial extent of the clusters containing more than four lattice sites along the four lattice directions for the $x$ (left), $y$ (middle) and $z$ (right) components of the electric (top) and magnetic (bottom) fields. The $X$ axis gives the length of the cluster in a given direction; the $Y$ axis the proportion of clusters with that length.}\label{fig:4} \end{figure} \section{Conclusions} \label{sec:5} We have suggested that, by introducing a carefully tuned field, $\theta$, it is possible to diagonalise the gauge links within a Wilson Loop without introducing additional path integrals or dynamical variables, giving a $U(1)^{N_C-1}$ Abelian theory (a CDG decomposition) which can be used to calculate the string tension. This theory can be studied numerically, and modelled theoretically. As expected, the coloured fields do not contribute to confinement. There may be certain topological singularities within $\theta$ which contribute to the string tension, giving characteristic structures appearing in the Abelian field strength tensor. We have confirmed numerically that the topological term accounts for all of the string tension, and that the structures within the field strength have the same dimensionality and directions as expected from the model. \section*{Acknowledgements} Numerical calculations used servers at Seoul National University. Funding was provided by the BK21 program of the NRF, Republic of Korea. The research of W.~Lee is supported by the Creative Research Initiatives Program (2012-0000241) of the NRF grant funded by the Korean government (MEST). W.~Lee would like to acknowledge the support from KISTI supercomputing center through the strategic support program for the supercomputing application research [No. KSC-2011-G2-06]. YMC is supported in part by NRF grant (2012-002-134) funded by MEST. \bibliographystyle{JHEP_mcite.bst}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Additional Experiments} We evaluate the methods according to the Signal-to-Noise Ratio (SNR) with respect to the ground-truth image. In this case, we minimize over $\bbb{W}$ and $\bbb{H}$ \emph{alternately} with different initial points (OMP initialization or standard normal random initialization). For updating $\bbb{W}$, we use the same proximal gradient method as in \cite{BaoJQS16}. For updating $\bbb{H}$, since the accelerated PGM does not necessarily present better performance than canonical PGM and since the line search strategy does not present better performance than the simple constant step size, we only compare with PGM, which has been implemented in \cite{BaoJQS16} \footnote{Code: \url{http://www.math.nus.edu.sg/~matjh/research/research.htm}} and KSVD \cite{aharon2006img} \footnote{Code: \url{http://www.cs.technion.ac.il/~elad/software/}}. In our experiments, we find that GMSA achieves lower objectives than PGM in all cases. We do not report the objective values here but only the best SNR value, since (i) the best SNR result does not correspond to the same $\lambda$ for PGM and GMSA, and (ii) KSVD does not solve exactly the same problem in (\ref{eq:card:sparse:coding})\footnote{In fact, it solves an $\ell_0$ norm constrained problem using a greedy pursuit algorithm and performs a codebook update using SVD. It may not necessarily converge, which motivates the use of the alternating minimization algorithm in \cite{BaoJQS16}. }. We observe that GMSA is generally 4-8 times faster than KSVD. This is not surprising, since KSVD needs to call OMP to compute the sparse codes $\bbb{H}$, which involves high computational complexity, while our method only needs to call a generalized Gaussian elimination procedure in each iteration. In Table \ref{table:snr:obj}, we summarize the results, from which we make two conclusions. (i) The two initialization strategies generally lead to similar SNR results. (ii) Our GMSA method generally leads to a larger SNR than PGM and a comparable SNR to KSVD, but in less time. \section{Conclusions} This paper presents a new generalized matrix splitting algorithm for minimizing composite functions. We rigorously analyze its convergence behavior for convex problems and discuss several important extensions. Experimental results on nonnegative matrix factorization, $\ell_0$ norm regularized sparse coding, and $\ell_1$ norm regularized Dantzig selectors demonstrate that our methods achieve state-of-the-art performance. \section{Experiments}\label{sect:exp} This section demonstrates the efficiency and efficacy of the proposed Generalized Matrix Splitting Algorithm (GMSA) by considering three important applications: nonnegative matrix factorization (NMF) \cite{lee1999learning,lin2007projected}, $\ell_0$ norm regularized sparse coding \cite{olshausen1996emergence,Quan2016CVPR,lee2006efficient}, and $\ell_1$ norm regularized Dantzig selectors. We implement our method in MATLAB on an Intel 2.6 GHz CPU with 8 GB RAM. Only our generalized Gaussian elimination procedure is developed in C and wrapped into the MATLAB code, since it requires an elementwise loop that is quite inefficient in native MATLAB. We consider $\epsilon=0.01$ and $\omega=1$ as our default parameters for GMSA in all our experiments. Some MATLAB code can be found in the authors' research webpages. \subsection{Convergence Behavior of Different Methods} We demonstrate the convergence behavior of different methods for solving random least squares problems. We compare the following methods.
\bbb{(i)} PGM: classical proximal gradient method with constant step size \cite{nesterov2013introductory}; \bbb{(ii)} PGM-LS: classical PGM with line search \cite{beck2009fast}; \bbb{(iii)} PGM-A: accelerated PGM with constant step size \cite{nesterov2013introductory}; \bbb{(iv)} PGM-A-LS: accelerated PGM with line search \cite{beck2009fast,nesterov2013introductory}; \bbb{(v)} GMSA $(\omega=1/0.5/1.5)$: generalized matrix splitting algorithm with varying parameter $\omega$, as described in Algorithm \ref{alg:main}; \bbb{(vi)} GMSA-C: generalized matrix splitting algorithm with the correction step described in (\ref{alg:gmsa:C}), where a local constant for computing the step size $\alpha^k$ is used; \bbb{(vii)} GMSA-A: generalized matrix splitting algorithm with the Richardson extrapolation acceleration described in (\ref{alg:acc:gmsa}). We report the objective values of the compared methods for each iteration when they are applied to solve non-negative/$\ell_1$ norm regularized/$\ell_0$ norm regularized least squares problems in Figure \ref{fig:sample:l01inf}. Note that all the methods have the same computational complexity for one iteration. We have the following observations. \bbb{(i)} GMSA with the default parameters $\omega=1$ and $\epsilon=0.01$ significantly outperforms the proximal gradient method and its variants. \bbb{(ii)} GMSA $(\omega=1.5)$ gives better performance than GMSA $(\omega=0.5)$ for solving the non-negative least squares problem, but worse performance than GMSA $(\omega=0.5)$ for solving the $\ell_1$ norm regularized least squares problem. The best choice of the parameter $\omega$ appears to depend on the specific data. \bbb{(iii)} GMSA-C converges slower than GMSA but faster than $\{$PGM,~PGM-LS$\}$. \bbb{(iv)} GMSA-A generally outperforms the other methods on the convex problems. \bbb{(v)} GMSA generally presents the best performance on the nonconvex problems. Since (i) GMSA with the choice $\epsilon = 0.01$ and $\omega=1$ gives comparable performance to its variants, and (ii) GMSA-A is not necessarily a monotonic algorithm (although it achieves acceleration over GMSA), we only report the results for GMSA with $\epsilon = 0.01$ and $\omega=1$ in our following experiments.
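As a concrete illustration, the update $\bbb{x}^{k+1}=\mathcal{T}(\bbb{x}^k)$ can be sketched in Python for the non-negative least squares case as follows. The triangular splitting $\bbb{B}=\bbb{L}+\tfrac{1}{\omega}\bbb{D}+\epsilon\bbb{I}$ (with $\bbb{L}$ the strictly lower triangular part and $\bbb{D}$ the diagonal of $\bbb{A}$) is assumed here for concreteness; it is consistent with the constant $\delta = 2 \epsilon + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ appearing in Theorem \ref{theorem:strong}. This is a minimal sketch, not our C implementation:
\begin{verbatim}
import numpy as np

def gmsa_step_nnls(A, b, x, omega=1.0, eps=0.01):
    """One GMSA iteration for min 0.5 x'Ax + b'x subject to x >= 0.

    Uses A = B + C with B = L + D/omega + eps*I.  Because B is lower
    triangular with positive diagonal, the subproblem
        0 in B y + b + C x + dh(y)
    is solved exactly by one forward sweep (the "generalized Gaussian
    elimination"), with a max(0, .) per coordinate for the indicator h
    of the non-negative orthant.
    """
    n = A.shape[0]
    beta = np.diag(A) / omega + eps                  # diagonal of B
    u = b + A @ x - np.tril(A, -1) @ x - beta * x    # u = b + C x
    y = np.zeros(n)
    for i in range(n):
        rhs = u[i] + A[i, :i] @ y[:i]                # L[i,:i] . y[:i]
        y[i] = max(0.0, -rhs / beta[i])
    return y

# Example: a small random non-negative least squares problem.
rng = np.random.default_rng(0)
M = rng.normal(size=(30, 10))
A, b = M.T @ M, rng.normal(size=10)
x = np.zeros(10)
for _ in range(200):
    x = gmsa_step_nnls(A, b, x)
\end{verbatim}
A fixed point of this sweep satisfies $\bbb{0}\in \bbb{A}\bbb{x}+\bbb{b}+\partial h(\bbb{x})$, i.e. the optimality condition of the original problem.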
\begin{table*}[!th] \scalebox{0.66}{\begin{tabular}{cc} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|>{\columncolor{mycyana}}c|}{\centering time limit=20} \\ \hline data & n & \cite{lin2007projected} & \cite{kim2011fast} & \cite{kim2011fast} & \cite{guan2012nenmf} & \cite{hsieh2011fast} & [ours] \\ & & PG & AS & BPP & APG & CGD & GMSA \\ \hline 20news & 20 &5.001e+06 & 2.762e+07 & 8.415e+06 & \textcolor[rgb]{0,0.7,0}{4.528e+06} & \textcolor[rgb]{0,0.5820,0.9}{4.515e+06} & \textcolor[rgb]{0.9961,0,0}{4.506e+06} \\ 20news & 50 &5.059e+06 & 2.762e+07 & 4.230e+07 & \textcolor[rgb]{0,0.7,0}{3.775e+06} & \textcolor[rgb]{0,0.5820,0.9}{3.544e+06} & \textcolor[rgb]{0.9961,0,0}{3.467e+06} \\ 20news & 100 &6.955e+06 & 5.779e+06 & 4.453e+07 & \textcolor[rgb]{0,0.5820,0.9}{3.658e+06} & \textcolor[rgb]{0,0.7,0}{3.971e+06} & \textcolor[rgb]{0.9961,0,0}{2.902e+06} \\ 20news & 200 &7.675e+06 & \textcolor[rgb]{0,0.5820,0.9}{3.036e+06} & 1.023e+08 & \textcolor[rgb]{0,0.7,0}{4.431e+06} & 3.573e+07 & \textcolor[rgb]{0.9961,0,0}{2.819e+06} \\ 20news & 300 &\textcolor[rgb]{0,0.7,0}{1.997e+07} & 2.762e+07 & 1.956e+08 & \textcolor[rgb]{0,0.5820,0.9}{4.519e+06} & 4.621e+07 & \textcolor[rgb]{0.9961,0,0}{3.202e+06} \\ COIL & 20 &2.004e+09 & 5.480e+09 & 2.031e+09 & \textcolor[rgb]{0.9961,0,0}{1.974e+09} & \textcolor[rgb]{0,0.7,0}{1.976e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.975e+09} \\ COIL & 50 &1.412e+09 & 1.516e+10 & 6.962e+09 & \textcolor[rgb]{0,0.7,0}{1.291e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.256e+09} & \textcolor[rgb]{0.9961,0,0}{1.252e+09} \\ COIL & 100 &2.960e+09 & 2.834e+10 & 3.222e+10 & \textcolor[rgb]{0,0.7,0}{9.919e+08} & \textcolor[rgb]{0,0.5820,0.9}{8.745e+08} & \textcolor[rgb]{0.9961,0,0}{8.510e+08} \\ COIL & 200 &3.371e+09 & 2.834e+10 & 5.229e+10 & \textcolor[rgb]{0,0.7,0}{8.495e+08} & \textcolor[rgb]{0,0.5820,0.9}{5.959e+08} & \textcolor[rgb]{0.9961,0,0}{5.600e+08} \\ COIL & 300 &3.996e+09 & 2.834e+10 & 1.017e+11 & \textcolor[rgb]{0,0.7,0}{8.493e+08} & \textcolor[rgb]{0,0.5820,0.9}{5.002e+08} & \textcolor[rgb]{0.9961,0,0}{4.956e+08} \\ TDT2 & 20 &1.597e+06 & 2.211e+06 & 1.688e+06 & \textcolor[rgb]{0.9961,0,0}{1.591e+06} & \textcolor[rgb]{0,0.7,0}{1.595e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.592e+06} \\ TDT2 & 50 &1.408e+06 & 2.211e+06 & 2.895e+06 & \textcolor[rgb]{0,0.7,0}{1.393e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.390e+06} & \textcolor[rgb]{0.9961,0,0}{1.385e+06} \\ TDT2 & 100 &1.300e+06 & 2.211e+06 & 6.187e+06 & \textcolor[rgb]{0,0.5820,0.9}{1.222e+06} & \textcolor[rgb]{0,0.7,0}{1.224e+06} & \textcolor[rgb]{0.9961,0,0}{1.214e+06} \\ TDT2 & 200 &1.628e+06 & 2.211e+06 & 1.791e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.119e+06} & \textcolor[rgb]{0,0.7,0}{1.227e+06} & \textcolor[rgb]{0.9961,0,0}{1.079e+06} \\ TDT2 & 300 &1.915e+06 & \textcolor[rgb]{0,0.7,0}{1.854e+06} & 3.412e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.172e+06} & 7.902e+06 & \textcolor[rgb]{0.9961,0,0}{1.066e+06} \\ \hline \end{tabular} \hspace{-5.1pt} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|>{\columncolor{mycyanb}}c|}{\centering time limit=30} \\ \hline data & n & \cite{lin2007projected} & \cite{kim2011fast} & \cite{kim2011fast} & \cite{guan2012nenmf} & \cite{hsieh2011fast} & [ours] \\ & & PG & AS & BPP & APG & CGD & GMSA \\ \hline 20news & 20 &4.716e+06 & 2.762e+07 & 7.471e+06 & \textcolor[rgb]{0,0.7,0}{4.510e+06} & \textcolor[rgb]{0,0.5820,0.9}{4.503e+06} & \textcolor[rgb]{0.9961,0,0}{4.500e+06} \\ 20news & 50 &4.569e+06 & 2.762e+07 & 5.034e+07 & \textcolor[rgb]{0,0.7,0}{3.628e+06} & 
\textcolor[rgb]{0,0.5820,0.9}{3.495e+06} & \textcolor[rgb]{0.9961,0,0}{3.446e+06} \\ 20news & 100 &6.639e+06 & 2.762e+07 & 4.316e+07 & \textcolor[rgb]{0,0.7,0}{3.293e+06} & \textcolor[rgb]{0,0.5820,0.9}{3.223e+06} & \textcolor[rgb]{0.9961,0,0}{2.817e+06} \\ 20news & 200 &\textcolor[rgb]{0,0.7,0}{6.991e+06} & 2.762e+07 & 1.015e+08 & \textcolor[rgb]{0,0.5820,0.9}{3.609e+06} & 7.676e+06 & \textcolor[rgb]{0.9961,0,0}{2.507e+06} \\ 20news & 300 &\textcolor[rgb]{0,0.7,0}{1.354e+07} & 2.762e+07 & 1.942e+08 & \textcolor[rgb]{0,0.5820,0.9}{4.519e+06} & 4.621e+07 & \textcolor[rgb]{0.9961,0,0}{3.097e+06} \\ COIL & 20 &1.992e+09 & 4.405e+09 & 2.014e+09 & \textcolor[rgb]{0.9961,0,0}{1.974e+09} & \textcolor[rgb]{0,0.7,0}{1.975e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.975e+09} \\ COIL & 50 &1.335e+09 & 2.420e+10 & 5.772e+09 & \textcolor[rgb]{0,0.7,0}{1.272e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.252e+09} & \textcolor[rgb]{0.9961,0,0}{1.250e+09} \\ COIL & 100 &2.936e+09 & 2.834e+10 & 1.814e+10 & \textcolor[rgb]{0,0.7,0}{9.422e+08} & \textcolor[rgb]{0,0.5820,0.9}{8.623e+08} & \textcolor[rgb]{0.9961,0,0}{8.458e+08} \\ COIL & 200 &3.362e+09 & 2.834e+10 & 4.627e+10 & \textcolor[rgb]{0,0.7,0}{7.614e+08} & \textcolor[rgb]{0,0.5820,0.9}{5.720e+08} & \textcolor[rgb]{0.9961,0,0}{5.392e+08} \\ COIL & 300 &3.946e+09 & 2.834e+10 & 7.417e+10 & \textcolor[rgb]{0,0.7,0}{6.734e+08} & \textcolor[rgb]{0,0.5820,0.9}{4.609e+08} & \textcolor[rgb]{0.9961,0,0}{4.544e+08} \\ TDT2 & 20 &1.595e+06 & 2.211e+06 & 1.667e+06 & \textcolor[rgb]{0.9961,0,0}{1.591e+06} & \textcolor[rgb]{0,0.7,0}{1.594e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.592e+06} \\ TDT2 & 50 &1.397e+06 & 2.211e+06 & 2.285e+06 & \textcolor[rgb]{0,0.7,0}{1.393e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.389e+06} & \textcolor[rgb]{0.9961,0,0}{1.385e+06} \\ TDT2 & 100 &1.241e+06 & 2.211e+06 & 5.702e+06 & \textcolor[rgb]{0,0.5820,0.9}{1.216e+06} & \textcolor[rgb]{0,0.7,0}{1.219e+06} & \textcolor[rgb]{0.9961,0,0}{1.212e+06} \\ TDT2 & 200 &1.484e+06 & 1.878e+06 & 1.753e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.063e+06} & \textcolor[rgb]{0,0.7,0}{1.104e+06} & \textcolor[rgb]{0.9961,0,0}{1.049e+06} \\ TDT2 & 300 &1.879e+06 & 2.211e+06 & 3.398e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.060e+06} & \textcolor[rgb]{0,0.7,0}{1.669e+06} & \textcolor[rgb]{0.9961,0,0}{1.007e+06} \\ \hline \end{tabular} \\ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|>{\columncolor{mycyanc}}c|}{\centering time limit=40} \\ \hline data & n & \cite{lin2007projected} & \cite{kim2011fast} & \cite{kim2011fast} & \cite{guan2012nenmf} & \cite{hsieh2011fast} & [ours] \\ & & PG & AS & BPP & APG & CGD & GMSA \\ \hline 20news & 20 &4.622e+06 & 2.762e+07 & 7.547e+06 & \textcolor[rgb]{0.9961,0,0}{4.495e+06} & \textcolor[rgb]{0,0.7,0}{4.500e+06} & \textcolor[rgb]{0,0.5820,0.9}{4.496e+06} \\ 20news & 50 &4.386e+06 & 2.762e+07 & 1.562e+07 & \textcolor[rgb]{0,0.7,0}{3.564e+06} & \textcolor[rgb]{0,0.5820,0.9}{3.478e+06} & \textcolor[rgb]{0.9961,0,0}{3.438e+06} \\ 20news & 100 &6.486e+06 & 2.762e+07 & 4.223e+07 & \textcolor[rgb]{0,0.7,0}{3.128e+06} & \textcolor[rgb]{0,0.5820,0.9}{2.988e+06} & \textcolor[rgb]{0.9961,0,0}{2.783e+06} \\ 20news & 200 &6.731e+06 & 1.934e+07 & 1.003e+08 & \textcolor[rgb]{0,0.5820,0.9}{3.304e+06} & \textcolor[rgb]{0,0.7,0}{5.744e+06} & \textcolor[rgb]{0.9961,0,0}{2.407e+06} \\ 20news & 300 &\textcolor[rgb]{0,0.7,0}{1.041e+07} & 2.762e+07 & 1.932e+08 & \textcolor[rgb]{0,0.5820,0.9}{3.621e+06} & 4.621e+07 & \textcolor[rgb]{0.9961,0,0}{2.543e+06} \\ COIL & 20 &1.987e+09 & 5.141e+09 & 2.010e+09 & 
\textcolor[rgb]{0.9961,0,0}{1.974e+09} & \textcolor[rgb]{0,0.7,0}{1.975e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.975e+09} \\ COIL & 50 &1.308e+09 & 2.403e+10 & 5.032e+09 & \textcolor[rgb]{0,0.7,0}{1.262e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.250e+09} & \textcolor[rgb]{0.9961,0,0}{1.248e+09} \\ COIL & 100 &2.922e+09 & 2.834e+10 & 2.086e+10 & \textcolor[rgb]{0,0.7,0}{9.161e+08} & \textcolor[rgb]{0,0.5820,0.9}{8.555e+08} & \textcolor[rgb]{0.9961,0,0}{8.430e+08} \\ COIL & 200 &3.361e+09 & 2.834e+10 & 4.116e+10 & \textcolor[rgb]{0,0.7,0}{7.075e+08} & \textcolor[rgb]{0,0.5820,0.9}{5.584e+08} & \textcolor[rgb]{0.9961,0,0}{5.289e+08} \\ COIL & 300 &3.920e+09 & 2.834e+10 & 7.040e+10 & \textcolor[rgb]{0,0.7,0}{6.221e+08} & \textcolor[rgb]{0,0.5820,0.9}{4.384e+08} & \textcolor[rgb]{0.9961,0,0}{4.294e+08} \\ TDT2 & 20 &1.595e+06 & 2.211e+06 & 1.643e+06 & \textcolor[rgb]{0.9961,0,0}{1.591e+06} & \textcolor[rgb]{0,0.7,0}{1.594e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.592e+06} \\ TDT2 & 50 &1.394e+06 & 2.211e+06 & 1.933e+06 & \textcolor[rgb]{0,0.7,0}{1.392e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.388e+06} & \textcolor[rgb]{0.9961,0,0}{1.384e+06} \\ TDT2 & 100 &1.229e+06 & 2.211e+06 & 5.259e+06 & \textcolor[rgb]{0,0.5820,0.9}{1.213e+06} & \textcolor[rgb]{0,0.7,0}{1.216e+06} & \textcolor[rgb]{0.9961,0,0}{1.211e+06} \\ TDT2 & 200 &1.389e+06 & 1.547e+06 & 1.716e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.046e+06} & \textcolor[rgb]{0,0.7,0}{1.070e+06} & \textcolor[rgb]{0.9961,0,0}{1.041e+06} \\ TDT2 & 300 &1.949e+06 & 1.836e+06 & 3.369e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.008e+06} & \textcolor[rgb]{0,0.7,0}{1.155e+06} & \textcolor[rgb]{0.9961,0,0}{9.776e+05} \\ \hline \end{tabular} \hspace{-5.1pt} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|>{\columncolor{mycyand}}c|}{\centering time limit=50} \\ \hline data & n & \cite{lin2007projected} & \cite{kim2011fast} & \cite{kim2011fast} & \cite{guan2012nenmf} & \cite{hsieh2011fast} & [ours] \\ & & PG & AS & BPP & APG & CGD & GMSA \\ \hline 20news & 20 &4.565e+06 & 2.762e+07 & 6.939e+06 & \textcolor[rgb]{0.9961,0,0}{4.488e+06} & \textcolor[rgb]{0,0.7,0}{4.498e+06} & \textcolor[rgb]{0,0.5820,0.9}{4.494e+06} \\ 20news & 50 &4.343e+06 & 2.762e+07 & 1.813e+07 & \textcolor[rgb]{0,0.7,0}{3.525e+06} & \textcolor[rgb]{0,0.5820,0.9}{3.469e+06} & \textcolor[rgb]{0.9961,0,0}{3.432e+06} \\ 20news & 100 &6.404e+06 & 2.762e+07 & 3.955e+07 & \textcolor[rgb]{0,0.7,0}{3.046e+06} & \textcolor[rgb]{0,0.5820,0.9}{2.878e+06} & \textcolor[rgb]{0.9961,0,0}{2.765e+06} \\ 20news & 200 &5.939e+06 & 2.762e+07 & 9.925e+07 & \textcolor[rgb]{0,0.5820,0.9}{3.121e+06} & \textcolor[rgb]{0,0.7,0}{4.538e+06} & \textcolor[rgb]{0.9961,0,0}{2.359e+06} \\ 20news & 300 &\textcolor[rgb]{0,0.7,0}{9.258e+06} & 2.762e+07 & 1.912e+08 & \textcolor[rgb]{0,0.5820,0.9}{3.621e+06} & 2.323e+07 & \textcolor[rgb]{0.9961,0,0}{2.331e+06} \\ COIL & 20 &1.982e+09 & 7.136e+09 & 2.033e+09 & \textcolor[rgb]{0.9961,0,0}{1.974e+09} & \textcolor[rgb]{0,0.7,0}{1.975e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.975e+09} \\ COIL & 50 &1.298e+09 & 2.834e+10 & 4.365e+09 & \textcolor[rgb]{0,0.7,0}{1.258e+09} & \textcolor[rgb]{0,0.5820,0.9}{1.248e+09} & \textcolor[rgb]{0.9961,0,0}{1.248e+09} \\ COIL & 100 &1.945e+09 & 2.834e+10 & 1.428e+10 & \textcolor[rgb]{0,0.7,0}{9.014e+08} & \textcolor[rgb]{0,0.5820,0.9}{8.516e+08} & \textcolor[rgb]{0.9961,0,0}{8.414e+08} \\ COIL & 200 &3.362e+09 & 2.834e+10 & 3.760e+10 & \textcolor[rgb]{0,0.7,0}{6.771e+08} & \textcolor[rgb]{0,0.5820,0.9}{5.491e+08} & \textcolor[rgb]{0.9961,0,0}{5.231e+08} \\ COIL & 
300 &3.905e+09 & 2.834e+10 & 6.741e+10 & \textcolor[rgb]{0,0.7,0}{5.805e+08} & \textcolor[rgb]{0,0.5820,0.9}{4.226e+08} & \textcolor[rgb]{0.9961,0,0}{4.127e+08} \\ TDT2 & 20 &1.595e+06 & 2.211e+06 & 1.622e+06 & \textcolor[rgb]{0.9961,0,0}{1.591e+06} & \textcolor[rgb]{0,0.7,0}{1.594e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.592e+06} \\ TDT2 & 50 &1.393e+06 & 2.211e+06 & 1.875e+06 & \textcolor[rgb]{0,0.7,0}{1.392e+06} & \textcolor[rgb]{0,0.5820,0.9}{1.386e+06} & \textcolor[rgb]{0.9961,0,0}{1.384e+06} \\ TDT2 & 100 &1.223e+06 & 2.211e+06 & 4.831e+06 & \textcolor[rgb]{0,0.5820,0.9}{1.212e+06} & \textcolor[rgb]{0,0.7,0}{1.214e+06} & \textcolor[rgb]{0.9961,0,0}{1.210e+06} \\ TDT2 & 200 &1.267e+06 & 2.211e+06 & 1.671e+07 & \textcolor[rgb]{0,0.5820,0.9}{1.040e+06} & \textcolor[rgb]{0,0.7,0}{1.054e+06} & \textcolor[rgb]{0.9961,0,0}{1.036e+06} \\ TDT2 & 300 &1.903e+06 & 2.211e+06 & 3.328e+07 & \textcolor[rgb]{0,0.5820,0.9}{9.775e+05} & \textcolor[rgb]{0,0.7,0}{1.045e+06} & \textcolor[rgb]{0.9961,0,0}{9.606e+05} \\ \hline \end{tabular} \\ \end{tabular}} \caption{Comparisons of objective values for non-negative matrix factorization for all the compared methods. The $1^{st}$, $2^{nd}$, and $3^{rd}$ best results are colored with \textcolor[rgb]{0.9961,0,0}{red}, \textcolor[rgb]{0,0.5820,0.9}{blue} and \textcolor[rgb]{0,0.7,0}{green}, respectively.} \label{tab:nmf} \end{table*} \subsection{Nonnegative Matrix Factorization} Nonnegative matrix factorization \cite{lee1999learning} is a very useful tool for feature extraction and identification in the fields of text mining and image understanding. It is formulated as the following optimization problem: \begin{eqnarray} \textstyle \underset{\bbb{W},\bbb{H}}\min~~\frac{1}{2}\|\bbb{Y}-\bbb{WH}\|_F^2 ,~~s.t.~~\bbb{W}\geq 0,~\bbb{H}\geq 0 \nonumber \end{eqnarray} \noindent where $\bbb{W}\in\mathbb{R}^{m\times n}$ and $\bbb{H}\in\mathbb{R}^{n\times d}$. Following previous work \cite{kim2011fast,guan2012nenmf,lin2007projected,hsieh2011fast}, we alternately minimize the objective while keeping one of the two variables fixed. In each alternating subproblem, we solve a convex nonnegative least squares problem, where our GMSA is used. We conduct experiments on three datasets \footnote{\url{http://www.cad.zju.edu.cn/home/dengcai/Data/TextData.html}} 20news, COIL, and TDT2. The sizes of the datasets are $18774\times 61188,~7200\times 1024,~9394\times 36771$, respectively. We compare GMSA against the following state-of-the-art methods: (1) Projected Gradient (PG) \cite{lin2007projected,bertsekas1999nonlinear}, which updates the current solution via gradient descent and then maps the point back onto the feasible region \footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libmf/}}; (2) the Active Set (AS) method \cite{kim2011fast} and (3) the Block Principal Pivoting (BPP) method \cite{kim2011fast} \footnote{\url{http://www.cc.gatech.edu/~hpark/nmfsoftware.php}}, which iteratively identify an active and a passive set by a principal pivoting procedure and solve a reduced linear system; (4) Accelerated Proximal Gradient (APG) \cite{guan2012nenmf} \footnote{\url{https://sites.google.com/site/nmfsolvers/}}, which applies Nesterov's momentum strategy with a constant step size to solve the convex sub-problems; (5) Coordinate Gradient Descent (CGD) \cite{hsieh2011fast} \footnote{\url{http://www.cs.utexas.edu/~cjhsieh/nmf/}}, which greedily selects one coordinate by measuring the objective reduction and optimizes a single variable via a closed-form update.
Similar to our method, the core procedure of CGD is developed in C and wrapped into the MATLAB code, while all other methods are implemented using built-in MATLAB functions. We use the same settings as in \cite{lin2007projected}. We compare objective values after running $t$ seconds with $t$ varying from 20 to 50. Table \ref{tab:nmf} presents results averaged over 10 random initial points, which are generated from a standard normal distribution. While the other methods may quickly lower the objective values when $n$ is small ($n=20$), GMSA catches up very quickly and achieves a faster convergence speed when $n$ is large. It generally achieves the best performance in terms of objective value among all the methods. \subsection{$\ell_0$ Norm Regularized Sparse Coding} Sparse coding is a popular unsupervised feature learning technique for data representation that is widely used in computer vision and medical imaging. Motivated by recent success in $\ell_0$ norm modeling \cite{yuan2015l0tv,BaoJQS16,Yang2016}, we consider the following $\ell_0$ norm regularized (\emph{i.e.} cardinality) sparse coding problem: \begin{eqnarray} \label{eq:card:sparse:coding} \underset{\bbb{W},\bbb{H}}{\min}~~\tfrac{1}{2}\|\bbb{Y}-\bbb{WH}\|_F^2 + \lambda \|\bbb{H}\|_0,~s.t.~\|\bbb{W}(:,i)\|=1,~\forall i, \end{eqnarray} with $\bbb{W}\in \mathbb{R}^{m \times n}$ and $\bbb{H}\in \mathbb{R}^{n \times d}$. Existing solutions for this problem are mostly based on the family of proximal point methods \cite{nesterov2013introductory,BaoJQS16}. We compare GMSA with the following methods: (1) Proximal Gradient Method (PGM) with constant step size, (2) PGM with line search, (3) accelerated PGM with constant step size, and (4) accelerated PGM with line search. We evaluate all the methods for the application of image denoising. Following \cite{aharon2006img,BaoJQS16}, we set the dimension of the dictionary to $n = 256$. The dictionary is learned from $m=1000$ image patches randomly chosen from the noisy input image. The patch size is $8\times 8$, leading to $d=64$. The experiments are conducted on 16 conventional test images with different noise standard deviations $\sigma$. We tune the regularization parameter $\lambda$ and compare the resulting objective values and the Signal-to-Noise Ratio (SNR) values for all methods. We do not include the comparison of SNR values here. Interested readers can refer to Section 4.2 of the conference version of this paper \cite{YuanZG17}. We compare the objective values for all methods by \emph{fixing} the variable $\bbb{W}$ to an over-complete DCT dictionary \cite{aharon2006img} and \emph{only} optimizing over $\bbb{H}$. We compare all methods with varying regularization parameter $\lambda$ and different initial points that are either generated by random Gaussian sampling or by the Orthogonal Matching Pursuit (OMP) method \cite{tropp2007signal}. In Figure \ref{fig:convergece:obj}, we observe that GMSA converges rapidly within 10 iterations. Moreover, it often reaches much better local optimal solutions than the compared methods.
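In the sketch given earlier for the non-negative case, the $\ell_0$ regularizer would change only the scalar step inside the forward sweep: in place of $\max(0,\cdot)$, each coordinate solves $\min_t \tfrac{\beta}{2}t^2+ct+\lambda\cdot\bbb{1}[t\neq 0]$ in closed form. A minimal sketch of this hard-thresholding rule follows (the names $\beta$ and $c$ are illustrative, standing for the relevant diagonal entry of $\bbb{B}$ and the accumulated linear coefficient):
\begin{verbatim}
def l0_scalar_update(beta, c, lam):
    """Exact minimizer of psi(t) = (beta/2) t^2 + c t + lam * 1[t != 0].

    The nonzero candidate t = -c/beta attains psi = -c^2/(2*beta) + lam,
    which beats t = 0 (psi = 0) exactly when c^2 > 2*beta*lam.
    """
    return -c / beta if c * c > 2.0 * beta * lam else 0.0
\end{verbatim}
Despite the nonconvexity of the $\ell_0$ term, this one-dimensional subproblem is solved exactly, which is why the sweep remains well defined.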
\begin{figure*} [!t] \centering \subfloat[ \footnotesize $\lambda=50$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type2_imagecameraman_lambda50.png}} \subfloat[ \footnotesize $\lambda=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type2_imagecameraman_lambda500.png}} \subfloat[ \footnotesize $\lambda=5000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type2_imagecameraman_lambda5000.png}} \subfloat[ \footnotesize $\lambda=50000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type2_imagecameraman_lambda5000.png}} \subfloat[ \footnotesize $\lambda=50$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type4_imagecameraman_lambda50.png}} \subfloat[ \footnotesize $\lambda=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type4_imagecameraman_lambda500.png}} \subfloat[ \footnotesize $\lambda=5000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type4_imagecameraman_lambda5000.png}} \subfloat[ \footnotesize $\lambda=50000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type4_imagecameraman_lambda5000.png}} \subfloat[ \footnotesize $\lambda=50$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type1_imagecameraman_lambda50.png}} \subfloat[ \footnotesize $\lambda=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type1_imagecameraman_lambda500.png}} \subfloat[ \footnotesize $\lambda=5000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type1_imagecameraman_lambda5000.png}} \subfloat[ \footnotesize $\lambda=50000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{type1_imagecameraman_lambda5000.png}} \caption{Convergence behavior for solving (\ref{eq:card:sparse:coding}) with $\bbb{W}$ fixed, for different $\lambda$ and initializations. Denoting $\tilde{\bbb{O}}$ as an arbitrary standard Gaussian random matrix of suitable size, we consider the following three initializations for $\bbb{H}$. First row: $\bbb{H}=0.1 \times \tilde{\bbb{O}}$. Second row: $\bbb{H}=10\times \tilde{\bbb{O}}$. Third row: $\bbb{H}$ is set to the output of the orthogonal matching pursuit.} \label{fig:convergece:obj} \end{figure*} \subsection{$\ell_1$ Norm Regularized Dantzig Selectors} The Dantzig selector \cite{candes2007dantzig} can be formulated as the following optimization problem: $\min_{\bbb{x}}~\|\bbb{x}\|_1,~s.t.~\|\bbb{D}^{-1}\bbb{W}^T(\bbb{Wx} - \bbb{y})\|_{\infty}\leq \delta$, where $\bbb{W}\in\mathbb{R}^{m\times n},~\bbb{y}\in\mathbb{R}^{m},~\delta>0$, and $\bbb{D}\in \mathbb{R}^{n\times n}$ is the diagonal matrix whose diagonal entries are the norms of the columns of $\bbb{W}$. For ease of discussion, we consider the following equivalent unconstrained optimization problem: \begin{eqnarray} \label{eq:danzig:2} \min_{\bbb{x}}~\|\bbb{x}\|_1+ \lambda \|\bbb{Qx}-\bbb{s}\|_{\infty} \end{eqnarray} \noindent with $\lambda \propto \frac{1}{\delta}$, and $\bbb{Q}=\bbb{D}^{-1}\bbb{W}^T\bbb{W},~\bbb{s}=\bbb{D}^{-1}\bbb{W}^T\bbb{y}$. We generate the design matrix $\bbb{W}\in \mathbb{R}^{m\times n}$ by sampling from a standard Gaussian distribution. The sparse original signal $\ddot{\bbb{x}}\in \mathbb{R}^{n\times 1}$ is generated by selecting a support set of size 20 uniformly at random and setting its entries to values sampled from a standard Gaussian distribution. We set $\bbb{y}=\bbb{W}\ddot{\bbb{x}}$. We fix $n=1000$ and consider different choices for $\lambda$ and $m$.
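For reproducibility, the synthetic instances just described can be generated as in the following minimal sketch (the seed and variable names are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 500, 1000, 20                 # m varies in the experiments

W = rng.normal(size=(m, n))             # Gaussian design matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)    # sparse ground-truth signal
y = W @ x_true

# Q = D^{-1} W^T W and s = D^{-1} W^T y, with D = diag(column norms of W).
d_inv = 1.0 / np.linalg.norm(W, axis=0)
Q = d_inv[:, None] * (W.T @ W)
s = d_inv * (W.T @ y)
\end{verbatim}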
We compare the proposed method GMSA-ADMM against the linearized ADMM algorithm and the classical ADMM algorithm. The penalty parameter $\beta$ is fixed to $\beta=1$. For linearized ADMM, we use the same splitting strategy as in \cite{WangY12}. For classical ADMM, we introduce two additional variables and rewrite (\ref{eq:danzig:2}) as: $\min_{\bbb{x},\bbb{y},\bbb{z}}~\|\bbb{x}\|_1 + \lambda\|\bbb{z}\|_{\infty},~s.t.~\bbb{x}=\bbb{y},~\bbb{Qy}-\bbb{s} = \bbb{z}$, so that the smooth subproblem of the resulting augmented Lagrangian function is quadratic and can be solved via a linear system. For GMSA-ADMM, we do not solve the $\bbb{x}$-subproblem exactly using GMSA but solve it using one GMSA iteration. We report the objective values for the compared methods. It can be seen in Figure \ref{fig:convergece:danzig:obj} that our GMSA-ADMM significantly outperforms linearized ADMM and classical ADMM. \begin{figure*} [!t] \centering \subfloat[ \footnotesize $\lambda=0.05,~m=100$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_1.png}} \subfloat[ \footnotesize $\lambda=0.05,~m=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_2.png}} \subfloat[ \footnotesize $\lambda=0.05,~m=1000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_3.png}} \subfloat[ \footnotesize $\lambda=0.05,~m=2000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_4.png}}\\ \subfloat[ \footnotesize $\lambda=0.5,m=100$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_5.png}} \subfloat[ \footnotesize $\lambda=0.5,m=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_6.png}} \subfloat[ \footnotesize $\lambda=0.5,m=1000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_7.png}} \subfloat[ \footnotesize $\lambda=0.5,m=2000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_8.png}}\\ \subfloat[ \footnotesize $\lambda=5,~m=100$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_9.png}} \subfloat[ \footnotesize $\lambda=5,~m=500$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_10.png}} \subfloat[ \footnotesize $\lambda=5,~m=1000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_11.png}} \subfloat[ \footnotesize $\lambda=5,~m=2000$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_danzig_12.png}} \caption{A comparison of linearized ADMM, classical ADMM, and GMSA-ADMM for solving the $\ell_1$ regularized Dantzig selector problem.} \label{fig:convergece:danzig:obj} \end{figure*} \section{Extensions}\label{sect:extension} This section discusses several extensions of our proposed generalized matrix splitting algorithm. \subsection{Pointwise Contraction via a Correction Strategy}\label{sect:ext:strong} This section considers a new correction strategy to achieve pointwise contraction for the proposed method to solve (\ref{eq:main}). One remarkable feature of this strategy is that the iterated solutions $\bbb{x}^k$ always satisfy the monotone/contractive property $\|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2<\|\bbb{x}^{k}-\bbb{x}^*\|_2^2$ for all $k$ whenever $\bbb{x}^{k}$ is not the optimal solution. We summarize our new algorithm in Algorithm \ref{alg:gmsa:C}. We provide a detailed theoretical analysis for Algorithm \ref{alg:gmsa:C}. The following lemmas are useful in our proof.
\begin{algorithm} [!t] \fontsize{9.5}{12}\selectfont \caption{\label{alg:gmsa:C} {GMSA-C: Generalized Matrix Splitting Algorithm with Correction Strategy for Solving (\ref{eq:main}).}} \begin{algorithmic}[1] \STATE Choose suitable parameters $\{\omega,~\epsilon\}$.~Initialize $\bbb{x}^0$.\\ \STATE \text{for $k=0,1,2,...$}\\ \STATE~~~$\bbb{y}^{k} = \mathcal{T}(\bbb{x}^k)$ \STATE~~~Choose a suitable parameter $\alpha^k$ (\text{e.g.}~$\alpha^k = \frac{2 \epsilon + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))}{\|\bbb{B}^T\bbb{B}\|}$\text{~or~}$\alpha^k = \frac{\|\bbb{y}^k - \bbb{x}^k\|_{2\bbb{B}-0.5\bbb{A}}^2}{\|\bbb{y}^k - \bbb{x}^k\|_{ 2\bbb{B}^T\bbb{B}}^2}$) \STATE~~~$\bbb{x}^{k+1} = \bbb{x}^k + \alpha^k \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )$ \STATE \text{end for}\\ \STATE Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} \begin{lemma}\label{eq:dis:key} Assume that $h(\cdot)$ is convex. For all $\bbb{x} \in \mathbb{R}^n$, it holds that: $$\langle \bbb{Ax}^* - \bbb{A}\mathcal{T}(\bbb{x}) + \bbb{C} (\mathcal{T}(\bbb{x})-\bbb{x}), ~\bbb{x}^* - \mathcal{T}(\bbb{x}) \rangle \leq 0.$$ \begin{proof} By the optimality of $\bbb{x}^*$, we have: $\bbb{0} \in ~\nabla q(\bbb{x}^*) + \partial h(\bbb{x}^*)$. Combining with (\ref{eq:opt:bound0}) in Lemma (\ref{lemma:opt:ineq}), we obtain: \begin{align} \label{eq:dis:1} \bbb{0}~ \in& ~\nabla q(\mathcal{T}(\bbb{x})) + \partial h(\mathcal{T}(\bbb{x})) + \bbb{C}(\bbb{x}-\mathcal{T}(\bbb{x})) \nonumber\\ &~ - \nabla q(\bbb{x}^*) - \partial h(\bbb{x}^*) \end{align} \noindent Using the monotonicity of $\partial h(\cdot)$, we obtain: $\langle h'-h'', \bbb{x}^*-\mathcal{T}(\bbb{x}) \rangle \geq 0,~\forall h' \in \partial h(\bbb{x}^{*}),~h'' \in \partial h(\mathcal{T}(\bbb{x}))$. Combining with (\ref{eq:dis:1}), we conclude this lemma. \end{proof} \end{lemma} \begin{lemma} \label{eq:hhh} Assume $\bbb{A}\succeq \bbb{0}$. For all $\bbb{x},~\bbb{y},~\bbb{z} \in \mathbb{R}^n$, it holds that: $$\langle \bbb{x}- \bbb{z} , \bbb{Az} -\bbb{Ay}\rangle \leq \tfrac{1}{4} \|\bbb{x}-\bbb{y}\|_{\bbb{A}}^2.$$ \begin{proof} Using the variable substitution that $\bbb{z}-\bbb{x}=\bbb{p},~\bbb{z}-\bbb{y} = \bbb{u},~\bbb{x}-\bbb{y} = \bbb{u}-\bbb{p}$, we have the following equivalent inequalities: $\langle -\bbb{p},\bbb{Au} \rangle \leq \tfrac{1}{4}\|\bbb{u}-\bbb{p}\|_{\bbb{A}}^2 \Leftrightarrow \|\bbb{u}-\bbb{p}\|_{\bbb{A}}^2 + 4 \langle \bbb{Ap},\bbb{u} \rangle \geq 0 \Leftrightarrow \|\bbb{u}+\bbb{p}\|_{\bbb{A}}^2 \geq 0$. Clearly, these inequalities hold since $\bbb{A}\succeq \bbb{0}$. \end{proof} \end{lemma} The following theorem provides important theoretical insights on choosing suitable parameters $\{\omega,~\epsilon,~\alpha^k\}$ to guarantee convergence of Algorithm \ref{alg:gmsa:C}. \begin{theorem} \label{theorem:strong} We define $\delta \triangleq 2 \epsilon + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta\in(0,\infty)$. 
Assuming that $h(\cdot)$ is convex and that $\bbb{x}^k$ generated by Algorithm \ref{alg:gmsa:C} is not the optimal solution, we have the following results: (i) \begin{eqnarray} \label{eq:dis:ok} &~~~~\|\bbb{x}^{k+1} - \bbb{x}^*\|_2^2 - \|\bbb{x}^{k} - \bbb{x}^*\|_2^2 \leq \|\bbb{y}^k - \bbb{x}^k\|_{\bbb{G}^k}^2 \\ &\bbb{G}^k\triangleq \frac{1}{2} (\alpha^k)^2 \bbb{P} - \alpha^k\bbb{Q},~\bbb{P} \triangleq 2\bbb{B}^T\bbb{B},~\bbb{Q} \triangleq 2 \bbb{B}-\tfrac{1}{2}\bbb{A} \nonumber \end{eqnarray} \noindent (ii) If we choose a global constant $0<\alpha^k<\frac{\delta}{\|\bbb{B}^T\bbb{B}\|}$, we have $\bbb{G}^k\prec \bbb{0}$ and $\|\bbb{x}^{k+1} - \bbb{x}^*\|_2^2 - \|\bbb{x}^{k} - \bbb{x}^*\|_2^2 <0$. \noindent (iii) If we choose a local constant $\alpha^k = \frac{\|\bbb{y}^k - \bbb{x}^k\|_{\bbb{Q}}^2}{\|\bbb{y}^k - \bbb{x}^k\|_{\bbb{P}}^2}$, we have $ \|\bbb{x}^{k+1} - \bbb{x}^*\|_2^2 - \|\bbb{x}^{k} - \bbb{x}^*\|_2^2 \leq - \tfrac{\delta^2 \|\bbb{y}^k - \bbb{x}^k\|_2^2 }{4 \|\bbb{B}^T\bbb{B}\|} <0$. \begin{proof} (i) First of all, we derive the following inequalities: \begin{align} \label{eq:dis:prop:1} &~\langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\rangle \nonumber\\ \overset{(a)}{=}&~ \langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{A} (\bbb{y}^{k} - \bbb{x}^k )\rangle + \langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{C} (\bbb{x}^k - \bbb{y}^{k} )\rangle \nonumber\\ \overset{(b)}{\leq}&~\langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{A} (\bbb{y}^{k} - \bbb{x}^k ) \rangle + \langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{A} (\bbb{x}^{*} - \bbb{y}^k )\rangle \nonumber\\ \overset{(c)}{=}&~\langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{A} (\bbb{x}^{*} - \bbb{x}^k )\rangle \overset{(d)}{\leq} \tfrac{1}{4} \|\bbb{y}^{k}-\bbb{x}^{k}\|_{\bbb{A}}^2 \end{align} \noindent where step ($a$) uses the fact that $\bbb{B}=\bbb{-C}+\bbb{A}$; step ($b$) uses Lemma \ref{eq:dis:key} with $\bbb{x}=\bbb{x}^k$; step $(c)$ uses the fact that $(\bbb{x}^{*} - \bbb{y}^k )+(\bbb{y}^{k} - \bbb{x}^k ) = (\bbb{x}^{*} - \bbb{x}^k )$; step $(d)$ uses Lemma \ref{eq:hhh}. We then have the following results: \begin{align} &~\|\bbb{x}^{k+1}-\bbb{x}^*\|_{2}^2 - \|\bbb{x}^{k}-\bbb{x}^*\|_{2}^2 \nonumber\\ \overset{(a)}{=}&~\|\bbb{x}^{k+1}-\bbb{x}^k\|_{2}^2 + 2\langle \bbb{x}^k-\bbb{x}^{*},\bbb{x}^{k+1} - \bbb{x}^{k} \rangle \nonumber \\ \overset{(b)}{=}&~\|\alpha^k \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\|_{2}^2 + 2\alpha^k \langle \bbb{x}^k-\bbb{x}^{*}, \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\rangle\nonumber\\ \overset{(c)}{=}&~\|\alpha^k \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\|_{2}^2 + 2\alpha^k \langle \bbb{x}^k - \bbb{y}^{k} , \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\rangle \nonumber\\ &~ + 2\alpha^k \langle \bbb{y}^{k}-\bbb{x}^{*}, \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\rangle \nonumber\\ \overset{(d)}{\leq}&~\|\alpha^k \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\|_{2}^2 + 2\alpha^k \langle \bbb{x}^k - \bbb{y}^{k} , \bbb{B} (\bbb{y}^{k} - \bbb{x}^k )\rangle \nonumber\\ &~ + \tfrac{1}{2}\alpha^k \|\bbb{y}^{k} - \bbb{x}^k\|_{\bbb{A}}^2 \overset{(e)}{=}\|\bbb{y}^{k} - \bbb{x}^k\|_{\bbb{G}^k}^2\nonumber \end{align} \noindent where step $(a)$ uses the Pythagoras relation that $\|\bbb{x}-\bbb{z}\|_{2}^2 - \|\bbb{y}-\bbb{z}\|_{2}^2 =\|\bbb{x}-\bbb{y}\|_{2}^2 + 2\langle \bbb{y}-\bbb{z},\bbb{x} - \bbb{y} \rangle,~\forall \bbb{x},~\bbb{y},~\bbb{z}$; step $(b)$ uses the update rule for $\bbb{x}^{k+1}$; step $(c)$ uses the fact that $\bbb{x}^k - \bbb{x}^*=(\bbb{x}^k-\bbb{y}^k)+(\bbb{y}^k-\bbb{x}^*)$; step $(d)$ uses (\ref{eq:dis:prop:1}); step $(e)$ uses the definition of $\bbb{G}^k$.
(ii) We have the following inequalities: \begin{eqnarray} \label{eq:Q:delta} \bbb{Q} \overset{(a)}{=} 2\bbb{B} - \tfrac{1}{2}\bbb{A} \overset{(b)}{\succeq} 2\bbb{B} - \bbb{A} \overset{(c)}{=} \bbb{B} - \bbb{C} \overset{(d)}{\succeq} \delta \bbb{I} \end{eqnarray} \noindent where step $(a)$ uses the definition of $\bbb{Q}$ in (\ref{eq:dis:ok}); step $(b)$ uses $\tfrac{1}{2} \bbb{A} \preceq \bbb{A}$; step $(c)$ uses the fact that $\bbb{A}=\bbb{B}+\bbb{C}$; step $(d)$ uses the fact that $\forall \bbb{z},~\bbb{z}^T(\bbb{B}-\bbb{C})\bbb{z}\geq \delta\|\bbb{z}\|_2^2$, which is due to (\ref{eq:upperbound:0}). \noindent Then we derive the following inequalities: \begin{equation}\begin{split} \label{eq:strong:negative} \bbb{G}^k = \alpha^k (\tfrac{1}{2}\alpha^k \bbb{P} -\bbb{Q}) &\overset{(a)}{\preceq } \alpha^k (\alpha^k \bbb{B}^T\bbb{B} - \delta \bbb{I} ) \overset{(b)}{\prec}\bbb{0}\nonumber\\ \end{split}\end{equation} \noindent where step $(a)$ uses (\ref{eq:Q:delta}); step $(b)$ uses the choice that $0<\alpha^k<\frac{\delta}{\|\bbb{B}^T\bbb{B}\|}$. (iii) We define $\bbb{v}\triangleq\bbb{y}^k-\bbb{x}^k$. Minimizing the right-hand side of (\ref{eq:dis:ok}) over the variable $\alpha$, we obtain (\ref{eq:dis:final2}). \begin{equation}\begin{split} \label{eq:dis:final2} \alpha^* = \arg \min_{\alpha}~\psi(\alpha) \triangleq \tfrac{1}{2}(\|\bbb{v}\|^2_{\bbb{P}}) \alpha^2 - (\|\bbb{v}\|_{\bbb{Q}}^2) \alpha.\\ \end{split}\end{equation} \noindent Setting the gradient of the quadratic function $\psi(\alpha)$ to zero, we obtain the optimal solution $\alpha^* = { \|\bbb{v}\|_{\bbb{Q}}^2 }/\|\bbb{v}\|_{\bbb{P}}^2$. We obtain $\psi(\alpha^*) = \tfrac{1}{2} \|\bbb{v}\|_{\bbb{P}}^2 \cdot \frac{ \|\bbb{v}\|_{\bbb{Q}}^2 }{\|\bbb{v}\|_{\bbb{P}}^2} \cdot \frac{ \|\bbb{v}\|_{\bbb{Q}}^2 }{\|\bbb{v}\|_{\bbb{P}}^2} - \|\bbb{v}\|_{\bbb{Q}}^2\cdot \frac{ \|\bbb{v}\|_{\bbb{Q}}^2 }{\|\bbb{v}\|_{\bbb{P}}^2} = -\tfrac{1}{2}\tfrac{\|\bbb{v}\|_{\bbb{Q}}^4}{ \|\bbb{v}\|_{\bbb{P}}^2}$. Combining this with $\|\bbb{v}\|_{\bbb{Q}}^2 \geq \delta \|\bbb{v}\|_2^2$ from (\ref{eq:Q:delta}) and $\|\bbb{v}\|_{\bbb{P}}^2 = 2\|\bbb{Bv}\|_2^2 \leq 2\|\bbb{B}^T\bbb{B}\|\|\bbb{v}\|_2^2$, we have $\psi(\alpha^*) \leq - \tfrac{\delta^2 \|\bbb{v}\|_2^2 }{4 \|\bbb{B}^T\bbb{B}\|}$. \end{proof} \end{theorem} \noindent \textbf{Remarks.} \textbf{(i)} The iterates generated by Algorithm \ref{alg:gmsa:C} satisfy the monotone/contractive property. Therefore, the convergence properties in Theorem \ref{theorem:strong} are stronger than the results in Theorem \ref{theorem:general:rate} and Theorem \ref{theorem:general:rate2}. \textbf{(ii)} There are two methods to decide the value of $\alpha^k$ in Algorithm \ref{alg:gmsa:C}. One method is to use the global constant indicated in part (ii) of Theorem \ref{theorem:strong}, and the other is to use the local, per-iteration constant of part (iii). We remark that the local choice is more desirable in practice since it better captures the local structure of the problem and does not require a sparse eigenvalue solver to estimate $\|\bbb{B}^T\bbb{B}\|$. \subsection{Acceleration via Richardson Extrapolation} \label{sect:ext:acc} This subsection discusses using Richardson extrapolation to further accelerate GMSA. We introduce a parameter $\theta^k \in (0,\infty)$ and consider the following iterative procedure: \begin{align} \begin{split} \label{eq:acceleration} \bbb{y}^{k} = ~& \mathcal{T}(\bbb{x}^k) \\ \bbb{x}^{k+1} =~& \bbb{x}^{k} + \theta^k (\bbb{y}^{k}-\bbb{x}^{k}). \end{split}\end{align} \noindent Note that the SOR update rule is not a special case of this formulation, since the relaxation of SOR is applied coordinate-wise within the forward sweep, whereas $\theta^k$ here is applied only after a full sweep.
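Before the formal choice of $\theta^k$, the procedure (\ref{eq:acceleration}) can be sketched in a few lines of Python. This is a minimal sketch only, assuming \texttt{T} is any fixed-point map (e.g., the triangle proximal operator) and using a fixed $\theta$; the adaptive choice of $\theta^k$ is derived next.
\begin{verbatim}
def extrapolate(T, x0, theta=1.5, iters=100):
    # x^{k+1} = x^k + theta * (T(x^k) - x^k); theta > 1 extrapolates
    # (Richardson), while 0 < theta < 1 damps the iteration.
    x = x0
    for _ in range(iters):
        x = x + theta * (T(x) - x)
    return x
\end{verbatim}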
Values of $0<\theta^k <1$ are often used to help establish convergence of the iterative procedure, while values of $\theta^k >1$ are used to speed up convergence of a slow-converging procedure, which is also known as Richardson extrapolation \cite{Varga1962Matrix}. Such a strategy is closely related to Nesterov's extrapolation acceleration strategy \cite{beck2009fast,nesterov2013introductory}. The following proposition provides important insights on how to choose the parameter $\theta^k$. \begin{proposition} We consider the following optimization problem: \begin{eqnarray}\label{eq:optimal:theta} \min_{\theta}~ \varphi (\theta) &\triangleq& \|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2 - \|\bbb{x}^k - \bbb{x}^*\|_2^2~\\ &=&\|\bbb{x}^{k} + \theta (\bbb{y}^k - \bbb{x}^{k})-\bbb{x}^*\|_2^2 - \|\bbb{x}^k - \bbb{x}^*\|_2^2 \nonumber \end{eqnarray} \noindent The optimal solution of (\ref{eq:optimal:theta}) can be computed as $\theta^*=\frac{\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle}{\|\bbb{x}^{k}-\bbb{y}^{k}\|_2^2}$. In addition, if $\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle \neq 0$ for all $k$, there exists a constant $0<\nu<1$ such that $\|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2 \leq \nu^{k+1}\|\bbb{x}^{0}-\bbb{x}^*\|_2^2$. \begin{proof} From (\ref{eq:optimal:theta}), we have: \begin{eqnarray} \label{eq:jjjj} \varphi (\theta)&=&\|\theta (\bbb{y}^k-\bbb{x}^k)+\bbb{x}^k-\bbb{x}^*\|_2^2 - \|\bbb{x}^k - \bbb{x}^*\|_2^2 \nonumber\\ &=&\theta^2\| \bbb{y}^k-\bbb{x}^k\|_2^2 + 2\theta\langle \bbb{x}^k-\bbb{x}^*, \bbb{y}^k-\bbb{x}^k\rangle~~~~~~~~ \end{eqnarray} \noindent Setting the gradient of the quadratic function $\varphi (\theta)$ to zero, we obtain: $\theta^*=\frac{\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle}{\|\bbb{x}^{k}-\bbb{y}^{k}\|_2^2}$. Plugging the optimal solution $\theta^*$ into (\ref{eq:jjjj}), we obtain: $\varphi (\theta^*) ~=~ \frac{\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle^2}{\|\bbb{x}^{k}-\bbb{y}^{k}\|_2^2} ~+ 2 \frac{\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle}{\|\bbb{x}^{k}-\bbb{y}^{k}\|_2^2} \langle \bbb{x}^k-\bbb{x}^*, \bbb{y}^k-\bbb{x}^k\rangle =~ -\frac{\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle^2}{\|\bbb{x}^{k}-\bbb{y}^{k}\|_2^2}$. Under the assumption that $\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle \neq 0$, we have $\|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2 - \|\bbb{x}^k - \bbb{x}^*\|_2^2<0$ for all $k$. Hence there exists a constant $0<\nu<1$ such that $\|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2\leq \nu \|\bbb{x}^k - \bbb{x}^*\|_2^2$ for all $k$. Solving this recursive inequality, we obtain: $\|\bbb{x}^{k+1}-\bbb{x}^*\|_2^2 \leq \nu \|\bbb{x}^k - \bbb{x}^*\|_2^2 \leq \nu^2 \|\bbb{x}^{k-1} - \bbb{x}^*\|_2^2 \leq ...\leq \nu^{k+1} \|\bbb{x}^0 - \bbb{x}^*\|_2^2$. \end{proof} \end{proposition} \noindent \textbf{Remarks.} \textbf{(i)} The assumption $\langle \bbb{x}^{k}-\bbb{x}^{*},~\bbb{x}^{k}-\bbb{y}^{k} \rangle \neq 0$ also implies that $\bbb{x}^{k}$ is not the optimal solution, since it holds that $\bbb{x}^{k} \neq \bbb{x}^{*}$ and $\bbb{x}^{k} \neq \bbb{y}^{k}$ when $\bbb{x}^{k}$ is not the optimal solution. \textbf{(ii)} This step size selection strategy is attractive since it guarantees the contraction of $\|\bbb{x}^k - \bbb{x}^*\|^2_2$. However, it is not practical since the optimal solution $\bbb{x}^*$ is unknown. In what follows, we consider a practical surrogate. Since $\bbb{y}^k$ is the currently available solution closest to $\bbb{x}^*$, we replace $\bbb{x}^*$ with $\bbb{y}^k$, and $k$ with $k-1$.
We obtain the following update rule for $\theta^k$: \begin{eqnarray} \theta^k = \frac{\langle \bbb{x}^{k-1}-\bbb{y}^{k},~\bbb{x}^{k-1}-\bbb{y}^{k-1} \rangle}{\|\bbb{x}^{k-1}-\bbb{y}^{k-1}\|_2^2}. \nonumber \end{eqnarray} \noindent We summarize our accelerated generalized matrix splitting algorithm in Algorithm \ref{alg:acc:gmsa}. Note that we set $\theta^k=1$ in the first iteration ($k=0$) and introduce two additional parameters $L$ and $U$ to prevent $\theta^k$ from becoming arbitrarily small or large. Since the strategy in (\ref{eq:acceleration}) is expected to achieve acceleration when $\theta^k>1$, we set $L=1$ and $U=10$ as the default parameters for Algorithm \ref{alg:acc:gmsa}. \begin{algorithm} [!h] \fontsize{9.5}{12}\selectfont \caption{\label{alg:acc:gmsa} {\bbb{GMSA-A}: \bbb{G}eneralized \bbb{M}atrix \bbb{S}plitting \bbb{A}lgorithm with Richardson Extrapolation \bbb{A}cceleration for Solving (\ref{eq:main}).}} \begin{algorithmic}[1] \STATE Choose suitable parameters $\{\omega,~\epsilon\}$.~Initialize $\bbb{x}^0$.\\ \STATE \text{for $k=0,1,2,...$}\\ \STATE~~~$\bbb{y}^{k} = \mathcal{T}(\bbb{x}^k)$ \STATE~~~\text{if $k=0$} \STATE~~~\text{~~~~$\theta^k=1$} \STATE~~~\text{else} \STATE~~~\text{~~~~$\theta^k=\frac{\langle \bbb{x}^{k-1}-\bbb{y}^{k},~\bbb{x}^{k-1}-\bbb{y}^{k-1} \rangle}{\|\bbb{x}^{k-1}-\bbb{y}^{k-1}\|_2^2}$} \STATE~~~\text{~~~~$\theta^k = \min[U,\max(L,\theta^k)]$} \STATE~~~\text{end~if} \STATE~~~$\bbb{x}^{k+1} = \bbb{x}^{k} + \theta^k (\bbb{y}^{k}-\bbb{x}^{k}) $ \STATE \text{end for}\\ \STATE Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} \subsection{When h is Nonconvex} \label{sect:ext:noncvx} When $h(\bbb{x})$ is nonconvex, our theoretical analysis breaks down in (\ref{eq:fail}) and the exact solution to the triangle proximal operator $\mathcal{T}(\bbb{x}^k)$ in (\ref{eq:subproblem}) cannot be guaranteed. However, our Gaussian elimination procedure in Algorithm \ref{alg:sub} can still be applied. One only needs to solve the one-dimensional nonconvex subproblem in (\ref{eq:1d:subp}). For example, when $h_j(t)=\lambda |t|_0,~\forall j=1,2,...,n$ (\emph{i.e.} the $\ell_0$ norm), it has an analytical solution: $t^* = {\tiny \left\{ \begin{array}{cc} -\bbb{w}_j/\bbb{B}_{j,j}, & {\bbb{w}_j^2 > 2\lambda \bbb{B}_{j,j}} \\ 0, & {\bbb{w}_j^2 \leq 2\lambda \bbb{B}_{j,j}} \end{array} \right.}$; when $h_j(t)=\lambda |{t}|^p,~\forall j=1,2,...,n$ and $p<1$, it admits a closed-form solution for certain values of $p$ \cite{xu2012regularization,Cao0X13}, such as $p=\frac{1}{2}$ or $\frac{2}{3}$. Our generalized matrix splitting algorithm is guaranteed to converge even when ${h}(\cdot)$ is nonconvex. Specifically, we present the following theorem. \begin{theorem} (Proof of Global Convergence when ${h}(\cdot)$ is Nonconvex) We define $\delta\triangleq \epsilon +\frac{1-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta\in(0,\infty)$. Assuming the nonconvex one-dimensional subproblem in (\ref{eq:1d:subp}) can be solved globally and analytically, we have: (i) \begin{eqnarray} \label{eq:nonconvex:suf:dec} \textstyle f(\bbb{x}^{k+1}) - f(\bbb{x}^k) \leq - \frac{\delta}{2} \|\bbb{x}^{k+1}-\bbb{x}^k\|_2^2 \leq 0 \end{eqnarray} \noindent (ii) Algorithm \ref{alg:main} is globally convergent.
\begin{proof} (i) Due to the optimality of the one-dimensional subproblem in (\ref{eq:1d:subp}), for all $j=1,2,...,n$, we have: \begin{eqnarray} \textstyle \tfrac{1}{2}\bbb{B}_{j,j}(\bbb{x}^{k+1}_j)^2 + (\bbb{u}_j + \sum_{i=1}^{j-1} \bbb{B}_{j,i}\bbb{x}^{k+1}_{i}) \bbb{x}^{k+1}_j + h_j(\bbb{x}^{k+1}_j) \nonumber \\ \leq \textstyle\tfrac{1}{2}\bbb{B}_{j,j}{t}_j^2 + (\bbb{u}_j + \sum_{i=1}^{j-1} \bbb{B}_{j,i}\bbb{x}^{k+1}_{i}) {t}_j + h_j({t}_j),~\forall {t}_j~~~~\nonumber \end{eqnarray} \noindent Letting ${t}_1=\bbb{x}^k_1,~{t}_2=\bbb{x}^k_2,~...~,{t}_n=\bbb{x}^k_n$, we obtain: \begin{eqnarray} &\textstyle \tfrac{1}{2} \sum_{i=1}^n\bbb{B}_{i,i} (\bbb{x}_i^{k+1})^2 + \langle \bbb{u} + \bbb{Lx}^{k+1},\bbb{x}^{k+1} \rangle + h(\bbb{x}^{k+1})\nonumber\\ &\textstyle \leq\tfrac{1}{2}\sum_{i=1}^n\bbb{B}_{i,i} (\bbb{x}_i^{k})^2 + \langle \bbb{u}+\bbb{Lx}^{k+1},\bbb{x}^{k} \rangle + h(\bbb{x}^{k})\nonumber \end{eqnarray} \noindent Since $\bbb{u} = \bbb{b}+\bbb{C}\bbb{x}^k$, we obtain the following inequality: \begin{eqnarray} \textstyle f^{k+1} + \tfrac{1}{2}\langle \bbb{x}^{k+1},(\tfrac{1}{\omega}\bbb{D}+ \epsilon \bbb{I}+2\bbb{L}-\bbb{A})\bbb{x}^{k+1} + 2 \bbb{C}\bbb{x}^k \rangle\nonumber\\ \leq f^k + \tfrac{1}{2} \langle \bbb{x}^{k},(\tfrac{1}{\omega}\bbb{D}+\epsilon \bbb{I}+2\bbb{C}-\bbb{A}) \bbb{x}^{k} + 2\bbb{L}\bbb{x}^{k+1}\rangle~~~~\nonumber \end{eqnarray} \noindent By denoting $\bbb{S} \triangleq \bbb{L}-\bbb{L}^T$ and $\bbb{T}\triangleq(\omega-1)/\omega\bbb{D}-\epsilon \bbb{I}$, we have: $\tfrac{1}{\omega}\bbb{D}+\epsilon \bbb{I}+2\bbb{L}-\bbb{A}=\bbb{S}-\bbb{T}$, $\tfrac{1}{\omega}\bbb{D}+\epsilon \bbb{I}+2\bbb{C}-\bbb{A}=\bbb{T}-\bbb{S}$, and $\bbb{L}-\bbb{C}^T=-\bbb{T}$. Therefore, we have the following inequalities: \begin{align} &~\textstyle f^{k+1} - f^k~~~~ \nonumber\\ \leq&~~ \tfrac{1}{2}\langle \bbb{x}^{k+1},\bbb{T}\bbb{x}^{k+1}\rangle + \tfrac{1}{2}\langle \bbb{x}^k,\bbb{T}\bbb{x}^k\rangle - \langle \bbb{x}^k, \bbb{T}\bbb{x}^{k+1}\rangle \nonumber\\ &~-\tfrac{1}{2}\langle \bbb{x}^{k+1}, \bbb{S}\bbb{x}^{k+1}\rangle - \tfrac{1}{2}\langle \bbb{x}^k, \bbb{S}\bbb{x}^k\rangle \nonumber\\ \overset{(a)}{=}&~\tfrac{1}{2}\langle \bbb{x}^k-\bbb{x}^{k+1}, \bbb{T}(\bbb{x}^k-\bbb{x}^{k+1})\rangle\overset{(b)}{\leq}-\tfrac{\delta }{2}\|\bbb{x}^{k+1}-\bbb{x}^{k}\|_2^2\nonumber \end{align} \noindent where step ($a$) uses $\langle \bbb{x},\bbb{S}\bbb{x}\rangle =0~\forall \bbb{x}$, since $\bbb{S}$ is a skew-symmetric matrix; step ($b$) uses $\bbb{T} + \delta \bbb{I} = \tfrac{\omega-1}{\omega}(\bbb{D}-\min(diag(\bbb{D}))\bbb{I}) \preceq \bbb{0}$, which holds whenever $\omega\in(0,1]$ since $\bbb{D}-\min(diag(\bbb{D}))\bbb{I}\succeq \bbb{0}$. Thus, we obtain the sufficient decrease inequality in (\ref{eq:nonconvex:suf:dec}). (ii) Based on the sufficient decrease inequality in (\ref{eq:nonconvex:suf:dec}), we have: $f(\bbb{x}^k)$ is a non-increasing sequence, $\|\bbb{x}^k-\bbb{x}^{k+1}\|\rightarrow 0$, and $f(\bbb{x}^{k+1})<f(\bbb{x}^k)$ if $\bbb{x}^k\neq\bbb{x}^{k+1}$. We note that (\ref{eq:opt:bound0}) can still be applied even when ${h}(\cdot)$ is nonconvex. Using the same methodology as in the second part of Theorem \ref{theorem:1}, we obtain that $\nabla q(\bbb{x}^{k+1}) + \partial h(\bbb{x}^{k+1}) \rightarrow \bbb{0}$, which implies the convergence of the algorithm. Note that guaranteeing $\delta\in(0,\infty)$ can be achieved by simply choosing $\omega\in(0,1)$ and setting $\epsilon$ to a small number.
\end{proof} \end{theorem} \subsection{When q is not Quadratic} \label{sect:ext:nonq} This subsection discusses how to adapt GMSA to solve (\ref{eq:main}) even when $q(\cdot)$ is not quadratic but is convex and twice differentiable. Following previous work \cite{TsengY09,yuan2014newton}, we keep the nonsmooth function $h(\cdot)$ and approximate the smooth function $q(\cdot)$ around the current solution $\bbb{x}^k$ by its second-order Taylor expansion: \begin{align} \begin{split} \mathcal{Q}(\bbb{y},\bbb{x}^k)~\triangleq& ~h(\bbb{y})+ q(\bbb{\bbb{x}}^k) + \langle \nabla q(\bbb{\bbb{x}}^k) , \bbb{y} - \bbb{\bbb{x}}^k\rangle + \nonumber \\ &~\tfrac{1}{2} (\bbb{y} - \bbb{\bbb{x}}^k)^T \nabla^2 q(\bbb{\bbb{x}}^k) (\bbb{y} - \bbb{\bbb{x}}^k)\nonumber \end{split} \end{align} \noindent where $\nabla q(\bbb{\bbb{x}^k})$ and $\nabla^2 q(\bbb{\bbb{x}^k})$ denote the gradient and Hessian of $q(\bbb{x})$ at $\bbb{x}^k$, respectively. In order to generate the next solution that decreases the objective, one can minimize the quadratic model above by solving \begin{eqnarray} \label{eq:newton:exact} \textstyle \bbb{y}^{k} = \arg\min_{\bbb{y}}~\mathcal{Q}(\bbb{y},\bbb{x}^k). \end{eqnarray} \noindent One then performs a line search via the update $\bbb{x}^{k+1} \Leftarrow \bbb{x}^k + \alpha^k (\bbb{y}^{k}-\bbb{x}^k)$ for sufficient descent in the objective (as in the damped Newton method). Here, $\alpha^k\in(0,1]$ is the step size selected by backtracking line search. \begin{algorithm} [!t] \fontsize{9.5}{12}\selectfont \caption{\label{alg:non:quad} {Generalized Matrix Splitting Algorithm for Minimizing non-Quadratic Composite Functions in (\ref{eq:main}).}} \begin{algorithmic}[1] \STATE Choose suitable parameters $\{\omega,~\epsilon\}$.~Initialize $\bbb{x}^0$.\\ \STATE \text{for $k=0,1,2,...$}\\ \STATE~~~Define $\bbb{A}=\nabla^2 q(\bbb{x}^k)$,~$\bbb{b}=\nabla q(\bbb{x}^k) - \nabla^2 q(\bbb{x}^k)\bbb{x}^k$ \STATE~~~$\bbb{y}^k = \mathcal{T}(\bbb{x}^k;\bbb{A},\bbb{b},h)$ \STATE~~~Define $\bbb{d}^k \triangleq \bbb{y}^k-\bbb{x}^k$ \\ \STATE~~~\label{defdef}Define ${\Delta}^k \triangleq \langle \nabla q(\bbb{\bbb{x}}^k) ,\bbb{d}^k \rangle + h(\bbb{x}^k+\bbb{d}^k) - h(\bbb{x}^k)$ \STATE~~~Find the largest $\alpha^k \in \{\eta^0,\eta^1,...\}$ such that \begin{eqnarray} \label{eq:armijo:cond} f(\bbb{x}^k+\alpha^k \bbb{d}^k) \leq f(\bbb{x}^k) + \alpha^k \tau \Delta^k \end{eqnarray} \vspace{-10pt} \STATE~~~$\bbb{x}^{k+1}=\bbb{x}^k + \alpha^k \bbb{d}^k$ \STATE \text{end for}\\ \STATE Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} In practice, one does not need to solve the Newton approximation subproblem in (\ref{eq:newton:exact}) exactly, and one iteration suffices for global convergence. We use $\bbb{y}^{k} = \mathcal{T}(\bbb{x}^k;\bbb{A}^k,\bbb{b}^k,h)$ to denote one iteration of GMSA with the parameters $\{\bbb{A}^k,\bbb{b}^k,h\}$. Note that both $\bbb{A}^k$ and $\bbb{b}^k$ change with $k$. We use $\{\bbb{B}^k,\bbb{C}^k,\bbb{D}^k,\bbb{L}^k\}$ to denote the associated matrices of $\bbb{A}^k$ using the same splitting strategy as in (\ref{eq:matrix:dec}). In some places, we drop the superscript $k$ for simplicity since it can be inferred from the context. We summarize our algorithm to solve the general convex composite problem in Algorithm \ref{alg:non:quad}. In each iteration, Algorithm \ref{alg:non:quad} calls Algorithm \ref{alg:main} once to compute the next point $\bbb{y}^k$.
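One outer iteration can be sketched in Python as follows. This is a sketch only: \texttt{f}, \texttt{grad\_q}, \texttt{hess\_q}, \texttt{h} and \texttt{gmsa\_step} are assumed callables (the names are illustrative), with \texttt{gmsa\_step} performing one GMSA iteration on the local quadratic model; the backtracking rule in the \texttt{while} loop is detailed in the next paragraph.
\begin{verbatim}
def newton_gmsa_step(x, f, grad_q, hess_q, h, gmsa_step,
                     eta=0.1, tau=0.25):
    # One outer iteration of the non-quadratic extension (sketch).
    A = hess_q(x)                 # A^k = Hessian of q at x^k
    b = grad_q(x) - A @ x         # b^k = grad q(x^k) - A^k x^k
    y = gmsa_step(A, b, x)        # y^k: one GMSA iteration on the model
    d = y - x                     # search direction d^k
    delta = grad_q(x) @ d + h(y) - h(x)   # Delta^k (model decrease)
    alpha = 1.0                   # try alpha in {eta^0, eta^1, ...}
    while f(x + alpha * d) > f(x) + alpha * tau * delta:
        alpha *= eta
    return x + alpha * d
\end{verbatim}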
Based on the search direction $\bbb{d}^k = \bbb{y}^k-\bbb{x}^k$, we employ Armijo's rule and try step sizes $\alpha \in \{\eta^0,\eta^1,...\}$ with a constant decrease rate $0<\eta<1$ until we find the smallest $t\in \mathbb{N}$ with $\alpha=\eta^t$ such that $\bbb{x}^k+\alpha \bbb{d}^k$ satisfies the sufficient decrease condition. A typical choice for the parameters $\{\eta,~\tau\}$ is $\{0.1,~0.25\}$. In what follows, we present our convergence analysis for Algorithm \ref{alg:non:quad}. The following lemma is useful in our proof. \begin{lemma}\label{lemma:tail:bound} Let $\Delta^k$ be defined in Line \ref{defdef} of Algorithm \ref{alg:non:quad}. It holds that: $$\Delta^k \leq -\langle \bbb{d}^k, \bbb{B}^k \bbb{d}^k\rangle.$$ \begin{proof} It is not hard to see that one GMSA iteration amounts to the following inclusion: \begin{eqnarray} \bbb{0} \in \bbb{A}^k \bbb{y}^k + \nabla q(\bbb{x}^k) - \bbb{A}^k\bbb{x}^k + \partial h(\bbb{y}^k) + \bbb{C}^k(\bbb{x}^k-\bbb{y}^k) \nonumber \end{eqnarray} \noindent Using the definition that $\bbb{d}^k \triangleq \bbb{y}^k-\bbb{x}^k$ and $\bbb{A}^k = \bbb{B}^k + \bbb{C}^k$, we have: \begin{eqnarray} \label{eq:hbbb} \bbb{0} \in \bbb{B}^k \bbb{d}^k + \nabla q(\bbb{x}^k) + \partial h(\bbb{y}^k) \end{eqnarray} \noindent We further derive the following inequalities: \begin{align} &h(\bbb{x}^k+\bbb{d}^k) - h(\bbb{x}^k) \overset{(a)}{=} h(\bbb{y}^k) - h(\bbb{x}^k) \nonumber\\ \overset{(b)}{\leq}& \langle \bbb{y}^k-\bbb{x}^k, h' \rangle,~\forall h' \in \partial h(\bbb{y}^k) \overset{(c)}{=} \langle \bbb{d}^k, -\bbb{B}^k \bbb{d}^k - \nabla q(\bbb{x}^k) \rangle \nonumber \end{align} \noindent where step $(a)$ uses the fact that $\bbb{x}^k+\bbb{d}^k=\bbb{y}^k$; step $(b)$ uses the convexity of $h(\cdot)$; step $(c)$ uses (\ref{eq:hbbb}) and $\bbb{y}^k-\bbb{x}^k=\bbb{d}^k$. Rearranging terms finishes the proof. \end{proof} \end{lemma} \begin{theorem} \label{theorem:conv} We define $\delta^k \triangleq ({1}/{\omega}-{1}/{2})\min(diag(\bbb{D}^k))+\epsilon$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta^k\in(0,\infty)$. Assuming that the gradient of $q(\cdot)$ is $L$-Lipschitz continuous, we have the following results: (i) There always exists a strictly positive constant $\alpha^k$ such that the descent condition in (\ref{eq:armijo:cond}) is satisfied. (ii) The sequence $\{f(\bbb{x}^k)\}_{k=0}^{\infty}$ is nonincreasing and Algorithm \ref{alg:non:quad} is globally convergent. \begin{proof} For simplicity, we drop the iteration counter $k$ as it can be inferred from the context. First of all, for all $\bbb{v}\in\mathbb{R}^n$, we have: \begin{align} \label{eq:bound:normB} \bbb{v}^T\bbb{B} \bbb{v} & \overset{(a)}{=} \bbb{v}^T(\tfrac{1}{2}\bbb{L}+ \tfrac{1}{2}\bbb{L}^T+ \tfrac{1}{2}\bbb{D} + (\tfrac{1}{\omega}-\tfrac{1}{2}) \bbb{D}+\epsilon \bbb{I} )\bbb{v} \nonumber\\ & \overset{(b)}{=} \tfrac{1}{2} \bbb{v}^T\bbb{A}\bbb{v} + \bbb{v}^T[(\tfrac{1}{\omega}-\tfrac{1}{2})\bbb{D}+\epsilon \bbb{I} ]\bbb{v} \nonumber\\ &\overset{(c)}{\geq} 0+ \delta^k\|\bbb{v}\|_2^2 \end{align} \noindent where step $(a)$ uses the definition of $\bbb{B}$ that $\bbb{B}=\bbb{L} + \frac{1}{\omega} \bbb{D}+\epsilon \bbb{I}$ and the fact that $\bbb{v}^T\bbb{Lv}=\bbb{v}^T\bbb{L}^T\bbb{v}$; step $(b)$ uses $\bbb{A}=\bbb{L}+\bbb{L}^T+\bbb{D}$; step $(c)$ uses the fact that $\bbb{A}$ is positive semidefinite for convex problems.
For any $\alpha\in(0,1]$, we have the following results: \begin{align} &~f(\bbb{x}+\alpha \bbb{d}) - f(\bbb{x}) \nonumber\\ =&~ q(\bbb{x}+\alpha \bbb{d}) - q(\bbb{x}) + h(\bbb{x}+\alpha \bbb{d}) - h(\bbb{x}) \nonumber\\ \overset{(a)}{\leq}&~ q(\bbb{x}+\alpha \bbb{d}) - q(\bbb{x}) + \alpha [h(\bbb{x}+ \bbb{d}) - h(\bbb{x})] \nonumber\\ \overset{(b)}{\leq}& ~\tfrac{\alpha^2L}{2} \|\bbb{d}\|_2^2 + \alpha [\langle \bbb{d} , \nabla q(\bbb{x}) \rangle + h(\bbb{x}+ \bbb{d}) - h(\bbb{x})] \nonumber\\ \overset{(c)}{=} &~ \tfrac{\alpha^2L}{2} \|\bbb{d}\|_2^2 + \alpha \Delta^k \overset{(d)}{\leq} - \tfrac{\alpha^2L}{2} \tfrac{\Delta^k}{\delta^k} + \alpha \Delta^k \nonumber\\ \overset{(e)}{=} & ~(1-\tfrac{\alpha L}{2\delta^k})\alpha \Delta^k \leq \tau \alpha \Delta^k \nonumber \end{align} \noindent where step $(a)$ uses the fact that $h(\bbb{x} +\alpha \bbb{d} ) = h(\alpha (\bbb{x}+\bbb{d}) + (1-\alpha)\bbb{x}) \leq \alpha h(\bbb{x}+\bbb{d}) + (1-\alpha) h(\bbb{x})$, which is due to the convexity of $h(\cdot)$; step $(b)$ uses the inequality $q(\bbb{y})\leq q(\bbb{x}) +\langle \nabla q(\bbb{x}), \bbb{y}-\bbb{x} \rangle + \tfrac{L}{2}\|\bbb{y}-\bbb{x}\|_2^2$ with $\bbb{y}=\bbb{x}+\alpha \bbb{d}$; step $(c)$ uses the definition of $\Delta^k$ in Algorithm \ref{alg:non:quad}; step $(d)$ uses Lemma \ref{lemma:tail:bound} that $\Delta^k\leq -\langle \bbb{d}^k, \bbb{B}^k\bbb{d}^k \rangle \leq -\delta^k \|\bbb{d}^k\|_2^2$, which is due to (\ref{eq:bound:normB}); step $(e)$ uses the choice that $0<\alpha<\min[1,{2\delta^k (1-\tau)}/{L}]$. (ii) We obtain the following inequality: \begin{align} \label{eq:conv} \forall k,~f(\bbb{x}^{k+1}) - f(\bbb{x}^{k})~\leq ~- \tau \alpha^k \delta^k \|\bbb{d}^k\|_{2}^2 ~~\text{with} ~~\tau \alpha^k \delta^k >0,\nonumber \end{align} \noindent and the sequence $f(\bbb{x}^k)$ is non-increasing. Using the same methodology as in the second part of Theorem \ref{theorem:1}, we have $\bbb{d}^k \rightarrow 0$ as $k\rightarrow \infty$. Therefore, any cluster point $\bar{\bbb{x}}$ of the sequence $\{\bbb{x}^k\}$ satisfies $\bar{\bbb{x}}=\mathcal{T}(\bar{\bbb{x}})$. From (\ref{eq:opt:bound0}) in Lemma \ref{lemma:opt:ineq}, we obtain: $\bbb{0} = -\bbb{C}(\bar{\bbb{x}}-\mathcal{T}(\bar{\bbb{x}})) \in \nabla q(\mathcal{T}(\bar{\bbb{x}})) + \partial h(\mathcal{T}(\bar{\bbb{x}}))$. Therefore, we conclude that any such cluster point is a global optimal solution. \end{proof} \end{theorem} \subsection{Adapting into ADMM Optimization Framework} \label{sect:ext:admm} This subsection shows how to adapt GMSA into the general optimization framework of ADMM \cite{HeY12,HeY15} to solve the following structured convex problem: \begin{eqnarray} \label{eq:structured:general:convex2} \min_{\bbb{x},\bbb{y}}~\tfrac{1}{2}\bbb{x}^T\bbb{Q}\bbb{x} + \bbb{x}^T\bbb{p} + h(\bbb{x}) + r(\bbb{y}),~\bbb{Ex}+\bbb{z}=\bbb{y} \end{eqnarray} \noindent where $\bbb{x}\in\mathbb{R}^{n}$ and $\bbb{y}\in\mathbb{R}^{m}$ are decision variables, and $\bbb{Q}\in\mathbb{R}^{n\times n},~\bbb{p}\in\mathbb{R}^n,~\bbb{E}\in\mathbb{R}^{m\times n},~\bbb{z}\in\mathbb{R}^m$ are given. We assume that $\bbb{Q}$ is positive semidefinite and $r(\cdot)$ is simple and convex (not necessarily separable) such that its proximal operator $\text{prox}_{r}(\bbb{a}) = \arg \min_{\bbb{x}} ~\frac{1}{2}\|\bbb{x}-\bbb{a}\|_2^2 + r(\bbb{x})$ can be evaluated analytically.
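As an example of such an analytical evaluation, and one relevant to the $\ell_\infty$ norm appearing in the Dantzig selector experiments, the following Python sketch evaluates $\text{prox}_{\lambda \|\cdot\|_\infty}$ via the Moreau decomposition, which reduces it to a Euclidean projection onto the $\ell_1$ ball; the sort-based projection is standard, the function names are illustrative, and $\lambda>0$ is assumed.
\begin{verbatim}
import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection onto {x : ||x||_1 <= radius} (sort-based).
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(a, lam):
    # prox_{lam*||.||_inf}(a) = a - lam * proj_{l1 ball}(a / lam),
    # by the Moreau decomposition (the conjugate of ||.||_inf is the
    # indicator of the unit l1 ball); requires lam > 0.
    return a - lam * project_l1_ball(a / lam)
\end{verbatim}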
We let $\mathcal{L}: \mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m \mapsto \mathbb{R}$ be the augmented Lagrangian function of (\ref{eq:structured:general:convex2}): \begin{align}\begin{split} &\mathcal{L}(\bbb{x},~\bbb{y},~\bbb{\pi}) = \tfrac{1}{2}\bbb{x}^T\bbb{Q}\bbb{x} + \bbb{x}^T\bbb{p} + h(\bbb{x}) + r(\bbb{y})\\ &+ \langle \bbb{Ex}+\bbb{z}-\bbb{y},~\bbb{\pi} \rangle + \tfrac{\beta}{2}\| \bbb{Ex}+\bbb{z}-\bbb{y}\|_2^2 \nonumber \end{split}\end{align} \noindent where $\bbb{\pi}\in\mathbb{R}^m$ is the multiplier associated with the equality constraint $\bbb{Ex}+\bbb{z}=\bbb{y}$, and $\beta>0$ is the penalty parameter. We summarize our algorithm for solving (\ref{eq:structured:general:convex2}) in Algorithm \ref{alg:admm}. The algorithm optimizes one block of primal variables at a time while keeping the other primal and dual variables fixed, and updates the dual variables via gradient ascent. The $\bbb{y}$-subproblem admits a closed-form solution: completing the square in $\bbb{y}$ gives $\bbb{y}^{k+1} = \text{prox}_{r/\beta}(\bbb{Ex}^{k+1}+\bbb{z}+\bbb{\pi}^k/\beta)$. For the $\bbb{x}$-subproblem, since the smooth part is quadratic and the nonsmooth part is separable, it can be solved by Algorithm \ref{alg:main}. Algorithm \ref{alg:admm} is convergent if the $\bbb{x}$-subproblem is solved exactly, since it then reduces to classical ADMM \cite{HeY12}. We remark that, similar to linearized ADMM \cite{HeY12}, Algorithm \ref{alg:admm} is also observed to converge empirically even if we solve the $\bbb{x}$-subproblem only approximately. \begin{algorithm} [!t] \fontsize{9.5}{12}\selectfont \caption{\label{alg:admm} {\textbf{GMSA-ADMM}: \textbf{G}eneralized \textbf{M}atrix \textbf{S}plitting \textbf{A}lgorithm-based \textbf{A}lternating \textbf{D}irection \textbf{M}ethod of \textbf{M}ultipliers for Solving (\ref{eq:structured:general:convex2}).}} \begin{algorithmic}[1] \STATE Choose $\omega\in(0,2),~ \epsilon\in[0,\infty)$.~Initialize $\bbb{x}^0$.\\ \STATE \text{for $k=0,1,2,...$}\\ \STATE~~~Use Algorithm \ref{alg:main} to solve the following problem:\\ ~~~~~~~~~~~~~~~$\bbb{x}^{k+1} = \arg\min_{\bbb{x}}~\mathcal{L}(\bbb{x},~\bbb{y}^{k},~\bbb{\pi}^k)$ \STATE~~~$\bbb{y}^{k+1} = \arg\min_{\bbb{y}}~\mathcal{L}(\bbb{x}^{k+1},~\bbb{y},~\bbb{\pi}^k) $ \STATE~~~$\bbb{\pi}^{k+1} = \bbb{\pi}^{k} + \beta (\bbb{Ex}^{k+1}+\bbb{z}-\bbb{y}^{k+1})$ \STATE \text{end for}\\ \STATE Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} \begin{figure*} [!t] \centering \subfloat[ \footnotesize $m=200,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_linf_1-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=200,~\bbb{x}^0 = rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_linf_2-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_linf_3-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_linf_4-eps-converted-to.pdf}}\\ \subfloat[\footnotesize $m=200,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l1_1-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=200,~\bbb{x}^0 = rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l1_2-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l1_3-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = 
rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l1_4-eps-converted-to.pdf}}\\ \subfloat[ \footnotesize $m=200,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l0_1-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=200,~\bbb{x}^0 = rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l0_2-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = \bbb{0}$]{\includegraphics[width=0.25\textwidth,height=0.14\textheight]{demo_l0_3-eps-converted-to.pdf}} \subfloat[ \footnotesize $m=500,~\bbb{x}^0 = rand(n,1)$]{\includegraphics[width=0.25\textwidth,height=0.12\textheight]{demo_l0_4-eps-converted-to.pdf}} \caption{Convergence behavior for solving the convex non-negative least squares problem (first row): $\min_{\bbb{x}\geq \bbb{0}}~\tfrac{1}{2}\|\bbb{Cx}-\bbb{d}\|_2^2$, the convex $\ell_1$ norm regularized least squares problem (second row): $\min_{\bbb{x}}~\tfrac{1}{2}\|\bbb{Cx}-\bbb{d}\|_2^2+\|\bbb{x}\|_1$, and the non-convex $\ell_0$ norm regularized least squares problem (third row): $\min_{\bbb{x}}~\tfrac{1}{2}\|\bbb{Cx}-\bbb{d}\|_2^2+0.1\|\bbb{x}\|_0$ with $\bbb{C}\in\mathbb{R}^{m\times n}$ and $\bbb{d}\in\mathbb{R}^{m}$ being generated from a standard Gaussian distribution. Here $rand(n,1)$ is a function that returns a random vector sampled from a uniform $[0,1]$ distribution. } \label{fig:sample:l01inf} \end{figure*} \subsection{When x is a Matrix} In many applications (e.g. nonnegative matrix factorization and sparse coding), the solution is a matrix, and the problem takes the following form: $\min_{\bbb{X}\in \mathbb{R}^{n\times r}}~~\tfrac{1}{2}tr(\bbb{X}^T\bbb{A}\bbb{X}) + tr(\bbb{X}^T\bbb{R})+ h(\bbb{X})$, where $\bbb{R}\in \mathbb{R}^{n\times r}$. Our matrix splitting algorithm can still be applied in this case. Using the same technique to decompose $\bbb{A}$ as in (\ref{eq:matrix:dec}): $\bbb{A}=\bbb{B}+\bbb{C}$, one needs to replace (\ref{eq:subproblem}) with the following nonlinear inclusion: $\bbb{0} \in \bbb{BZ}^* + \bbb{U} + \partial h(\bbb{Z}^*)$, where $\bbb{U}=\bbb{R}+\bbb{C}\bbb{X}^k$. It can be decomposed into $r$ independent components. By updating every column of $\bbb{X}$, the proposed algorithm can be used to solve the matrix problem above. Thus, our algorithm can also make good use of existing parallel architectures to solve the matrix optimization problem. \section{Introduction} In this paper, we focus on the following composite function minimization problem: \begin{eqnarray} \label{eq:main} \min_{\bbb{x}}~f(\bbb{x}) \triangleq q(\bbb{x}) + h(\bbb{x});~q(\bbb{x}) = \tfrac{1}{2}\bbb{x}^T\bbb{A}\bbb{x} + \bbb{x}^T\bbb{b} \end{eqnarray} \noindent where $\bbb{x} \in\mathbb{R}^n,~\bbb{b} \in \mathbb{R}^n$, $\bbb{A}\in \mathbb{R}^{n\times n}$ is a symmetric positive semidefinite matrix, and $h(\bbb{x}): \mathbb{R}^{n}\mapsto \mathbb{R}$ is a piecewise separable function (\emph{i.e.}~$h(\bbb{x})=\sum_{i=1}^n h_i (\bbb{x}_i)$) but not necessarily convex. Typical examples of $h(\bbb{x})$ include the indicator function of bound constraints and the $\ell_0$ and $\ell_1$ norm functions. We assume that $f(\bbb{x})$ is bounded below for any feasible solution $\bbb{x}$.
The optimization in (\ref{eq:main}) is flexible enough to model a variety of applications of interest in both computer vision and machine learning, including compressive sensing \cite{Donoho06}, nonnegative matrix factorization \cite{lee1999learning,lin2007projected,guan2012nenmf}, sparse coding \cite{lee2006efficient,aharon2006img,BaoJQS16,Quan2016CVPR}, support vector machines \cite{hsieh2008dual}, logistic regression \cite{yu2011dual}, and subspace clustering \cite{elhamifar2013sparse}, to name a few. Although we only focus on the quadratic function $q(\cdot)$, our method can be extended to handle general non-quadratic composite functions by considering a Newton approximation of the objective \cite{TsengY09,yuan2014newton}, and to solve general linearly constrained problems by using the associated augmented Lagrangian function of the problem \cite{HeY12,HeY15}. The most popular method for solving problem (\ref{eq:main}) is perhaps the proximal gradient method \cite{nesterov2013introductory,beck2009fast}. It considers a fixed-point proximal iterative procedure $\bbb{x}^{k+1}= \text{prox}_{\gamma h}\left(\bbb{x}^k - \gamma \nabla q(\bbb{x}^k)\right)$ based on the current gradient $\nabla q(\bbb{x}^k)$. Here the proximal operator $\text{prox}_{\tilde{h}}(\bbb{a}) = \arg \min_{\bbb{x}} ~\frac{1}{2}\|\bbb{x}-\bbb{a}\|_2^2 + \tilde{h}(\bbb{x})$ can often be evaluated analytically, and $\gamma={1}/{L}$ is the step size with $L$ being the local (or global) Lipschitz constant. The method is guaranteed to decrease the objective gap at a rate of $\mathcal{O}({L}/{k})$, where $k$ is the iteration number, and the accelerated proximal gradient method can further boost the rate to $\mathcal{O}({L}/{k^2})$. Tighter estimates of the local Lipschitz constant lead to faster convergence, but they incur additional computational overhead to compute $L$. Our method is also a fixed-point iterative method, but it does not rely on a sparse eigenvalue solver or line search backtracking to compute such a Lipschitz constant, and it can exploit the specific structure of the quadratic Hessian matrix $\mathbf{A}$. The proposed method is essentially a generalization of the classical Gauss-Seidel (GS) method and the Successive Over-Relaxation (SOR) method \cite{demmel1997applied,saad2003iterative}. In numerical linear algebra, the Gauss-Seidel method, also known as the successive displacement method, is a fast iterative method for solving a linear system of equations. It works by solving a sequence of triangular matrix equations. The method of SOR is a variant of the GS method and it often leads to faster convergence. Similar iterative methods for solving linear systems include the Jacobi method and symmetric SOR. Our proposed method can solve a broad class of composite function minimization problems, while inheriting the efficiency of modern linear algebra techniques. Our method is closely related to coordinate gradient descent and its variants such as randomized coordinate descent \cite{hsieh2008dual,patrascu2014iteration}, cyclic coordinate descent \cite{sun2015improved}, block coordinate descent \cite{nesterov2012efficiency,beck2013convergence,hong2013iteration}, randomized block coordinate descent \cite{richtarik2014iteration,lu2015complexity}, accelerated randomized coordinate descent \cite{nesterov2012efficiency,lin2015accelerated,lu2013randomized} and others \cite{liu2015asynchronous,johnson2013accelerating,zeng2015gauss}. However, all these works are based on gradient-descent type iterations and a constant Lipschitz step size.
They work by solving a first-order majorization/surrogate function via closed-form updates. Their algorithmic designs and convergence results cannot be directly applied here. In contrast, our method does not rely on computing a Lipschitz-constant step size; instead, it adopts a triangle matrix factorization strategy, where the triangle subproblem can be solved by an alternating cyclic coordinate strategy. We are aware that matrix splitting algorithms have been considered for solving symmetric linear complementarity problems \cite{Luo1991,Luo1992,Iusem1993} and second-order cone complementarity problems \cite{zhang2014efficient} in the literature. However, we focus on minimizing a general separable nonsmooth composite function, which is different from theirs. In addition, our algorithm is derived from a novel triangle operator mapping, which can be computed exactly using a new Gaussian elimination procedure. It is worthwhile to mention that matrix splitting has recently been extended to operator splitting to solve multi-term nonsmooth convex composite optimization problems \cite{ShenLYM17}. \textbf{Contributions.} \textbf{(i)} We propose a new Generalized Matrix Splitting Algorithm (GMSA) for composite function minimization (See Section \ref{sect:alg}). Our method is derived from a novel triangle proximal operator (See Subsection \ref{sect:alg:subprob}). We establish the global convergence, convergence rate, and iteration complexity of GMSA for convex problems (See Subsection \ref{sect:alg:convergence}). \textbf{(ii)} We discuss several important extensions of GMSA (see Section \ref{sect:extension}). First, we consider a new correction strategy to achieve pointwise contraction for the proposed method (See Subsection \ref{sect:ext:strong}). Second, we discuss using the Richardson extrapolation technique to further accelerate GMSA (See Subsection \ref{sect:ext:acc}). Third, we extend GMSA to solve nonconvex problems with a global convergence guarantee (See Subsection \ref{sect:ext:noncvx}). Fourth, we discuss how to adapt GMSA to minimize non-quadratic functions (See Subsection \ref{sect:ext:nonq}). Fifth, we show how to incorporate GMSA into the general optimization framework of the Alternating Direction Method of Multipliers (ADMM) (See Subsection \ref{sect:ext:admm}). \textbf{(iii)} Our extensive experiments on nonnegative matrix factorization, sparse coding and Dantzig selectors have shown that GMSA achieves state-of-the-art performance in terms of both efficiency and efficacy (See Section \ref{sect:exp}). A preliminary version of this paper appeared in \cite{YuanZG17}. \bbb{Notation.} We use lowercase and uppercase boldfaced letters to denote real vectors and matrices respectively. The Euclidean inner product between $\bbb{x}$ and $\bbb{y}$ is denoted by $\langle \bbb{x},\bbb{y}\rangle$ or $\bbb{x}^T\bbb{y}$. We denote $\|\bbb{x}\|=\|\bbb{x}\|_2=\sqrt{\langle \bbb{x},\bbb{x} \rangle}$, $\|\bbb{x}\|_{\bbb{A}} = \sqrt{\bbb{x}^T\bbb{A}\bbb{x}}$, and $\|\bbb{C}\|$ as the spectral norm (\emph{i.e.} the largest singular value) of $\bbb{C}$. We denote the $i^{\text{th}}$ element of vector $\bbb{x}$ as $\bbb{x}_i$ and the $(i,j)^{\text{th}}$ element of matrix $\mathbf{C}$ as $\mathbf{C}_{i,j}$. $diag(\bbb{D}) \in \mathbb{R}^{n}$ is a column vector formed from the main diagonal of $\bbb{D}\in\mathbb{R}^{n\times n}$. $\bbb{C}\succeq0$ and $\bbb{C}\succ0$ indicate that the matrix $\bbb{C}\in\mathbb{R}^{n\times n}$ is positive semidefinite and positive definite, respectively.
Here $\bbb{C}$ is not necessarily symmetric \footnote{$\bbb{C}\succeq0 \Leftrightarrow \forall \bbb{x},~\bbb{x}^T\bbb{Cx}\geq 0 \Leftrightarrow \forall \bbb{x},~\frac{1}{2}\bbb{x}^T(\bbb{C}+\bbb{C}^T)\bbb{x}\geq 0$}. We denote by $\bbb{D}$ the diagonal matrix of $\bbb{A}$ and by $\bbb{L}$ the strictly lower triangular part of $\bbb{A}$ \footnote{For example, when $n=3$, $\bbb{D}$ and $\bbb{L}$ take the following form: \\$\textstyle \renewcommand\arraystretch{1} \setlength{\arraycolsep}{3pt} \bbb{D}=\begin{bmatrix} \bbb{A}_{1,1} & 0 & 0 \\ 0 & \bbb{A}_{2,2} & 0 \\ 0 & 0 & \bbb{A}_{3,3} \\ \end{bmatrix},~\bbb{L}=\begin{bmatrix} 0 & 0 & 0 \\ \bbb{A}_{2,1} & 0 & 0 \\ \bbb{A}_{3,1} & \bbb{A}_{3,2} &0 \\ \end{bmatrix}$}. Thus, we have $\bbb{A} = \bbb{L}+\bbb{D}+\bbb{L}^T$. Throughout this paper, $\bbb{x}^k$ denotes the value of $\bbb{x}$ at the $k$-th iteration if $\bbb{x}\in\mathbb{R}^n$ is a variable, and ${x}^k$ denotes the $k$-th power of ${x}$ if ${x}\in\mathbb{R}$ is a constant scalar. We use $\bbb{x}^*$ to denote any element of the optimal solution set of (\ref{eq:main}). For notational simplicity, we denote: \begin{equation}\begin{split} &~~~~~~~~~\bbb{r}^k\triangleq \bbb{x}^{k} - \bbb{x}^*,~\bbb{d}^k \triangleq \bbb{x}^{k+1}-\bbb{x}^k \nonumber\\ &u^k \triangleq f(\bbb{x}^{k})-f(\bbb{x}^*),~f^k \triangleq f(\bbb{x}^k),~f^* \triangleq f(\bbb{x}^*)\nonumber \end{split}\end{equation} \section*{Acknowledgments} \noindent This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research. This work was also supported by the NSF-China (61772570, 61472456, 61522115, 61628212). \bibliographystyle{plain} \section{Proposed Algorithm} \label{sect:alg} This section presents our proposed Generalized Matrix Splitting Algorithm (GMSA) for solving (\ref{eq:main}). Throughout this section, we assume that $h(\bbb{x})$ is convex and postpone the discussion for nonconvex $h(\bbb{x})$ to Section \ref{sect:ext:noncvx}. Our solution algorithm is derived from a fixed-point iterative method based on the first-order optimality condition of (\ref{eq:main}). It is not hard to validate that a solution $\bbb{x}$ is the optimal solution of (\ref{eq:main}) if and only if $\bbb{x}$ satisfies the following nonlinear inclusion: \begin{equation}\begin{split} \label{eq:opt:cond} \textstyle \bbb{0} &\in \partial f(\bbb{x}) \\ &= \nabla q(\bbb{x}) + \partial h(\bbb{x})= \bbb{Ax} + \bbb{b} + \partial h(\bbb{x}) \end{split}\end{equation} \noindent where $\nabla q(\bbb{x})$ and $\partial h(\bbb{x})$ denote the gradient of $q(\cdot)$ and the subdifferential of $h(\cdot)$ at $\bbb{x}$, respectively. In numerical analysis, a point $\bbb{x}$ is called a fixed point if it satisfies the equation $\bbb{x} \in \mathcal{T}(\bbb{x})$, for some operator $\mathcal{T}(\cdot)$. Converting the optimality condition $\bbb{0}\in\partial f(\bbb{x})$ algebraically into the form $\bbb{x} \in \mathcal{T}(\bbb{x})$, we obtain the following iterative scheme with recursive relation: \begin{eqnarray} \label{eq:iterative} \textstyle \bbb{x}^{k+1}\in \mathcal{T}(\bbb{x}^k),~k = 0, 1, 2,... \end{eqnarray} We now discuss how to adapt our algorithm into the iterative scheme in (\ref{eq:iterative}).
First, we split the matrix $\bbb{A}$ in (\ref{eq:opt:cond}) using the following strategy: \begin{eqnarray}\label{eq:matrix:dec} \textstyle \bbb{A} = \underbrace{\bbb{L}+\tfrac{1}{\omega}\bbb{D}+ \epsilon \bbb{I}}_{\bbb{B}} + \underbrace{ \bbb{L}^T + \tfrac{\omega-1}{\omega}\bbb{D} - \epsilon \bbb{I}}_{\bbb{C}} \end{eqnarray} \noindent Here, $\omega \in (0,2)$ is a relaxation parameter and $\epsilon\in[0,\infty)$ is a parameter for strong convexity that enforces $diag(\bbb{B})>\bbb{0}$. These parameters are specified by the user beforehand. Using these notations, we obtain the following optimality condition which is equivalent to (\ref{eq:opt:cond}): \begin{eqnarray} \label{eq:kkt:BC} \textstyle \textstyle-\bbb{Cx} - \bbb{b} \in (\bbb{B}+\partial h ) (\bbb{x}) \nonumber \end{eqnarray} \noindent Then, we have the following equivalent fixed-point equation: \begin{eqnarray}\label{eq:P} \textstyle \bbb{x} \in \mathcal{T}(\bbb{x};\bbb{A},\bbb{b},h) \triangleq (\bbb{B}+\partial h )^{-1}(-\bbb{Cx} - \bbb{b}) \end{eqnarray} \noindent For notational simplicity, we denote $\mathcal{T}(\bbb{x};\bbb{A},\bbb{b},h)$ as $\mathcal{T}(\bbb{x})$ since $\{\bbb{A},\bbb{b},h\}$ can be known from the context. We name $\mathcal{T}$ the triangle proximal operator, which is novel in this paper\footnote{This is in contrast with Moreau's proximal operator \cite{parikh2014proximal}: $\text{prox}_{h}(\bbb{a}) = \arg\min_{\bbb{x}} ~\frac{1}{2}\|\bbb{x}-\bbb{a}\|_2^2 + h(\bbb{x})=(\bbb{I}+\partial h)^{-1}(\bbb{a})$, where the mapping $(\bbb{I}+\partial h)^{-1}$ is called the resolvent of the subdifferential operator $\partial h$.}. Due to the triangular structure of the matrix $\bbb{B}$ and the element-wise separable structure of $h(\cdot)$, the triangle proximal operator $\mathcal{T}(\bbb{x})$ in (\ref{eq:P}) can be computed exactly and analytically, by a generalized Gaussian elimination procedure (discussed later in Section \ref{sect:alg:subprob}). Our generalized matrix splitting algorithm iteratively applies $\bbb{x}^{k+1} \Leftarrow\mathcal{T}(\bbb{x}^k)$ until convergence. We summarize our algorithm in Algorithm \ref{alg:main}. In what follows, we show how to compute $\mathcal{T}(\bbb{x})$ in (\ref{eq:P}) in Section \ref{sect:alg:subprob}, and then we study the convergence properties of Algorithm \ref{alg:main} in Section \ref{sect:alg:convergence}. \subsection{Computing the Triangle Proximal Operator}\label{sect:alg:subprob} We now present how to compute the triangle proximal operator in (\ref{eq:P}), which is based on a new generalized Gaussian elimination procedure. Notice that (\ref{eq:P}) seeks a solution $\bbb{z}^*\triangleq \mathcal{T}(\bbb{x}^k)$ that satisfies the following nonlinear system: \begin{eqnarray} \label{eq:subproblem} \textstyle \bbb{0} \in \bbb{B}\bbb{z}^* + \bbb{u} + \partial h(\bbb{z}^*) ,~\text{where}~\bbb{u}=\bbb{b} + \bbb{C}\bbb{x}^k \end{eqnarray} \noindent By taking advantage of the triangular form of $\bbb{B}$ and the element-wise/decomposable structure of $h(\cdot)$, the elements of $\bbb{z}^*$ can be computed sequentially using forward substitution.
Specifically, the above equation can be written as a system of nonlinear equations: { \fontsize{8.7}{8.7}\selectfont \renewcommand\arraystretch{1} \setlength{\arraycolsep}{1.5pt} \begin{eqnarray} \label{eq:sub:nonlinear} \bbb{0} \in \begin{bmatrix} \bbb{B}_{1,1}& 0 & 0 & 0 &0 \\ \bbb{B}_{2,1}& \bbb{B}_{2,2} & 0 & 0 &0 \\ \vdots & \vdots & \ddots & 0 & 0 \\ \bbb{B}_{n-1,1} &\bbb{B}_{n-1,2} & \cdots & \bbb{B}_{n-1,n-1} & 0\\ \bbb{B}_{n,1}& \bbb{B}_{n,2} & \cdots & \bbb{B}_{n,n-1} &\bbb{B}_{n,n} \\ \end{bmatrix} \begin{bmatrix} \bbb{z}^*_{1} \\ \bbb{z}^*_{2}\\ \vdots \\ \bbb{z}^*_{n-1} \\ \bbb{z}^*_{n} \\ \end{bmatrix} + \bbb{u} + \partial h(\bbb{z}^*) \nonumber \end{eqnarray}}{\normalsize}\noindent If $\bbb{z}^*$ satisfies the equations above, it must solve the following one-dimensional subproblems: \begin{eqnarray} 0 \in \bbb{B}_{j,j} \bbb{z}^*_j + \bbb{w}_j + \partial h_j{(\bbb{z}^*_j)},~\forall j=1,2,~...~,n,\nonumber\\ \textstyle \bbb{w}_j=\bbb{u}_j + \sum_{i=1}^{j-1} \bbb{B}_{j,i}\bbb{z}^*_{i}~~~~~~~~~~~~~~~~~\nonumber \end{eqnarray} \noindent This is equivalent to solving the following one-dimensional problem for all $j=1,2,...,n$: \begin{eqnarray}\label{eq:1d:subp} \textstyle \bbb{z}^*_j= t^* \triangleq \underset{t}{\arg\min}~~\tfrac{1}{2}\bbb{B}_{j,j} t^2 + \bbb{w}_j t + h_j(t) \end{eqnarray} \noindent Note that the computation of $\bbb{z}^{*}$ uses only the elements of $\bbb{z}^{*}$ that have already been computed and a successive displacement strategy is applied to find $\bbb{z}^{*}$. We remark that the one-dimensional subproblem in (\ref{eq:1d:subp}) often admits a closed form solution for many problems of interest. For example, when $h_j(t)=I_{[lb,ub]}(t),~\forall j =1,2,...,n$ with $I_{[lb,ub]}(t)$ denoting an indicator function on the box constraint $lb\leq t\leq ub$, the optimal solution can be computed as: $t^* = \min(ub,\max(lb,-\bbb{w}_j/\bbb{B}_{j,j}))$; when $h_j(t)=\lambda |t|,~\forall j=1,2,...,n$ (\emph{i.e.} in the case of the $\ell_1$ norm), the optimal solution can be computed as: $t^* = - \max\left(0,| \bbb{w}_j/\bbb{B}_{j,j}|-\lambda/\bbb{B}_{j,j}\right) \cdot \rm sign\left(\bbb{w}_j/\bbb{B}_{j,j}\right) $. Our generalized Gaussian elimination procedure for computing $\mathcal{T}(\bbb{x}^k)$ is summarized in Algorithm \ref{alg:sub}. Note that its computational complexity is $\mathcal{O}(n^2)$, which is the same as computing a matrix-vector product. 
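As an illustration, the following Python sketch (not the reference implementation) instantiates the splitting (\ref{eq:matrix:dec}) and the forward-substitution procedure for the $\ell_1$ case $h_j(t)=\lambda|t|$, using the soft-thresholding solution given above; dense \texttt{numpy} arrays and the function name are illustrative assumptions.
\begin{verbatim}
import numpy as np

def triangle_prox_l1(A, b, x, lam, omega=1.0, eps=0.01):
    # One GMSA iteration for h(x) = lam * ||x||_1: solve
    # 0 in B z + u + partial h(z), u = b + C x, by forward substitution.
    n = A.shape[0]
    D = np.diag(np.diag(A))            # diagonal part of A
    L = np.tril(A, k=-1)               # strictly lower triangular part
    B = L + D / omega + eps * np.eye(n)
    C = A - B                          # so that A = B + C
    u = b + C @ x
    z = np.zeros(n)
    for j in range(n):                 # successive displacement
        w = u[j] + B[j, :j] @ z[:j]
        r = -w / B[j, j]               # unconstrained minimizer
        z[j] = np.sign(r) * max(0.0, abs(r) - lam / B[j, j])  # soft threshold
    return z

# Usage sketch: iterate x <- T(x) until the update is small, e.g.
#   x = np.zeros(n)
#   for _ in range(300): x = triangle_prox_l1(A, b, x, lam=0.1)
\end{verbatim}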
\begin{algorithm} [!t] \fontsize{9.5}{12}\selectfont \caption{\label{alg:main} {\bbb{GMSA}: A Generalized Matrix Splitting Algorithm for Solving the Composite Function Minimization Problem in (\ref{eq:main})}} \begin{algorithmic}[1] \STATE Choose suitable parameters $\{\omega,\epsilon\}$.~Initialize $\bbb{x}^0$, ~$k=0$.\\ \STATE \text{while not converge}\\ \STATE~~~$\bbb{x}^{k+1} = \mathcal{T}(\bbb{x}^k)$ (Solve (\ref{eq:subproblem}) by Algorithm \ref{alg:sub}) \STATE~~~$k = {k+1}$ \STATE \text{end while}\\ \STATE Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} \begin{algorithm} [!t] \fontsize{9.5}{12}\selectfont \caption{\label{alg:sub} {A Generalized Gaussian Elimination Procedure for Computing the Triangle Proximal Operator $\mathcal{T}(\bbb{x}^k)$.}} \begin{algorithmic}[1] \STATE Input $\bbb{x}^k$\\ \STATE Initialization: compute $\bbb{u}= \bbb{b} + \bbb{C}\bbb{x}^k$\\ \STATE $\bbb{x}_1=\arg\min_{t}\frac{1}{2}\bbb{B}_{1,1}t^2 + (\bbb{u}_1) t + h_1(t)$\\ \STATE $\bbb{x}_2=\arg\min_{t}\frac{1}{2}\bbb{B}_{2,2}t^2 + (\bbb{u}_2 + \bbb{B}_{2,1}\bbb{x}_{1}) t + h_2(t)$\\ \STATE $\bbb{x}_3=\arg\min_{t}\frac{1}{2}\bbb{B}_{3,3}t^2 + (\bbb{u}_3+ \bbb{B}_{3,1}\bbb{x}_{1} + \bbb{B}_{3,2}\bbb{x}_{2}) t + h_3(t)$\\ \STATE ...\\ \STATE $\bbb{x}_{n}=\arg\min_{t}\frac{1}{2}\bbb{B}_{n,n}t^2 + (\bbb{u}_n + \sum_{i=1}^{n-1} \bbb{B}_{n,i}\bbb{x}_{i}) t + h_n(t)$\\ \STATE Collect $(\bbb{x}_1,\bbb{x}_2,\bbb{x}_3,...,\bbb{x}_n)^T$ as $\bbb{x}^{k+1}$ and Output $\bbb{x}^{k+1}$\\ \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis}\label{sect:alg:convergence} In what follows, we present our convergence analysis for Algorithm \ref{alg:main}. \noindent The following lemma characterizes the optimality of the triangle proximal operator $\mathcal{T}(\bbb{x})$ for any $\bbb{x}$. \begin{lemma} \label{lemma:opt:ineq} For all $\bbb{x},\bbb{y}\in\mathbb{R}^n$, it holds that: \begin{eqnarray} \text{(i)}~ \bbb{0} \in ~\nabla q(\mathcal{T}(\bbb{x})) + \partial h(\mathcal{T}(\bbb{x})) + \bbb{C}(\bbb{x}-\mathcal{T}(\bbb{x})) ~ \label{eq:opt:bound0} \end{eqnarray} \vspace{-16pt} \begin{align} \begin{split} \text{(ii)}~h(\mathcal{T}(\bbb{x})) -h(\bbb{y}) + \langle \nabla q(\mathcal{T}(\bbb{x})),\mathcal{T}(\bbb{x}) - \bbb{y} \rangle~~~~~~~~ \\ \leq \langle \bbb{C}(\mathcal{T}(\bbb{x})-\bbb{x}),~\mathcal{T}(\bbb{x})-\bbb{y} \rangle ~~~~~~~~~~~~~ \label{eq:opt:bound00} \end{split} \end{align} \vspace{-5pt} \begin{equation}\begin{split}\label{eq:opt:bound} \text{(iii)}~&~f(\mathcal{T}(\bbb{x}))- f(\bbb{y}) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\ \leq~& ~ \langle \bbb{C}(\mathcal{T}(\bbb{x})-\bbb{x}) ,\mathcal{T}(\bbb{x})-\bbb{y} \rangle - \tfrac{1}{2}\|\mathcal{T}(\bbb{x})-\bbb{y}\|_{\bbb{A}}^2 \end{split}\end{equation} \begin{proof} (i) Using the optimality of $\mathcal{T}(\bbb{x})$ in (\ref{eq:subproblem}), we derive the following results: $\bbb{0} \in \bbb{B}\mathcal{T}(\bbb{x}) + \partial h(\mathcal{T}(\bbb{x})) + \bbb{b} + \bbb{C} \bbb{x} \overset{(a)}{\Rightarrow} \bbb{0} \in \bbb{A}\mathcal{T}(\bbb{x}) + \partial h(\mathcal{T}(\bbb{x})) + \bbb{b} + \bbb{C} (\bbb{x}-\mathcal{T}(\bbb{x}))\overset{(b)}{\Rightarrow} \bbb{0} \in \nabla q(\mathcal{T}(\bbb{x})) + \partial h(\mathcal{T}(\bbb{x})) + \bbb{C} (\bbb{x}-\mathcal{T}(\bbb{x}))$, where step $(a)$ uses $\bbb{B}=\bbb{A}-\bbb{C}$ and step $(b)$ uses the definition of $\nabla q(\cdot)$ in (\ref{eq:opt:cond}). 
(ii) Since $h(\cdot)$ is convex, we have: \begin{eqnarray}\label{eq:fail} \forall \bbb{s},~\bbb{z},~ h(\bbb{s}) - h(\bbb{z}) \leq \langle h' ,\bbb{s} - \bbb{z} \rangle,~\forall h' \in \partial h(\bbb{s}). \end{eqnarray} \noindent Letting $\bbb{s}=\mathcal{T}(\bbb{x})$ and $\bbb{z}=\bbb{y}$, we derive the following inequalities: $\forall h' \in \partial h(\mathcal{T}(\bbb{x})),~h(\mathcal{T}(\bbb{x})) - h(\bbb{y})\overset{}{\leq}~ \langle h' ,\mathcal{T}(\bbb{x}) - \bbb{y} \rangle \overset{(a)}{\leq}~ \langle - \nabla q(\mathcal{T}(\bbb{x})) - \bbb{C}(\bbb{x}-\mathcal{T}(\bbb{x})) ,\mathcal{T}(\bbb{x}) - \bbb{y} \rangle$, where step $(a)$ uses (\ref{eq:opt:bound0}). (iii) Since $q(\cdot)$ is a quadratic function, we have: \begin{eqnarray} \label{eq:quadratic} \forall \bbb{s},~\bbb{z},~q(\bbb{s}) - q(\bbb{z})= \langle \nabla q(\bbb{s}),\bbb{s}-\bbb{z} \rangle - \tfrac{1}{2}\|\bbb{s}-\bbb{z}\|_{\bbb{A}}^2 \end{eqnarray} \noindent We then derive the following results: $f(\mathcal{T}(\bbb{x}))- f(\bbb{y})$$\overset{(a)}{=}$$h(\mathcal{T}(\bbb{x})) - h(\bbb{y})+q(\mathcal{T}(\bbb{x})) - q(\bbb{y})$$\overset{(b)}{=}$$h(\mathcal{T}(\bbb{x})) - h(\bbb{y})+\langle \nabla q(\mathcal{T}(\bbb{x})),\mathcal{T}(\bbb{x})-\bbb{y} \rangle$$ - \tfrac{1}{2}\|\mathcal{T}(\bbb{x})-\bbb{y}\|_{\bbb{A}}^2$$\overset{(c)}{\leq}$$\langle \bbb{C}(\mathcal{T}(\bbb{x})-\bbb{x}),\mathcal{T}(\bbb{x})-\bbb{y} \rangle - $$\tfrac{1}{2}\|\mathcal{T}(\bbb{x})-\bbb{y}\|_{\bbb{A}}^2$, where step $(a)$ uses the definition of $f(\cdot)$; step $(b)$ uses (\ref{eq:quadratic}) with $\bbb{s}=\mathcal{T}(\bbb{x})$ and $\bbb{z}=\bbb{y}$; step $(c)$ uses (\ref{eq:opt:bound00}). \end{proof} \end{lemma} \noindent \bbb{Remarks.} Both (\ref{eq:opt:bound0}) and (\ref{eq:opt:bound00}) can be used to characterize the optimality of (\ref{eq:main}). Recall that we have the following necessary and sufficient conditions for the optimal solution: $\bbb{x}^* \text{~is the optimal solution} \Leftrightarrow \bbb{0} \in \nabla q(\mathcal{T}(\bbb{x}^*)) + \partial h(\mathcal{T}(\bbb{x}^*)) $ $ \Leftrightarrow \langle \nabla q(\mathcal{T}(\bbb{x}^*)),\mathcal{T}(\bbb{x}^*) - \bbb{y} \rangle + h(\mathcal{T}(\bbb{x}^*)) -h(\bbb{y}) \leq 0,~\forall \bbb{y}$. When $\bbb{x}=\mathcal{T}(\bbb{x})$ holds, (\ref{eq:opt:bound0}) and (\ref{eq:opt:bound00}) coincide with the optimality condition and one can conclude that $\bbb{x}$ is the optimal solution. \begin{theorem} \label{theorem:1} (Proof of Global Convergence) We define $\delta \triangleq 2 \epsilon + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,\epsilon\}$ be chosen such that $\delta\in(0,\infty)$. Algorithm \ref{alg:main} is globally convergent. \begin{proof} (i) First, the following results hold for all $\bbb{z}\in\mathbb{R}^n$: \begin{align} \label{eq:upperbound:0} \bbb{z}^T(\bbb{A}-2\bbb{C})\bbb{z} = &~\textstyle \bbb{z}^T( \bbb{B}-\bbb{C}) \bbb{z} \nonumber\\ =&~\textstyle \textstyle \bbb{z}^T(\bbb{L} - \bbb{L}^T + \tfrac{2-\omega}{\omega}\bbb{D} + 2 \epsilon \bbb{I})\bbb{z} \nonumber\\ =&~\textstyle \bbb{z}^T( {2 \epsilon} \bbb{I} + \tfrac{2-\omega}{\omega}\bbb{D} )\bbb{z} \geq \delta \|\bbb{z}\|_2^2 \end{align} \noindent where we have used the definition of $\bbb{A}$ and $\bbb{C}$, and the fact that $\bbb{z}^T \bbb{L}\bbb{z} = (\bbb{z}^T \bbb{L}\bbb{z} )^T = \bbb{z}^T\bbb{L}^T\bbb{z},~\forall \bbb{z}$.
We invoke (\ref{eq:opt:bound}) in Lemma \ref{lemma:opt:ineq} with $\bbb{x}=\bbb{x}^k,~\bbb{y}=\bbb{x}^k$ and combine the inequality in (\ref{eq:upperbound:0}) to obtain: \begin{eqnarray} \label{eq:descent} \textstyle f^{k+1} - f^k \textstyle \leq -\frac{1}{2} \langle \bbb{d}^{k},(\bbb{A}-2\bbb{C})\bbb{d}^{k}\rangle \leq \textstyle -\frac{\delta}{2} \|\bbb{d}^k\|_2^2 \end{eqnarray} \noindent (ii) Second, summing (\ref{eq:descent}) over $i=0,...,k-1$, we have: \begin{align} & ~\textstyle \tfrac{\delta}{2}\sum_{i=0}^{k-1} \|\bbb{d}^{i}\|_2^2 \leq f^0 - f^{k}\overset{(a)}{\leq} \textstyle f^0 - f^*\nonumber\\ \Rightarrow &~\textstyle \tfrac{\delta}{2} \min_{i=0,...,k-1}~\|\bbb{d}^{i}\|_2^2 \leq (f^0 - f^*)/k\nonumber \end{align} \noindent where step $(a)$ uses the fact that $f^*\leq f^{k}$. Note that $f^*$ is finite since $f$ is bounded below. As $k\rightarrow \infty$, we have $ \bbb{d}^{k} \triangleq \bbb{x}^{k+1}-\bbb{x}^k \rightarrow \bbb{0}$, which implies the convergence of the algorithm. Invoking (\ref{eq:opt:bound0}) in Lemma \ref{lemma:opt:ineq} with $\bbb{x}=\bbb{x}^k$, we obtain: $\nabla q(\bbb{x}^{k+1}) + \partial h(\bbb{x}^{k+1}) \ni - \bbb{C}(\bbb{x}^k- \bbb{x}^{k+1} ) \rightarrow \bbb{0}$. In the limit, $\bbb{0} \in \nabla q(\bbb{x}^{k+1}) + \partial h(\bbb{x}^{k+1})$, which implies that $\bbb{x}^{k+1}$ approaches the set of global optimal solutions of the convex problem. Note that guaranteeing $\delta\in(0,\infty)$ can be achieved by simply choosing $\omega\in(0,2)$ and setting $\epsilon$ to a small number. \end{proof} \end{theorem} \noindent \bbb{Remarks.} \textbf{(i)} When $h(\cdot)$ is absent and $\epsilon=0$, Algorithm \ref{alg:main} reduces to the classical Gauss-Seidel method ($\omega=1$) and the Successive Over-Relaxation method ($\omega\neq1$). \textbf{(ii)} When $\bbb{A}$ contains zeros in its diagonal entries, one needs to set $ \epsilon$ to a strictly positive number. This is to guarantee the strong convexity of the one-dimensional subproblem and a bounded solution for any $h(\cdot)$ in (\ref{eq:1d:subp}). The introduction of the parameter $\epsilon$ is novel in this paper and it removes the assumption that $\bbb{A}$ is strictly positive-definite or strictly diagonally dominant, which is used in the classical results on the GS and SOR methods \cite{saad2003iterative,demmel1997applied}. To prove the convergence rate of Algorithm \ref{alg:main}, we make the following assumption, which characterizes the relations between $\mathcal{T}(\bbb{x})$ and $\bbb{x}^*$ for any $\bbb{x}$. \begin{assumption}\label{lemma:local:bound} If $\bbb{x}$ is not the optimum of (\ref{eq:main}), there exists a constant $\eta\in(0,\infty)$ such that $\|\bbb{x} - \bbb{x}^* \| \leq \eta \|\bbb{x} - \mathcal{T}(\bbb{x})\|$. \end{assumption} \noindent \bbb{Remarks.} Assumption \ref{lemma:local:bound} is similar to the classical local proximal error bound assumption in the literature \cite{luo1993error,TsengY09,Tseng10,yun2011block}, and it is mild. Firstly, if $\bbb{x}$ is not the optimum, we have $\bbb{x} \neq \mathcal{T}(\bbb{x})$. This is because when $\bbb{x} = \mathcal{T}(\bbb{x})$, we have $\bbb{0} = -\bbb{C}(\bbb{x}-\mathcal{T}(\bbb{x})) \in \nabla q(\mathcal{T}(\bbb{x})) + \partial h(\mathcal{T}(\bbb{x}))$ (refer to the optimality condition of $\mathcal{T}(\bbb{x})$ in (\ref{eq:opt:bound0})), which contradicts the condition that $\bbb{x}$ is not the optimal solution.
Secondly, by the boundedness of $\bbb{x}$ and $\bbb{x}^*$, there exists a sufficiently large constant $\eta\in(0,\infty)$ such that $\|\bbb{x} - \bbb{x}^* \| \leq \eta \|\bbb{x} - \mathcal{T}(\bbb{x})\|$. We now prove the convergence rate of Algorithm \ref{alg:main}. \begin{theorem} \label{theorem:general:rate} (Proof of Convergence Rate) We define $\delta \triangleq {2 \epsilon} + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta\in(0,\infty)$. Assuming that $\bbb{x}^k$ is bounded for all $k$, we have: \begin{eqnarray} f(\bbb{x}^{k}) - f(\bbb{x}^*) \leq \left(\frac{C_1}{1+C_1}\right)^k [f(\bbb{x}^{0}) - f(\bbb{x}^*)], \label{eq:QQ2}\\ \|\bbb{x}^k - \bbb{x}^{k+1}\|_2^2 \leq \frac{2}{\delta} \left(\frac{C_1}{1+C_1}\right)^k [f(\bbb{x}^{0}) - f(\bbb{x}^*)]. \label{eq:QQ3} \end{eqnarray} \noindent where $C_1 \triangleq 2 \|\bbb{B}\|\eta/\delta - 1$. \begin{proof} Invoking Assumption \ref{lemma:local:bound} with $\bbb{x}=\bbb{x}^k$, we obtain: \begin{eqnarray} \label{eq:opt:bound:ineq} \|\bbb{x}^k - \bbb{x}^*\| \leq \eta \|\bbb{x}^k - \mathcal{T}(\bbb{x}^k)\| ~\Rightarrow~\|\bbb{r}^k\| \leq \eta \|\bbb{d}^k\| \end{eqnarray} \noindent We derive the following inequalities: \begin{align} & ~~~~~~~f^{k+1} - f^* \nonumber \\ &\overset{(a)}{\leq}~ \textstyle \langle \bbb{r}^{k+1} ,\bbb{C} \bbb{d}^k \rangle - \tfrac{1}{2}\langle \bbb{r}^{k+1},\bbb{A}\bbb{r}^{k+1}\rangle \label{eq:linear:conv0}\\ &\overset{(b)}{=}~ \textstyle \langle \bbb{r}^k,~(\bbb{C}-\bbb{A}) \bbb{d}^k \rangle -\tfrac{1}{2} \|\bbb{r}^{k}\|_{\bbb{A}}^2 + \tfrac{1}{2} \|\bbb{d}^{k}\|_{2\bbb{C}-\bbb{A}}^2 \nonumber \\ &\overset{(c)}{\leq}~ \textstyle - \langle \bbb{r}^k,~\bbb{B} \bbb{d}^k \rangle + 0 - \tfrac{\delta}{2} \|\bbb{d}^{k}\|_{2}^2 \nonumber\\ &\overset{(d)}{\leq}~ \textstyle \|\bbb{r}^k\| \|\bbb{B}\| \|\bbb{d}^k\| - \tfrac{\delta}{2} \|\bbb{d}^{k}\|_{2}^2 \overset{(e)}{\leq}~ \textstyle (\eta \|\bbb{B}\|- \tfrac{\delta}{2} ) \|\bbb{d}^k\|_2^2 \nonumber \\ &\overset{(f)}{\leq}~ \textstyle (\eta \|\bbb{B}\|- \tfrac{\delta}{2} ) \frac{2}{\delta} (f^k-f^{k+1}) \overset{(g)}{=}~ \textstyle C_1 (f^k-f^{k+1})\label{eq:linear:conv} \end{align} \noindent where step $(a)$ uses (\ref{eq:opt:bound}) in Lemma \ref{lemma:opt:ineq} with $\bbb{x}=\bbb{x}^k,~\bbb{y}=\bbb{x}^*$; step $(b)$ uses the fact that $\bbb{r}^{k+1}=\bbb{r}^{k}+\bbb{d}^{k}$ and $\bbb{A}=\bbb{B}+\bbb{C}$; step $(c)$ uses $\bbb{A} \succeq \bbb{0}$ and the inequality $\bbb{A}-2\bbb{C}\succeq \delta \bbb{I}$, which is due to (\ref{eq:upperbound:0}); step ($d$) uses the Cauchy-Schwarz inequality $\langle \bbb{x},\bbb{y} \rangle\leq \|\bbb{x}\|\|\bbb{y}\|,~\forall \bbb{x},\bbb{y}\in\mathbb{R}^n$ and the norm inequality $\|\bbb{Bx}\|\leq \|\bbb{B}\|\|\bbb{x}\|,~\forall \bbb{x} \in\mathbb{R}^n$; step ($e$) uses (\ref{eq:opt:bound:ineq}); step $(f)$ uses the descent condition in (\ref{eq:descent}); step $(g)$ uses the definition of $C_1$. Rearranging the last inequality in (\ref{eq:linear:conv}), we have $ f^{k+1} - f^* \leq C_1 (f^k-f^{k+1}) = \textstyle C_1 (f^k-f^*) - C_1(f^{k+1}-f^*) \Rightarrow (1+C_1)[f(\bbb{x}^{k+1}) - f(\bbb{x}^*)] \leq C_1 [f(\bbb{x}^k)-f(\bbb{x}^*)]$, leading to: $\tfrac{f(\bbb{x}^{k+1}) - f(\bbb{x}^*)}{f(\bbb{x}^k)-f(\bbb{x}^*)} \leq \tfrac{C_1}{1+C_1} < 1$. Unrolling this recursion, we obtain (\ref{eq:QQ2}). In other words, the sequence $\{f(\bbb{x}^k)\}_{k=0}^{\infty}$ converges to $f(\bbb{x}^*)$ linearly in the quotient sense.
Using (\ref{eq:descent}), we derive the following inequalities: $\|\bbb{x}^k - \bbb{x}^{k+1}\|_2^2 \leq \frac{2 (f^k - f^{k+1})}{\delta} \leq \frac{2 (f^k - f^* ) }{\delta}$. Combining this with (\ref{eq:QQ2}), we obtain (\ref{eq:QQ3}). \end{proof} \end{theorem} The following lemma is useful in our proof of iteration complexity. \begin{lemma} \label{lemma:quadratic:recursive} Suppose a nonnegative sequence $\{u^k\}_{k=0}^{\infty}$ satisfies $u^{k+1} \leq -2 C + 2C \sqrt{1+ \frac{u^k}{C} }$ for some constant $C> 0$. It holds that: $u^{k+1} \leq \frac{\max(8C,\sqrt{4 Cu^0})}{k+1}$. \begin{proof} We prove this lemma by mathematical induction. We denote $\chi \triangleq \max(8C,\sqrt{4Cu^0})$. (i) When $k=0$, we have $u^{1} \leq -2 C + 2C \sqrt{1+ \frac{1}{C} u^0} \leq -2C + 2C (1+\sqrt{\frac{u^0}{C}} )=2\sqrt{Cu^0}\leq \frac{\chi}{k+1}$. (ii) When $k\geq 1$, we assume that $u^{k} \leq \frac{\chi}{k}$ holds. We derive the following results: $k\geq 1 \Rightarrow \frac{k+1}{k} \leq 2$ $\overset{(a)}{\Rightarrow} 4C \frac{k+1}{k} \leq 8C \leq \chi$ $\overset{(b)}{\Rightarrow} \frac{4C}{k(k+1)} = 4C (\frac{1}{k} - \frac{1}{k+1} ) \leq \frac{\chi}{(k+1)^2}$ $\Rightarrow \frac{4C}{k} \leq \frac{4C}{k+1} + \frac{\chi}{(k+1)^2}$ $\Rightarrow 4C^2 ( 1+ \frac{\chi}{k C}) \leq 4C^2 + \tfrac{4\chi C}{k+1} + \tfrac{\chi^2}{(k+1)^2}$ $\Rightarrow 2C \sqrt{1+\frac{\chi}{k C}} \leq 2C + \frac{\chi}{k+1}$ $\Rightarrow -2C + 2C \sqrt{1+\frac{\chi}{kC}} \leq \frac{\chi}{k+1}$ $\overset{(c)}{\Rightarrow} -2C + 2C \sqrt{1+\frac{u^k}{C}} \leq \frac{\chi}{k+1}$ $\Rightarrow u^{k+1} \leq \frac{\chi}{k+1}$. Here, step $(a)$ uses $8C\leq \chi$; step $(b)$ uses the identity $\frac{1}{k(k+1)} = \frac{1}{k}-\frac{1}{k+1}$; step $(c)$ uses $u^k \leq \frac{\chi}{k}$. \end{proof} \end{lemma} We now prove the iteration complexity of Algorithm \ref{alg:main}. \begin{theorem} \label{theorem:general:rate2} (Proof of Iteration Complexity) We define $\delta \triangleq {2 \epsilon} + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta \in(0,\infty)$. Writing $u^k \triangleq f^k - f^*$ and assuming that $\|\bbb{x}^k\|\leq R$ for all $k$, we have: \begin{eqnarray} \textstyle u^k \leq \begin{cases} u^0(\frac{2C_3}{2C_3+1})^k, &\mbox{if~$\sqrt{u^{k}-u^{k+1}} \geq \frac{C_2}{C_3}$, which can occur only for $k\leq \breve{k}$},\\ \frac{C_4}{k}, &\mbox{otherwise}, \end{cases}\nonumber \end{eqnarray} \noindent where $C_2 \triangleq 2 R\|\bbb{C}\| \sqrt{{2}/{\delta}}$, $C_3 \triangleq \frac{2}{\delta}\|\bbb{C}\|$, $C_4 \triangleq \max(8C_2^2,\sqrt{4C_2^2u^0})$, and $\breve{k}$ is some unknown iteration index.
\begin{proof} We have the following inequalities: \begin{align} &~~~~~~ u^{k+1} \nonumber\\ & \overset{(a)}{\leq} \textstyle \langle \bbb{r}^{k+1} ,\bbb{C} \bbb{d}^k \rangle - \tfrac{1}{2}\langle \bbb{r}^{k+1},\bbb{A}\bbb{r}^{k+1}\rangle \nonumber\\ & \overset{(b)}{\leq} \textstyle \langle \bbb{r}^{k}+\bbb{d}^{k} ,\bbb{C} \bbb{d}^k \rangle +0 \nonumber \\ & \overset{(c)}{\leq} \textstyle \|\bbb{r}^{k}\| \cdot \|\bbb{C}\| \cdot\|\bbb{d}^k\| + \|\bbb{C}\| \cdot \|\bbb{d}^k\|_2^2 \nonumber \\ & \overset{(d)}{\leq} \textstyle 2 R\|\bbb{C}\| \cdot\|\bbb{d}^k\| + \|\bbb{C}\| \cdot \|\bbb{d}^k\|_2^2 \nonumber \\ & \overset{(e)}{\leq} \textstyle 2 R\|\bbb{C}\| \cdot \sqrt{\tfrac{2}{\delta} (u^k-u^{k+1})} + \|\bbb{C}\| \cdot \tfrac{2}{\delta} \cdot (u^k-u^{k+1}) \nonumber \\ &\overset{(f)}{=} C_2 \sqrt{u^k-u^{k+1}} + C_3 (u^k-u^{k+1})\label{eq:recursion:u} \end{align} \noindent where step $(a)$ uses (\ref{eq:linear:conv0}); step $(b)$ uses the fact that $\bbb{r}^{k+1}=\bbb{r}^{k}+\bbb{d}^{k},~\bbb{A}\succeq \bbb{0}$; step $(c)$ uses the Cauchy-Schwarz inequality and the norm inequality; step $(d)$ uses the fact that $\|\bbb{r}^{k}\|_2 \leq \|\bbb{x}^{k}\|_2 + \|\bbb{x}^{*}\|_2\leq 2R$; step $(e)$ uses (\ref{eq:descent}); step $(f)$ uses the definition of $C_2$ and $C_3$. Now we consider the two cases for the recursion formula in (\ref{eq:recursion:u}): (i) $\sqrt{u^k-u^{k+1}} \geq \frac{C_2}{C_3}$, which occurs for $k\leq \breve{k}$; (ii) $\sqrt{u^k-u^{k+1}} < \frac{C_2}{C_3}$, which occurs for $k>\breve{k}$. In case (i), (\ref{eq:recursion:u}) implies that $u^{k+1}\leq 2 C_3 (u^{k}-u^{k+1})$ and rearranging terms gives: $u^{k+1}\leq \frac{2C_3}{2C_3+1} u^k$. Thus, we have: $u^{k+1}\leq (\frac{2C_3}{2C_3+1})^{k+1} u^0$. We now focus on case (ii). When $\sqrt{u^k-u^{k+1}} < \frac{C_2}{C_3}$, (\ref{eq:recursion:u}) implies that $u^{k+1}\leq 2 C_2 \sqrt{u^{k}-u^{k+1}}$ and rearranging terms yields: $\frac{(u^{k+1})^2}{4 C_2^2} + u^{k+1} \leq u^{k}$. Solving this quadratic inequality, we have: $u^{k+1} \leq -2 C_2^2 + 2 C_2^2 \sqrt{1+\frac{1}{C_2^2} u^k}$; applying Lemma \ref{lemma:quadratic:recursive} with $C=C_2^2$ to this recursion, we obtain $ u^{k+1} \leq \frac{C_4}{k+1}$. \end{proof} \end{theorem} \noindent \bbb{Remarks.} The convergence result in Theorem \ref{theorem:general:rate2} is weaker than that in Theorem \ref{theorem:general:rate}; however, it does not rely on Assumption \ref{lemma:local:bound} and the unknown constant $\eta$. We now derive the convergence rate when $q (\cdot)$ is strongly convex. \begin{theorem}\label{theorem:general:rate3} (Proof of Convergence Rate when $q(\cdot)$ is Strongly Convex) We define $\delta \triangleq {2 \epsilon} + \tfrac{2-\omega}{\omega} \min(diag(\bbb{D}))$ and let $\{\omega,~\epsilon\}$ be chosen such that $\delta\in(0,\infty)$. Assuming that $q(\bbb{x})$ is strongly convex with respect to $\bbb{x}$ such that $\bbb{A} \succeq \sigma \bbb{I}$ with $\sigma>0$ and $\|\bbb{x}^k\|\leq R$ for all $k$, we have: \begin{eqnarray} \label{eq:aaa} f(\bbb{x}^{k}) - f(\bbb{x}^*) \leq \left(\tfrac{C_5}{1+C_5}\right)^k [f(\bbb{x}^{0}) - f(\bbb{x}^*)], \label{eq:QQ22} \label{eq:conv:rate:strong:f} \\ \|\bbb{x}^k - \bbb{x}^*\|^2_2 \leq \frac{8\| \bbb{C}\|^2}{\sigma^2\delta} \left(\tfrac{C_5}{1+C_5}\right)^k [f(\bbb{x}^{0}) - f(\bbb{x}^*)].~~~ \label{eq:conv:rate:strong:x} \end{eqnarray} \noindent where $C_5 \triangleq {\|\bbb{C}\|^2}/{(\delta\sigma)}$.
\begin{proof} Invoking (\ref{eq:opt:bound}) in Lemma \ref{lemma:opt:ineq} with $\bbb{x}=\bbb{x}^k,~\bbb{y}=\bbb{x}^*$, we derive the following inequalities: \begin{align} \label{eq:maximization} &~f(\bbb{x}^{k+1}) - f(\bbb{x}^*)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nonumber\\ \leq& ~ \langle \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k}),\bbb{x}^{k+1}-\bbb{x}^* \rangle - \tfrac{1}{2}\|\bbb{x}^{k+1}-\bbb{x}^{*}\|_{\bbb{A}}^2 \nonumber\\ \leq&~ \langle \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k}),\bbb{x}^{k+1}-\bbb{x}^* \rangle - \tfrac{\sigma}{2}\|\bbb{x}^{k+1}-\bbb{x}^{*}\|_2^2 \end{align} \noindent We notice that the right-hand side in (\ref{eq:maximization}) is concave in $\bbb{x}^*$. Maximizing it over $\bbb{x}^*$ by setting the gradient to zero, we obtain: \begin{align} \label{eq:optimal:solution} & ~~\sigma (\bbb{x}^{*}-\bbb{x}^{k+1}) + \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k}) = 0 \nonumber\\ \Rightarrow & ~~\bbb{x}^{*} = \bbb{x}^{k+1} - { \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k})}/{\sigma} \end{align} \noindent Putting (\ref{eq:optimal:solution}) into (\ref{eq:maximization}), we further derive the following inequalities: \begin{align} f(\bbb{x}^{k+1}) - f(\bbb{x}^*)\overset{}{\leq} \textstyle \tfrac{\|\bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k})\|_2^2}{2 \sigma} \overset{(a)}{\leq} \textstyle \frac{\|\bbb{C}\|^2\cdot \|\bbb{x}^k-\bbb{x}^{k+1}\|_2^2}{2\sigma} \nonumber\\ \overset{(b)}{\leq} \textstyle \frac{\|\bbb{C}\|^2 \cdot [f(\bbb{x}^k)-f(\bbb{x}^{k+1})]}{\delta\sigma} \overset{(c)}{=} C_5 [f(\bbb{x}^k)-f(\bbb{x}^{k+1})]\nonumber \end{align} \noindent where step $(a)$ uses the norm inequality $\|\bbb{Cx}\|\leq \|\bbb{C}\|\cdot\|\bbb{x}\|$; step $(b)$ uses (\ref{eq:descent}); step $(c)$ uses the definition of $C_5$. Hence, we obtain: $\tfrac{f(\bbb{x}^{k+1}) - f(\bbb{x}^{*})}{f(\bbb{x}^{k}) - f(\bbb{x}^{*}) } \leq \tfrac{C_5}{1+C_5}$. Unrolling this recursion, we obtain the result in (\ref{eq:conv:rate:strong:f}). Using the same strategy as in the derivation of (\ref{eq:QQ3}), we have: \begin{eqnarray} \label{eq:similar} \|\bbb{x}^k - \bbb{x}^{k+1}\|_2^2 \leq \tfrac{2}{\delta} (\tfrac{C_5}{1+C_5})^k [f(\bbb{x}^{0}) - f(\bbb{x}^*)] \end{eqnarray} \noindent Finally, we derive the following inequalities: \begin{align} &~\tfrac{\sigma}{2}\|\bbb{x}^{k+1} - \bbb{x}^*\|^2_2 \nonumber\\ \overset{(a)}{\leq} &~\langle \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k}),\bbb{x}^{k+1}-\bbb{x}^* \rangle + f(\bbb{x}^*) - f(\bbb{x}^{k+1}) \nonumber\\ \overset{(b)}{\leq} & ~\langle \bbb{C}(\bbb{x}^{k+1}-\bbb{x}^{k}),\bbb{x}^{k+1}-\bbb{x}^* \rangle \nonumber\\ \overset{(c)}{\leq} & ~\| \bbb{C}\|\cdot\|\bbb{x}^{k+1}-\bbb{x}^*\| \cdot \|\bbb{x}^{k+1}-\bbb{x}^{k}\| \nonumber \end{align} \noindent where step $(a)$ uses (\ref{eq:maximization}); step $(b)$ uses the fact that $f(\bbb{x}^*)\leq f(\bbb{x}^{k+1})$; step $(c)$ uses the norm inequality. Therefore, we obtain: \begin{eqnarray} \tfrac{\sigma}{2} \| \bbb{x}^{k+1} - \bbb{x}^* \|_2 \leq \|\bbb{C}\|\|\bbb{x}^{k+1} - \bbb{x}^k\| \nonumber \end{eqnarray} \noindent Combining with (\ref{eq:similar}), we obtain (\ref{eq:conv:rate:strong:x}). \end{proof} \end{theorem} \noindent \bbb{Remarks.} Thanks to the strong convexity of $q(\cdot)$, we can characterize the convergence rate for both $\|\bbb{x}^k-\bbb{x}^*\|$ and $\|\bbb{x}^k-\bbb{x}^{k+1}\|$ in Theorem \ref{theorem:general:rate3} without using Assumption \ref{lemma:local:bound} and the unknown constant $\eta$. Therefore, the convergence result in Theorem \ref{theorem:general:rate3} is stronger than that in Theorem \ref{theorem:general:rate}.
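For concreteness, we close this section with a minimal numerical sketch of the classical special case noted after Theorem \ref{theorem:1}: when $h(\cdot)\equiv 0$ and $\epsilon=0$, Algorithm \ref{alg:main} reduces to the Gauss-Seidel ($\omega=1$) and SOR ($\omega\neq 1$) iterations. The Python snippet below is our own illustration, not part of the proposed method; the test matrix, right-hand side, and iteration budget are arbitrary assumptions. It solves $\bbb{A}\bbb{x}=\bbb{b}$ for a symmetric positive definite $\bbb{A}$, i.e., it minimizes $q(\bbb{x})=\tfrac{1}{2}\bbb{x}^T\bbb{A}\bbb{x}-\bbb{b}^T\bbb{x}$.
\begin{verbatim}
import numpy as np

def sor(A, b, omega=1.5, iters=200):
    # One coordinate sweep per iteration; omega = 1 recovers Gauss-Seidel.
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            # In-place update: entries j < i are already new, j > i are old.
            sigma = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 5.]])  # SPD test matrix
b = np.array([1., 2., 3.])
print(sor(A, b), np.linalg.solve(A, b))  # the two outputs should agree
\end{verbatim}
Since $\min(diag(\bbb{D}))>0$ here, $\delta>0$ holds for any $\omega\in(0,2)$ even with $\epsilon=0$, in line with the condition of Theorem \ref{theorem:1}.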
\section{Related Work}\label{sect:related} This section reviews existing work that is closely related to ours. The proximal gradient method is a popular approach for solving the problem in (\ref{eq:main}).
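As a hedged point of comparison (our own sketch, with an assumed quadratic $q(\bbb{x})=\tfrac{1}{2}\bbb{x}^T\bbb{A}\bbb{x}+\bbb{b}^T\bbb{x}$ and the illustrative choice $h(\bbb{x})=\lambda\|\bbb{x}\|_1$, whose proximal operator is soft-thresholding), a basic proximal gradient iteration for (\ref{eq:main}) looks as follows.
\begin{verbatim}
import numpy as np

def prox_grad(A, b, lam=0.1, iters=300):
    # Gradient step on q, then the proximal map of lam*||.||_1.
    L = np.linalg.norm(A, 2)      # Lipschitz constant of the gradient of q
    x = np.zeros(len(b))
    for _ in range(iters):
        z = x - (A @ x + b) / L                              # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # backward step
    return x

A = np.array([[3., 1.], [1., 2.]]); b = np.array([-1., -1.])
print(prox_grad(A, b))
\end{verbatim}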
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The motion of quantum mechanical particles can be associated with interesting topological properties. Beyond the standard example of the quantum Hall effect, \cite{Thouless82,Avron85} lattice problems with zero net magnetic field attracted considerable recent interest. The honeycomb model with complex next-to-nearest neighbor hopping by Haldane \cite{Haldane88} provided the blueprint for a considerable fraction of the current day literature on topological band structures. \cite{Qi11,Hasan10} Despite its pivotal role in the development of this field, a direct experimental implementation was only recently demonstrated with ultra-cold atoms.\cite{Jotzu14} The interesting topological properties of this model arise from the interplay of two energy scales: the strength of the complex next-to-nearest neighbor hopping $t'e^{i\varphi}$, which breaks time-reversal symmetry if $\varphi \notin \lbrace 0,\pi \rbrace$, and the sub-lattice potential $V$, which breaks inversion symmetry. The natural question that poses itself is how an additional energy scale in the form of interactions enriches the picture. Interactions can alter the physics of particles on topological band structures profoundly. There are several possible scenarios of how interactions can induce new phases. First, for partially filled bands interactions might stabilize gapped quantum liquids akin to the Laughlin states of the fractional quantum Hall effect.\cite{Tang11,Sun11,Neupert11} Another possibility is that the interplay of $t'$, $V$ and an interaction scale $U$ leads to symmetry-broken states, where the quasi-particles above these states inherit the underlying band-topology.\cite{He11} In this manuscript we discuss how such symmetry-broken states can arise at half filling. We explain how they can be described beyond a simple Hartree-Fock theory using slave-particle techniques. Finally, we calculate response functions relevant to current experiments with cold atoms and show how the topological properties of the band structure are revealed. These questions deserve attention as current experiments implement fully tunable honeycomb lattices\cite{Tarruell12,Jotzu14} where both the Berry curvature of the bands has been measured\cite{Duca14} and interaction effects have been observed.\cite{Uehlinger13} In the following, Sec.~\ref{sec:model}, we introduce the concrete model under investigation. We discuss its possible phases and derive them using both a simple Hartree-Fock (Slater determinant) trial wave function and a more sophisticated ${\mathds Z}_{2}$ slave-spin method\cite{Huber09a,Ruegg10} which is able to capture interaction effects beyond the physics of Slater determinants. In Sec.~\ref{sec:response} we derive the response functions relevant to the current experiments with ultra-cold fermions. \section{Ionic Hubbard model at half filling on the honeycomb lattice} \label{sec:model} \begin{figure}[b] \includegraphics{setup} \caption{ {\bf Setup.} (Left) The honeycomb lattice with its two sub-lattices $A$ and $B$. The gray arrows indicate the phase convention of the next-to-nearest neighbor hopping, see text. (Right) The different terms in the Hamiltonian: the hopping amplitudes $t$ and $t'$, respectively; the sub-lattice potential $V$; and the local repulsion between different spin species $U$.
} \label{fig:setup} \end{figure} \subsection{Model} We study the ionic Hubbard model on the honeycomb lattice \begin{multline} \label{eqn:ionic} H = -\sum_{i,j;\sigma} t_{ij}c_{i\sigma}^{\dag}c_{j\sigma}^{\phantom{\dag}} + \frac{U}{2} \sum_i \bigg(\sum_{\sigma=\uparrow\downarrow}n_{i\sigma}-1\bigg)^2 \\ +\frac{V}{2}\bigg(\!\sum_{i \in A,\sigma} n_{i\sigma} \!-\! \sum_{i \in B,\sigma} n_{i\sigma}\!\bigg). \end{multline} The operators $c_{i\sigma}^{\dag}$ create fermions in two different spin species $\sigma=\uparrow,\downarrow$ and $t_{ij}$ denote the hoppings on the honeycomb lattice as indicated in Fig.~\ref{fig:setup}. The hopping to the next-to-nearest neighbors is associated with a phase $\phi$, such that the fermion gains the phase $\phi$ when hopping clockwise around the unit cell. Finally, we have terms proportional to an onsite repulsion $U$ between the different spin species and a sub-lattice potential $V$. We do not specify a chemical potential as we only consider the case of half filling, i.e., one particle per lattice site, where the number of $\uparrow$-fermions equals the number of $\downarrow$-fermions. Let us discuss the well-known phases of this model. For nearest-neighbor hopping only ($t'=0$) and $V=U=0$, the half-filled system is a semi-metal. The density of states vanishes linearly at the particle-hole symmetric Dirac points at $\mathbf{K}=(2\pi/a)(2/3,0)$ and $\mathbf{K}'=(2\pi/a)(1/3,1/\sqrt{3})$, respectively.\cite{Wallace47} Turning on $t'$ breaks the particle-hole symmetry. Moreover, if $\phi\notin \{0,\pm \pi\}$ the system enters a quantum Hall state with Chern numbers in both spin sectors $\mathcal C=(\mathcal C_{\uparrow},\mathcal C_{\downarrow})=\pm(1,1)$.\cite{Haldane88} An inversion-symmetry-breaking term such as the sub-lattice potential, $V\ne 0$, opposes the quantum Hall state and eventually renders the system a simple band insulator with strong density modulation.\cite{Haldane88} For $V=0$ but $U > U_{\scriptscriptstyle\rm crit}$ the fermions form a spin-density wave (SDW). Note that due to the vanishing density of states at the Dirac point, a finite interaction strength $U_{\scriptscriptstyle\rm crit}$ is needed for the SDW to occur.\cite{Sorella92,Honerkamp08} Eventually, for $U\gg t$ the fermions get localized in a Mott insulator and form a Heisenberg anti-ferromagnet.\cite{Martelo96,Raghu08} How are the transitions between these phases characterized? The onset of the SDW goes along with a breaking of the spin-rotation symmetry $\mathsf{SU}(2)$ and can be well described within the Ginzburg-Landau framework. The Mott transition on the other hand is only characterized by a qualitative change in the charge fluctuations, concretely by a vanishing charge imbalance between the two sub-lattices. Finally, the transition where the Chern numbers $\mathcal C$ are changing necessarily requires a closing of the excitation gap. We are seeking a method that can capture all these phases and transitions in a unified framework. Readers who are not interested in the technical details can skip the next section and directly advance to Sec.~\ref{sec:result}.
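Before turning to the method, the non-interacting limit just reviewed can be made quantitative with a short, self-contained numerical sketch. The Python snippet below is our own illustration (it is not the calculation used later in this paper); the lattice-vector and sign conventions are standard choices that may differ from ours by a gauge. It builds the $U=0$ Bloch Hamiltonian of Eq.~(\ref{eqn:ionic}) for a single spin species and evaluates the Chern number of the lower band with the lattice field-strength method of Fukui, Hatsugai, and Suzuki.
\begin{verbatim}
import numpy as np

# Bravais and reciprocal vectors of the triangular lattice (a = 1)
a1 = np.array([np.sqrt(3.0), 0.0]); a2 = np.array([np.sqrt(3.0)/2, 1.5])
g1 = 2*np.pi*np.array([1/np.sqrt(3.0), -1.0/3])
g2 = np.array([0.0, 4*np.pi/3])
nu = [a1, a2 - a1, -a2]      # one orientation of the three NNN hops

def h_bloch(k, t=1.0, tp=0.3, phi=np.pi/4, V=0.2):
    # 2x2 Bloch Hamiltonian of the U = 0 (Haldane) limit for one spin.
    f = 1 + np.exp(1j*(k @ a1)) + np.exp(1j*(k @ a2))  # NN structure factor
    C = sum(np.cos(k @ v) for v in nu)
    S = sum(np.sin(k @ v) for v in nu)
    h0 = -2*tp*np.cos(phi)*C              # NNN, time-reversal-even part
    hz = V/2 - 2*tp*np.sin(phi)*S         # sub-lattice + Haldane mass
    return np.array([[h0 + hz, -t*np.conj(f)],
                     [-t*f,     h0 - hz]])

def chern(n=60, **kw):
    # Lattice field-strength (Fukui-Hatsugai-Suzuki) Chern number.
    u = np.empty((n, n, 2), complex)
    for i in range(n):
        for j in range(n):
            u[i, j] = np.linalg.eigh(h_bloch((i*g1 + j*g2)/n, **kw))[1][:, 0]
    flux = 0.0
    for i in range(n):
        for j in range(n):
            flux += np.angle(np.vdot(u[i, j], u[(i+1) % n, j])
                  * np.vdot(u[(i+1) % n, j], u[(i+1) % n, (j+1) % n])
                  * np.vdot(u[(i+1) % n, (j+1) % n], u[i, (j+1) % n])
                  * np.vdot(u[i, (j+1) % n], u[i, j]))
    return flux/(2*np.pi)

print(chern())       # |C| = 1: V/2 < 3*sqrt(3)*t'*sin(phi), topological
print(chern(V=3.0))  # C = 0: trivial band insulator
\end{verbatim}
The two calls bracket the gap-closing line of the Haldane phase diagram: the masses at the two Dirac points have opposite signs for small $V$ and equal signs for large $V$.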
\subsection{Method} In order to describe all aforementioned phases and transitions we employ a slave-spin technique.\cite{Huber09a,Ruegg10} This method, which is tailored to half filling, can track both the excitation spectrum and strongly correlated phases such as the Mott insulator.\cite{orthogonal} \nocite{Nandkishore12} In the following we give a concise account of the slave-spin method and refer the interested reader to Ref.~\onlinecite{Ruegg10} for further details. The basic building block of the slave-spin method is the introduction of auxiliary degrees of freedom in the form of a constrained slave spin-1/2 (with eigenvalues $I_{i}^{z}=\pm 1/2$) on every site \begin{equation} \label{eqn:costraint} c_{i\sigma} = 2I^x_i f_{i\sigma},\quad I_i^z +\frac{1}{2} = \bigg(\sum_{\sigma=\uparrow\downarrow}n_{i\sigma}-1\bigg)^2, \end{equation} where $f_{i\sigma}$ are regular fermionic operators and $n_{i\sigma}=f_{i\sigma}^{\dag}f_{i\sigma}^{\phantom{\dag}}$. The second part of Eq.~(\ref{eqn:costraint}) represents the constraint which slaves the two operators $f_{i\sigma}$ and ${\bf I}_{i}$ to each other. Moreover, it is evident from the constraint that $I_{i}^{z}=1/2$ corresponds to either an {\em empty or a doubly occupied} site, while $I_{i}^{z}=-1/2$ signals a {\em singly occupied} site. Expressed in the new operators the Hamiltonian reads \begin{align} \nonumber H&= -\sum_{\langle i,j\rangle,\sigma} 4t_{ij} I^x_iI^x_jf_{i\sigma}^{\dag}f_{j\sigma}^{\phantom{\dag}} + \frac{U}{2} \sum_{i} I_i^z\\ &\phantom{=}+\frac{V}{2}\bigg(\sum_{i \in A,\sigma} n_{i\sigma} - \sum_{i \in B,\sigma} n_{i\sigma}\bigg), \end{align} where we used the constraint to write the interaction part $\propto U$ in the slave-spin sector alone. Assuming an ansatz for the ground-state wave-function of the form $|\Psi\rangle=|\Psi_{f}\rangle\otimes |\Psi_{I}\rangle$ we readily obtain the mean-field Hamiltonian \begin{multline} H_{\scriptscriptstyle\rm MF} = \langle \Psi_{I} | H |\Psi_{I} \rangle + \langle \Psi_{f} | H |\Psi_{f} \rangle \\ - \frac{\lambda}{2} \sum_{i} \left(I^{z}_{i} - 2n_{i\uparrow}n_{i\downarrow} + \sum_{\sigma} n_{i\sigma} \right), \end{multline} where we added a global Lagrange multiplier $\lambda$ to enforce the constraint on average. The resulting mean-field Hamiltonians are given by \begin{multline} \label{eqn:slavef} H_{f} = \sum_{ij} t_{ij}g_{ij} f_{i\sigma}^{\dag}f_{j\sigma} + \frac{V}{2}\bigg(\sum_{i \in A,\sigma} n_{i\sigma} - \sum_{i \in B,\sigma} n_{i\sigma}\bigg) \\ - \frac{\lambda}{2} \sum_{i\sigma} n_{i\sigma} + \lambda \sum_{i} n_{i\downarrow}n_{i\uparrow} \end{multline} and \begin{equation} \label{eqn:slaveI} H_{I} = \sum_{ij}t_{ij}\chi_{ij}I_{i}^{x}I_{j}^{x} + \left(\frac{U-\lambda}{2}\right) \sum_{i} I_{i}^{z}. \end{equation} The two sectors (fermion and slave-spin sector) are linked via the self-consistency equations for the two renormalization factors \begin{align} \label{eqn:link} \begin{split} g_{ij} &= 4\langle \Psi_{I}| I_{i}^{x}I_{j}^{x}|\Psi_{I} \rangle \quad \mbox{and} \\ \chi_{ij} &= 4\sum_{\sigma} \left( \langle \Psi_{f}| f_{i\sigma}^{\dag}f_{j\sigma}^{\phantom{\dag}}|\Psi_{f}\rangle + c.c. \right). \end{split} \end{align} We are now confronted with the problem of solving the two mean-field Hamiltonians (\ref{eqn:slavef}) and (\ref{eqn:slaveI}). To this end we employ a molecular field approximation to the transverse field Ising model (\ref{eqn:slaveI}) and a Hartree-Fock approximation to (\ref{eqn:slavef}).
The benefit of using the slave-spin approximation over a direct Hartree-Fock approximation to the original model (\ref{eqn:ionic}) lies in the fact that the slave-spin method allows the interactions to renormalize the hopping strength via $g_{ij}$ and eventually render the system Mott insulating at $g_{ij}=0$.\cite{Brinkman70} \subsubsection{Hartree-Fock} We start with the Hartree-Fock approximation of the fermionic sector. We assume that the ground-state wave-function is a Slater determinant. To parameterize this Slater determinant we use the parameters of a quadratic Hamiltonian which we determine self-consistently. Our trial Hamiltonian for this purpose can be written as \begin{equation} \label{eqn:hf} H_{\scriptscriptstyle\rm HF} = \sum_{ij} t_{ij}g_{ij} f_{i\sigma}^{\dag}f_{j\sigma} + \sum_{i\in A\sigma} \mu_{\sigma}^{A} n_{i\sigma} + \sum_{i\in B\sigma} \mu_{\sigma}^{B} n_{i\sigma}. \end{equation} This ansatz contains four parameters $\mu_{\sigma}^{\alpha}$ with $\sigma=\uparrow,\downarrow$ and $\alpha=A,B$. The Hamiltonian can be easily diagonalized to find the full spectrum and eigenstates. We then minimize the energy in the ground state $|\Psi_{\scriptscriptstyle\rm HF} \rangle$ of the Hartree-Fock Hamiltonian (\ref{eqn:hf}) \begin{equation} \frac{\partial \langle \Psi_{\scriptscriptstyle\rm HF} | H_{f} | \Psi_{\scriptscriptstyle\rm HF} \rangle}{\partial \mu_{\sigma}^{\alpha}}=0. \end{equation} This minimization yields the self-consistency equations \begin{align} \label{eqn:sc1} \mu_{\sigma}^{A} &= \phantom{-}V + \frac{\lambda}{2} - \lambda \langle \Psi_{\scriptscriptstyle\rm HF} | n_{i\bar\sigma}^{A}| \Psi_{\scriptscriptstyle\rm HF} \rangle, \\ \label{eqn:sc2} \mu_{\sigma}^{B} &= -V + \frac{\lambda}{2} - \lambda \langle \Psi_{\scriptscriptstyle\rm HF} | n_{i\bar\sigma}^{B}| \Psi_{\scriptscriptstyle\rm HF} \rangle. \end{align} Here $\bar\sigma$ denotes the spin opposite to $\sigma$. Moreover, due to the translationally invariant ansatz (\ref{eqn:hf}), the density $ \langle \Psi_{\scriptscriptstyle\rm HF} |n_{i\sigma}^{\alpha}| \Psi_{\scriptscriptstyle\rm HF} \rangle$ depends only on the sub-lattice index $\alpha=A,B$, not on $i$. The variational parameters $\mu_{\sigma}^{\alpha}$ are conjugate to the densities $\rho_{\sigma}^{\alpha}=\langle \Psi_{\scriptscriptstyle\rm HF} | n_{i\sigma}^{\alpha}| \Psi_{\scriptscriptstyle\rm HF} \rangle$. As we constrain ourselves to half filling, $\sum_{\sigma\alpha}\rho_{\sigma}^{\alpha}=2$, and zero net magnetization, $\sum_{\alpha}\rho_{\uparrow}^{\alpha}=\sum_{\alpha}\rho_{\downarrow}^{\alpha}$, only two of them are independent. For convenience we introduce two independent parameters \begin{align} m &= \rho_{\downarrow}^{A} +\rho_{\uparrow}^{B}-(\rho_{\uparrow}^{A}+\rho_{\downarrow}^{B}), \\ \Delta n & = \rho_{\downarrow}^{A}+\rho_{\uparrow}^{A} - (\rho_{\uparrow}^{B}+\rho_{\downarrow}^{B}). \end{align} While $m$ characterizes a staggered magnetization and hence a breaking of $\mathsf{SU}(2)$ symmetry, $\Delta n$ describes a staggered density, indicating a charge imbalance between the sublattices as long as $\Delta n \ne 0$. For the Hartree-Fock approximation to the original model (\ref{eqn:ionic}), solving the self-consistency equations (\ref{eqn:sc1}) and (\ref{eqn:sc2}) provides us with the mean-field phase diagram.\cite{He11} For the case of the slave-spin method we also need to solve the spin part and find a solution of both the spin and the fermion sector linked by (\ref{eqn:link}).
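The structure of this self-consistency problem can be illustrated with a minimal toy calculation, sketched below in Python. This is our own simplified illustration, not the calculation behind Fig.~\ref{fig:pd}: we keep only the nearest-neighbor hopping ($t'=0$, $g_{ij}=1$), use a textbook Hartree decoupling of the on-site repulsion (which matches (\ref{eqn:sc1}) and (\ref{eqn:sc2}) up to constant shifts and sign conventions), and iterate with damping from a staggered seed; the grid size, seed, and mixing parameter are arbitrary choices.
\begin{verbatim}
import numpy as np

a1 = np.array([np.sqrt(3.0), 0.0]); a2 = np.array([np.sqrt(3.0)/2, 1.5])
g1 = 2*np.pi*np.array([1/np.sqrt(3.0), -1.0/3])
g2 = np.array([0.0, 4*np.pi/3])

def rho_A(dmu, t=1.0, n=24):
    # Sub-lattice-A density of one spin species (lower band filled).
    acc = 0.0
    for i in range(n):
        for j in range(n):
            k = (i*g1 + j*g2)/n
            f = 1 + np.exp(1j*(k @ a1)) + np.exp(1j*(k @ a2))
            h = np.array([[dmu/2, -t*np.conj(f)], [-t*f, -dmu/2]])
            acc += abs(np.linalg.eigh(h)[1][0, 0])**2
    return acc/(n*n)

def hf_loop(V=0.1, U=6.0, iters=60, mix=0.4):
    rA = {'up': 0.45, 'dn': 0.55}          # staggered (SDW) seed
    for _ in range(iters):
        new = {}
        for s, sb in (('up', 'dn'), ('dn', 'up')):
            # Only the A-B potential difference matters at fixed filling;
            # Hartree field of the opposite spin, with rho_B = 1 - rho_A.
            dmu = 2*V + U*(2*rA[sb] - 1.0)
            new[s] = rho_A(dmu)
        rA = {s: (1 - mix)*rA[s] + mix*new[s] for s in rA}
    m = 2*(rA['dn'] - rA['up'])            # staggered magnetization
    dn = 2*(rA['up'] + rA['dn'] - 1.0)     # staggered density
    return m, dn

print(hf_loop(U=6.0))   # strong repulsion: m != 0 (SDW)
print(hf_loop(U=1.0))   # weak repulsion: m ~ 0, a small dn set by V
\end{verbatim}
In the slave-spin variant the same loop acquires two extra ingredients, the hopping renormalization $g$ and the Lagrange multiplier $\lambda$, as described below.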
\subsubsection{Molecular-field approximation} To solve the spin part we employ a molecular field approximation to the transverse field Ising model (\ref{eqn:slaveI}). To this end, we replace $I_{i}^{x}I_{j}^{x}\rightarrow \langle I_{i}^{x}\rangle I_{j}^{x}$, based on the assumption that the fluctuations from the mean value are small. This renders the slave-spin sector essentially a single-site problem in which we have to self-consistently determine $\langle I^{x}\rangle$. The single-site problem reads \begin{equation} \label{eqn:mf} H_{I}^{\scriptscriptstyle\rm MF} = {\bf h} \cdot {\bf I} \end{equation} with \begin{equation} {\bf h} = \left[\underbrace{-(zt\chi+z't'\chi')}_{\bar{\chi}} \langle I^{x}\rangle,0,\frac{U-\lambda}{2}\right]. \end{equation} Here, $z$ and $z'$ are the numbers of nearest and next-to-nearest neighbors, and $\chi$ and $\chi'$ are the respective expectation values in the fermionic sector (\ref{eqn:link}). Solving (\ref{eqn:mf}) amounts to a simple rotation in spin space under the constraint that we recover a self-consistent solution for $\langle I^{x}\rangle$, which yields \begin{equation} \langle I^{x} \rangle = \sqrt{\frac{1}{4}-\left(\frac{U-\lambda}{2\bar\chi}\right)^{2}}. \end{equation} \begin{figure*}[t!] \includegraphics{pd} \caption{ {\bf Phase diagram.} The phase diagram of the topological honeycomb model as derived via a Hartree-Fock approximation (left panels) and via a slave-spin method (right panels) as a function of sub-lattice potential $V$ and interaction strength $U$. The phases $A$ are simple band insulators while $B$ stands for a spin-density wave (SDW) phase which is adiabatically connected to the phase $A$. The phases $C$ are SDW phases that are separated from the band insulator via a gap-closing in the excitation spectrum (solid or dashed lines). The phases $D$ are Mott insulators where strong interaction effects renormalize the hopping to zero. For broken time-reversal symmetry (bottom panels), the different phases are additionally labelled by their respective Chern numbers $\mathcal C=(\mathcal C_{\uparrow},\mathcal C_{\downarrow})$, where all phases with at least one non-zero Chern number are labelled with a star. See text for details. } \label{fig:pd} \end{figure*} \subsubsection{Combining the ${\bf I}$ and $f$ sectors} Let us review our progress so far. First, we introduced slave-spins $\bf I$ and $f$-fermions. We then assumed a product wave function $|\Psi\rangle = |\Psi_{f}\rangle\otimes|\Psi_{I}\rangle$. Solving both sectors individually (via a Hartree-Fock and a molecular-field approximation, respectively) we obtained solutions parametrized by \begin{align*} \mbox{$f$-sector:} \quad&m(g_{ij},\lambda),\;\Delta n(g_{ij},\lambda),\; \chi(g_{ij},\lambda),\; \chi'(g_{ij},\lambda),\\ \mbox{$\bf I$-sector:}\quad&\langle I^{x}\rangle(\bar\chi,\lambda). \end{align*} The procedure consists in using (\ref{eqn:sc1}) and (\ref{eqn:sc2}) to resolve the self-consistency conditions. Owing to the single-site nature of the slave-spin sector the hopping renormalization factor simplifies to \begin{equation} \label{eqn:gij} g_{ij} =g = 4 \langle I_{i}^{x} I_{j}^{x}\rangle \approx 4\langle I^{x} \rangle^{2} = \left[1-\left(\frac{U-\lambda}{\bar\chi}\right)^{2}\right]. \end{equation} The remaining issue is to determine $\lambda$. To this end we use the following trick: $g=4\langle I^{x}\rangle^{2}=1-4\langle I^{z}\rangle^{2}$.
To further simplify this expression we use the exact constraint, introduced in Eq.~(\ref{eqn:costraint}), linking $I^{z}$ to fermionic properties. After straightforward algebra we find \begin{equation} \label{eqn:U} U = \lambda \pm \bar \chi \frac{(m+\Delta n)(m-\Delta n)}{4}. \end{equation} This equation closes the self-consistency loop. The slave-spin sector is completely absorbed in Eqns. (\ref{eqn:gij}) and (\ref{eqn:U}). The fermionic sector can be solved iteratively: \begin{enumerate} \item Choose a $\lambda$, $g$ and $\mu_{\sigma}^{\alpha}$. \item Find the ground state of Eq. (\ref{eqn:hf}). \item Update $\mu_{\sigma}^{\alpha}$ via (\ref{eqn:sc1}) and (\ref{eqn:sc2}). \item Calculate $\bar \chi$ and update $g$ via (\ref{eqn:gij}). \item Iterate the above steps until convergence is reached. \item After convergence is reached, determine $U$ via (\ref{eqn:U}). \end{enumerate} Note that (\ref{eqn:U}) has a $\pm$ ambiguity. As our method has a variational character, for a given $U$ one can compare the mean-field energies to determine which $\lambda$ to choose. Before we discuss our results, let us make a few comments on the approximations applied so far. First, the assumption of a product state $|\Psi\rangle = |\Psi_{f}\rangle\otimes|\Psi_{I}\rangle$ can be improved via the inclusion of gauge fluctuations in the ${\mathds Z}_{2}$ gauge freedom.\cite{Ruegg10} If the presented mean-field solution captures the main features of the phase diagram, such gauge fluctuations should only renormalize the parameters of the mean-field solution. Second, the restriction to the form of (\ref{eqn:hf}) excludes more complicated symmetry breaking patterns such as superconductivity or incommensurate charge- or spin-density waves. We do not expect such phases to occur at half-filling.\cite{Honerkamp08} Finally, the single-site approximation in the molecular-field solution of the slave-spin model can be improved via a Holstein-Primakoff spin-wave theory. However, as shown in Ref.~\onlinecite{Huber09a}, this is only needed very close to the Mott transition, or if one is interested in the high-energy excitation spectrum.\cite{Maldague77} \subsection{Results} \label{sec:result} In Fig.~\ref{fig:pd} we show the resulting phase diagrams obtained via the methods outlined above. The value of $t'=0.3\,t$ is fixed and we choose two different values for the phase: $\phi \in \{0,\pi/4\}$. We compare the direct Hartree-Fock calculation\cite{He11} (left panels) to the slave-spin results (right panels). Let us start with the time-reversal invariant setup for $\phi=0$. The phases labeled by $A$ are simple band insulators with a density modulation $\Delta n \ne 0$ induced by $V$. The lightly shaded regions $C$ indicate a finite SDW order parameter $m$, whereas the dark region $D$ is a Mott insulator with $\Delta n = 0$ and $g=0$. The onset of $m$ is smooth, i.e., happens in a second-order transition. The transition between $C$ and $D$ is of first order, which is a known artifact of the slave-spin mean-field theory.\cite{Ruegg10} The solid line marks a gap closing for both spin species at both the $K$ and $K'$ points. The region labeled by $B$ is characterized by $m\neq 0$ (and hence a broken spin-rotation symmetry) but is adiabatically connected to the band insulator (no gap closing). To summarize: it is evident that $V$ opposes the instability towards an SDW. Note that the slave-spin approach enhances the stability of the phase $B$ with respect to the Hartree-Fock results.
In general, it stabilizes the SDW ordering towards larger values of the sublattice potential. Moreover, it predicts a Mott insulator which is beyond the reach of the direct Hartree-Fock calculation. For the time-reversal-broken case ($\phi=\pi/4$) the slave-spin approach predicts a rich phase diagram. In addition to the presence of a staggered magnetization $m$ we can now also have band structures with non-vanishing Chern numbers $\mathcal C$. For vanishing $U$, we recover the phase diagram of Haldane:\cite{Haldane88} below a critical $V$ both spin species together have Chern numbers $\mathcal C=(1,1)$. We label this phase $A^{*}$. It is separated from the regular band insulator by a gap-closing at $K$. Note that in the absence of $\mathsf{SU}(2)$-symmetry breaking both spin species close the gap at the same time, as indicated by the solid line. Let us turn to the influence of $U$. As for $\phi=0$, at a critical strength $U_{\scriptscriptstyle\rm c}$ a staggered magnetization appears, giving rise to an SDW. If the system starts out in the trivial region $A$, the SDW phase $B$ is also trivial. Coming from $A^{*}$, on the other hand, in phase $B^{*}$ the quasi-particles still have Chern numbers $\mathcal C=(1,1)$. Increasing $U$ further closes the gap at $K$ (dashed line) for one of the two spin species. We end up in a phase $C^{*}$ where only one of the two spin components has topologically non-trivial excitations, $\mathcal C=(1,0)$. Eventually, for yet stronger interactions the gap at $K'$ (dashed line) of the other spin component closes and we reach a trivial SDW. In the slave-spin framework, this transition is always preempted by the first-order transition (dash-dotted line) into the (trivial) Mott insulating state $D$. As in the time-reversal invariant situation, the slave-spin approach seems to enhance the stability of the phases $B$ and $B^{*}$ with respect to the Hartree-Fock calculation. We further discuss the changes in $\mathcal C$. The low-energy Hamiltonian around the Dirac points is given by \begin{align} H_{K\sigma}^{\scriptscriptstyle\rm D} &= \frac{3}{2}t(k_y\tau_x-k_x\tau_y) - 3t^{\prime}\cos(\varphi) - \Delta_{K\sigma} \tau_z, \\ H_{K'\sigma}^{\scriptscriptstyle\rm D} &= -\frac{3}{2}t(k_y'\tau_x+k_x'\tau_y) - 3t^{\prime}\cos(\varphi) - \Delta_{K'\sigma} \tau_z, \end{align} where $k_{x/y}$ and $k_{x/y}'$ denote the deviation from the $K$ and $K'$ points, respectively. We have further defined the gap functions at the two Dirac points \begin{align} \Delta_{K\sigma}&= \frac{U}{2}(\rho_{\bar{\sigma}}^A - \rho_{\bar{\sigma}}^B) - V + 3\sqrt{3}t^{\prime}\sin(\varphi),\\ \Delta_{K^{\prime}\sigma}&=\frac{U}{2}(\rho_{\bar{\sigma}}^A - \rho_{\bar{\sigma}}^B) - V - 3\sqrt{3}t^{\prime}\sin(\varphi), \end{align} within the Hartree-Fock approximation. In Fig.~\ref{fig:gaps}, we plot the evolution of $\Delta_{K/K',\sigma}$ along the dash-dotted line in Fig.~\ref{fig:pd}. It is evident that for $m=0$ the Chern numbers of both spin species change together. Also visible is that the onset of $m\neq0$, where the value of the gap for the two spin species starts to deviate, does not coincide with the gap-closing transition. This statement refers to the gap for single-particle fermionic excitations. The breaking of the spin-rotation symmetry certainly involves the appearance of an emergent bosonic Goldstone mode. \begin{figure}[t] \includegraphics{gaps} \caption{ {\bf Gaps.} Evolution of the gaps along the dash-dotted line in Fig.~\ref{fig:pd} (bottom left).
Green lines denote the gaps at the $K$ point while blue lines are the gaps at the $K'$ point. Solid (dashed) lines indicate the $\uparrow$ ($\downarrow$) gaps, respectively. The circles indicate topological transitions where the Chern number of the respective spin component changes by one. A deviation of dashed and solid lines indicates the breaking of the spin-rotation symmetry in the spin-density-wave phase. } \label{fig:gaps} \end{figure} In summary, we find that the phase diagram of Ref.~\onlinecite{He11} survives the inclusion of strong interactions via the slave-spin method. Moreover, the renormalization of the hopping $g$ seems to stabilize the SDW phases $B$, $B^{*}$, and $C^{*}$, where both $m \ne 0$ and $\Delta n \ne 0$. The qualitative features of the phase diagrams are the same for the two methods. Therefore, we use the simple Hartree-Fock variant to calculate the response functions in the following. The same calculation within the slave-spin framework can be performed but is considerably more involved while yielding qualitatively similar results. \section{Response functions} \label{sec:response} One of the standard probing techniques for strongly interacting cold atoms is lattice modulation spectroscopy.\cite{Stoferle04, Jordens08, Huber09a, Endres12, Kollath06a} In this probing scheme, the depth of the optical lattice is modulated with a given frequency $\omega$. For the case of fermionic atoms it has been shown that the most sensitive probe is to count the number of doubly occupied sites after ramping up a very deep optical lattice.\cite{Jordens08,Uehlinger13} The relevant response function is given by\cite{Huber09a,Kollath06a} \begin{equation} \label{eqn:chi} \Xi(\omega) = \sum_{m} \langle m | \delta D | m \rangle |\langle m | K | 0 \rangle|^{2} \delta(\omega-\omega_{m0}), \end{equation} where $\delta D = \sum_{i} n_{i\uparrow}n_{i\downarrow} - \langle 0 | \sum_{i} n_{i\uparrow}n_{i\downarrow} | 0\rangle$ measures the change in double-occupancy with respect to the ground state $|0\rangle$, $K=\sum_{ij\sigma} t_{ij} c_{i\sigma}^{\dag} c_{j\sigma}^{\phantom{\dag}}$ is the kinetic energy operator, and $\hbar\omega_{m0}$ is the energy difference between the ground and the excited state $|m\rangle$. After optimizing the parameters of the Hartree-Fock Slater determinant it is straightforward to evaluate Eq.~(\ref{eqn:chi}). We show the resulting response function along a cut through the phase diagram in Fig.~\ref{fig:response}. \begin{figure}[tb] \includegraphics{response} \caption{ {\bf Response.} (Top panel) Intensity plot of the response function $\Xi(\omega)$ along the dash-dotted line (horizontal axis) in Fig.~\ref{fig:pd} as a function of frequency (vertical axis). Shades of violet indicate a positive signal where the double-occupancy in the excited state is higher than in the ground state. Shades of green quantify a negative signal where the double occupancy is reduced by the excitations. The gap-closing transitions between the different phases $A$, $A^{*}$, $C^{*}$, and $C$ are clearly visible. In addition, the two topologically non-trivial phases $A^{*}$ and $C^{*}$ {\em can} give rise to an ``inverted band picture'' where, depending on the frequency, both positive and negative signals can be expected. (Lower panel) Two cuts through the top panel, indicated by the arrows. } \label{fig:response} \end{figure} Let us discuss the results. As expected, we find a negative signal in the band insulator ($A$).
The sub-lattice potential $V$ favors doubly occupied sites and the modulation of the lattice induces a depletion of such doublons. In the other extreme, for $U \gg t$ and $U\gg V$, we expect very little double occupancy in the ground state and indeed the modulation leads to an increase in the doublon density, i.e., a positive signal. The most interesting response is predicted for the topologically non-trivial phases $A^{*}$ and $C^{*}$. First, all gap-closing transitions are clearly visible also in the lattice modulation spectroscopy. Moreover, it is interesting to note that in the two non-trivial phases $A^{*}$ and $C^{*}$, the character of the response as a function of energy can change: At low energy (high energy) the response is positive (negative) in some parts of the $A^{*}$ phase. In the $C^{*}$ phase this is reversed. Note, however, that this inversion is not in one-to-one correspondence with the phase and hence cannot be used as a strict indication of the topology. Still, if observed, such an inversion is an indication of the two phases $A^{*}$ and $C^{*}$. \section{Summary and outlook} We have calculated the phase diagram of the strongly interacting topological honeycomb model using a $\mathds Z_{2}$ slave-spin technique. Our results demonstrate that the simple mean-field diagram of Ref.~\onlinecite{He11} is stable under the inclusion of strong interaction effects. Moreover, we find that all the interesting phases where symmetry breaking in the form of a spin-density wave and a topological band structure coexist are enhanced compared to the simple Hartree-Fock results. In addition to the ground-state phase diagram we have calculated the response function relevant for recent experiments with cold atoms.\cite{Uehlinger13} Our main finding is that with lattice-modulation spectroscopy one can see all interesting gap-closing transitions responsible for the change in the Chern number $\mathcal C$. Moreover, the topologically non-trivial nature of the ground state {\em can} reveal itself via a change in sign of the response $\Xi(\omega)$ as a function of $\omega$. \acknowledgments We acknowledge stimulating discussions with Erez Berg, Evert van Nieuwenburg, Andreas R\"uegg, Roman S\"usstrunk, Murad Tovmasyan, and the lattice team in the group of T. Esslinger. This work was supported in part by National Science Foundation Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics. We acknowledge support by the Swiss National Science Foundation.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let a compact metric group $G$ act on a compact metric space $X$. In \cite[Theorem 5.1]{mil4}, V. Milman considered a H\"{o}lder action (see Section 3.6.2 for the definition) and estimated the diameters of orbits from above in terms of an isoperimetric property of the group $G$ and a covering property of $X$. As he mentioned in the introduction, his idea came from the fixed point theory of a L\'{e}vy group action by M. Gromov and Milman in \cite[Theorem 7.1]{milgro} (see Section 4 for the definition of a L\'{e}vy group). In this paper, we consider general continuous actions of a compact metric group and a L\'{e}vy group on some concrete noncompact metric spaces, such as $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds. The L\'{e}vy-Milman concentration theory of maps, itself of isoperimetric origin, played an important role in Milman's estimate (and also in Gromov and Milman's theorem on L\'{e}vy group actions). Taking a point $x\in X$, he considered how the orbit map $G\ni g \mapsto gx\in X$ concentrates to a constant map. Recent developments of the concentration theory of maps by the author (\cite{funano2}, \cite{funad}, \cite{funano1}), by Gromov (\cite{gromovcat}, \cite{gromov}), and by M. Ledoux and K. Oleszkiewicz (\cite{ledole}) enable us to estimate how the orbit map concentrates to a constant map in the case where $X$ is an $\mathbb{R}$-tree, a doubling space, a metric graph, or a Hadamard manifold. Instead of considering a H\"{o}lder action and a covering property, we estimate the diameters of orbits of a continuous action of a compact metric group on those metric spaces in terms of the continuity of the action, an isoperimetric property of $G$, and a metric space property of $X$. Our results assert that these quantities measure how close the action on those metric spaces is to the trivial action. From the same point of view, we obtain two results on L\'{e}vy group actions on the above spaces. L\'{e}vy groups were first introduced and analyzed by Gromov and Milman in \cite{milgro}. Gromov and Milman proved that every continuous action of a L\'{e}vy group on a compact metric space has a fixed point. They also pointed out that the unitary group $U(\ell^2)$ of the separable Hilbert space $\ell^2$ with the strong topology is a L\'{e}vy group. Many concrete examples of L\'{e}vy groups are known by the works of S. Glasner \cite{gla}, H. Furstenberg and B. Weiss (unpublished), T. Giordano and V. Pestov \cite{giopes1}, \cite{giopes2}, and Pestov \cite{pestov1}, \cite{pestov3}. For example, groups of measurable maps from the standard Lebesgue measure space to compact groups, unitary groups of some von Neumann algebras, groups of measure and measure-class preserving automorphisms of the standard Lebesgue measure space, full groups of amenable equivalence relations, and the isometry groups of the universal Urysohn metric spaces are L\'{e}vy groups (see the recent monograph \cite{pestov2} for details). One of our results states that there is no non-trivial uniformly continuous action of a L\'{e}vy group on the above spaces (Proposition \ref{th3}). We also obtain a generalization of Gromov and Milman's fixed point theorem (Proposition \ref{th2}). Both results are obtained by making Gromov and Milman's argument precise. The article is organized as follows. In Section $2$, we recall basic facts about the concentration theory of maps and prepare for Sections $3$ and $4$.
In Section $3$, we estimate the diameters of orbits of a compact group action on $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds. Section $4$ is devoted to L\'{e}vy group actions on those spaces. \section{Preliminaries} \subsection{Concentration function and observable diameter} In this subsection, we recall some basic facts in the concentration theory of $1$-Lipschitz maps, in particular relationships between an isoperimetric property of an mm-space (metric measure space) and the concentration theory of $1$-Lipschitz functions. The concentration theory of $1$-Lipschitz functions was introduced by Milman in his investigations of asymptotic geometric analysis (\cite{mil1}, \cite{mil2}, \cite{mil3}). While the concentration theory of functions developed, the concentration theory of maps into general metric spaces was first studied by Gromov (\cite{gromovcat}, \cite{gromov2}, \cite{gromov}). He established the theory by introducing the observable diameter in \cite{gromov}. We first recall its definition. Let $Y$ be a metric space and $\nu$ a Borel measure on $Y$ such that $m:=\nu(Y)<+\infty$. We define for any $\kappa >0$ \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu , m-\kappa):= \inf \{ \mathop{\mathrm{diam}} \nolimits Y_0 \mid Y_0 \subseteq Y \text{ is a Borel subset such that }\nu(Y_0)\geq m-\kappa\} \end{align*}and call it the \emph{partial diameter} of $\nu$. Let $(X,\mathop{\mathit{d}} \nolimits_X)$ be a complete separable metric space equipped with a finite Borel measure $\mu_X$ on $X$. Henceforth, we call such a triple an \emph{mm-space}. \begin{dfn}[Observable diameter]\upshape Let $(X,\mathop{\mathit{d}} \nolimits_X,\mu_X)$ be an mm-space with $m_X:=\mu_X(X)$ and $Y$ a metric space. For any $\kappa >0$ we define the \emph{observable diameter} of $X$ by \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_Y (X; -\kappa):= \sup \{ \mathop{\mathrm{diam}} \nolimits (f_{\ast}(\mu_X),m_X-\kappa) \mid f:X\to Y \text{ is a }1 \text{{\rm -Lipschitz map}} \}, \end{align*}where $f_{\ast}(\mu_X)$ stands for the push-forward measure of $\mu_X$ by $f$. \end{dfn} The idea of the observable diameter comes from quantum and statistical mechanics, that is, we think of $\mu_X$ as a state on a configuration space $X$ and $f$ as an observable. Given sequences $\{X_n\}_{n=1}^{\infty}$ of mm-spaces and $\{ Y_n\}_{n=1}^{\infty}$ of metric spaces, observe that $\lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{Y_n}(X_n;-\kappa)=0$ for any $\kappa >0$ if and only if for any sequence $\{ f_n:X_n \to Y_n\}_{n=1}^{\infty}$ of $1$-Lipschitz maps there exists a sequence $\{ m_{f_n}\}_{n=1}^{\infty}$ of points such that $m_{f_n}\in Y_n$ and \begin{align*} \lim_{n\to \infty}\mu_{X_n}(\{ x_n \in X_n \mid \mathop{\mathit{d}} \nolimits_{Y_n}(f_n(x_n),m_{f_n})\geq \varepsilon\})=0 \end{align*}for any $\varepsilon>0$. A sequence $\{X_n\}_{n=1}^{\infty} $ of mm-spaces is said to be a \emph{L\'{e}vy family} if $\lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(X_n;-\kappa)=0$ for any $\kappa>0$. The concept of L\'{e}vy families was first introduced in \cite{milgro}. For an mm-space $X$ with $\mu_X(X)=1$, we define the \emph{concentration function} $\alpha_X:(0,+\infty)\to \mathbb{R}$ as the supremum of $\mu_X(X\setminus A_{+r})$, where $A$ runs over all Borel subsets of $X$ with $\mu_X(A)\geq 1/2$ and $A_{+r}$ is an open $r$-neighbourhood of $A$. This function describes an isoperimetric feature of the space $X$.
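Before giving examples, the phenomenon behind these definitions can be illustrated numerically. The following small Monte Carlo sketch (our own toy illustration, not taken from the references) samples the round spheres $S^{n-1}\subseteq \mathbb{R}^{n}$ with their normalized uniform measures, which form a L\'{e}vy family, and estimates the mass that a fixed $1$-Lipschitz function sends far from its median:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    x = rng.normal(size=(20000, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform on S^{n-1}
    f = x[:, 0]                                     # a 1-Lipschitz function
    far = np.mean(np.abs(f - np.median(f)) >= 0.2)  # mass at distance >= 0.2
    print(n, far)                                   # decays rapidly with n
\end{verbatim}
In the language above, the partial diameter $\mathop{\mathrm{diam}} \nolimits (f_{\ast}(\mu_{S^{n-1}}), 1-\kappa)$, and hence the observable diameter, tends to $0$ as $n\to \infty$.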
We shall consider each closed Riemannian manifold as an mm-space equipped with the volume measure normalized to have the total volume $1$. \begin{ex}\label{exl1}\upshape Let $M$ be a closed Riemannian manifold such that $\mathop{\mathit{Ric}} \nolimits_M \geq \widetilde{\kappa}_1>0$. By virtue of the L\'{e}vy-Gromov isoperimetric inequality, we obtain $\alpha_M(r)\leq e^{-\widetilde{\kappa}_1 r^2/2}$ (see \cite[Section 1.2, Remark 2]{milgro} or \cite[Theorem 2.4]{ledoux}). Since $\mathop{\mathit{Ric}} \nolimits_{SO(n)}\geq (n-1)/4$, we have $\alpha_{SO(n)}(r)\leq e^{-(n-1)r^2/8}$ for example. \end{ex} \begin{ex}\label{exl2}\upshape Let $M$ be a closed Riemannian manifold. We denote by $\lambda_1(M)$ the non-zero first eigenvalue of the Laplacian on $M$. Then, for any $r >0$, we have $\alpha_M(r)\leq e^{-\sqrt{\lambda_1(M)}r/3}$ (see \cite[Theorem 4.1]{milgro} or \cite[Theorem 3.1]{ledoux}). Since the $n$-dimensional torus $\mathbb{T}^n:=\mathbb{S}^1 \times \mathbb{S}^1 \times \cdots \times \mathbb{S}^1$ satisfies $\lambda_1(\mathbb{T}^n)=\lambda_1(\mathbb{S}^1)=1$, we obtain $\alpha_{\mathbb{T}^n}(r)\leq e^{-r/3}$ for example. \end{ex} Let $X$ be an mm-space and $f:X\to \mathbb{R}$ a Borel measurable function. A number $m_f\in \mathbb{R}$ is called a \emph{median} of $f$ if it satisfies that $f_{\ast}(\mu_X)((-\infty, m_f]) \geq m_X /2$ and $f_{\ast}(\mu_X)([m_f ,+\infty))\geq m_X/2$. We remark that $m_f$ does exist, but it is not unique in general. Relationships between the concentration function and the observable diameter are the following: \begin{lem}[{cf.~\cite[Section 1.3]{ledoux}}]\label{vel1}Let $X$ be an mm-space with $\mu_X(X)=1$. Then, for any $1$-Lipschitz function $f:X\to \mathbb{R}$ and $\varepsilon >0$, we have \begin{align*} \mu_X (\{ x\in X \mid |f(x)-m_f|\geq \varepsilon \})\leq 2\alpha_X(\varepsilon). \end{align*} \end{lem} \begin{lem}[{cf.~\cite[Section 1.3]{ledoux}}]\label{vel2}Let $X$ be an mm-space with $\mu_X(X)=1$. Assume that a function $\alpha:(0,+\infty)\to \mathbb{R}$ satisfies that \begin{align*} \mu_X (\{ x\in X \mid |f(x)-m_f|\geq \varepsilon \})\leq \alpha(\varepsilon) \end{align*}for any $1$-Lipschitz function $f:X\to \mathbb{R}$. Then, we have $\alpha_X(\varepsilon) \leq \alpha(\varepsilon)$. \end{lem} By Lemmas \ref{vel1} and \ref{vel2}, we obtain the following corollary: \begin{cor}[{\cite[Section 1.3]{ledoux}}]A sequence $\{X_n\}_{n=1}^{\infty}$ of mm-spaces is a L\'{e}vy family if and only if $\lim_{n\to \infty}\alpha_{X_n}(r)=0$ for any $r>0$. \end{cor} Combining Lemma \ref{vel1} with Examples \ref{exl1} and \ref{exl2}, we obtain the following corollaries: \begin{cor}Let $M$ be a closed Riemannian manifold such that $\mathop{\mathit{Ric}} \nolimits_M \geq \widetilde{\kappa}_1>0$. Then, for any $\kappa >0$, we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(M;-\kappa)\leq 2\sqrt{\frac{2\log \big(\frac{2}{\kappa}\big)}{\widetilde{\kappa}_1}}. \end{align*}In particular, we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(SO(n);-\kappa)\leq 4\sqrt{\frac{2\log\big(\frac{2}{\kappa}\big)}{n-1}}. \end{align*} \end{cor} \begin{cor}Let $M$ be a closed Riemannian manifold. Then, for any $\kappa >0$, we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(M;-\kappa)\leq \frac{6 \log \big( \frac{2}{\kappa}\big)}{\sqrt{\lambda_1(M)}}. \end{align*} In particular, we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(\mathbb{T}^n;-\kappa)\leq 6 \log \Big(\frac{2}{\kappa}\Big). 
\end{align*} \end{cor} \subsection{Concentration and separation} In this subsection, we recall the notion of the separation distance for an mm-space, which was introduced in \cite{gromov}. We review relationships between the observable diameter and the separation distance. The separation distance plays an important role throughout this paper. Let $X$ be an mm-space. For $\kappa_1, \kappa_2\geq 0$, we define the \emph{separation distance} $\mathop{\mathrm{Sep}} \nolimits (X;\kappa_1,\kappa_2)= \mathop{\mathrm{Sep}} \nolimits (\mu_X;\kappa_1,\kappa_2)$ of $X$ as the supremum of the distance $\mathop{\mathit{d}} \nolimits_X(A,B)$, where $A$ and $B$ are Borel subsets of $X$ satisfying that $\mu_X(A)\geq \kappa_1$ and $\mu_X(B)\geq \kappa_2$. Relationships between the observable diameter and the separation distance are the following. We refer to \cite[Subsection 2.2]{funad} for precise proofs. \begin{lem}[{cf.~\cite[Section $3\frac{1}{2}.33$]{gromov}}]\label{noranoraneko}Let $X$ be an mm-space and $\kappa,\kappa' >0$ with $\kappa > \kappa'$. Then we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}} (X ;-\kappa')\geq \mathop{\mathrm{Sep}} \nolimits (X;\kappa,\kappa). \end{align*} \end{lem} \begin{rem}\upshape In {\cite[Section $3\frac{1}{2}.33$]{gromov}}, Lemma \ref{noranoraneko} is stated with $\kappa =\kappa'$, but that is not true in general. For example, let $X:=\{ x_1 , x_2\}$, $\mathop{\mathit{d}} \nolimits_X (x_1,x_2):=1$, and $\mu_X (\{ x_1\})=\mu_X (\{ x_2 \}):= 1/2$. Putting $\kappa =\kappa'=1/2$, we have $\mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}} (X;-1/2)=0$ and $\mathop{\mathrm{Sep}} \nolimits (X;1/2,1/2)=1$. \end{rem} \begin{lem}[cf.~{\cite[Section $3\frac{1}{2}.33$]{gromov}}]\label{l2.1.2}Let $\nu$ be a Borel measure on $\mathbb{R}$ with $m:=\nu(\mathbb{R})<+\infty$. Then, for any $\kappa >0$ we have \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu, m-2\kappa)\leq \mathop{\mathrm{Sep}} \nolimits (\nu; \kappa, \kappa). \end{align*} In particular, for any $\kappa >0$ we have \begin{align*} \mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}}(X;-2\kappa)\leq \mathop{\mathrm{Sep}} \nolimits (X; \kappa, \kappa). \end{align*} \end{lem} \begin{cor}[{cf.~\cite[Section $3\frac{1}{2}.33$]{gromov}}]\label{c2.1.1}A sequence $\{ X_n\}_{n=1}^{\infty}$ of mm-spaces is a L\'{e}vy family if and only if $\lim_{n\to \infty}\mathop{\mathrm{Sep}} \nolimits (X_n;\kappa,\kappa) =0$ for any $\kappa >0$. \end{cor} \subsection{Compact metric group action and diameter of a measure} Let a compact metric group $G$ act continuously on a metric space $X$. For each $\eta >0$, we define a (possibly infinite) number $\rho(\eta)= \rho^{(G,X)}(\eta)$ as the supremum of $\mathop{\mathit{d}} \nolimits_X(gx,gy)$ for all $g\in G$ and $x,y\in X$ with $\mathop{\mathit{d}} \nolimits_X(x,y)\leq \eta$. Given a point $x\in X$, we denote by $f_x:G\to X$ the orbit map of $x$, that is, $f_x(g):=gx$ for any $g\in G$. For the Haar measure $\mu_G$ on $G$ normalized as $\mu_G(G)=1$, we put $\nu_{G,x}:=(f_x)_{\ast}(\mu_G)$. \begin{prop}\label{p3.1}Assume that $\nu_{G,x}(B_X(y,\delta))>1/2$ for some $y\in X$ and $\delta >0$. Then, we have \begin{align}\label{s3.1} \mathop{\mathit{d}} \nolimits_X(y,gy)\leq \delta + \rho(\delta) \end{align}for any $g\in G$. Moreover, there exists a point $x_0 \in Gx$ such that \begin{align}\label{s3.2} \mathop{\mathit{d}} \nolimits_X (x_0,gx_0) \leq \min \{ 2\delta + \rho(2\delta),2\delta + 2\rho(\delta) \} \end{align}for any $g\in G$.
\begin{proof}Taking any $g\in G$, we first prove (\ref{s3.1}). Since $gB_X(y,\delta)\subseteq B_X(gy, \rho (\delta))$ and the measure $\nu_{G,x}$ is $G$-invariant, from the assumption, we have \begin{align*} \nu_{G,x}(B_X(gy,\rho(\delta)))\geq \nu_{G,x}(gB_X(y,\delta))=\nu_{G,x}(B_X(y,\delta))>1/2. \end{align*}Combining this with $\nu_{G,x}(B_X(y,\delta))>1/2$, we get $\nu_{G,x}(B_X(y,\delta)\cap B_X(gy,\rho(\delta)))>0$, which implies (\ref{s3.1}). We next prove (\ref{s3.2}). Since the orbit $Gx$ is compact, the support of the measure $\nu_{G,x}$ is included in $Gx$. Hence, there exists a point $x_0 \in B_X(y,\delta)\cap Gx$. Let $g\in G$. Since $x_0\in B_X(y,\delta)$, we have $B_X(y,\delta)\subseteq B_X(x_0,2\delta)$ and hence $\nu_{G,x}(B_X(x_0,2\delta))\geq \nu_{G,x}(B_X(y,\delta))>1/2$; by using (\ref{s3.1}) with $x_0$ and $2\delta$ in place of $y$ and $\delta$, we obtain $\mathop{\mathit{d}} \nolimits_X(x_0,gx_0)\leq 2\delta +\rho(2\delta)$. We also have \begin{align*} \mathop{\mathit{d}} \nolimits_X(x_0,gx_0)\leq \ &\mathop{\mathit{d}} \nolimits_X(x_0,y)+ \mathop{\mathit{d}} \nolimits_X(y,gy)+ \mathop{\mathit{d}} \nolimits_X(gy, g x_0)\\ \leq \ & \delta + (\delta + \rho(\delta))+ \rho(\delta)\\ = \ & 2\delta+ 2\rho(\delta), \end{align*}which implies (\ref{s3.2}). This completes the proof. \end{proof} \end{prop} \begin{prop}\label{p3.2}Assume that $\nu_{G,x}(A)>1/2$ for some Borel subset $A\subseteq X$. Then, there exists a point $x_0\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X(x_0,gx_0)\leq \mathop{\mathrm{diam}} \nolimits A+\rho(\mathop{\mathrm{diam}} \nolimits A) \end{align*}for any $g\in G$. \begin{proof}Since $A\cap Gx\neq \emptyset$, the claim follows from the same argument as in the proof of Proposition \ref{p3.1}. \end{proof} \end{prop} For any $\eta>0$, we put $\rho(+\eta):=\lim_{\eta'\downarrow \eta}\rho(\eta')$. \begin{cor}\label{c3.1}There exists a point $z_x \in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_x , gz_x) \leq \lim_{\kappa \uparrow 1/2}\mathop{\mathrm{diam}} \nolimits (\nu_{G,x},1-\kappa) +\rho\big(+ \lim_{\kappa \uparrow 1/2}\mathop{\mathrm{diam}} \nolimits (\nu_{G,x},1-\kappa) \big) \end{align*}for any $g\in G$. \end{cor} For any $\eta>0$, we define a (possibly infinite) number $\omega_x(\eta)=\omega_x^{(G,X)}(\eta)$ as the supremum of $\mathop{\mathit{d}} \nolimits_X(gx, g'x)$ for all $g,g'\in G$ with $\mathop{\mathit{d}} \nolimits_G(g,g')\leq \eta$. We also put $\omega_x(+\eta):=\lim_{\eta'\downarrow \eta}\omega_x(\eta')$. \begin{lem}\label{l3.1}For any $\kappa_1,\kappa_2 >0$, we have \begin{align*} \mathop{\mathrm{Sep}} \nolimits (\nu_{G,x};\kappa_1,\kappa_2)\leq \omega_x(+\mathop{\mathrm{Sep}} \nolimits(G;\kappa_1, \kappa_2)). \end{align*} \begin{proof}Let $A$ and $B$ be two Borel subsets such that $\nu_{G,x}(A)\geq \kappa_1$ and $\nu_{G,x}(B)\geq \kappa_2$. Since $\mu_G((f_x)^{-1}(A))\geq \kappa_1$ and $\mu_G((f_x)^{-1}(B))\geq \kappa_2$, we have $\mathop{\mathit{d}} \nolimits_G((f_x)^{-1}(A), (f_x)^{-1}(B))\leq \mathop{\mathrm{Sep}} \nolimits (G;\kappa_1,\kappa_2)$. Thus, from the definition of $\omega_x$, we obtain $\mathop{\mathit{d}} \nolimits_X(A,B)\leq \omega_x(+\mathop{\mathrm{Sep}} \nolimits(G;\kappa_1,\kappa_2))$. This completes the proof. \end{proof} \end{lem} \begin{cor}[{cf.~\cite[Section 5.2]{milgro}}]\label{c3.2}Assume that a sequence $\{ G_n\}_{n=1}^{\infty}$ of compact metric groups is a L\'{e}vy family and each $G_n$ acts on a metric space $X$.
Assume also that there exist a sequence $\{x_n\}_{n=1}^{\infty}$ of points in $X$ and a function $\omega:(0,+\infty) \to [0,+\infty]$ such that $\lim_{\eta \to 0}\omega (\eta)=0$ and $\omega^{(G_n,X)}_{x_n}(\eta)\leq \omega(\eta)$ for any $n\in \mathbb{N}$ and $\eta >0$. Then, the sequence $\{ (X,\mathop{\mathit{d}} \nolimits_X, \nu_{G_n,x_n})\}_{n=1}^{\infty}$ of mm-spaces is a L\'{e}vy family. \end{cor} \section{Estimates of the diameters of orbits} Throughout this section, we always assume that a compact metric group $G$ continuously acts on a metric space $X$. We shall consider the group $G$ as an mm-space $(G,\mathop{\mathit{d}} \nolimits_G,\mu_G )$, where $\mu_G$ is the Haar measure on $G$ normalized as $\mu_G(G)=1$. In this section, motivated by the work of Milman \cite{mil4}, we shall estimate the diameters of orbits $Gx$ from above for concrete metric spaces $X$ in terms of the continuity of the action, an isoperimetric property of $G$, and a metric property of $X$. For this purpose, we use the notation $\rho=\rho^{(G,X)}$ and $\omega_x=\omega_x^{(G,X)}$ defined in Subsection 2.3. We first consider the case where the orbit map $f_x:G\ni g\mapsto gx\in X$ for some $x\in X$ is a $1$-Lipschitz map. In this case, applying Corollary \ref{c3.1}, we obtain the following: \begin{prop}For any $\kappa\in (0,1/2)$, there exists a point $z_{\kappa}\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X(z_{\kappa},gz_{\kappa})\leq \mathop{\mathrm{ObsDiam}} \nolimits_X(G;-\kappa)+ \rho (\mathop{\mathrm{ObsDiam}} \nolimits_X(G;-\kappa)) \end{align*}for any $g\in G$. \end{prop} \subsection{Case of Euclidean spaces} In this subsection, we consider the case where the metric space $X$ is the Euclidean space $\mathbb{R}^k$. Let $\mathop{\mathrm{pr}} \nolimits_i:\mathbb{R}^k \ni x= (x_i)_{i=1}^{k}\mapsto x_i \in \mathbb{R}$ be the projection onto the $i$-th coordinate. \begin{prop}[{cf.~\cite[Section $3\frac{1}{2}.32$]{gromov}}]\label{p4.1.1}For any finite Borel measure $\nu$ on $\mathbb{R}^k$ with $m:=\nu(\mathbb{R}^k)$ and any $\kappa>0$, we have \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq \sqrt{k}\max_{1\leq i\leq k} \mathop{\mathrm{diam}} \nolimits \Big((\mathop{\mathrm{pr}} \nolimits_i)_{\ast}(\nu), m-\frac{\kappa}{k}\Big). \end{align*} \end{prop} Applying Corollary \ref{c2.1.1} to Proposition \ref{p4.1.1}, we obtain the following corollary: \begin{cor}[{cf.~\cite[Section $3\frac{1}{2}.32$]{gromov}}]For any L\'{e}vy family $\{X_n\}_{n=1}^{\infty}$ and any $\kappa>0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{\mathbb{R}^k}(X_n;-\kappa)=0. \end{align*} \end{cor} \begin{prop}\label{p4.1.2}Assume that a compact metric group $G$ continuously acts on the Euclidean space $\mathbb{R}^k$ and put $r:=\lim_{\kappa \uparrow 1/(4k)}\mathop{\mathrm{Sep}} \nolimits (G;\kappa,\kappa)$. Then, for any $x\in \mathbb{R}^k$, there exists a point $z_{x}\in Gx$ such that \begin{align}\label{s4.1.1} \mathop{\mathit{d}} \nolimits_{\mathbb{R}^k}(z_{x},g z_{x})\leq \sqrt{k} \omega_x (+ r )+ \rho(+\sqrt{k}\omega_x (+ r ) ) \end{align}for any $g\in G$.
\begin{proof}Combining Lemma \ref{l3.1} with Proposition \ref{p4.1.1}, we get \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu_{G,x},1-\kappa) \leq \ &\sqrt{k}\max_{1\leq i\leq k} \mathop{\mathrm{diam}} \nolimits \Big((\mathop{\mathrm{pr}} \nolimits_i)_{\ast}(\nu_{G,x}),1- \frac{\kappa}{k}\Big)\\ \leq \ & \sqrt{k}\max_{1\leq i\leq k} \mathop{\mathrm{Sep}} \nolimits \Big((\mathop{\mathrm{pr}} \nolimits_i)_{\ast}(\nu_{G,x});\frac{\kappa}{2k},\frac{\kappa}{2k}\Big)\\ \leq \ & \sqrt{k}\mathop{\mathrm{Sep}} \nolimits \Big(\nu_{G,x};\frac{\kappa}{2k},\frac{\kappa}{2k}\Big)\\ \leq \ & \sqrt{k} \omega_x \Big(+\mathop{\mathrm{Sep}} \nolimits \Big(G;\frac{\kappa}{2k},\frac{\kappa}{2k}\Big) \Big). \end{align*}Applying this to Corollary \ref{c3.1}, we obtain (\ref{s4.1.1}). This completes the proof. \end{proof} \end{prop} \subsection{Case of compact metric spaces} In this subsection, we treat the case where the metric space $X$ is a compact metric space $K$. For any $\delta >0$, we denote by $N_K(\delta)$ the minimum number of Borel subsets of diameter at most $\delta$ which cover $K$. \begin{prop}[{cf.~\cite[Section $3\frac{1}{2}.34$]{gromov}}]\label{p4.2.1}For any $\delta,\kappa>0$ and any finite Borel measure $\nu$ on $K$ with $m:=\nu(K)$, we have \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq \mathop{\mathrm{Sep}} \nolimits \Big(\nu;\frac{\kappa}{N_K(\delta)}, \frac{\kappa}{N_K(\delta)} \Big) +2\delta. \end{align*} \end{prop} \begin{cor}[{cf.~\cite[Section $3\frac{1}{2}.34$]{gromov}}]Let $\{ X_n\}_{n=1}^{\infty}$ be a L\'{e}vy family and $K$ a compact metric space. Then, for any $\kappa >0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{K}(X_n;-\kappa)=0. \end{align*} \end{cor} By virtue of Proposition \ref{p4.2.1}, the same proof as that of Proposition \ref{p4.1.2} yields the following proposition: \begin{prop}\label{p4.2.2}Assume that a compact metric group $G$ continuously acts on a compact metric space $K$ and put $r_{x,\delta}:= \omega_x(+\lim_{\kappa \uparrow 1/(2N_K(\delta))}\mathop{\mathrm{Sep}} \nolimits (G;\kappa,\kappa))+2\delta$ for $x\in K$ and $\delta>0$. Then, there exists a point $z_{x,\delta}\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_K(z_{x,\delta},gz_{x,\delta})\leq \ & r_{x,\delta}+ \rho(+r_{x,\delta}) \end{align*}for any $g\in G$. \end{prop} Proposition \ref{p4.2.2} generalizes Milman's result \cite[Theorem 5.1]{mil4}. \subsection{Case of $\mathbb{R}$-trees} In this subsection, we consider the case where the metric space $X$ is an $\mathbb{R}$-tree $T$. For this purpose, we first recall some standard terminology in metric geometry. Let $(X,\mathop{\mathit{d}} \nolimits_X)$ be a metric space. A rectifiable curve $\gamma:[0,1]\to X$ is called a \emph{geodesic} if its arclength coincides with the distance $\mathop{\mathit{d}} \nolimits_X(\gamma(0),\gamma(1))$ and it has constant speed, i.e., it is parameterized proportionally to arc length. We say that $(X,\mathop{\mathit{d}} \nolimits_X)$ is a \emph{geodesic space} if any two points in $X$ are joined by a geodesic between them. A complete metric space $T$ is called an \emph{$\mathbb{R}$-tree} if it has the following properties: \begin{itemize} \item[$(1)$]Any two points in $T$ are connected by a unique unit speed geodesic. \item[$(2)$]The image of every simple path in $T$ is the image of a geodesic.
\end{itemize} To answer Gromov's exercise in \cite[Section $3\frac{1}{2}.32$]{gromov}, the author proved the following theorem: \begin{thm}[{cf.~\cite[Proposition 5.1]{funano2}}]\label{t4.3.1}For any $\kappa>0$ and finite Borel measure $\nu$ on $T$ with $m:=\nu(T)$, there exists a point $x_{\nu}\in T$ such that \begin{align*} \nu \Big(B_T \Big(x_{\nu},\mathop{\mathrm{Sep}} \nolimits\Big(\nu; \frac{\kappa}{2},\frac{m}{3} \Big) \Big) \Big)\geq m-\kappa. \end{align*} \end{thm} \begin{cor}[{cf.~\cite[Theorem 1.1]{funano2}}]Let $\{X_n\}_{n=1}^{\infty}$ be a L\'{e}vy family and $T$ an $\mathbb{R}$-tree. Then, for any $\kappa >0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_T(X_n;-\kappa)=0. \end{align*} \end{cor} By Proposition \ref{p3.1} and Theorem \ref{t4.3.1}, the following proposition follows from the same proof as that of Proposition \ref{p4.1.2}. \begin{prop}Assume that a compact metric group $G$ continuously acts on an $\mathbb{R}$-tree $T$. Then, for any $x\in T$ and $\kappa\in (0,1/4)$, there exists a point $ z_{x,\kappa}\in T$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_T(z_{x,\kappa},gz_{x,\kappa} )\leq \omega_x \Big( +\mathop{\mathrm{Sep}} \nolimits\Big(G;\kappa,\frac{1}{3}\Big)\Big) + \rho\Big( \omega_x \Big( +\mathop{\mathrm{Sep}} \nolimits\Big(G;\kappa,\frac{1}{3}\Big)\Big)\Big) \end{align*}for any $g\in G$. Put $r:=\lim_{\kappa\uparrow 1/4}\mathop{\mathrm{Sep}} \nolimits (G;\kappa,\kappa)$. Then, there also exists a point $z_{x} \in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_T (z_{x} ,gz_{x})\leq \ &\min \{2\omega_x (+r) + \rho(+2\omega_x(+r)), 2\omega_x (+r)+ 2\rho(+\omega_x (+r)) \} \end{align*}for any $g\in G$. \end{prop} \subsection{Case of doubling spaces}Throughout this subsection, we consider the case where the metric space $X$ is a doubling space. A complete metric space $X$ is called a \emph{doubling space} if there exist $R_1>0$ and a function $D=D_X:(0,R_1]\to (0,+\infty)$ satisfying the following condition: Every closed ball with radius $2r_1\leq 2R_1$ is covered by at most $D(r_1)$ closed balls with radius $r_1$. This condition is equivalent to the following condition: There exists a function $C=C_X=C(r_1,r_2):(0, 2R_{1}]\times (0,2R_1]\to (0,+\infty)$ such that for every $(r_1,r_2)\in (0,2R_1]\times (0,2R_1]$, every $r_1$-separated subset in any closed ball in $X$ with radius $r_2$ contains at most $C(r_1,r_2)$ elements. For example, a complete Riemannian manifold with Ricci curvature bounded from below is a doubling space (see the proof of Corollary \ref{c4.4.3}). Although the proof of the following theorem is analogous to that of \cite[Theorem 1.3]{funad}, we give it for completeness. \begin{thm}\label{t4.4.1}Let $X$ be a doubling space and $\nu$ a finite Borel measure on $X$ with $m:= \nu(X)$. Assume that a positive number $r_0$ satisfies \begin{align*} r_0> \max \Big\{ \mathop{\mathrm{Sep}} \nolimits \Big(\nu;\kappa, \frac{m}{C(r_0,5r_0)}\Big), \mathop{\mathrm{Sep}} \nolimits \Big( \nu; \frac{m-\kappa}{3},\frac{m-\kappa}{3} \Big), \mathop{\mathrm{Sep}} \nolimits \Big(\nu; \frac{m-\kappa}{3}, \kappa \Big)\Big\} \end{align*}for some $\kappa >0$. Then there exists a point $x_{0}\in X$ such that $\nu ( B_X(x_{0},3r_0))\geq m-\kappa$. \begin{proof}Take a maximal $r_0$-separated set $\{\xi_{\alpha} \}_{\alpha\in \mathcal{A}}$ of $X$.
From the doubling property of $X$, there exists $\alpha_0\in \mathcal{A}$ such that \begin{align*} k:= \# \{ \beta\in \mathcal{A} \mid \xi_{\beta}\in B_X(\xi_{\alpha_0}, 5r_0) \} = \max_{\alpha \in \mathcal{A}} \#\{ \beta\in \mathcal{A} \mid \xi_{\beta}\in B_X(\xi_{\alpha}, 5r_0) \}\leq C(r_0,5r_0). \end{align*}Putting $\{\beta_1, \beta_2 ,\cdots , \beta_k\}:=\{ \beta\in \mathcal{A} \mid \xi_{\beta}\in B_X(\xi_{\alpha_0}, 5r_0) \} $, we take a subset $J_1 \subseteq \{ \xi_{\alpha}\}_{\alpha\in \mathcal{A}}$ which is maximal with respect to the properties that $J_1$ is $5r_0$-separated and $\xi_{\beta_1}\in J_1$, $\xi_{\beta_2}\not\in J_1$, $\cdots$, $\xi_{\beta_k} \not\in J_1$. We then take $J_2 \subseteq \{ \xi_{\alpha}\}_{\alpha \in \mathcal{A}} \setminus J_1$ which is maximal with respect to the properties that $J_2$ is $5r_0$-separated and $ \xi_{\beta_2}\in J_2$, $\xi_{\beta_3}\not\in J_2$, $\cdots$, $\xi_{\beta_k}\not\in J_2$. In the same way, we pick $J_3 \subseteq \{ \xi_{\alpha}\}_{\alpha \in \mathcal{A}}\setminus (J_1 \cup J_2)$, $\cdots$, $J_k\subseteq \{ \xi_{\alpha}\}_{\alpha\in \mathcal{A}} \setminus (J_1 \cup J_2 \cup \cdots \cup J_{k-1})$. We then have \begin{claim}\label{cl4.4.1}$\{\xi_{\alpha} \}_{\alpha \in \mathcal{A}} = J_1 \cup J_2 \cup \cdots \cup J_k$. \begin{proof}Suppose that $\xi_{\alpha}\not\in J_1 \cup J_2 \cup \cdots \cup J_k$ for some $\alpha \in \mathcal{A}$. Since each $J_i$ is maximal, there exists $\xi_{\alpha_i} \in J_i$ such that $\mathop{\mathit{d}} \nolimits_X(\xi_{\alpha}, \xi_{\alpha_i})<5r_0$ and $\xi_{\alpha}\neq \xi_{\alpha_i}$. We therefore obtain \begin{align*} k+1 \leq \# \{ \xi_{\alpha}, \xi_{\alpha_1}, \xi_{\alpha_2}, \cdots , \xi_{\alpha_k} \}\leq \# \{ \beta \in \mathcal{A}\mid \xi_{\beta}\in B_X(\xi_{\alpha},5r_0)\}\leq k, \end{align*}which is a contradiction. This completes the proof of the claim. \end{proof} \end{claim} By Claim \ref{cl4.4.1}, we have $X= \bigcup_{i=1}^k\bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha}, r_0)$. Hence there exists $i$, $1\leq i\leq k$ such that \begin{align*} \nu\Big(\bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha},r_0)\Big)\geq \frac{m}{k}\geq \frac{m}{C(r_0,5r_0)}. \end{align*}We then have \begin{claim}\label{cl4.4.2} \begin{align*} \nu \Big( \bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha},2r_0) \Big)\geq m-\kappa. \end{align*} \begin{proof}Supposing that $\nu ( \bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha},2r_0) )< m-\kappa$, from the assumption of $r_0$, we have \begin{align*} r_0 \leq \mathop{\mathit{d}} \nolimits_X\Big(X\setminus \bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha}, 2r_0), \bigcup_{\xi_{\alpha}\in J_i}B_X(\xi_{\alpha},r_0)\Big)\leq \mathop{\mathrm{Sep}} \nolimits \Big(\nu;\kappa,\frac{m}{C(r_0,5r_0)}\Big)<r_0. \end{align*}This is a contradiction. This completes the proof of the claim. \end{proof} \end{claim} \begin{claim}\label{cl4.4.3}There exists $\xi_{\gamma}\in J_i$ such that $ \nu(B_X(\xi_{\gamma}, 2r_0))\geq (m-\kappa)/3$. \begin{proof}Suppose that $\nu(B_X(\xi_{\alpha}, 2r_0))< (m-\kappa)/3$ for any $\xi_{\alpha}\in J_i$. Then, by Claim \ref{cl4.4.2}, there exists $J_i'\subseteq J_i$ such that \begin{align*} \frac{m-\kappa}{3}\leq \nu \Big(\bigcup_{\xi_{\alpha}\in J_i'}B_X(\xi_{\alpha}, 2r_0)\Big)< \frac{2(m-\kappa)}{3}. 
\end{align*}Thus, putting $J_i'':= J_i \setminus J_i'$, from the assumption of $r_0$, we get \begin{align*} r_0 \leq \mathop{\mathit{d}} \nolimits_X \Big( \bigcup_{\xi_{\alpha}\in J_i'}B_X(\xi_{\alpha}, 2r_0), \bigcup_{\xi_{\alpha}\in J_i''}B_X(\xi_{\alpha}, 2r_0) \Big)\leq \mathop{\mathrm{Sep}} \nolimits \Big(\nu;\frac{m-\kappa}{3}, \frac{m-\kappa}{3}\Big)<r_0. \end{align*}This is a contradiction. This completes the proof of the claim. \end{proof} \end{claim}Combining Claim \ref{cl4.4.3} with the same method as in the proof of Claim \ref{cl4.4.2}, we finally obtain $\nu(B_X(\xi_{\gamma}, 3r_0))\geq m-\kappa$. This completes the proof of the theorem. \end{proof} \end{thm} By Corollary \ref{c2.1.1} and Theorem \ref{t4.4.1}, we get the following corollary: \begin{cor}[{cf.~\cite[Theorem 1.3]{funad}}]\label{c4.4.1}Let $\{X_n\}_{n=1}^{\infty}$ be a L\'{e}vy family and $X$ a doubling space. Then, for any $\kappa >0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_X(X_n;-\kappa)=0. \end{align*} \end{cor} Applying Theorem \ref{t4.4.1} to Proposition \ref{p3.1}, we obtain the following proposition: \begin{prop}\label{p4.4.1}Let a compact metric group $G$ continuously act on a doubling space $X$. Assume that a positive number $r_0$ satisfies \begin{align*} r_0 >\ & \max \Big\{ \omega_{x}\Big(+\mathop{\mathrm{Sep}} \nolimits \Big(G;\kappa, \frac{1}{C(r_0,5r_0)}\Big)\Big), \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits \Big( G; \frac{1-\kappa}{3},\frac{1-\kappa}{3} \Big)\Big), \\ \ & \hspace{9cm}\omega_x \Big(+\mathop{\mathrm{Sep}} \nolimits \Big(G; \frac{1-\kappa}{3}, \kappa \Big) \Big) \Big\} \end{align*}for some $x\in X$ and $\kappa\in (0,1/2)$. Then there exists a point $z_{x,\kappa}\in X$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X(z_{x,\kappa},gz_{x,\kappa})\leq 3r_0 +\rho(3r_0) \end{align*}for any $g\in G$. Moreover, there exists a point $z_{x,\kappa}'\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_{x,\kappa}',gz_{x,\kappa}')\leq \min \{ 6r_0 + \rho(6r_0), 6r_0 + 2\rho(3r_0) \} \end{align*}for any $g\in G$. \end{prop} We next consider the case where the function $D=D_X:(0,+\infty )\to (0,+\infty)$ is a constant function. This is equivalent to the following condition: The function $C=C_X:(0,+\infty)\times (0,+\infty)\to (0,+\infty)$ satisfies that $C(\alpha r, \alpha s)= C(r,s)$ for any $r,s, \alpha>0$. We call such a metric space a \emph{large scale doubling space}. By Theorem \ref{t4.4.1}, we obtain the following corollary: \begin{cor}\label{c4.4.2}Let $X$ be a large scale doubling space and $\nu$ a finite Borel measure on $X$ with $m:=\nu(X)$ and put \begin{align*} r_{\kappa}:=\max \Big\{ \mathop{\mathrm{Sep}} \nolimits \Big(\nu;\kappa, \frac{m}{C(1,5)}\Big), \mathop{\mathrm{Sep}} \nolimits \Big( \nu; \frac{m-\kappa}{3},\frac{m-\kappa}{3} \Big), \mathop{\mathrm{Sep}} \nolimits \Big(\nu; \frac{m-\kappa}{3}, \kappa \Big)\Big\} \end{align*}for $\kappa>0$. Then, there exists a point $x_{\kappa}\in X$ such that $\nu(B_X(x_{\kappa},3r_{\kappa}))\geq m-\kappa$. \end{cor} Applying Corollary \ref{c4.4.2} to Proposition \ref{p3.1}, we obtain the following proposition: \begin{prop}\label{p4.4.2}Assume that a compact metric group $G$ continuously acts on a large scale doubling space $X$.
Put \begin{align*} r_{x,\kappa} := \ &\max\Big\{ \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits\Big(G;\kappa, \frac{1}{C(1,5)}\Big)\Big), \omega_x\Big( +\mathop{\mathrm{Sep}} \nolimits\Big(G;\frac{1-\kappa}{3}, \frac{1-\kappa}{3}\Big) \Big), \\ & \hspace{8cm}\omega_x\Big(+ \mathop{\mathrm{Sep}} \nolimits\Big( G;\frac{1-\kappa}{3},\kappa \Big)\Big) \Big\} \end{align*}for $x\in X$ and $\kappa>0$. Then, for any $\kappa \in (0,1/2)$, there exists a point $z_{x,\kappa}\in X$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_{x,\kappa},gz_{x,\kappa}) \leq \ & 3r_{x,\kappa}+ \rho (3r_{x,\kappa} ) \end{align*}for any $g\in G$. There also exists a point $z_{x,\kappa}' \in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_{x,\kappa}',gz_{x,\kappa}')\leq \min \{ 6r_{x,\kappa}+\rho(6r_{x,\kappa}), 6r_{x,\kappa}+2\rho(3r_{x,\kappa}) \} \end{align*}for any $g\in G$. \end{prop} Assume that a complete metric space $X$ has a doubling measure $\nu_X$, that is, $\nu_X$ is a (not necessarily finite) Borel measure on $X$ having the following properties: $X=\mathop{\mathrm{Supp}} \nolimits \nu_X$ and there exists a constant $C=C(X)>0$ such that \begin{align} \nu_X(B_X(x,2r))\leq C \nu_X(B_X(x,r)) \end{align}for any $x\in X$ and $r>0$. For example, by virtue of the Bishop-Gromov volume comparison theorem, the volume measure of an $n$-dimensional complete Riemannian manifold $M$ with nonnegative Ricci curvature is a doubling measure with $C(M)=2^n$. \begin{lem}[{cf.~\cite[Lemma 2.1]{funad}}]\label{l4.4.1}Let $(X,\nu_X)$ be a complete metric space with a doubling measure $\nu_X$. Then, for any $0< r_1\leq r_2$ and $x,y\in X$ with $x\in B_X(y,r_2)$, we have \begin{align*} \frac{\nu_X(B_X(x,r_1))}{\nu_X(B_X(y,r_2))}\geq \frac{1}{C^2}\Big(\frac{r_1}{r_2}\Big)^{\log_2 C}=C^{\log_2 \frac{r_1}{r_2}-2}. \end{align*} \end{lem} \begin{cor}\label{c4.4.3}The space $(X,\nu_X)$ is a large scale doubling space with $C_X(r_1,r_2)\leq C^{2+\log_2 \{ (r_1+2r_2)/r_1\}}$. In particular, we have $C_X(1,5)\leq C^{2+\log_2 11}$. \begin{proof}Given any $x\in X$ and $r_1,r_2>0$ with $r_2\geq r_1$, we let $\{ \xi_{\alpha}\}_{\alpha \in \mathcal{A}} \subseteq B_X(x,r_2)$ be an arbitrary $r_1$-separated set. Note that the closed balls $B_X(\xi_{\alpha}, 2^{-1}r_1 -\varepsilon)$ are mutually disjoint for any $\varepsilon>0$. We hence have \begin{align*} \nu_X(B_X(x, 2^{-1}r_1 +r_2))\geq \ &\nu_X\Big(\bigcup_{\alpha \in \mathcal{A}}B_X(\xi_{\alpha}, 2^{-1}r_1-\varepsilon) \Big)\\ =\ & \sum_{\alpha \in \mathcal{A}} \nu_X (B_X(\xi_{\alpha}, 2^{-1}r_1-\varepsilon))\\ \geq \ & \nu_X(B_X(\xi_{\alpha_0}, 2^{-1}r_1-\varepsilon))\# \mathcal{A}, \end{align*}where $\nu_X(B_X(\xi_{\alpha_0}, 2^{-1}r_1-\varepsilon))= \min_{\alpha\in \mathcal{A}}\nu_X(B_X(\xi_{\alpha}, 2^{-1}r_1-\varepsilon))$. Applying this to Lemma \ref{l4.4.1}, we obtain \begin{align*} \# \mathcal{A} \leq \frac{\nu_X(B_X(x, 2^{-1}r_1+r_2))}{\nu_X (B_X(\xi_{\alpha_0}, 2^{-1}r_1-\varepsilon))}\leq C^{2+ \log_2 \{ (r_1+2r_2)/ (r_1-2\varepsilon)\}}. \end{align*}Letting $\varepsilon \downarrow 0$, this completes the proof. \end{proof} \end{cor} Combining Corollary \ref{c4.4.2} with Corollary \ref{c4.4.3}, we obtain the following corollary: \begin{cor}\label{c4.4.4}Let $\nu$ be a finite Borel measure on $(X,\nu_X)$ with $m:=\nu(X)$.
Put \begin{align*} r_{\kappa}:= \max \Big\{ \mathop{\mathrm{Sep}} \nolimits (\nu;\kappa,C^{-2-\log_2 11}), \mathop{\mathrm{Sep}} \nolimits \Big( \nu;\frac{m-\kappa}{3}, \frac{m-\kappa}{3} \Big), \mathop{\mathrm{Sep}} \nolimits\Big( \nu; \frac{m-\kappa}{3} , \kappa\Big) \Big\} \end{align*}for $\kappa>0$. Then, there exists a point $x_{\kappa}\in X$ such that $\nu(B_X (x_{\kappa}, 3r_{\kappa}))\geq m-\kappa$. In particular, we have $\mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq 6r_{\kappa}$. \end{cor} By using Corollary \ref{c4.4.4}, we obtain the following proposition: \begin{prop}\label{p4.4.3}Assume that a compact metric group $G$ continuously acts on $(X,\nu_X)$. Put \begin{align*} r_{x,\kappa}:= &\max \Big\{ \omega_x (+\mathop{\mathrm{Sep}} \nolimits (G;\kappa,C^{-2-\log_2 11})), \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits \Big( G;\frac{1-\kappa}{3}, \frac{1-\kappa}{3} \Big)\Big), \\ &\hspace{10cm} \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits\Big( G; \frac{1-\kappa}{3} , \kappa\Big)\Big) \Big\} \end{align*}for $x\in X$ and $\kappa>0$. Then, for any $\kappa \in (0,1/2)$, there exists a point $z_{x,\kappa}\in X$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_{x,\kappa},gz_{x,\kappa}) \leq \ & 3r_{x,\kappa} + \rho(3r_{x,\kappa}) \end{align*}for any $g\in G$. There also exists a point $z_{x,\kappa}' \in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_X (z_{x,\kappa}',gz_{x,\kappa}')\leq \min \{ 6r_{x,\kappa}+\rho(6r_{x,\kappa}), 6r_{x,\kappa}+2\rho(3r_{x,\kappa}) \} \end{align*}for any $g\in G$. \end{prop} \begin{cor}Assume that a compact metric group $G$ continuously acts on an $n$-dimensional complete Riemannian manifold $M$ with nonnegative Ricci curvature. Put \begin{align*} r_{x,\kappa}:= &\max \Big\{ \omega_x (+\mathop{\mathrm{Sep}} \nolimits (G;\kappa,2^{-(2+\log_2 11)n})), \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits \Big( G;\frac{1-\kappa}{3}, \frac{1-\kappa}{3} \Big)\Big), \\ &\hspace{10cm} \omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits\Big( G; \frac{1-\kappa}{3} , \kappa\Big)\Big) \Big\} \end{align*}for $x\in M$ and $\kappa>0$. Then, for any $x\in M$ and $\kappa \in (0,1/2)$, there exists a point $z_{x,\kappa}\in M$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_M (z_{x,\kappa},gz_{x,\kappa}) \leq \ & 3r_{x,\kappa} + \rho(3r_{x,\kappa}) \end{align*}for any $g\in G$. There also exists a point $z_{x,\kappa}' \in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_M (z_{x,\kappa}',gz_{x,\kappa}')\leq \min \{ 6r_{x,\kappa}+\rho(6r_{x,\kappa}), 6r_{x,\kappa}+2\rho(3r_{x,\kappa}) \} \end{align*}for any $g\in G$. \end{cor} \subsection{Case of metric graphs}In this subsection, we treat the case where $X$ is a metric graph. Let $\Gamma=(V,E)$ be a (possibly infinite) undirected connected combinatorial graph, that is, $\Gamma$ is a $1$-dimensional cell complex with the set $V$ of vertices and the set $E$ of edges. We allow the graph $\Gamma$ to have multiple edges and loops. To each edge with endpoints $v,w\in V$ we assign a positive number $a_{vw}$ such that $a_{\Gamma}:=\inf a_{vw}>0$, where the infimum runs over all edges. Every edge is identified with a bounded closed interval or a circle in $\mathbb{R}^2$ with length $a_{vw}$, where $v$ and $w$ are endpoints of the edge. We then define the distance between two points in $\Gamma$ to be the infimum of the lengths of paths joining them. The graph $\Gamma$ together with such a distance function is called a \emph{metric graph}.
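For instance, a single vertex $v$ with one loop $\ell$ of length $a$ attached to it yields the circle of circumference $a$; the distance between two points on $\ell$ is then the length of the shorter of the two arcs joining them. This circle case is the one treated in Lemma \ref{l4.5.1} below.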
\begin{lem}\label{l4.5.1}Let $(C,\mathop{\mathit{d}} \nolimits_C)$ be a circle in $\mathbb{R}^2$ with the Riemannian distance function $\mathop{\mathit{d}} \nolimits_C$ and $\nu $ a finite Borel measure on $C$ with $m:=\nu(C)$. Then, for any $\kappa>0$, we have \begin{align*} \mathop{\mathrm{diam}} \nolimits(\nu,m-\kappa)\leq \frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big(\nu; \frac{\kappa}{4}, \frac{\kappa}{4} \Big). \end{align*} \begin{proof}Note that \begin{align*} \mathop{\mathit{d}} \nolimits_{\mathbb{R}^2}(x,y)\leq \mathop{\mathit{d}} \nolimits_C(x,y) \leq \frac{\pi}{2} \mathop{\mathit{d}} \nolimits_{\mathbb{R}^2}(x,y) \end{align*}for any $x,y\in C$. Denoting by $\mathop{\mathrm{pr}} \nolimits_i:\mathbb{R}^2 \ni (x_1,x_2)\mapsto x_i\in \mathbb{R}$ the projection, by using Proposition \ref{p4.1.1} and Lemma \ref{l2.1.2}, we therefore obtain \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)=\ & \mathop{\mathrm{diam}} \nolimits (\nu|_{(C,\mathop{\mathit{d}} \nolimits_C)},m-\kappa)\\ \leq \ &\frac{\pi}{2}\mathop{\mathrm{diam}} \nolimits (\nu|_{(C,\mathop{\mathit{d}} \nolimits_{\mathbb{R}^2})},m-\kappa)\\ \leq \ &\frac{\pi}{\sqrt{2}}\max_{i=1,2} \mathop{\mathrm{diam}} \nolimits\Big( (\mathop{\mathrm{pr}} \nolimits_i)_{\ast}( \nu|_{(C,\mathop{\mathit{d}} \nolimits_{\mathbb{R}^2})}),m-\frac{\kappa}{2} \Big)\\ \leq \ &\frac{\pi}{\sqrt{2}}\max_{i=1,2} \mathop{\mathrm{Sep}} \nolimits \Big( (\mathop{\mathrm{pr}} \nolimits_i)_{\ast}( \nu|_{(C,\mathop{\mathit{d}} \nolimits_{\mathbb{R}^2})});\frac{\kappa}{4},\frac{\kappa}{4} \Big)\\ \leq \ &\frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big( \nu|_{(C,\mathop{\mathit{d}} \nolimits_{\mathbb{R}^2})};\frac{\kappa}{4},\frac{\kappa}{4} \Big)\\ \leq \ & \frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big( \nu;\frac{\kappa}{4},\frac{\kappa}{4} \Big). \end{align*}This completes the proof. \end{proof} \end{lem} For every edge $e\in E$ and $r>0$, we put $e_{-r}:=\{ x\in e \mid \mathop{\mathit{d}} \nolimits_{\Gamma}(x,v)>r \text{ and } \mathop{\mathit{d}} \nolimits_{\Gamma}(x,w)>r\}$, where $v$ and $w$ are endpoints of the edge $e$. \begin{thm}\label{t4.5.1}Let $\nu$ be a finite Borel measure on a metric graph $\Gamma$ with $m:=\nu(\Gamma)$. Assume that positive numbers $a,\kappa,\kappa'$ satisfy that $\kappa'< \kappa$, $a<a_{\Gamma}$, and \begin{align*} \max \Big\{ 2\mathop{\mathrm{Sep}} \nolimits \Big( \nu;\frac{\kappa}{3},\frac{\kappa}{3}\Big), 4\mathop{\mathrm{Sep}} \nolimits \Big( \nu;\frac{m-\kappa}{3}, \kappa'\Big) \Big\}<a. \end{align*}Then, we have \begin{align}\label{s4.5.1} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq \max \Big\{ \frac{a}{2}+2\mathop{\mathrm{Sep}} \nolimits \Big(\nu; \frac{\kappa}{3},\kappa\Big), \frac{\pi}{\sqrt{2}}\mathop{\mathrm{Sep}} \nolimits \Big(\nu;\frac{\kappa-\kappa'}{4},\frac{\kappa-\kappa'}{4}\Big) \Big\}. \end{align} \begin{proof}We first consider the case of $\nu (\bigcup_{v\in V}B_X(v,a/4))\geq \kappa$. Since $\mathop{\mathrm{Sep}} \nolimits (\nu;\kappa/3,\kappa/3)<a/2$, as in the proof of Claim \ref{cl4.4.3}, there exists a vertex $v\in V$ such that $\nu (B_X(v,a/4))\geq \kappa /3$. We thus obtain $\nu (B_X(v,a/4 +\mathop{\mathrm{Sep}} \nolimits(\nu;\kappa/3,\kappa)))\geq m-\kappa$, which implies (\ref{s4.5.1}). We consider the other case that $\nu (X\setminus \bigcup_{v\in V}B_X(v,a/4)) >m-\kappa$. By the same method as in the proof of Claim \ref{cl4.4.3}, either the following (1) or (2) holds: (1) There exists an edge $e\in E$ such that $e$ is not a loop and $\nu (e_{-a/4})\geq (m-\kappa)/3$.
(2) There exists a loop $\ell\in E$ with $\nu (\ell_{-a/4})\geq (m-\kappa)/3$. If (1) holds, combining the same argument as in the proof of Claim \ref{cl4.4.2} with $\mathop{\mathrm{Sep}} \nolimits(\nu;(m-\kappa)/3,\kappa')<a/4$, we then have $\nu (e)\geq m-\kappa'$. We therefore obtain \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq \ &\mathop{\mathrm{diam}} \nolimits (\nu|_{e},m-\kappa)\\ =\ &\mathop{\mathrm{diam}} \nolimits (\nu|_{e}, \nu(e)-(\nu(e)-m+\kappa))\\ \leq \ &\mathop{\mathrm{Sep}} \nolimits \Big(\nu|_{e}; \frac{\nu(e)-m+\kappa}{2},\frac{\nu(e)-m+\kappa}{2}\Big)\\ \leq \ & \mathop{\mathrm{Sep}} \nolimits \Big(\nu; \frac{\kappa-\kappa'}{2}, \frac{\kappa-\kappa'}{2}\Big). \end{align*}If (2) holds, by Claim \ref{cl4.4.2} and $\mathop{\mathrm{Sep}} \nolimits(\nu;(m-\kappa)/3,\kappa')< a/4$, we then get $\nu(\ell)\geq m-\kappa'$. Applying Lemma \ref{l4.5.1}, we therefore obtain \begin{align*} \mathop{\mathrm{diam}} \nolimits (\nu,m-\kappa)\leq \ &\mathop{\mathrm{diam}} \nolimits (\nu|_{\ell},m-\kappa)\\ = \ & \mathop{\mathrm{diam}} \nolimits (\nu|_{\ell}, \nu(\ell)- (\nu(\ell)-m+\kappa))\\ \leq \ & \frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big( \nu|_{\ell}; \frac{\nu(\ell)-m+\kappa}{4}, \frac{\nu(\ell)-m+\kappa}{4}\Big)\\ \leq \ & \frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big( \nu|_{\ell}; \frac{\kappa -\kappa'}{4}, \frac{\kappa -\kappa'}{4}\Big)\\ \leq \ & \frac{\pi}{\sqrt{2}} \mathop{\mathrm{Sep}} \nolimits \Big( \nu ; \frac{\kappa -\kappa'}{4}, \frac{\kappa -\kappa'}{4}\Big). \end{align*}This completes the proof of the theorem. \end{proof} \end{thm} \begin{cor}Let $\{X_n\}_{n=1}^{\infty}$ be a L\'{e}vy family and $\Gamma$ a metric graph. Then, for any $\kappa >0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{\Gamma}(X_n;-\kappa)=0. \end{align*} \end{cor} By virtue of Theorem \ref{t4.5.1}, we obtain the following: \begin{prop}Assume that a compact metric group $G$ continuously acts on a metric graph $\Gamma$. We also assume that positive numbers $a,\kappa,\kappa'$ and a point $x\in \Gamma$ satisfy that $\kappa'< \kappa$, $a<a_{\Gamma}$, and \begin{align*} \max\Big\{2\omega_x\Big(+\mathop{\mathrm{Sep}} \nolimits\Big(G;\frac{\kappa}{3}, \frac{\kappa}{3}\Big)\Big), 4\omega_x \Big(+\mathop{\mathrm{Sep}} \nolimits \Big(G;\frac{1-\kappa}{3}, \kappa'\Big)\Big)\Big\}<a. \end{align*}Put \begin{align*} s_{x,\kappa,\kappa'}:= \max \Big\{ \frac{a}{2} + 2\omega_x \Big(+\mathop{\mathrm{Sep}} \nolimits \Big(G;\frac{\kappa}{3},\kappa \Big)\Big), \frac{\pi}{\sqrt{2}}\omega_x \Big(+\mathop{\mathrm{Sep}} \nolimits \Big( G;\frac{\kappa-\kappa'}{4},\frac{\kappa-\kappa'}{4}\Big)\Big)\Big\}. \end{align*}Then, there exists a point $z_{x,\kappa,\kappa'}\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_{\Gamma}( z_{x,\kappa,\kappa'}, gz_{x,\kappa,\kappa'})\leq s_{x,\kappa,\kappa'}+ \rho(s_{x,\kappa,\kappa'}) \end{align*}for any $g\in G$. \end{prop} \subsection{Case of Hadamard manifolds} In this subsection, we consider the case where $X$ is a Hadamard manifold $N$, i.e., a complete simply connected Riemannian manifold with nonpositive sectional curvature. The following theorem was obtained in \cite[Theorem 1.3]{funano1}. \begin{thm}Let $\{X_n\}_{n=1}^{\infty}$ be a L\'{e}vy family and $N$ a Hadamard manifold. Then, for any $\kappa >0$, we have \begin{align*} \lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_N (X_n;-\kappa)=0. \end{align*} \end{thm} \subsubsection{Central radius} Let $N$ be a Hadamard manifold.
For a finite Borel measure $\nu$ on $N$ with compact support, we indicate the center of mass of $\nu$ by $c(\nu)$. Given any $\kappa>0$, putting $m:= \nu(N)$, we define the \emph{central radius} $\mathop{\mathrm{CRad}} \nolimits(\nu,m-\kappa)$ of $\nu$ as the infimum of $\rho>0$ such that $\nu(B_N(c(\nu),\rho))\geq m-\kappa$. \begin{prop}[{cf.~\cite[Proposition 5.4]{sturm}}]\label{p4.6.1.1}For a finite Borel measure $\nu$ on $\mathbb{R}^k$ with compact support, we have \begin{align*} c(\nu)=\frac{1}{\nu(\mathbb{R}^k)}\int_{\mathbb{R}^k}x d\nu(x). \end{align*} \end{prop} \begin{prop}[{cf.~\cite[Proposition 5.10]{sturm}}]\label{p4.6.1.2}Let $N$ be a Hadamard manifold and $\nu$ a finite Borel measure on $N$ with compact support. Then, $x=c(\nu)$ if and only if \begin{align*} \int_N \exp_x^{-1}(y)d\nu(y)=0. \end{align*}In particular, identifying the tangent space of $N$ at the point $c(\nu)$ with the Euclidean space of the same dimension as $N$, we have $c((\exp_{c(\nu)}^{-1})_{\ast}(\nu))=0$. \end{prop} Proposition \ref{p3.1} directly implies the following corollary: \begin{cor}\label{c4.6.1.1}Assume that a compact metric group $G$ continuously acts on a Hadamard manifold $N$ and put $r_x:=\lim_{\kappa \uparrow 1/2}\mathop{\mathrm{CRad}} \nolimits(\nu_{G,x},1-\kappa)$ for $x\in N$. Then, we have \begin{align*} \mathop{\mathit{d}} \nolimits_N(c(\nu_{G,x}), gc(\nu_{G,x}))\leq r_x+ \rho(+r_x) \end{align*}for any $g\in G$. Moreover, there exists a point $z_{x}\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_N(z_{x},gz_{x}) \leq \ &\min \{ 2r_x+\rho(+2r_x), 2r_x+2\rho (+r_x) \} \end{align*}for any $g\in G$. \end{cor} \subsubsection{H\"{o}lder actions} In this subsubsection, we consider a H\"{o}lder action of a compact Lie group on a Hadamard manifold. Let a compact Lie group $G$ act on a Hadamard manifold $N$. We shall consider the case where $\omega_x(\eta)\leq C_1 \eta^{\alpha}$ holds for some $x\in N$ and $C_1,\alpha>0$. Combining Gromov's observation in \cite[Section 13]{gromovcat} with the one in \cite[Section $3\frac{1}{2}.41$]{gromov}, we obtain the following theorem: \begin{thm}\label{t4.6.2.1}Let $M$ be a compact Riemannian manifold and $N$ a Hadamard manifold. Assume that a continuous map $f:M\to N$ satisfies that \begin{align*} \mathop{\mathit{d}} \nolimits_N(f(x),f(y))\leq C_1 \mathop{\mathit{d}} \nolimits_M (x,y)^{\alpha} \end{align*}for some $C_1>0$, $\alpha> 1$, and all $x,y\in M$. Then, the map $f:M\to N$ is a constant map. \begin{proof}Put $\mathbb{E}(f):=c(f_{\ast}(\mu_M))$. We shall prove that $\mathop{\mathrm{Supp}} \nolimits f_{\ast}(\mu_M)= \{ \mathbb{E}(f)\}$, which implies the theorem. Suppose that $\mathop{\mathrm{Supp}} \nolimits f_{\ast}(\mu_M) \neq \{ \mathbb{E}(f)\}$. We identify the tangent space of $N$ at $\mathbb{E}(f)$ with the Euclidean space $\mathbb{R}^k$, where $k$ is the dimension of $N$. According to the hinge theorem (see \cite[Chapter IV, Remark 2.6]{sakai}), the map $\exp_{\mathbb{E}(f)}^{-1}:N\to \mathbb{R}^k$ is $1$-Lipschitz. Since the map $\exp^{-1}_{\mathbb{E}(f)}$ is isometric on rays issuing from $\mathbb{E}(f)$ and $\mathop{\mathrm{Supp}} \nolimits f_{\ast}(\mu_M)\neq \{ \mathbb{E}(f)\}$, we have \begin{align*} \int_M |(\exp_{\mathbb{E} (f)}^{-1} \circ f)(x)|^2 d\mu_M(x)= \int_M \mathop{\mathit{d}} \nolimits_N(f(x),\mathbb{E}(f))^2 d\mu_M(x)>0.
\end{align*}Denoting by $((\exp_{\mathbb{E} (f)}^{-1} \circ f)(x))_i$ the $i$-th component of $(\exp_{\mathbb{E} (f)}^{-1} \circ f)(x)$, we hence see that there exists $i_0$ such that \begin{align*} \int_M |((\exp_{\mathbb{E} (f)}^{-1} \circ f)(x))_{i_0}|^2d\mu_M(x)>0. \end{align*}Putting $\varphi:= (\exp_{\mathbb{E} (f)}^{-1} \circ f)_{i_0}$, we observe that \begin{align*} \| \mathop{\mathrm{grad}} \nolimits_x \varphi \|=\limsup_{y \to x}\frac{|\varphi(y)-\varphi (x)|}{\mathop{\mathit{d}} \nolimits_M(y,x)}\leq \limsup_{y\to x} \frac{C_1\mathop{\mathit{d}} \nolimits_{M}(y,x)^{\alpha}}{\mathop{\mathit{d}} \nolimits_M(y,x)}= 0 \end{align*}and the function $\varphi$ has mean zero by Proposition \ref{p4.6.1.2}. We therefore obtain \begin{align*} 0<\lambda_1(M)= \inf \frac{\int_M \| \mathop{\mathrm{grad}} \nolimits_x g \|^2 d\mu_M (x)}{\int_M g(x)^2 d\mu_M(x)} \leq \frac{\int_M \| \mathop{\mathrm{grad}} \nolimits_x \varphi\|^2 d\mu_M(x)}{\int_M \varphi(x)^2 d\mu_M(x)}=0, \end{align*}where the infimum is taken over all nontrivial Lipschitz maps $g:M\to \mathbb{R}$ with mean zero. This is a contradiction. This completes the proof. \end{proof} \end{thm} \begin{cor}\label{c4.6.2.1}Assume that a compact Lie group $G$ continuously acts on a Hadamard manifold $N$. We also assume that there exists a point $x\in N$ such that the condition $\omega_x(\eta)\leq C_1 \eta^{\alpha}$ holds for some $C_1>0$ and $\alpha>1$. Then, the point $x$ is a fixed point. \end{cor} Assume that a compact metric group $G$ continuously acts on a Hadamard manifold $N$. In view of Corollary \ref{c4.6.2.1}, we shall consider the case of $0<\alpha\leq 1$. We assume that a compact metric group $G$ satisfies that \begin{align}\label{s4.6.2.1} \alpha_G(r)\leq C_2 e^{-C_3 r^{\beta}} \text{ for some }C_2,C_3, \beta>0. \end{align}See Examples \ref{exl1} and \ref{exl2} for examples. Let a compact metric group continuously act on a metric space $X$. For any $r>0$ and $x\in X$, we define $\omega_x^{-1}(r)$ as the infimum of $\mathop{\mathit{d}} \nolimits_G(g,g')$, where $g$ and $g'$ run over all elements in $G$ such that $\mathop{\mathit{d}} \nolimits_X(gx,g'x)\geq r$. \begin{lem}\label{l4.6.2.1}Assume that a compact metric group $G$ continuously acts on a metric space $X$. Then, for any $x\in X$, we have \begin{align*} \alpha_{(X,\nu_{G,x})}(r)\leq \alpha_G(\omega_x^{-1}(r)). \end{align*} \begin{proof}Let $A\subseteq X$ be any Borel subset such that $\nu_{G,x}(A)\geq 1/2$. From the definition of $\omega_x^{-1}(r)$, we get \begin{align*} \{ g\in G \mid gx\in A\}_{+\omega_x^{-1}(r)}\subseteq \{ g\in G \mid gx \in A_{+r}\}. \end{align*}Since $\mu_G(\{ g\in G \mid gx\in A\})\geq 1/2$, we hence obtain \begin{align*} \nu_{G,x}(X\setminus A_{+r})\leq \mu_G(G\setminus \{ g\in G \mid gx \in A\}_{+\omega_x^{-1}(r)})\leq \alpha_G(\omega_x^{-1}(r)). \end{align*}This completes the proof. \end{proof} \end{lem} \begin{lem}\label{l4.6.2.2}Let a compact metric group $G$ continuously act on a metric space $X$. Assume that a point $x\in X$ satisfies the following H\"{o}lder condition: \begin{align}\label{s4.6.2.2} \omega_x (\eta)\leq C_1 \eta^{\alpha} \text{ holds for some }C_1>0 \text{ and }0< \alpha\leq 1. \end{align}We also assume that the group $G$ satisfies the condition (\ref{s4.6.2.1}). Then, we have \begin{align*} \alpha_{(X,\nu_{G,x})}(r)\leq C_2 e^{-C_1^{-\beta/\alpha} C_3 r^{\beta /\alpha}}.
\end{align*} \begin{proof}By the assumption (\ref{s4.6.2.2}), $\mathop{\mathit{d}} \nolimits_{X}(gx,g'x)> C_1s^{\alpha}$ implies that $\mathop{\mathit{d}} \nolimits_G(g,g')>s$, that is, $\mathop{\mathit{d}} \nolimits_X(gx,g'x)\geq r$ yields that $\mathop{\mathit{d}} \nolimits_G(g,g')\geq (r/C_1)^{1/\alpha}$. We hence get $\omega_x^{-1}(r)\geq (r/C_1)^{1/\alpha}$. By using this and Lemma \ref{l4.6.2.1}, we obtain \begin{align*} \alpha_{(X,\nu_{G,x})}(r)\leq \alpha_G(\omega_x^{-1}(r))\leq \alpha_G ((r/C_1 )^{1/\alpha})\leq C_2 e^{-C_1^{-\beta/\alpha}C_3 r^{\beta /\alpha}}. \end{align*}This completes the proof. \end{proof} \end{lem} We denote by $\gamma_k$ the standard Gaussian measure on $\mathbb{R}^k$ with density $(2\pi)^{-k/2}e^{-|x|^2 /2}$. For any $p\geq 0$, we put \begin{align*} M_p:=\int_{\mathbb{R}}|s|^p d\gamma_1(s)=2^{p/2}\pi^{-1/2}\Gamma \Big(\frac{p+1}{2}\Big). \end{align*}The same proof as that of \cite[Theorem 1]{ledole} implies the following theorem: \begin{thm}[{cf.~\cite[Theorem 1]{ledole}}]\label{t4.6.2.2}Assume that an mm-space $X$ satisfies that $\alpha_X(r)\leq C_1e^{-C_2 r^p}$ for some $C_1,C_2>0$ and some $p\geq 1$. Then, for any $1$-Lipschitz function $f:X\to \mathbb{R}^k$ with mean zero, we have \begin{align*} \int_X|f(x)|^p d\mu_X(x)\leq \frac{C}{C_2 M_p}\int_{\mathbb{R}^k}|y|^p d\gamma_k(y) = \frac{C}{C_2 M_p}\cdot \frac{2^{p/2}\Gamma(\frac{p+k}{2})}{\Gamma (\frac{k}{2})} \approx \frac{Ck^{p/2}}{C_2}, \end{align*}where $C$ is a constant depending only on $p$ and $C_1$. \end{thm} \begin{thm}\label{t4.6.2.3}Let a compact metric group $G$ continuously act on a $k$-dimensional Hadamard manifold $N$. Assume that a point $x\in N$ satisfies the H\"{o}lder condition (\ref{s4.6.2.2}). We also assume that the group $G$ satisfies (\ref{s4.6.2.1}) and $\alpha\leq \beta$. Then, there exists a point $z_{x}\in Gx$ such that \begin{align}\label{s4.6.2.3} \mathop{\mathrm{diam}} \nolimits (G z_{x})\leq \frac{C C_1 k^{1/2}}{(C_3)^{\alpha/\beta}}+ \rho \Big( \frac{C C_1 k^{1/2}}{(C_3)^{\alpha/\beta}}\Big), \end{align}where $C$ is a constant depending only on $\alpha / \beta$ and $C_2$. \begin{proof}To apply Corollary \ref{c4.6.1.1}, we shall estimate $\mathop{\mathrm{CRad}} \nolimits (\nu_{G,x},1-\kappa)$ for $0<\kappa <1/2$ from above. Putting $z:=c(\nu_{G,x})$, as in the proof of Theorem \ref{t4.6.2.1}, we identify the tangent space of $N$ at $z$ with the Euclidean space $\mathbb{R}^k$. Since the map $\exp_z^{-1}:N\to \mathbb{R}^k$ is a $1$-Lipschitz map and $(\exp_z^{-1})_{\ast}(\nu_{G,x})$ has mean zero by Proposition \ref{p4.6.1.2}, by virtue of Lemma \ref{l4.6.2.2} and Theorem \ref{t4.6.2.2}, we have \begin{align*} \int_N \mathop{\mathit{d}} \nolimits_N (y,z)^{\beta/ \alpha}d\nu_{G,x}(y) = \int_N |(\exp_z^{-1})(y)|^{\beta /\alpha }d\nu_{G,x}(y)\leq \frac{C C_1^{\beta /\alpha}k^{\beta/(2\alpha)}}{C_3}, \end{align*}where $C$ is a constant depending only on $C_2$ and $\beta /\alpha$. Combining this inequality with the Chebyshev inequality, we hence get \begin{align*} \mathop{\mathrm{CRad}} \nolimits(\nu_{G,x},1-\kappa)\leq \frac{CC_1k^{1/2}}{(C_3 \kappa)^{\alpha /\beta}} \end{align*}for any $\kappa>0$. Applying Corollary \ref{c4.6.1.1}, we therefore obtain (\ref{s4.6.2.3}). This completes the proof. \end{proof} \end{thm} \subsubsection{Case of finite groups} In this subsubsection, we shall consider the case where $G$ is a finite group. Let $G$ be a finite group and $S\subseteq G\setminus \{e_G\}$ be a symmetric set of generators of $G$. We denote by $\Gamma (G,S)$ the \emph{Cayley graph} of $G$ with respect to $S$.
For such $S$, we shall consider the group $G$ as a metric group with respect to the Cayley graph distance function. Let $\Gamma =(V,E)$ be a simple finite graph, where \emph{simple} means that there is at most one edge joining two vertices and there are no loops from a vertex to itself. The discrete Laplacian $\triangle_{\Gamma}$ acts on functions $f$ on $V$ as follows: \begin{align*} \triangle_{\Gamma}f(x):= \sum_{y \sim x}(f(x)-f(y)), \end{align*}where $x \sim y$ means that $x$ and $y$ are connected by an edge. We denote by $\lambda_1(\Gamma)$ the first non-zero eigenvalue of the Laplacian $\triangle_{\Gamma}$. As in Theorem \ref{t4.6.2.1}, Gromov's observation in \cite[Section 13]{gromovcat} together with the one in \cite[Section $3\frac{1}{2}.41$]{gromov} implies the following lemma: \begin{lem}\label{l4.6.3.1}Let $S\subseteq G\setminus \{e_G\}$ be a symmetric set of generators of a finite group $G$ and assume that the group $G$ continuously acts on a $k$-dimensional Hadamard manifold $N$. Then, for any $x\in N$ and $\kappa>0$, we have \begin{align*} \mathop{\mathrm{CRad}} \nolimits(\nu_{G,x},1-\kappa)\leq \omega_x(1) \Big(\frac{k\#S}{2\kappa\lambda_1(\Gamma (G,S) )}\Big)^{1/2}. \end{align*} \begin{proof}Suppose that \begin{align}\label{s4.6.3.1} r:=\mathop{\mathrm{CRad}} \nolimits(\nu_{G,x},1-\kappa)> \omega_x(1) \Big(\frac{k\#S}{2\kappa\lambda_1(\Gamma (G,S) )}\Big)^{1/2}. \end{align}As in the proof of Theorem \ref{t4.6.2.1}, we identify the tangent space of $N$ at $z:=c(\nu_{G,x})$ with the Euclidean space $\mathbb{R}^k$. By the Chebyshev inequality, we get \begin{align*} \int_{G} |(\exp_z^{-1} \circ f_x)(g) |^2d \mu_G(g)= \int_G \mathop{\mathit{d}} \nolimits_N(f_x(g),z)^2 d\mu_G(g)\geq \kappa r^2. \end{align*}Hence, there exists $i_0$ such that \begin{align}\label{s4.6.3.2} \int_G ((\exp_z^{-1} \circ f_x)(g))_{i_0}^2d \mu_G(g)\geq \frac{\kappa r^2}{k}. \end{align} Putting $\varphi:= (\exp_z^{-1} \circ f_x)_{i_0}$, which has mean zero by Proposition \ref{p4.6.1.2}, by (\ref{s4.6.3.1}) and (\ref{s4.6.3.2}), we obtain \begin{align*} \lambda_1(\Gamma (G,S))=\ & \inf\frac{\sum_{g,g'\in G;g\sim g'} (f(g)- f(g'))^2}{2\sum_{g\in G} f(g)^2}\\ \leq \ & \frac{\sum_{g,g'\in G;g\sim g'}(\varphi(g)-\varphi(g'))^2}{2\sum_{g\in G} \varphi (g)^2}\\ \leq \ & \frac{\sum_{g,g'\in G;g\sim g'}\mathop{\mathit{d}} \nolimits_N(f_x(g),f_x(g'))^2}{2\sum_{g\in G} \varphi (g)^2}\\ \leq \ & \frac{ \#G \#S \cdot \omega_x(1)^2 }{2\#G\int_G \varphi(g)^2 d\mu_G(g)}\\ = \ & \frac{\omega_x(1)^2\#S}{2\int_{G}\varphi(g)^2 d\mu_G(g)}\\ \leq \ & \frac{\omega_x(1)^2 k \#S}{2\kappa r^2}\\ < \ & \lambda_1(\Gamma (G,S)), \end{align*}where the infimum is taken over all nontrivial functions $f:G\to \mathbb{R}$ such that $\sum_{g\in G}f(g)=0$. This is a contradiction. This completes the proof. \end{proof} \end{lem} Applying Lemma \ref{l4.6.3.1} to Corollary \ref{c4.6.1.1}, we obtain the following theorem: \begin{thm}Let $S\subseteq G\setminus \{e_G\}$ be a symmetric set of generators of a finite group $G$ and assume that the group $G$ continuously acts on a $k$-dimensional Hadamard manifold $N$. Then, for any $x\in N$, we have \begin{align*} \mathop{\mathit{d}} \nolimits_N(c(\nu_{G,x}),g c(\nu_{G,x})) \leq \omega_x(1) \Big(\frac{k \# S}{\lambda_1(\Gamma(G,S))}\Big)^{1/2}+ \rho \Big(+ \omega_x (1)\Big(\frac{k \# S}{\lambda_1(\Gamma(G,S))}\Big)^{1/2}\Big) \end{align*}for any $g\in G$.
There also exists a point $z_{x}\in Gx$ such that \begin{align*} \mathop{\mathit{d}} \nolimits_N (z_{x},gz_{x})\leq \ &\min \Big\{ 2\omega_x(1) \Big(\frac{k \# S}{ \lambda_1(\Gamma(G,S))}\Big)^{1/2} + \rho \Big( +2\omega_x(1) \Big(\frac{k \# S}{ \lambda_1(\Gamma(G,S))}\Big)^{1/2}\Big),\\ & \hspace{1cm} 2\omega_x(1) \Big(\frac{k \# S}{ \lambda_1(\Gamma(G,S))}\Big)^{1/2}+ 2 \rho \Big( + \omega_x(1) \Big(\frac{k \# S}{\lambda_1(\Gamma(G,S))}\Big)^{1/2}\Big) \Big\} \end{align*}for any $g\in G$. \end{thm} \section{L\'{e}vy group action} In this section, we discuss actions of L\'{e}vy groups on the concrete metric spaces that appeared in Section 3. A metrizable group $G$ is called a \emph{L\'{e}vy group} if it contains an increasing chain of compact subgroups $G_1\subseteq G_2 \subseteq \cdots \subseteq G_n \subseteq \cdots$ having an everywhere dense union in $G$ and such that for some right-invariant compatible distance function $\mathop{\mathit{d}} \nolimits_G$ on $G$ the groups $G_n$, $n\in \mathbb{N}$, equipped with the Haar measures $\mu_{G_n}$ normalized as $\mu_{G_n}(G_n)=1$ and the restrictions of the distance function $\mathop{\mathit{d}} \nolimits_G$, form a L\'{e}vy family. See \cite{milgro}, \cite{mil5}, \cite{pestov2}, \cite{pestov4} and references therein for more information about L\'{e}vy groups. Let a topological group $G$ act on a metric space $X$. The action is called \emph{bounded} if for any $\varepsilon >0$ there exists a neighbourhood $U$ of the identity element $e_{G}\in G$ such that $\mathop{\mathit{d}} \nolimits_X(x,gx)<\varepsilon$ for any $g\in U$ and $x\in X$. Note that every bounded action is continuous. \begin{lem}[{cf.~\cite[Theorem 1]{pestov4}}]\label{l3.2}Assume that a metric group $G$ with a right invariant distance function $\mathop{\mathit{d}} \nolimits_G$ boundedly acts on a metric space $X$. Then, the orbit maps $f_x:G\to X$ for all $x\in X$ are uniformly equicontinuous. \end{lem} We shall consider an action of a L\'{e}vy group on a metric space $X$ satisfying the following condition: ($\lozenge $): We have $\lim_{n\to \infty}\mathop{\mathrm{ObsDiam}} \nolimits_{X}(X_n;-\kappa)=0$ for any $\kappa>0$ and any L\'{e}vy family $\{X_n\}_{n=1}^{\infty}$. Note that $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds satisfy the condition ($\lozenge$) (see Section 3). \begin{conj} Any complete Riemannian manifold satisfies the condition ($\lozenge$). \end{conj} Let a topological group $G$ act on a metric space $X$. We say that the topological group $G$ acts on $X$ \emph{by uniform isomorphisms} if for each $g\in G$, the map $X\ni x\mapsto gx\in X$ is uniformly continuous. The action is said to be \emph{uniformly equicontinuous} if for any $\varepsilon > 0$ there exists $\delta>0$ such that $\mathop{\mathit{d}} \nolimits_X(gx,gy)< \varepsilon$ for every $g\in G$ and $x,y\in X$ with $\mathop{\mathit{d}} \nolimits_X(x,y)< \delta$. Given a subset $S\subseteq G$ and $x\in X$, we put $S x:=\{gx \mid g\in S\}$. \begin{prop}\label{th2}Assume that a L\'{e}vy group $G$ boundedly acts on a metric space $X$ having the property ($\lozenge$) by uniform isomorphisms. Then for any compact subset $K\subseteq G$ and any $\varepsilon >0$, there exists a point $x_{\varepsilon, K}\in X$ such that $\mathop{\mathrm{diam}} \nolimits (K x_{\varepsilon,K})\leq \varepsilon$. \end{prop} \begin{prop}\label{th3}There are no non-trivial bounded uniformly equicontinuous actions of a L\'{e}vy group on a metric space having the property ($\lozenge$).
\end{prop} \begin{proof}[Proof of Propositions \ref{th2} and \ref{th3}]From the definition of $G$, the group $G$ contains an increasing chain of compact subgroups $G_1\subseteq G_2 \subseteq \cdots \subseteq G_n \subseteq \cdots $ having an everywhere dense union in $G$ such that for some right-invariant compatible distance function $\mathop{\mathit{d}} \nolimits_G$ on $G$, the sequence $\{(G_n,\mathop{\mathit{d}} \nolimits_G,\mu_{G_n})\}_{n=1}^{\infty}$ forms a L\'{e}vy family. Let $x\in X$ be an arbitrary point. We first prove Proposition \ref{th2}. Since $G$ boundedly acts on $X$ and $\mathop{\mathit{d}} \nolimits_G$ is right-invariant, by virtue of Lemma \ref{l3.2}, for any $\varepsilon >0$ there exists $\delta >0$ such that $\mathop{\mathit{d}} \nolimits_X(gy,g'y)<\varepsilon/2$ for any $y\in X$ and $g,g'\in G$ with $\mathop{\mathit{d}} \nolimits_G(g,g')\leq \delta$. Take a subset $\{g_1,g_2, \cdots, g_N\}\subseteq G$ such that each $g\in K$ is within distance $\delta$ of the set $\{ g_1,g_2, \cdots, g_N\}$ and all $g_i$ are contained in $G_\ell$ for some large $\ell\in \mathbb{N}$. Since the orbit map $f_x:G\to X$ is uniformly continuous, by using Corollary \ref{c3.2}, the sequence $\{(X,\mathop{\mathit{d}} \nolimits_X, \nu_{G_n,x}) \}_{n=1}^{\infty}$ is a L\'{e}vy family. From the property ($\lozenge$) of the space $X$, the identity maps $\mathop{\mathrm{id}} \nolimits_n: (X,\mathop{\mathit{d}} \nolimits_X,\nu_{G_n,x})\to X$ concentrate, that is, $\lim_{n\to \infty}\mathop{\mathrm{diam}} \nolimits (\nu_{G_n,x},1-\kappa)=0$ for any $\kappa >0$. Hence there exist $\varepsilon_n >0$ and $x_n \in X$ such that $\lim_{n\to \infty}\varepsilon_n =0$ and $\lim_{n\to \infty}\nu_{G_n,x}(B_X(x_n,\varepsilon_n))=1$. Take $n_0\in \mathbb{N}$ such that $\nu_{G_{n_0},x}(B_X(x_{n_0},\varepsilon_{n_0}))>1/2$ and $\max\{\varepsilon_{n_0}, \rho^{(\{g_1, g_2, \cdots, g_N\},X)}(\varepsilon_{n_0})\}<\varepsilon /4$. By the same method as in the proof of (\ref{s3.1}), we obtain \begin{align*} \mathop{\mathit{d}} \nolimits_X(x_{n_0}, g_ix_{n_0})\leq \varepsilon_{n_0}+ \rho^{(\{g_1,g_2, \cdots, g_N\},X)}(\varepsilon_{n_0})< \varepsilon /2 \end{align*}for any $g_i$. For any $g\in K$, choosing $g_i$ with $\mathop{\mathit{d}} \nolimits_G(g_i,g)<\delta$, we obtain \begin{align*} \mathop{\mathit{d}} \nolimits_X(x_{n_0}, gx_{n_0})\leq \mathop{\mathit{d}} \nolimits_X(x_{n_0},g_ix_{n_0}) + \mathop{\mathit{d}} \nolimits_X(g_i x_{n_0}, g x_{n_0})\leq \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon \end{align*}by the definition of $\delta>0$. This completes the proof of Proposition \ref{th2}. We next prove Proposition \ref{th3}. Since $\lim_{\eta \to 0}\rho^{(G,X)}(\eta)=0$, by using Corollary \ref{c3.1}, we get \begin{align*} \mathop{\mathrm{diam}} \nolimits (G_n x)\leq 2\lim_{\kappa \uparrow 1/2}\mathop{\mathrm{diam}} \nolimits (\nu_{G_n,x},1-\kappa)+ 2\rho^{(G,X)}\big(+\lim_{\kappa \uparrow 1/2}\mathop{\mathrm{diam}} \nolimits (\nu_{G_n ,x},1-\kappa)\big) \to 0 \end{align*}as $n\to \infty$. Since $G_1 x\subseteq G_2 x\subseteq \cdots \subseteq G_n x \subseteq G_{n+1}x \subseteq \cdots$, we therefore obtain $G_n x= \{ x\}$ for any $n\in \mathbb{N}$. Since $\bigcup_{n}G_n$ is dense in $G$ and the action is continuous, this yields $Gx=\{x\}$. This completes the proof of Proposition \ref{th3}. \end{proof} Note that every continuous action of a topological group on a compact metric space is bounded.
Since a compact metric space has the property ($\lozenge$) and a L\'{e}vy group $G$ contains an increasing chain of compact subgroups $G_n$ having an everywhere dense union, Proposition \ref{th2} thus recovers the fixed-point theorem of Gromov and Milman (\cite[Theorem 7.1]{milgro}). \begin{ack}\upshape The author would like to express his thanks to Professor Takashi Shioya for his valuable suggestions and assistance during the preparation of this paper. \end{ack}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Learning Curve Comparison with Sample-based Algorithms} \begin{figure}[H] \begin{centering} \includegraphics[width=0.3\columnwidth]{images/cifar10-valid-x-valid-results-500length-500avg}\includegraphics[width=0.3\columnwidth]{images/cifar100-x-valid-results-500length-500avg}\includegraphics[width=0.3\columnwidth]{images/ImageNet16-120-x-valid-results-500length-500avg} \par\end{centering} \caption{Comparison of CATCH with other sample-based algorithms on CIFAR-10 \cite{krizhevsky2009learning}, CIFAR-100 \cite{krizhevsky2009learning}, and ImageNet16-120 \cite{dong2020bench}.} \end{figure} We compare the learning curve of CATCH with those of other sample-based algorithms in Figure 1. Each curve plots the highest fully-trained validation accuracy the agent has seen by each search epoch, averaged over 500 trials. The shaded area shows the mean $\pm$ standard deviation among all trials at each search epoch. CATCH stands out among the compared algorithms with higher performance and lower variation on all three datasets (CIFAR-10, CIFAR-100, and ImageNet16-120). It is also on average an order of magnitude faster than the other algorithms at finding the best architectures they reach after 500 search epochs. On ImageNet16-120, none of the algorithms except CATCH could even identify the best architecture within 500 search epochs across all 500 trials. CATCH is also more stable, as indicated by its much lower variation compared with other algorithms. Its variance tends to shrink over time, while the R-EA and REINFORCE policies are almost as unstable as random search. Through this comparison, we further demonstrate the adaptation speed and stability of CATCH, along with its competence across various datasets and random seeds. \section{Encoder's Adaptation Result} \begin{figure}[t] \begin{centering} \includegraphics[width=0.3\columnwidth]{images/pca-pr0-z50-cRunning-running-3sets-freeze-v1} \includegraphics[width=0.3\columnwidth]{images/pca-pr10-z50-cRunning-running-3sets-freeze-v1} \includegraphics[width=0.3\columnwidth]{images/pca-pr50-z50-cRunning-running-3sets-freeze-v1} \par\end{centering} \caption{The encoder's adaptation process. It learns to distinguish different datasets throughout the learning process, and thus provides informed input to the controller and the evaluator.} \end{figure} Throughout the adaptation process, we hypothesize that the encoder can provide dataset-specific guidance to the controller and the evaluator. To test this hypothesis, we visualize the encoded latent context vector $z$ of each dataset through Principal Component Analysis, with the results presented in Figure 2. Each point is generated by randomly selecting and encoding 80\% of the network-reward pairs from the search history. We freeze the weights of the meta-trained controller and evaluator policies, and only allow gradient updates for the encoder. This operation eliminates influence from the changing controller and evaluator policies, and thus enables us to closely observe just the behavior of the encoder. When the encoder is first adapted to CIFAR-10, CIFAR-100, and ImageNet16-120, the generated context vectors are not distinguishable across the three datasets. However, after just 10 search epochs of adaptation, we can already identify a cluster of ImageNet16-120 context vectors. The clusters then quickly evolve as the encoder sees more architectures. By the 50th search epoch, we can see three distinctive clusters as a result of the encoder\textquoteright s fast adaptation towards the three datasets.
This observation is consistent with the results of NAS-Bench-201 \cite{dong2020bench}. In the original paper, the network-performance pairs have a higher correlation between CIFAR-10 and CIFAR-100 (0.968) than between CIFAR-10 and ImageNet16-120 (0.827). This correlation is also higher than the correlation between CIFAR-100 and ImageNet16-120 (0.91), which helps explain why the encoder takes more search epochs to distinguish CIFAR-10 from CIFAR-100. The results support our hypothesis, and show the encoder's capability to learn and express dataset-specific information effectively. \section{Ablation Study on the Evaluator} We also explored the effects of the evaluator by eliminating it from both the meta-training and adaptation phases, and its performance is presented in Figure \ref{t_graphs-ab-eva} (a)-(c). As the figure shows, the evaluator lifts the performance by a large margin, making it a crucial component in the search algorithm. Table \ref{t_evaluator} provides further information on the evaluator by comparing it with CATCH using ground truth as the evaluator (CATCH-GT). CATCH-GT is a hard-to-defeat baseline, but CATCH-meta manages to get very close to both it and the global maximum accuracy. \vspace*{-0.5 cm} \begin{figure}[t] \begin{centering} \subfloat[]{\includegraphics[scale=0.21]{images/c10_abl_eva} }\subfloat[]{\includegraphics[scale=0.21]{images/c100_abl_eva} }\subfloat[]{\includegraphics[scale=0.21]{images/imgnet_abl_eva} } \par\end{centering} \caption{Comparison of CATCH-meta and CATCH-sfs with CATCH-without-evaluator. Including the evaluator significantly raises the performance.} \label{t_graphs-ab-eva} \end{figure} \begin{table}[] \caption{Comparison of CATCH when using ground truth as the evaluator (CATCH-GT), CATCH without the evaluator (CATCH-w/o-evaluator), and CATCH-meta. The results are taken from 100 trials, where each trial contains 50 search epochs. We report the mean $\pm$ std for each setting in the table.} \centering{} \begin{tabular}{c|c|c|c} \hline & CIFAR-10 & CIFAR-100 & ImageNet16-120 \\ \hline CATCH-GT & 91.64$\pm$0.09 & 73.31$\pm$0.16 & 47.18$\pm$0.09 \tabularnewline CATCH-w/o-evaluator & 91.17$\pm$0.25 & 72.08$\pm$0.68 & 45.86$\pm$0.54 \tabularnewline CATCH-meta & 91.63$\pm$0.11 & 73.29$\pm$0.31 & 46.37$\pm$0.53 \tabularnewline Max Acc. & 91.719 & 73.45 & 47.19 \tabularnewline \hline \end{tabular} \label{t_evaluator} \end{table} \vspace*{-0.5 cm} \section{CATCHer Training Details} \subsection{Controller Settings and Hyperparameters} The controller is trained with the Proximal Policy Optimization (PPO) \cite{schulman2017proximal} algorithm, and its loss $\mathcal{L}_{c}$ is defined following the original PPO loss: \[ \mathcal{L}_{c}=\mathbb{\hat{E}}_{t}\left[\min\left(r_{t}\left(\theta_{c}\right)\hat{A}_{t},\mathrm{clip}\left(r_{t}\left(\theta_{c}\right),1-\epsilon,1+\epsilon\right)\hat{A}_{t}\right)\right] \] $\epsilon$ is the PPO clipping parameter, $r_{t}\left(\theta_{c}\right)=\frac{\pi_{\theta_{c}}\left(a_{t}|s_{t}\right)}{\pi_{\theta_{old}}\left(a_{t}|s_{t}\right)}$ is the probability ratio, and $\hat{A}_{t}$ is the Generalized Advantage Estimation (GAE) \cite{schulman2015high} estimate: \[ \hat{A}_{t}=\sum_{l=0}^{t}\left(\gamma\lambda\right)^{l}\delta_{l}^{V} \] where $\delta_{l}^{V}=r_{l}+\gamma V(s_{l+1})-V(s_{l})$ is the Bellman residual term. The definition of $s_{l}$ can be found in Table \ref{mdp2nas}.
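For concreteness, the following is a minimal PyTorch-style sketch of how the clipped surrogate objective and the GAE estimate above can be computed; the function names and tensor shapes are illustrative assumptions rather than our exact training code.
\begin{verbatim}
import torch

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    # rewards: tensor [T]; values: tensor [T+1], i.e. V(s_0)..V(s_T)
    # returns advantages [T], A_t = sum_l (gamma*lam)^l * delta_{t+l}
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        # Bellman residual: delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # clipped surrogate objective, negated for gradient descent
    ratio = torch.exp(log_probs - old_log_probs)   # r_t(theta_c)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
\end{verbatim}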
We show the training hyperparameters and our settings for translating architecture search elements into a Markov Decision Process (MDP) in the following tables. \begin{table} \caption{Controller hyperparameters} \centering{}% \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Hyperparameter} & Value & NAS-Bench-201 \cite{dong2020bench} & Residual Block\tabularnewline & (meta-train) & (adaptation) & Search Space (adaptation)\tabularnewline \hline Learning rate & 0.001 & 0.001 & 0.0001\tabularnewline Adam scheduler step size & 20 & 20 & 20\tabularnewline Adam scheduler gamma & 0.99 & 0.99 & 0.99\tabularnewline Update frequency & 1 epoch & 1 epoch & 1 epoch\tabularnewline Clipping parameter $\epsilon$ & 0.2 & 0.2 & 0.2\tabularnewline Memory size & 200 & 200 & 200\tabularnewline Discount $\gamma$ & 0.99 & 0.99 & 0.99\tabularnewline GAE parameter $\lambda$ & 0.95 & 0.95 & 0.95\tabularnewline Value Function coeff. & 1 & 1 & 1\tabularnewline Entropy coeff. & 0.01 & 0.03 & 0.05\tabularnewline \hline \end{tabular} \end{table} \begin{table} \caption{A mapping of Neural Architecture Search elements to MDP factors for controller training. $l$ denotes the current timestep. Invalid actions are masked by zeroing out their probabilities in the outputs; the remaining probabilities are then passed through a softmax and sampled accordingly.} \begin{centering} \begin{tabular}{c|c|c} \hline MDP Factor & Value & Explanation\tabularnewline \hline Current state $s_{l}$ & ($z,$ $[a_{1}...a_{l-1}]$) & Latent context and the current network design.\tabularnewline Current action $a$ & $a_{l}$ & A one-hot vector of the current design choice.\tabularnewline Reward $r$ & $R$ & A function of the evaluated network's performance.\tabularnewline Next state $s_{l+1}$ & ($z,$ $[a_{1}...a_{l}]$) & Latent context and the current network design.\tabularnewline \hline \end{tabular} \par\end{centering} \label{mdp2nas} \end{table} \vspace{-2mm} \subsection{Encoder and Evaluator Settings} \begin{algorithm}[t] \begin{verbatim}
def encode_z(B, D, Contexts, Encoder):
    # Contexts: a batch of contexts {(m, r)} used for encoding
    # B: len(Contexts), the batch size
    # D: the dimension of the latent context variable z
    # Encoder: 3-layer MLP mapping (m, r) to (mean, var) of z_i

    # normalize rewards, then encode each (m, r) to (mean, var) of z
    Contexts.rewards = normalize(Contexts.rewards)
    params = Encoder.forward(Contexts)            # shape: [B, 2*D]

    # split into means and (softplus-positive) variances;
    # t(): matrix transpose
    means = params[..., :D].t()                   # shape: [D, B]
    vars = F.softplus(params[..., D:].t())        # shape: [D, B]

    # combine the B factors of each latent dim z_i;
    # ds: torch.distributions, unbind: torch.unbind
    posteriors = []
    for ms, vs in zip(unbind(means), unbind(vars)):
        z_i_mean, z_i_var = _product_of_gaussian(ms, vs)
        # form a Gaussian posterior from z_i_mean, sqrt(z_i_var)
        z_i_posterior = ds.Normal(z_i_mean, sqrt(z_i_var))
        posteriors.append(z_i_posterior)

    # sample z from q(z|Contexts); rsample(): reparameterized sample
    z = [d.rsample() for d in posteriors]
    return torch.stack(z)
\end{verbatim} \label{algo} \caption{Pseudocode of Latent Context Encoding Procedure in a PyTorch-like style.} \end{algorithm} The encoder generates the latent context from the network-reward information $(m, r)$. This is done by taking the encoder output as the means and variances of a $D$-dimensional Gaussian distribution, from which we sample $\boldsymbol{z}$. We provide pseudocode for this process in Algorithm 1.
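The invalid-action masking described in the caption of Table \ref{mdp2nas} is commonly implemented by pushing the logits of invalid actions to $-\infty$ before the softmax, which gives them zero probability. Below is a minimal runnable sketch of this step; the function name and shapes are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def sample_masked_action(logits, valid_mask):
    # logits: tensor [num_actions] from the controller MLP
    # valid_mask: bool tensor [num_actions], True for valid actions
    # setting invalid logits to -inf gives them zero probability
    # after the softmax, so only valid actions can be sampled
    masked_logits = logits.masked_fill(~valid_mask, float("-inf"))
    probs = F.softmax(masked_logits, dim=-1)
    action = torch.multinomial(probs, num_samples=1)
    return action.item(), probs
\end{verbatim}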
\begin{table} \caption{Encoder hyperparameters} \begin{centering} \begin{tabular}{c|c} \hline Hyperparameter & Value\tabularnewline \hline Learning rate & 0.01\tabularnewline Dimension of $z$ & 10\tabularnewline KL weight $\beta$ & 0.1\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} The evaluator uses the Huber loss \cite{huber1992robust} to close the gap between its predicted network performance $\tilde{r}$ and the actual performance $r$: \begin{equation} \mathcal{L}_{e}=\frac{1}{n}\sum_{i}loss(r_{i},\tilde{r}_{i}),\text{ where }loss(r_{i},\tilde{r}_{i})=\begin{cases} 0.5(r_{i}-\tilde{r}_{i})^{2} & \text{if }\left|r_{i}-\tilde{r}_{i}\right|<1,\\ \left|r_{i}-\tilde{r}_{i}\right|-0.5 & \text{otherwise.} \end{cases} \end{equation} \begin{table}[H] \caption{Evaluator hyperparameters} \begin{centering} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Hyperparameter} & Value & Value\tabularnewline & (meta-train) & (adaptation)\tabularnewline \hline Learning rate & 0.0001 & 0.0001\tabularnewline Exploration factor $\epsilon$ initial value & 1.0 & 0.5\tabularnewline Exploration factor $\epsilon$ decay rate & 0.025 & 0.025\tabularnewline Exploration factor $\epsilon$ decay step & 20 & 20\tabularnewline Number of networks evaluated per epoch & 25 & 25\tabularnewline PER \cite{schaul2015prioritized} prioritization factor $\alpha$ & 0.5 & 0.5\tabularnewline PER bias correction factor $\beta$ & 0.575 & 0.575\tabularnewline PER $\beta$ annealing step size & 0.01 & 0.01\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \vspace*{-0.9 cm} \section{ImageNet, COCO, and Cityscapes Training Settings} Tables 6--8 show our training configurations on ImageNet \cite{deng2009imagenet}, COCO \cite{lin2014microsoft}, and Cityscapes \cite{Cordts2016Cityscapes}. On COCO, Faster R-CNN with the ResNet backbone and Cascade FPN is used as our baseline. It is extremely costly to perform ImageNet pretraining for search, but training detection networks without ImageNet pretraining was made possible by \cite{DBLP:journals/corr/abs-1811-08883}. For COCO and Cityscapes, we use Group Normalization with halved-base-channel groups instead of Batch Normalization. Conv2D with weight standardization (ConvWS2D) is also applied. \begin{table}[H] \caption{ImageNet training hyperparameters with 8 GPUs.} \begin{centering} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Hyperparameter} & Value & Value\tabularnewline & (partial-train) & (fully-train)\tabularnewline \hline Learning rate & 0.1 & 0.1\tabularnewline Learning rate momentum & 0.9 & 0.9\tabularnewline Weight decay & $1\times10^{-3}$ & $4\times10^{-5}$\tabularnewline Learning rate warmup & linear for 3 epochs & linear for 3 epochs\tabularnewline Learning rate decay policy & cosine & cosine\tabularnewline Total epoch & 40 & 240\tabularnewline Batch size & 1024 & 512\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \begin{table}[H] \vspace{-2mm} \caption{COCO training hyperparameters with 8 GPUs.
} \begin{centering} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Hyperparameters} & Value & Value\tabularnewline & (partial-train) & (fully-train)\tabularnewline \hline Normalization & Group Normalization & Batch Normalization\tabularnewline Batch size & 16 & 16\tabularnewline Learning rate & 0.18 & 0.02\tabularnewline Learning rate momentum & 0.9 & 0.9\tabularnewline Weight decay & 0.0001 & 0.0001\tabularnewline Learning rate decay policy & cosine & step\tabularnewline Total epoch & 9 & 24\tabularnewline \hline \end{tabular} \par\end{centering} \vspace{-2mm} \end{table} \begin{table}[H] \vspace{-2mm} \caption{Cityscapes training hyperparameters with 8 GPUs.} \begin{centering} \begin{tabular}{c|c|c} \hline \multirow{2}{*}{Hyperparameters} & Value & Value\tabularnewline & (partial-train) & (fully-train)\tabularnewline \hline Baseline model & BiSeNet \cite{Yu_2018_ECCV} & BiSeNet\tabularnewline Convolution & ConvWS2D & Conv2D\tabularnewline Normalization & Group Normalization & Synchronized BN\tabularnewline Batch size & 32 & 16\tabularnewline Learning rate & 0.02 & 0.025\tabularnewline Learning rate momentum & 0.9 & 0.9\tabularnewline Weight decay & $5\times10^{-4}$ & $1\times10^{-4}$ \tabularnewline Learning rate warmup & linear for 5 epochs & linear for 5 epochs\tabularnewline Learning rate decay policy & cosine & polynomial\tabularnewline Total epoch & 40 & 100\tabularnewline \hline \end{tabular} \par\end{centering} \vspace{-2mm} \end{table} \section{Searched Models of Residual Block Search Space} We show an example model in our Residual Block search space in Figure 3. It consists of 5 stages, with depth=15, stage distribution={[}3,3,4,5{]}, and channel distribution={[}2,2,4,7{]}. We use the same notation format to show the searched models in Table 9. \begin{figure}[H] \begin{centering} \includegraphics[height=0.26\columnwidth]{images/resnet.pdf} \par\end{centering} \caption{An example model in the Residual Block search space following \cite{yao2019sm,DBLP:journals/corr/abs-1906-04423}. C-N-R stands for a combination of Convolution layer, Normalization layer, and a ReLU operation.} \end{figure} \vspace*{-1.3 cm} \begin{table}[H] \caption{Searched models in Residual Block search space.} \begin{centering} \begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Searched Model} & Input & \multirow{2}{*}{Depth} & Stage & Channel & \multirow{2}{*}{FLOPS(G)} & \multirow{2}{*}{Params(MB)}\tabularnewline & Channel & & Distribution & Distribution & & \tabularnewline \hline CATCH-Net-A & 64 & 20 & {[}2, 7, 8, 3{]} & {[}5, 4, 8, 3{]} & 4.45 & 25.96\tabularnewline CATCH-Net-B & 64 & 25 & {[}8, 5, 8, 4{]} & {[}3, 10, 8, 4{]} & 9.84 & 32.16\tabularnewline CATCH-Net-C & 64 & 20 & {[}5, 4, 5, 6{]} & {[}1, 8, 5, 6{]} & 8.08 & 37.03\tabularnewline CATCH-Net-D & 64 & 20 & {[}1, 8, 5, 6{]} & {[}2, 7, 7, 4{]} & 4.46 & 30.98\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \bibliographystyle{plain} \section{Introduction} The emergence of many high-performance neural networks has been one of the pivotal forces pushing forward the progress of deep learning research and production. Recently, many neural networks discovered by Neural Architecture Search (NAS) methods have surpassed manually designed ones on a variety of domains including image classification \cite{tan2019efficientnet,zoph2018learning}, object detection \cite{zoph2018learning}, semantic segmentation \cite{chen2018searching}, and recommendation systems \cite{liu2020autofis}. 
Many potential applications of practical interest are calling for solutions that can (1) efficiently handle a myriad of tasks, (2) be widely applicable to different search spaces, and (3) maintain their levels of competency across various settings. We believe these are important yet somewhat neglected aspects in past research, and a transformative NAS algorithm should be able to respond to these needs to make a real impact. \begin{figure}[t] \begin{centering} \includegraphics[width=1\columnwidth]{images/figure1} \par\end{centering} \caption{Upper: drawbacks of current NAS schemes. Lower: the overall framework of CATCH. Our search agent, CATCHer, consists of three core components: context encoder, RL controller and network evaluator. CATCHer first goes through the meta-training phase to learn an initial search policy, then it adapts to target tasks efficiently.} \label{overall} \end{figure} Many algorithms \cite{liu2018darts,pham2018efficient} have been proposed to improve the efficiency of NAS. However, they lack mechanisms to seek and preserve information that can be meaningfully reused. Hence, these algorithms can only repeatedly and inefficiently search from scratch when encountering new tasks. To tackle this problem, a rising direction of NAS attempts to create efficient transferrable algorithms. Several works \cite{kim2018auto,pasunuru2019continual} try to search for architectures that perform well across tasks, but the solutions may not be optimal on the target tasks, especially when the target task distributions are distant from the training task distributions. Some recent works \cite{Lian2020Towards,elsken2019meta} use meta-learning \cite{finn2017model,li2017meta} for one-shot NAS instead. With recent critiques \cite{Yang2020NAS,li2019random} pointing out some one-shot solutions\textquoteright{} dependence on particular search spaces and sensitivity to hyperparameters, many concerns arise about the practicality of these meta NAS works based on one-shot methods. To avoid ambiguity, throughout this paper, \emph{tasks} are defined as problems that share the same action space, but differ in reward functions. In NAS, the change of either the dataset or domain (e.g. from classification to detection) alters the underlying reward function, and thus can be treated as a different task. Striking a balance between universality and efficiency is hard. Solving the universality problem requires a policy that is disentangled from the specifics of search spaces, which uproots an important foundation of many efficient algorithms. The aim to improve efficiency on multiple tasks naturally links us to a transfer/meta-learning paradigm. Meta Reinforcement Learning (RL) \cite{rakelly2019efficient,lan2019meta} offers a solution to achieving both efficiency and universality, which largely inspired our proposal of CATCH, a novel context-guided meta reinforcement learning framework that is both search space-agnostic and swiftly adaptive to new tasks. The search agent in our framework, namely CATCHer, acts as the decision-maker to quickly \say{catch} top-performing networks on a task. As is shown in Figure \ref{overall}, it is first trained on a set of meta-training tasks and then deployed to target tasks for fast adaptation. CATCHer leverages three core components: context encoder, RL controller, and network evaluator.
The context encoder adopts an amortized variational inference approach \cite{alemi2016deep,rakelly2019efficient,kingma2013auto} to encode task properties into latent context variables that guide the controller and evaluator. The RL controller makes sequential decisions to generate candidate networks in a stochastic manner. The network evaluator predicts the performance of the candidate networks and decides which nets are valuable for training. All three components are optimized in an end-to-end manner. We test the method's universality and adaptation efficiency on two fundamentally different search spaces: the cell-based search space \cite{dong2020bench} and the Residual block-based \cite{he2016deep,yao2019sm} search space. The former focuses on cell structure design, while the latter targets macro skeleton search. With NAS-Bench-201 \cite{dong2020bench}, we can compare CATCH fairly with other algorithms by eliminating performance fluctuations arising from different search spaces and training settings. Our experiments demonstrate CATCH's superiority over various other works, including R-EA \cite{real2019regularized} and DARTS \cite{liu2018darts}. On the Residual block-based search space, we use image classification tasks on sub-datasets of ImageNet \cite{deng2009imagenet} as meta-training tasks, and then adapt the CATCHer to target tasks, such as image classification on full ImageNet, object detection on COCO \cite{lin2014microsoft}, and semantic segmentation on Cityscapes \cite{Cordts2016Cityscapes}. CATCH discovered networks on these tasks with competitive performance and inference latency. Our results demonstrate CATCH\textquoteright s robustness across various settings, easing previously raised concerns about NAS algorithms\textquoteright{} sensitivity to search spaces and random seeds, and their tendency to overfit to only one or two reported tasks. Our key contribution is the first attempt to design an efficient and universal transferrable NAS framework. It swiftly handles various tasks through fast adaptation, and robustly maintains competitive performance across different settings. Our work brings along new perspectives on solving NAS problems, including using amortized variational inference to generate task characteristics that inform network designs. It also demonstrates the possibility of creating efficient sample-based NAS solutions that are comparable with widely-recognized one-shot methods. With competitive networks identified across classification, detection, and segmentation domains, it further opens the investigation into the feasibility of cross-domain architecture search. \section{Related Work} NAS is an algorithmic approach to design neural networks through searching over candidate architectures. Many works harness the power of Reinforcement Learning (RL) \cite{zoph2016neural}, Bayesian Optimization \cite{bergstra2013making,bergstra2011algorithms}, Evolutionary Algorithms \cite{elsken2018efficient,real2019aging}, and Monte Carlo Tree Search \cite{negrinho2017deeparchitect,wistuba2017finding}. The field gradually gained traction with the emergence of highly-efficient algorithms \cite{liu2018darts,pham2018efficient,real2019aging} and architectures \cite{real2019regularized,tan2019efficientnet} with remarkable performance. Our method is inspired by PEARL \cite{rakelly2019efficient}, a recent work in context-based meta reinforcement learning, which captures knowledge about a task with probabilistic latent contexts. The knowledge is then leveraged for informed policy training.
There are a few key challenges in efficiently applying it to NAS: (1) PEARL models the latent context embeddings of RL tasks as distributions over Markov Decision Processes (MDP), but it is less clear how a task in NAS can be meaningfully encoded. (2) RL is notorious for its sample inefficiency, and obtaining reward signals in NAS is extremely expensive. We address these challenges by (1) proposing the use of network-reward pairs to represent a task, and (2) introducing meta-training tasks that can be cheaply evaluated to obtain more data for learning, and including a network evaluator that acts like a Q-learning agent to speed up learning. Previous works also explored the possibility of using meta-learning for NAS. Some \cite{kim2018auto,pasunuru2019continual} aimed to identify a single architecture that simultaneously works well on all considered tasks. These solutions may not be scalable when confronting a large pool of target tasks. An early work \cite{wong2018transfer} aimed to learn a general policy across tasks. However, it generates task embeddings from images, which may fail at datasets with the same images, and is unable to differentiate among classification, detection, and segmentation tasks on the same dataset. A few recent papers \cite{Lian2020Towards,elsken2019meta} combined gradient-based meta-learning with DARTS, but the algorithms are only applicable to search spaces compatible with DARTS. Additionally, none of the above proposals reported their performance on large-scale tasks like the full ImageNet dataset. This leaves questions about these proposals' generalizability and adaptation efficiency on more challenging datasets, where scientists expect meta-NAS algorithms to have an edge over typical NAS methods. CATCH is the first NAS algorithm to our knowledge that deploys meta-learning while maintaining universality, robustness across different search spaces, and the capability to handle large-scale tasks. \section{CATCH Framework\label{sec:Methods}} In NAS, the change of dataset (e.g. CIFAR-10 vs. ImageNet) or domain (e.g. image classification vs. object detection) essentially indicates a shift of the underlying reward distribution. The goal of a cross-task transfer algorithm is hence to quickly identify the best actions under the changed reward dynamics. To handle this challenge, the CATCH framework consists of two phases: a meta-training phase and an adaptation phase, as is presented in Algorithm \ref{CATCH}. In the meta-training phase, we train the CATCHer on a pool of meta-training tasks that can be cheaply evaluated. A key goal of this phase is to present the context encoder with sufficiently diversified tasks, and encourage it to consistently encode meaningful information for different tasks. Meanwhile, both the controller and the evaluator may gain a good initialization for adaptation. In the adaptation phase, the meta-trained CATCHer then learns to find networks on the target task efficiently through the guidance of the latent context encoding. We show the search procedure on any single task in Figure \ref{procedure}, which corresponds to lines 3-13 of Algorithm \ref{CATCH}. \begin{figure}[t] \begin{centering} \includegraphics[width=1\columnwidth]{images/figure2} \par\end{centering} \caption{The search procedure of CATCH on a given task. The procedure starts from initializing the search history by storing a randomly selected network $m$ and its reward $r$.
The encoder applies an amortized variational inference approach to generate the latent context encoding $\boldsymbol{z}$ by encoding network-reward pairs from the search history. The controller then generates candidate networks for the evaluator to choose the most promising ones to train and evaluate. Newly selected networks and their rewards will be stored in the search history. The loop continues after the three components are optimized.} \label{procedure} \end{figure} \subsection{Context Encoding} The use of latent context encoding is a crucial part of CATCH. The question is what information about the task can reliably be used to construct such latent contexts. Directly extracting feature maps of images of the dataset is an intuitive solution. However, for the same dataset, the best network configurations to perform different tasks like object detection and semantic segmentation may differ a lot. Hence, simply extracting information directly from images may not be a viable approach. We instead believe that the task-specific contextual knowledge can be mined from the search history (i.e. sets of network-reward pairs). If the same group of networks have similar relative strengths on two tasks, it might mean these tasks are \say{close} to each other. This view also helps break the barriers for cross-task architecture search, since network-reward information is universal across tasks. Before searching on a task, we randomly form a few networks $m$ and evaluate their performance $r$; the retrieved network-reward pairs are stored in the search history for its initialization. To start the search, we sample a number of network-reward pairs $\{(m,r)_{i}\}_{1}^{N}$ (denoted by $\boldsymbol{c}_{1:N}$ for simplicity) from the search history, which will be fed into the encoder to generate a latent context vector $\boldsymbol{z}$ representing the salient knowledge about the task. We model the latent context encoding process in a probabilistic manner, because it allows the context encoder to model a distribution over tasks and conduct exploration via posterior sampling. Following the amortized variational inference approach used in \cite{rakelly2019efficient,alemi2016deep,kingma2013auto}, we aim to estimate the posterior $p(\boldsymbol{z}|\boldsymbol{c}_{1:N})$ with the encoder $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{1:N})$, parametrized by $\phi$. We assume the prior $p(\boldsymbol{z})$ is a unit multivariate Gaussian distribution with diagonal covariance matrix $\mathcal{N}(\boldsymbol{0},diag(\boldsymbol{1}))$, and hence the posterior $p(\boldsymbol{z}|\boldsymbol{c})$ conditioned on $\boldsymbol{c}$ is Gaussian. Since the network-reward pairs $\boldsymbol{c}_{1:N}$ are independent given a task, we can factor $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{1:N})$ into a product of Gaussian factors conditioned on each piece of context $\boldsymbol{c}_{i}$, \begin{equation} q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{1:N})\propto\prod_{i=1}^{N}\mathcal{N}(f_{\phi}^{\tilde{\mu}}(\boldsymbol{c}_{i}),diag(f_{\phi}^{\tilde{\sigma}}(\boldsymbol{c}_{i}))), \end{equation} where $f_{\phi}$ is an inference network parametrized by $\phi$, which predicts the mean $\tilde{\boldsymbol{\mu}}_i$ and the standard deviation $\tilde{\boldsymbol{\sigma}}_i$ of $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{i})$ as a function of $\boldsymbol{c}_{i}$ to approximate the Gaussian $p(\boldsymbol{z}|\boldsymbol{c}_i)$.
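The product of Gaussian factors in Eq. (1) (the \texttt{\_product\_of\_gaussian} helper in the encoding pseudocode of the Appendix) has a closed form: the combined precision is the sum of the factor precisions. A minimal per-dimension sketch, under the diagonal-Gaussian assumption and with illustrative names:
\begin{verbatim}
import torch

def product_of_gaussians(mus, sigmas_squared, eps=1e-7):
    # mus, sigmas_squared: tensors [N], the N Gaussian factors of
    # one latent dimension z_i. Their product is Gaussian with
    # precision equal to the sum of the factor precisions.
    precisions = 1.0 / torch.clamp(sigmas_squared, min=eps)
    var = 1.0 / precisions.sum()            # combined variance
    mu = var * (precisions * mus).sum()     # precision-weighted mean
    return mu, var
\end{verbatim}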
During the forward pass, the encoder network $f_\phi$ outputs $\tilde{\boldsymbol{\mu}}_i$, $\tilde{\boldsymbol{\sigma}}_i$ of the Gaussian posterior $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{i})$ conditioned on each context, and then we take their product $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{1:N})$. Each context $\boldsymbol{c}_i$ is $(m,r)_i$, where $r$ is normalized among $\{r\}_{1:N}$ to reflect the relative advantage of each network. All the network-reward pairs in the search history are utilized. We then sample $\boldsymbol{z}$ from $q_{\phi}(\boldsymbol{z}|\boldsymbol{c}_{1:N})$. Further implementation details can be found in the Appendix. \subsection{Network Sampling} The generation of a network can be treated as a decision-making problem, where each of the RL controller's actions determines one attribute of the resulting architecture. The attribute can be an operation type forming a certain edge in a cell-based search (e.g. skip-connect, convolution operations, etc.), or the shape of a network in a macro-skeleton search (e.g. width, depth, etc.). Both ways are explored in our work. A network, denoted by $m$, is represented as a list of actions $[a_{1},a_{2},...,a_{L}]$ taken by the controller in a sequential manner. At each time step $l$, the controller makes a decision $a_l$ according to its policy $\pi_{\theta_{c}}$, parametrized by $\theta_{c}$. The controller policy takes $\boldsymbol{z}$ and the previous actions $[a_{1}...a_{l-1},\boldsymbol{0},...,\boldsymbol{0}]$ as inputs, and outputs the probability distribution of choosing a certain action $\pi_{\theta_{c}}(a_{l}|[a_{1}...a_{l-1},\boldsymbol{0},...,\boldsymbol{0}],\boldsymbol{z})$, from which the actions are sampled (a sketch of this sequential sampling is given below). $\boldsymbol{z}$ is the latent context vector generated by the encoder, and $[a_{1}...a_{l-1},\boldsymbol{0},...,\boldsymbol{0}]$ is a collection of one-hot vectors indicating all the actions taken so far at the $l$-th timestep, leaving untaken actions $[a_{l},...,a_{L}]$ as zero vectors. The reward for each action is the normalized performance score of the network. The controller samples $M$ networks stochastically as candidates for the network evaluator. \subsection{Network Scoring and Evaluation} Since the candidate networks are sampled stochastically by the controller, it is almost inevitable that some inferior models will be generated. We set up a filtering mechanism, namely the network evaluator, which acts like a Q-learning agent that predicts the actual performance of each network and selects the top one for training. The predicted value is not necessarily an accurate prediction of the training performance, but should be able to provide a ranking among candidate models roughly similar to their true performance. The evaluator $f_{\theta_{e}}(m,\boldsymbol{z})$ is parameterized by $\theta_{e}$. It takes $M$ tuples of network-context pairs $(m,\boldsymbol{z})$ as inputs, and outputs the predicted performance of the input architectures. The network with the highest predicted performance score will be trained to obtain the true reward $r$. The network-context-reward tuple $(m,\boldsymbol{z},r)$ is then stored in the evaluator's local memory for future gradient updates.
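To make the sequential sampling in the Network Sampling subsection concrete, the following is a minimal sketch of a controller policy. The class name, layer sizes, and a single shared action space per step are our own simplifying assumptions (in practice different steps can expose different numbers of valid actions, handled by the masking discussed in the Appendix), so this is an illustration rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ControllerPolicy(nn.Module):
    # MLP mapping (z, one-hot action history) to action logits
    def __init__(self, z_dim, num_steps, num_actions, hidden=64):
        super().__init__()
        in_dim = z_dim + num_steps * num_actions
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions))
        self.num_steps = num_steps
        self.num_actions = num_actions

    def sample_network(self, z):
        # sequentially sample [a_1, ..., a_L]; untaken action
        # slots remain zero vectors, as described above
        history = torch.zeros(self.num_steps, self.num_actions)
        actions = []
        for l in range(self.num_steps):
            state = torch.cat([z, history.flatten()])
            probs = F.softmax(self.net(state), dim=-1)
            a = torch.multinomial(probs, num_samples=1).item()
            history[l, a] = 1.0     # record one-hot a_l
            actions.append(a)
        return actions
\end{verbatim}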
\begin{algorithm}[t] \begin{algorithmic}[1] \global\long\def\algorithmicrequire{\textbf{Inputs:}}% \REQUIRE $\{\mathcal{T}_{meta}\}$ (meta-training task pool), $\{\mathcal{T}_{target}\}$ (target task pool), $N_{meta}$ (\# of meta epochs), $N_{search}$ (\# of search epochs), $C$ (\# of contexts to sample), $M$ (\# of models to sample) \textbf{Meta-training Phase:} \FOR{$N_{meta}$ meta epochs} \STATE Select meta-training task $\mathcal{T}$ from $\{\mathcal{T}_{meta}\}$ \STATE Initialize SearchHistory \FOR{$n=1$ to $N_{search}$} \STATE $\{(m,r)_{i}\}_{1}^{C}=$ SearchHistory.sample\_contexts($C$) \STATE$\boldsymbol{z}=\ $Encoder.encode($\{(m,r)_{i}\}_{1}^{C}$) \STATE $\{m_{j}\}_{1}^{M}\leftarrow\ $Controller.sample\_networks($\boldsymbol{z}$, $M$) \STATE$m'\leftarrow\ $Evaluator.choose\_best($\{m_{j}\}_{1}^{M}$, $\boldsymbol{z}$) \STATE $r\leftarrow\ $train\_and\_evaluate($m',\mathcal{T}$) \STATE SearchHistory.add($(m',\boldsymbol{z},r)$) \STATE Encoder, Controller, Evaluator optimization \ENDFOR\ENDFOR \textbf{Adaptation Phase:} \STATE Select target task $\mathcal{T}$ from $\{\mathcal{T}_{target}\}$ \STATE\textbf{Repeat} Lines 3-13 \STATE BestModel $\leftarrow$ SearchHistory.best\_model()\RETURN BestModel\end{algorithmic} \caption{Context-based Meta Architecture Search (CATCH)} \label{CATCH} \end{algorithm} \subsection{Optimization of CATCHer} To optimize the controller policy, we maximize the expected reward for the task it is performed on. The controller is trained using Proximal Policy Optimization (PPO) \cite{schulman2017proximal} with a clipped surrogate objective $\mathcal{L}_{c}$. To optimize the evaluator, we deploy Prioritized Experience Replay (PER) \cite{schaul2015prioritized}, a Deep Q-learning \cite{mnih2013playing} optimization technique. During the update, it prompts the evaluator to prioritize sampling the entries that it makes the most mistakes on, and thus improves sample efficiency. The loss of the evaluator $\mathcal{L}_{e}$ is the Huber loss \cite{huber1992robust} between the evaluator's prediction $\tilde{r}$ and the normalized true performance score. Further details of $\mathcal{L}_{c}$ and $\mathcal{L}_{e}$ can be found in the Appendix. To optimize the encoder, we take $\mathcal{L}_{c}$ and $\mathcal{L}_{e}$ as part of the objective. The resulting variational lower bound for each task $\mathcal{T}$ is \begin{equation} \mathcal{L}=\mathbb{E}_{\boldsymbol{z}\sim q_{\phi}(\boldsymbol{z}|\boldsymbol{c}^{\mathcal{T}})}[\mathcal{L}_{c}+\mathcal{L}_{e}+\beta D_{KL}(q_{\phi}(\boldsymbol{z}|\boldsymbol{c}^{\mathcal{T}})||p(\boldsymbol{z}))], \label{loss} \end{equation} where $D_{KL}$ serves as an approximation to a variational information bottleneck that constrains the mutual information between $\boldsymbol{z}$ and $\boldsymbol{c}$, as is shown in \cite{alemi2016deep,rakelly2019efficient}. This information bottleneck acts as a regularizer to avoid overfitting to training tasks. $\beta$ is the weight of $D_{KL}$ in the objective, and $p(\boldsymbol{z})$ is a unit Gaussian prior. Since (1) the latent context $\boldsymbol{z}$ serves as input to both the controller and the evaluator, and (2) $q_{\phi}(\boldsymbol{z}|\boldsymbol{c})$ and $p(\boldsymbol{z})$ are Gaussian, with $D_{KL}$ computed using their means and variances, the gradient of Eq. \ref{loss} can be back-propagated end-to-end to the encoder with the reparameterization trick.
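A minimal sketch of how the objective in Eq. \ref{loss} can be assembled for one task is given below; it assumes the encoder's outputs have already been combined into a single diagonal-Gaussian posterior, and all names are illustrative rather than our exact implementation.
\begin{verbatim}
import torch
import torch.distributions as ds

def encoder_objective(posterior, loss_c, loss_e, beta=0.1):
    # posterior: ds.Normal built from the encoder's combined
    # mean/std, i.e. the diagonal Gaussian q_phi(z|c);
    # loss_c, loss_e: controller and evaluator losses computed
    # with z = posterior.rsample(), so the reparameterization
    # trick lets gradients flow back into the encoder
    prior = ds.Normal(torch.zeros_like(posterior.mean),
                      torch.ones_like(posterior.stddev))
    kl = ds.kl_divergence(posterior, prior).sum()
    return loss_c + loss_e + beta * kl
\end{verbatim}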
\section{Experiments\label{sec:Experiments}} \begin{figure}[t] \begin{centering} \subfloat[CIFAR-10]{\includegraphics[width=0.33\columnwidth]{images/c10_t} }\subfloat[CIFAR-100]{\includegraphics[width=0.33\columnwidth]{images/c100_t} }\subfloat[ImageNet16-120]{\includegraphics[width=0.33\columnwidth]{images/imgnet_t} } \par\end{centering} \caption{(a)-(c) show the results of 500 trials for CATCH-meta, CATCH-sfs (search from scratch) and other sample-based algorithms. Each individual trial is sorted by the final validation accuracy of the searched network.} \label{t_graphs} \end{figure} \subsection{Implementation Details} We use Multi-layer Perceptrons (MLP) as the controller policy network to generate the probability of choosing a certain action. The parameters $\theta_{c}$ of the controller are trained on-policy via the PPO algorithm. We mask invalid actions by zeroing out their probabilities in the controller\textquoteright s outputs; the remaining probabilities are then passed through a softmax and actions are sampled accordingly. The evaluator is an MLP that generates the predicted score of a network. In the meta-training phase, we reset $\epsilon$ in the $\epsilon$-greedy exploration strategy each time the agent initializes a new task. We sample 80\% of the entries as a batch from the replay buffer using PER. The encoder MLP outputs a 10-dim latent context vector $\boldsymbol{z}$, and the weight of the KL-divergence $\beta$ in the combined loss is set to 0.1. More details of the components' hyperparameters can be found in the Appendix. \subsection{Benchmark on NAS-Bench-201} \begin{figure}[t] \begin{centering} \par\end{centering} \begin{centering} \subfloat[CIFAR-10]{\includegraphics[height=0.26\columnwidth]{images/CIFAR-10--None.pdf} }\subfloat[CIFAR-100]{\includegraphics[height=0.26\columnwidth]{images/CIFAR-100--None.pdf} }\subfloat[ImageNet16-120]{\includegraphics[height=0.26\columnwidth]{images/ImageNet16-120--None.pdf} } \par\end{centering} \caption{Learning curves of one-shot algorithms and CATCH. Each curve is an average of three runs. We plot the first 100 search epochs for all algorithms except DARTS, which is trained only for 50 search epochs.} \label{one-shot-lc} \end{figure} As recent work \cite{Yang2020NAS} indicated, NAS algorithms are usually compared unfairly under different settings. To mitigate such problems, we first tested CATCH on NAS-Bench-201. It is a benchmark dataset that enables fair comparisons among NAS methods under the same configurations. It supports searching over cell-based architectures, where a directed acyclic graph represents each cell, with 4 nodes and 5 possible connection operations on each edge. It provides the validation and test accuracies of 15,625 architectures on the CIFAR-10, CIFAR-100, and ImageNet16-120 datasets. ImageNet16-120 is a sub-dataset of ImageNet in which all images are downsampled to $16\times16$, and it contains only the first 120 classes of ImageNet. \subsubsection{Experiment Settings.} In the meta-training phase, each task is formed as a classification task on an $X$-class sub-dataset of ImageNet16 (ImageNet downsampled to $16\times16$) to maintain consistency with the configurations in NAS-Bench-201. The number of classes $X\in\{10,20,30\}$. In each meta-epoch, the agent searches 20 networks whose validation accuracies after 12 training epochs are used as the reward signals. The hyperparameters used for training the networks in both phases are identical to those in NAS-Bench-201.
In the following experiments, CATCH-meta is meta-trained with 25 meta epochs for 10.5 GPU hours on a Tesla V100. We apply the same configurations as those in NAS-Bench-201. \subsubsection{Comparison with Sample-based Algorithms.} We display the search results of the meta-trained version (CATCH-meta) and the search-from-scratch version (CATCH-sfs, where the meta-training phase is skipped) of our method, and compare them with other sample-based algorithms: Random Search (RS) \cite{bergstra2012random}, Regularized Evolution Algorithm (R-EA) \cite{real2019regularized}, and REINFORCE \cite{williams1992simple}. The results of the other methods are reproduced by running the code and configurations originally provided by NAS-Bench-201. Each experiment is repeated for 500 trials with different seeds. The algorithms are trained for 50 search epochs in each trial. Figure \ref{t_graphs} presents the search results on CIFAR-10, CIFAR-100, and ImageNet16-120, with the highest validation accuracy on each task. The reproduced results are consistent with the experiments performed in NAS-Bench-201. The performance of CATCH-sfs is similar to the other four methods, but CATCH-meta dominates all other algorithms in the searched network accuracies. On CIFAR-10, CATCH-meta finds the best model in 280/500 trials. On CIFAR-100, over half of the trials find top-3 networks within 50 samples, while the other algorithms rarely come close. On ImageNet16-120, CATCH reaches the best network in more than 22\% of the trials. We can see tremendous benefits of using the meta-trained CATCH to reduce time and cost. \begin{table}[t] \caption{Comparison of CATCH with one-shot algorithms. The top accuracies of identified models, standard deviations, search time (hours), total search time (hours), and the highest validation accuracies among all the networks in NAS-Bench-201 are reported. The same three random seeds are used to run through each algorithm. The time budgets for searching on CIFAR-10, CIFAR-100, and ImageNet16-120 are 3, 4, and 5 hours, respectively.\label{tab:Comparison-one-shot}} \begin{centering} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Algorithm} & \multicolumn{2}{c|}{CIFAR-10} & \multicolumn{2}{c|}{CIFAR-100} & \multicolumn{2}{c|}{ImageNet16-120} & \multirow{2}{*}{Total Time}\tabularnewline \cline{2-7} & Acc$\pm$std & Time & Acc$\pm$std & Time & Acc$\pm$std & Time & \tabularnewline \hline DARTS-V1 \cite{liu2018darts} & 88.08$\pm$1.89 & 2.46 & 68.99$\pm$1.93 & 2.44 & 23.66$\pm$0 & 4.55 & 9.45\tabularnewline DARTS-V2 \cite{liu2018darts} & 87.16$\pm$0.39 & 9 & 65.06$\pm$2.95 & 7.91 & 26.29$\pm$0 & 22.14 & 39.05\tabularnewline GDAS \cite{dong2019searching} & 90.32$\pm$0.08 & 6 & 70.33$\pm$0.85 & 6.23 & 44.81$\pm$0.97 & 17 & 29.23\tabularnewline R-NAS \cite{li2019random} & 90.45$\pm$0.43 & 2.19 & 70.39$\pm$1.36 & 2.26 & 44.12$\pm$1.04 & 5.94 & 10.39\tabularnewline ENAS \cite{pham2018efficient} & 90.2$\pm$0.63 & 4.22 & 69.99$\pm$1.03 & 4.26 & 44.92$\pm$0.51 & 5.18 & 13.66\tabularnewline SETN \cite{dong2019one} & 90.26$\pm$0.75 & 7.62 & 68.01$\pm$0.21 & 7.74 & 41.04$\pm$1.64 & 20.33 & 35.69\tabularnewline CATCH-meta & \textbf{91.33$\pm$0.07} & \textbf{3} & \textbf{72.57$\pm$0.81} & \textbf{4} & \textbf{46.07$\pm$0.6} & \textbf{5} & 22.5\tabularnewline \hline Max Acc.
& \multicolumn{2}{c|}{91.719} & \multicolumn{2}{c|}{73.45} & \multicolumn{2}{c|}{47.19} & ---\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \subsubsection{Comparison with One-shot Algorithms.} One of the central controversies around meta-NAS algorithms is: given the high search efficiency of one-shot methods, can sample-based algorithms outperform them? We therefore compare the performance of CATCH with many state-of-the-art one-shot NAS solutions. For fair comparisons, instead of querying the NAS-Bench-201 network database, we train each child network for 12 epochs and obtain their early-stop validation accuracies as training feedback. The early-stop training setup is the same as the one in the meta-training phase. The one-shot algorithms involved are first-order DARTS (DARTS-V1) \cite{liu2018darts}, second-order DARTS (DARTS-V2), GDAS \cite{dong2019searching}, Random NAS (R-NAS) \cite{li2019random}, ENAS \cite{pham2018efficient}, and SETN \cite{dong2019one}. We run the algorithms with the original code and configurations released with NAS-Bench-201. DARTS-V1 and DARTS-V2 are run for 50 search epochs, and the other algorithms are trained for 250 search epochs. Figure \ref{one-shot-lc} presents the learning curves of each algorithm in the first 100 search epochs. For CATCH, at each search epoch, we identify the networks with the best partially trained accuracy found so far, and report their fully trained accuracies. Both DARTS and ENAS have a relatively strong performance at the beginning, but the curves drop significantly afterward. SETN closely resembles Random NAS. GDAS is among the best one-shot algorithms, but it seems to plateau at local maxima after a few search epochs. CATCH has the best performance among all, as it quickly adapts and identifies promising architectures that are beyond the other algorithms' search capacity. In Table \ref{tab:Comparison-one-shot}, we report the best fully trained accuracy of the networks that each algorithm identifies over its complete run. We set the time budget for CATCH to search on CIFAR-10, CIFAR-100, and ImageNet16-120 as 3, 4, and 5 hours. This is roughly equivalent to cutting the search on these tasks at 70, 50, and 40 search epochs, respectively. Although DARTS-V1, R-NAS, and ENAS spend less time in total, they are highly unstable, and the performance of DARTS and ENAS tends to deteriorate over time. CATCH spends 22.5 (10.5 meta + 12 adaptation) hours on all three tasks, and its searched networks surpass all other algorithms. The presented results prove that CATCH is swiftly adaptive, and that it is able to identify networks beyond many one-shot algorithms\textquoteright{} reach within a reasonable time. \subsection{Experiments on Residual Block-based Search Space} \begin{table}[t] \caption{Results on ImageNet compared to manually designed and NAS searched architectures.
Latency is measured on one Tesla V100 with one image with shape (3, 720, 1080).} \begin{centering} \begin{tabular}{c|c|c|c} \hline Network & Top-1 Acc (\%) & Top-5 Acc (\%) & Latency (ms)\tabularnewline \hline ResNet50 \cite{he2016deep} & 77.15 & 93.29 & 16.4\tabularnewline DenseNet201 \cite{huang2017densely} & 77.42 & 93.66 & 31.6\tabularnewline ResNext101 \cite{xie2017aggregated} & 79.31 & 94.5 & 76.7\tabularnewline Inception-V3 \cite{szegedy2016rethinking} & 78.8 & 94.4 & 16.4\tabularnewline \hline EfficientNet-B1 \cite{tan2019efficientnet} & 77.3 & 93.5 & 29.5\tabularnewline EfficientNet-B2 & 79.2 & 94.5 & 47.6\tabularnewline NASNet-A \cite{zoph2018learning} & 78.6 & 94.2 & -\tabularnewline BASE \cite{shaw2019meta} & 74.3 & 91.9 & -\tabularnewline \hline CATCH-Net-A & 79.04 & 94.43 & \textbf{16.9}\tabularnewline CATCH-Net-B & \textbf{79.46} & \textbf{94.7} & 33.7\tabularnewline \hline \end{tabular} \par\end{centering} \label{imagenet table} \end{table} Having proved that CATCH can adapt to new tasks efficiently with meta-training, we further inquire whether CATCH has the ability to transfer across different domains, including image classification, object detection, and semantic segmentation. In this section, we consider a more challenging setting where the meta-training phase contains only image classification tasks, while tasks in all three domains are targeted in the adaptation phase. The architectures are very different among these domains, so we search for their common component, the feature extractor (backbone). ResNet is one popular backbone for these tasks, so we design the search space following \cite{DBLP:journals/corr/abs-1906-04423,yao2019sm}. Constructing a model in the Residual block-based search space requires the controller to make several decisions: (1) select the network's base channel from $\{48,56,64,72\}$, (2) decide the network's depth within $\{15,20,25,30\}$, (3) choose the number of stages $s$, which is either 4 or 5, (4) schedule the number of blocks contained in each stage, and (5) arrange the distribution of blocks holding different channels. Details of the Residual block-based search space can be found in the Appendix. \begin{table}[t] \caption{Results on COCO compared to manually designed and NAS searched backbones.
Latency results of networks except CATCH are taken from \cite{yao2019sm}.} \begin{centering} \begin{tabular}{c|c|c|c|c} \hline Method & Backbone & Input size & Latency (ms) & mAP\tabularnewline \hline RetinaNet \cite{lin2017focal} & ResNet101-FPN & 1333x800 & 91.7 (V100) & 39.1\tabularnewline FSAF \cite{zhu2019feature} & ResNet101-FPN & 1333x800 & 92.5 (V100) & 40.9\tabularnewline GA-Faster RCNN \cite{wang2019region} & ResNet50-FPN & 1333x800 & 104.2 (V100) & 39.8\tabularnewline Faster-RCNN \cite{ren2015faster} & ResNet101-FPN & 1333x800 & 84.0 (V100) & 39.4\tabularnewline Mask-RCNN \cite{he2017mask} & ResNet101-FPN & 1333x800 & 105.0 (V100) & 40.2\tabularnewline \hline DetNAS \cite{chen2019detnas} & Searched Backbone & 1333x800 & - & 42.0\tabularnewline SM-NAS: E3 & Searched Backbone & 800x600 & 50.7 (V100) & 42.8\tabularnewline SM-NAS: E5 & Searched Backbone & 1333x800 & 108.1 (V100) & 45.9\tabularnewline Auto-FPN \cite{xu2019auto} & Searched Backbone & 1333x800 & - & 40.5\tabularnewline \hline CATCH & CATCH-Net-C & 1333x800 & 123.5 (V100) & \textbf{43.2}\tabularnewline \hline \end{tabular} \par\end{centering} \label{coco} \end{table} \subsubsection{Experiment Settings.} We use the same meta-training settings as the ones we used in NAS-Bench-201. For each meta epoch, an ImageNet sub-dataset is created. To form such sub-datasets, we sample $X$ classes from all classes of ImageNet, where $X\in\{10,20,30\}$. Then the images are resized to $16\times16$, $32\times32$, or $224\times224$. Thus there are $3\times\left[\tbinom{1000}{10}+\tbinom{1000}{20}+\tbinom{1000}{30}\right]$ possible sub-datasets. To achieve a balance between inference latency and network performance, we adopt the multi-objective reward function $R=P(m)\times[\frac{LAT(m)}{T_{target}}]^{w}$ from \cite{Tan_2019_CVPR}, where $P(m)$ denotes the model\textquoteright s performance (e.g. validation accuracy for classification, mAP for object detection, or mIoU for semantic segmentation), $LAT(m)$ measures the model's inference latency, and $T_{target}$ is the target latency. $w$ serves as a hyperparameter adjusting the performance-latency tradeoff. In our experiments, we set $w=-0.05$. With this reward, we hope to find models that excel not only in performance but also in inference speed. We meta-train the CATCHer for 5 GPU days, and adapt on each target task to search for 10 architectures. We target the ImageNet dataset for image classification, the COCO dataset for object detection, and the Cityscapes dataset for semantic segmentation. The detailed settings can be found in the Appendix. \subsubsection{Search Results.} \begin{table}[t] \begin{centering} \caption{Results on Cityscapes compared to manually designed and NAS searched backbones. Latency is measured on a Tesla V100 with one image with shape (3, 1024, 1024).
SS and MS denote single-scale and multi-scale testing, respectively.\label{Tab:cityscapes_val}} \par\end{centering} \begin{centering} \begin{tabular}{c|c|c|c|c} \hline Method & Backbone & Latency (ms) & mIoU (SS) & mIoU (MS)\tabularnewline \hline BiSeNet \cite{Yu_2018_ECCV} & ResNet101 & 41 & - & 80.3\tabularnewline DeepLabv3+ \cite{Chen_2018_ECCV} & Xception-65 & 85 & 77.82 & 79.3\tabularnewline CCNet \cite{Huang_2019_ICCV} & ResNet50 & 175 & - & 78.5\tabularnewline DUC \cite{8354267} & ResNet152 & - & 76.7 & -\tabularnewline DANet \cite{Fu_2019_CVPR} & ResNet50 & - & 76.34 & -\tabularnewline \hline Auto-DeepLab \cite{liu2019auto} & Searched Backbone & - & 79.94 & -\tabularnewline DPC \cite{NIPS2018_8087} & Xception-71 & - & 80.1 & -\tabularnewline \hline CATCH & CATCH-Net-D & \textbf{27} & 79.52 & \textbf{81.12}\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} Table \ref{imagenet table} compares the searched architectures with other widely-recognized networks on ImageNet. CATCH-Net-A outperforms many listed networks. Its accuracy is comparable with EfficientNet-B1 and ResNext-101, yet it is 2.82X and 4.54X faster, respectively. CATCH-Net-B outperforms ResNext-101 while shortening the latency by 2.28X. The network comparison on COCO and Cityscapes is presented in Table \ref{coco} and Table \ref{Tab:cityscapes_val}. Our network again shows faster inference time and competitive performance. We also transfer CATCH-Net-B, found during the search on ImageNet, to COCO and Cityscapes, which yields 42\% mAP with 136ms inference time and 80.87\% mIoU (MS) with 52ms latency, respectively. Our results again show that directly transferring top architectures from one task to another cannot guarantee optimality. They also reveal CATCH\textquoteright s potential to transfer across tasks even when they are distant from the meta-training ones. \section{Ablation Study} The context encoder is the spotlight component of our algorithm. We are especially curious about two questions: (1) Is the encoder actually helpful for adaptation (compared with simply plugging in the meta-learned controller and evaluator priors)? (2) If so, does the improvement come from good estimates of the posterior, or from the stochastic generation of $\boldsymbol{z}$ that encourages exploration and benefits generalization? To answer these questions, we designed two extra sets of experiments: (1) CATCH-zero: We set $\boldsymbol{z}=\boldsymbol{0}$, and thereby completely eliminate the encoder's effect on both the controller and the evaluator; (2) CATCH-random: We sample each $\boldsymbol{z}$ from a unit Gaussian prior $\mathcal{N}(\boldsymbol{0},diag(\boldsymbol{1}))$ during the search as random inputs. The results are presented in Figure \ref{t_graphs-ab-enc} (a)-(c). In both settings, the agents are still meta-trained for 10.5 hours before they are plugged in for adaptation. \begin{figure}[t] \begin{centering} \subfloat[]{\includegraphics[scale=0.21]{images/c10_abl_enc} }\subfloat[]{\includegraphics[scale=0.21]{images/c100_abl_enc} }\subfloat[]{\includegraphics[scale=0.21]{images/imgnet_abl_enc} } \par\end{centering} \caption{(a)-(c) compare the results of 500 trials for CATCH-meta, CATCH-sfs (search from scratch), CATCH-zero, and CATCH-random.} \label{t_graphs-ab-enc} \end{figure} The gaps among the lines in Figure \ref{t_graphs-ab-enc} answered our questions.
The encoder not only helps with adaptation (shown by comparing CATCH-meta and CATCH-zero), but also provides assistance in a much more meaningful way than using random inputs for exploration, as CATCH-meta outperforms CATCH-random on both CIFAR-10 and CIFAR-100. Interestingly, we observe less significant improvement on ImageNet16-120. One hypothesis is that, since we perform the meta-training phase on sub-datasets of ImageNet16, the meta-trained controller and evaluator are already tuned towards policies that fit the search on ImageNet16; hence, the transferred policies require less adaptation assistance from the encoder. More ablation studies can be found in the Appendix. \section{Conclusion and Discussion} In this work, we propose CATCH, a transferrable NAS approach, by designing an efficient learning framework that leverages the benefits of context-based meta reinforcement learning. The key contribution of CATCH is to boost NAS efficiency by extracting and utilizing task-specific latent contexts, while maintaining universality and robustness in various settings. Experiments and ablation studies show its dominant position in search efficiency and performance over non-transferrable schemes on NAS-Bench-201. Extensive experiments on the Residual block-based search space also demonstrate its capability in handling cross-task architecture search. As a task-agnostic transferrable NAS framework, CATCH possesses great potential for scaling NAS to large datasets and various domains efficiently. During our research into transferrable NAS frameworks, we identified many potentially valuable questions to be explored. Efficient adaptation among domains is challenging, and we demonstrated a first attempt to simplify it by searching for backbones within a shared search space. A possible future investigation would be to generalize cross-task architecture search to flexibly include more decisions, such as searching for detection and segmentation heads. Meanwhile, our meta-training tasks involve only classification tasks, but it is also possible to diversify the pool and explore whether it leads to further performance boosts. \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \lettrine[findent=2pt]{\textbf{W}}{ }ireless connectivity has become an irreplaceable commodity in our modern society. The exponential trend expected in wireless technology usage has led analysts to predict that by 2023, $71\%$ of the global population will enjoy some kind of wireless service. In the group of Wireless Local Area Networks (WLANs), Wireless Fidelity (Wi-Fi) technology presents up to 4-fold growth over the 5-year period from 2018 to 2023 \cite{CiscoSystemsInc.2020}. The newest Wi-Fi standard, IEEE 802.11ax \cite{9442429}, also known as Wi-Fi 6, is expected to grow 4-fold by 2023, accounting for $11\%$ of all public Wi-Fi hotspots \cite{Cisco2020}. Spatial reuse (SR) has been of interest for more than 20 years in the wireless community since it contributes to the reduction of collisions among stations and the determination of channel access rights \cite{Ye2003}. As the number of dense WLAN deployments increases, SR becomes more challenging in the context of Carrier Sense Multiple Access (CSMA) technology as used in Wi-Fi \cite{Wilhelmi2021}. Wi-Fi 6 aims to address diverse challenges such as the increasing number of Wi-Fi users, dense hotspot deployments, and highly demanding services such as Augmented, Mixed and Virtual Reality. \begin{figure}[h] \center \includegraphics[scale=0.24]{figures/ap.pdf} \caption{Typical operational scenario: APs adjust their Transmission Power and CCA threshold towards efficient spatial reuse.} \label{state_vs_context} \end{figure} Moreover, 802.11ax includes additional features such as dynamic adjustment of the Clear Channel Assessment (CCA) threshold and Transmission Power (TP). A static CCA threshold may not be representative of diverse network topologies, and may cause inefficient channel utilization or undesired concurrent transmissions \cite{Thorpe2014}. Additionally, adjusting TP allows the interference among the APs to be reduced and, consequently, the network performance to be maximized \cite{Huehn2012}. Thus, SR and network performance can be improved by adjusting the CCA threshold and TP. Yet, the complex interactions between CCA and TP call for intelligent configuration of both. At the same time, data scarcity and data access are key concerns for any Machine Learning (ML) method \cite{Khosla2020}. Recently, AI-based wireless networks have been of remarkable interest among researchers, both in the Wi-Fi domain \cite{szott2021wifi} and the 5G domain \cite{Elsayed2019}; however, the proposed solutions usually require complete availability of the data. In reality, data access is not always feasible due to privacy restrictions. Recent wireless network architectures have started to shift to a more open and flexible design. Both 5G networks and the O-RAN Alliance architecture support the utilization of artificial intelligence to orchestrate main network functions \cite{iturria}. In the context of Wi-Fi, a novel project named OpenWiFi \cite{TIP2022}, released by the Telecom Infra Project, intends to disaggregate the Wi-Fi technology stack by utilizing open source software for the cloud controller and the AP firmware operating system. These paradigm changes allow many applications in the area of ML, and more specifically in Reinforcement Learning (RL), to become reality.
In this paper\footnote{The present work has been submitted to IEEE}, we intend to optimize the TP and CCA threshold to improve SR and the overall network KPIs. To do so, we formulate the TP and CCA configuration problem with the objective of maximizing product-based network fairness and minimizing station starvation. We model the SR problem as a distributed multi-agent decision-making problem and use a Multi-Agent Multi-Armed Bandit (MA-MAB) approach to solve it. The contributions of this work, different from the ones found in the literature, can be summarized in the following points: \begin{enumerate} \item We propose a solution for reducing the inherently huge action space given the possible combinations of TP and CCA threshold values per AP. We derive our solution via a worst-case interference analysis. \item We analyse the performance, in terms of network KPIs, of well-known distributed MA-MAB implementations such as $\epsilon$-greedy, UCB and Thompson Sampling on the selection of the TP and CCA values in cooperative and non-cooperative settings. \item We introduce a contextual MA-MAB (MA-CMAB) named Sample Average Uncertainty-Sampling (SAU) in cooperative and non-cooperative settings. SAU-MAB is based on a deep Contextual MAB. \item We propose for the first time, to the best of our knowledge, a deep transfer learning solution to efficiently adapt TP and CCA parameters in dynamic scenarios. \end{enumerate} With these contributions, our simulation results show that the $\epsilon$-greedy MAB solution improves the throughput by at least 44.4\%, and provides improvements of 12.2\% in terms of fairness and 94.5\% in terms of Packet Loss Ratio (PLR) over typical configurations when a reduced set of actions is known. Additionally, we show that the SAU-Coop algorithm improves the throughput by 14.7\% and the PLR by 32.5\% when compared with non-cooperative approaches under the full set of actions. Moreover, our proposed transfer learning based approach reduces service drops by at least 60\%. The rest of the paper is organized as follows. Section \ref{Section2} presents a summary of recent work that uses Machine Learning to improve SR in Wi-Fi. Section \ref{Section3} covers the basics of Multi-Armed Bandits, including deep contextual bandits and deep transfer reinforcement learning. In Section \ref{Section4} we present our system model together with an analysis to reduce the action space via worst-case interference. Section \ref{Section5} presents the proposed schemes and the results are discussed in Section \ref{Section6}. Finally, Section \ref{Section8} concludes the paper. \section{Related work} \label{Section2} Reinforcement learning-based spatial reuse has been of interest in the recent literature. The studies have focused on distributed solutions with no cooperation or on centralized schemes of multi-armed bandits. These studies are summarized below. In \cite{Wilhelmi2019}, the authors present a comparison among well-known MABs such as $\epsilon$-greedy, UCB, Exp3 and Thompson sampling in the context of decentralized SR via Dynamic Channel Allocation (DCA) and Transmission Power Control (TPC) in WLANs. The results showed that ``selfish learning'' in a sequential manner presents better performance than ``concurrent learning'' among the agents. Additionally, \cite{Bardou2021} presents a centralized MAB consisting of an optimizer based on a modified Thompson Sampling (TS) algorithm and a sampler based on a Gaussian Mixture (GM) algorithm to improve SR in 802.11ax Wi-Fi.
More specifically, the authors propose to deal with the large action space comprised of TP and Overlapping BSS/Preamble-Detection (OBSS/PD) threshold values by utilizing a MAB variant called the Infinitely Many-Armed Bandit (IMAB). Furthermore, a distributed solution based on Bayesian optimization of Gaussian processes to improve SR is proposed in \cite{bardou2022inspire}. Other solutions that are not related to reinforcement learning can be found in the literature with the aim of improving SR in WLANs. For instance, in \cite{9417353} the authors propose a distributed algorithm where the APs decide their Transmission Power based on their RSSI. Moreover, in \cite{app112211074} the authors present an algorithm to improve SR by utilizing diverse metrics such as SINR, proximity information, RSSI and BSS color, and compare it with the legacy existing algorithms. The ultimate goal of the previous algorithm is the selection of the channel state (IDLE or BUSY) at the moment of an incoming frame given the previous metrics. Finally, the authors in \cite{wilhelmi2022federated} presented a supervised federated learning approach for SR optimization. In all the above works, the authors employ either centralized or decentralized schemes with no cooperation to address SR optimization in WiFi. In this work, we propose to address this via a coordination-based MA-MAB. In addition, we tackle some of the issues previously encountered in other works, such as the size of the action space due to the set of possible TP and CCA values. Finally, to the best of our knowledge, we propose for the first time to address SR adaptation in dynamic environments utilizing deep transfer learning. \section{Background } \label{Section3} In this section, we present a background on Multi-Armed Bandits, including $\epsilon$-greedy, Upper Confidence Bound and Thompson sampling bandits, and an introduction to contextual MABs with a focus on a neural network-based contextual bandit. Additionally, we introduce MABs in the multi-agent setting and we finalize with a background on deep transfer reinforcement learning. Multi-Armed Bandits (MABs) are a widely used RL approach that tackles the exploration-exploitation trade-off problem. Their implementation is usually simpler when compared with full RL off-policy or on-policy algorithms. However, simplicity often comes at the cost of obtaining suboptimal solutions \cite{Bouneffouf2020}. The basic model of MABs corresponds to the stochastic bandit, where the agent has $K$ possible actions to choose from, called arms, and receives a certain reward $R$ as a consequence of pulling the $j^{th}$ arm over $T$ environment steps. The rewards can be modeled as independent and identically distributed (i.i.d), adversarial, constrained adversary or random-process rewards \cite{Slivkins2019}. Of the four models previously mentioned, two are more commonly found in the literature: the i.i.d and the adversarial models. In the i.i.d model, each pulled arm's reward is drawn independently from a fixed but unknown distribution $D_j$ with an unknown mean $\mu_j^*$. On the other hand, in the adversarial model each pulled arm's reward is randomly determined by an adversary or an entity alien to the agent (such as the environment) and is not necessarily sampled from any distribution \cite{zhou2015survey}.
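To make the reward models concrete, the following minimal Python sketch (illustrative only; the Gaussian arm distributions are our assumption) implements the i.i.d model, where every pull of arm $j$ is an independent draw from a fixed $D_j$ with mean $\mu_j^*$:
\begin{verbatim}
import numpy as np

class StochasticBandit:
    """i.i.d. stochastic bandit: arm j pays rewards drawn from a
    fixed but unknown distribution D_j with mean mu_j* (Gaussian
    arm distributions are an assumption made for illustration)."""
    def __init__(self, means, sigma=0.1, seed=0):
        self.means = np.asarray(means)          # the unknown mu_j*
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def pull(self, j):
        # Each pull is independent of all previous pulls.
        return self.rng.normal(self.means[j], self.sigma)

env = StochasticBandit(means=[0.2, 0.5, 0.8])
reward = env.pull(2)   # one i.i.d. sample of arm 2's reward
\end{verbatim}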
The performance of MABs is measured in terms of the cumulative regret $R_T$, or total expected regret over the $T$ steps, defined as: \begin{myequation} R_T = \sum_{t=1}^{T} \mathbb{E}[\text{max}_j\mu_j^* - \mu_{a_t}^*], \end{myequation} where $a_t$ is the arm pulled at step $t$. The utmost goal of the agent is to minimize $R_T$ over the $T$ steps such that $\lim_{T \to \infty} R_T/T = 0$, which means the agent will identify the action with the highest reward in that limit. \subsection{$\epsilon$-greedy, Upper-Confidence-Bound and Thompson Sampling MAB} The $\epsilon$-greedy MAB is one of the simplest MABs and, as the name suggests, it is based on the $\epsilon$-greedy policy. In this method, the agent greedily selects the best arm most of the time and, once in a while, with a predefined small probability ($\epsilon$), it selects a random arm \cite{sutton2018reinforcement}. The UCB MAB tackles some of the disadvantages of the $\epsilon$-greedy policy at the moment of selecting non-greedy arms. Instead of drawing an arm at random, the UCB policy measures how close to optimal the non-greedy arms are. In addition, it takes into consideration the rewards' uncertainty in the selection process. The selected arm is obtained by drawing the action from $\text{\texttt{argmax}}_a\left[Q_{t}(a) + c\sqrt{\text{ln }{t}/N_{t}(a)}\right]$, where $N_{t}(a)$ corresponds to the number of times that action $a$ via the $j^{th}$ arm has been chosen and $Q_{t}(a)$ is the Q-value of action $a$ \cite{sutton2018reinforcement,Agrawal1995}. Finally, the Thompson Sampling MAB's action selection is based on the Thompson Sampling algorithm, as the name indicates. Thompson sampling, or posterior sampling, is a Bayesian algorithm that constantly constructs and updates the distribution of the observed rewards given a previously selected action. This allows the MAB to select arms based on the probability of how optimal the chosen arm is. The parameters of the distribution are updated depending on the selection of the distribution class \cite{Russo2018}. \subsection{Deep Contextual Multi-Armed Bandits} Contextual Multi-Armed Bandits (CMABs) are a variant of MABs that, before selecting an arm, observe a series of features commonly named the context \cite{Bouneffouf2020}. \iffalse Figure \ref{state_vs_context}, depicts the difference between the stateless MAB and CMAB. \fi Different from the stateless MAB, a CMAB is expected to relate the observed context with the feedback or reward gathered from the environment in $T$ episodes and consequently predict the best arm given the received features \cite{zhou2015survey}. Diverse CMABs have been proposed throughout the literature, such as LinUCB, Neural Bandit, Contextual Thompson Sampling and Active Thompson Sampling \cite{Bouneffouf2020}. More recently, a deep neural contextual bandit named SAU-Sampling has been presented in \cite{zhu2021deep}, where the context is related to the rewards using neural networks. The details of SAU-Sampling will be discussed in the following sections. \subsection{Multi-Agent Multi-Armed Bandits (MA-MABs)} Multi-Agent Multi-Armed Bandits are the multi-agent variant of MABs in which $N$ agents pull their $j^{th}$ arm and each $m^{th}$ agent receives a reward drawn from its distribution $D_{m,j}$ with an unknown mean $\mu_{m,j}^*$ \cite{NEURIPS2021_c96ebeee}. MA-MABs can be modeled as centralized or distributed. In centralized settings the agents' actions are taken by a centralized controller, and in distributed settings each agent independently chooses its own actions.
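As a minimal illustration of the distributed setting (an illustrative Python sketch, not the implementation used later in our simulations), each agent runs its own independent bandit; here every agent uses the UCB rule defined above:
\begin{verbatim}
import numpy as np

class UCBAgent:
    """One agent's UCB bandit:
    argmax_a [ Q_t(a) + c * sqrt(ln t / N_t(a)) ]."""
    def __init__(self, n_arms, c=1.0):
        self.Q = np.zeros(n_arms)   # sample-average action values
        self.N = np.zeros(n_arms)   # pull counts N_t(a)
        self.t = 0
        self.c = c

    def select(self):
        self.t += 1
        if np.any(self.N == 0):     # play every arm once first
            return int(np.argmin(self.N))
        bonus = self.c * np.sqrt(np.log(self.t) / self.N)
        return int(np.argmax(self.Q + bonus))

    def update(self, a, r):
        self.N[a] += 1
        self.Q[a] += (r - self.Q[a]) / self.N[a]

# Distributed MA-MAB: each agent (e.g., each AP) keeps its own
# bandit and selects its arm independently of the others.
agents = [UCBAgent(n_arms=8) for _ in range(6)]
arms = [agent.select() for agent in agents]
\end{verbatim}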
Distributed decision-making settings scale more effectively \cite{Landgren2019} and naturally handle large sets of $K$ arms, whereas centralized settings suffer from the cardinality explosion of the arms. Finally, the total regret can be defined as: \begin{myequation} R_T = \sum_{t=1}^{T}\sum_{m=1}^{N} \mathbb{E}[\text{max}_j\mu_{m,j}^* - \mu_{m,a_{m,t}}^*], \end{myequation} where $a_{m,t}$ is the arm pulled by the $m^{th}$ agent at step $t$. In this work, we consider two main approaches: distributed non-cooperative and cooperative MA-MABs with adversarial rewards. \subsection{Deep Transfer Reinforcement Learning} Transfer learning, or knowledge transfer, techniques improve learning time efficiency by utilizing prior knowledge. Typically, this is done by extracting the knowledge from one or several source tasks and then applying such knowledge to a target task \cite{Pan10}. If the tasks are related in nature and the target task benefits positively from the knowledge acquired from the source, then it is called inductive transfer learning \cite{Scott2018}. This type of learning is not uncommon and it is used by the human brain on a daily basis. However, a phenomenon called negative transfer can occur if, after knowledge transfer, the target task's performance is negatively affected \cite{Zhuang2021}. In the realm of transfer learning we can find Deep Transfer Learning (DTL). DTL is a subset of transfer learning that studies how to utilize knowledge in deep neural networks. In the context of classification/prediction tasks, a large amount of data is required to properly train the model of interest \cite{Vu2020}. In many practical applications where training time is essential to respond to new domains \cite{Elsayed2021}, retraining using large amounts of data is not always feasible and possibly catastrophic in terms of performance. ``What to transfer'' corresponds to one of the main research topics in transfer learning. Specifically, in the case of deep transfer learning four categories have been broadly identified: instance-based transfer, where data instances from a source task are utilized; mapping-based transfer, where a mapping of two tasks is used on a new target task; network-based transfer, where the pre-trained network model is transferred to the target task; and adversarial-based transfer, where an adversarial model is employed to find which features from diverse source tasks can be transferred to the target task \cite{Tan2018}. In this work, we utilize the form of DTL called network-based transfer learning to efficiently adapt TP and CCA parameters in dynamic scenarios. An example of the network-based transfer learning technique is presented in Fig. \ref{transfer_network}. Such a technique is utilized in deep transfer reinforcement learning as part of a transfer learning type called policy transfer \cite{zhu2020transfer}. In particular, policy transfer takes a set of source policies $\pi_{S_1}, ..., \pi_{S_K}$ that are trained on a set of source tasks and uses them in a target policy $\pi_{T}$ in a way that is able to leverage the former knowledge from the source policies to learn its own. More specifically, the weights and biases that comprise each of the hidden layers of the source policies are the elements transferred to the target policies. Note that in practice policies are modeled as neural networks.
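A minimal PyTorch sketch of this network-based policy transfer is given below (the two-hidden-layer layout mirrors the SAU-Sampling hyperparameters used later; everything else, including the layer sizes shown, is an illustrative assumption):
\begin{verbatim}
import torch.nn as nn

def make_policy(n_in=3, n_hidden=100, n_out=8):
    # Two hidden layers followed by one output layer.
    return nn.Sequential(
        nn.Linear(n_in, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_out))

source = make_policy()   # assumed already trained on a source task
target = make_policy()   # freshly initialized for the target task

# Network-based transfer: copy the hidden layers' weights and
# biases into the target policy; key '4.' is the output Linear
# layer, which is left freshly initialized so the target policy
# can adapt to its own task.
hidden = {k: v for k, v in source.state_dict().items()
          if not k.startswith('4.')}
target.load_state_dict(hidden, strict=False)
\end{verbatim}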
\begin{figure}[h] \center \includegraphics[scale=0.32]{figures/transfer_learning.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network-based transfer learning: the source task neural network's hidden layers are reused in the target network} \label{transfer_network} \end{figure} In this paper, we take advantage of the design of the contextual multi-armed bandit presented in \cite{zhu2021deep} and apply policy transfer to improve the agent's SR adaptability in dynamic environments. The results and observations of applying DTRL are discussed in section \ref{adaptiveSR}. In the next section, we discuss the details of the system model and present an analysis on reducing the cardinality of the action space in the proposed SR problem formulation. \section{System model and Problem Formulation} \label{Section4} \begin{table} \caption{Notations} \centering \label{param-def} \begin{tabular}{p{1.74cm}|p{6.24cm}} \textbf{Notation} & \textbf{Definition}\\ \hline $s$ and $\mathcal{S}$ & Index and set of stations, \\ $m$ and $\mathcal{M}$ & Index and set of APs, \\ $x^{|\mathcal{S}|}$ and $c^{|\mathcal{M}|}$ & Stations' positions and APs' positions \\ \hline $P_{cs}^{m}$ & CCA threshold of the $m^{th}$ AP, \\ $P_{tx}^{m}$ & Transmission Power of the $m^{th}$ AP, \\ $R_{s}^{m}$ & Throughput of the $s^{th}$ STA of the $m^{th}$ AP, \\ $R_{s,A}^{m}$ & Achievable throughput of the $s^{th}$ STA of the $m^{th}$ AP, \\ $D_{s}^{m}$ & Adaptive data link rate of the $s^{th}$ STA of the $m^{th}$ AP \\ \hline $P_{IDLE}^{m}$ & Probability that a STA is idle in a BSS, \\ $P_{SUCC,s}^{m}$ & Probability of a successful transmission by the $s^{th}$ STA to the $m^{th}$ AP,\\ $\phi_s^m $ & Probability that the $s^{th}$ STA is transmitting to the $m^{th}$ AP, \\ $\xi_{CCA}$ & Binary function, $\xi_{CCA} = 1$ if the signal is below the CCA threshold $P_{cs}$, \\ $\xi_{ED}$ & Binary function, $\xi_{ED} = 1$ if the signal is below the Energy Detection (ED) threshold $P_{ed}$, \\ $\xi_{STA}$ & Binary function, $\xi_{STA} = 1$ if the station's throughput is below the fraction $\omega$ of its achievable throughput (starvation), \\ $E(T_{g,s}^m)$ and $E(I_{g,s}^m)$ & Expected length of a general time-slot and expected information transmitted by the $s^{th}$ STA of the $m^{th}$ AP, \\ $T_{TXOP}$ and $T_{EDCA}$ & Packet transmission duration and time required for a successful Enhanced Distributed Channel Access (EDCA) transmission, \\ $\Bar{P}^{fair}$ and $\Bar{U}$ & Average linear product-based network fairness and average station starvation, \\ $\omega$, $g_s^m$ and $\sigma^2$ & Fraction of $R_{s,A}^{m}$ below which STAs are considered in starvation, the channel power gain and the noise power. \\ \hline $P_{tx}^m$ and $P_{rx}^r$ & The transmission power at the $m^{th}$ transmitter (AP) and the received signal strength at the $r^{th}$ receiver, \\ $d_{m,r}$ and $\theta$ & Distance between the $m^{th}$ transmitter and the $r^{th}$ receiver and path loss exponent, \\ $\mathcal{F}_m^{+}$ and $\mathcal{F}_m^{-}$ & Subsets of AP interferers and non-AP interferers, \\ $\gamma_{m,r}$, $C_{m,r}$ and $C_T$ & Worst-case SINR and Shannon's maximum capacity of the $m^{th}$ transmitter and $r^{th}$ receiver, and cumulative maximum network capacity.
\\ \hline \end{tabular} \label{notations} \end{table} In this work, we consider an infrastructure-mode Wi-Fi 802.11ax network $\mathcal{N}$ with $N = |\mathcal{S}| + |\mathcal{M}|$ nodes, where $\mathcal{S}$ is the set of stations with positions $\{\bm{x}^1, \bm{x}^2,..., \bm{x}^{|\mathcal{S}|}\} \in \mathbb{R}^2$ and $\mathcal{M}$ is the set of APs with positions $\{\bm{c}^1, \bm{c}^2,..., \bm{c}^{|\mathcal{M}|}\} \in \mathbb{R}^2$. We assume that the $|\mathcal{M}|$ AP positions correspond to cluster centers and that the stations attach to their closest AP. In addition, the list of notations utilized in this work can be found in Table \ref{notations}. In this paper, we improve SR via the maximization of the linear product-based fairness and the minimization of the number of stations under starvation by configuring the TP and CCA parameters. \begin{subequations}\label{opt-Verbal-CCmanagement} \begin{align} \label{opt-Verbal} & \textbf{Max} && \begin{pmatrix} \text{fairness} \\ \text{avg. station starvation complement} \end{pmatrix} \\ \label{opt-Verbal2} & \textbf{s.t.} && \text{Throughput} \\ &\textbf{var.} && \text{Transmission power and CCA threshold selection} \end{align} \end{subequations} Let us define the probability of a STA being idle in a BSS as: \begin{align} P_{IDLE}^{m} = \prod_{s \in \mathcal{S}} \phi_s^{m'} &&\forall m\in \mathcal{M}, \end{align} where $\phi_s^m \in [0,1]$ is the probability of a STA transmitting to the $m^{th}$ AP and $\phi_s^{m'} = 1 - \phi_s^m$ its complement. In addition, we proceed to define the probability that a STA successfully transmits a packet as: \begin{align} P_{SUCC,s}^{m} = \phi_s^m\xi_{CCA}^{m}(\cdot)\xi_{ED}^{m}(\cdot)\prod_{s'\in\mathcal{S}, s'\neq s}\phi_{s'}^{m'} &&\forall m\in \mathcal{M}, \end{align} where $\xi_{CCA}(\cdot) = 1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the CCA threshold ($P_{cs}$), and zero otherwise. Likewise, $\xi_{ED}(\cdot) = 1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the Energy Detection (ED) threshold ($P_{ed}$), and zero otherwise. Additionally, we consider $P_{cs} = P_{ed}$ to simplify our analysis. As indicated by \cite{Derakhshani2018}, the expected length of the general time-slot $\mathbb{E}(T_g)$ and the expected information transmitted by the $s^{th}$ STA to the $m^{th}$ AP $\mathbb{E}(I_g)$ can be expressed as: \begin{align} E(T_{g,s}^m) = \delta P_{IDLE}^{m} + P_{IDLE}^{m'}T_{EDCA} &&\forall m\in \mathcal{M}, \end{align} \begin{align} E(I_{g,s}^m) = P_{SUCC,s}^{m}D_s^m T_{TXOP} &&\forall m\in \mathcal{M}, s\in \mathcal{S}, \end{align} where $D_s^m$ corresponds to the link data rate, $T_{EDCA}$ corresponds to the time required for a successful Enhanced Distributed Channel Access (EDCA) transmission, $T_{TXOP}$ is the transmission duration, $\delta$ is the duration of an idle time slot and $P_{IDLE}^{m'} = 1 - P_{IDLE}^{m}$. The link data rate adaptively depends on the SNR \cite{Holland2001} and is mapped based on SNR/BER curves \cite{Riley2010}. The received SNR can be defined as $P_{tx}^m g_s^m\//\sigma^2$, where $P_{tx}^m$ is the transmission power, $g_s^m$ the channel power gain and $\sigma^2$ the noise power.
Finally, the throughput of the $s^{th}$ station attached to the $m^{th}$ AP can be defined as: \begin{align} \label{thr_eq} R_s^m = \frac{E(I_{g,s}^m)}{E(T_{g,s}^m)} = \frac{P_{SUCC,s}^{m} D_s^m T_{TXOP}}{P_{IDLE}^{m}\delta + P_{IDLE}^{m'} T_{EDCA} }. \end{align} Additionally, let us define the average linear product-based network fairness and the average station starvation in a distributed setting as: \begin{align} \Bar{P}^{fair}(t) = \frac{1}{|\mathcal{M}|}\sum_{m \in \mathcal{M}}\prod_{s \in \mathcal{S}} \frac{R_s^m}{R_{s,A}^m}, \end{align} \begin{align} \Bar{U}(t) = \frac{1}{|\mathcal{M}|}\sum_{m \in \mathcal{M}}\frac{1}{|\mathcal{S}|}\sum_{s \in \mathcal{S}} \xi_{STA}(R_s^m < \omega R_{s,A}^m), \end{align} where $R_{s,A}^m$ is the achievable throughput of the $s^{th}$ station attached to the $m^{th}$ AP. Additionally, $\xi_{STA} = 1$ if the $s^{th}$ station's throughput is below a fraction $\omega \in (0,1]$ of its achievable throughput, in which case the station is considered in starvation; otherwise it is zero. The considered problem is a multi-objective problem and can be addressed with the weighted-sum approach. Thus, in each time step, the problem can be formulated as follows: \begin{problem}\label{problem_opt} \begin{align} &\underset{\mathbf{P_{tx}},\mathbf{P_{cs}}}{\operatorname{max}}\;A_1 \Bar{P}^{fair}(t)+A_2 (1-\Bar{U}(t))\\ &\text{s.t.}\nonumber\\ &\text{\eqref{thr_eq}},\\ &P_{tx}^{m}\in [P_{tx}^{min}, P_{tx}^{max}], P_{cs}^{m} \in [P_{cs}^{min},P_{cs}^{max}] &&\forall m \in \mathcal{M} \end{align} \end{problem} Due to the dynamic nature of the scenario, the transmission probabilities of the STAs $\phi_s^m$ are not directly controllable and require an additional step to map them to EDCA parameters \cite{Derakhshani2018}. Instead, we simplify our analysis by utilizing a network simulator to model such dynamics and propose to solve the previous optimization problem using a MA-MAB solution, as described in section \ref{Section5}. \subsection{Optimal action set via worst-case interference} \label{worst-case} Typical Wi-Fi scenarios consist of APs and stations distributed non-uniformly. Contrary to the analysis presented in \cite{Kim2006}, we aim to obtain an optimal subset of TP and CCA threshold values to further reduce the action space size in SR problems. In this analysis, we only consider the Carrier Sense (CS) threshold term as the form of the CCA threshold. First, let us consider the worst-case interference scenario in an $N >2$ arrangement. For the sake of simplicity we use the path-loss radio propagation model: \begin{myequation} P_{rx}^{r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}, \end{myequation} where $P_{tx}^{m}$ and $P_{rx}^{r}$ are the TP at the $m^{th}$ transmitter (AP) and the received signal strength at the $r^{th}$ receiver, respectively. In addition, $d_{m,r}$ is the distance between the transmitter and the receiver. Finally, $\theta \in [2,4]$ corresponds to the path loss exponent.
Thus, from the perspective of the $m^{th}$ AP, the worst-case interference $I_{m}$ is defined as: \begin{myequation} I_m = \sum_{v \in \mathcal{F}_m^{+}} \frac{P_{tx}^{v}}{{X^{(m,v)}}^\theta} + P_{tx}^{sta}\sum_{w\in \mathcal{F}_m^{-}} \frac{1}{{X^{(m,w)}}^\theta}, \label{interference} \end{myequation} where $\mathcal{F}_m^{+}$ is the subset of interferers, $|\mathcal{F}_m^{+}|=|\mathcal{M}|-1$, corresponding to the APs interfering with the $m^{th}$ AP, and $\mathcal{F}_m^{-}$ is the subset of non-AP interferers, $|\mathcal{F}_m^{-}| = |\mathcal{S}|$, corresponding to the stations interfering with the $m^{th}$ AP. Furthermore, $P_{tx}^{v}$ is the TP of the $v^{th}$ interferer and $P_{tx}^{sta}$ is a constant corresponding to the fixed power assigned to all the stations, based on the fact that stations are typically not capable of modifying their TP. Additionally, $X^{(m,v)}$ and $X^{(m,w)}$ correspond to the distances from the $m^{th}$ AP to the $v^{th}$ AP interferer and from the $m^{th}$ AP to the $w^{th}$ station interferer, respectively. $X^{(m,.)}$ is calculated as follows: \begin{myequation} X^{(m,.)} = \sqrt{(D_m+x_{m,.})^2 + {d_{m,r}^2 - 2(D_m+x_{m,.})d_{m,r}\cos \varsigma_{r,.}}}, \label{distance} \end{myequation} where $(.)$ refers either to the AP or the non-AP interferer, $D_m$ is the CCA threshold range of the $m^{th}$ AP, $\varsigma_{r,.}$ is the angle at the receiver between the $m^{th}$ AP and the interferer $(.)$, and $x_{m,.}$ corresponds to the distance between any $(.)$ interferer and $D_m$. The corresponding worst-case SINR $\gamma_{m,r}$ at the receiver is defined as: \begin{equation} \gamma_{m,r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta} (I_m + N_{0})}. \end{equation} Let us assume that $N_0 \ll I_m$; the equation then reduces to: \begin{equation} \gamma_{m,r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}I_m }. \label{sinr_formula} \end{equation} Substituting equations (\ref{interference}) and (\ref{distance}) in (\ref{sinr_formula}) we obtain equation (\ref{p_final}): \begin{strip} \begin{align} \gamma_{m,r}= \frac{\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}}{\sum_{v\in \mathcal{F}_m^{+}} \frac{P_{tx}^{v}}{({\sqrt{(D_m+x_{m,v})^2 + {d_{m,r}}^2 - 2(D_m+x_{m,v})d_{m,r}\cos \varsigma_{r,v}}})^\theta} + P_{tx}^{sta}\sum_{w\in\mathcal{F}_m^{-}} \frac{1}{({\sqrt{(D_m+x_{m,w})^2 + d_{m,r}^2 - 2(D_m+x_{m,w})d_{m,r}\cos \varsigma_{r,w}}})^\theta}} \label{p_final} \end{align} \end{strip} The aforementioned equation describes $\gamma_{m,r}$ as a function of $D_m$ and $d_{m,r}$. Additionally, we substitute $D_m =\left(P_{tx}^{m}/T_{cs}^{m}\right)^{1/\theta}$ in equation (\ref{p_final}), obtaining: \begin{myequation} \gamma_{m,r}=\frac{\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}}{\sum_{v\in\mathcal{F}_{m}^+}\frac{P_{tx}^{v}}{\Gamma^{(m,v)}} + P_{tx}^{sta}\sum_{w\in\mathcal{F}_m^{-}} \iota^{(m,w)} }, \end{myequation} where \[ \scalebox{.7}{$\Gamma^{(m,v)} = \left({\sqrt{\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^\frac{1}{\theta}+x_{m,v}\right]^2 + d_{m,r}^2 - 2\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^\frac{1}{\theta}+x_{m,v}\right]d_{m,r}\cos \varsigma_{r,v}}}\right)^\theta$},\] $\iota^{(m,w)} = \frac{1}{{(\sqrt{(\Omega_{sta}+x_{m,w})^2 + d_{m,r}^2 - 2(\Omega_{sta}+x_{m,w})d_{m,r}\cos \varsigma_{r,w}}})^\theta}$ and $\Omega_{sta} = \left(\frac{P_{tx}^{sta}}{T_{cs}^{sta}}\right)^\frac{1}{\theta}$. Now, we proceed to define the maximum channel capacity in terms of the TP and the Carrier Sense (CS) threshold ($T_{cs}$).
Given a certain value of SINR, the Shannon maximum capacity is expressed as: \begin{myequation} C_{m,r} = W\log_2(1 + \gamma_{m,r}), \label{capacity} \end{myequation} where $W$ is the channel bandwidth in Hz. Then, the cumulative maximum network capacity can be calculated as: \begin{myequation} C_T = \sum_{m=1}^{|\mathcal{M}|-1}\sum_{r=1}^{N} C_{m,r}. \end{myequation} \begin{figure}[h] \center \includegraphics[scale=0.50]{figures/optimal_graph.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network capacity as a function of TP and CS threshold.} \label{network_capacity} \end{figure} Figure \ref{network_capacity} shows the maximum network capacity as a function of the TP and the CS threshold. As observed, the network capacity achieves its highest values when a combination of high TP and low CS threshold is utilized. \iffalse That allows to select a small set from the action space in order to improve exploration time and consequently, convergence time.\fi Note that prior knowledge of the locations is required. \section{Proposed Multi-Agent Multi-armed bandit algorithms}\label{Section5} In this section, we present the action space, context definition and reward function for the MA-MAB algorithms utilized in this work. \subsection{Action space} \label{action_space} The action space corresponds to the number of combinations of $P_{cs}$ and $P_{tx}$, which in the context of MABs translates to the number of arms for each MAB agent. The action space is defined as: \begin{myequation} A_{cs} = \{P_{cs}^{min}, P_{cs}^{min} + \frac{P_{cs}^{max} - P_{cs}^{min}}{L_{cs}-1},..., P_{cs}^{max} \}, \end{myequation} \begin{myequation} A_{tx} = \{P_{tx}^{min}, P_{tx}^{min} + \frac{P_{tx}^{max} - P_{tx}^{min}}{L_{tx}-1},..., P_{tx}^{max}\}, \end{myequation} where $P_{cs}^{min}$, $P_{cs}^{max}$ and $P_{tx}^{min}$, $P_{tx}^{max}$ are the minimum and maximum values of the CCA threshold and TP, respectively. $L_{cs}$ and $L_{tx}$ correspond to the number of levels into which the CCA threshold and TP values are discretized, respectively. Finally, the number of arms corresponding to the action space of the $m^{th}$ agent, $K_{m}^{AP}$, is $|A_{cs}^{m}| \cdot |A_{tx}^{m}|$. \subsection{Reward function in distributed non-cooperative settings} The reward is defined following the optimization problem \ref{problem_opt}. It resembles the reward presented in \cite{Bardou2021}, which includes a linear product-based fairness term and a station starvation term \cite{app112211074,Bardou2021}, but is defined in a distributed manner. A station is considered to be in starvation when its throughput is below a predefined percentage of its theoretically achievable throughput. The reward is defined as: \begin{equation} \resizebox{0.9\columnwidth}{!}{$r_{m}^{AP} = \frac{|\Psi_m^{AP}|\prod_{s\in \Psi_m^{AP}} \frac{R_s^m}{\omega R_{s,A}^m} + |N_m^{AP} \setminus \Psi_m^{AP}|(N_m^{AP} + \prod_{s\in N_m^{AP} \setminus \Psi_m^{AP}} \frac{R_s^m}{R_{s,A}^m})}{N_m^{AP}(N_m^{AP} + 1 )}$}, \end{equation} where $\Psi_m^{AP}$ is the set of starving stations attached to the $m^{th}$ AP and $N_m^{AP}$ is the set of stations attached to the $m^{th}$ AP (we also write $N_m^{AP}$ for its cardinality). We can also observe that $r_{m}^{AP} \propto C_{m,r}$ as defined in Eq. \ref{capacity}. A minimal transcription of this reward in code is sketched below.
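As a sanity check of the reward definition above, the following Python sketch evaluates $r_{m}^{AP}$ for one AP given per-station throughputs (illustrative only; the value of $\omega$ and the input throughputs are assumptions):
\begin{verbatim}
def ap_reward(R, R_ach, omega=0.5):
    """Per-AP reward: linear product-based fairness with a
    starvation term. R and R_ach hold the measured and achievable
    throughputs of the stations attached to this AP; omega is the
    starvation fraction (0.5 is an illustrative assumption)."""
    n = len(R)
    psi = [s for s in range(n) if R[s] < omega * R_ach[s]]  # starving
    rest = [s for s in range(n) if s not in psi]            # N \ Psi
    prod_psi = 1.0
    for s in psi:
        prod_psi *= R[s] / (omega * R_ach[s])
    prod_rest = 1.0
    for s in rest:
        prod_rest *= R[s] / R_ach[s]
    return (len(psi) * prod_psi
            + len(rest) * (n + prod_rest)) / (n * (n + 1))

# Example: two healthy stations and one starving station.
print(ap_reward(R=[90.0, 80.0, 10.0], R_ach=[100.0] * 3))
\end{verbatim}
In the next subsection, we present the definition of the context considered in our MA-CMAB solution.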
\subsection{Distributed Sample Average Uncertainty-Sampling MA-CMAB } In \cite{zhu2021deep}, the authors present an efficient contextual multi-armed bandit based on a ``frequentist approach'' to compute the uncertainty, instead of using Bayesian solutions such as Thompson Sampling. The frequentist approach consists of measuring the uncertainty of the action-values based on the sample average of the rewards just computed, instead of relying on the posterior distribution given the past rewards. In this work, we present multi-agent cooperative and non-cooperative variants of the previously mentioned RL algorithm. In our problem, the context comprises only the APs' local observations: \begin{enumerate} \item Number of starving stations $|\Psi_m^{AP}|$ of the $m^{th}$ AP, i.e., stations under the fraction $\omega$ of their attainable throughput during episode $t$. \item Average RSSI $\overline{S}_m^{AP}$ of the $m^{th}$ AP during episode $t$. \item Average noise $\overline{\Upsilon}_m^{AP}$ of the $m^{th}$ AP during episode $t$. \end{enumerate} Additionally, the context is normalized as follows: \begin{myequation} \psi_m^{AP} = |\Psi_m^{AP}|/N_m^{AP}, \end{myequation} \begin{myequation} s_m^{AP} =\begin{cases} 0, & -60 \text{ dBm} \leq \overline{S}_m^{AP} \leq -50 \text{ dBm}, \\ 0.25, & -70 \text{ dBm} \leq \overline{S}_m^{AP} < -60 \text{ dBm},\\ 0.5, & -80 \text{ dBm} \leq \overline{S}_m^{AP} < -70 \text{ dBm}, \\ 0.75, & -90 \text{ dBm} \leq \overline{S}_m^{AP} < -80 \text{ dBm}, \\ 1, & \overline{S}_m^{AP} < -90 \text{ dBm}, \end{cases}\\ \end{myequation} \begin{myequation} \hat{\Upsilon}_m^{AP} = \overline{\Upsilon}_m^{AP}/100. \end{myequation} The multi-agent SAU-Sampling algorithm in its non-cooperative version (SAU-NonCoop) is described in Algorithm \ref{sau-sampling}. The algorithm starts by initializing the action-value functions $\mu(\bm{x}_m|\bm{\hat{\theta}}_{m})$ as deep neural networks and the exploration parameters $J_{m,a}^2$ and $n_{m,a}$ for each $m^{th}$ AP. $n_{m,a}$ corresponds to the number of times action $a$ was selected in the $m^{th}$ AP and $J_{m,a}^2$ is defined as an exploration bonus. In each environment step (Algorithm \ref{sau-sampling}, \texttt{line 2}), each agent observes its local context and computes the selected arm given the reward prediction. In Algorithm \ref{sau-sampling}, \texttt{line 11}, each CMAB agent updates $\bm{\hat{\theta}}_{m,a}$ using stochastic gradient descent on the loss between the predicted reward and the observed reward. Finally, the exploration parameters are updated accordingly given the prediction error, as depicted in Algorithm \ref{sau-sampling}, \texttt{line 12}. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \textbf{Initialize} network $\bm{\hat{\theta}}_{m,a}$, exploration parameters $J_{m,a}^2(t=0) = 1$ and $n_{m,a}(t=0) = 0$ for all actions $a \in K_m$.
\For{environment step $t\gets1$ \textbf{to} $T$}{ \For{agent $m$} { Observe context ${\bm{x}_m(t)} = [\psi_m^{AP}(t), s_m^{AP}(t), \hat{\Upsilon}_m^{AP}(t)]$ \\ \For{$a = 1,...,K_m$} { Calculate reward prediction $\hat{\mu}_{m,a}(t) = \mu(\bm{x}_m|\bm{\hat{\theta}}_{m,a})$ and $\tau_{m,a}^2(t) = J_{m,a}^2/n_{m,a}$\\ $\tilde{\mu}_{m,a} \sim \mathcal{N}(\hat{\mu}_{m,a},n_{m,a}^{-1}\tau_{m,a}^2)$ } Compute $a_{m}(t) = \text{\texttt{argmax}}_a\{\tilde{\mu}_{m,a}(t)\}_{a \in K_m}$ if $t > K_m$, otherwise $a_{m}(t) \sim \mathcal{U}(0,K_m)$;\\ Select action $a_m(t)$, observe reward $r_m^{AP}$;\\ Update $\bm{\hat{\theta}}_{m,a}$ using SGD with gradients $\partial l_m/\partial \theta$ where $l_m= 0.5(r_m^{AP} - \hat{\mu}_{m,a}(t))^2 $ ; \\ Update $J_{m,a}^2 \leftarrow J_{m,a}^2 + e_m^2$ using the prediction error $e_m = r_m^{AP}(t) - \hat{\mu}_{m,a}(t)$ and $n_{m,a} \leftarrow n_{m,a} + 1$; } } \caption{SAU-Sampling MA-CMAB} \medskip \label{sau-sampling} \end{algorithm} \subsection{Cooperative Sample Average Uncertainty-Sampling MA-CMAB} In this section we present a cooperative version of SAU-Sampling named SAU-Coop. Different from the non-cooperative version, the total reward $r_{m}^{C}$ considers the network's Jain's fairness index in addition to the local reward $r_m^{AP}$: \begin{myequation} r_{m}^{C} = r_{m}^{AP} + r_{\mathcal{J}}, \end{myequation} where $r_{\mathcal{J}}$, the overall network Jain's fairness index, is defined as: \begin{myequation} r_{\mathcal{J}} = \mathcal{J}(R_1,...,R_{|\mathcal{M}|}) = \frac{(\sum_{m=1}^{|\mathcal{M}|} R_m )^2}{|\mathcal{M}|\cdot\sum_{m=1}^{|\mathcal{M}|}R_m^2}, \end{myequation} where $R_m =\sum_{s=1}^{|\mathcal{S}_m|}R_s^m$ is the total throughput of all the $\mathcal{S}_m$ stations of the $m^{th}$ AP. \subsection{Reward-cooperative $\epsilon$-greedy MA-MAB } In addition to the previous cooperative algorithm, we propose a cooperative approach based on the classical $\epsilon$-greedy strategy \cite{sutton2018reinforcement} that incorporates into the action-value update a fraction of the average reward of the other agents. This algorithm is described in Algorithm \ref{egreedy-coop}. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \caption{Reward-cooperative $\epsilon$-greedy MA-MAB} \algsetup{linenosize=\small} \scriptsize \textbf{Initialize} $\epsilon_m(t=0) = \epsilon_0$, $Q_{m,a}(t=0)\leftarrow 0$, $N_{m,a}(t=0)\leftarrow 0$ and $\beta$. \For{environment step $t\gets1$ \textbf{to} $T$}{ \For{agent $m$} { Execute action $a_{m}(t)$: $a_{m}(t) =\begin{cases} \text{\texttt{argmax}}_{k=1,...,K} Q_{m,k}(t) & \text{with probability } 1 - \epsilon_{m}(t) \\ k \sim \mathcal{U}(0,K) & \text{o.w} \end{cases}$\\ Calculate the reward $r_m^{AP}(t)$ based on the feedback of the environment\\ Update $Q_{m,a}(t+1) = Q_{m,a}(t) + \frac{1}{N_{m,a}(t)}[(r_m^{AP} + \beta\cdot\frac{1}{|\mathcal{M}|-1}\sum_{m'\neq m} r_{m'}^{AP}) - Q_{m,a}(t)] $ \\ Update $N_{m,a} \leftarrow N_{m,a}(t) + 1$;\\ Update $\epsilon_{m} \leftarrow \frac{\epsilon_{m}(t)}{\sqrt{t}}$ } } \medskip \label{egreedy-coop} \end{algorithm} Finally, in the next subsection we present the details of the DTRL scheme used to improve SR adaptation in dynamic environments. \subsection{Sample Average Uncertainty-Sampling MA-CMAB based Deep Transfer Reinforcement Learning } Typically, RL agents learn their best policy based on the feedback received from the environment over a horizon of $T$ steps. However, in real-world scenarios the environment conditions can change at $T+1$ and thus, adapting to the updated environment is necessary \cite{Padakandla2021}.
In such cases, the ``outdated'' agent's policy might not be optimal to address the new conditions efficiently. For instance, a modification of the stations' distribution over the APs can cause the SR-related parameters chosen by the ``outdated'' agents' policy to degrade the network performance. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \textbf{Function} \textsc{Detect\_Singularity}($\mathcal{K}$) \tcp*[l]{returns True if an anomaly is detected in the network KPIs data $\mathcal{K}$ at time $t$, and False otherwise.} \textbf{Let} $\mathcal{L} = \{l | l \in \mathbb{N}, l>0\} $ be the set of layers of model $\bm{\hat{\theta}}_{m,a}^l $ and $\mathcal{T} \subset \mathcal{L}$ the subset of layers to be transferred. \text{\normalfont \textbf{Run} algorithm \textsc{SAU-Sampling MA-CMAB} \texttt{(Algorithm \ref{sau-sampling})}} \While{environment step $t < T$} { \eIf{$\neg$\textsc{Detect\_Singularity}} { continue; } { \text{\normalfont\textbf{Reset} exploration parameters $J_{m,a}^2, n_{m,a}$\;} \text{\normalfont \textbf{Reinitialize} weights $w$ and biases $b$ of the $l^{th}$ layer of $\bm{\hat{\theta}}_{m,a}^{l \not\in \mathcal{T}}$ via:} \\ $\nu_{l} = \left(\sqrt{|\bm{\hat{\theta}}_{m,a}^{l\not\in \mathcal{T}}|}\right)^{-1}$ \; $\bm{\hat{\theta}}_{m,a}^{l \not\in \mathcal{T}} (w,b) \rightarrow w_{l} \sim \mathcal{U}(-\nu_l, \nu_l) , b_{l} \sim \mathcal{U}(-\nu_l, \nu_l) $\; \text{\normalfont \textbf{Transfer }weights and biases via: } \\ $\bm{\hat{\theta}}_{m,a}^{l \in \mathcal{T}} (w,b) \rightarrow \bm{\hat{\theta}}_{m,a}^{l \in \mathcal{T}'} (w,b)$\; }} \caption{SAU-Sampling MA-MAB Transfer Learning} \label{transfer_algo} \end{algorithm} To address the previous situation we propose two main solutions: \textbf{1.} if the agent detects a change in the environment indicated by a singularity, it decides to correct its configuration by forgetting the policy already learnt (\textbf{forget}), or \textbf{2.} by adapting the agent's policy to the new conditions via a transfer learning technique. A singularity is defined as an anomalous behavior of the KPIs of interest after the policy of the MAB agent has converged. In this work, we do not delve into how to detect a singularity; rather, we assume the existence of an anomaly detector in our system \cite{10.1145/3444690}. In Algorithm \ref{transfer_algo}, we present the transfer learning algorithm depicting the second proposed solution. At $t=0$, each SAU-Sampling agent resets its weights and biases and starts learning as part of Algorithm \ref{sau-sampling}. At $t=S_1$, where $S_1$ corresponds to the time when an anomaly is detected, the transfer procedure is activated (Algorithm \ref{transfer_algo}, \texttt{line 7}). In our setup we transfer $l=2$ and reset $l=1$ (Algorithm \ref{transfer_algo}, \texttt{line 11}), where $l$ corresponds to the layer of the neural network utilized in the SAU-Sampling agent. However, as indicated in Algorithm \ref{transfer_algo}, \texttt{line 13}, the transfer is not constrained to one layer but applies more generally to a set of layers. The set of transferred layers is considered a hyperparameter to be tuned. The partial transfer of a model avoids negative transfer by giving the agent room to adapt to the new context, since it mitigates model overfitting. \section{Performance Evaluation} \label{Section6} \subsection{Simulation Setting}\label{AA} We consider two scenarios in our simulations.
The first one considers stationary users, while the second considers mobile users to model dynamic scenarios (see section \ref{adaptiveSR}). In addition, stations and APs are two-antenna devices supporting up to two spatial streams in transmission and reception. In this work, we assume a frequency of 5 GHz with an 80 MHz channel bandwidth in a Line of Sight (LOS) setting. The propagation loss model is the Log-Distance propagation loss model with a constant-speed propagation delay. In addition, an adaptive data rate mode is considered with UDP downlink traffic. We implement our proposed solutions using ns-3 and use OpenAI Gym to interface between ns-3 and the MA-MAB solution \cite{Gawowicz2019}. In Table \ref{q_settings} and Table \ref{net_settings} we present the learning hyperparameters and the network settings, respectively. \begin{table} [ht] \centering \caption{Learning hyperparameters} \begin{threeparttable} \resizebox{\columnwidth}{!}{ \begin{tabular}{c c} \hline \textbf{Parameter}&\textbf{Value} \\ \hline $\epsilon$-greedy MAB & { Annealing $\epsilon$: $\sqrt{T}$} \\ Thompson Sampling MAB & { Prior distribution: Beta}\\ Upper Confidence Bound MAB & {Level of exploration, $c = 1$} \\ SAU-Sampling & { Number of hidden layers, $N_h=2$ } \\ & {Number of neurons per hidden layer, $n_h=100$}\\ & {Number of inputs, $N_m=3$ and number of outputs, $N_o = K$}\\ & {Batch size, $B_s = 64$} \\ & Optimizer : {RMSProp (8e-3)} \\ & Weight decay : {5e-4} \\ & Activation function : {ReLU}\\ \hline Gym environment step time & { 0.05 s } \\ \bottomrule \end{tabular} } \end{threeparttable} \label{q_settings} \end{table} \begin{table} \caption{Network settings} \begin{center} \resizebox{\columnwidth}{!}{% \begin{tabular}{c c} \hline \textbf{Parameter}&\textbf{Value} \\ \hline Number of APs & { 6 } \\ Number of Stations & {15}\\ Number of antennas (AP) & {2} \\ Max Supported Tx Spatial Streams & { 2 } \\ Max Supported Rx Spatial Streams & {2} \\ Channel Number \footnotemark & { 1 } \\ Propagation Loss Model & { Log Distance Propagation Loss Model } \\ Wi-Fi standard & { 802.11 ax } \\ Frequency & { 5 GHz } \\ Channel Bandwidth & {80 MHz} \\ Traffic Model - UDP application & { $[0.011, 0.056, 0.11 \text{\cite{WILHELMI201926}}, 0.16]$ Gbps } \\ Maximum $\&$ minimum Transmission Power & $P_{tx}^{max}=21.0$ dBm $\&$ $P_{tx}^{min}=1.0$ dBm \\ Maximum $\&$ minimum CCA threshold & $P_{cs}^{max}=-62.0$ dBm $\&$ $P_{cs}^{min}=-82.0$ dBm\\ & $K_{cs} = 1$ and $K_{tx}= 1$\\ \hline \end{tabular} } \label{net_settings} \end{center} \end{table} \subsection{Reduced set of actions vs. all actions}\label{resultsA} In subsection \ref{worst-case} we presented a mathematical analysis to obtain a reduced set of optimal actions with the goal of decreasing the exploration time and consequently improving the convergence time. As concluded from figure \ref{network_capacity}, high TP and low CCA threshold values maximize the network capacity in the simulation scenario under study. Therefore, we selected a fixed value of the CCA threshold ($P_{cs}=-82.0$ dBm) and a reduced set of TP values $P_{tx} \in \{15,16,17,18,19,20,21\}$ dBm, and observed the performance against the full set of possible actions described in subsection \ref{action_space}. \begin{figure}[t!] \center \includegraphics[scale=0.38]{figures/optimal_vs_all_graphs_all_techniques2.pdf} \caption{Convergence performance of $\epsilon$-greedy, UCB and Thompson Sampling MA-MABs under the non-cooperative and distributed regime.
The subscript ``\textbf{all}" indicates the usage of the full set of actions.} \label{optimal_vs_all} \end{figure} In figure \ref{optimal_vs_all}, we present the convergence performance of three MA-MAB algorithms under UDP traffic of 0.056 Gbps in non-cooperative and cooperative settings (indicated with the subscripts ``\textbf{non-coop}" and ``\textbf{coop}", respectively). The algorithms correspond to the $\epsilon$-greedy ($MAB^{eg}$), UCB ($MAB^{ucb}$) and Thompson Sampling ($MAB^{thom}$) MA-MABs. For each algorithm, we plotted three convergence graphs in terms of fairness, cumulative throughput and station starvation, representing the behavior when the reduced set of actions and the full action set (indicated with the subscript ``\textbf{all}") are used, respectively. For the case of the set of optimal actions, we can observe that the performance is similar, with a slight improvement when utilizing the Thompson Sampling MAB. On the other hand, when utilizing the full action set, the behavior shows a noticeable improvement with the $\epsilon$-greedy MAB algorithm with respect to the others. In \cite{NEURIPS2020_12d16adf}, the authors study the unreasonable behavior of greedy algorithms when $K$ is sufficiently large. They concluded that when $K$ increases above 27 arms, intelligent algorithms are greatly affected by the exploration stage. Those results validate ours, based on the fact that $K =|A_{cs}| \cdot |A_{tx}| = 21^2$. Finally, the impact of utilizing the reduced set of optimal actions on convergence time and KPI maximization can be noted. The set of optimal actions reduces the station starvation by an average of two starving users when compared with the best performer, $MAB_{nocoop_{all}}^{eg}$. However, in order to obtain such a set, prior knowledge of the stations' and APs' geographical locations is required. In the following section we compare the results of the $\epsilon$-greedy MA-MAB and a default typical configuration without machine learning. \footnotetext{We assume all APs are configured to use 1 channel out of the available 11. This is a practical selection to create dense deployment scenarios.} \begin{figure}[h] \center \includegraphics[scale=0.37]{figures/8262egnoncoop2.pdf} \caption{Performance results: $\epsilon$-greedy MAB w/ optimal set vs. default configuration with $P_{cs} \in \{-62.0, -82.0\} $ dBm. } \label{mabegreedy} \end{figure} \subsection{Distributed $\epsilon$-greedy MA-MAB vs. default configuration performance results}\label{dist_legacy} In this subsection, we present the comparative results and the advantages of utilizing a distributed intelligent solution such as the $\epsilon$-greedy MAB over the default CCA threshold and TP configuration with no ML. In figure \ref{mabegreedy}, we show the performance under four different UDP data traffic regimes: $\{0.011, 0.056, 0.11, 0.16\}$ Gbps. We considered two typical configurations of the CCA threshold: $-82.0$ dBm and $-62.0$ dBm. In both cases, the APs' TP is $16.0$ dBm. It can be observed that the $\epsilon$-greedy MAB achieves a significant improvement over the default configuration ($P_{cs}=-82.0$ dBm), with an average gain over all the considered traffic loads of $44.4\%$ in terms of cumulative throughput, $70.9\%$ in terms of station starvation, $12.2\%$ in terms of fairness, $138.0\%$ in terms of latency and $94.5\%$ in terms of packet loss ratio (PLR), respectively.
Additionally, an average gain over the default configuration ($P_{cs}=-62.0$ dBm) across all the considered traffic loads of $53.9\%$ in terms of cumulative throughput, $138.4\%$ in terms of station starvation, $43.0\%$ in terms of fairness, $84.0\%$ in terms of latency and $105.4\%$ in terms of packet loss ratio (PLR) is shown, respectively. \begin{figure}[h] \center \includegraphics[scale=0.38]{figures/coopvsnoncoop2.pdf} \caption{Performance results of the cooperative algorithms: $\epsilon$-greedy MA-MAB (Rew-Coop), SAU-Sampling MA-CMAB (SAU-Coop) and the non-cooperative versions of the previous algorithms, SAU-NonCoop and Eg-NonCoop, under the full set of actions.} \label{sau_results} \end{figure} \vspace{0em} \subsection{Cooperation vs. non-cooperation performance results}\label{AA} In the past two subsections we have shown the results considering the set of optimal actions. In this subsection we assume that station and AP location information is not available and thus, we must rely on the full set of actions. Consequently, we investigate whether cooperation can improve the KPIs of interest by utilizing the cooperative proposal of the $\epsilon$-greedy MAB algorithm (Rew-Coop) and the contextual SAU-Sampling algorithm (SAU-Coop). Additionally, we present two non-cooperative algorithms: SAU-NonCoop, which corresponds to the non-cooperative version of SAU-Sampling, and Eg-NonCoop, which refers to the $\epsilon$-greedy MAB algorithm utilized in the previous section. As observed in figure \ref{sau_results}, simulations show that SAU-Coop improves over Eg-NonCoop across all the data traffic loads, with average gains of $14.7\%$ in terms of cumulative throughput, $21.3\%$ in terms of station starvation, $4.64\%$ in terms of network fairness, $36.7\%$ in terms of latency and $32.5\%$ in terms of PLR. Similarly, the non-cooperative version of SAU-Sampling presents a better performance than Eg-NonCoop, indicating that context is beneficial for solving the current optimization problem. Additionally, SAU-Coop presents a better performance than its non-cooperative version, especially when the data rate increases up to 0.16 Gbps, where a gain of $14.1\%$ in terms of cumulative throughput, $32.1\%$ in terms of station starvation, $18.2\%$ in terms of network fairness, $16.5\%$ in terms of latency and $4\%$ in terms of PLR is observed. To sum up, cooperative approaches contribute positively to the improvement of SR in WiFi over non-cooperative approaches. In addition, in cases where cooperation is not possible, it is advisable to utilize contextual multi-armed bandits over stateless multi-armed bandits. \subsection{Deep Transfer Learning in Adaptive SR in Dynamic scenarios results}\label{adaptiveSR} In order to model a dynamic scenario, we design a simulation where the users move across the simulation area and attach to the AP that offers the best signal quality. Consequently, the user load in each AP will change and, with it, the dynamics of the environment. We model this scenario with 3 APs and 15 users, where the load changes twice throughout the simulation. As depicted in Table \ref{dynamic_table}, the user load of the $m^{th}$ AP, denoted as $C_m$, changes at two instants in time: 3 and 6 minutes, respectively.
\begin{table}[ht] \caption{Dynamic scenario load distribution} \begin{center} \centering \begin{tabular}{c|cccccc|} \cline{2-7} & \multicolumn{2}{c|}{\bfseries $\bm{t=0}$ min} & \multicolumn{2}{c|}{\bfseries $\bm{t=3}$ min} & \multicolumn{2}{c|}{\bfseries $\bm{t=6}$ min} \\ \hline \multicolumn{1}{|c|}{$C_1$} & \multicolumn{2}{c|}{8} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{2} \\ \hline \multicolumn{1}{|c|}{$C_2$} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{2} \\ \hline \multicolumn{1}{|c|}{$C_3$} & \multicolumn{2}{c|}{2} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{11} \\ \hline \end{tabular} \end{center} \label{dynamic_table} \end{table} \vspace{0.0em} \vspace{0em} \begin{figure}[h] \center \includegraphics[scale=0.4]{figures/comparison_transfer_notransfer_forget2.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network response in terms of fairness and station starvation when utilizing the forget, full transfer and transfer strategies. } \label{transfer_forget_results} \end{figure} In figure \ref{transfer_forget_results} we present the network behavior in terms of fairness and station starvation under the scenario depicted by Table \ref{dynamic_table}. In addition to the two methods previously mentioned, \textbf{forget} and \textbf{transfer}, we present the performance of a third approach called \textbf{full transfer}, where the whole model is transferred. During the first interval ($0-3$ min) the performance is similar for the three methods, as expected. However, after the two changes in the network load, two singularities are visible in the fairness and starvation graphs. More specifically, the \textbf{forget} method experiences the worst behavior, with a $54.3\%$ and $11.7\%$ decrease when compared with the transfer method in terms of station starvation and fairness, respectively. The \textbf{forget} method shows peaks at the moments of the singularities, representing a service drop for $60\%$ of the users; this behavior is inherently related to the agents' starting to learn again from scratch and cannot be avoided. From the quality of service perspective, a disturbance such as the one observed is highly undesirable. Meanwhile, the \textbf{full transfer} method underperforms the \textbf{transfer} method, with an $18.7\%$ and $6\%$ decrease in the previously mentioned KPIs. Interestingly, it can be observed that in the second interval under study ($3-6$ min) the \textbf{forget} method is able to outperform the \textbf{full transfer} method by the end of the period. This is due to negative transfer as a result of transferring the whole model. As observed, partial transfer learning not only considerably reduces the performance peaks of the \textbf{forget} method but also achieves better adaptation than the \textbf{full transfer} method. In all methods the cumulative throughput is similar; however, as observed in figure \ref{transfer_forget_results}, station starvation and, consequently, fairness are affected. \section{Conclusion} \label{Section8} In this paper, we propose Machine Learning (ML)-based solutions to the Spatial Reuse (SR) problem in distributed Wi-Fi 802.11ax scenarios. We presented a solution to reduce the huge action space given the possible Transmission Power (TP) and Clear Channel Assessment (CCA) threshold values per Access Point (AP), and analysed its impact on diverse well-known distributed Multi-Agent Multi-Armed Bandit (MA-MAB) implementations.
In distributed scenarios, we showed that the $\epsilon$-greedy MA-MAB significantly improves the performance over typical configurations when the optimal actions are known. Moreover, the Contextual Multi-Agent Multi-Armed Bandit (MA-CMAB) named SAU-Sampling in the cooperative setting contributes positively to an increase in throughput and fairness and a reduction of the PLR when compared with non-cooperative approaches. Under dynamic scenarios, transfer learning enables the SAU-Sampling algorithm to overcome the service drops that affect at least $60\%$ of the users when utilizing the forget method. Additionally, we found that partial transfer learning offers better results than the full transfer method. To conclude, the utilization of the cooperative version of the MA-CMAB to improve SR in WiFi scenarios is preferable, since it outperforms the presented ML-based solutions and prevents service drops in dynamic environments via transfer learning. \section{Acknowledgment }\label{Section9} This research is supported by the Mitacs Accelerate Program and NetExperience Inc.
Recently, AI-based wireless networks have been of remarkable interest among researchers both in WiFi domain \cite{szott2021wifi}, and 5G domain \cite{Elsayed2019} however the proposed solutions usually require complete availability of the data. In reality, data access is not always feasible due to privacy restrictions. Recent wireless network architectures have started to shift to a more open and flexible design. In 5G networks as well as the O-RAN Alliance architecture support the utilization of artificial intelligence to orchestrate main network functions \cite{iturria}. In the context of Wi-Fi, a novel project named OpenWiFi\cite{TIP2022} released by the Telcom Infra Project intends to disaggregate the Wi-Fi technology stack by utilizing open source software for the cloud controller and AP firmware operating system. These paradigm changes allow for the development of many applications in the area of ML and more specifically in Reinforcement Learning (RL) applications to become reality. In this paper\footnote{The present work has been submitted to IEEE}, we intend to optimize TP and CCA threshold to improve SR and overall network KPIs.To do so, we formulate the TP and CCA configuration problem with an objective of maximizing product network fairness and minimizing station starvation. We model the SR problem as a distributed multi-agent decision making problem and use a Multi-Agent Multi-Armed Bandit (MA-MAB) approach to solve it. The contributions of this work, different from the ones found in the literature, can be summarized in the following points: \begin{enumerate} \item We propose a solution for reducing the inherent huge action space given the possible combinations of TP and CCA threshold values per AP. We derive our solution via worst-case interference analysis. \item We analyse the performance of the network KPIs of well-known distributed MA-MAB implementations such as $\epsilon$-greedy, UCB and Thompson on the selection of the TP and CCA values in cooperative and non-cooperative settings. \item We introduce a contextual MA-MAB (MA-CMAB) named Sample Average Uncertainty-Sampling (SAU) in cooperative and non-cooperative settings. SAU-MAB is based on a deep Contextual MAB. \item We propose for the first time, to the best of our knowledge, a deep transfer learning solution to adapt efficiently TP and CCA parameters in dynamic scenarios. \end{enumerate} With these contributions, our simulation results show that the $\epsilon$-greedy MAB solution improves the throughput at least 44.4\%, provides improvement of 12.2\% in terms of fairness and 94.5\% in terms of Packet Loss Ratio (PLR) over typical configurations when a reduced set of actions is known. Additionally, we show that the SAU-Coop algorithm improves the throughput by 14.7\% and PLR 32.5\% when compared with non cooperative approaches with full set of actions. Moreover, our proposed transfer learning based approach reduces the service drops by at least 60\%. The rest of the paper is organized as follows. Section \ref{Section2} presents a summary of recent work that uses Machine Learning to improve SR in Wi-Fi. Section \ref{Section3} covers the basics on Multi-Armed Bandits including deep contextual bandits and deep transfer reinforcement learning. In \ref{Section4} we present our system model altogether with an analysis to reduce the action space via worst-case interference. Section \ref{Section5} presents the proposed schemes and the results are discussed in section \ref{Section6}. 
Finally, Section \ref{Section8} concludes the paper. \section{Related work} \label{Section2} Reinforcement learning-based spatial reuse has been of interest in recent literature. The studies have focused on distributed solutions with no cooperation or on centralized schemes of multi-armed bandits. These studies are summarized below. In \cite{Wilhelmi2019}, the authors present a comparison among well-known MABs such as $\epsilon$-greedy, UCB, Exp3 and Thompson Sampling in the context of decentralized SR via Dynamic Channel Allocation (DCA) and Transmission Power Control (TPC) in WLANs. The results showed that ``selfish learning'' in a sequential manner presents better performance than ``concurrent learning'' among the agents. Additionally, \cite{Bardou2021} presents a centralized MAB consisting of an optimizer based on a modified Thompson Sampling (TS) algorithm and a sampler based on a Gaussian Mixture (GM) algorithm to improve SR in 802.11ax Wi-Fi. More specifically, the authors propose to deal with the large action space comprised of the TP and the Overlapping BSS/Preamble-Detection (OBSS/PD) threshold by utilizing a MAB variant called Infinitely Many-Armed Bandit (IMAB). Furthermore, a distributed solution based on Bayesian optimization of Gaussian processes to improve SR is proposed in \cite{bardou2022inspire}. Other solutions that are not related to reinforcement learning can be found in the literature with the aim of improving SR in WLANs. For instance, in \cite{9417353} the authors propose a distributed algorithm where the APs decide their Transmission Power based on their RSSI. Moreover, in \cite{app112211074} the authors present an algorithm to improve SR by utilizing diverse metrics such as SINR, proximity information, RSSI and BSS color, and compare it with existing legacy algorithms. The ultimate goal of that algorithm is the selection of the channel state (IDLE or BUSY) at the moment of an incoming frame given the previous metrics. Finally, the authors in \cite{wilhelmi2022federated} presented a supervised federated learning approach for SR optimization. In all the above works, the authors employ either centralized or decentralized schemes with no cooperation to address SR optimization in Wi-Fi. In this work, we propose to address this via a coordination-based MA-MAB. In addition, we tackle some of the issues previously encountered in other works, such as the size of the action space due to the set of possible TP and CCA values. Finally, to the best of our knowledge, we are the first to propose addressing SR adaptation in dynamic environments utilizing deep transfer learning. \section{Background } \label{Section3} In this section, we present a background on Multi-Armed Bandits, including $\epsilon$-greedy, Upper Confidence Bound and Thompson Sampling bandits, and an introduction to contextual MABs with a focus on a neural network-based contextual bandit. Additionally, we extend MABs to the multi-agent setting and we conclude with a background on deep transfer reinforcement learning. Multi-Armed Bandits (MABs) are a widely used RL approach that tackles the exploration-exploitation trade-off problem. Their implementation is usually simpler when compared with full RL off-policy or on-policy algorithms. However, this simplicity often comes at the cost of obtaining suboptimal solutions \cite{Bouneffouf2020}.
The basic model of MABs corresponds to the stochastic bandit, where the agent has $K$ possible actions to choose from, called arms, and receives a certain reward $R$ as a consequence of pulling the $j^{th}$ arm over $T$ environment steps. The rewards can be modeled as independent and identically distributed (i.i.d.), adversarial, constrained adversarial or random-process rewards \cite{Slivkins2019}. From the four models previously mentioned, two are more commonly found in the literature: the i.i.d. and the adversarial models. In the i.i.d. model, each pulled arm's reward is drawn independently from a fixed but unknown distribution $D_j$ with an unknown mean $\mu_j^*$. On the other hand, in the adversarial model each pulled arm's reward is randomly sampled from an adversary or alien to the agent (such as the environment) and not necessarily sampled from any distribution \cite{zhou2015survey}. The performance of MABs is measured in terms of the cumulative regret $R_T$, or total expected regret over the $T$ steps, defined as: \begin{myequation} R_T = \sum_{t=1}^{T} \mathbb{E}\left[\max_j\mu_j^* - \mu_{j_t}^*\right], \end{myequation} where $j_t$ denotes the arm pulled at step $t$. The utmost goal of the agent is to minimize $R_T$ over the $T$ steps such that $\lim_{T \to \infty} R_T/T = 0$, which means the agent will identify the action with the highest reward in such a limit. \subsection{$\epsilon$-greedy, Upper-Confidence-Bound and Thompson Sampling MAB} The $\epsilon$-greedy MAB is one of the simplest MABs and, as the name suggests, it is based on the $\epsilon$-greedy policy. In this method, the agent greedily selects the best arm most of the time and, once in a while, with a predefined small probability $\epsilon$, it selects a random arm \cite{sutton2018reinforcement}. The UCB MAB tackles some of the disadvantages of the $\epsilon$-greedy policy at the moment of selecting non-greedy arms. Instead of drawing an arm randomly, the UCB policy measures how close to optimal the non-greedy arms potentially are. In addition, it takes into consideration the rewards' uncertainty in the selection process. The selected arm is obtained by drawing the action from $\text{\texttt{argmax}}_a\left[Q_{t}(a) + c\sqrt{\ln{t}/N_{t}(a)}\right]$, where $N_{t}(a)$ corresponds to the number of times that action $a$ via the $j^{th}$ arm has been chosen and $Q_{t}(a)$ is the Q-value of action $a$ \cite{sutton2018reinforcement,Agrawal1995}. Finally, the Thompson Sampling MAB's action selection is based on the Thompson Sampling algorithm, as the name indicates. Thompson Sampling, or posterior sampling, is a Bayesian algorithm that constantly constructs and updates the distribution of the observed rewards given a previously selected action. This allows the MAB to select arms based on the probability of how optimal the chosen arm is. The parameters of the distribution are updated depending on the selection of the distribution class \cite{Russo2018}. A compact sketch of these three selection rules is given below. \subsection{Deep Contextual Multi-Armed Bandits} Contextual Multi-Armed Bandits (CMABs) are a variant of MABs that, before selecting an arm, observe a series of features commonly named the context \cite{Bouneffouf2020}. Different from the stateless MAB, a CMAB is expected to relate the observed context with the feedback or reward gathered from the environment in $T$ episodes and consequently predict the best arm given the received features \cite{zhou2015survey}.
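Before discussing concrete CMAB designs, the following minimal Python sketch (our own illustration, not part of any implementation referenced above) instantiates the three stateless selection rules just described on a toy i.i.d. Bernoulli bandit. The arm means and the horizon are arbitrary choices, and the annealing schedule $\epsilon_t=1/\sqrt{t}$ mirrors the hyperparameters listed later in Table \ref{q_settings}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([0.2, 0.5, 0.7])   # unknown Bernoulli arm means (illustrative)
K, T = len(mu_true), 5000

# --- epsilon-greedy with annealing epsilon_t = 1/sqrt(t) ---
Q, N = np.zeros(K), np.zeros(K)
for t in range(1, T + 1):
    eps = 1.0 / np.sqrt(t)
    a = rng.integers(K) if rng.random() < eps else int(np.argmax(Q))
    r = float(rng.random() < mu_true[a])      # i.i.d. Bernoulli reward
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]                 # incremental sample-average update

# --- UCB selection, argmax_a [Q(a) + c*sqrt(ln t / N(a))], given counts N ---
c = 1.0
a_ucb = int(np.argmax(Q + c * np.sqrt(np.log(T) / np.maximum(N, 1))))

# --- Thompson Sampling with a Beta prior over Bernoulli means ---
alpha, beta = np.ones(K), np.ones(K)          # updated as alpha += r, beta += 1 - r
a_ts = int(np.argmax(rng.beta(alpha, beta)))

# Empirical counterpart of the cumulative regret R_T for the epsilon-greedy run:
regret = T * mu_true.max() - float((Q * N).sum())
\end{verbatim}
Note that, for the $\epsilon$-greedy agent, the total collected reward is simply $\sum_a Q(a)N(a)$, so the last line is the empirical analogue of the regret $R_T$ defined above.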
Diverse CMABs have been proposed throughout the literature, such as LinUCB, Neural Bandit, Contextual Thompson Sampling and Active Thompson Sampling \cite{Bouneffouf2020}. More recently, a deep neural contextual bandit named SAU-Sampling has been presented in \cite{zhu2021deep}, where the context is related to the rewards using neural networks. The details of SAU-Sampling will be discussed in the following sections. \subsection{Multi-Agent Multi-Armed Bandits (MA-MABs)} Multi-Agent Multi-Armed Bandits are the multi-agent variant of MABs in which $N$ agents pull their $j^{th}$ arm and each $m^{th}$ agent receives a reward drawn from its distribution $D_{m,j}$ with an unknown mean $\mu_{m,j}^*$ \cite{NEURIPS2021_c96ebeee}. MA-MABs can be modeled as centralized or distributed. In centralized settings the agents' actions are taken by a centralized controller, while in distributed settings each agent independently chooses its own actions. Distributed decision-making settings scale more effectively \cite{Landgren2019} and naturally deal with large sets of $K$ arms, whereas centralized settings suffer from an explosion in the cardinality of the joint arm space. Finally, the total regret can be defined as: \begin{myequation} R_T = \sum_{t=1}^{T}\sum_{m=1}^{N} \mathbb{E}\left[\max_j\mu_{m,j}^* - \mu_{m,j_{m,t}}^*\right], \end{myequation} where $j_{m,t}$ denotes the arm pulled by the $m^{th}$ agent at step $t$. In this work, we consider two main approaches: distributed non-cooperative and cooperative MA-MABs with adversarial rewards. \subsection{Deep Transfer Reinforcement Learning} Transfer learning, or knowledge transfer, techniques improve learning time efficiency by utilizing prior knowledge. Typically, this is done by extracting the knowledge from one or several source tasks and then applying such knowledge to a target task \cite{Pan10}. If the tasks are related in nature and the target task benefits positively from the knowledge acquired from the source, then it is called inductive transfer learning \cite{Scott2018}. This type of learning is not uncommon and is used by the human brain on a daily basis. However, a phenomenon called negative transfer can occur if, after knowledge transfer, the target task performance is negatively affected \cite{Zhuang2021}. In the realm of transfer learning we can find Deep Transfer Learning (DTL). DTL is a subset of transfer learning that studies how to utilize knowledge in deep neural networks. In the context of classification/prediction tasks, a large amount of data is required to properly train the model of interest \cite{Vu2020}. In many practical applications where training time is essential to respond to new domains \cite{Elsayed2021}, retraining with a large amount of data is not always feasible and possibly catastrophic in terms of performance. ``What to transfer'' corresponds to one of the main research topics in transfer learning. Specifically, in the case of deep transfer learning, four categories have been broadly identified: instance-based transfer, where data instances from a source task are utilized; mapping-based transfer, where a mapping of two tasks is used on a new target task; network-based transfer, where the pre-trained network model is transferred to the target task; and adversarial-based transfer, where an adversarial model is employed to find which features from diverse source tasks can be transferred to the target task \cite{Tan2018}. In this work, we utilize the DTL form called network-based transfer learning to efficiently adapt the TP and CCA parameters in dynamic scenarios, as sketched below.
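To make the network-based transfer concrete, the following numpy sketch (ours; the layer names and sizes are hypothetical, loosely following the two-hidden-layer architecture in Table \ref{q_settings}) copies selected hidden layers from a source policy into a target policy and reinitializes the remaining layers with the $\mathcal{U}(-\nu_l,\nu_l)$ rule that also appears later in Algorithm \ref{transfer_algo}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out):
    # Uniform reinitialization U(-nu, nu) with nu = 1/sqrt(fan_in),
    # the same rule used in the transfer algorithm of Section V.
    nu = 1.0 / np.sqrt(fan_in)
    return {"w": rng.uniform(-nu, nu, (fan_in, fan_out)),
            "b": rng.uniform(-nu, nu, fan_out)}

def transfer_policy(source, transfer_layers):
    # Network-based transfer: copy the chosen hidden layers of the source
    # policy; every other layer of the target is reinitialized from scratch.
    target = {}
    for name, layer in source.items():
        if name in transfer_layers:
            target[name] = {"w": layer["w"].copy(), "b": layer["b"].copy()}
        else:
            fan_in, fan_out = layer["w"].shape
            target[name] = init_layer(fan_in, fan_out)
    return target

# Hypothetical policy: 3 context inputs, two hidden layers of 100 neurons,
# and an illustrative output size of 21 arms.
source_policy = {"h1": init_layer(3, 100),
                 "h2": init_layer(100, 100),
                 "out": init_layer(100, 21)}
target_policy = transfer_policy(source_policy, transfer_layers={"h2"})
\end{verbatim}
Transferring only a subset of the layers (here the second hidden layer) corresponds to the partial policy transfer evaluated in Section \ref{adaptiveSR}, where it mitigates negative transfer.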
An example of a network-based transfer learning technique is presented in Fig. \ref{transfer_network}. Such a technique is utilized in deep transfer reinforcement learning as part of a transfer learning type called policy transfer \cite{zhu2020transfer}. In particular, policy transfer takes a set of source policies $\pi_{S_1}, ..., \pi_{S_K}$ that are trained on a set of source tasks and uses them in a target policy $\pi_{T}$ in a way that is able to leverage the former knowledge from the source policies to learn its own. More specifically, the weights and biases that comprise each of the hidden layers of the source policies are the elements transferred to the target policies. Note that in practice policies are modeled as neural networks. \begin{figure}[h] \center \includegraphics[scale=0.32]{figures/transfer_learning.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network-based transfer learning: the source task's hidden neural network layers are reutilized in the target network.} \label{transfer_network} \end{figure} In this paper, we take advantage of the design of the contextual multi-armed bandit presented in \cite{zhu2021deep} and apply policy transfer to improve the agents' SR adaptability in dynamic environments. The results and observations of applying DTRL are discussed in Section \ref{adaptiveSR}. In the next section, we will discuss the details of the system model and present an analysis on reducing the cardinality of the action space in the proposed SR problem formulation. \section{System model and Problem Formulation} \label{Section4} \begin{table} \caption{Notations} \centering \label{param-def} \begin{tabular}{p{1.74cm}|p{6.24cm}} \textbf{Notation} & \textbf{Definition}\\ \hline $s$ and $\mathcal{S}$ & Index and set of stations, \\ $m$ and $\mathcal{M}$ & Index and set of APs, \\ $x^{|\mathcal{S}|}$ and $c^{|\mathcal{M}|}$ & Stations' positions and APs' positions, \\ \hline $P_{cs}^{m}$ & CCA threshold of the $m^{th}$ AP, \\ $P_{tx}^{m}$ & Transmission Power of the $m^{th}$ AP, \\ $R_{s}^{m}$ & Throughput of the $s^{th}$ STA of the $m^{th}$ AP, \\ $R_{s,A}^{m}$ & Achievable throughput of the $s^{th}$ STA of the $m^{th}$ AP, \\ $D_{s}^{m}$ & Adaptive data link rate of the $s^{th}$ STA of the $m^{th}$ AP, \\ \hline $P_{IDLE}^{m}$ & Probability that a STA is idle in a BSS, \\ $P_{SUCC,s}^{m}$ & Probability of a successful transmission by the $s^{th}$ STA to the $m^{th}$ AP,\\ $\phi_s^m $ & Probability that the $s^{th}$ STA is transmitting to the $m^{th}$ AP, \\ $\xi_{CCA}$ & Binary function, $\xi_{CCA} = 1$ if the signal is below the CCA threshold $P_{cs}$, \\ $\xi_{ED}$ & Binary function, $\xi_{ED} = 1$ if the signal is below the Energy Detection (ED) threshold $P_{ed}$, \\ $\xi_{STA}$ & Binary function, $\xi_{STA} = 1$ if a STA's throughput is below the fraction $\omega$ of its achievable throughput, \\ $E(T_{g,s}^m)$ and $E(I_{g,s}^m)$ & Expected length of a general time-slot and expected information transmitted by the $s^{th}$ STA of the $m^{th}$ AP, \\ $T_{TXOP}$ and $T_{EDCA}$ & Packet transmission duration and time required for a successful Enhanced Distributed Channel Access (EDCA) transmission, \\ $\Bar{P}^{fair}$ and $\Bar{U}$ & Average linear product-based network fairness and average station starvation, \\ $\omega$, $g_s^m$ and $\sigma^2$ & Fraction of $R_{s,A}^{m}$ below which STAs are considered to be in starvation, the channel power gain and the noise power.
\\ \hline $P_{tx}^m$ and $P_{rx}^r$ & The transmission power at the $m^{th}$ transmitter (AP) and the received signal strength at the $r^{th}$ receiver, \\ $d_{m,r}$ and $\theta$ & Distance between the $m^{th}$ transmitter and the $r^{th}$ receiver and path loss exponent, \\ $\mathcal{F}_m^{+}$ and $\mathcal{F}_m^{-}$ & Subsets of AP interferers and non-AP interferers, \\ $\gamma_{m,r}$, $C_{m,r}$ and $C_T$ & Worst-case SINR and Shannon's maximum capacity of the $m^{th}$ transmitter and $r^{th}$ receiver pair, and cumulative maximum network capacity. \\ \hline \end{tabular} \label{notations} \end{table} In this work, we consider an infrastructure-mode Wi-Fi 802.11ax network $\mathcal{N}$ with $N = |\mathcal{S}| + |\mathcal{M}|$ nodes, where $\mathcal{S}$ is the set of stations with positions $\{\bm{x}^1, \bm{x}^2,..., \bm{x}^{|\mathcal{S}|}\} \in \mathbb{R}^2$ and $\mathcal{M}$ is the set of APs with positions $\{\bm{c}^1, \bm{c}^2,..., \bm{c}^{|\mathcal{M}|}\} \in \mathbb{R}^2$. We assume that the $|\mathcal{M}|$ AP positions correspond to cluster centers and that the stations attach to their closest AP. The list of notations utilized in this work can be found in Table \ref{notations}. In this paper, we improve SR via maximization of the linear product-based fairness and minimization of the number of stations under starvation by configuring the TP and CCA parameters. \begin{subequations}\label{opt-Verbal-CCmanagement} \begin{align} \label{opt-Verbal} & \textbf{Max} && \begin{pmatrix} \text{fairness} \\ \text{avg. station starvation complement} \end{pmatrix} \\ \label{opt-Verbal2} & \textbf{s.t.} && \text{Throughput} \\ &\textbf{var.} && \text{Transmission power and CCA threshold selection} \end{align} \end{subequations} Let us define the probability of a STA being idle in a BSS as: \begin{align} P_{IDLE}^{m} = \prod_{s \in \mathcal{S}} \phi_s^{m'} &&\forall m\in \mathcal{M}, \end{align} where $\phi_s^m \in [0,1]$ is the probability of the $s^{th}$ STA transmitting to the $m^{th}$ AP. In addition, we define the probability that a STA successfully transmits a packet as: \begin{align} P_{SUCC,s}^{m} = \phi_s^m\xi_{CCA}^{m}(\cdot)\xi_{ED}^{m}(\cdot)\prod_{s'\in\mathcal{S}, s'\neq s}\phi_{s'}^{m'} &&\forall m\in \mathcal{M}, \end{align} where $\xi_{CCA}(\cdot) = 1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the CCA threshold ($P_{cs}$), and zero otherwise. Similarly, $\xi_{ED}(\cdot) = 1$ if the sensed signal of a packet sent by the $s^{th}$ STA is below the Energy Detection (ED) threshold ($P_{ed}$), and zero otherwise. Additionally, we consider $P_{cs} = P_{ed}$ to simplify our analysis. As indicated by \cite{Derakhshani2018}, the expected length of the general time-slot $E(T_{g,s}^m)$ and the expected information transmitted by the $s^{th}$ STA to the $m^{th}$ AP $E(I_{g,s}^m)$ can be expressed as: \begin{align} E(T_{g,s}^m) = \delta P_{IDLE}^{m} + P_{IDLE}^{m'}T_{EDCA} &&\forall m\in \mathcal{M}, \end{align} \begin{align} E(I_{g,s}^m) = P_{SUCC,s}^{m}D_s^m T_{TXOP} &&\forall m\in \mathcal{M}, s\in \mathcal{S}, \end{align} where $D_s^m$ corresponds to the link data rate, $T_{EDCA}$ corresponds to the time required for a successful Enhanced Distributed Channel Access (EDCA) transmission, $T_{TXOP}$ is the transmission duration and $\delta$ is the duration of an idle time slot. The link data rate adapts depending on the SNR \cite{Holland2001} and is mapped based on SNR/BER curves \cite{Riley2010}.
The received SNR can be defined as $P_{tx}^m g_s^m/\sigma^2$, where $P_{tx}^m$ is the transmission power, $g_s^m$ the channel power gain and $\sigma^2$ the noise power. Finally, the throughput of the $s^{th}$ station attached to the $m^{th}$ AP can be defined as: \begin{align} \label{thr_eq} R_s^m = \frac{E(I_{g,s}^m)}{E(T_{g,s}^m)} = \frac{P_{SUCC,s}^{m} D_s^m T_{TXOP}}{P_{IDLE}^{m}\delta + P_{IDLE}^{m'} T_{EDCA} }. \end{align} Additionally, let us define the average linear product-based network fairness and the average station starvation in a distributed setting as: \begin{align} \Bar{P}^{fair}(t) = \frac{1}{|\mathcal{M}|}\sum_{m \in \mathcal{M}}\prod_{s \in \mathcal{S}} \frac{R_s^m}{R_{s,A}^m}, \end{align} \begin{align} \Bar{U}(t) = \frac{1}{|\mathcal{M}|}\sum_{m \in \mathcal{M}}\frac{1}{|\mathcal{S}|}\sum_{s \in \mathcal{S}} \xi_{STA}(R_s^m < \omega R_{s,A}^m), \end{align} where $R_{s,A}^m$ is the achievable throughput of the $s^{th}$ station attached to the $m^{th}$ AP. Here, $\xi_{STA} = 1$ if the $s^{th}$ station's throughput is below a fraction $\omega \in (0,1]$ of its achievable throughput, in which case the station is considered to be in starvation; otherwise it is zero. The considered problem is a multi-objective problem and can be addressed with the weighted-sum approach. Thus, in each time step, the problem can be formulated as follows: \begin{problem}\label{problem_opt} \begin{align} &\underset{\mathbf{P_{tx}},\mathbf{P_{cs}}}{\operatorname{max}}\;A_1 \Bar{P}^{fair}(t)+A_2 (1-\Bar{U}(t))\\ &\text{s.t.}\nonumber\\ &\text{\eqref{thr_eq}},\\ &P_{tx}^{m}\in [P_{tx}^{min}, P_{tx}^{max}], P_{cs}^{m} \in [P_{cs}^{min},P_{cs}^{max}] &&\forall m \in \mathcal{M} \end{align} \end{problem} Due to the dynamic nature of the scenario, the transmission probabilities of the STAs $\phi_s^m$ are not directly controllable and require an additional step to map them to EDCA parameters \cite{Derakhshani2018}. Instead, we simplify our analysis by utilizing a network simulator to model such dynamics and propose to solve the previous optimization problem using a MA-MAB solution, as described in Section \ref{Section5}. \subsection{Optimal action set via worst-case interference} \label{worst-case} Typical Wi-Fi scenarios consist of APs and stations distributed non-uniformly. Contrary to the analysis presented in \cite{Kim2006}, we aim at obtaining an optimal subset of TP and CCA threshold values to further reduce the action space size in SR problems. In this analysis, we only consider the Carrier Sense (CS) threshold term as the form of the CCA threshold. First, let us consider the worst-case interference scenario in an $N >2$ arrangement. For the sake of simplicity we use the path-loss radio propagation model: \begin{myequation} P_{rx}^{r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}, \end{myequation} where $P_{tx}^{m}$ and $P_{rx}^{r}$ are the TP at the $m^{th}$ transmitter (AP) and the received signal strength at the $r^{th}$ receiver, respectively. In addition, $d_{m,r}$ is the distance between the transmitter and the receiver. Finally, $\theta \in [2,4]$ corresponds to the path loss exponent.
Thus, from the perspective of the $m^{th}$ AP, the worst-case interference $I_{m}$ is defined as: \begin{myequation} I_m = \sum_{v \in \mathcal{F}_m^{+}} \frac{P_{tx}^{v}}{{X^{(m,v)}}^\theta} + P_{tx}^{sta}\sum_{w\in \mathcal{F}_m^{-}} \frac{1}{{X^{(m,w)}}^\theta}, \label{interference} \end{myequation} where $\mathcal{F}_m^{+}$ is the subset of interferers with $|\mathcal{F}_m^{+}|=|\mathcal{M}|-1$, corresponding to the APs interfering with the $m^{th}$ AP, and $\mathcal{F}_m^{-}$ is the subset of non-AP interferers with $|\mathcal{F}_m^{-}| = |\mathcal{S}|$, corresponding to the stations interfering with the $m^{th}$ AP. Furthermore, $P_{tx}^{v}$ is the TP of the $v^{th}$ interferer and $P_{tx}^{sta}$ is a constant corresponding to the fixed power assigned to all the stations, based on the fact that stations are typically not capable of modifying their TP. Additionally, $X^{(m,v)}$ and $X^{(m,w)}$ correspond to the distances from the $m^{th}$ AP to the $v^{th}$ AP interferer and from the $m^{th}$ AP to the $w^{th}$ station interferer, respectively. $X^{(m,.)}$ is calculated as follows: \begin{myequation} X^{(m,.)} = \sqrt{(D_m+x_{m,.})^2 + {d_{m,r}^2 - 2(D_m+x_{m,.})d_{m,r}\cos \varsigma_{r,.}}}, \label{distance} \end{myequation} where $(.)$ refers either to the AP or to the non-AP interferer, $D_m$ is the CCA threshold range of the $m^{th}$ AP, $\varsigma_{r,.}$ is the angle between the receiver and the interferer $(.)$, and $x_{m,.}$ corresponds to the distance between any $(.)$ interferer and the boundary of $D_m$. The corresponding worst-case SINR $\gamma_{m,r}$ at the receiver is defined as: \begin{equation} \gamma_{m,r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta} (I_m + N_{0})}. \end{equation} Let us assume that $N_0 \ll I_m$; thus, the equation reduces to: \begin{equation} \gamma_{m,r} = \frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}I_m }. \label{sinr_formula} \end{equation} Substituting equations (\ref{interference}) and (\ref{distance}) in (\ref{sinr_formula}) we obtain equation (\ref{p_final}): \begin{strip} \begin{align} \gamma_{m,r}= \frac{\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}}{\sum_{v\in \mathcal{F}_m^{+}} \frac{P_{tx}^{v}}{({\sqrt{(D_m+x_{m,v})^2 + {d_{m,r}}^2 - 2(D_m+x_{m,v})d_{m,r}\cos \varsigma_{r,v}}})^\theta} + P_{tx}^{sta}\sum_{w\in\mathcal{F}_m^{-}} \frac{1}{({\sqrt{(D_m+x_{m,w})^2 + d_{m,r}^2 - 2(D_m+x_{m,w})d_{m,r}\cos \varsigma_{r,w}}})^\theta}} \label{p_final} \end{align} \end{strip} The aforementioned equation describes $\gamma_{m,r}$ as a function of $D_m$ and $d_{m,r}$. Additionally, we substitute $D_m =\left(P_{tx}^{m}/T_{cs}^{m}\right)^{1/\theta}$ in equation (\ref{p_final}), obtaining: \begin{myequation} \gamma_{m,r}=\frac{\frac{P_{tx}^{m}}{{d_{m,r}}^{\theta}}}{\sum_{v\in\mathcal{F}_{m}^+}\frac{P_{tx}^{v}}{\Gamma^{(m,v)}} + P_{tx}^{sta}\sum_{w\in\mathcal{F}_{m}^-} \iota^{(m,w)} }, \end{myequation} where \[ \scalebox{.7}{$\Gamma^{(m,v)} = \left({\sqrt{\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^\frac{1}{\theta}+x_{m,v}\right]^2 + d_{m,r}^2 - 2\left[\left(\frac{P_{tx}^{m}}{T_{cs}^{m}}\right)^\frac{1}{\theta}+x_{m,v}\right]d_{m,r}\cos \varsigma_{r,v}}}\right)^\theta$},\] $\iota^{(m,w)} = \frac{1}{\left(\sqrt{(\Omega_{sta}+x_{m,w})^2 + d_{m,r}^2 - 2(\Omega_{sta}+x_{m,w})d_{m,r}\cos \varsigma_{r,w}}\right)^\theta}$ and $\Omega_{sta} = \left(\frac{P_{tx}^{sta}}{T_{cs}^{sta}}\right)^\frac{1}{\theta}$. Now, we proceed to define the maximum channel capacity in terms of the TP and the Carrier Sense (CS) threshold ($T_{cs}$).
Given a certain value of SINR, the Shannon maximum capacity is expressed as: \begin{myequation} C_{m,r} = W\log_2(1 + \gamma_{m,r}), \label{capacity} \end{myequation} where $W$ is the channel bandwidth in Hz. Then, the cumulative maximum network capacity can be calculated as: \begin{myequation} C_T = \sum_{m=1}^{|\mathcal{M}|-1}\sum_{r=1}^{N} C_{m,r}. \end{myequation} \begin{figure}[h] \center \includegraphics[scale=0.50]{figures/optimal_graph.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network capacity as a function of TP and CS threshold.} \label{network_capacity} \end{figure} Figure \ref{network_capacity} shows the maximum network capacity as a function of the TP and the CS threshold. As observed, the network capacity achieves its highest values when a combination of high TP and low CS threshold is utilized. This allows selecting a small subset of the action space in order to reduce exploration time and, consequently, convergence time. Note that prior knowledge of the node locations is required. \section{Proposed Multi-Agent Multi-Armed Bandit algorithms}\label{Section5} In this section, we present the action space, the context definition and the reward function for the MA-MAB algorithms utilized in this work. \subsection{Action space} \label{action_space} The action space corresponds to the number of combinations of $P_{cs}$ and $P_{tx}$, which in the context of MABs translates to the number of arms of each MAB agent. The action space is defined as: \begin{myequation} A_{cs} = \{P_{cs}^{min}, P_{cs}^{min} + \frac{P_{cs}^{max} - P_{cs}^{min}}{L_{cs}-1},..., P_{cs}^{max} \}, \end{myequation} \begin{myequation} A_{tx} = \{P_{tx}^{min}, P_{tx}^{min} + \frac{P_{tx}^{max} - P_{tx}^{min}}{L_{tx}-1},..., P_{tx}^{max}\}, \end{myequation} where $P_{cs}^{min}$, $P_{cs}^{max}$ and $P_{tx}^{min}$, $P_{tx}^{max}$ are the minimum and maximum values of the CCA threshold and the TP, respectively. $L_{cs}$ and $L_{tx}$ correspond to the number of levels into which the CCA threshold and TP ranges are discretized, respectively. Finally, the number of arms corresponding to the action space of the $m^{th}$ agent is $K_{m}^{AP} = |A_{cs}^{m}| \cdot |A_{tx}^{m}|$. \subsection{Reward function in distributed non-cooperative settings} The reward is defined following the optimization Problem \ref{problem_opt}. It resembles the reward presented in \cite{Bardou2021}, which includes a linear product-based fairness term and a station starvation term \cite{app112211074,Bardou2021}, but is defined in a distributed manner. A station is considered to be in starvation when its throughput is below a predefined fraction of its theoretical achievable throughput. The reward is defined as: \begin{equation} \resizebox{0.9\columnwidth}{!}{$r_{m}^{AP} = \frac{|\Psi_m^{AP}|\prod_{s\in \Psi_m^{AP}} \frac{R_s^m}{\omega R_{s,A}^m} + |N_m^{AP} \setminus \Psi_m^{AP}|(|N_m^{AP}| + \prod_{s\in N_m^{AP} \setminus \Psi_m^{AP}} \frac{R_s^m}{R_{s,A}^m})}{|N_m^{AP}|(|N_m^{AP}| + 1 )}$}, \end{equation} where $\Psi_m^{AP}$ is the set of starving stations attached to the $m^{th}$ AP and $N_m^{AP}$ is the set of stations attached to the $m^{th}$ AP. We can also observe that $r_{m}^{AP} \propto C_{m,r}$, as defined in Eq. (\ref{capacity}). A minimal numerical sketch of this reward computation is given below. In the next subsection, we present the definition of the context considered in our MA-CMAB solution.
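The following short Python sketch (our own illustration; the throughput values and $\omega$ are arbitrary) evaluates the per-AP reward $r_m^{AP}$ defined above for given per-station throughputs $R_s^m$ and achievable throughputs $R_{s,A}^m$.
\begin{verbatim}
import numpy as np

def ap_reward(R, R_A, omega=0.8):
    # Distributed per-AP reward: a product-fairness term over the starving
    # stations (set Psi) and one over the non-starving stations, normalized
    # by |N|(|N|+1), following the expression for r_m^AP above.
    R, R_A = np.asarray(R, float), np.asarray(R_A, float)
    n = len(R)                              # |N_m^AP|
    starving = R < omega * R_A              # membership in Psi_m^AP
    n_starv = int(starving.sum())
    prod_starv = np.prod(R[starving] / (omega * R_A[starving])) if n_starv else 0.0
    prod_ok = np.prod(R[~starving] / R_A[~starving]) if n_starv < n else 0.0
    return (n_starv * prod_starv
            + (n - n_starv) * (n + prod_ok)) / (n * (n + 1))

# Example: 4 stations with a common achievable throughput; two are starving.
print(ap_reward(R=[40, 55, 10, 60], R_A=[60, 60, 60, 60], omega=0.8))
\end{verbatim}
As the number of starving stations shrinks and the throughput ratios approach one, the reward approaches its maximum, which is the behavior the optimization in Problem \ref{problem_opt} promotes.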
\subsection{Distributed Sample Average Uncertainty-Sampling MA-CMAB} In \cite{zhu2021deep}, the authors present an efficient contextual multi-armed bandit based on a ``frequentist approach'' to compute the uncertainty, instead of using Bayesian solutions such as Thompson Sampling. The frequentist approach consists in measuring the uncertainty of the action-values based on the sample average of the rewards just computed, instead of relying on the posterior distribution given the past rewards. In this work, we present multi-agent cooperative and non-cooperative variants of the previously mentioned RL algorithm. In our problem, the context comprises only the APs' local observations: \begin{enumerate} \item The number of starving stations $|\Psi_m^{AP}|$ of the $m^{th}$ AP, i.e., stations under the fraction $\omega$ of their attainable throughput during episode $t$. \item The average RSSI $\overline{S}_m^{AP}$ of the $m^{th}$ AP during episode $t$. \item The average noise $\overline{\Upsilon}_m^{AP}$ of the $m^{th}$ AP during episode $t$. \end{enumerate} Additionally, the context is normalized as follows: \begin{myequation} \psi_m^{AP} = |\Psi_m^{AP}|/|N_m^{AP}|, \end{myequation} \begin{myequation} s_m^{AP} =\begin{cases} 0, & -60 \text{ dBm} \leq \overline{S}_m^{AP} \leq -50 \text{ dBm}, \\ 0.25, & -70 \text{ dBm} \leq \overline{S}_m^{AP} < -60 \text{ dBm},\\ 0.5, & -80 \text{ dBm} \leq \overline{S}_m^{AP} < -70 \text{ dBm}, \\ 0.75, & -90 \text{ dBm} \leq \overline{S}_m^{AP} < -80 \text{ dBm}, \\ 1, & \overline{S}_m^{AP} < -90 \text{ dBm}, \end{cases}\\ \end{myequation} \begin{myequation} \hat{\Upsilon}_m^{AP} = \overline{\Upsilon}_m^{AP}/100. \end{myequation} The multi-agent SAU-Sampling algorithm in its non-cooperative version (SAU-NonCoop) is described in Algorithm \ref{sau-sampling}. The algorithm starts by initializing the action-value functions $\mu(\bm{x}_m|\bm{\hat{\theta}}_{m})$, modeled as deep neural networks, and the exploration parameters $J_{m,a}^2$ and $n_{m,a}$ for each $m^{th}$ AP. Here, $n_{m,a}$ corresponds to the number of times action $a$ was selected by the $m^{th}$ AP and $J_{m,a}^2$ is defined as an exploration bonus. In each environment step (Algorithm \ref{sau-sampling}, \texttt{line 2}), each agent observes its local context and computes the selected arm given the reward prediction. In Algorithm \ref{sau-sampling}, \texttt{line 11}, each CMAB agent updates $\bm{\hat{\theta}}_{m,a}$ using stochastic gradient descent on the loss between the predicted reward and the observed reward. Finally, the exploration parameters are updated according to the prediction error, as depicted in Algorithm \ref{sau-sampling}, \texttt{line 12}. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \textbf{Initialize} network $\bm{\hat{\theta}}_{m,a}$, exploration parameters $J_{m,a}^2(t=0) = 1$ and $n_{m,a}(t=0) = 0$ for all actions $a \in K_m$.
\For{environment step $t\gets1$ \textbf{to} $T$}{ \For{agent $m$} { Observe context ${\bm{x}_m(t)} = [\psi_m^{AP}(t), s_m^{AP}(t), \hat{\Upsilon}_m^{AP}(t)]$ \\ \For{$a = 1,...,K_m$} { Calculate the reward prediction $\hat{\mu}_{m,a}(t) = \mu(\bm{x}_m|\bm{\hat{\theta}}_{m})$ and $\tau_{m,a}^2(t) = J_{m,a}^2/n_{m,a}$\\ $\tilde{\mu}_{m,a} \sim \mathcal{N}(\hat{\mu}_{m,a},n_{m,a}^{-1}\tau_{m,a}^2)$ } Compute $a_{m}(t) = \text{\texttt{argmax}}_a\{\tilde{\mu}_{m,a}(t)\}_{a \in K_m}$ if $t > K_m$, otherwise $a_{m}(t) \sim \mathcal{U}(0,K_m)$;\\ Select action $a_m(t)$, observe reward $r_m^{AP}$;\\ Update $\bm{\hat{\theta}}_{m,a}$ using SGD with gradients $\partial l_m/\partial \theta$, where $l_m= 0.5(r_m^{AP} - \hat{\mu}_{m,a}(t))^2$; \\ Update $J_{m,a}^2 \leftarrow J_{m,a}^2 + e_m^2$ using the prediction error $e_m = r_m^{AP}(t) - \hat{\mu}_{m,a}(t)$, and $n_{m,a} \leftarrow n_{m,a} + 1$; } } \caption{SAU-Sampling MA-CMAB} \medskip \label{sau-sampling} \end{algorithm} \subsection{Cooperative Sample Average Uncertainty-Sampling MA-CMAB} In this section we present a cooperative version of SAU-Sampling named SAU-Coop. Different from the non-cooperative version, the total reward $r_{m}^{C}$ considers the overall network Jain's fairness index in addition to the local reward $r_m^{AP}$: \begin{myequation} r_{m}^{C} = r_{m}^{AP} + r_{\mathcal{J}}, \end{myequation} where the overall network Jain's fairness index $r_{\mathcal{J}}$ is defined as: \begin{myequation} r_{\mathcal{J}} = \mathcal{J}(R_1,...,R_{|\mathcal{M}|}) = \frac{(\sum_{m=1}^{|\mathcal{M}|} R_m )^2}{|\mathcal{M}|\cdot\sum_{m=1}^{|\mathcal{M}|}R_m^2}, \end{myequation} where $R_m =\sum_{s=1}^{|\mathcal{S}_m|}R_s^m$ is the total throughput of the $|\mathcal{S}_m|$ stations of the $m^{th}$ AP. \subsection{Reward-cooperative $\epsilon$-greedy MA-MAB} In addition to the previous cooperative algorithm, we propose a cooperative approach based on the classical $\epsilon$-greedy strategy \cite{sutton2018reinforcement} that takes into account, in the action's reward update, a fraction of the average reward of the other agents. This algorithm is described in Algorithm \ref{egreedy-coop}. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \caption{Reward-cooperative $\epsilon$-greedy MA-MAB} \algsetup{linenosize=\small} \scriptsize \textbf{Initialize} $\epsilon_m(t=0) = \epsilon_0$, $Q_{m,a}(t=0)\leftarrow 0$, $N_{m,a}(t=0)\leftarrow 0$ and $\beta$. \For{environment step $t\gets1$ \textbf{to} $T$}{ \For{agent $m$} { Execute action $a_{m}(t)$: $a_{m}(t) =\begin{cases} \text{\texttt{argmax}}_{a=1,...,K_m} Q_{m,a}(t) & \text{with probability } 1 - \epsilon_{m}(t) \\ a \sim \mathcal{U}(0,K_m) & \text{o.w.} \end{cases}$\\ Calculate the reward $r_m^{AP}(t)$ based on the feedback of the environment\\ Update $Q_{m,a}(t+1) = Q_{m,a}(t) + \frac{1}{N_{m,a}(t)}\left[\left(r_m^{AP} + \beta\cdot\frac{1}{|\mathcal{M}|-1}\sum_{m'\neq m} r_{m'}^{AP}\right) - Q_{m,a}(t)\right] $ \\ Update $N_{m,a} \leftarrow N_{m,a}(t) + 1$;\\ Update $\epsilon_{m} \leftarrow \frac{\epsilon_{m}(t)}{\sqrt{t}}$ } } \medskip \label{egreedy-coop} \end{algorithm} Finally, in the next subsection we present the details of the DTRL scheme to improve SR adaptation in dynamic environments. \subsection{Sample Average Uncertainty-Sampling MA-CMAB based Deep Transfer Reinforcement Learning} Typically, RL agents learn their best policy based on the feedback received from the environment over a $T$-step horizon. However, in real-world scenarios the environment conditions can change at $T+1$ and thus adapting to the updated environment is necessary \cite{Padakandla2021}.
In such cases, the ``outdated'' agent's policy might not be optimal to address the new conditions efficiently. For instance, a modification of the stations' distribution over the APs can cause the SR-related parameters chosen by the ``outdated'' agents' policy to degrade the network performance. \normalem \setlength{\textfloatsep}{0pt}% \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \textbf{Function} \textsc{Detect\_Singularity}($\mathcal{K}$) \tcp*[l]{returns True if an anomaly is detected in the network KPI data $\mathcal{K}$ at time $t$, and False otherwise.} \textbf{Let} $\mathcal{L} = \{l \,|\, l \in \mathbb{N}, l>0\} $ be the set of layers of the model $\bm{\hat{\theta}}_{m,a}^l $ and $\mathcal{L}_{tr} \subset \mathcal{L}$ the subset of layers to be transferred. \text{\normalfont \textbf{Run} algorithm \textsc{SAU-Sampling MA-CMAB} \texttt{(Algorithm \ref{sau-sampling})}} \While{environment step $t < T$} { \eIf{$\neg$\textsc{Detect\_Singularity}} { continue; } { \text{\normalfont\textbf{Reset} exploration parameters $J_{m,a}^2, n_{m,a}$\;} \text{\normalfont \textbf{Reinitialize} the weights $w$ and biases $b$ of the $l^{th}$ layer of $\bm{\hat{\theta}}_{m,a}^{l \not\in \mathcal{L}_{tr}}$ via:} \\ $\nu_{l} = \left(\sqrt{|\bm{\hat{\theta}}_{m,a}^{l\not\in \mathcal{L}_{tr}}|}\right)^{-1}$ \; $\bm{\hat{\theta}}_{m,a}^{l \not\in \mathcal{L}_{tr}} (w,b) \rightarrow w_{l} \sim \mathcal{U}(-\nu_l, \nu_l) , b_{l} \sim \mathcal{U}(-\nu_l, \nu_l) $\; \text{\normalfont \textbf{Transfer} weights and biases via: } \\ $\bm{\hat{\theta}}_{m,a}^{l \in \mathcal{L}_{tr}} (w,b) \rightarrow \bm{\hat{\theta}}_{m,a}^{\prime\, l \in \mathcal{L}_{tr}} (w,b)$\; }} \caption{SAU-Sampling MA-CMAB Transfer Learning} \label{transfer_algo} \end{algorithm} To address the previous situation we propose two main solutions: \textbf{1.} if the agent detects a change in the environment, indicated by a singularity, it corrects its configuration by forgetting the already learnt policy (\textbf{forget}), or \textbf{2.} it adapts its policy to the new conditions via a transfer learning technique. A singularity is defined as an anomalous behavior of the KPIs of interest after the policy of the MAB agent has converged. In this work, we do not delve into how to detect a singularity; instead, we assume the existence of an anomaly detector in our system \cite{10.1145/3444690}. In Algorithm \ref{transfer_algo}, we present the transfer learning algorithm depicting the second proposed solution. At $t=0$, each SAU-Sampling agent resets its weights and biases and starts learning as part of Algorithm \ref{sau-sampling}. At $t=S_1$, where $S_1$ corresponds to the time when an anomaly is detected, the transfer procedure is activated (Algorithm \ref{transfer_algo}, \texttt{line 7}). In our setup we transfer layer $l=2$ and reset layer $l=1$ (Algorithm \ref{transfer_algo}, \texttt{line 11}), where $l$ indexes the layers of the neural network utilized by the SAU-Sampling agent. However, as indicated in Algorithm \ref{transfer_algo}, \texttt{line 13}, the transfer is not constrained to one layer but applies more generally to a set of layers. The set of transferred layers is considered a hyperparameter to be tuned. The partial transfer of a model avoids negative transfer by giving the agent room to adapt to the new context, since it mitigates model overfitting. \section{Performance Evaluation} \label{Section6} \subsection{Simulation Setting}\label{AA} We consider two scenarios in our simulations.
The first one considers stationary users, while the second considers mobile users to model dynamic scenarios (see Section \ref{adaptiveSR}). In addition, stations and APs are two-antenna devices supporting up to two spatial streams in transmission and reception. In this work, we assume a frequency of 5 GHz with an 80 MHz channel bandwidth in a Line of Sight (LOS) setting. The propagation loss model is the Log Distance propagation loss model with a constant speed propagation delay. In addition, an adaptive data rate mode is considered with UDP downlink traffic. We implement our proposed solutions using ns-3 and we use OpenAI Gym to interface between ns-3 and the MA-MAB solution \cite{Gawowicz2019}. In Table \ref{q_settings} and Table \ref{net_settings} we present the learning hyperparameters and the network settings, respectively. \begin{table} [ht] \centering \caption{Learning hyperparameters} \begin{threeparttable} \resizebox{\columnwidth}{!}{ \begin{tabular}{c c} \hline \textbf{Parameter}&\textbf{Value} \\ \hline $\epsilon$-greedy MAB & { Annealing $\epsilon$: $\sqrt{T}$} \\ Thompson Sampling MAB & { Prior distribution: Beta}\\ Upper Confidence Bound MAB & {Level of exploration, $c = 1$} \\ SAU-Sampling & { Number of hidden layers, $N_h=2$ } \\ & {Number of neurons per hidden layer, $n_h=100$}\\ & {Number of inputs, $N_m=3$ and number of outputs, $N_o = K$}\\ & {Batch size, $B_s = 64$} \\ & Optimizer : {RMSProp (8e-3)} \\ & Weight decay : {5e-4} \\ & Activation function : {ReLU}\\ \hline Gym environment step time & { 0.05 s } \\ \bottomrule \end{tabular} } \end{threeparttable} \label{q_settings} \end{table} \begin{table} \caption{Network settings} \begin{center} \resizebox{\columnwidth}{!}{% \begin{tabular}{c c} \hline \textbf{Parameter}&\textbf{Value} \\ \hline Number of APs & { 6 } \\ Number of Stations & {15}\\ Number of antennas (AP) & {2} \\ Max Supported Tx Spatial Streams & { 2 } \\ Max Supported Rx Spatial Streams & {2} \\ Channel Number \footnotemark & { 1 } \\ Propagation Loss Model & { Log Distance Propagation Loss Model } \\ Wi-Fi standard & { 802.11ax } \\ Frequency & { 5 GHz } \\ Channel Bandwidth & {80 MHz} \\ Traffic Model - UDP application & { $[0.011, 0.056, 0.11 \text{\cite{WILHELMI201926}}, 0.16]$ Gbps } \\ Maximum $\&$ minimum Transmission Power & $P_{tx}^{max}=21.0$ dBm $\&$ $P_{tx}^{min}=1.0$ dBm \\ Maximum $\&$ minimum CCA threshold & $P_{cs}^{max}=-62.0$ dBm $\&$ $P_{cs}^{min}=-82.0$ dBm\\ Discretization steps & $K_{cs} = 1$ dBm and $K_{tx}= 1$ dBm\\ \hline \end{tabular} } \label{net_settings} \end{center} \end{table} \subsection{Reduced set of actions vs. all actions}\label{resultsA} In subsection \ref{worst-case} we presented a mathematical analysis to obtain a reduced set of optimal actions, with the goal of decreasing exploration time and consequently improving convergence time. As concluded from figure \ref{network_capacity}, high TP and low CCA threshold values maximize the network capacity in the simulation scenario under study. Therefore, we selected a fixed value of the CCA threshold ($P_{cs}=-82.0$ dBm) and a reduced set of TP values $P_{tx} \in \{15,16,17,18,19,20,21\}$ dBm, and observed the performance against the full set of possible actions described in subsection \ref{action_space}. \begin{figure}[t!] \center \includegraphics[scale=0.38]{figures/optimal_vs_all_graphs_all_techniques2.pdf} \caption{Convergence performance of $\epsilon$-greedy, UCB and Thompson Sampling MA-MABs under a non-cooperative and distributed regime.
The subscript ``\textbf{all}'' indicates the usage of the full set of actions.} \label{optimal_vs_all} \end{figure} In figure \ref{optimal_vs_all}, we present the convergence performance of three MA-MAB algorithms under a UDP traffic of 0.056 Gbps in non-cooperative and cooperative settings (indicated with the subscripts ``\textbf{non-coop}'' and ``\textbf{coop}'', respectively). The algorithms correspond to the $\epsilon$-greedy ($MAB^{eg}$), UCB ($MAB^{ucb}$) and Thompson Sampling ($MAB^{thom}$) MA-MABs. For each algorithm, we plot convergence graphs in terms of fairness, cumulative throughput and station starvation, representing the behavior when the reduced set of actions and the full action set (indicated with the subscript ``\textbf{all}'') are used, respectively. For the case of the set of optimal actions, we can observe that the performance is similar across algorithms, with a slight improvement when utilizing the Thompson Sampling MAB. On the other hand, when utilizing the full action set, the $\epsilon$-greedy MAB algorithm shows a noticeable improvement with respect to the others. In \cite{NEURIPS2020_12d16adf}, the authors study the unreasonable effectiveness of greedy algorithms when $K$ is sufficiently large. They concluded that when $K$ increases above 27 arms, intelligent algorithms are greatly affected by the exploration stage. These results validate ours, given that $K =|A_{cs}| \cdot |A_{tx}| = 21^2$ in our full action set. Finally, the impact of utilizing the reduced set of optimal actions in terms of convergence time and KPI maximization can be noted. The set of optimal actions allows reducing the station starvation, when compared with the best performer $MAB_{nocoop_{all}}^{eg}$, by an average of two starving users. However, in order to obtain such a set, prior knowledge of the stations' and APs' geographical locations is required. In the following subsection we compare the results of the $\epsilon$-greedy MA-MAB and a default typical configuration without machine learning. \footnotetext{We assume all APs are configured to use 1 channel out of the available 11. This is a practical selection to create dense deployment scenarios.} \begin{figure}[h] \center \includegraphics[scale=0.37]{figures/8262egnoncoop2.pdf} \caption{Performance results: $\epsilon$-greedy MAB with the optimal set vs. the default configuration with $P_{cs} \in \{-62.0, -82.0\} $ dBm. } \label{mabegreedy} \end{figure} \subsection{Distributed $\epsilon$-greedy MA-MAB vs. default configuration performance results}\label{dist_legacy} In this subsection, we present the comparative results and advantages of utilizing a distributed intelligent solution such as the $\epsilon$-greedy MAB over the default CCA threshold and TP configuration with no ML. In figure \ref{mabegreedy}, we show the performance under four different UDP data traffic regimes: $\{0.011, 0.056, 0.11, 0.16\}$ Gbps. We considered two typical configurations of the CCA threshold: $-82.0$ dBm and $-62.0$ dBm. In both cases, the APs' TP is $16.0$ dBm. It can be observed that the $\epsilon$-greedy MAB achieves a significant improvement over the default configuration ($P_{cs}=-82.0$ dBm), with average gains over all the considered traffic loads of $44.4\%$ in terms of cumulative throughput, $70.9\%$ in terms of station starvation, $12.2\%$ in terms of fairness, $138.0\%$ in terms of latency and $94.5\%$ in terms of packet loss ratio (PLR).
Additionally, an average gain over the default configuration ($P_{cs}=-62.0$ dBm) across all the considered traffic loads of $53.9\%$ in terms of cumulative throughput, $138.4\%$ in terms of station starvation, $43.0\%$ in terms of fairness, $84.0\%$ in terms of latency and $105.4\%$ in terms of packet loss ratio (PLR) is shown. \begin{figure}[h] \center \includegraphics[scale=0.38]{figures/coopvsnoncoop2.pdf} \caption{Performance results of the cooperative algorithms $\epsilon$-greedy MA-MAB (Rew-Coop) and SAU-Sampling MA-CMAB (SAU-Coop), and of the non-cooperative versions of the previous algorithms, SAU-NonCoop and Eg-NonCoop, under the full set of actions.} \label{sau_results} \end{figure} \vspace{0em} \subsection{Cooperation vs. non-cooperation performance results}\label{coopVsNoncoop} In the two previous subsections we showed the results considering the set of optimal actions. In this subsection we assume the non-existence of station and AP location information and thus we must rely on the full set of actions. In consequence, we investigate whether cooperation can improve the KPIs of interest by utilizing the cooperative proposal of the $\epsilon$-greedy MAB algorithm (Rew-Coop) and the contextual SAU-Sampling algorithm (SAU-Coop). Additionally, we present two non-cooperative algorithms: SAU-NonCoop, which corresponds to the non-cooperative version of SAU-Sampling, and Eg-NonCoop, which refers to the $\epsilon$-greedy MAB algorithm utilized in the previous section. As observed in figure \ref{sau_results}, simulations show that SAU-Coop improves over Eg-NonCoop across all the data traffic loads, with average gains of $14.7\%$ in terms of cumulative throughput, $21.3\%$ in terms of station starvation, $4.64\%$ in terms of network fairness, $36.7\%$ in terms of latency and $32.5\%$ in terms of PLR. Similarly, the distributed version of SAU-Sampling presents better performance than Eg-NonCoop, indicating that context is beneficial for solving the current optimization problem. Additionally, SAU-Coop presents better performance than its non-cooperative version, especially when the data rate increases up to 0.16 Gbps, where gains of $14.1\%$ in terms of cumulative throughput, $32.1\%$ in terms of station starvation, $18.2\%$ in terms of network fairness, $16.5\%$ in terms of latency and $4\%$ in terms of PLR are observed. To sum up, cooperative approaches contribute positively to the improvement of SR in Wi-Fi over non-cooperative approaches. In addition, in cases where cooperation is not possible, it is advisable to utilize contextual multi-armed bandits over stateless multi-armed bandits. \subsection{Deep Transfer Learning for Adaptive SR in Dynamic Scenarios}\label{adaptiveSR} In order to model a dynamic scenario, we design a simulation where the users move across the simulation area and attach to the AP that offers the best signal quality. Consequently, the user load of each AP changes and, with it, the dynamics of the environment. We model this scenario with 3 APs and 15 users, where the load changes twice throughout the simulation. As depicted in Table \ref{dynamic_table}, the user load of the $m^{th}$ AP, denoted as $C_m$, changes at two instants in time: 3 and 6 minutes, respectively.
\begin{table}[ht] \caption{Dynamic scenario load distribution} \begin{center} \centering \begin{tabular}{c|cccccc|} \cline{2-7} & \multicolumn{2}{c|}{\bfseries $\bm{t=0}$ min} & \multicolumn{2}{c|}{\bfseries $\bm{t=3}$ min} & \multicolumn{2}{c|}{\bfseries $\bm{t=6}$ min} \\ \hline \multicolumn{1}{|c|}{$C_1$} & \multicolumn{2}{c|}{8} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{2} \\ \hline \multicolumn{1}{|c|}{$C_2$} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{2} \\ \hline \multicolumn{1}{|c|}{$C_3$} & \multicolumn{2}{c|}{2} & \multicolumn{2}{c|}{5} & \multicolumn{2}{c|}{11} \\ \hline \end{tabular} \end{center} \label{dynamic_table} \end{table} \begin{figure}[h] \center \includegraphics[scale=0.4]{figures/comparison_transfer_notransfer_forget2.pdf} \setlength{\belowcaptionskip}{-5pt} \caption{Network response in terms of fairness and station starvation when utilizing the forget, full transfer and transfer strategies. } \label{transfer_forget_results} \end{figure} In figure \ref{transfer_forget_results} we present the network behavior in terms of fairness and station starvation under the scenario depicted in Table \ref{dynamic_table}. In addition to the two methods previously mentioned, \textbf{forget} and \textbf{transfer}, we present the performance of a third approach called \textbf{full transfer}, where the transfer of the full model is considered. During the first interval ($0$-$3$ min) the performance is similar for the three methods, as expected. However, after the two changes in the network load, two singularities are visible in each of the fairness and starvation graphs. More specifically, the \textbf{forget} method experiences the worst behavior, with $54.3\%$ and $11.7\%$ decreases in terms of station starvation and fairness, respectively, when compared with the transfer method. The \textbf{forget} method shows peaks at the moments of the singularities, representing a service drop for $60\%$ of the users; this behavior is inherently related to the agents' process of starting to learn from scratch again and cannot be avoided. From the quality of service perspective, a disturbance such as the one observed is highly undesirable. Meanwhile, the \textbf{full transfer} method underperforms the \textbf{transfer} method, with $18.7\%$ and $6\%$ decreases in the previously mentioned KPIs. Interestingly, it can be observed that in the second interval under study ($3$-$6$ min) the \textbf{forget} method is able to outperform the \textbf{full transfer} method at the end of the period. This is due to negative transfer as a result of transferring the whole model. As observed, partial transfer learning not only considerably reduces the performance peaks of the \textbf{forget} method, but also achieves better adaptation than the \textbf{full transfer} method. In all methods the cumulative throughput is similar; however, as observed in figure \ref{transfer_forget_results}, station starvation and, consequently, fairness are affected. \section{Conclusion} \label{Section8} In this paper, we proposed Machine Learning (ML)-based solutions to the Spatial Reuse (SR) problem in distributed Wi-Fi 802.11ax scenarios. We presented a solution to reduce the huge action space given by the possible values of Transmission Power (TP) and Clear Channel Assessment (CCA) threshold per Access Point (AP), and analysed its impact on diverse well-known distributed Multi-Agent Multi-Armed Bandit (MA-MAB) implementations.
In distributed scenarios, we showed that the $\epsilon$-greedy MA-MAB significantly improves the performance over typical configurations when the optimal actions are known. Moreover, the Contextual Multi-Agent Multi-Armed Bandit (MA-CMAB) named SAU-Sampling in the cooperative setting contributes positively to an increase in throughput and fairness and a reduction of the PLR when compared with non-cooperative approaches. Under dynamic scenarios, transfer learning benefits the SAU-Sampling algorithm by overcoming the service drops that affect at least $60\%$ of the users when utilizing the forget method. Additionally, we found that partial transfer learning offers better results than the full transfer method. To conclude, the utilization of the cooperative version of the MA-CMAB to improve SR in Wi-Fi scenarios is preferable, since it outperforms the other presented ML-based solutions and prevents service drops in dynamic environments via transfer learning. \section{Acknowledgment}\label{Section9} This research is supported by the Mitacs Accelerate Program and NetExperience Inc.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A natural way to numerically calculate the integral of a function $f:[0,1]\rightarrow\mathbb{R}$ is to take a sequence ${\bf x}=\{x_n\}_{n=1}^\infty\subset[0,1]$ and use the approximation \begin{equation} \label{MainApproximation} \int_0^1f(t)dt\approx\frac{1}{N}\sum_{n=1}^N f(x_n). \end{equation} Introduce the error \begin{equation} \label{MainError} \mathcal{E}_N(f;{\bf x})=\left|\frac{1}{N}\sum_{n=1}^Nf(x_n)-\int_0^1f(t)dt\right|. \end{equation} In the \emph{Monte Carlo method} (MC), one takes ${\bf x}$ in (\ref{MainApproximation}) to be a sequence of random numbers sampled uniformly from $[0,1]$. The expression inside the absolute value signs of (\ref{MainError}) is then a random variable with expected value 0 and standard deviation of the order $1/\sqrt{N}$ as $N\rightarrow\infty$, see e.g. \cite{Caflisch}. The \emph{quasi-Monte Carlo method} (QMC) is based on instead taking a deterministic ${\bf x}\subset[0,1]$ in (\ref{MainApproximation}) with good properties (more precisely described below). This can lead to a better convergence rate of (\ref{MainError}) than when taking random ${\bf x}$. In fact, there exist deterministic ${\bf x}\subseteq [0,1]$ such that the rate of decay of $\mathcal{E}_N(f;{\bf x})$ is close to $1/N$ as $N\rightarrow\infty$ (see below). Useful references for the theory and applications of the MC and QMC methods are e.g. \cite{Caflisch, DP, Owen}. The aim of this note is to discuss error estimates for QMC on $[0,1]$. More specifically, we establish an extension of the elegant \emph{Koksma's inequality}. Koksma's inequality is the main general error estimate for QMC; we need some auxiliary notions in order to formulate it. Let ${\bf x}=\{x_n\}_{n=1}^\infty\subseteq[0,1]$ and let $E\subset[0,1]$. Denote for $N\in\mathbb{N}$ $$ A_N(E,{\bf x})=\sum_{n=1}^N\chi_E(x_n), $$ where $\chi_E(x)=1$ if $x\in E$ and $\chi_E(x)=0$ if $x\notin E$. Note that $A_N$ counts how many of the first $N$ terms of ${\bf x}$ belong to $E$. The \emph{extreme discrepancy} of ${\bf x}$ is defined as \begin{equation} \nonumber D_N({\bf x})=\sup_{0\le a<b\le1}\left|\frac{A_N([a,b),{\bf x})}{N}-(b-a)\right|. \end{equation} If we set $a=0$ and take the supremum over all $b\in(0,1]$, then we obtain the \emph{star discrepancy} of ${\bf x}$, denoted $D_N^*({\bf x})$. It is clear that \begin{equation} \label{discComparison} D_N^*({\bf x})\le D_N({\bf x})\le 2D_N^*({\bf x}). \end{equation} In a sense, the quantities $D_N, D_N^*$ measure how much the distribution of the points of ${\bf x}$ deviates from the uniform distribution. The \emph{total $p$-variation} of a function $f:[0,1]\rightarrow\mathbb{R}$ is given by \begin{equation} \nonumber {\rm Var}_p(f)=\sup\left(\sum_{j=1}^\infty|f(I_j)|^p\right)^{1/p}, \end{equation} where the supremum is taken over all non-overlapping collections of intervals $\{I_j\}_{j=1}^\infty$ contained in $[0,1]$ and $f(I)=f(b)-f(a)$ if $I=[a,b]$. If ${\rm Var}_p(f)<\infty$ we say that $f$ has \emph{bounded $p$-variation} (written $f\in BV_p$). Note that $$ BV_1\hookrightarrow BV_p\quad(1<p<\infty). $$ See \cite{Dudley} for a thorough discussion of bounded $p$-variation and its applications. Koksma's inequality states that \begin{equation} \label{Koksma} \mathcal{E}_N(f;{\bf x})\le D_N^*({\bf x}){\rm Var}_1(f). \end{equation} In other words, the error of QMC is bounded by a product of two factors, the first measuring the ``spread'' of the sequence ${\bf x}$ and the second measuring the variation of the integrand $f$.
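To make (\ref{Koksma}) concrete, the following small Python sketch (ours, purely illustrative) evaluates the error $\mathcal{E}_N(f;{\bf x})$ and the star discrepancy of a finite point set via the classical formula recalled in Section 2 below, and verifies Koksma's bound for the integrand $f(t)=t^2$, which is monotone on $[0,1]$ and hence has ${\rm Var}_1(f)=1$.
\begin{verbatim}
import numpy as np

def star_discrepancy(x):
    # D_N^* of a finite point set via the sorted-point formula
    # D_N^* = 1/(2N) + max_n |x_(n) - (2n-1)/(2N)|  (see Section 2).
    x = np.sort(np.asarray(x, dtype=float))
    N = len(x)
    return 1.0 / (2 * N) + np.max(np.abs(x - (2 * np.arange(1, N + 1) - 1) / (2 * N)))

def qmc_error(f, x, exact):
    # The quadrature error E_N(f; x) = |(1/N) sum f(x_n) - int_0^1 f|.
    return abs(np.mean(f(np.asarray(x, dtype=float))) - exact)

f, exact, var1 = (lambda t: t ** 2), 1.0 / 3.0, 1.0
N = 1000
x_mid = (2 * np.arange(1, N + 1) - 1) / (2 * N)   # midpoint set, D_N^* = 1/(2N)
assert qmc_error(f, x_mid, exact) <= star_discrepancy(x_mid) * var1
\end{verbatim}
For the midpoint set the bound reads $\mathcal{E}_N\le\frac{1}{2N}$, while the actual error of the midpoint rule for $t^2$ is of order $N^{-2}$, illustrating that (\ref{Koksma}) can be far from sharp for smooth integrands.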
We mention that there is a sequence ${\bf x}_C$ called the \emph{van der Corput sequence} (see \cite{KN}) such that $D_N^*({\bf x}_C)=\mathcal{O}(\log(N)/N)$, and this is the best rate of decay one can expect of $D_N^*$ (see Section 2 below). Hence, if $f\in BV_1$ and we use the van der Corput sequence ${\bf x}_C$, then $\mathcal{E}_N(f;{\bf x}_C)=\mathcal{O}(\log(N)/N)$. A drawback of (\ref{Koksma}) is that it provides no error estimate in the case when $f\notin BV_1$. For instance, we were originally interested in finding a general error estimate for $f\in BV_p~~(p>1)$ (see Corollary \ref{pVarCor} below). This led us to our main result (Theorem \ref{MainTeo}), which is a sharpening of (\ref{Koksma}) that is effective also when $f\notin BV_1$. In fact, Theorem \ref{MainTeo} provides an estimate of (\ref{MainError}) for \emph{any} function. For this, we recall the notion of \emph{modulus of variation}, first introduced in \cite{Lagrange} (see also \cite{Chanturiya1}). For any $N\in\mathbb{N}$, we set $$ \nu(f;N)=\sup\sum_{j=1}^N|f(I_j)|, $$ where the supremum is taken over all non-overlapping collections of \emph{at most} $N$ sub-intervals of $[0,1]$. An attractive feature of the modulus of variation is that it is finite for any bounded function. Of course, $f\in BV_1$ if and only if $\nu(f;N)=\mathcal{O}(1)$ as $N\rightarrow\infty$, and the growth of $\nu(f;N)$ then tells us how ``badly'' a function has unbounded 1-variation. The next result is our main theorem. \begin{teo} \label{MainTeo} There is an absolute constant $C>0$ such that for any function $f$ and $N\in\mathbb{N}$ there holds \begin{equation} \label{KoksmaNew} \mathcal{E}_N(f;{\bf x})\le CD_N^*({\bf x})\nu(f;N). \end{equation} \end{teo} We can take $C=25$ in (\ref{KoksmaNew}). This value is a consequence of our method of proof and is not optimal. The main point is that we can replace the total variation in (\ref{Koksma}) with a quantity that is finite for all functions. On the other hand, we can improve the constant of (\ref{KoksmaNew}) if $f$ is continuous. \begin{teo} \label{continuityTeo} For any continuous function $f$ and $N\in\mathbb{N}$ there holds \begin{equation} \label{KoksmaContinuity} \mathcal{E}_N(f;{\bf x})\le D_N^*({\bf x})\nu(f;2N+2). \end{equation} \end{teo} We shall point out some corollaries of (\ref{KoksmaNew}). First, we have the following Koksma-type inequality for $BV_p$. \begin{cor} \label{pVarCor} Let $f\in BV_p~~(1\le p<\infty)$. Then there is a constant $C>0$ such that for all $N\in\mathbb{N}$ there holds \begin{equation} \label{pVarEst} \mathcal{E}_N(f;{\bf x})\le CN^{1-1/p}D_N^*({\bf x}){\rm Var}_p(f). \end{equation} \end{cor} We suspect that (\ref{pVarEst}) is known, but we have not been able to locate it in the literature. We shall also discuss error estimates for functions with some continuity properties. Our result here (Corollary \ref{HolderCor}) is known, see \cite{KN}, but Theorem \ref{MainTeo} allows us to derive it in a very simple way. First, we need to recall the \emph{modulus of continuity} of $f$: $$ \omega(f;\delta)=\sup_{|x-y|\le\delta}|f(x)-f(y)|\quad(0\le\delta\le1). $$ Let $\omega:[0,1]\rightarrow[0,\infty)$ be a non-decreasing function with $\omega(0)=0$ that is strictly concave and differentiable on $(0,1)$. We denote by $H^\omega$ the class of functions such that $$ |f|_{H^\omega}=\sup_{0<\delta\le1}\frac{\omega(f;\delta)}{\omega(\delta)}<\infty. $$ In particular, if $\omega(\delta)=\delta^\alpha~~(0<\alpha<1)$, then $H^\omega$ is the space of $\alpha$-H\"{o}lder continuous functions.
\begin{cor}[see e.g. \cite{KN}, p. 146] \label{HolderCor} Let $f\in H^\omega$. Then there is a constant $C>0$ such that for all $N\in\mathbb{N}$ there holds \begin{equation} \label{ErrorHomega} \mathcal{E}_N(f;{\bf x})\le C|f|_{H^\omega}N\omega\left(\frac{1}{N}\right)D_N^*({\bf x}). \end{equation} \end{cor} Corollaries \ref{pVarCor} and \ref{HolderCor} follow from simple estimates of the quantity $\nu(f;N)$ (see Proposition \ref{CorollaryProof} below). It is perhaps not clear that Theorem \ref{MainTeo} really provides any additional information compared to Koksma's inequality (\ref{Koksma}), or the previously known results Corollaries \ref{pVarCor}--\ref{HolderCor}. In the following example, we show that (\ref{KoksmaNew}) can give a better convergence rate of $\mathcal{E}_N$ than (\ref{Koksma}), (\ref{pVarEst}) or (\ref{ErrorHomega}). \begin{example} Let $g(x)=x\sin(1/x)$, $x\in(0,1]$ and $g(0)=0$. It is easy to see that $g\in BV_p$ for any $p>1$ but $g\notin BV_1$. Hence, (\ref{Koksma}) does not apply. On the other hand, by Corollary \ref{pVarCor} $$ \mathcal{E}_N(g;{\bf x}_C)\le C(\alpha)\frac{\log N}{N^\alpha} $$ for any $\alpha<1$, where ${\bf x}_C$ is the van der Corput sequence. Here, $C(\alpha)\rightarrow\infty$ as $\alpha\rightarrow1-$. (This result can also be obtained from Corollary \ref{HolderCor} since $\omega(g;\delta)=\mathcal{O}(\delta^\alpha)$ for any $\alpha\in(0,1)$.) However, we get a sharper result if we use Theorem \ref{MainTeo} directly. Indeed, it is not difficult to see that $\nu(g;N)=\mathcal{O}(\log(N))$. Hence, $$ \mathcal{E}_N(g;{\bf x}_C)=\mathcal{O}\left(\frac{(\log N)^2}{N}\right), $$ where the implied constant is absolute. \end{example} \section{Auxiliary results} In this section, we collect some auxiliary results that will be useful to us. For the sake of the reader, we start by briefly recalling some facts from discrepancy theory. We follow closely the presentation given in \cite{KN}. Let $N$ be fixed and consider a finite sequence ${\bf x}=\{x_n:1\le n\le N\}$. When calculating the discrepancy, the order of the elements does not matter. Thus, we can always assume that $x_1\le x_2\le...\le x_N$. Furthermore, we also have \begin{lem}{\cite[p. 91]{KN}} \label{discreteDiscrepancyLemma} Assume that $x_1\le x_2\le...\le x_N$. Then \begin{eqnarray} \label{discreteDiscrepancy} D_N^*({\bf x})&=&\max_{1\le n\le N}\left\{\left|x_n-\frac{n-1}{N}\right|,\left|x_n-\frac{n}{N}\right|\right\}\\ \nonumber &=&\frac{1}{2N}+\max_{1\le n\le N}\left|x_n-\frac{2n-1}{2N}\right|. \end{eqnarray} \end{lem} Hence, for any finite sequence ${\bf x}$ of $N$ points, we have \begin{equation} \label{lowerFinite} D^*_N({\bf x})\ge\frac{1}{2N}, \end{equation} with equality if and only if ${\bf x}$ is a permutation of $\{(2n-1)/(2N): 1\le n\le N\}$. For an infinite sequence, the optimal rate of decay of the discrepancy is $\mathcal{O}(\log(N)/N)$. This rate is attained by the above-mentioned van der Corput sequence ${\bf x}_C$. The fact that $\log(N)/N$ is optimal is due to the following result of Schmidt (see e.g. \cite[p. 109]{KN}): for any infinite sequence ${\bf x}$ there holds $$ \liminf_{N\rightarrow\infty}\frac{ND_N^*({\bf x})}{\log(N)}>0. $$ Assume as above that $x_1\le x_2\le...\le x_N$, and set $x_0=0$ and $x_{N+1}=1$. Then the following identity holds (see \cite[p. 143]{KN}) \begin{equation} \label{Zaremba} \frac{1}{N}\sum_{n=1}^Nf(x_n)-\int_0^1f(t)dt=\sum_{n=0}^N\int_{x_n}^{x_{n+1}}\left(t-\frac{n}{N}\right)df.
\end{equation} Koksma's inequality (\ref{Koksma}) follows easily from (\ref{Zaremba}) and (\ref{discreteDiscrepancy}). We could not derive (\ref{KoksmaNew}) from (\ref{Zaremba}) and thus found the argument presented in the proof of Theorem \ref{MainTeo} below. However, the proof of Theorem \ref{continuityTeo} \emph{does} make essential use of (\ref{Zaremba}) together with the following mean value theorem due to Hobson \cite{Hobson} (see also \cite{Dixon}). \begin{lem} \label{HobsonMVT} Let $\varphi$ be a non-constant function that is monotone on the open interval $(a,b)$ and let $f$ be integrable on $(a,b)$. Then there exists $c\in(a,b)$ such that \begin{equation} \nonumber \int_a^b\varphi(t)f(t)dt=\varphi(a+)\int_a^cf(t)dt+\varphi(b-)\int_c^bf(t)dt, \end{equation} where $$ \varphi(a+)=\lim_{h\rightarrow0, h>0}\varphi(a+h)\quad{\rm and}\quad \varphi(b-)=\lim_{h\rightarrow0, h>0}\varphi(b-h). $$ \end{lem} In proving Theorems \ref{MainTeo} and \ref{continuityTeo}, we use some approximation arguments and need to control the variation of the approximants. This is accomplished by the following two simple lemmas. \begin{lem} \label{approximationLemma} Let $M\in\mathbb{N}$ and let $s_M$ be the continuous first-order spline interpolating $f$ at the knots $0=x_0<x_1<...<x_M=1$. Then \begin{equation} \label{varIneq1} {\rm Var}_1(s_M)\le\nu(f;M). \end{equation} \end{lem} \begin{proof} Let $\{x_{n_k}\}$ be the subset of $\{x_n\}$ consisting of the points where $s_M$ has a local extremum. It is easy to see that $$ {\rm Var}_1(s_M)=\sum_k|s_M(x_{n_{k+1}})-s_M(x_{n_k})|=\sum_k|f(x_{n_{k+1}})-f(x_{n_k})|\le \nu(f;M), $$ where the last inequality holds since the sum extends over at most $M$ terms. \end{proof} \begin{lem} \label{steklovLemma} Let $h>0$ be fixed but arbitrary and set \begin{equation} \label{steklovMean} f_h(x)=\frac{1}{h}\int_0^hf(x+t)dt \end{equation} for $x\in[0,1]$ (assume that $f(x)=f(1)$ for $x\in[1,1+h]$). Then \begin{equation} \nonumber \nu(f_h;k)\le\nu(f;k) \end{equation} for any $k\in\mathbb{N}$. \end{lem} \begin{proof} Let $\{[a_n,b_n]:1\le n\le k\}$ be an arbitrary collection of $k$ non-overlapping intervals. Then \begin{eqnarray} \nonumber \sum_{n=1}^k|f_h(b_n)-f_h(a_n)|&=&\sum_{n=1}^k\left|\frac{1}{h}\int_0^h[f(b_n+t)-f(a_n+t)]dt\right|\\ \nonumber &\le&\frac{1}{h}\int_0^h\sum_{n=1}^k|f(b_n+t)-f(a_n+t)|dt\\ \nonumber &\le&\nu(f;k). \end{eqnarray} \end{proof} \section{Proofs} In this section we prove our results. Throughout the section, ${\bf x}$ is an infinite sequence and $N\in\mathbb{N}$ a fixed number. \begin{proof}[Proof of Theorem \ref{MainTeo}] Without loss of generality, we may assume that $$ x_1<x_2<...<x_N. $$ Set $x_0=0$ and $x_{N+1}=1$ and let $s_N$ be the continuous first-order spline interpolating $f$ at the knots $x_0, x_1,...,x_N, x_{N+1}$. Since $f(x_n)=s_N(x_n)$ for $0\le n\le N+1$, we have $$ \frac{1}{N}\sum_{n=1}^N f(x_n)-\int_0^1f(t)dt=\frac{1}{N}\sum_{n=1}^N s_N(x_n)-\int_0^1s_N(t)dt+\int_0^1[s_N(t)-f(t)]dt. $$ Hence, $$ \mathcal{E}_N(f;{\bf x})\le\mathcal{E}_N(s_N;{\bf x})+\|f-s_N\|_{L^1(0,1)}. $$ By (\ref{Koksma}) and Lemma \ref{approximationLemma}, we have \begin{equation} \label{errorEst} \mathcal{E}_N(f;{\bf x})\le D_N^*({\bf x})\nu(f;N)+\|f-s_N\|_{L^1(0,1)}. \end{equation} We shall estimate $\|f-s_N\|_{L^1(0,1)}$. We have $$ \|f-s_N\|_{L^1(0,1)}=\sum_{n=0}^{N}\int_{x_n}^{x_{n+1}}|f(t)-s_N(t)|dt.
$$ Observe now that for each $n=0,1,...,N$, there is $y_n\in(x_n,x_{n+1})$ such that \begin{equation} \label{contra} \int_{x_n}^{x_{n+1}}|f(t)-s_N(t)|dt\le2(x_{n+1}-x_n)|f(y_n)-s_N(y_n)|. \end{equation} Indeed, assume that (\ref{contra}) is false for some $k$, i.e., $$ \frac{1}{(x_{k+1}-x_k)}\int_{x_k}^{x_{k+1}}|f(t)-s_N(t)|dt> 2|f(t)-s_N(t)| $$ holds for all $t\in(x_k,x_{k+1})$. Integrating the previous inequality over $(x_k,x_{k+1})$ gives $$ \int_{x_k}^{x_{k+1}}|f(t)-s_N(t)|dt\ge 2\int_{x_k}^{x_{k+1}}|f(t)-s_N(t)|dt, $$ which is a contradiction when $\int_{x_k}^{x_{k+1}}|f(t)-s_N(t)|dt>0$ (and if this integral vanishes, then (\ref{contra}) holds trivially). Thus, by (\ref{contra}), \begin{equation} \label{L1estimate} \|f-s_N\|_{L^1(0,1)}\le2\delta_N({\bf x})\sum_{n=0}^{N}|f(y_n)-s_N(y_n)|, \end{equation} where $\delta_N({\bf x})=\max_{0\le n\le N}(x_{n+1}-x_n)$. We shall first prove that \begin{equation} \label{largestInterval} \delta_N({\bf x})\le 4D_N^*({\bf x}). \end{equation} For $1\le n\le N-1$, we have \begin{eqnarray} \nonumber x_{n+1}-x_n&\le&\left|x_n-\frac{2n-1}{2N}\right|+\left|x_{n+1}-\frac{2n+1}{2N}\right|+\frac{1}{N}\\ \nonumber &\le& 2D_N^*({\bf x}), \end{eqnarray} by (\ref{lowerFinite}) and (\ref{discreteDiscrepancy}). For $n=0$, set $J_0=[x_0,x_1)$ and observe that $A_N(J_0,{\bf x})=0$, since $x_1$ is the smallest point of the sample. Hence $$ x_1-x_0=|J_0|=\left|\frac{A_N(J_0,{\bf x})}{N}-|J_0|\right|\le D_N({\bf x})\le 2D_N^*({\bf x}), $$ by (\ref{discComparison}). Finally, for $n=N$, set $J_N=[x_N,x_{N+1})$ and observe that $A_N(J_N,{\bf x})=1$. Thus, \begin{eqnarray} \nonumber x_{N+1}-x_N=|J_N|&\le&\left|\frac{A_N(J_N,{\bf x})}{N}-|J_N|\right|+\frac{1}{N}\le D_N({\bf x})+\frac{1}{N}\\ \nonumber &\le& 2D_N^*({\bf x})+\frac{1}{N}\\ \nonumber &\le& 4D_N^*({\bf x}). \end{eqnarray} This proves (\ref{largestInterval}). Hence, by (\ref{L1estimate}), we have $$ \|f-s_N\|_{L^1(0,1)}\le 8D_N^*({\bf x})\sum_{n=0}^N|f(y_n)-s_N(y_n)|. $$ Furthermore, \begin{eqnarray} \nonumber |f(y_n)-s_N(y_n)|&\le& |f(y_n)-f(x_n)|+|f(x_n)-s_N(y_n)|\\ \nonumber &=&|f(y_n)-f(x_n)|+|s_N(x_n)-s_N(y_n)| \end{eqnarray} and it follows that \begin{eqnarray} \nonumber \sum_{n=0}^N|f(y_n)-s_N(y_n)|&\le&\sum_{n=0}^N|f(y_n)-f(x_n)|+\sum_{n=0}^N|s_N(x_n)-s_N(y_n)|\\ \nonumber &\le&\nu(f;N+1)+{\rm Var}_1(s_N)\\ \nonumber &\le&\nu(f;N+1)+\nu(f;N)\\ \nonumber &\le& 3\nu(f;N), \end{eqnarray} where we used the obvious inequality $\nu(f;N+1)\le 2\nu(f;N)$. Consequently, $$ \|f-s_N\|_{L^1(0,1)}\le 24 D_N^*({\bf x})\nu(f;N), $$ and by (\ref{errorEst}) we obtain $$ \mathcal{E}_N(f;{\bf x})\le 25D_N^*({\bf x})\nu(f;N). $$ \end{proof} \begin{proof}[Proof of Theorem \ref{continuityTeo}] We will use the identity (\ref{Zaremba}). Assume first that $f\in C^1(0,1)$ (i.e. has a continuous derivative on $(0,1)$). In this case $$ \int_{x_n}^{x_{n+1}}\left(t-\frac{n}{N}\right)df=\int_{x_n}^{x_{n+1}}\left(t-\frac{n}{N}\right)f'(t)dt, $$ see e.g. \cite{WZ}. Since $t\mapsto(t-n/N)$ is monotone on $(x_n,x_{n+1})$ and $f'$ is integrable, we may use Lemma \ref{HobsonMVT} to get $$ \int_{x_n}^{x_{n+1}}\left(t-\frac{n}{N}\right)df=(x_n-n/N)\int_{x_n}^{y_n}f'(t)dt+(x_{n+1}-n/N)\int_{y_n}^{x_{n+1}}f'(t)dt $$ for some $y_n\in(x_n,x_{n+1})$.
Hence, by (\ref{discreteDiscrepancy}) and the fundamental theorem of calculus, we have for $n=0,1,...,N$ \begin{eqnarray} \nonumber \left|\int_{x_n}^{x_{n+1}}\left(t-\frac{n}{N}\right)df\right|&\le&\left|x_n-\frac{n}{N}\right|\left|\int_{x_n}^{y_n}f'(t)dt\right|+\left|x_{n+1}-\frac{n}{N}\right|\left|\int_{y_n}^{x_{n+1}}f'(t)dt\right|\\ \nonumber &\le& D_N^*({\bf x})\left(|f(J'_n)|+|f(J''_n)|\right), \end{eqnarray} where $J_n'=[x_n,y_n]$ and $J_n''=[y_n,x_{n+1}]$. It follows that \begin{eqnarray} \nonumber \mathcal{E}_N(f;{\bf x})&\le&D_N^*({\bf x})\sum_{n=0}^N\left(|f(J_n')|+|f(J_n'')|\right)\\ \nonumber &\le& D_N^*({\bf x})\nu(f;2N+2). \end{eqnarray} Thus, (\ref{KoksmaContinuity}) holds for any $C^1$-function. To conclude the argument, we approximate a continuous $f$ with functions in $C^1$ in the following way. Fix $h>0$ and let $f_h$ be given by (\ref{steklovMean}). Since $f$ is continuous, $f_h\in C^1(0,1)$. Hence, for any $h>0$ \begin{equation} \label{MainTeo2ineq1} \mathcal{E}_N(f_h;{\bf x})\le D^*_N({\bf x})\nu(f_h;2N+2). \end{equation} Further, by Lemma \ref{steklovLemma}, we have for any $k\in\mathbb{N}$ \begin{equation} \label{MainTeo2ineq2} \nu(f_h;k)\le\nu(f;k). \end{equation} Combining (\ref{MainTeo2ineq1}) and (\ref{MainTeo2ineq2}), we get \begin{equation} \label{MainTeo2ineq3} \mathcal{E}_N(f_h;{\bf x})\le D^*_N({\bf x})\nu(f;2N+2) \end{equation} for any $h>0$. It is easy to see that $\|f-f_h\|_\infty\le\omega(f;h)$ (where $\omega(f;h)$ denotes the modulus of continuity of $f$), whence $f_h\rightarrow f$ uniformly since $f$ is continuous. Uniform convergence allows us to interchange the order of limit and integration to obtain $$ \lim_{h\rightarrow0+}\mathcal{E}_N(f_h;{\bf x})=\mathcal{E}_N(f;{\bf x}). $$ Hence, letting $h\rightarrow0+$ in (\ref{MainTeo2ineq3}), we obtain (\ref{KoksmaContinuity}). \end{proof} The next result proves Corollaries \ref{pVarCor} and \ref{HolderCor}. \begin{prop} \label{CorollaryProof} Let $N\in\mathbb{N}$. Then we have \begin{equation} \label{pEstim} \nu(f;N)\le N^{1-1/p}{\rm Var}_p(f)\quad(1\le p<\infty), \end{equation} and \begin{equation} \label{holderEstim} \nu(f;N)\le |f|_{H^\omega}N\omega\left(\frac{1}{N}\right). \end{equation} \end{prop} \begin{proof} The inequality (\ref{pEstim}) follows immediately from H\"{o}lder's inequality. For (\ref{holderEstim}), take $N$ intervals $\{I_j\}_{j=1}^N$, then clearly $$ \sum_{j=1}^N|f(I_j)|\le |f|_{H^\omega}\sum_{j=1}^N\omega(|I_j|). $$ Define \begin{equation} \label{maximum} M_\omega(N)=\max\left(\sum_{j=1}^N\omega(t_j)\right) \quad{\rm subject\,\, to}\quad \sum_{j=1}^N t_j=1,\quad t_j>0. \end{equation} Since $\sum_{j=1}^N|I_j|\le 1$ and $\omega$ is non-decreasing, we clearly have $$ \nu(f;N)\le |f|_{H^\omega}M_\omega(N). $$ To calculate $M_\omega(N)$, we use Lagrange multipliers. The critical point $(t_1,t_2,...,t_N,\lambda)$ of the Lagrangian function solves $$ \omega'(t_j)-\lambda=0\quad(1\le j\le N)\quad{\rm and}\quad\sum_{j=1}^Nt_j-1=0. $$ By the strict concavity of $\omega$ (so that $\omega'$ is injective), the above system has the unique solution $t_1=t_2=...=t_N=1/N$. Hence, the maximum (\ref{maximum}) is $$ M_\omega(N)=N\omega\left(\frac{1}{N}\right). $$ \end{proof}
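As a quick numerical companion to the Example in the introduction (an illustration only, not part of the proofs), the following Python sketch estimates $\mathcal{E}_N(g;{\bf x}_C)$ for $g(x)=x\sin(1/x)$, reusing \texttt{van\_der\_corput} from the earlier sketch and assuming SciPy for a reference value of the integral (the oscillatory endpoint may trigger an integration warning); Theorem~\ref{MainTeo} predicts that the printed ratio stays bounded.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g = lambda t: t * np.sin(1.0 / t)   # all van der Corput points are > 0
ref, _ = quad(g, 0, 1, limit=1000)  # high-accuracy reference integral

for N in [2 ** k for k in range(8, 18)]:
    x = van_der_corput(N)           # generator from the earlier sketch
    err = abs(np.mean(g(x)) - ref)
    print(N, err * N / np.log(N) ** 2)  # bounded if E_N = O((log N)^2 / N)
\end{verbatim}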
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} This paper studies a general form of the sparse recovery problem, where our goal is to estimate a certain signal ${\bar{\beta}}_*$ from observations. We are especially interested in solving this problem using convex programming; that is, given a convex set $\Omega$, our estimator ${\hat{\beta}}$ is obtained from the following regularized minimization problem: \begin{equation} {\hat{\beta}}= \arg\min_{\beta \in \Omega} \left[ L(\beta) + R(\beta) \right] . \label{eq:hbeta} \end{equation} Here $L(\beta)$ is a loss function, which measures how closely $\beta$ matches the observation; and $R(\beta)$ is a regularizer, which captures the structure of ${\bar{\beta}}_*$. Note that the theory developed in this paper does not need to assume that ${\bar{\beta}}_* \in \Omega$, although this is certainly a desirable property (especially if we would like to recover ${\bar{\beta}}_*$ without error). Our primary interest is in the case where $\Omega$ lives in a Euclidean space ${\bar\Omega}$. However, our analysis holds automatically when $\Omega$ is contained in a separable Banach space ${\bar\Omega}$, and both $L(\cdot)$ and $R(\cdot)$ are convex functions that are defined in the whole space ${\bar\Omega}$, both inside and outside of $\Omega$. As an example, assume that ${\bar{\beta}}_*$ is a $p$ dimensional vector: ${\bar{\beta}}_* \in {\mathbb{R}}^p$; we observe a vector $y \in {\mathbb{R}}^n$ and an $n \times p$ matrix $X$ such that \[ y=X {\bar{\beta}}_* + \text{noise}. \] We are interested in estimating ${\bar{\beta}}_*$ from the noisy observation $y$. However, in modern applications we are mainly interested in the high dimensional situation where $p \gg n$. Since there are more variables than the number of observations, traditional statistical methods such as least squares regression will suffer from the so-called curse-of-dimensionality problem. To remedy the problem, it is necessary to impose structure on ${\bar{\beta}}_*$, and a popular assumption is sparsity. That is, $\|{\bar{\beta}}_*\|_0=|{\mathrm{supp}}({\bar{\beta}}_*)|$ is smaller than $n$, where ${\mathrm{supp}}(\beta) = \{j: \beta_j \neq 0\}$. A direct formulation of the sparsity constraint leads to the nonconvex $\ell_0$ regularization formulation, which is difficult to solve. A frequent remedy is to employ the so-called {\em convex relaxation} approach, where the $\ell_0$ regularization is replaced by an $\ell_1$ regularizer $R(\beta)=\lambda \|\beta\|_1$ that is convex. If we further consider the least squares loss $L(\beta)=\|y - X\beta\|_2^2$, then we obtain the following $\ell_1$ regularization method (Lasso) \begin{equation} {\hat{\beta}}=\arg\min_{\beta \in {\mathbb{R}}^p} \left[ \|y-X\beta\|_2^2 + \lambda \|\beta\|_1 \right] , \label{eq:L1} \end{equation} where $\Omega$ is chosen to be the whole parameter space ${\bar\Omega}={\mathbb{R}}^p$. \section{Related Work} In sparse recovery analysis, we want to know how good our estimator ${\hat{\beta}}$ is in comparison to the target ${\bar{\beta}}_*$. For the standard $\ell_1$ regularization method (\ref{eq:L1}), two types of theoretical questions are of interest. The first is support recovery; that is, whether ${\mathrm{supp}}({\hat{\beta}}) = {\mathrm{supp}}({\bar{\beta}}_*)$. The second is parameter estimation; that is, how small $\|{\hat{\beta}}-{\bar{\beta}}_*\|_2^2$ is.
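For concreteness, here is a minimal Python sketch (assuming NumPy; the proximal gradient solver, the toy data, and the choice of $\lambda$ are illustrative assumptions, not part of the theory developed below) of computing the Lasso estimator (\ref{eq:L1}):
\begin{verbatim}
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, iters=2000):
    # Proximal gradient (ISTA) for min_b ||y - X b||_2^2 + lam * ||b||_1.
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ b - y)              # gradient of the loss
        b = soft_threshold(b - step * grad, step * lam)
    return b

# toy data: s-sparse truth, Gaussian design, small noise (all hypothetical)
rng = np.random.default_rng(1)
n, p, s, sigma = 100, 400, 5, 0.1
X = rng.standard_normal((n, p))
beta_star = np.zeros(p); beta_star[:s] = 1.0
y = X @ beta_star + sigma * rng.standard_normal(n)
lam = 2.0 * sigma * np.sqrt(2.0 * n * np.log(p))    # a common heuristic scale
beta_hat = lasso_ista(X, y, lam)
print("estimation error:", np.linalg.norm(beta_hat - beta_star))
\end{verbatim}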
The support recovery problem is often studied under the so-called {\em irrepresentable condition} (sometimes also referred to, more generally, as a coherence condition) \cite{MeinshausenB06,Tropp06,ZhaoYu06,Wainwright09}, while the parameter estimation problem is often studied under the so-called {\em restricted isometry property} (or RIP) as well as its generalizations \cite{CandesTao07,ZhangHuang08,BickelRT09,ZhangT09,vandeGeerB09,YeZ10}. Related ideas have been extended to more complex structured sparse regularization problems such as group sparsity \cite{HuangZhang09,LMTG09} and certain matrix problems \cite{KoTsLo10,NegaWain10,KoltchinskiiLT11}. Closely related to parameter estimation is the so-called {\em oracle inequality}, which is particularly suitable for the dual-certificate analysis considered here. This paper is interested in the second question of parameter estimation, and the related problem of sparse oracle inequality. Our goal is to present a general theoretical framework using the notion of dual certificate to analyze sparse regularization problems such as the standard Lasso (\ref{eq:L1}) as well as its generalization to more complex structured sparsity problems in (\ref{eq:hbeta}). We note that there were already some recent attempts at developing such a general theory, such as \cite{NeRaWaYu10} and \cite{ChRePaWi10}, but both have limitations. In particular, the technique of \cite{ChRePaWi10} only applies to noiseless regression problems with Gaussian random design (its main contribution is the nice observation that Gordon's minimum singular value result can be applied to structured sparse recovery problems; the consequences will be further investigated in our paper); results in \cite{ChRePaWi10} are subsumed by our more general results given in Section~\ref{sec:gordon}. The analysis in \cite{NeRaWaYu10} relied on a direct generalization of RIP for decomposable regularizers, which has technical limitations in its applications to more complex structured problems such as matrix regularization: the technique of RIP-like analysis and its generalizations such as \cite{KoTsLo10,NegaWain10} gives performance bounds that do not imply exact recovery even when the noise is zero, while the technique we investigate here (via the notion of dual certificate) can yield exact recovery \cite{CandesT09,Recht09}. In addition, not all regularizers can easily be considered as decomposable (for example, the mixed norm example in Section~\ref{sec:mixed-norm} is not). Even for Gaussian random design, the complexity statement in Section~\ref{sec:gordon} relies only on a Gaussian width calculation, which is more general than decomposability. Therefore our analysis in this paper extends those of \cite{NeRaWaYu10} in multiple ways. While the notion of dual certificate has been successfully employed in some earlier work (especially for some matrix regularization problems) such as \cite{Recht09,CaLiMaWr11,HsKaTz11-robust}, these results focused on special problems without a general theory. In fact, from earlier work it is not even clear what should be a general definition of dual certificate for the structured sparsity formulation (\ref{eq:hbeta}). This paper addresses this issue. Specifically, we will provide a general definition of dual certificate for the regularized estimation problem (\ref{eq:hbeta}) and demonstrate that this definition can be used to develop a theoretical framework to analyze the sparse recovery performance of ${\hat{\beta}}$ with noise.
Not only does it provide a direct generalization of earlier work such as \cite{Recht09,CaLiMaWr11,HsKaTz11-robust}, but it also unifies RIP-type analysis (or its generalization to restricted strong convexity) such as \cite{CandesTao07,NeRaWaYu10} and irrepresentable (or incoherence) conditions such as \cite{ZhaoYu06,Wainwright09}. In this regard the general theory also includes as special cases some recent work by Candes and Plan that tried to develop non-RIP analysis for $\ell_1$ regularization \cite{CandesPlan09,CandesPlan11}. In fact, even for the simple case of $\ell_1$ regularization, we show that our theory can lead to new and sharper results than existing ones. Finally, we would like to point out that while this paper successfully unifies the irrepresentable (or incoherence) conditions and RIP conditions under the general method of dual certificate, our analysis does not subsume some of the more elaborate analyses such as \cite{ZhangT09} and \cite{YeZ10} as special cases. Those studies employed a different generalization of RIP which we may refer to as the {\em invertibility factor} approach, using the terminology of \cite{YeZ10}. It thus remains open whether it is possible to develop an even more general theory that can include all previous sparse recovery analysis as special cases. \section{Primal-Dual Certificate} As mentioned before, while fragments of the dual certificate idea have appeared before, there is so far no general definition and theory. Therefore in this section we will introduce a formal definition that can be used to analyze (\ref{eq:hbeta}). Recall that the parameter space $\Omega$ lives in a separable Banach space ${\bar\Omega}$. Let ${\bar\Omega}^*$ be the dual Banach space of ${\bar\Omega}$ containing all continuous linear functions $u(\beta)$ defined on ${\bar\Omega}$. We use $\innerprod{u}{\beta} = u(\beta)$ to denote the bi-linear function defined on ${\bar\Omega}^*\times {\bar\Omega}$. If ${\bar\Omega}$ is a Euclidean space, then $\innerprod{\cdot}{\cdot}$ is just an inner product. In this notation $\innerprod{\cdot}{\cdot}$, the first argument is always in the dual space ${\bar\Omega}^*$ and the second in the primal space ${\bar\Omega}$. This allows us to keep track of the geometrical interpretation of our analysis even when ${\bar\Omega}$ is a Euclidean or Hilbert space with ${\bar\Omega}^*={\bar\Omega}$. In what follows, we will endow ${\bar\Omega}^*$ with the weak$^*$ topology: $u_k\to u$ iff $\innerprod{u_k-u}{\beta}\to 0$ for all $\beta\in{\bar\Omega}$. This is equivalent to $\|u_k-u\|_D\to 0$ for any norm $\|\cdot\|_D$ in ${\bar\Omega}^*$ when ${\bar\Omega}$ is a Euclidean space. In the following, given any convex function $\phi(\cdot)$, we use the notation $\nabla \phi(\beta)\in{\bar\Omega}^*$ to denote a subgradient of $\phi(\beta)$ with respect to the geometry of ${\bar\Omega}$ in the following sense: \[ \phi(\beta') \geq \phi(\beta) + \innerprod{\nabla \phi(\beta)}{\beta'-\beta},\ \forall\ \beta'. \] By convention, we also use $\partial \phi(\beta)$ to denote its sub-differential (or the set of subgradients at $\beta$). The sub-differential is always a closed convex set in ${\bar\Omega}^*$. Moreover, we define the Bregman divergence with respect to $\phi$ as: \[ D_\phi(\beta,\beta')=\phi(\beta)-\phi(\beta')- \innerprod{\nabla \phi(\beta')}{\beta-\beta'} . \] Clearly, by the definition of subgradient, the Bregman divergence is non-negative. These quantities are standard in convex analysis; for example, additional details can be found in \cite{Roc70}.
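As a small worked example (not in the original development, but easily verified): for the least squares loss $L(\beta)=\|y-X\beta\|_2^2$ from (\ref{eq:L1}) one has $\nabla L(\beta')=-2X^\top(y-X\beta')$, and a direct expansion gives
\[
D_L(\beta,\beta')=\|y-X\beta\|_2^2-\|y-X\beta'\|_2^2+2(y-X\beta')^\top X(\beta-\beta')=\|X(\beta-\beta')\|_2^2 ,
\]
so in this case the Bregman divergence is symmetric in its two arguments. This simple identity underlies the special treatment of quadratic losses below.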
Instead of working directly with the target ${\bar{\beta}}_*$, we consider an approximation ${\bar{\beta}} \in \Omega$ of ${\bar{\beta}}_*$, which may have certain nice properties that will become clear later on. Nevertheless, for the purpose of understanding the main idea, it may be convenient to simply assume that ${\bar{\beta}}={\bar{\beta}}_*$ (thus ${\bar{\beta}}_* \in \Omega$) during a first reading. Given any ${\bar{\beta}} \in \Omega$ and a subset $G \subset \partial R({\bar{\beta}})$, we define a modified regularizer \[ R_G(\beta) = R({\bar{\beta}}) + \sup_{v \in G} \innerprod{v}{\beta-{\bar{\beta}}} . \] It is clear that $R_G(\beta) \leq R(\beta)$ for all $\beta$ and $R({\bar{\beta}})=R_G({\bar{\beta}})$. The value of $R_G(\beta)$ is unchanged if $G$ is replaced by the closure of its convex hull. Moreover, if $G$ is convex and closed, then the sub-differential of $R_G(\beta)$ is identical to $G$ at ${\bar{\beta}}$ and contained in $G$ elsewhere. In fact, by checking the condition $R_G(b)-R_G(\beta)\ge \innerprod{v}{b-\beta}$ for $b=t\beta$ and $b={\bar{\beta}}$, we see that for closed convex $G$ \begin{eqnarray*} \partial R_G(\beta) = \big\{v\in G: R_G(\beta)=\innerprod{v}{\beta} = R({\bar{\beta}})+\innerprod{v}{\beta-{\bar{\beta}}} \big\}. \end{eqnarray*} In what follows, we pick a closed convex $G$ unless otherwise stated. In optimization, $\beta$ is generally referred to as the primal variable and $\nabla L(\beta)$ as the corresponding dual variable, since they live in ${\bar\Omega}$ and ${\bar\Omega}^*$ respectively. An optimal solution ${\hat{\beta}}$ of (\ref{eq:hbeta}) satisfies the KKT condition when its dual variable satisfies the relationship $-\nabla L({\hat{\beta}}) \in \partial R({\hat{\beta}})$. However, for the general formulation (\ref{eq:hbeta}), this condition can be rather hard to work with. Therefore in order to analyze (\ref{eq:hbeta}), we introduce the notion of {\em primal-dual certificate}, which is a primal variable $Q_G$ satisfying a simplified dual constraint $-\nabla L(Q_G) \in \partial R({\bar{\beta}})$. To be consistent with some earlier literature, one may refer to the quantity $-\nabla L(Q_G)$ as the corresponding dual certificate. For notational simplicity, without causing confusion, in this paper we will also refer to $Q_G$ as a dual certificate. \subsection{Primal Dual Certificate Sparse Recovery Bound} The formal definition of dual certificate is given in Definition~\ref{def:primal-dual-certificate}. In this definition, we also allow an approximate dual certificate, which may have a small violation of the dual constraint; such an approximation can be convenient for some applications. \begin{definition}[Primal-Dual Certificate] \label{def:primal-dual-certificate} Given any ${\bar{\beta}} \in \Omega$ and a closed convex subset $G \subset \partial R({\bar{\beta}})$. A $\delta$-approximate primal-dual (or simply dual) certificate $Q_G$ (with respect to $G$) of (\ref{eq:hbeta}) is a primal variable that satisfies the following condition: \begin{equation} - \nabla L(Q_G) + \delta \in G . \label{eq:dual-certificate} \end{equation} If $\delta=0$, we call $Q_G$ an exact primal-dual certificate or simply a dual certificate. \end{definition} We may choose a convex function $\bar{L}(\beta)$ that is close to $L(\beta)$ and use it to construct an approximate dual certificate with \begin{equation} Q_G = \mathop{\rm arg\, min}_\beta\big\{\bar{L}(\beta)+R_G(\beta)\big\}.
\label{eq:dual-certificate-simple} \end{equation} Since $- \nabla \bar{L}(Q_G) \in \partial R_G(Q_G)\subseteq G$, (\ref{eq:dual-certificate}) holds for $\delta = \nabla L(Q_G)-\nabla\bar{L}(Q_G)$. However, this choice may not always lead to the best result in the analysis of the estimator (\ref{eq:hbeta}), especially when $- \nabla L(Q_G) + \delta = - \nabla \bar{L}(Q_G)$ is an interior point of $G$. Possible choices of $\bar{L}(\beta)$ include $\gamma L(\beta)$ with a constant $\gamma$, its expectation, and their approximations. Note that we do not assume that $Q_G \in \Omega$. In order to approximately enforce such a constraint, we may replace $L(\beta)$ by $L(\beta) + L_\Delta(\beta)$ for any convex function $L_\Delta(\beta) \geq 0$ such that $L_\Delta(\beta)=0$ when $\beta \in \Omega$. If $L_\Delta(\beta)$ is sufficiently large, then we can construct a $Q_G$ that is approximately contained in $\Omega$. More detailed dual certificate construction techniques are discussed in Section~\ref{sec:construction}. An essential result that relates a primal-dual certificate $Q_G$ to ${\hat{\beta}}$ is stated in the following fundamental theorem, which says that if $Q_G$ is close to ${\bar{\beta}}$, then ${\hat{\beta}}$ is close to ${\bar{\beta}}$ (when $\delta=0$). In order to apply this theorem, we shall choose ${\bar{\beta}} \approx {\bar{\beta}}_*$. \begin{theorem}[Primal-Dual Certificate Sparse Recovery Bound] Given an approximate primal-dual certificate $Q_G$ in Definition~\ref{def:primal-dual-certificate}, we have the following inequality: \[ D_L({\bar{\beta}},{\hat{\beta}}) + D_L({\hat{\beta}},Q_G) + [ R({\hat{\beta}})- R_G({\hat{\beta}})] \leq D_L({\bar{\beta}},Q_G) - \innerprod{\delta}{{\hat{\beta}}-{\bar{\beta}}} . \] \label{thm:dual_certificate-recovery} \end{theorem} The proof is a simple application of the following two propositions. \begin{proposition} \label{prop:bregman} For any convex function $L(\cdot)$, the following identity holds for the Bregman divergence: \[ D_L(a,b) + D_L(b,c) - D_L(a,c)= \innerprod{\nabla L(c) - \nabla L(b)}{a-b} . \] \end{proposition} \begin{proof} This can be easily verified using simple algebra. We can expand the left hand side as follows. \begin{align*} & D_L(a,b) + D_L(b,c) - D_L(a,c) \\ =& \left[ L(a) - L(b) - \innerprod{\nabla L(b)}{a-b} \right] + \left[ L(b) - L(c) - \innerprod{\nabla L(c)}{b-c} \right] - \left[ L(a) - L(c) - \innerprod{\nabla L(c)}{a-c} \right] \\ =& - \innerprod{\nabla L(b)}{a-b} - \innerprod{\nabla L(c)}{b-c} + \innerprod{\nabla L(c)}{a-c} . \end{align*} This can be simplified to obtain the right hand side. \end{proof} \begin{proposition} \label{prop:subgrad} Let ${\tilde{\beta}}={t} {\hat{\beta}} + (1-{t}) {\bar{\beta}}$ for some ${t} \in [0,1]$. Then, given any $v \in G$, we have \[ \innerprod{- v - \nabla L({\tilde{\beta}})}{{\bar{\beta}} - {\tilde{\beta}}} \leq R_G({\tilde{\beta}}) - R({\tilde{\beta}}) . \] \end{proposition} \begin{proof} The definition of ${\hat{\beta}}$ and the convexity of (\ref{eq:hbeta}) imply that ${\tilde{\beta}}$ achieves the minimum objective value $L(\beta)+R(\beta)$ for $\beta$ that lies in the line segment between ${\tilde{\beta}}$ and ${\bar{\beta}}$. This is equivalent to $\innerprod{\nabla L({\tilde{\beta}}) + \nabla R({\tilde{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}}\geq 0$. Since $R(\cdot)$ is convex, this implies $\innerprod{\nabla L({\tilde{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}} + R({\bar{\beta}}) \ge R({\tilde{\beta}})$.
Thus, \begin{eqnarray*} \innerprod{-v-\nabla L({\tilde{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}}\le \innerprod{v}{{\tilde{\beta}}-{\bar{\beta}}}+R({\bar{\beta}})-R({\tilde{\beta}}) \le R_G({\tilde{\beta}})-R({\tilde{\beta}}) \end{eqnarray*} by the definition of $R_G(\beta)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:dual_certificate-recovery}] We apply Proposition~\ref{prop:bregman} with $a={\bar{\beta}}$, $b={\hat{\beta}}$, and $c=Q_G$ to obtain: \[ D_L({\bar{\beta}},{\hat{\beta}}) + D_L({\hat{\beta}},Q_G) - D_L({\bar{\beta}},Q_G) = \innerprod{\nabla L(Q_G) - \nabla L({\hat{\beta}})}{{\bar{\beta}}-{\hat{\beta}}} =\innerprod{-v+\delta - \nabla L({\hat{\beta}})}{{\bar{\beta}}-{\hat{\beta}}} , \] where $v \in G$. We can now apply Proposition~\ref{prop:subgrad} with $t=1$ to obtain the desired bound. \end{proof} The result shows that if we have a good bound on $D_L({\bar{\beta}},Q_G)$, then it is possible to obtain a bound on $D_L({\bar{\beta}},{\hat{\beta}})$. In general, we also choose $G$ so that the difference $R({\hat{\beta}})-R_G({\hat{\beta}})$ can effectively control the magnitude of ${\hat{\beta}}$ outside of the support (or a tangent space) of ${\bar{\beta}}$. \subsection{Primal Dual Certificate Sparse Oracle Inequality} It is also possible to derive a stronger form of oracle inequality for special $L$ with a more refined definition of dual certificate. \begin{definition}[Generalized Primal-Dual Certificate] \label{def:primal-dual-certificate-2} Given ${\bar{\beta}} \in \Omega$, a closed convex set $G \subset \partial R({\bar{\beta}})$, a convex function $\bar{L}$ on ${\bar\Omega}$, and an additional parameter $\beta^* \in {\bar\Omega}$. A generalized $\delta$-approximate primal-dual (or simply dual) certificate $Q_G$ with respect to $(L,\bar{L},{\bar{\beta}},\beta^*)$ is a primal variable that satisfies the following condition: \begin{equation} - \nabla \bar{L}_*(Q_G) +\delta \in G , \label{eq:dual-certificate-2} \end{equation} where $\bar{L}_*(\beta)= \bar{L}(\beta) - \innerprod{\nabla \bar{L}({\bar{\beta}})-\nabla L(\beta^*)}{\beta-{\bar{\beta}}}$. \end{definition} Note that if $\innerprod{\cdot}{\cdot}$ is an inner product and $L$ is a quadratic function of the form \begin{equation} L(\beta) = \innerprod{H\beta - z}{\beta} \label{eq:quadratic-loss} \end{equation} for some self-adjoint operator $H$ and vector $z$, then $D_L(\beta,\beta')=\innerprod{H (\beta - \beta')}{\beta-\beta'}$. In this case, we may simply take $\bar{L}(\cdot)= L(\cdot)$. For other cost functions, it will be useful to take $\bar{L}(\cdot)=\gamma L(\cdot)$ with $\gamma <1$. The reason will become clear later on. Definition~\ref{def:primal-dual-certificate-2} is equivalent to Definition~\ref{def:primal-dual-certificate} with $L(\beta)$ replaced by the redefined convex function $\bar{L}_*(\beta)= \bar{L}(\beta) - \innerprod{\nabla \bar{L}({\bar{\beta}})-\nabla L(\beta^*)}{\beta-{\bar{\beta}}}$. We may consider $\beta^*$ to be the true target ${\bar{\beta}}_*$ (or its approximation) in that we can assume that $\nabla L(\beta^*)$ is small although $\beta^*$ may not be sparse. The main advantage of Definition~\ref{def:primal-dual-certificate-2} is that it allows comparison to an arbitrary sparse approximation ${\bar{\beta}}$ of $\beta^*$ even when $\nabla L({\bar{\beta}})$ is not small --- the definition only requires $\nabla \bar{L}_*({\bar{\beta}})=\nabla L(\beta^*)$ to be small.
This implies that ${\bar{\beta}}$ may have a dual certificate $Q_G$ with respect to $\bar{L}_*(\cdot)$ that is close to ${\bar{\beta}}$ (see the error bounds in Section~\ref{sec:construction}). The following result shows that one can obtain an oracle inequality that generalizes Theorem~\ref{thm:dual_certificate-recovery}. In order to apply this theorem, we should choose $\beta^* \approx {\bar{\beta}}_*$. \begin{theorem}[Primal-Dual Certificate Sparse Oracle Inequality] Given a generalized $\delta$-approximate primal-dual certificate $Q_G$ in Definition~\ref{def:primal-dual-certificate-2}, we have for all ${\tilde{\beta}}$ in the line segment between ${\bar{\beta}}$ and ${\hat{\beta}}$: \begin{eqnarray*} && D_L({\bar{\beta}},{\tilde{\beta}})+ D_L({\tilde{\beta}},\beta^*) + D_{\bar{L}}({\tilde{\beta}},Q_G) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] \cr &\leq& D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})+ D_L({\bar{\beta}},\beta^*) + D_{\bar{L}}({\bar{\beta}},Q_G) - \innerprod{\delta}{{\tilde{\beta}}-{\bar{\beta}}}. \end{eqnarray*} \label{thm:dual_certificate-oracle} \end{theorem} \begin{proof} We apply Proposition~\ref{prop:bregman} with $a={\bar{\beta}}$, $b={\tilde{\beta}}$, and $c=\beta^*$ to obtain: \[ D_L({\bar{\beta}},{\tilde{\beta}})+ D_L({\tilde{\beta}},\beta^*) - D_L({\bar{\beta}},\beta^*) =\innerprod{\nabla L(\beta^*) - \nabla L({\tilde{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}}. \] Similarly, we can apply Proposition~\ref{prop:bregman} with $a={\tilde{\beta}}$, $b={\bar{\beta}}$, and $c=Q_G$ to $\bar{L}$ to obtain: \[ D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})+ D_{\bar{L}}({\bar{\beta}},Q_G) - D_{\bar{L}}({\tilde{\beta}},Q_G) =\innerprod{\nabla \bar{L}(Q_G) - \nabla \bar{L}({\bar{\beta}})}{{\tilde{\beta}}-{\bar{\beta}}}. \] By subtracting the above two displayed equations, we obtain \begin{align*} &D_L({\bar{\beta}},{\tilde{\beta}})+ D_L({\tilde{\beta}},\beta^*) - D_L({\bar{\beta}},\beta^*) -D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})- D_{\bar{L}}({\bar{\beta}},Q_G) + D_{\bar{L}}({\tilde{\beta}},Q_G) \\ =& \innerprod{\nabla L(\beta^*) - \nabla L({\tilde{\beta}}) +\nabla \bar{L}(Q_G) - \nabla \bar{L}({\bar{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}}. \end{align*} Since $\nabla \bar{L}(Q_G) + \nabla L(\beta^*) - \nabla \bar{L}({\bar{\beta}})=-v +\delta$ for some $v\in G$, the right hand side can be written as $\innerprod{-v+\delta-\nabla L({\tilde{\beta}})}{{\bar{\beta}}-{\tilde{\beta}}}$. The conclusion then follows from Proposition~\ref{prop:subgrad}. \end{proof} Note that if we choose $L=\bar{L}$ and $\beta^*={\bar{\beta}}$ in Theorem~\ref{thm:dual_certificate-oracle}, then Definition~\ref{def:primal-dual-certificate-2} is consistent with Definition~\ref{def:primal-dual-certificate}, and Theorem~\ref{thm:dual_certificate-oracle} becomes Theorem~\ref{thm:dual_certificate-recovery}. Since $\bar{L}_*(\beta)-\bar{L}(\beta)$ is linear in $\beta$, $D_{\bar{L}}({\bar{\beta}},Q_G) = D_{\bar{L}_*}({\bar{\beta}},Q_G)$. Moreover, when $\nabla L(\beta^*)$ is small, $\nabla\bar{L}_*({\bar{\beta}})$ is small by the choice of $\bar{L}_*(\cdot)$ in Definition~\ref{def:primal-dual-certificate-2}, so that $D_{\bar{L}_*}({\bar{\beta}},Q_G)$ is small when $\bar{L}_*$ has sufficient convexity near ${\bar{\beta}}$. This motivates a choice of $\bar{L}(\cdot)$ satisfying $D_{\bar{L}}(\beta,{\bar{\beta}})\le D_L({\bar{\beta}},\beta)$ for all $\beta\in\Omega$ whenever such a choice is available and reasonably convex near ${\bar{\beta}}$. This leads to the following corollary.
\begin{corollary} \label{cor:dual_certificate-oracle} Given a generalized exact primal-dual certificate $Q_G$ in Definition~\ref{def:primal-dual-certificate-2} with $\bar{L}(\cdot)$ satisfying $D_L({\bar{\beta}},\beta)\ge D_{\bar{L}}(\beta,{\bar{\beta}})$ for all $\beta\in\Omega$. Then, \[ D_L({\hat{\beta}},\beta^*) + [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*) + D_{\bar{L}_*}({\bar{\beta}},Q_G). \] \end{corollary} In some problems, Corollary \ref{cor:dual_certificate-oracle} is applicable with $\bar{L}(\cdot)=\gamma L(\cdot)$ for some $\gamma \in (0,1]$. In the special case that $L(\cdot)$ is a quadratic function as in (\ref{eq:quadratic-loss}), we have $D_L(\beta,{\bar{\beta}})=D_L({\bar{\beta}},\beta)$. Therefore we may take $\gamma=1$, and the bound in Corollary~\ref{cor:dual_certificate-oracle} can be further simplified to \[ D_L({\hat{\beta}},\beta^*)+ [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ D_{L}({\bar{\beta}},Q_G) . \] If $L(\cdot)$ comes from a generalized linear model of the form $L(\beta)=\sum_{i=1}^n \ell_i(\innerprod{x_i}{\beta})$, with $x_i\in{\bar\Omega}^*$ and second-order differentiable convex scalar functions $\ell_i$, then the condition $D_L({\bar{\beta}},\beta) \geq \gamma D_L(\beta,{\bar{\beta}})$ is satisfied as long as: \[ \inf_{\beta \in \Omega} \frac{D_L({\bar{\beta}},\beta)}{D_L(\beta,{\bar{\beta}})} \geq \inf_{\{\beta',\beta''\} \in \Omega} \frac{\sum_{i=1}^n \ell_i''(\innerprod{x_i}{\beta'}) \innerprod{x_i}{\beta-{\bar{\beta}}}^2} {\sum_{i=1}^n \ell_i''(\innerprod{x_i}{\beta''}) \innerprod{x_i}{\beta-{\bar{\beta}}}^2} \geq \inf_{i \in \{1,\ldots,n\}} \inf_{\{\beta',\beta''\} \in \Omega}\frac{\ell_i''(\innerprod{x_i}{\beta'})}{\ell_i''(\innerprod{x_i}{\beta''})}\ge\gamma. \] This means that the condition of Corollary~\ref{cor:dual_certificate-oracle} holds as long as for all $i$ and all $\beta,\beta' \in \Omega$: $\ell_i''(\innerprod{x_i}{\beta}) \geq \gamma \ell_i''(\innerprod{x_i}{\beta'})$. For example, for logistic regression $\ell_i(t)= \ln(1+\exp(-t))$ with $\sup_i\sup_{\beta\in\Omega}|\innerprod{x_i}{\beta}|\leq A$, we can pick $\gamma=4/(2+\exp(-A)+\exp(A))$. This choice of $\gamma$ can be improved if we have additional constraints on ${\hat{\beta}}$; an example is given in Corollary~\ref{cor:recovery-global-dc-oracle}. In Section~\ref{sec:genlin-example}, we will present a more concrete and elaborate analysis for generalized linear models. Note that the result of Corollary~\ref{cor:dual_certificate-oracle} gives an oracle inequality that compares $D_L({\hat{\beta}},\beta^*)$ to $D_L({\bar{\beta}},\beta^*)$ with leading coefficient one. The bound is meaningful as long as ${\bar{\beta}}$ has a good dual certificate $Q_G$ under $\bar{L}_*(\beta)$ that is close to ${\bar{\beta}}$. The possibility of obtaining oracle inequalities of this kind with leading coefficient one was first noticed in \cite{KoTsLo10} under restricted strong convexity. The advantage of such an oracle inequality is that we do not require $\beta^*$ to be sparse, but rather the competitor ${\bar{\beta}}$ to be sparse --- which implies that the dual certificate $Q_G$ is close to ${\bar{\beta}}$ when $\bar{L}_*(\beta)$ is sufficiently convex. Here we generalize the result of \cite{KoTsLo10} in two ways. First, it is possible to deal with non-quadratic loss. Second, we only require the existence of a good dual certificate $Q_G$, which is a weaker requirement than restricted strong convexity in \cite{KoTsLo10}.
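For completeness, a short verification of the logistic constant quoted above: with $\ell_i(t)=\ln(1+\exp(-t))$ one computes
\[
\ell_i''(t)=\frac{e^{-t}}{(1+e^{-t})^2}=\frac{1}{2+e^t+e^{-t}} ,
\]
which on $|t|\le A$ is maximized at $t=0$ (with value $1/4$) and minimized at $t=\pm A$ (with value $(2+e^A+e^{-A})^{-1}$), so the worst-case ratio of second derivatives is exactly $4/(2+\exp(-A)+\exp(A))$, as claimed.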
Generally speaking, the dual certificate technique allows us to obtain an oracle inequality for $D_L({\hat{\beta}},\beta^*)+ [R({\hat{\beta}})- R_G({\hat{\beta}}) ]$ directly. If we are interested in other results such as the parameter estimation bound $\|{\hat{\beta}}-\beta^*\|$, then additional estimates will be needed on top of the dual certificate theory of this paper. Instead of working out general results, we will study this problem for the structured $\ell_1$ regularizer in Section~\ref{sec:struct-L1}. \section{Constructing Primal-Dual Certificate} \label{sec:construction} We will present some general results for estimating $D_L({\bar{\beta}},Q_G)$ under various assumptions. For notational simplicity, the main technical derivation considers Definition~\ref{def:primal-dual-certificate}, with dual certificate $Q_G$ with respect to $L(\beta)$. One can then apply these results to the dual certificate $Q_G$ in Definition~\ref{def:primal-dual-certificate-2}. \subsection{Global Restricted Strong Convexity} \label{sec:RSC} We first consider the following construction of primal-dual certificate. \begin{proposition} \label{prop:global-dc} Let \begin{equation} Q_G = \arg\min_{\beta} \left[ L(\beta) + R_G(\beta) \right] , \label{eq:dc-opt} \end{equation} then $Q_G$ is an exact primal-dual certificate of (\ref{eq:hbeta}). \end{proposition} \begin{proof} It is clear from the optimality condition of (\ref{eq:dc-opt}) that $\nabla L(Q_G) + v=0$ for some $v \in G$. \end{proof} The symmetrized Bregman divergence is defined as \begin{eqnarray*} D_L^s(\beta,{\bar{\beta}})=D_L(\beta,{\bar{\beta}}) + D_L({\bar{\beta}},\beta) = \innerprod{\nabla L(\beta) - \nabla L({\bar{\beta}})}{\beta-{\bar{\beta}}}. \end{eqnarray*} We introduce the concept of restricted strong convexity to bound $D_L^s({\bar{\beta}},Q_G)$. \begin{definition}[Restricted Strong Convexity]\label{def:RSC} We define the following quantity, which we refer to as the global restricted strong convexity (RSC) constant: \[ \gamma_L({\bar{\beta}};r,G,\|\cdot\|)= \inf \left\{ \frac{D_L^s(\beta,{\bar{\beta}})}{\|\beta-{\bar{\beta}}\|^2} : 0<\|\beta-{\bar{\beta}}\|\leq r; \; D_L^s(\beta,{\bar{\beta}})+\sup_{u\in G} \innerprod{u+\nabla L({\bar{\beta}})}{\beta-{\bar{\beta}}} \leq 0 \right\} , \] where $\|\cdot\|$ is a norm in ${\bar\Omega}$, $r>0$ and $G \subset \partial R({\bar{\beta}})$. \end{definition} The parameter $r$ is introduced for localized analysis, where the Hessian may be small when $\|\beta-{\bar{\beta}}\| >r$. For a least squares loss, which has a constant Hessian, one can just pick $r=\infty$. We recall the concept of dual norm in ${\bar\Omega}$: $\|\cdot\|_D$ is the dual norm of $\|\cdot\|$ if \[ \|u\|_D = \sup_{\|\beta\|=1} \innerprod{u}{\beta} . \] This implies the inequality $\innerprod{u}{\beta} \leq \|u\|_D \|\beta\|$. \begin{theorem}[Dual Certificate Error Bound under RSC] Let $\|\cdot\|$ be a norm in ${\bar\Omega}$ and $\|\cdot\|_D$ its dual norm in ${\bar\Omega}^*$. Consider ${\bar{\beta}} \in \Omega$ and a closed convex $G \subset \partial R({\bar{\beta}})$. Let $\Delta_r=\gamma_L({\bar{\beta}};r,G,\|\cdot\|)^{-1}\inf_{u \in G} \|u+\nabla L({\bar{\beta}})\|_D$. If $\Delta_r < r$ for some $r>0$, then for any $Q_G$ given by (\ref{eq:dc-opt}), \[ D_L^s({\bar{\beta}},Q_G) \leq \gamma_L({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2, \quad \|{\bar{\beta}}-Q_G\| \leq \Delta_r .
\] \label{thm:dual_certificate-error} \end{theorem} \begin{proof} By the optimality condition (\ref{eq:dc-opt}) of $Q_G$, there exists $v\in\partial R_G(Q_G)$ such that $\nabla L(Q_G) + v = 0$. For $v\in\partial R_G(Q_G)$, $R_G(Q_G)-R({\bar{\beta}})=\innerprod{v}{Q_G-{\bar{\beta}}} \geq \sup_{u\in G}\innerprod{u}{Q_G-{\bar{\beta}}}$. Therefore, \begin{eqnarray}\label{eq:Q-RSC} D_L^s(Q_G,{\bar{\beta}}) = \innerprod{\nabla L(Q_G)-\nabla L({\bar{\beta}})}{Q_G-{\bar{\beta}}} \leq -\innerprod{u+\nabla L({\bar{\beta}})}{Q_G-{\bar{\beta}}}, \forall \ u \in G. \end{eqnarray} Let ${\tilde Q}_G={\bar{\beta}}+t(Q_G-{\bar{\beta}})$ where we pick $t=1$ if $\|Q_G-{\bar{\beta}}\| \leq r$ and $t \in (0,1)$ with $\|{\tilde Q}_G-{\bar{\beta}}\|=r$ otherwise. Let $f(t) = D_L({\tilde Q}_G,{\bar{\beta}})$ so that $D_L^s({\tilde Q}_G,{\bar{\beta}}) = tf'(t)$. The convexity of $L(\beta)$ implies $f'(t) \le f'(1)=D_L^s(Q_G,{\bar{\beta}})$. It follows that \begin{eqnarray*} D_L^s({\tilde Q}_G,{\bar{\beta}}) +\innerprod{u+\nabla L({\bar{\beta}})}{{\tilde Q}_G-{\bar{\beta}}} \le t\{D_L^s(Q_G,{\bar{\beta}})+\innerprod{u+\nabla L({\bar{\beta}})}{Q_G-{\bar{\beta}}}\}\le 0, \end{eqnarray*} which implies the restricted cone condition for ${\tilde Q}_G$ in the definition of RSC. Thus, \[ \gamma_L({\bar{\beta}};r,G,\|\cdot\|) \|{\tilde Q}_G-{\bar{\beta}}\|^2 -\|u+\nabla L({\bar{\beta}})\|_D \|{\tilde Q}_G-{\bar{\beta}}\| \leq 0 . \] Now by moving the term $\|u+\nabla L({\bar{\beta}})\|_D \|{\tilde Q}_G-{\bar{\beta}}\|$ to the right hand side and taking the $\inf$ over $u$, we obtain $\gamma_L({\bar{\beta}};r,G,\|\cdot\|)\|{\tilde Q}_G-{\bar{\beta}}\| \leq \inf_{u\in G} \|u+\nabla L({\bar{\beta}})\|_D =\gamma_L({\bar{\beta}};r,G,\|\cdot\|) \Delta_r$. Since $\Delta_r < r$, we have $t=1$ and ${\tilde Q}_G=Q_G$. This means that we always have $\|Q_G-{\bar{\beta}}\| \leq \Delta_r <r$. Consequently, (\ref{eq:Q-RSC}) gives $D_L^s(Q_G,{\bar{\beta}}) \leq \inf_{u\in G}\|u+\nabla L({\bar{\beta}})\|_D\Delta_r$. This completes the proof. \end{proof} \begin{remark} Although, for simplicity, the proof of Theorem~\ref{thm:dual_certificate-error} implicitly assumes that the solution of (\ref{eq:dc-opt}) is finite, this extra assumption is not necessary with a slightly more complex argument (which we exclude from the proof in order not to obscure the main idea). An easy way to see this is by adding a small (unrestricted) strongly convex term $L_\Delta(\beta)$ to $L$ and considering the dual certificate for the modified function $\tilde{L}(\beta)=L(\beta)+L_\Delta(\beta)$. Since the solution of (\ref{eq:dc-opt}) with $\tilde{L}(\beta)$ is finite, we can apply the proof to $\tilde{L}(\beta)$ and then simply let $L_\Delta(\beta) \to 0$. \end{remark} Note that if $\nabla L(Q_G)$ is not unique, then the same value can be used both in Theorem~\ref{thm:dual_certificate-recovery} and in Theorem~\ref{thm:dual_certificate-error}. Since $D_L({\bar{\beta}},Q_G)\le D_L^s({\bar{\beta}},Q_G)$, this implies the following bound: \begin{corollary} Under the conditions of Theorem~\ref{thm:dual_certificate-error}, we have \[ D_L({\bar{\beta}},{\hat{\beta}}) + [ R({\hat{\beta}})- R_G({\hat{\beta}})] \leq \gamma_L({\bar{\beta}};r,G,\|\cdot\|)^{-1}\inf_{u \in G} \|u+\nabla L({\bar{\beta}})\|_D^2 . \] \label{cor:recovery-global-dc-error} \end{corollary} Similarly, we may apply Theorem~\ref{thm:dual_certificate-oracle} and Theorem~\ref{thm:dual_certificate-error} with $L(\beta)$ replaced by $\bar{L}_*(\beta)$ as in Definition~\ref{def:primal-dual-certificate-2}.
This implies the following general recovery bound. \begin{corollary} Let $\|\cdot\|$ be a norm in ${\bar\Omega}$ and $\|\cdot\|_D$ its dual norm in ${\bar\Omega}^*$. Consider ${\bar{\beta}} \in \Omega$ and a closed convex $G \subset \partial R({\bar{\beta}})$. Consider $\bar{L}(\beta)$ as in Definition~\ref{def:primal-dual-certificate-2}, and define \[ \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|)= \inf \left\{ \frac{D_{\bar{L}}^s(\beta,{\bar{\beta}})}{\|{\bar{\beta}}-\beta\|^2} : 0<\|\beta-{\bar{\beta}}\|\leq r; \; D_{\bar{L}}^s(\beta,{\bar{\beta}})+\sup_{u \in G} \innerprod{u+\nabla L(\beta^*)}{\beta-{\bar{\beta}}} \leq 0 \right\} \] and $\Delta_r = (\gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|))^{-1}\inf_{u \in G} \|u+\nabla L(\beta^*)\|_D$. Assume that for some $r>0$ we have $\Delta_r < r$, and that there exists $\tilde{r}> D_L({\bar{\beta}},\beta^*)+ \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2$ such that for all $\beta \in \Omega$: $D_L(\beta,\beta^*) + [R(\beta)- R_G(\beta) ] \leq \tilde{r}$ implies $D_L({\bar{\beta}},\beta) \geq D_{\bar{L}}(\beta,{\bar{\beta}})$. Then, \[ D_L({\hat{\beta}},\beta^*) + [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2. \] \label{cor:recovery-global-dc-oracle} \end{corollary} \begin{proof} Let $\bar{L}_*(\beta)= \bar{L}(\beta) - \innerprod{\nabla \bar{L}({\bar{\beta}})-\nabla L(\beta^*)}{\beta-{\bar{\beta}}}$ and define \[ Q_G = \arg\min_{\beta} \left[ \bar{L}_*(\beta) + R_G(\beta) \right] . \] Then $Q_G$ is a generalized dual certificate in Definition~\ref{def:primal-dual-certificate-2}. Note that $D_{\bar{L}_*}(\beta,\beta')=D_{\bar{L}}(\beta,\beta')$ and $\nabla \bar{L}_*({\bar{\beta}})=\nabla L(\beta^*)$. The conditions of the corollary and Theorem~\ref{thm:dual_certificate-error}, applied with $L$ replaced by $\bar{L}_*$, imply that $\|Q_G-{\bar{\beta}}\| \leq r$ and $D_{\bar{L}}({\bar{\beta}},Q_G) \leq \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2$. Now we simply apply Theorem~\ref{thm:dual_certificate-oracle} to obtain that for all $t \in [0,1]$ and ${\tilde{\beta}}= {\bar{\beta}} + t ({\hat{\beta}}-{\bar{\beta}})$: \[ D_L({\tilde{\beta}},\beta^*) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2 + [D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})-D_L({\bar{\beta}},{\tilde{\beta}})] . \] It is clear that when $t=0$, we have $D_L({\tilde{\beta}},\beta^*) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] < \tilde{r}$. If the condition $D_L({\tilde{\beta}},\beta^*) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] \leq \tilde{r}$ holds for $t=1$, then the desired bound is already proved due to the condition $D_L({\bar{\beta}},{\tilde{\beta}}) \geq D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})$. Otherwise, by continuity there exists $t \in [0,1]$ such that $D_L({\tilde{\beta}},\beta^*) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] = \tilde{r}$. However, this is impossible because the same argument gives \[ D_L({\tilde{\beta}},\beta^*) + [R({\tilde{\beta}})- R_G({\tilde{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ \gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|) \Delta_r^2 < \tilde{r} . \] This proves the desired bound. \end{proof} Corollary~\ref{cor:recovery-global-dc-oracle} gives an oracle inequality with leading coefficient one for general loss functions, but the statement is rather complex. The situation for quadratic loss is much simpler, where we can take $\bar{L}(\beta)=L(\beta)$.
This is because the condition $D_L({\bar{\beta}},{\tilde{\beta}}) \geq D_{\bar{L}}({\tilde{\beta}},{\bar{\beta}})$ always holds. We also have a better constant because $D_L^s(\beta,\beta')= 2 D_L(\beta,\beta')=2D_L(\beta',\beta)$. \begin{corollary} Assume that $L(\beta)$ is a quadratic loss as in (\ref{eq:quadratic-loss}). Let $\|\cdot\|_D$ and $\|\cdot\|$ be dual norms, and consider ${\bar{\beta}} \in \Omega$ and a closed convex $G \subset \partial R({\bar{\beta}})$. We have \[ D_L({\hat{\beta}},\beta^*)+ [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ (2\gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|))^{-1}\inf_{u \in G} \|u+\nabla L(\beta^*)\|_D^2 , \] where \[ \gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|)= \inf \left\{ \frac{2D_L(\beta,{\bar{\beta}})}{\|{\bar{\beta}}-\beta\|^2} : 2 D_L(\beta,{\bar{\beta}})+\sup_{u \in G} \innerprod{u+\nabla L(\beta^*)}{\beta-{\bar{\beta}}} \leq 0 \right\} . \] \label{cor:recovery-global-dc-quadratic} \end{corollary} \subsection{Quadratic Loss with Gaussian Random Design Matrix} \label{sec:gordon} While in the general case the estimation of $\gamma_{\bar{L}_*}({\bar{\beta}};r,G,\|\cdot\|)$ may be technically involved, for the special application of compressed sensing with a Gaussian random design matrix and quadratic loss, we can obtain a relatively general and simple bound using Gordon's minimum restricted singular value estimates in \cite{Gordon88}. This section describes the underlying idea. In this section, we consider the quadratic loss function \begin{equation} L(\beta) = \|X \beta - Y\|_2^2 , \label{eq:gaussian-design} \end{equation} where $\beta \in {\mathbb{R}}^p, Y \in {\mathbb{R}}^n$, and $X$ is an $n \times p$ matrix with iid Gaussian entries $N(0,1)$. Here $\innerprod{\cdot}{\cdot}$ is the Euclidean dot product in ${\mathbb{R}}^p$: $\innerprod{u}{v}= u^\top v$ for $u,v \in {\mathbb{R}}^p$. \begin{definition}[Gaussian Width] Given any set ${\cal C} \subset {\mathbb{R}}^p$, we define its Gaussian width as \[ {\mathrm{width}}({\cal C}) = {\mathbf E}_{\epsilon} \sup_{z \in {\cal C}; \|z\|_2=1} \epsilon^\top z , \] where $\epsilon \sim N(0, I_{p \times p})$ and ${\mathbf E}_{\epsilon}$ is the expectation with respect to $\epsilon$. \end{definition} The following estimate of the Gaussian width is based on a computational technique similar to the one used in \cite{ChRePaWi10}. \begin{proposition}\label{prop:gw} Let ${\cal C}=\left\{\beta \in {\mathbb{R}}^p : \sup_{u \in G} \innerprod{u+\nabla L(\beta^*)}{\beta} \leq 0 \right\}$ and $\epsilon \sim N(0,I_{p \times p})$. Then, \[ {\mathrm{width}}({\cal C}) \leq {\mathbf E}_{\epsilon} \inf_{u \in G; \gamma >0} \|\gamma(u+\nabla L(\beta^*)) - \epsilon\|_2. \] \end{proposition} \begin{proof} For all $\beta \in {\cal C}$ with $\|\beta\|_2=1$, $\gamma > 0$, and $u \in G$, let $g= (u+\nabla L(\beta^*))$. We have $\innerprod{g}{\beta}=\innerprod{u + \nabla L(\beta^*)}{\beta} \leq 0$. Therefore, $\epsilon^\top \beta = (\epsilon- \gamma g)^\top \beta +\gamma g^\top \beta \leq (\epsilon- \gamma g)^\top \beta \leq \|\epsilon- \gamma g\|_2$. Since $u$ and $\gamma$ are arbitrary, we have \[ \epsilon^\top \beta \leq \inf_{u \in G; \gamma >0} \|\gamma(u+\nabla L(\beta^*)) - \epsilon\|_2 . \] Taking the expectation with respect to $\epsilon$, we obtain the desired result. \end{proof} The Gaussian width is useful when we apply Gordon's restricted singular value estimates, which give the following result.
\begin{theorem} \label{thm:gordon} Let $f_{\min}(X)=\min_{z \in {\cal C}; \|z\|_2=1} \|X z\|_2$ and $f_{\max}(X)=\max_{z \in {\cal C}; \|z\|_2=1} \|X z\|_2$. Let $\lambda_n=\sqrt{2} \Gamma((n+1)/2)/\Gamma(n/2)$ where $\Gamma(\cdot)$ is the $\Gamma$-function. We have for any $\delta >0$: \[ {\mathbf P} \left[ f_{\min}(X) \leq \lambda_n - {\mathrm{width}}({\cal C}) - \delta \right] \leq {\mathbf P}[ N(0,1)>\delta ] \le 0.5 \exp \left(-\delta^2/2 \right) , \] \[ {\mathbf P} \left[ f_{\max}(X) \geq \lambda_n + {\mathrm{width}}({\cal C}) + \delta \right] \leq {\mathbf P}[ N(0,1)>\delta ] \le 0.5 \exp \left(-\delta^2/2 \right). \] \end{theorem} \begin{proof} Since both $f_{\min}(X)$ and $f_{\max}(X)$ are Lipschitz-1 functions with respect to the Frobenius norm of $X$, we may apply the Gaussian concentration bound \cite{Borell75,Pisier85} to obtain: \[ {\mathbf P} \left[ f_{\min}(X) \leq {\mathbf E} [f_{\min}(X)] - \delta \right] \leq {\mathbf P}[ N(0,1)>\delta ], \] \[ {\mathbf P} \left[ f_{\max}(X) \geq {\mathbf E} [f_{\max}(X)] + \delta \right] \leq {\mathbf P}[ N(0,1)>\delta ]. \] Now we may apply Corollary 1.2 of \cite{Gordon88} to obtain the estimates \[ {\mathbf E} [f_{\min}(X)] \geq \lambda_n - {\mathrm{width}}({\cal C}) , \qquad {\mathbf E} [f_{\max}(X)] \leq \lambda_n + {\mathrm{width}}({\cal C}) , \] which proves the theorem. \end{proof} Note that we have $n/\sqrt{n+1} \leq \lambda_n \leq \sqrt{n}$. Therefore we may replace $\lambda_n-{\mathrm{width}}({\cal C})$ by $n/\sqrt{n+1}-{\mathrm{width}}({\cal C})$ and $\lambda_n + {\mathrm{width}}({\cal C})$ by $\sqrt{n}+{\mathrm{width}}({\cal C})$. By combining Theorem~\ref{thm:gordon} and Proposition~\ref{prop:gw} to estimate $\gamma_{\bar{L}_*}(\cdot)$ in Corollary~\ref{cor:recovery-global-dc-quadratic}, we obtain the following result for Gaussian random projection in compressed sensing. The result sharpens the main ideas of \cite{ChRePaWi10}. \begin{theorem}\label{thm:recovery-gaussian} Let $L(\beta)$ be given by (\ref{eq:gaussian-design}) and $\epsilon\sim N(0,I_{p \times p})$. Suppose the conditions of Theorem~\ref{thm:dual_certificate-error} hold. Then, given any $g,\delta \geq 0$ such that $g+\delta \leq n/\sqrt{n+1}$, with probability at least \[ 1 - \frac{1}{2}\exp \left(-\frac{1}{2} (n/\sqrt{n+1}-g-\delta)^2\right) , \] we have either \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + (4\delta)^{-1}\inf_{u \in G} \|u+\nabla L(\beta^*)\|_2^2 , \] or \[ g < {\mathbf E}_{\epsilon} \inf_{u \in G; \gamma >0} \|\gamma(u+\nabla L(\beta^*)) - \epsilon\|_2 . \] \end{theorem} \begin{proof} Let $\|\cdot\|=\|\cdot\|_D=\|\cdot\|_2$ in Corollary~\ref{cor:recovery-global-dc-quadratic}. We simply note that $\gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|_2)$ is no smaller than $\inf \{2\|X \beta\|_2 : \|\beta\|_2=1, \beta \in {\cal C}\}$, where ${\cal C}=\left\{\beta \in {\mathbb{R}}^p : \sup_{u \in G} \innerprod{u+\nabla L(\beta^*)}{\beta} \leq 0 \right\}$. Let $E_1$ be the event $g \geq {\mathbf E}_{\epsilon} \inf_{u \in G; \gamma >0} \|\gamma(u+\nabla L(\beta^*)) - \epsilon\|_2$. In the event $E_1$, Proposition~\ref{prop:gw} implies $g \geq {\mathrm{width}}({\cal C})$, so that by Theorem~\ref{thm:gordon} \begin{eqnarray*} {\mathbf P}\Big[\gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|_2)\le 2\delta \hbox{ and } E_1 \Big] \le {\mathbf P}\Big[\inf \{\|X \beta\|_2 : \|\beta\|_2=1, \beta \in {\cal C}\}\le \delta \Big| E_1 \Big] \le \frac{1}{2}e^{-(\lambda_n-g-\delta)^2/2}.
\end{eqnarray*} The desired result thus follows from Corollary~\ref{cor:recovery-global-dc-quadratic}. \end{proof} \begin{remark} If $Y= X \beta^* + \epsilon$ with iid Gaussian noise $\epsilon \sim N(0,\sigma^2 I_{n \times n})$, then the error bound in Theorem~\ref{thm:recovery-gaussian} depends on $\inf_{u \in G} \|u+\nabla L(\beta^*)\|_2^2 = \inf_{u \in G} \|u- 2 X^\top \epsilon\|_2^2 \approx 4n\sigma^2 \inf_{u \in G} \|\gamma u-\tilde{\epsilon}\|_2^2$ when $X^\top X/n$ is near orthogonal, where $\gamma=0.5 \sigma^{-1} n^{-1/2}$ and $\tilde{\epsilon} \sim N(0,I_{p \times p})$. In comparison, under the noise-free case $\sigma=0$ (and $\nabla L(\beta^*)=0$), the number of samples required in Gaussian random design is upper bounded by \[ {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \inf_{u \in G} \|\gamma u+ \epsilon\|_2 \] for appropriate $\gamma$. The similarity of the two terms suggests that the error bound in the oracle inequality and the number of samples required in Gaussian design are closely related. \end{remark} \subsection{Tangent Space Analysis} \label{sec:TRSC} In some applications, the restricted strong convexity condition may not hold globally. In this situation, one can further restrict the condition to a subspace ${\cal T}$ of ${\bar\Omega}$ called the {\em tangent space} in the literature. We may regard the tangent space as a generalization of the support set concept for sparse regression. A more formal definition will be presented later in Section~\ref{sec:struct-tangent}. In the current section, it can be motivated by considering the following decomposition of $G$: \begin{equation} G = \{u_0 + u_1 : u_0 \in G_0 \subset G, u_1 \in G_1\} , \label{eq:G-tangent-decomp} \end{equation} where $G_1$ is a convex set that contains zero. Note that we can always take $G_0=G$ and $G_1=\{0\}$; however, this is not an interesting decomposition. The decomposition becomes useful when there exist $G_0$ and $G_1$ such that $G_0$ is small and $G_1$ is large. With this decomposition, we may define the tangent space as: \[ {\cal T} = \{\beta \in {\bar\Omega}: \innerprod{u_1}{\beta}=0 \text{ for all } u_1 \in G_1 \} . \] For simple sparse regression with $\ell_1$ regularization, the tangent space can be taken to be the subspace spanned by the nonzero coefficients of ${\bar{\beta}}$ (that is, the support of ${\bar{\beta}}$). Typically ${\bar{\beta}} \in {\cal T}$ (although this requirement is not essential). With the above defined ${\cal T}$, we may construct a tangent space dual certificate $Q_G^{\cal T}$ given any $u_0 \in G_0$ as: \begin{equation} Q_G^{\cal T}= {\bar{\beta}} + \Delta Q, \quad \Delta Q = \arg\min_{\Delta \beta \in {\cal T}} \left[ L({\bar{\beta}}+\Delta \beta) + \innerprod{u_0}{\Delta \beta} \right] . \label{eq:dc-tangent-opt} \end{equation} Note that one may also define a generalized tangent space dual certificate simply by working with $\bar{L}_*(\beta)= \bar{L}(\beta) - \innerprod{\nabla \bar{L}({\bar{\beta}})-\nabla L(\beta^*)}{\beta-{\bar{\beta}}}$ instead of $L(\beta)$. The idea of tangent space analysis is to verify that the restricted dual certificate $Q_G^{\cal T}$ is a dual certificate. Note that to bound $D_L^s({\bar{\beta}},Q_G^{\cal T})$, we only need to assume restricted strong convexity inside ${\cal T}$, which is weaker than the globally defined restricted convexity of Section~\ref{sec:RSC}.
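As a concrete illustration of the construction (\ref{eq:dc-tangent-opt}), the following minimal sketch (Python with \texttt{numpy}) computes $Q_G^{\cal T}$ for a quadratic loss $L(\beta)=\innerprod{H\beta}{\beta}-\innerprod{z}{\beta}$ in finite dimension, assuming an explicit orthonormal basis $B$ of ${\cal T}$ is available and that the restriction $H_{\cal T}$ is invertible; both assumptions are made only for illustration. It also returns the residual $-\nabla L(Q_G^{\cal T})-u_0$, whose membership in $G_1$ is exactly the sufficient condition stated in the proposition below.
\begin{verbatim}
import numpy as np

def tangent_certificate(H, grad_bar, u0, B):
    # Sketch of (eq. dc-tangent-opt) for the quadratic loss
    # L(beta) = <H beta, beta> - <z, beta>, where grad_bar = nabla L(bar_beta)
    # and B is a p x d orthonormal basis of the tangent space T.
    # Stationarity of L(bar_beta + d) + <u0, d> over d in T gives
    #   B^T (grad_bar + 2 H d + u0) = 0.
    H_T = B.T @ H @ B                          # restriction of H to T
    dQ = -0.5 * B @ np.linalg.solve(H_T, B.T @ (grad_bar + u0))
    residual = -(grad_bar + 2 * H @ dQ) - u0   # = -nabla L(Q_G^T) - u0
    return dQ, residual

# Q_G^T = bar_beta + dQ is a globally valid dual certificate when the
# residual lies in G_1 (e.g., for the l1 regularizer: the residual vanishes
# on the support and has sup-norm at most lambda off the support).
\end{verbatim}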
The construction of $Q_G^{\cal T}$ ensures that it satisfies the dual certificate definition in ${\cal T}$ according to Definition~\ref{def:primal-dual-certificate}, in that for any $\beta \in {\cal T}$: $\innerprod{\nabla L(Q_G^{\cal T})+u_0}{\beta}=0$. However, we still have to check that the condition (\ref{eq:dual-certificate}) holds for all $\beta \in {\bar\Omega}$ to ensure that $Q_G=Q_G^{\cal T}$ is a (globally defined) dual certificate. A sufficient condition is presented in the following proposition. \begin{proposition} Consider $Q_G^{\cal T}$ in (\ref{eq:dc-tangent-opt}). If $-\nabla L(Q_G^{\cal T}) -u_0 \in G_1$, then $Q_G=Q_G^{\cal T}$ is a dual certificate that satisfies condition (\ref{eq:dual-certificate}). \end{proposition} Technically speaking, the tangent space dual certificate analysis is a generalization of the irrepresentable condition for $\ell_1$ support recovery \cite{ZhaoYu06}. However, we are interested in oracle inequalities rather than support recovery, and in this context the analysis presented in this section generalizes those of \cite{CandesPlan09,CandesPlan11}. \begin{definition}[Restricted Strong Convexity in Tangent Space] Given a subspace ${\cal T}$ that contains ${\bar{\beta}}$, we define the following quantity which we refer to as the tangent space restricted strong convexity (TRSC) constant: \[ \gamma_L^{\cal T}({\bar{\beta}};r,G,\|\cdot\|)= \inf \left\{ \frac{D_L^s(\beta,{\bar{\beta}})}{\|{\bar{\beta}}-\beta\|^2} : \|\beta-{\bar{\beta}}\|\leq r; \beta-{\bar{\beta}} \in {\cal T}; D_L^s(\beta,{\bar{\beta}})+ \innerprod{u_0+\nabla L({\bar{\beta}})}{\beta-{\bar{\beta}}} \leq 0 \right\} , \] where $\|\cdot\|$ is a norm, $r>0$ and $G \subset \partial R({\bar{\beta}})$. \end{definition} \begin{theorem}[Dual Certificate Error Bound in Tangent Space] Let $\|\cdot\|_D$ and $\|\cdot\|$ be dual norms, and consider convex $G \subset \partial R({\bar{\beta}})$ with the decomposition (\ref{eq:G-tangent-decomp}). If $\inf_{u \in G} \|u+\nabla L({\bar{\beta}})\|_D < r \cdot \gamma_L^{\cal T}({\bar{\beta}};r,G,\|\cdot\|)$ for some $r>0$, then \[ D_L^s({\bar{\beta}},Q_G^{\cal T}) \leq (\gamma_L^{\cal T}({\bar{\beta}};r,G,\|\cdot\|))^{-1}\|u_0 + P_{\cal T} \nabla L({\bar{\beta}})\|_D^2 , \] where $Q_G^{\cal T}$ is given by (\ref{eq:dc-tangent-opt}). \label{thm:dual_certificate-tangent-error} \end{theorem} If the condition $\inf_{u \in G} \|u+\nabla L({\bar{\beta}})\|_D < r \cdot \gamma_L^{\cal T}({\bar{\beta}};r,G,\|\cdot\|)$ holds for some $r>0$, then Theorem~\ref{thm:dual_certificate-tangent-error} implies that (\ref{eq:dc-tangent-opt}) has a finite solution. However, the bound from Theorem~\ref{thm:dual_certificate-tangent-error} may not be the sharpest possible. For specific problems, better bounds may be obtained using more refined estimates (for example, in \cite{HsKaTz11-robust}). If $Q_G^{\cal T}$ is a globally defined dual certificate in that (\ref{eq:dual-certificate}) holds, then we immediately obtain results analogous to Corollary~\ref{cor:recovery-global-dc-error} and Corollary~\ref{cor:recovery-global-dc-quadratic}. Let ${\bar{\beta}}_*$ be the target parameter, in the sense that $\nabla L({\bar{\beta}}_*)$ is small.
If we want to apply Theorem~\ref{thm:dual_certificate-oracle} in tangent space analysis, it may be convenient to consider the following choice of $\beta^*$ instead of setting $\beta^*$ to be the target ${\bar{\beta}}_*$: \begin{equation} \beta^* = {\bar{\beta}}_* + \Delta \beta^* , \qquad \Delta \beta^* = \arg\min_{\Delta \beta \in {\cal T}} L({\bar{\beta}}_* + \Delta \beta) . \label{eq:target-opt} \end{equation} The advantage of this choice is that $\beta^*$ is close to the target ${\bar{\beta}}_*$, and thus $\nabla L(\beta^*)$ is small. Moreover, $\innerprod{\nabla L(\beta^*)}{\beta}=0$ for all $\beta \in {\cal T}$, which is convenient since it means $\innerprod{\nabla \bar{L}_*({\bar{\beta}})}{\beta}=0$ for all $\beta \in {\cal T}$ with $\bar{L}_*(\beta)= \bar{L}(\beta) - (\nabla \bar{L}({\bar{\beta}})-\nabla L(\beta^*))^\top (\beta-{\bar{\beta}})$. For the quadratic loss of (\ref{eq:quadratic-loss}), we have an analogue of Corollary~\ref{cor:recovery-global-dc-quadratic}. Since $\innerprod{\cdot}{\cdot}$ becomes an inner product in a Hilbert space with ${\bar\Omega}={\bar\Omega}^*$, we may further define the orthogonal projection to ${\cal T}$ as $P_{\cal T}$ and to its orthogonal complement ${\cal T}^\perp$ as $P_{\cal T}^\perp$. It is clear that in this case we also have $G_1 \subset {\cal T}^\perp$. \begin{corollary} \label{cor:recovery-tangent-dc-quadratic} Assume that $L(\beta)$ is a quadratic loss as in (\ref{eq:quadratic-loss}). Consider convex $G \subset \partial R({\bar{\beta}})$ with decomposition in (\ref{eq:G-tangent-decomp}). Consider $\beta^* \in {\bar\Omega}$ such that $2H\beta^*-z=\tilde{a}+\tilde{b}$ with $\tilde{a} \in {\cal T}$ and $\tilde{b} \in {\cal T}^\perp$. Assume $H_{\cal T}$, the restriction of $H$ to ${\cal T}$, is invertible. If $u_0 \in {\cal T}$, then let \[ \Delta Q = - 0.5 H_{\cal T}^{-1} (u_0+\tilde{a}) = \arg\min_{\Delta \beta \in {\cal T}} \left[ \innerprod{H \Delta \beta}{\Delta \beta} + \innerprod{u_0+\tilde{a}}{\Delta \beta} \right] . \] If $P_{\cal T}^\perp H H_{\cal T}^{-1} (u_0+\tilde{a}) - \tilde{b} \in G_1$, then \[ D_L({\hat{\beta}},\beta^*)+ [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ 0.25 \innerprod{u_0+\tilde{a}}{H_{\cal T}^{-1} (u_0+\tilde{a})} . \] \end{corollary} \begin{proof} Let $Q_G={\bar{\beta}} + \Delta Q$, then $Q_G$ is a generalized dual certificate that satisfies condition (\ref{eq:dual-certificate-2}) with $\bar{L}=L$. This is because \begin{align*} -\nabla \bar{L}_*(Q_G) -u_0 =& -2 H \Delta Q- \tilde{a}-\tilde{b} - u_0 \\ =& H H_{\cal T}^{-1}(u_0+\tilde{a})- \tilde{a}-\tilde{b} - u_0 \\ =& P_{\cal T} H H_{\cal T}^{-1}(u_0+\tilde{a}) + P_{\cal T}^\perp H H_{\cal T}^{-1} (u_0 + \tilde{a}) - \tilde{a} - \tilde{b} - u_0 \\ =& P_{\cal T}^\perp H H_{\cal T}^{-1} (u_0+\tilde{a}) - \tilde{b} \in G_1 , \end{align*} where the last step uses $P_{\cal T} H H_{\cal T}^{-1}(u_0+\tilde{a})=u_0+\tilde{a}$. We thus have \[ D_L({\hat{\beta}},\beta^*)+ [R({\hat{\beta}})- R_G({\hat{\beta}}) ] \leq D_L({\bar{\beta}},\beta^*)+ D_L({\bar{\beta}},Q_G) . \] Since $D_L({\bar{\beta}},Q_G)= \innerprod{H \Delta Q}{\Delta Q}=0.25 \innerprod{u_0+\tilde{a}}{H_{\cal T}^{-1} (u_0+\tilde{a})}$, the desired bound follows. \end{proof} If $\beta^*$ is given by (\ref{eq:target-opt}), then $\tilde{a}=0$, and Corollary~\ref{cor:recovery-tangent-dc-quadratic} can be further simplified. \section{Structured $\ell_1$ regularizer} \label{sec:struct-L1} This section introduces a generalization of $\ell_1$ regularization for which the calculations in the dual certificate analysis can be carried out relatively easily.
It should be noted that the general theory of dual certificates developed earlier can be applied to other regularizers that may not have the structured form presented here. Recall that ${\bar\Omega}$ is a Banach space containing $\Omega$, ${\bar\Omega}^*$ is its dual, and $\innerprod{u}{\beta}$ denotes $u(\beta)$ for linear functionals $u\in {\bar\Omega}^*$. Let $E_0$ be either a finite-dimensional space equipped with the $\ell_1$ norm or a countably infinite dimensional $\ell_1$ space. We write any $E_0$-valued quantity as $a=(a_1,a_2,\ldots)^\top$ and bounded linear functionals on $E_0$ as $w^\top a = \sum_j w_ja_j=\innerprod{w}{a}$, with $w=(w_1,w_2,\ldots)^\top \in\ell_\infty$. Let ${\mathscr M}$ be the space of all bounded linear maps from ${\bar\Omega}$ to $E_0$. Let ${\mathscr A}$ be a class of linear mappings in ${\mathscr M}$. We may define a regularizer as follows: \begin{equation} R(\beta) = \|\beta\|_{\mathscr A}, \qquad \|\beta\|_{\mathscr A}= \sup_{A\in{\mathscr A}}\| A\beta\|_1. \label{eq:struct-L1} \end{equation} As a supremum of seminorms, the regularizer $\|\beta\|_{\mathscr A}$ is clearly a seminorm on $\{\beta: R(\beta)<\infty\}$. The choice of ${\mathscr A}$ is quite flexible. We allow $R(\cdot)$ to have a nontrivial kernel $\hbox{\rm ker}(R)=\cap_{A\in{\mathscr A}}\hbox{\rm ker}(A)$. Given the ${\mathscr A}$-norm $\|\cdot\|_{\mathscr A}$ on ${\bar\Omega}$, we may define its dual norm on ${\bar\Omega}^*$ as \[ \|u\|_{{\mathscr A},D} = \sup \{ \innerprod{u}{\beta}: \|\beta\|_{\mathscr A} \leq 1\} . \] Since $\|\beta\|_{\mathscr A}$ may be zero even when $\beta \neq 0$, the dual norm $\|u\|_{{\mathscr A},D}$ may take infinite values, which we allow in the following discussion. We call the class of regularizers defined in (\ref{eq:struct-L1}) structured-$\ell_1$ (or structured-Lasso) regularizers. This class of regularizers contains enough structure so that dual certificate analysis can be carried out in generality. In the following, we shall discuss various properties of the structured $\ell_1$ regularizer by generalizing the corresponding concepts of the $\ell_1$ regularizer for sparse regression. This regularizer obviously includes the vector $\ell_1$ penalty as a special case. In addition, we give two more structured regularization examples to illustrate the general applicability of this regularizer. \begin{example}\label{example:group-lasso} Group $\ell_1$ penalty: Let $E_j$ be fixed Euclidean spaces, $X_j:{\bar\Omega}\to E_j$ be fixed linear maps, $\lambda_j$ be fixed positive numbers, and ${\mathscr A} = \big\{(v_1^\top X_1,v_2^\top X_2,\ldots)^\top: v_j\in E_j, \|v_j\|_2\le \lambda_j\big\}$. Then, \begin{eqnarray*} R(\beta) = \sup_{A\in {\mathscr A}} \|A\beta\|_1 = \sum_j \lambda_j\|X_j\beta\|_2 . \end{eqnarray*} \end{example} \begin{example}\label{example:weighted-nuc} Nuclear penalty: ${\bar\Omega}$ contains matrices of a fixed dimension. Let $s_j(\beta)\ge s_{j+1}(\beta)$ denote the singular values of matrix $\beta$ and ${\mathscr A} = \big\{A: A\beta = (w_j(U^\top \beta V)_{jj}, j\ge 1), U^\top U =I_r, V^\top V =I_r, r\ge 0, 0\le w_j\le \lambda\big\}$. Then, the nuclear norm (or trace-norm) penalty for matrix $\beta$ is \begin{eqnarray*} R(\beta) = \sup_{A\in {\mathscr A}} \|A\beta\|_1 = \lambda \sum_j s_j(\beta). \end{eqnarray*} \end{example} \subsection{Subdifferential} We characterize the subdifferential of $R(\beta)$ by studying the maximum property of ${\mathscr A}$.
A set ${\mathscr A}$ is the largest class to generate (\ref{eq:struct-L1}) if for any $A_0\in{\mathscr M}$, $\sup_{\beta\in{\bar\Omega}}\{\|A_0\beta\|_1-R(\beta)\} = 0$ implies $A_0\in{\mathscr A}$. We also need to introduce additional notation. \begin{definition} Given any map $M\in{\mathscr M}$, define its dual map $M^*$ from $\ell_\infty$ to ${\bar\Omega}^*$ as: $\forall w\in\ell_\infty$, $M^* w$ satisfies $\innerprod{M^*w}{\beta} = w^\top(M\beta), \forall \beta \in {\bar\Omega}$. Given any $w \in \ell_\infty$, define $w(\cdot)$ as a linear map from ${\mathscr M} \to {\bar\Omega}^*$ as $w(M)=M^* w$. We also denote by $\overline{w({\mathscr A})}$ the closure of $w({\mathscr A})$ in ${\bar\Omega}^*$. \end{definition} The purpose of this definition is to introduce $e \in \ell_\infty$ so that $R(\beta)$ can be written as \[ R(\beta)=\sup_{A\in{\mathscr A}}\innerprod{e(A)}{\beta} = \sup_{u \in e({\mathscr A})}\innerprod{u}{\beta} . \] In this regard, one only needs to specify $e({\mathscr A})$, although for various problems it is more convenient to specify ${\mathscr A}$. Using this simpler representation, the following result characterizes the subdifferential of the structured $\ell_1$ regularizer. \begin{proposition}\label{prop:struct-subdiff} Let $E_1=\{w=(w_1,w_2,\ldots)^\top \in \ell_\infty: |w_j|=1\ \forall\ j\}$ and $e=(1,1,...)\in E_1$. \\ (i) A set ${\mathscr A}$ is the largest class generating $R(\beta)$ iff the following conditions hold: (a) $w({\mathscr A})=e({\mathscr A})$ for all $w\in E_1$; (b) ${\mathscr A}$ is convex; (c) ${\mathscr A} = \cap_{w\in E_1}w^{-1}(\overline{e({\mathscr A})})$, where $w^{-1}$ is the set inverse function. \\ (ii) Suppose ${\mathscr A}$ satisfies condition (a) in part (i). Then, $R(\beta)=\sup_{A\in{\mathscr A}}\innerprod{e(A)}{\beta}$. \\ (iii) Suppose ${\mathscr A}$ satisfies conditions (a) and (b) in part (i). Then, for $R(\beta)<\infty$, \begin{eqnarray*} \partial R(\beta) = \{u\in \overline{e({\mathscr A})}: \innerprod{u}{\beta} = R(\beta)\}. \end{eqnarray*} \end{proposition} In what follows, we assume ${\mathscr A}$ satisfies conditions (a) and (b) in (i). For notational simplicity, we also assume $e({\mathscr A}) = \overline{e({\mathscr A})}$, which holds in the finite-dimensional case for closed ${\mathscr A}$. This gives \begin{eqnarray}\label{eq:struct-subdiff} \partial R(\beta) = \{u\in e({\mathscr A}): \innerprod{u}{\beta} = R(\beta)\}. \end{eqnarray} Condition (c) in part (i) is then nonessential, as it merely allows permutations of the elements of $A$. Condition (c) holds for the specified ${\mathscr A}$ in Example \ref{example:weighted-nuc} but not in Example \ref{example:group-lasso}. \begin{proof} We assume (a) since it is necessary for ${\mathscr A}$ to be maximal in part (i). (ii) Under (a), $\sup_{A\in{\mathscr A}}\innerprod{e(A)}{\beta} = \sup_{w\in E_1,A\in{\mathscr A}}\innerprod{w(A)}{\beta} =\sup_{A\in{\mathscr A},w\in E_1}w^\top(A\beta)=R(\beta)$. (i) We assume (b) since it is necessary. It suffices to prove the equivalence between the following two conditions for each $A_0\in{\mathscr M}$: $\sup_{\beta\in{\bar\Omega}}\{\|A_0\beta\|_1-R(\beta)\} = 0$ and $A_0 \in \cap_{w\in E_1}w^{-1}(\overline{e({\mathscr A})})$. Let $A_0\in \cap_{w\in E_1}w^{-1}(\overline{e({\mathscr A})})$. For any $\beta\in{\bar\Omega}$, there exists $w_0\in E_1$ such that $\|A_0\beta\|_1=w_0^\top A_0\beta = \innerprod{w_0(A_0)}{\beta}$.
Since $A_0\in w_0^{-1}(\overline{e({\mathscr A})})$, $w_0(A_0)$ is the weak limit of $e(A_k)$ for some $A_k\in{\mathscr A}$. It follows that $\|A_0\beta\|_1=\innerprod{w_0(A_0)}{\beta} = \lim_k\innerprod{e(A_k)}{\beta} = \lim_k e^\top A_k\beta\le R(\beta)$. Now, consider $A_0\not\in \cap_{w\in E_1}w^{-1}(\overline{e({\mathscr A})})$, so that $w_0(A_0)\not\in\overline{e({\mathscr A})}$ for some $w_0\in E_1$. This implies the existence of $\beta\in{\bar\Omega}$ with $\|A_0\beta\|_1\ge \innerprod{w_0(A_0)}{\beta} > \sup_{A\in{\mathscr A}}\innerprod{e(A)}{\beta} =R(\beta)$. (iii) If $R(\beta)= \innerprod{u}{\beta}$ with $u\in \overline{e({\mathscr A})}$, then $R(b)-R(\beta) \ge \innerprod{u}{b} - \innerprod{u}{\beta} = \innerprod{u}{b-\beta}$ for all $b$, so that $u\in\partial R(\beta)$. Now, suppose $v\in\partial R(\beta)$, so that $R(b)-R(\beta)\ge \innerprod{v}{b-\beta}$ for all $b\in{\bar\Omega}$. Since $R(b)$ is a seminorm, taking $b=t\beta$ yields $R(\beta)=\innerprod{v}{\beta}$. Moreover, $\innerprod{v}{b-\beta}\le R(b-\beta)$ for all $b$ implies $v\in\overline{e({\mathscr A})}$. The proof is complete. \end{proof} \subsection{Structured Sparsity} \label{sec:struct-tangent} An advantage of the structured $\ell_1$ regularizer, compared with a general seminorm, is to allow the following notion of {\em structured sparsity}. A vector ${\bar{\beta}}$ is sparse in the structure ${\mathscr A}$ if \begin{eqnarray}\label{eq:struct-sparse} \exists W\in {\mathscr A}: \quad R({\bar{\beta}}) = \innerprod{e(W)}{{\bar{\beta}}},\quad S = {\mathrm{supp}}(W{\bar{\beta}}), \end{eqnarray} for a certain set $S$ of relatively small cardinality. This means a small structured $\ell_0$ ``norm'' $\|W{\bar{\beta}}\|_0$. In Example \ref{example:weighted-nuc}, this means ${\bar{\beta}}$ has low rank. Let $e_S$ be the 0-1 valued $\ell_\infty$ vector with 1 on $S$ and 0 elsewhere. If $A\in{\mathscr A}$ can be written as $A=(W_S^\top,B_{S^c}^\top)^\top$, then $\|A{\bar{\beta}}\|_1=\|W_S{\bar{\beta}}\|_1+\|B_{S^c}{\bar{\beta}}\|_1\le R({\bar{\beta}})$, which implies $\|B_{S^c}{\bar{\beta}}\|_1=0$ by (\ref{eq:struct-sparse}). By (\ref{eq:struct-subdiff}), $e(A)=e((W_S^\top,B_{S^c}^\top)^\top) =e_S(W) + e_{S^c}(B) \in \partial R({\bar{\beta}})$. Thus, we may choose \begin{eqnarray}\label{eq:structG} G_{{\mathscr B}} = \big\{e_S(W) + e_{S^c}(B) : B_{S^c}\in {\mathscr B} \big\} \subseteq \partial R({\bar{\beta}}) \end{eqnarray} for a certain class ${\mathscr B} \subseteq \{B_{S^c}: (W_S^\top,B_{S^c}^\top)^\top\in {\mathscr A}\}$. Now let $G=G_{{\mathscr B}}$. Since members of $G$ can be written as $e_S(W)+e_{S^c}(B), B \in {\mathscr B}$, this gives a decomposition of $G$ as in (\ref{eq:G-tangent-decomp}) with $G_0=\{u_0\}=\{e_S(W)\}$ and $G_1=e_{S^c}({\mathscr B})$. Since $B{\bar{\beta}}=0$ for $B\in {\mathscr B}$, we have \begin{eqnarray*} R_G(\beta) = R({\bar{\beta}}) + \sup_{u\in G}\innerprod{u}{\beta-{\bar{\beta}}} = \innerprod{e_S(W)}{\beta} + \sup_{B\in{\mathscr B}}\innerprod{e_{S^c}(B)}{\beta}. \end{eqnarray*} Unless otherwise stated, we assume the following conditions on ${\mathscr B}$: (a) $w_{S^c}({\mathscr B})=e_{S^c}({\mathscr B})$ for all $w\in E_1$; (b) ${\mathscr B}$ is convex; (c) $e_{S^c}({\mathscr B})$ is closed in ${\bar\Omega}^*$. This is always possible since these conditions match the assumed conditions on ${\mathscr A}$. Under these conditions, Proposition \ref{prop:struct-subdiff} gives \[ \sup_{B\in{\mathscr B}}\innerprod{e_{S^c}(B)}{\beta}= \sup_{B\in{\mathscr B}}\|B\beta\|_1 = \|\beta\|_{\mathscr B} .
\] Its dual norm can be defined on ${\bar\Omega}^*$ as \[ \|u\|_{{\mathscr B},D}=\sup \left\{\innerprod{u}{\beta} : \|\beta\|_{\mathscr B} \leq 1 \right\} . \] This leads to the following simplified expression: \begin{eqnarray}\label{eq:structR_G} R_G(\beta) = R({\bar{\beta}}) + \sup_{u\in G}\innerprod{u}{\beta-{\bar{\beta}}} = \innerprod{e_S(W)}{\beta} + \|\beta\|_{\mathscr B} . \end{eqnarray} Since $B{\bar{\beta}}=0$ for all $B \in {\mathscr B}$, ${\mathscr B}$ may be used to represent a generalization of the zero coefficients of ${\bar{\beta}}$, while $W_S$ can be used to represent a generalization of the sign of ${\bar{\beta}}$. The larger the class ${\mathscr B}$ is, the more zero coefficients ${\bar{\beta}}$ has (thus ${\bar{\beta}}$ is sparser). One may always choose ${\mathscr B}=\emptyset$ when ${\bar{\beta}}$ is not sparse. \subsection{Tangent Space} Given a convex function $\phi(\beta)$ and a point ${\bar{\beta}}\in\Omega$, $b\in{\bar\Omega}$ is a primal tangent vector if $\phi({\bar{\beta}} + tb)$ is differentiable at $t=0$. This means the equality of the left- and right-derivatives of $\phi({\bar{\beta}} + tb)$ at $t=0$. If $\phi(\beta)$ is a seminorm and ${\bar{\beta}}\neq 0$, $\phi({\bar{\beta}}+t{\bar{\beta}})=(1+t)\phi({\bar{\beta}})$ for all $|t|<1$, so that ${\bar{\beta}}$ is always a primal tangent vector at ${\bar{\beta}}$. If $\innerprod{u}{b}< \innerprod{v}{b}$ for some $u,v\in\partial \phi({\bar{\beta}})$, then \begin{eqnarray*} \{\phi({\bar{\beta}})-\phi({\bar{\beta}} - tb)\}/t \le \innerprod{u}{b} < \innerprod{v}{b} \le \{\phi({\bar{\beta}}+tb)-\phi({\bar{\beta}})\}/t,\ \forall t>0, \end{eqnarray*} so that $\phi({\bar{\beta}} + tb)$ cannot be differentiable at $t=0$. This motivates the following definition of the (primal) tangent space of a regularizer at a point ${\bar{\beta}}$ and its dual complement. \begin{definition}\label{def:struct-tangent} Given a convex regularizer $R(\beta)$, a point ${\bar{\beta}}\in\Omega$, and a class $G\subseteq\partial R({\bar{\beta}})$, we define the corresponding tangent space as \begin{eqnarray*} {\cal T} = {\cal T}_G = \big\{b \in {\bar\Omega}: \innerprod{u-v}{b}=0\ \forall u\in G, v\in G\big\} = \cap_{u,v\in G}\hbox{\rm ker}(u-v). \end{eqnarray*} The dual complement of ${\cal T}$, denoted by ${\cal T}^\perp$, is defined as \begin{eqnarray*} {\cal T}^\perp = {\cal T}_G^\perp =\hbox{closure}\Big\{u: u\in {\bar\Omega}^*, \innerprod{u}{b}=0 \text{ for all } b \in {\cal T} \Big\}. \end{eqnarray*} When $\innerprod{\cdot}{\cdot}$ is an inner product, ${\bar\Omega}={\bar\Omega}^*$ and ${\cal T}^\perp$ is the orthogonal complement of ${\cal T}$ in ${\bar\Omega}$. \end{definition} \begin{remark}\label{remark:proj} Let ${\cal T}$ be any closed subspace of ${\bar\Omega}$. A map $P_{{\cal T}}: {\bar\Omega}\to{\bar\Omega}$ is a projection to ${\cal T}$ if $P_{{\cal T}}\beta=\beta$ is equivalent to $\beta\in\cal T$. For such $P_{{\cal T}}$, its dual $P_{{\cal T}}^*:{\bar\Omega}^*\to{\bar\Omega}^*$, defined by $\innerprod{P_{{\cal T}}^* v}{\beta} = \innerprod{v}{P_{{\cal T}}\beta}$, is a projection from ${\bar\Omega}^*\to P_{{\cal T}}^*{\bar\Omega}^*$. The image of $P_{{\cal T}}^*$, ${\cal T}^* = P_{{\cal T}}^*{\bar\Omega}^*$, is a dual of ${\cal T}$. Since $P_{{\cal T}}$ and $P_{{\cal T}}^*$ are projections, $v- P_{{\cal T}}^*v\in {\cal T}^\perp$ for all $v\in{\bar\Omega}^*$ and $\beta - P_{{\cal T}}\beta \in ({\cal T}^*)^\perp$ for all $\beta \in{\bar\Omega}$. \end{remark} The above definition is general.
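To substantiate Definition~\ref{def:struct-tangent} in the finite-dimensional case, the following small sketch (Python with \texttt{numpy}) computes an orthonormal basis of ${\cal T}_G=\cap_{u,v\in G}\hbox{\rm ker}(u-v)$ from finitely many points sampling $G$; the $\ell_1$ example at the end (support equal to the first two coordinates in ${\mathbb{R}}^4$, with $G$ sampled at the vertices of the subdifferential face) is purely illustrative.
\begin{verbatim}
import numpy as np

def tangent_space_basis(U, tol=1e-10):
    # Rows of U sample the subgradient set G.  T is the null space of the
    # stacked differences u_i - u_0, computed via an SVD.
    D = U - U[0]
    _, s, Vt = np.linalg.svd(D, full_matrices=True)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T         # columns form an orthonormal basis of T

# l1 example: subgradients fix the sign pattern (here +1, +1) on the
# support and range over [-1, 1] elsewhere; the four vertices suffice.
U = np.array([[1.0, 1.0, c2, c3] for c2 in (-1.0, 1.0)
                                 for c3 in (-1.0, 1.0)])
print(tangent_space_basis(U))  # spans the first two coordinates
\end{verbatim}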
For the structured $\ell_1$ penalty, letting $G$ be as in (\ref{eq:structG}), we obtain from (\ref{eq:struct-subdiff}) that ${\bar{\beta}}\in {\cal T}$. The default conditions on ${\mathscr B}$ imply $0\in {\mathscr B}$, so that \begin{eqnarray*} {\cal T} = \big\{\beta: \innerprod{e_{S^c}(B)}{\beta} = 0\ \forall B\in{\mathscr B} \big\} = \cap_{B\in{\mathscr B}}\hbox{\rm ker}(B). \end{eqnarray*} Since $G_1=e_{S^c}({\mathscr B})$, this is consistent with the definition of Section~\ref{sec:TRSC}. The dual complement of ${\cal T}$ is \begin{eqnarray*} {\cal T}^\perp = \hbox{ the closure of the linear span of }\{e_{S^c}(B): B\in{\mathscr B} \}. \end{eqnarray*} \subsection{Interior Dual Certificate and Tangent Sparse Recovery Analysis} Consider a structured $\ell_1$ regularizer, a sparse ${\bar{\beta}} \in \Omega$, and a set $G_{{\mathscr B}} \subset\partial R({\bar{\beta}})$ as in (\ref{eq:structG}). In the analysis of (\ref{eq:hbeta}) with the structured $\ell_1$ regularizer, members of the following subclass of $G_{{\mathscr B}}$ often appear. \begin{definition}[Interior Dual Certificate] \label{def:interior-dc} Given $G_{{\mathscr B}}$ in (\ref{eq:structG}), $v_0$ is an interior dual certificate if \begin{eqnarray*} v_0\in G_{{\mathscr B}},\ \innerprod{v_0-e_S(W)}{\beta}\le \eta_\beta \|\beta\|_{\mathscr B} \ \hbox{ for some $0\le \eta_\beta < 1$ for all $\beta$.} \end{eqnarray*} \end{definition} Note that in the above definition, we refer to the dual variable $v_0$ as a ``dual certificate'' to be consistent with the literature. This should not be confused with the notion of the primal dual certificate $Q_G$ defined earlier. A direct application of the interior dual certificate is the following extension of sparse recovery theory to general structured $\ell_1$ regularization. Suppose we observe data through a map $X:{\bar\Omega} \to V$ for a certain linear space $V$. Suppose there is no noise, so that $X{\bar{\beta}}_* = y$ and ${\bar{\beta}}={\bar{\beta}}_*$ is sparse. Then the $R(\beta)$ minimization method for the recovery of ${\bar{\beta}}$ is \begin{eqnarray}\label{eq:hbeta-sparse-recover} {\hat{\beta}} = \mathop{\rm arg\, min}\Big\{R(\beta): X\beta = y\Big\}. \end{eqnarray} The following theorem provides sufficient conditions for the recovery of ${\bar{\beta}}$ by ${\hat{\beta}}$. \begin{theorem}\label{thm:struct-recover} Suppose ${\bar{\beta}}$ is sparse in the sense of (\ref{eq:struct-sparse}). Let $G$ be as in (\ref{eq:structG}) and ${\cal T}$ be as in Definition \ref{def:struct-tangent}. Let $V^*$ be the dual of $V$, $X^*: V^*\to {\bar\Omega}^*$ the dual of $X$, $P_{{\cal T}}$ a projection to ${\cal T}$, $P_{{\cal T}}^*$ the dual of $P_{{\cal T}}$ with image ${\cal T}^*$, and $V_T=XP_{{\cal T}}{\bar\Omega}$. Suppose $(XP_{{\cal T}})^*$, the dual of $XP_{{\cal T}}$, is a bijection from $V_T^*$ to ${\cal T}^*$ and $e_S(W)\in {\cal T}^*$. Define $v_0 = X^*((XP_{{\cal T}})^*)^{-1}e_S(W)$. If $v_0$ is an interior dual certificate, then \begin{eqnarray*} {\hat{\beta}} = {\bar{\beta}} \hbox{ is the unique solution of (\ref{eq:hbeta-sparse-recover}).} \end{eqnarray*} Moreover, $v_0$ is an interior dual certificate iff for all $\beta$, there exists $\eta_\beta <1$ such that $\innerprod{v_0 - P_{{\cal T}}^*v_0}{\beta} \leq \eta_\beta \sup_{B\in{\mathscr B}}\|B\beta\|_1$. \end{theorem} In matrix completion, this matches the dual certificate condition for the recovery of low rank ${\bar{\beta}}$ by constrained minimization of the nuclear penalty \cite{CandesR09,Recht09}.
\begin{proof} Suppose $v_0$ is an interior dual certificate of the form $v_0=e_S(W)+e_{S^c}(B_0)$. Then, for all $\beta$ such that $X\beta = y = X{\bar{\beta}}$, \begin{eqnarray*} R(\beta) - R({\bar{\beta}}) &=& R(\beta) - R({\bar{\beta}}) - \innerprod{((XP_{{\cal T}})^*)^{-1}e_S(W)}{X(\beta-{\bar{\beta}})}\\ &=& R(\beta) - R({\bar{\beta}}) - \innerprod{v_0}{\beta-{\bar{\beta}}} \cr &\ge & \sup_{u\in G_{\mathscr B}}\innerprod{u-v_0}{\beta-{\bar{\beta}}} \cr &=& \sup_{B\in{\mathscr B}}\|B\beta\|_1 - \innerprod{e_{S^c}(B_0)}{\beta-{\bar{\beta}}} \cr &\ge& (1-\eta_\beta)\sup_{B\in {\mathscr B}}\|B\beta\|_1 \end{eqnarray*} with $\eta_\beta<1$. The first equation uses $X\beta=X{\bar{\beta}}$, and the second equation uses the definition $v_0=X^*((XP_{{\cal T}})^*)^{-1}e_S(W)$. Since (\ref{eq:hbeta-sparse-recover}) is constrained to $X\beta = y = X{\bar{\beta}}$, the above inequality means that ${\bar{\beta}}$ is a solution of (\ref{eq:hbeta-sparse-recover}). It remains to prove its uniqueness. Let $\beta$ be another solution of (\ref{eq:hbeta-sparse-recover}). Since $1-\eta_\beta>0$, if $R(\beta)=R({\bar{\beta}})$, then the above inequality implies that $\sup_{B\in {\mathscr B}}\|B\beta\|_1=0$, so that $\beta\in {\cal T}$. Since ${\bar{\beta}}\in{\cal T}$, $XP_{{\cal T}}(\beta-{\bar{\beta}})=X(\beta-{\bar{\beta}})=0$. This implies $\beta-{\bar{\beta}}=0$, since the invertibility of $(XP_{{\cal T}})^*$ implies ${\cal T}\cap \hbox{\rm ker}(XP_{{\cal T}}) = \{0\}$. \end{proof} When noise is present, we may employ the construction of Section~\ref{sec:TRSC}. For the structured $\ell_1$ regularizer, the analysis can be further simplified if we assume that there exists a target vector $\beta^*$ having the following property: \begin{equation} \nabla L(\beta^*) = \tilde{a} + \tilde{b}, \label{eq:target} \end{equation} with a small $\tilde{a}$, and $\tilde{b}$ satisfies the condition \[ \tilde{\eta} = \|\tilde{b}\|_{{\mathscr B},D} < 1 . \] Recall that the dual norm $\|\cdot\|_{{\mathscr B},D}$ of $\|\cdot\|_{\mathscr B}$ is defined as $\|\tilde{b}\|_{{\mathscr B},D}=\sup \left\{\innerprod{\tilde{b}}{\beta} : \|\beta\|_{\mathscr B} \leq 1 \right\}$. The condition means that there exists $\tilde{B} \in {\mathscr B}$ such that $\tilde{b}= \tilde{w}_{S^c}(\tilde{B})$ with $\|\tilde{w}\|_\infty \leq \tilde{\eta}$. For such a target vector $\beta^*$, we will further consider an interior subset $G \subset G_{{\mathscr B}}$ in (\ref{eq:structG}) with some $\eta \in [\tilde{\eta},1]$: \begin{equation} G=\{e_S(W)+\eta e_{S^c}(B): B \in {\mathscr B}\} \label{eq:structG-int} . \end{equation} It follows that \[ R(\beta)-R_G(\beta) \geq R_{G_{\mathscr B}}(\beta) - R_G(\beta) = (1-\eta) \|\beta\|_{\mathscr B} \] and \begin{eqnarray*} \sup_{u\in G}\innerprod{u+\nabla L(\beta^*)}{\beta-{\bar{\beta}}} &=& \innerprod{e_S(W)+\tilde{a}}{\beta-{\bar{\beta}}} + \sup_{B\in{\mathscr B}}\innerprod{\eta e_{S^c}(B)+\tilde{b}}{\beta-{\bar{\beta}}} \cr &\ge& \innerprod{e_S(W)+\tilde{a}}{\beta-{\bar{\beta}}} + (\eta- \tilde{\eta})\|\beta-{\bar{\beta}}\|_{\mathscr B} . \end{eqnarray*} This estimate can be directly used in the definition of RSC in Corollary \ref{cor:recovery-global-dc-oracle}. One way to construct such a target vector $\beta^*$ is using (\ref{eq:target-opt}). In this case we may further assume that $\tilde{a}=0$ because $\innerprod{\nabla L(\beta^*)}{\beta}=0$ for any $\beta \in {\cal T}$.
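For the quadratic loss, the construction (\ref{eq:target-opt}) amounts to a least squares fit restricted to ${\cal T}$, and the resulting $\beta^*$ automatically satisfies $\tilde{a}=0$. The following minimal sketch (Python with \texttt{numpy}, with ${\cal T}$ again represented by an orthonormal basis $B$ for illustration only) carries this out and checks the orthogonality $\innerprod{\nabla L(\beta^*)}{\beta}=0$ on ${\cal T}$.
\begin{verbatim}
import numpy as np

def target_in_tangent(X, y, beta_star_bar, B):
    # Sketch of (eq. target-opt) for L(beta) = ||X beta - y||_2^2:
    #   beta* = bar_beta_* + argmin_{d in T} L(bar_beta_* + d),
    # with T spanned by the columns of the p x d orthonormal basis B.
    r = y - X @ beta_star_bar
    d_T = np.linalg.lstsq(X @ B, r, rcond=None)[0]  # restricted least squares
    beta_star = beta_star_bar + B @ d_T
    grad = 2 * X.T @ (X @ beta_star - y)            # nabla L(beta*)
    # normal equations of the restricted fit give tilde_a = P_T grad = 0:
    assert np.allclose(B.T @ grad, 0.0, atol=1e-8)
    return beta_star, grad

# illustrative usage with arbitrary data and a random 3-dimensional T
rng = np.random.default_rng(0)
X, y = rng.standard_normal((30, 8)), rng.standard_normal(30)
B = np.linalg.qr(rng.standard_normal((8, 3)))[0]
beta_star, grad = target_in_tangent(X, y, np.zeros(8), B)
\end{verbatim}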
In general, condition (\ref{eq:target}) is relatively easy to satisfy with a small $\tilde{a}$ under the usual stochastic noise model, since $\nabla L(\beta^*)$ is small. In the special setting of Theorem~\ref{thm:struct-recover}, we have $\nabla L(\beta^*)=0$ with $\beta^*={\bar{\beta}}={\bar{\beta}}_*$. For simplicity, in the following we will consider the quadratic loss of the form (\ref{eq:quadratic-loss}) and apply Corollary~\ref{cor:recovery-tangent-dc-quadratic}. Consider $G$ in (\ref{eq:structG-int}), $\beta^*$ in (\ref{eq:target}) with $\tilde{a} \in {\cal T}$ ($\tilde{b} \in {\cal T}^\perp$), and $Q_G^{\cal T}$ defined as in (\ref{eq:dc-tangent-opt}) but with $L(\beta)$ replaced by $\bar{L}_*(\beta)= L(\beta) - (\nabla L({\bar{\beta}})-\nabla L(\beta^*))^\top (\beta-{\bar{\beta}})$, which can be equivalently written as \[ Q_G^{\cal T} = {\bar{\beta}} + \Delta Q, \quad \Delta Q= - 0.5 H_{\cal T}^{-1} (e_S(W)+\tilde{a}) . \] This is consistent with the construction of Theorem~\ref{thm:struct-recover} in the sense that in the noise-free case, we can let $H=X^\top X$ and $v_0=-2H \Delta Q= H H_{\cal T}^{-1} e_S(W)$ with $\tilde{a}=0$. We assume that the following condition holds: \begin{equation} \| P_{\cal T}^\perp H H_{\cal T}^{-1} (e_S(W)+\tilde{a}) - \tilde{b}\|_{{\mathscr B},D} \leq \eta , \label{eq:irrep-struct} \end{equation} which is consistent with the noise-free interior dual certificate existence condition in Theorem~\ref{thm:struct-recover} with $\eta_\beta=\eta$. The condition is a direct generalization of the strong irrepresentable condition for $\ell_1$ regularization in \cite{ZhaoYu06} to structured $\ell_1$ regularization. Under this condition, $Q_G^{\cal T}$ is a dual certificate that satisfies the generalized condition (\ref{eq:dual-certificate-2}) in Definition~\ref{def:primal-dual-certificate-2} with $\bar{L}=L$ and $\delta=0$. Corollary~\ref{cor:recovery-tangent-dc-quadratic} implies that \[ D_L(\beta^*,{\hat{\beta}})+ (1-\eta) \|{\hat{\beta}}\|_{\mathscr B} \leq D_L(\beta^*,{\bar{\beta}}) + 0.25 \innerprod{e_S(W)+\tilde{a}}{H_{\cal T}^{-1} (e_S(W)+\tilde{a})} . \] \subsection{Recovery Analysis with Global Restricted Strong Convexity} We can also employ the dual certificate construction of Section~\ref{sec:RSC} with $G$ in (\ref{eq:structG-int}) and $\beta^*={\bar{\beta}}$. Corollary~\ref{cor:recovery-global-dc-error} implies the following result: \[ D_L({\bar{\beta}},{\hat{\beta}}) + (1-\eta) \|{\hat{\beta}}\|_{\mathscr B} \leq \gamma_L({\bar{\beta}};r,G,\|\cdot\|)^{-1} \|e_S(W) +\tilde{a}\|_D^2 , \] where $\gamma_L({\bar{\beta}};r,G,\|\cdot\|)$ is lower bounded by \[ \inf \left\{ \frac{D_L^s(\beta,{\bar{\beta}})}{\|{\bar{\beta}}-\beta\|^2} : \|\beta-{\bar{\beta}}\|\leq r; \; D_L^s(\beta,{\bar{\beta}})+ (\eta-\tilde{\eta}) \|\beta-{\bar{\beta}}\|_{\mathscr B} + \innerprod{e_S(W) + \tilde{a}}{\beta-{\bar{\beta}}} \leq 0 \right\} . \] We may also consider a more general $\beta^*$ instead of assuming $\beta^*={\bar{\beta}}$. For example, consider the definition of $\beta^*$ in (\ref{eq:target-opt}), which implies that $\tilde{a}=0$, or simply let $\beta^*={\bar{\beta}}_*$. We can apply Corollary~\ref{cor:recovery-global-dc-quadratic} to the quadratic loss function of (\ref{eq:quadratic-loss}).
It implies \begin{equation} D_L(\beta^*,{\hat{\beta}})+ (1-\eta) \sup_{B \in {\mathscr B}} \|B {\hat{\beta}}\|_1 \leq D_L(\beta^*,{\bar{\beta}}) + (2\gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|))^{-1} \|\tilde{a}+e_S(W)\|_D^2 , \label{eq:recovery-global-dc-quadratic-struct} \end{equation} where $\gamma_{\bar{L}_*}({\bar{\beta}};\infty,G,\|\cdot\|)$ is lower bounded by \[ \inf \left\{ \frac{2\innerprod{H\beta}{\beta}}{\|\beta\|^2} : 2\innerprod{H\beta}{\beta} + (\eta-\tilde{\eta}) \|\beta\|_{\mathscr B} + \innerprod{e_S(W)+\tilde{a}}{\beta} \leq 0 \right\} . \] \subsection{Recovery Analysis with Gaussian Random Design} \label{sec:gordon-struct} We can also apply the results of Section~\ref{sec:gordon} by considering the quadratic loss with a Gaussian random design matrix as in (\ref{eq:gaussian-design}). We can use the following proposition. \begin{proposition} If $\tilde{\eta}<\eta$ and $\epsilon \sim N(0,I_{p \times p})$, then \begin{align*} \left({\mathbf E}_{\epsilon} \inf_{u \in G; \gamma >0} \|\gamma(u+\nabla L(\beta^*)) - \epsilon\|_2\right)^2 \leq \inf_{\gamma>0} {\mathbf E}_{\epsilon} \inf_{B \in {\mathscr B}} \| \gamma (e_S(W) +\tilde{a} +(\eta-\tilde{\eta})e_{S^c}(B)) - \epsilon\|_2^2 . \end{align*} \end{proposition} Therefore we may apply Theorem~\ref{thm:recovery-gaussian}, which implies that given any $g,\delta \geq 0$ such that $g+\delta \leq n/\sqrt{n+1}$, with probability at least \[ 1 - \frac{1}{2}\exp \left(-\frac{1}{2} (n/\sqrt{n+1}-g-\delta)^2\right) , \] we have either $\tilde{\eta} \geq \eta$, or \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + (1-\eta) \|{\hat{\beta}}\|_{\mathscr B} \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + (4\delta^2)^{-1} \|e_S(W) +\tilde{a}\|_2^2 , \] or \[ g^2 < \inf_{\gamma>0} {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \inf_{B \in {\mathscr B}} \| \gamma (e_S(W) +\tilde{a} +(\eta-\tilde{\eta}) e_{S^c}(B)) - \epsilon\|_2^2 . \] \subsection{Parameter Estimation Bound} Generally speaking, the dual certificate technique allows us to directly obtain an oracle inequality \begin{equation} D_L({\hat{\beta}},\beta^*)+ (1-\eta) \|{\hat{\beta}}\|_{\mathscr B}\leq \delta \label{eq:struct-oracle} \end{equation} for some $\delta >0$. If $\delta$ is small (in which case ${\bar{\beta}}$ should be close to $\beta^*$), then we may also be interested in the parameter estimation bound $\|{\hat{\beta}}-\beta^*\|$. In that case, additional estimates are needed on top of the dual certificate theory of this paper. This section demonstrates how to obtain such a bound from (\ref{eq:struct-oracle}). Although parameter estimation bounds can be obtained for general loss functions $L(\cdot)$, they involve relatively complex notation. In order to illustrate the main ideas while avoiding unnecessary complexity, in the following we will only consider the quadratic loss case, where $\innerprod{\cdot}{\cdot}$ is an inner product. \begin{proposition} \label{prop:param-est} Assume that $L(\cdot)$ is the quadratic loss function given by (\ref{eq:quadratic-loss}). Consider any subspace ${\tilde{\cal T}}$ that contains the tangent space ${\cal T}$. Let $\delta' = \delta/(1-\eta) + \|P_{\cal T}^\perp \beta^*\|_{\mathscr B}$ with $\delta$ given by (\ref{eq:struct-oracle}).
Define the correlation between ${\tilde{\cal T}}$ and ${\tilde{\cal T}}^\perp$ as: \[ \mathrm{cor}({\tilde{\cal T}},{\tilde{\cal T}}^\perp)= \sup \left\{ |\innerprod{H P_{\tilde{\cal T}} \beta}{P_{\tilde{\cal T}}^\perp \beta}|/ \innerprod{H P_{\tilde{\cal T}} \beta}{P_{\tilde{\cal T}} \beta}^{1/2} : \beta^* + \beta \in \Omega, \|\beta\|_{\mathscr B} \leq \delta' \right\} . \] Let $\Delta={\hat{\beta}}-\beta^*$. Then, $\|\Delta \|_{\mathscr B} \leq \delta'$, and \[ \innerprod{H_{\tilde{\cal T}} P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}} \Delta}^{1/2} \leq \sqrt{(1-\eta) \delta'} + 2 \mathrm{cor}({\tilde{\cal T}},{\tilde{\cal T}}^\perp) . \] \end{proposition} \begin{proof} We have \begin{align*} &\innerprod{H_{\tilde{\cal T}} P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}} \Delta} + 2 \innerprod{H P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}}^\perp \Delta} + (1-\eta) \|\Delta \|_{\mathscr B} \\ \leq&\innerprod{H_{\tilde{\cal T}} P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}} \Delta} + 2 \innerprod{H P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}}^\perp \Delta} + \innerprod{H P_{\tilde{\cal T}}^\perp \Delta}{P_{\tilde{\cal T}}^\perp \Delta} + (1-\eta) \|{\hat{\beta}} \|_{\mathscr B} + (1-\eta) \|\beta^* \|_{\mathscr B} \\ =& D_L({\hat{\beta}},\beta^*) + (1-\eta) \|{\hat{\beta}}\|_{\mathscr B} + (1-\eta) \|P_{\cal T}^\perp \beta^*\|_{\mathscr B} \leq (1-\eta) \delta' , \end{align*} where we have used the fact that $\|\beta^*\|_{\mathscr B}=\|P_{\cal T}^\perp \beta^*\|_{\mathscr B}$. This means that if we let $\beta=\Delta$, then we have $\|\beta \|_{\mathscr B} \leq \delta'$, and $\beta^* + \beta \in \Omega$. Letting $x^2=\innerprod{H_{\tilde{\cal T}} P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}} \Delta}$, we have $\innerprod{H P_{\tilde{\cal T}} \Delta}{P_{\tilde{\cal T}}^\perp \Delta} =x \innerprod{H P_{\tilde{\cal T}} \beta}{P_{\tilde{\cal T}}^\perp \beta}/ \innerprod{H P_{\tilde{\cal T}} \beta}{P_{\tilde{\cal T}} \beta}^{1/2}$. It follows that \[ x^2- 2 x \mathrm{cor}({\tilde{\cal T}},{\tilde{\cal T}}^\perp) \leq (1-\eta) \delta' . \] Solving for $x$ leads to the desired bound. \end{proof} Clearly, we have the cruder estimate \[ \mathrm{cor}({\tilde{\cal T}},{\tilde{\cal T}}^\perp) \leq \sup \{ \innerprod{H P_{\tilde{\cal T}}^\perp \beta}{P_{\tilde{\cal T}}^\perp \beta}^{1/2} : \beta^* + \beta \in \Omega, \|\beta\|_{\mathscr B} \leq \delta'\} . \] The bound in Proposition~\ref{prop:param-est} is useful when $H$ is invertible on ${\tilde{\cal T}}$: \[ \innerprod{H \beta}{\beta} \geq \gamma_{{\tilde{\cal T}}} \innerprod{\beta}{\beta} \quad \forall \beta \in {\tilde{\cal T}} , \] which leads to a bound on $\|P_{\tilde{\cal T}} \Delta\|_2$. Although one may simply choose ${\tilde{\cal T}}={\cal T}$, the resulting bound may be suboptimal, as we shall see later on. Therefore it can be beneficial to choose a larger ${\tilde{\cal T}}$. Examples of this result will be presented in Section~\ref{sec:examples}. \section{Examples} \label{sec:examples} We will present a few examples to illustrate the analysis, as well as to give concrete substantiations of the relatively abstract notation used so far. \subsection{Group $\ell_1$ Least Squares Regression} We assume that ${\bar\Omega} = {\mathbb{R}}^p$, and consider the model \[ Y= X {\bar{\beta}}_* + \epsilon \] with the least squares loss function (\ref{eq:gaussian-design}). This corresponds to the quadratic loss (\ref{eq:quadratic-loss}) with $H=X^\top X$ and $z=2X^\top Y$. The inner product is Euclidean: $\innerprod{u}{{b}}=u^\top {b}$.
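Before specializing to the group structure, it is worth recording this correspondence numerically; the following short check (Python with \texttt{numpy}; data and dimensions are arbitrary) verifies that the Bregman divergence of this quadratic loss is the prediction distance $D_L(\beta,\beta')=\|X(\beta-\beta')\|_2^2$ used throughout this section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 5
X, Y = rng.standard_normal((n, p)), rng.standard_normal(n)

H, z = X.T @ X, 2 * X.T @ Y     # L(b) = <H b, b> - <z, b> + ||Y||_2^2
L = lambda b: b @ H @ b - z @ b + Y @ Y
b1, b2 = rng.standard_normal(p), rng.standard_normal(p)

# D_L(b1, b2) = L(b1) - L(b2) - <nabla L(b2), b1 - b2> = ||X(b1 - b2)||_2^2
D = L(b1) - L(b2) - (2 * H @ b2 - z) @ (b1 - b2)
assert np.isclose(D, np.sum((X @ (b1 - b2)) ** 2))
\end{verbatim}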
Now, we assume that $p=q m$, and the variables $\{1,\ldots,p\}$ are divided into $q$ non-overlapping blocks ${\Gamma}_1,\ldots,{\Gamma}_{q} \subset \{1,\ldots,p\}$ of size $m$ each. One method to take advantage of the group structure is to use the group Lasso method \cite{YuanLin06} with \begin{equation} R(\beta)= \lambda \|\beta\|_{{\Gamma},1} , \qquad \|\beta\|_{{\Gamma},1}=\sum_{j=1}^q \|\beta_{{\Gamma}_j}\|_2 . \label{eq:group-L1} \end{equation} Its dual norm is \[ \|\beta\|_{{\Gamma},\infty} = \max_j \|\beta_{{\Gamma}_j}\|_2 . \] Group $\ell_1$ regularization includes the standard $\ell_1$ regularization as a special case, where we choose $m=1$, $q=p$, and ${\Gamma}_j=\{j\}$. The group $\ell_1$ regularizer is a special case of (\ref{eq:struct-L1}), where we have \[ {\mathscr A}=\{A=(a_j): A \beta=(a_j^\top \beta)_{j=1,\ldots,q},\ a_j \in {\mathbb{R}}^p, \|a_j\|_2 \leq \lambda, \; {\mathrm{supp}}(a_j) \subset {\Gamma}_j\} . \] For a group-sparse ${\bar{\beta}}$, its group support is the smallest $S \subset \{1,\ldots,q\}$ such that ${\mathrm{supp}}({\bar{\beta}}) \subset {\mathbf S}=\cup_{k \in S} {\Gamma}_k$. We may define ${\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{{\Gamma}_j})$ to be ${\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{{\Gamma}_j})={\bar{\beta}}_{{\Gamma}_j}/\|{\bar{\beta}}_{{\Gamma}_j}\|_2$ when $j \in S$, and ${\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{{\Gamma}_j})=0$ when $j \notin S$. Using the notation of Section~\ref{sec:struct-L1}, we may take $W=(\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{{\Gamma}_j}))_{j=1,\ldots,q}$, and ${\mathscr B}=\{B=(b_j) \in {\mathscr A}: b_j=0 \text{ for all } j \in S \}$ in (\ref{eq:structG}). In fact, our computation does not directly depend on $W$ and ${\mathscr B}$. Instead, we may simply specify \[ e_S(W)=\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}) \quad \text{and} \quad e_{S^c}({\mathscr B})=\{b \in {\mathbb{R}}^p: \|b\|_{{\Gamma},\infty} \leq \lambda; {\mathrm{supp}}(b) \subset {\mathbf S}^c\} , \] and \[ \|\beta\|_{\mathscr B} = \lambda \|\beta_{{\mathbf S}^c}\|_{{\Gamma},1} \qquad \|b_{{\mathbf S}^c}\|_{{\mathscr B},D} = \|b_{{\mathbf S}^c}\|_{{\Gamma},\infty}/\lambda . \] This means that we may take $G$ in (\ref{eq:structG-int}) as $G=\{u:\; u_{\mathbf S}=\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}) \hbox{ and } \|u_{{\mathbf S}^c}\|_{{\Gamma},\infty} \leq \eta \lambda\}$ for some $0 \leq \eta \leq 1$, which implies that $R(\beta)-R_G(\beta) \geq (1-\eta) \lambda \|\beta_{{\mathbf S}^c}\|_{{\Gamma},1}$. The tangent space is ${\cal T}=\{u: {\mathrm{supp}}(u) \subset {\mathbf S}\}$. We further consider a target $\beta^*$ that satisfies (\ref{eq:target}), which we can rewrite as \[ 2 X^\top (X(\beta^* -{\bar{\beta}}_*)- \epsilon) = \tilde{a} + \tilde{b} , \] where ${\mathrm{supp}}(\tilde{b}) \subset {\mathbf S}^c$, and $\|\tilde{b}\|_{{\Gamma},\infty} = \tilde{\eta} \lambda$. We assume that $\|\tilde{a}\|_2$ is small. Note that we may choose $\lambda$ sufficiently large so that $\tilde{\eta}$ can be arbitrarily close to $0$. In particular, we may choose $\lambda \geq \|\tilde{b}\|_{{\Gamma},\infty}/\eta$ so that $\tilde{\eta} \leq \eta <1$. We are especially interested in the case of $\tilde{a}=0$, which can be achieved with the construction in (\ref{eq:target-opt}). \subsubsection*{Global Restricted Eigenvalue Analysis} Assume that $\lambda \geq \|\tilde{b}\|_{{\Gamma},\infty}/\eta$, and let $\tilde{\eta}=\|\tilde{b}\|_{{\Gamma},\infty}/\lambda$. We have $\tilde{\eta} \leq \eta$.
Therefore, in order to apply (\ref{eq:recovery-global-dc-quadratic-struct}), we may define the restricted eigenvalue as \[ \gamma =\inf \left\{2\|X \Delta \beta\|_2^2/\|\Delta \beta\|^2: 2\|X \Delta \beta\|_2^2 + \Delta \beta^\top (\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}) +\tilde{a}) + (\eta-\tilde{\eta}) \lambda \|\Delta \beta_{{\mathbf S}^c}\|_{{\Gamma},1} \leq 0 \right\} . \] We then obtain from (\ref{eq:recovery-global-dc-quadratic-struct}) \[ \| X ({\hat{\beta}}-\beta^*)\|_2^2+ (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \| X ({\bar{\beta}}-\beta^*)\|_2^2 + (2\gamma)^{-1} \|\tilde{a}+\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})\|_D^2 . \] If we choose $\tilde{a}=0$, and let $\|\cdot\|=\|\cdot\|_{{\Gamma},1}$ with $\|\cdot\|_D=\|\cdot\|_{{\Gamma},\infty}$, then \[ \| X ({\hat{\beta}}-\beta^*)\|_2^2+ (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \| X ({\bar{\beta}}-\beta^*)\|_2^2 + \frac{\lambda^2|S|}{4 \bar{\gamma}} , \] with \[ \bar{\gamma} = \inf \left\{\|X \Delta \beta\|_2^2/(\|\Delta \beta\|_{{\Gamma},1}^2/|S|): \Delta \beta_{\mathbf S}^\top {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}) + (\eta-\tilde{\eta}) \|\Delta \beta_{{\mathbf S}^c}\|_{{\Gamma},1} \leq 0 \right\} . \] The result is meaningful as long as $\bar{\gamma}>0$. Even for the standard $\ell_1$ regularizer, this condition is weaker than previous restricted eigenvalue conditions in the literature. In particular, it is weaker than the compatibility condition of \cite{vandeGeerB09} (which is the weakest condition in the earlier literature), which requires \[ \inf \left\{\|X \Delta \beta\|_2^2/(\|\Delta \beta\|_{{\Gamma},1}^2/|S|): (1-\tilde{\eta}) \|\Delta \beta_{{\mathbf S}^c}\|_{{\Gamma},1} \leq (1+\tilde{\eta}) \|\Delta \beta_{\mathbf S}\|_{{\Gamma},1} \right\} > 0. \] Our result replaces $\|\Delta \beta_{\mathbf S}\|_{{\Gamma},1}$ by $-\Delta \beta_{\mathbf S}^\top {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})$, which is a useful improvement because the former can be significantly larger than the latter. For $\ell_1$ analysis, the use of ${\mathrm{sgn}}({\bar{\beta}})$ has appeared in various studies such as \cite{Wainwright09,ChRePaWi10,CandesPlan09,CandesPlan11}. In fact, the calculation for Gaussian random design, which we shall perform next, depends on ${\mathrm{sgn}}({\bar{\beta}})$ and ${\mathrm{sgn}}_{\Gamma}({\bar{\beta}})$. \subsubsection*{Gaussian Random Design} Assume that $X$ is a Gaussian random design matrix as in (\ref{eq:gaussian-design}); then we can apply the analysis in Section~\ref{sec:gordon-struct}. We will first consider the standard $\ell_1$ regularizer with $m=1$, which requires the following estimate. \begin{proposition} Consider standard $\ell_1$ regularization with single-element groups. If $\tilde{\eta} < \eta$ and $p \geq 2|S|$, we have \begin{align*} &\inf_{\gamma>0} {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \inf_{\|b\|_\infty \leq 1} \| \gamma (\lambda {\mathrm{sgn}}({\bar{\beta}})+\tilde{a}) + \gamma (\eta-\tilde{\eta})\lambda b_{S^c} - \epsilon\|_2^2 \\ \leq& 2 |S| + \frac{2\ln (p/|S|-1)}{(\eta-\tilde{\eta})^2} \|{\mathrm{sgn}}({\bar{\beta}})+\tilde{a}/\lambda\|_2^2 .
\end{align*} \end{proposition} \begin{proof} Given $\gamma>0$, let $t=\gamma(\eta-\tilde{\eta}) \lambda$; we have \[ {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})}\inf_{\|b\|_\infty \leq 1} \| \gamma (\lambda {\mathrm{sgn}}({\bar{\beta}})+\tilde{a})+ \gamma (\eta-\tilde{\eta}) \lambda b_{S^c} - \epsilon\|_2^2 \leq a_0 + a_1 , \] where \[ a_0 = {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \| \gamma (\lambda {\mathrm{sgn}}({\bar{\beta}})+\tilde{a}) - \epsilon_S\|_2^2 = |S| + \gamma^2 \|\lambda {\mathrm{sgn}}({\bar{\beta}})+\tilde{a}\|_2^2 , \] and \begin{align*} a_1 =& {\mathbf E}_{\epsilon \sim N(0,I_{p\times p})} \inf_{\|b\|_\infty \leq t}\| b_{S^c} - \epsilon_{S^c}\|_2^2 \\ =& (p-|S|) {\mathbf E}_{\epsilon \sim N(0,1)} (|\epsilon|-t)_+^2 \\ =& (p-|S|) \int_{x=0}^\infty \frac{2}{\sqrt{2\pi}} x^2 \exp(-(x+t)^2/2) d x \\ \leq& (p-|S|) \int_{x=0}^\infty \frac{2}{\sqrt{2\pi}} x^2 \exp(-(x^2+t^2)/2) d x \leq (p-|S|) e^{-t^2/2} . \end{align*} By setting $t=\sqrt{2\ln (p/|S|-1)}$ and $\gamma=\sqrt{2\ln(p/|S|-1)}/((\eta-\tilde{\eta})\lambda)$, we have $a_1 \leq |S|$. This gives the bound. \end{proof} For the standard $\ell_1$ regularization ($m=1$), we obtain the following bound if $p \geq 2|S|$: given any $\eta \in (0,1]$, $g,\delta \geq 0$ such that $g+\delta \leq n/\sqrt{n+1}$, with probability at least \[ 1 - \frac{1}{2} \exp \left(-\frac{1}{2} (n/\sqrt{n+1}-g-\delta)^2\right) , \] we have either $\lambda \leq \|\tilde{b}\|_\infty/ \eta$, or \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{S^c}\|_1 \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + (4\delta^2)^{-1} \|\lambda{\mathrm{sgn}}({\bar{\beta}}) +\tilde{a}\|_2^2 , \] or \[ g^2 < 2 |S| + \frac{2\ln (p/|S| -1)}{(\eta-\|\tilde{b}\|_\infty/\lambda)^2} \|{\mathrm{sgn}}({\bar{\beta}})+\tilde{a}/\lambda\|_2^2 . \] Note that in the noise-free case of $\tilde{a}=\tilde{b}=0$, this shows that exact recovery can be achieved with large probability when $n > 2|S| (1+ \ln(p/|S|-1))$, and this sample complexity result is rather sharp. More generally for $m>1$, we have a similar bound with worse constants, as follows. \begin{proposition} If $\tilde{\eta} < \eta$ and $p \geq 2m |S|$, we have \begin{align*} &\inf_{\gamma>0} {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \inf_{\|b\|_{{\Gamma},\infty} \leq 1} \| \gamma (\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a}) + \gamma (\eta-\tilde{\eta})\lambda b_{{\mathbf S}^c} - \epsilon\|_2^2 \\ \leq& |S|(m+1) + \frac{(\sqrt{2\ln (q/|S|-1)}+\sqrt{m})^2}{(\eta-\tilde{\eta})^2} \|{\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a}/\lambda\|_2^2 . \end{align*} \end{proposition} \begin{proof} Given $\gamma>0$, let $t=\gamma (\eta-\tilde{\eta}) \lambda$. Let $\chi$ be a $\chi$-distributed random variable with $m$ degrees of freedom, with $\lambda_m$ being its expectation as defined in Theorem~\ref{thm:gordon}. Since $\chi$ is the singular value of a $1 \times m$ Gaussian matrix, similar to Theorem~\ref{thm:gordon}, we can apply the Gaussian concentration bound \cite{Pisier85} to obtain for all $\delta >0$: \[ {\mathbf P} \left[ \chi \geq \lambda_m + \delta \right] \leq 0.5 \exp \left(-\delta^2/2 \right) .
\] Now we assume $t\geq \lambda_m$, and \[ {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})}\inf_{\|b\|_{{\Gamma},\infty} \leq 1} \| \gamma (\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a})+ \gamma (\eta-\tilde{\eta}) \lambda b_{{\mathbf S}^c} - \epsilon\|_2^2 \leq a_0 + a_1 , \] where \[ a_0 = {\mathbf E}_{\epsilon \sim N(0,I_{p \times p})} \| \gamma (\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a}) - \epsilon_{\mathbf S}\|_2^2 = m |S| + \gamma^2 \|\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a}\|_2^2 , \] and \begin{align*} a_1 =& {\mathbf E}_{\epsilon \sim N(0,I_{p\times p})} \inf_{\|b\|_{{\Gamma},\infty} \leq t}\| b_{{\mathbf S}^c} - \epsilon_{{\mathbf S}^c}\|_2^2 \\ =& (q-|S|) {\mathbf E}_{\epsilon \sim N(0,I_{m \times m})} (\|\epsilon\|_2-t)_+^2 \\ =& - (q-|S|) \int_{x=0}^\infty x^2 d P(\chi \geq x+t) \\ \leq& 2 (q-|S|) \int_{x=0}^\infty x P(\chi \geq x+t) d x \\ \leq& 2 (q-|S|) \int_{x=0}^\infty 0.5 x \exp(-(x+t- \lambda_m)^2/2) d x \\ \leq& (q-|S|) \exp(-(t-\lambda_m)^2/2) \int_{x=0}^\infty x \exp(-x^2/2) d x \\ =& (q-|S|) \exp(-(t-\lambda_m)^2/2) . \end{align*} By setting $t=\lambda_m + \sqrt{2\ln (q/|S|-1)}$ and $\gamma=t/((\eta-\tilde\eta)\lambda)$, we have $a_1 \leq |S|$. This gives the desired bound using the estimate $\lambda_m \leq \sqrt{m}$. \end{proof} We obtain the following bound for group-Lasso with $m>1$ when $q \geq 2|S|$: given any $\eta \in (0,1]$, $g,\delta \geq 0$ such that $g+\delta \leq n/\sqrt{n+1}$, with probability at least \[ 1 -\frac{1}{2} \exp \left(-\frac{1}{2} (n/\sqrt{n+1}-g-\delta)^2\right) , \] we have either $\lambda \leq \|\tilde{b}\|_{{\Gamma},\infty}/ \eta$, or \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + (4\delta^2)^{-1} \|\lambda{\mathrm{sgn}}_{\Gamma}({\bar{\beta}}) +\tilde{a}\|_2^2 , \] or \[ g^2 < |S|(m+1) + \frac{(\sqrt{2\ln (q/|S| -1)}+\sqrt{m})^2}{(\eta-\|\tilde{b}\|_{{\Gamma},\infty}/\lambda)^2} \|{\mathrm{sgn}}_{\Gamma}({\bar{\beta}})+\tilde{a}/\lambda\|_2^2 . \] Note that in the noise-free case of $\tilde{a}=\tilde{b}=0$, this shows that exact recovery can be achieved with large probability when $n > |S|(m+1) + |S|(\sqrt{2\ln (q/|S| -1)}+\sqrt{m})^2=O(|S| (m+ \ln (q/|S|)))$. If we consider the scenario that the noise $\epsilon \sim N(0,\sigma^2 I_{n \times n})$ is Gaussian, then we may set $\lambda$ to be of the order $\sigma \sqrt{n (m+\ln (q/|S|))}$, and with large probability, we have $\lambda > \|\tilde{b}\|_{{\Gamma},\infty}/\eta$, with a nonzero $\tilde{a}$ such that $\|\tilde{a}\|_2^2= O(|S| \lambda^2)$. This gives the following error bound with $\delta$ chosen at order $\sqrt{n}$: \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + O(|S| \lambda^2/n) . \] With the optimal choice of $\lambda$, we have \[ \|X({\hat{\beta}}-\beta^*)\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \|X({\bar{\beta}}-\beta^*)\|_2^2 + O(\sigma^2 |S| (m + \ln (q/|S|))) . \] \subsubsection*{Tangent Space Analysis} In this analysis, we assume that ${\mathrm{supp}}(\tilde{a}) \subset {\mathbf S}$. We can then define \[ Q_G^{\cal T} = {\bar{\beta}} + \Delta Q, \quad \Delta Q_{\mathbf S} = -0.5 (X_{\mathbf S}^\top X_{\mathbf S})^{-1} (\lambda {\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{\mathbf S})+\tilde{a}_{\mathbf S}) \quad \text{ and } \quad \Delta Q_{{\mathbf S}^c}=0 .
\] We know that $Q_G^{\cal T}$ is a dual certificate if \[ \|X_{{\mathbf S}^c}^\top X_{\mathbf S} (X_{\mathbf S}^\top X_{\mathbf S})^{-1}{\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{\mathbf S}) \|_{{\Gamma},\infty} \leq \eta- \|X_{{\mathbf S}^c}^\top X_{\mathbf S} (X_{\mathbf S}^\top X_{\mathbf S})^{-1}\tilde{a}_{\mathbf S} -\tilde{b}_{{\mathbf S}^c}\|_{{\Gamma},\infty}/\lambda . \] This is essentially the irrepresentable condition of \cite{Bach08-groupLasso}, which reduces to the $\ell_1$ irrepresentable condition of \cite{ZhaoYu06} when $m=1$. This condition implies the following oracle inequality: \[ \|X(\beta^*-{\hat{\beta}})\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \|X(\beta^*-{\bar{\beta}})\|_2^2 + 0.25 \lambda^2 \left\| (X_{\mathbf S}^\top X_{\mathbf S})^{-1/2} ({\mathrm{sgn}}_{\Gamma}({\bar{\beta}}_{\mathbf S}) + \tilde{a}_{\mathbf S}/\lambda)\right\|_2^2 . \] This oracle inequality generalizes a simpler result for $m=1$ in \cite{CandesPlan09}. \subsubsection*{Simple Parameter Estimation Bounds} Next, we consider the parameter estimation bound using Proposition~\ref{prop:param-est}. First, we consider the case of choosing ${\tilde{\cal T}}={\cal T}$; let $\gamma_S$ be the smallest eigenvalue of $X_{\mathbf S}^{\top}X_{\mathbf S}$. If we assume $\beta^* \approx {\bar{\beta}}$ and $\tilde{a}$ is small, we can expect a bound of the form: \[ \|X(\beta^*-{\hat{\beta}})\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \delta = O(\lambda^2 |S|/\gamma_S) , \] where $\lambda=O(\sigma \sqrt{n (m+\ln q)})$. Now, if we let $X_{{\Gamma}_j}$ be the $j$-th group-column (with indices ${\Gamma}_j$) of $X$, then \[ \mathrm{cor}({\cal T},{\cal T}^\perp) \leq \sup \left\{ \|(X_{\mathbf S}^\top X_{\mathbf S})^{-1/2} X_{\mathbf S}^\top X \beta_{{\mathbf S}^c}\|_2 : \lambda \|\beta_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \delta' \right\} \leq \gamma_S^{-1/2} \max_{j \in S^c} \|X_{\mathbf S}^\top X_{{\Gamma}_j}\|_{\mathrm{sp}} \delta' /\lambda , \] where \[ \gamma_S = \inf \, \{ \|X_{\mathbf S} \beta_{\mathbf S} \|_2^2 : \|\beta_{\mathbf S}\|_2 =1\} \] is, as above, the smallest eigenvalue of $X_{\mathbf S}^\top X_{\mathbf S}$. Proposition~\ref{prop:param-est} gives $\|({\hat{\beta}}-\beta^*)_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \delta'/\lambda$ and \[ \|({\hat{\beta}}-\beta^*)_{\mathbf S}\|_2 \leq \sqrt{\gamma_S^{-1} (1-\eta) \delta'} + 2 (\delta'/\lambda) \gamma_S^{-1} \max_{j \in S^c} \|X_{\mathbf S}^\top X_{{\Gamma}_j}\|_{\mathrm{sp}} , \] where $\delta' = \delta/(1-\eta) + \lambda \|(\beta^*)_{{\mathbf S}^c}\|_{{\Gamma},1}$, and here we use $\|\cdot\|_{\mathrm{sp}}$ to denote the spectral norm of a matrix. For the sake of illustration, we will next assume the standard error bound $\delta' = O(\lambda^2 |S|/\gamma_S)$; the above result then leads to the following bounds \[ \|({\hat{\beta}}-\beta^*)_{{\mathbf S}^c}\|_{{\Gamma},1} = O(\lambda |S|/\gamma_S), \quad \|({\hat{\beta}}-\beta^*)_{\mathbf S}\|_2 \leq (\lambda \sqrt{|S|}/\gamma_S) \cdot O \left(1 + \sqrt{|S|} \gamma_S^{-1} \max_{j \in S^c} \|X_{\mathbf S}^\top X_{{\Gamma}_j}\|_{\mathrm{sp}}\right) . \] If $X$ is very weakly correlated, $X_{\mathbf S}^\top X_{{\Gamma}_j}$ will be small. In the ideal case $\gamma_S^{-1} \max_{j \in S^c} \|X_{\mathbf S}^\top X_{{\Gamma}_j}\|_{\mathrm{sp}}=O(1/\sqrt{|S|})$, we have \[ \|{\hat{\beta}}-\beta^*\|_{{\Gamma},1} = O(\lambda |S|/\gamma_S) , \] which is of the optimal order.
However, in the pessimistic case of $\gamma_S^{-1} \max_{j \in S^c} \|X_{\mathbf S}^\top X_{{\Gamma}_j}\|_{\mathrm{sp}} =O(1)$, we obtain
\[
\|{\hat{\beta}}-\beta^*\|_{{\Gamma},1} = O(\lambda |S|^{3/2}/\gamma_S) ,
\]
which has an extra factor of $\sqrt{|S|}$. Using the above derivation, the 2-norm error bound is always of the order
\[
\|{\hat{\beta}}-\beta^*\|_2 \leq \|({\hat{\beta}}-\beta^*)_{\mathbf S}\|_2 + \|({\hat{\beta}}-\beta^*)_{{\mathbf S}^c}\|_{{\Gamma},1} = O(\lambda |S|/\gamma_S) ,
\]
which has an extra factor of $\sqrt{|S|}$ compared to the ideal bound of $\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|})$ in the earlier literature such as \cite{HuangZhang09,LMTG09} under appropriately defined global restricted eigenvalue assumptions. It should be mentioned that the assumptions we have made so far are relatively weak, since no global restricted eigenvalue assumptions are imposed; the resulting bound $\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda |S|)$ might therefore be the best possible under these assumptions. In order to obtain the ideal bound of $\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|})$ (as appeared in the earlier literature), we will consider adding extra assumptions.
\subsubsection*{Refined Parameter Estimation Bounds}
The first extra assumption we will make is that sparse eigenvalues are bounded from above, which prevents the pessimistic case where the $X_j$ are highly correlated for $j \in S^c$. Such correlation can be controlled with the upper sparse eigenvalue, defined as
\[
\rho^+(k) = \sup\{\|X \beta\|_2^2/\|\beta\|_2^2 : |{\mathrm{supp}}_{\Gamma}(\beta)| \leq k\} ,
\]
where ${\mathrm{supp}}_{\Gamma}(\beta) \subset \{1,\ldots,q\}$ is the (smallest) index set for groups of $\{{\Gamma}_j\}$ that cover ${\mathrm{supp}}(\beta)$. Using this notation, if we choose the constraint set $\Omega$ and $\beta^*$ such that $\beta +\beta^* \in \Omega$ implies $\|\beta\|_{{\Gamma},\infty} \leq M$ for some $M \leq \delta'/\lambda$, then it can be shown using the standard shifting argument for group $\ell_1$ regularization (e.g., \cite{HuangZhang09}) that for all positive integers $k \leq \delta'/(\lambda M)$:
\[
\mathrm{cor}({\cal T},{\cal T}^\perp) \leq \sup \left\{ \|X \beta_{{\mathbf S}^c}\|_2 : \|\beta\|_{{\Gamma},\infty} \leq M , \; \lambda \|\beta_{{\mathbf S}^c}\|_{{\Gamma},1} \leq \delta' \right\} \leq 2 \rho^+(k-1)^{1/2} \delta' /(\lambda \sqrt{k}) .
\]
This implies that
\[
\|({\hat{\beta}}-\beta^*)_{{\mathbf S}}\|_2 \leq \sqrt{\gamma_S^{-1} (1-\eta) \delta'} + 4 \delta' \gamma_S^{-1/2} \rho^+(k-1)^{1/2}/(\lambda \sqrt{k}) ,
\]
and
\[
\|({\hat{\beta}}-\beta^*)_{{\mathbf S}^c}\|_2 \leq \sqrt{\delta' M/\lambda} \leq \delta' /(\lambda \sqrt{k}) .
\]
Therefore, assuming the standard error bound of $\delta' = O(\lambda^2 |S|/\gamma_S)$, we obtain
\[
\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|}/\gamma_S) \inf_{k \leq \delta'/(\lambda M)} \left[ 1 + \sqrt{|S|/k} \sqrt{\rho^+(k)/\gamma_S} \right] .
\]
If $M$ is sufficiently small, then we can take $k$ sufficiently large so that $|S|=O(k)$, and it is possible to obtain an error bound of $\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|})$. If we do not impose the $\|\cdot\|_{{\Gamma},\infty}$ norm constraint on $\Omega$, then another method is to choose ${\tilde{\cal T}}$ larger than ${\cal T}$, which is the approach employed in \cite{CandesPlan11} for the standard $\ell_1$ regularization.
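We remark in passing that $\rho^+(k)$ can also be estimated numerically, although only approximately for large $q$: since it is a maximum over all $\binom{q}{k}$ group supports, sampling supports yields a lower estimate. A minimal sketch (our own illustration, enumerating supports when feasible):
\begin{verbatim}
import numpy as np
from itertools import combinations
from math import comb

def rho_plus(X, groups, k, n_samples=2000, seed=0):
    # rho^+(k): sup of the largest eigenvalue of X_T^T X_T over supports T
    # made of k groups; sampled when comb(q,k) is large, in which case the
    # result is only a lower estimate
    rng = np.random.default_rng(seed)
    q = len(groups)
    if comb(q, k) <= n_samples:
        supports = combinations(range(q), k)
    else:
        supports = (rng.choice(q, size=k, replace=False)
                    for _ in range(n_samples))
    best = 0.0
    for T in supports:
        cols = np.concatenate([groups[j] for j in T])
        XT = X[:, cols]
        best = max(best, np.linalg.eigvalsh(XT.T @ XT).max())
    return best
\end{verbatim}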
Here we consider a similar assumption for the group-Lasso, where we define for all integers $k \geq 1$:
\[
\gamma_{S,k} = \inf \, \{ \|X \beta\|_2^2: |{\mathrm{supp}}_{\Gamma}(\beta)\setminus S| < k, \|\beta\|_2=1 \} .
\]
It is clear that $\gamma_S=\gamma_{S,1}$. Given any $k$ such that $\gamma_{S,k}$ is not too small, we may define
\[
{\tilde{\cal T}}=\{ \beta: {\mathrm{supp}}_{\Gamma}(\beta) \subset \tilde{S} \}
\]
and
\[
\tilde{S} =S \cup \{\text{indices $j \notin S$ of the $k-1$ largest values of $\|({\hat{\beta}}-\beta^*)_{{\Gamma}_j}\|_2$}\} .
\]
The smallest eigenvalue of $H_{\tilde{\cal T}}$ is no smaller than $\gamma_{S,k}$, and we also have $\|({\hat{\beta}}-\beta^*)_{\tilde{{\mathbf S}}^c}\|_{{\Gamma},\infty} \leq M=\|({\hat{\beta}}-\beta^*)_{{\mathbf S}^c}\|_{{\Gamma},1}/k \leq \delta'/(k \lambda)$. Using the same derivation as before, we have
\[
\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|}/\gamma_{S,k}) \left[ 1 + \sqrt{|S|/k} \sqrt{\rho^+(k)/\gamma_{S,k}} \right] .
\]
This means that if we can choose $k$ at the order of $|S|$ such that $\rho^+(k)/\gamma_{S,k}=O(1)$, then we have
\[
\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda \sqrt{|S|}) .
\]
In the standard $\ell_1$ case, the requirement of $\rho^+(k)/\gamma_{S,k}=O(1)$ is also needed in the so-called ``RIP-less'' approach of \cite{CandesPlan11} to obtain the ideal bound for $\|{\hat{\beta}}-\beta^*\|_2$. The approach is called ``RIP-less'' because this condition is weaker than the classical RIP condition of \cite{CandesTao07} (or its group-Lasso counterpart in \cite{HuangZhang09}), which is far more restrictive. This bound is also flexible, as we can choose any $k\geq 1$: in the worst case of $k=1$, we have $\|{\hat{\beta}}-\beta^*\|_2 = O(\lambda |S|)$ with an extra $\sqrt{|S|}$ factor. This extra factor can be removed as long as we take $k$ at the order of $|S|$.
\subsection{Matrix completion}
Let ${\bar\Omega}$ be the set of $p \times q$ matrices, and assume that the inner product is defined as $\innerprod{\beta}{\beta'}= {\mathrm{tr}}(\beta^\top \beta')$. We consider measurement matrices $x_1,\ldots,x_n$ and observe
\[
y_i = \innerprod{x_i}{{\bar{\beta}}_*}+ \epsilon_i ,
\]
where $\{\epsilon_i\}$ are noise variables. In order to recover ${\bar{\beta}}_*$, we consider the following convex optimization problem:
\[
{\hat{\beta}} = \arg\min\left[ \sum_{i=1}^n (\innerprod{x_i}{\beta} - y_i)^2 + \lambda \|\beta\|_* \right] ,
\]
where $\|\beta\|_*$ is the trace-norm of the matrix $\beta$, defined as the sum of its singular values. In the following, we will briefly discuss results that can be obtained from our analysis using the tangent space approach. For simplicity, we will keep the discussion at a relatively high level, with some detailed discussions skipped. We assume that ${\bar{\beta}}$ is of rank $r$, and ${\bar{\beta}}=U \Sigma V^\top$ is the SVD of ${\bar{\beta}}$, where $U$ and $V$ are $p \times r$ and $q \times r$ matrices. The tangent space is defined as ${\cal T}=\{\beta: P_{\cal T}(\beta) = \beta\}$, where $P_{\cal T}(\beta) = U U^\top \beta + \beta V V^\top - U U^\top \beta V V^\top$. Using the notation of Section~\ref{sec:struct-L1}, we may take $e_S(W)=U V^\top$ and $e_{S^c}({\mathscr B})=\{b \in {\cal T}^\perp: \|b\|_{\mathrm{sp}} \leq \lambda\}$ in (\ref{eq:structG}). Therefore
\[
\|\beta\|_{\mathscr B} = \lambda \|P_{\cal T}^\perp \beta\|_* \qquad \|P_{\cal T}^\perp b\|_{{\mathscr B},D} = \| P_{\cal T}^\perp b\|_{\mathrm{sp}}/\lambda .
\] This means that we may take $G$ in (\ref{eq:structG-int}) as $G=\{u: \; P_{\cal T} u=\lambda U V^\top \; \& \; \|P_{\cal T}^\perp u\|_{\mathrm{sp}} \leq \eta \lambda\}$ for some $0 \leq \eta \leq 1$, which implies that $R(\beta)-R_G(\beta) \geq (1-\eta) \lambda \|P_{\cal T}^\perp \beta\|_*$. We further consider a target $\beta^*$ that satisfies (\ref{eq:target}), which we can rewrite as
\[
2 \sum_{i=1}^n x_i (\innerprod{x_i}{\beta^* -{\bar{\beta}}_*}- \epsilon_i) = \tilde{a} + \tilde{b} ,
\]
where $\tilde{b} \in {\cal T}^\perp$ and $\|\tilde{b}\|_{\mathrm{sp}} = \tilde{\eta} \lambda$. We assume that $\|\tilde{a}\|_2$ is small. For matrix completion, we assume that $\{x_i\}$ are matrices of the form $e_{a,b}$ with 1 at entry $(a,b)$ and 0 elsewhere, where $(a,b)$ is chosen uniformly at random. It can be shown using techniques of \cite{CandesR09,Recht09} that under appropriate incoherence conditions, a tangent space dual certificate satisfying (\ref{eq:irrep-struct}) can be constructed with large probability. Due to the space limitation, we skip the details. This leads to
\[
D_L(\beta^*,{\hat{\beta}})+ (1-\eta) \lambda \|P_{\cal T}^\perp {\hat{\beta}} \|_* \leq D_L(\beta^*,{\bar{\beta}}) + \delta, \quad \delta= 0.25 \innerprod{\lambda U V^\top +\tilde{a}}{H_{\cal T}^{-1} (\lambda U V^\top +\tilde{a})} .
\]
Note that for sufficiently large $n$, the smallest eigenvalue of $H_{\cal T}$ can be lower bounded by a quantity of order $n/(pq)$. Since $\innerprod{\lambda UV^\top}{\lambda UV^\top}= \lambda^2 r$, and we may generally choose $\lambda$ such that $\innerprod{\tilde{a}}{\tilde{a}}= O(\lambda^2 r)$, we thus obtain the following oracle inequality for matrix completion:
\[
D_L(\beta^*,{\hat{\beta}})+ (1-\eta) \lambda \|P_{\cal T}^\perp {\hat{\beta}} \|_* \leq D_L(\beta^*,{\bar{\beta}}) + O(\lambda^2 p q r/n) .
\]
If the $\epsilon_i$ are iid Gaussian noise $N(0,\sigma^2)$, then we may choose $\lambda$ at the order $\sigma \sqrt{n \ln \max(p,q)/\min(p,q)}$. This gives
\[
D_L(\beta^*,{\hat{\beta}})+ (1-\eta) \lambda \|P_{\cal T}^\perp {\hat{\beta}} \|_* \leq D_L(\beta^*,{\bar{\beta}}) + O(\sigma^2 \max(p,q) r \ln (p+q)) .
\]
In the noise-free case, we can let $\lambda \to 0$, and exact recovery is obtained. This complements a related result of \cite{KoTsLo10} that does not lead to exact recovery even when $\sigma=0$. In the noisy case, parameter estimation bounds can be obtained in a manner analogous to the parameter estimation bound for group $\ell_1$ regularization. Due to the space limitation, we will leave the details to a dedicated report.
\subsection{Mixed norm regularization}
\label{sec:mixed-norm}
The purpose of this example is to show that the dual certificate analysis can be applied to more complex regularizers that may be difficult to analyze using traditional ideas such as the RIP analysis. The analysis is similar to that of group $\ell_1$ regularization but with more complex calculations. For simplicity, we will only provide a sketch of the analysis while skipping some of the details. We still consider the regression problem
\[
y= X {\bar{\beta}}_* + \epsilon ,
\]
where for simplicity we only consider Gaussian noise $\epsilon \sim N(0,\sigma^2 I_{n \times n})$. We assume that $p=q m$, and the variables $\{1,\ldots,p\}$ are divided into $q$ non-overlapping blocks ${\Gamma}_1,\ldots,{\Gamma}_{q} \subset \{1,\ldots,p\}$, each block of size $m$. The standard sparse regularization methods either use the Lasso regularizer of (\ref{eq:L1}) or the group-Lasso regularizer of (\ref{eq:group-L1}).
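The difference between the two regularizers can be made concrete through their proximal maps, the basic building blocks of standard solvers: the Lasso shrinks coordinates individually, while the group-Lasso shrinks or kills whole blocks. A minimal sketch of these two standard operators (shown purely for illustration; not part of the analysis itself):
\begin{verbatim}
import numpy as np

def prox_lasso(beta, tau):
    # coordinatewise soft-thresholding: prox of tau * ||.||_1
    return np.sign(beta) * np.maximum(np.abs(beta) - tau, 0.0)

def prox_group_lasso(beta, groups, tau):
    # blockwise shrinkage: prox of tau * sum_j ||beta_{Gamma_j}||_2;
    # a group is either shrunk toward zero or zeroed out as a whole
    out = beta.copy()
    for g in groups:
        nrm = np.linalg.norm(beta[g])
        out[g] = 0.0 if nrm <= tau else (1.0 - tau / nrm) * beta[g]
    return out
\end{verbatim}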
Let $S_S={\mathrm{supp}}({\bar{\beta}})$ and $S_{\Gamma}={\mathrm{supp}}_{\Gamma}({\bar{\beta}})$. Under suitable restricted strong convexity conditions, the following oracle inequality holds for the Lasso regularizer (\ref{eq:L1}):
\[
\|X(\beta^*-{\hat{\beta}})\|_2^2+ (1-\eta) \lambda \|{\hat{\beta}}_{S_S^c}\|_{1} \leq \|X(\beta^*-{\bar{\beta}})\|_2^2 + O(\sigma^2 n |S_S| \ln p /\gamma_{S_S}) ,
\]
and the following oracle inequality holds for the group Lasso regularizer (\ref{eq:group-L1}):
\[
\|X(\beta^*-{\hat{\beta}})\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}_{{\mathbf S}_{\Gamma}^c}\|_{{\Gamma},1} \leq \|X(\beta^*-{\bar{\beta}})\|_2^2 + O(\sigma^2 n |S_{\Gamma}| (m + \ln q) /\gamma_{S_{\Gamma}}) .
\]
Note that we always have $|S_S| \leq |S_{\Gamma}| m$. By comparing the above two oracle inequalities, we can see that the benefit of using group sparsity arises when $|S_S| \approx |S_{\Gamma}| m$, which means that the sparsity pattern occurs in groups, and the group structure is correct. In this case, the dimension dependency reduces from $|S_S| \ln p$ to $|S_{\Gamma}| \ln q \approx m^{-1} |S_S| \ln q$. However, if some of the signals do not occur in groups, then it is possible that $|S_{\Gamma}| m$ is much larger than $|S_S|$, and in this case the Lasso is superior to the group Lasso. It is natural to ask whether it is possible to combine the benefits of the Lasso and group Lasso regularizers. Assume that ${\bar{\beta}}$ is decomposed into two parts ${\bar{\beta}}={\tilde{\beta}}'+{\tilde{\beta}}''$ so that ${\tilde{\beta}}''$ covers the nonzero coefficients of ${\bar{\beta}}$ that occur in groups, and ${\tilde{\beta}}'$ covers the nonzero coefficients of ${\bar{\beta}}$ that do not occur in groups. Ideally we would like to achieve an oracle inequality of the form
\begin{align}
&\|X(\beta^*-{\hat{\beta}})\|_2^2 + (1-\eta) \lambda \|{\hat{\beta}}\| \label{eq:opt-decomp} \\
\leq& \|X(\beta^*-{\bar{\beta}})\|_2^2 + O\left(\frac{\sigma^2 n}{\gamma} \left(|{\mathrm{supp}}({\tilde{\beta}}')| \ln p + |{\mathrm{supp}}_{\Gamma}({\tilde{\beta}}'')|(m + \ln q)\right) \right) \nonumber \\
=& \|X(\beta^*-{\bar{\beta}})\|_2^2 + O\left(\frac{\sigma^2 n}{\gamma} \left(|{\mathrm{supp}}({\bar{\beta}}) \setminus \cup_{j \in \tilde{S}} {\Gamma}_j | \ln p + |\tilde{S}|(m + \ln q)\right) \right) , \nonumber
\end{align}
where $\|{\hat{\beta}}\|$ is a certain seminorm of ${\hat{\beta}}$, and $\tilde{S}=\{j: (m + \ln q) \leq c |{\mathrm{supp}}({\bar{\beta}}_{{\Gamma}_j})|\ln p\}$ for some constant $c>0$. We note that the optimal decomposition can be achieved by taking ${\tilde{\beta}}'_{{\Gamma}_j}=0$ with ${\tilde{\beta}}''_{{\Gamma}_j}={\bar{\beta}}_{{\Gamma}_j}$ when $j \in \tilde{S}$, and ${\tilde{\beta}}''_{{\Gamma}_j}=0$ with ${\tilde{\beta}}'_{{\Gamma}_j}={\bar{\beta}}_{{\Gamma}_j}$ otherwise. In the following, we show that the oracle inequality of (\ref{eq:opt-decomp}) can be achieved via the mixed norm regularizer defined below:
\begin{equation}
R(\beta) =\inf_{\beta=\beta'+\beta''} \left[ \lambda_1 \|\beta'\|_1 + \lambda_{\Gamma} \|\beta''\|_{{\Gamma},1} \right] .
\label{eq:reg-mixed}
\end{equation}
This mixed regularizer is the infimal convolution of the Lasso and group Lasso regularizers, and it is a special case of \cite{Jacob09icml}.
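Although (\ref{eq:reg-mixed}) is defined through an inner minimization, it can be evaluated directly: by standard convex duality, the conjugate of an infimal convolution of two norms is the sum of their conjugates, so $R$ is the support function of the intersection of the two dual balls and decomposes over groups into $\sup\{\innerprod{u}{v_{{\Gamma}_j}}: \|u\|_\infty \leq \lambda_1, \|u\|_2 \leq \lambda_{\Gamma}\}$, which a one-dimensional bisection solves. A minimal sketch of this evaluation (our own derivation, given only to make the regularizer concrete):
\begin{verbatim}
import numpy as np

def mixed_norm_group(v, lam1, lamG, iters=60):
    # R_j(v) = sup{ <u, v> : |u_i| <= lam1, ||u||_2 <= lamG };
    # KKT gives u_i = sign(v_i)*min(lam1, |v_i|/mu); bisect on mu >= 0
    a = np.abs(v)
    if np.linalg.norm(lam1 * np.sign(v)) <= lamG:
        return lam1 * a.sum()                 # box constraint binds everywhere
    lo, hi = 0.0, np.linalg.norm(v) / lamG    # at hi the 2-norm is <= lamG
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.linalg.norm(np.minimum(lam1, a / mu)) > lamG:
            lo = mu
        else:
            hi = mu
    return float(np.minimum(lam1, a / hi) @ a)

def mixed_norm(v, groups, lam1, lamG):
    # R(v) of (eq:reg-mixed), evaluated group by group
    return sum(mixed_norm_group(v[g], lam1, lamG) for g in groups)
\end{verbatim}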
If we can prove an oracle inequality of the form (\ref{eq:opt-decomp}) for this regularizer, then we can adaptively decompose the signal ${\bar{\beta}}$ into two parts $\beta'$ and $\beta''$ in order to achieve the most significant benefits, with the standard sparsity bound for $\beta'$ and the group sparsity bound for $\beta''$ (without knowing the decomposition a priori). We will consider the decomposed parametrization $[\beta',\beta'']$, and the mixed norm regularizer (\ref{eq:reg-mixed}) becomes a special case of (\ref{eq:struct-L1}). Although the loss function $L(\cdot)$ is not strongly convex with respect to this parametrization, this does not cause problems because we are only interested in $\beta=\beta'+\beta''$. Since $L(\cdot)$ is strongly convex with respect to $\beta$ with an appropriate tangent space ${\cal T}$, we only need to consider the direction along $\beta=\beta'+\beta''$ when applying the results. In this regard, it is easy to verify that at the optimal decomposition in (\ref{eq:reg-mixed}), there exist $u' \in \partial \|\beta'\|_1$ and $u'' \in \partial \|\beta''\|_{{\Gamma},1}$ such that $\lambda_1 u' = \lambda_{\Gamma} u''$. Moreover, for any such $(u',u'')$, $\lambda_1 u' \in \partial R(\beta)$. In order to define ${\cal T}$, we first define ${\mathscr B}$. Consider $S_{\Gamma}=\{j: \lambda_{\Gamma} < 2 \lambda_1 \|{\mathrm{sgn}}({\bar{\beta}})_{{\Gamma}_j}\|_2\}$, with the corresponding support ${\mathbf S}_{\Gamma}=\cup_{j \in S_{\Gamma}} {\Gamma}_j$. The meaning of $S_{\Gamma}$ is that groups in $S_{\Gamma}$ are allowed to use both standard and group sparsity to represent ${\bar{\beta}}$, while groups in $S_{\Gamma}^c$ always use standard sparsity only. The set ${\mathbf S}_{\Gamma}$ will expand the tangent space for the nonzero group sparsity elements. We also define the tangent space support set for single sparsity elements as ${\mathbf S}_1= {\mathrm{supp}}({\bar{\beta}}) \cup {\mathbf S}_{\Gamma}$. Let
\begin{equation}
[{\bar{\beta}}',{\bar{\beta}}'']=\arg\min_{(\beta',\beta''): {\bar{\beta}}=\beta'+\beta''}\left[ \lambda_1 \|\beta'\|_1 + \lambda_{\Gamma} \|\beta''\|_{{\Gamma},1} \right] .
\label{eq:mixed-norm-optdecomp}
\end{equation}
It satisfies $\lambda_1 \nabla \|{\bar{\beta}}'\|_1 =\lambda_{\Gamma} \nabla \|{\bar{\beta}}''\|_{{\Gamma},1}$, and $\nabla R({\bar{\beta}})=\lambda_1 \nabla \|{\bar{\beta}}'\|_1 =\lambda_{\Gamma} \nabla \|{\bar{\beta}}''\|_{{\Gamma},1}$, where $\nabla$ denotes an appropriate subgradient. For ${\Gamma}_j$ such that ${\bar{\beta}}''_{{\Gamma}_j} \neq 0$, we obtain from $\lambda_1 \nabla \|{\bar{\beta}}'\|_1 =\lambda_{\Gamma} \nabla \|{\bar{\beta}}''\|_{{\Gamma},1}$ that $[\nabla \|{\bar{\beta}}'_{{\Gamma}_j}\|_{1}]_i \neq 0$ only when ${\bar{\beta}}_i \neq 0$ for $i \in {\Gamma}_j$; therefore $\|(\nabla \|{\bar{\beta}}'_{{\Gamma}_j}\|_{1})\|_2 \leq \|{\mathrm{sgn}}({\bar{\beta}})_{{\Gamma}_j}\|_2$, and thus $\lambda_{\Gamma} \leq \lambda_1 \|(\nabla \|{\bar{\beta}}'_{{\Gamma}_j}\|_{1})\|_2 \leq \lambda_1 \|{\mathrm{sgn}}({\bar{\beta}})_{{\Gamma}_j}\|_2$. This implies that $j \in S_{\Gamma}$, and thus ${\mathrm{supp}}({\bar{\beta}}'') \subset {\mathbf S}_{\Gamma}$.
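The set $S_{\Gamma}$ is directly computable from ${\bar{\beta}}$ and the two regularization parameters. A minimal sketch implementing the rule above, together with the parameter choices $\lambda_1 = c_1\sigma\sqrt{n \ln p}$ and $\lambda_{\Gamma} = c_2\sigma\sqrt{n(m+\ln q)}$ used below (the constants $c_1, c_2$ are left unspecified in the text; setting them to 1 here is purely illustrative):
\begin{verbatim}
import numpy as np

def lambda_pair(n, p, q, m, sigma, c1=1.0, c2=1.0):
    # lambda_1 = c1*sigma*sqrt(n ln p), lambda_Gamma = c2*sigma*sqrt(n(m+ln q))
    return (c1 * sigma * np.sqrt(n * np.log(p)),
            c2 * sigma * np.sqrt(n * (m + np.log(q))))

def S_gamma(beta_bar, groups, lam1, lamG):
    # S_Gamma = { j : lam_Gamma < 2*lam1*||sgn(beta_bar)_{Gamma_j}||_2 },
    # where ||sgn(.)_{Gamma_j}||_2 = sqrt(number of nonzeros in group j)
    return {j for j, g in enumerate(groups)
            if lamG < 2.0 * lam1 * np.sqrt(np.count_nonzero(beta_bar[g]))}
\end{verbatim}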
Now we can define $W$ and ${\mathscr B}$ as
\[
e_S(W) = \lambda_1 \nabla \|{\bar{\beta}}'\|_1=\lambda_{\Gamma} \nabla \|{\bar{\beta}}''\|_{{\Gamma},1} ,
\]
where we can take $[\nabla \|{\bar{\beta}}'\|_1]_j=0$ when $j \notin {\mathbf S}_1$; and define
\[
e_{S^c}({\mathscr B})=\{u_{{\mathbf S}_1^c} : \|u_{{\mathbf S}_1^c}\|_\infty \leq \lambda_1 \; \& \; \|u_{{\mathbf S}_{\Gamma}^c}\|_{{\Gamma},\infty} \leq 0.5 \lambda_{\Gamma} \} .
\]
With the above choices, we have for all $u \in e_{S^c}({\mathscr B})$, $e_S(W) + u \in \partial R({\bar{\beta}})$, because it can be readily checked that $e_S(W) + u \in \partial (\lambda_1 \|{\bar{\beta}}'\|_1) \cap \partial (\lambda_{\Gamma} \|{\bar{\beta}}''\|_{{\Gamma},1})$. Moreover, we have
\[
\sup_{u \in e_{S^c}({\mathscr B})} \innerprod{u}{\beta} = \min_{\beta_{{\mathbf S}_1^c}=\beta'_{{\mathbf S}_1^c}+\beta''_{{\mathbf S}_{\Gamma}^c}} \left[\lambda_1 \|\beta'_{{\mathbf S}_1^c}\|_1 + 0.5 \lambda_{\Gamma} \|\beta''_{{\mathbf S}_{\Gamma}^c}\|_{{\Gamma},1} \right] .
\]
We can thus define $G$ according to (\ref{eq:structG-int}) as
\[
G=\{e_S(W)+\eta u : u \in e_{S^c}({\mathscr B})\} ,
\]
so that $G \subset \partial R({\bar{\beta}})$. For simplicity, we assume that $\beta^*$ satisfies (\ref{eq:target}) with $\tilde{a}=0$, which can be achieved with the construction in (\ref{eq:target-opt}). With these choices, we obtain from (\ref{eq:recovery-global-dc-quadratic-struct}) the following oracle inequality (under an appropriate restricted eigenvalue condition with parameter $\gamma$):
\begin{align*}
& \| X ({\hat{\beta}}-\beta^*)\|_2^2+ (1-\eta) \min_{{\hat{\beta}}_{{\mathbf S}_1^c}=\beta'_{{\mathbf S}_1^c}+\beta''_{{\mathbf S}_{\Gamma}^c}} \left[\lambda_1 \|\beta'_{{\mathbf S}_1^c}\|_1 + 0.5 \lambda_{\Gamma} \|\beta''_{{\mathbf S}_{\Gamma}^c}\|_{{\Gamma},1} \right]\\
\leq &\| X ({\bar{\beta}}-\beta^*)\|_2^2 + \gamma^{-1} O(\|e_S(W)\|_2^2) \\
\leq& \| X ({\bar{\beta}}-\beta^*)\|_2^2 + \gamma^{-1} O \left( \lambda_1^2 |{\mathrm{supp}}({\bar{\beta}})\setminus {\mathbf S}_{\Gamma} | + \lambda_{\Gamma}^2 |S_{\Gamma}| \right) .
\end{align*}
The last inequality follows from
\[
\|e_S(W)\|_2^2 \leq \sum_{j \in S_{\Gamma}} \lambda_{\Gamma}^2 \|(\nabla \|{\bar{\beta}}''\|_{{\Gamma},1})_{{\Gamma}_j}\|_2^2 + \sum_{j \notin S_{\Gamma}} \lambda_1^2 \|(\nabla \|{\bar{\beta}}'\|_{1})_{{\Gamma}_j}\|_2^2 ,
\]
which is a consequence of $e_S(W)=\lambda_1 \nabla \|{\bar{\beta}}'\|_1 =\lambda_{\Gamma} \nabla \|{\bar{\beta}}''\|_{{\Gamma},1}$. Similar to the standard Lasso and group Lasso cases, for mixed norm regularization we may still choose the Lasso regularization parameter $\lambda_1=c_1 \sigma \sqrt{n \ln(p)}$ and the group Lasso regularization parameter $\lambda_{\Gamma}= c_2 \sigma \sqrt{n (m+\ln (q))}$ so that (\ref{eq:target}) holds ($c_1,c_2>0$ are constants). Plugging in these values, we obtain the following oracle inequality with this choice of parameters:
\begin{align*}
& \| X ({\hat{\beta}}-\beta^*)\|_2^2+ (1-\eta) \min_{{\hat{\beta}}_{{\mathbf S}_1^c}=\beta'_{{\mathbf S}_1^c}+\beta''_{{\mathbf S}_{\Gamma}^c}} \left[\lambda_1 \|\beta'_{{\mathbf S}_1^c}\|_1 + 0.5 \lambda_{\Gamma} \|\beta''_{{\mathbf S}_{\Gamma}^c}\|_{{\Gamma},1} \right]\\
\leq & \| X ({\bar{\beta}}-\beta^*)\|_2^2 + \gamma^{-1} n \sigma^2 \cdot O \left( \ln p \cdot |{\mathrm{supp}}({\bar{\beta}})\setminus {\mathbf S}_{\Gamma}| + |S_{\Gamma}|(m + \ln q) \right) .
\end{align*}
Since the definition of $S_{\Gamma}$ is such that $j \in S_{\Gamma}$ when $m+\ln (q) \leq 4 (c_1/c_2)^2 |{\mathrm{supp}}({\bar{\beta}}_{{\Gamma}_j})| \ln(p)$, the right hand side achieves the optimal decomposition error bound in (\ref{eq:opt-decomp}). This means that the mixed norm regularizer (\ref{eq:reg-mixed}) achieves an optimal adaptive decomposition of standard and group sparsity.
\subsection{Generalized linear models}
\label{sec:genlin-example}
Results for generalized linear models can be easily obtained under the general framework of this paper, as discussed after Corollary~\ref{cor:dual_certificate-oracle}. This section presents a more detailed treatment. In generalized linear models, we may write the negative log likelihood as
\begin{equation}
L(\beta) = \sum_{i=1}^n\ell_i(\innerprod{x_i}{\beta}) ,
\label{eq:gen-lin-model}
\end{equation}
where $x_i\in{\bar\Omega}^*$ and $\ell_i$ may depend on a response variable $y_i$. Suppose the $\ell_i(t)$ are convex and twice differentiable. Let
\begin{eqnarray*}
{\kappa} = \max_{i\le n}\sup_{s<t}\big|\log(\ell_i''(t))-\log(\ell_i''(s))\big|\big/|t-s|
\end{eqnarray*}
be the maximum Lipschitz norm of $\log(\ell_i''(t))$. We note that $\kappa=1$ for logistic regression with $\ell_i(t) = \ln(1+e^{-t})$, $\kappa=1$ for Poisson/log-linear regression with $\ell_i(t) = e^t - y_i t$, and $\kappa = 0$ for linear regression. For sparse ${\bar{\beta}}$, ${\cal C}\subset\Omega$, norm $\|\cdot\|$, and $j=1,2$, define
\begin{eqnarray*}
\gamma_j({\bar{\beta}};r,{\cal C},\|\cdot\|) = \inf\Big\{\sum_{i=1}^n\frac{\ell_i''(\innerprod{x_i}{{\bar{\beta}}})}{2e} \min\Big(\frac{\innerprod{x_i}{\beta-{\bar{\beta}}}^2}{\|\beta-{\bar{\beta}}\|^2}, \frac{|\innerprod{x_i}{\beta-{\bar{\beta}}}|^{2-j}}{r^j\|\beta-{\bar{\beta}}\|^{2-j}}\Big): \beta\in{\cal C}\Big\}.
\end{eqnarray*}
The following lemma can be used to bound $D_L(\beta,{\bar{\beta}})$ and $D_L({\bar{\beta}},\beta)$ from below.
\begin{lemma}\label{lm:GLM} Given ${\bar{\beta}}$, ${\cal C}\subset\Omega$ and a norm $\|\cdot\|$, let $\beta\in{\bar\Omega}$ be such that the ray from ${\bar{\beta}}$ through $\beta$ intersects ${\cal C}$, i.e. $\{t(\beta-{\bar{\beta}})+{\bar{\beta}}: t>0\}\cap{\cal C}\neq\emptyset$. If $0<\|\beta-{\bar{\beta}}\|\le r$,
\begin{eqnarray*}
\frac{D_L(\beta,{\bar{\beta}})}{\|\beta-{\bar{\beta}}\|^2} \ge \gamma_1({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|),\quad \frac{D_L({\bar{\beta}},\beta)}{\|\beta-{\bar{\beta}}\|^2} \ge \gamma_2({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|).
\end{eqnarray*}
\end{lemma}
\begin{proof}
Let ${\tilde{\beta}} = t_0(\beta - {\bar{\beta}}) + {\bar{\beta}} \in{\cal C}$ for some $t_0>0$. Since $\gamma_j({\bar{\beta}};r,{\cal C},\|\cdot\|)$ is decreasing in $r$, it suffices to consider $0< \|\beta-{\bar{\beta}}\| =r$. Since ${\kappa}$ is the Lipschitz norm of $\log(\ell_i''(t))$,
\begin{eqnarray*}
D_L(\beta,{\bar{\beta}})/r^2 &=& \int_0^1(1-t) \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}}+t\innerprod{x_i}{\beta-{\bar{\beta}}}) \innerprod{x_i}{\beta-{\bar{\beta}}}^2 dt/r^2 \cr &\ge& \int_0^1(1-t) \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}}) e^{-t{\kappa}|\innerprod{x_i}{\beta-{\bar{\beta}}}|}\innerprod{x_i}{\beta-{\bar{\beta}}}^2 dt/r^2 \cr &\ge& \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}})\innerprod{x_i}{\beta-{\bar{\beta}}}^2 \int_0^1(1-t)I\{t{\kappa}|\innerprod{x_i}{\beta-{\bar{\beta}}}|\le 1\}dt/(er^2).
\end{eqnarray*}
Since $\int_0^{x\wedge 1}(1-t)dt = x\wedge 1 - (x\wedge 1)^2/2\ge (x\wedge 1)/2$, we find
\begin{eqnarray*}
D_L(\beta,{\bar{\beta}})/r^2 &\ge& \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}})\innerprod{x_i}{\beta-{\bar{\beta}}}^2 \min\Big(1, \frac{1}{\kappa |\innerprod{x_i}{\beta-{\bar{\beta}}}|}\Big)\frac{1}{2er^2} \cr &\ge& \sum_{i=1}^n\frac{\ell_i''(\innerprod{x_i}{{\bar{\beta}}})}{2e} \min\Big(\frac{\innerprod{x_i}{\beta-{\bar{\beta}}}^2}{\|\beta-{\bar{\beta}}\|^2}, \frac{|\innerprod{x_i}{\beta-{\bar{\beta}}}|}{\kappa r\|\beta-{\bar{\beta}}\|}\Big).
\end{eqnarray*}
Since $|\innerprod{x_i}{\beta-{\bar{\beta}}}|/\|\beta-{\bar{\beta}}\|=|\innerprod{x_i}{{\tilde{\beta}}-{\bar{\beta}}}|/\|{\tilde{\beta}}-{\bar{\beta}}\|$ and ${\tilde{\beta}}\in {\cal C}$, $D_L(\beta,{\bar{\beta}})/r^2\ge \gamma_1({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|)$. The proof for $D_L({\bar{\beta}},\beta)$ is similar. We have
\begin{eqnarray*}
D_L({\bar{\beta}},\beta)/r^2 &=& \int_0^1 \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}}+t \innerprod{x_i}{\beta-{\bar{\beta}}})\innerprod{x_i}{\beta-{\bar{\beta}}}^2 tdt /r^2 \cr &\ge & \sum_{i=1}^n\ell_i''(\innerprod{x_i}{{\bar{\beta}}})\innerprod{x_i}{\beta-{\bar{\beta}}}^2 \int_0^1I\{t{\kappa} |\innerprod{x_i}{\beta-{\bar{\beta}}}| \le 1\} tdt/(er^2) \cr &=& (2e)^{-1}\sum_{i=1}^n \ell_i''(\innerprod{x_i}{{\bar{\beta}}}) \min\Big(\frac{\innerprod{x_i}{\beta-{\bar{\beta}}}^2}{\|\beta-{\bar{\beta}}\|^2}, \frac{1}{{\kappa}^2r^2}\Big).
\end{eqnarray*}
This gives $D_L({\bar{\beta}},\beta)/r^2\ge \gamma_2({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|)$.
\end{proof}
Suppose $\gamma_j({\bar{\beta}};r_0,{\cal C},\|\cdot\|)\ge \gamma_0$ for $j=1,2$. Lemma \ref{lm:GLM} asserts that for the $\beta$ considered, both $D_L(\beta,{\bar{\beta}})$ and $D_L({\bar{\beta}},\beta)$ are no smaller than $\gamma_0\|\beta-{\bar{\beta}}\|^2$ for ${\kappa}\|\beta-{\bar{\beta}}\|\le r_0$. For larger $r=\|\beta-{\bar{\beta}}\|$,
\begin{eqnarray*}
&& D_L(\beta,{\bar{\beta}})\ge r^2\gamma_1({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|) \ge \|\beta-{\bar{\beta}}\| (r_0/{\kappa})\gamma_1({\bar{\beta}}; r_0,{\cal C},\|\cdot\|),\qquad \cr && D_L({\bar{\beta}},\beta)\ge r^2 \gamma_2({\bar{\beta}}; \kappa r,{\cal C},\|\cdot\|) \ge (r_0/{\kappa})^2\gamma_2({\bar{\beta}}; r_0,{\cal C},\|\cdot\|).
\end{eqnarray*}
Since $D_L(\beta,{\bar{\beta}})$ is convex in $\beta$ while $D_L({\bar{\beta}},\beta)$ is not, such lower bounds are of the best possible type for large $\|\beta-{\bar{\beta}}\|$ when $\ell_i''(t)$ is small for large $t$, as in the case of logistic regression. Given ${\bar{\beta}}$, setting ${\cal C}_G=\{\beta: \sup_{u\in G} \innerprod{u+\nabla L({\bar{\beta}})}{\beta} \le 0\}$ yields the lower bound
\begin{eqnarray*}
\gamma_L({\bar{\beta}};r,G,\|\cdot\|) \ge \gamma_1({\bar{\beta}};\kappa r,{\cal C}_G,\|\cdot\|)
\end{eqnarray*}
for the RSC constant in Definition \ref{def:RSC}. The lower bound $D_L({\bar{\beta}},\beta)\ge (r_0/{\kappa})^2\gamma_2({\bar{\beta}}; r_0,{\cal C},\|\cdot\|)$ can be used to check the condition $D_L({\bar{\beta}},\beta)\ge D_{\bar{L}}(\beta,{\bar{\beta}})$ in Corollaries \ref{cor:dual_certificate-oracle} and \ref{cor:recovery-global-dc-oracle}. We measure the noise level by
\begin{eqnarray*}
\eta(\beta^*) = \sup\big\{|\innerprod{\nabla L(\beta^*)}{\beta}|/R(\beta): \beta\neq 0,\beta\in\Omega\big\}.
\end{eqnarray*}
Let ${\bar{\beta}}$ be a sparse vector and $G\subseteq\partial R({\bar{\beta}})$.
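As a brief aside, the constant $\kappa$ entering Lemma~\ref{lm:GLM} can be checked numerically; for the logistic loss, $\log \ell''(t) = t - 2\ln(1+e^t)$ has derivative $1-2/(1+e^{-t})$, whose absolute value approaches but never exceeds 1. A minimal sketch of this check (our illustration only):
\begin{verbatim}
import numpy as np

def log_ddloss_logistic(t):
    # log l''(t) for l(t) = ln(1+exp(-t)); l''(t) = e^t/(1+e^t)^2
    return t - 2.0 * np.logaddexp(0.0, t)   # numerically stable form

t = np.linspace(-30.0, 30.0, 200001)
slopes = np.abs(np.diff(log_ddloss_logistic(t)) / np.diff(t))
print(slopes.max())   # approaches 1 from below: kappa = 1 for logistic loss
\end{verbatim}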
Given $\{{\bar{\beta}},\beta^*,\|\cdot\|\}$, we measure the penalty level by
\begin{eqnarray*}
\lambda({\bar{\beta}},\beta^*;\|\cdot\|) = \sup\big\{ \innerprod{\nabla L(\beta^*)+u}{{\bar{\beta}}-\beta}/\|\beta-{\bar{\beta}}\|: u\in\partial R(\beta),\beta\in\Omega\big\}.
\end{eqnarray*}
Since for all $u \in \partial R(\beta)$ and $\bar{u} \in \partial R({\bar{\beta}})$, we have $\innerprod{u}{{\bar{\beta}}-\beta} \leq \innerprod{\bar{u}}{{\bar{\beta}}-\beta}$, it follows that
\[
\lambda({\bar{\beta}},\beta^*;\|\cdot\|) \leq \inf_{\bar{u} \in \partial R({\bar{\beta}})} \|\nabla L(\beta^*)+\bar{u}\|_D ,
\]
where $\|\cdot\|_D$ is the dual norm of $\|\cdot\|$. This connects the quantity $\lambda(\cdot)$ to $\inf_{u \in G} \|u+\nabla L({\bar{\beta}})\|_D$ used in Theorem~\ref{thm:dual_certificate-error}. Similarly, we may define
\begin{eqnarray*}
{\cal C}_{{\bar{\beta}},\beta^*} = \Big\{\beta: \sup_{u\in\partial R(\beta)}\innerprod{u+\nabla L(\beta^*)}{{\bar{\beta}}-\beta} > 0\Big\} .
\end{eqnarray*}
Note that we have ${\cal C}_{{\bar{\beta}},\beta^*} \subset \Big\{\beta: \inf_{\bar{u}\in\partial R({\bar{\beta}})}\innerprod{\bar{u}+\nabla L(\beta^*)}{{\bar{\beta}}-\beta} > 0\Big\}$, and this relationship connects the quantity $\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)$ in Theorem~\ref{thm:GLM} to the quantity $\gamma_L({\bar{\beta}};r,G,\|\cdot\|)$ in Definition~\ref{def:RSC}. The following result for generalized linear models is related to Theorem~\ref{thm:dual_certificate-error}, but is more specific to the loss function (\ref{eq:gen-lin-model}) and more detailed.
\begin{theorem}\label{thm:GLM} Suppose $\eta(\beta^*)<1$. Let ${\bar{\beta}}$ be a sparse vector such that
\begin{eqnarray}\label{thm:GLM-1}
\sup_{\beta\in{\cal C}_{{\bar{\beta}},\beta^*}}\|\beta-{\bar{\beta}}\| \le \frac{\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)} {{\kappa}^2\lambda({\bar{\beta}},\beta^*;\|\cdot\|)} + \frac{\lambda({\bar{\beta}},\beta^*;\|\cdot\|)} {4\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)}.
\end{eqnarray}
Then,
\begin{eqnarray*}
D_L({\hat{\beta}},\beta^*) \le D_L({\bar{\beta}},\beta^*) + \frac{\lambda^2({\bar{\beta}},\beta^*;\|\cdot\|)} {4\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)}.
\end{eqnarray*}
\end{theorem}
\begin{proof}
Let ${\tilde{\beta}} = {\bar{\beta}}+t({\hat{\beta}}-{\bar{\beta}})$. Define
\begin{eqnarray*}
f(t) = D_L({\tilde{\beta}},\beta^*) - D_L({\bar{\beta}},\beta^*) = L({\tilde{\beta}})-L({\bar{\beta}}) + t\innerprod{- \nabla L(\beta^*)}{{\hat{\beta}}-{\bar{\beta}}}.
\end{eqnarray*}
The function $f(t)$ is convex with $f(0)=0$ and $f'(t) = \innerprod{\nabla L({\tilde{\beta}}) - \nabla L(\beta^*)}{{\hat{\beta}}-{\bar{\beta}}}$. If $f'(1)\le 0$, then $D_L({\hat{\beta}},\beta^*) - D_L({\bar{\beta}},\beta^*) = f(1)\le f(0)=0$ and the conclusion holds. Assume $f'(1)>0$ in the sequel. Let $u=-\nabla L({\hat{\beta}})$. By (\ref{eq:hbeta}), $u\in\partial R({\hat{\beta}})$. Since $f'(1)=\innerprod{u+ \nabla L(\beta^*)}{{\bar{\beta}}-{\hat{\beta}}}>0$, ${\hat{\beta}}\in {\cal C}_{{\bar{\beta}},\beta^*}$. It follows that $f'(1)\le \lambda({\bar{\beta}},\beta^*;\|\cdot\|)\|{\hat{\beta}}-{\bar{\beta}}\|$. By Lemma \ref{lm:GLM},
\begin{eqnarray*}
f(t)-f'(t)t= - D_L({\bar{\beta}},{\tilde{\beta}}) \le - \|{\tilde{\beta}}-{\bar{\beta}}\|^2\gamma_2({\bar{\beta}};\kappa\|{\tilde{\beta}}-{\bar{\beta}}\|,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|).
\end{eqnarray*}
Consider two cases.
If $\kappa \|{\hat{\beta}}-{\bar{\beta}}\|\le 1$, we set $t=1$ to obtain
\begin{eqnarray*}
f(1) &\le& f'(1) - \|{\hat{\beta}}-{\bar{\beta}}\|^2\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|) \cr &\le& \lambda({\bar{\beta}},\beta^*;\|\cdot\|)\|{\hat{\beta}}-{\bar{\beta}}\| - \|{\hat{\beta}}-{\bar{\beta}}\|^2\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|).
\end{eqnarray*}
Taking the maximum of $x\lambda({\bar{\beta}},\beta^*;\|\cdot\|) - x^2\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)$ over $x\ge 0$, we find
\begin{eqnarray*}
D_L({\hat{\beta}},\beta^*) - D_L({\bar{\beta}},\beta^*) = f(1) \le \frac{\lambda^2({\bar{\beta}},\beta^*;\|\cdot\|)} {4\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)}.
\end{eqnarray*}
For $\kappa \|{\hat{\beta}}-{\bar{\beta}}\| > 1$, we set $t<1$ so that $\kappa \|{\tilde{\beta}}-{\bar{\beta}}\| = 1$:
\begin{eqnarray*}
f(1) \le f'(1) + f(t)-tf'(t) \le \lambda({\bar{\beta}},\beta^*;\|\cdot\|)\|{\hat{\beta}}-{\bar{\beta}}\| - {\kappa}^{-2}\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|).
\end{eqnarray*}
This gives $f(1)\le \lambda^2({\bar{\beta}},\beta^*;\|\cdot\|)/ \{4\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)\}$ when
\begin{eqnarray*}
\|{\hat{\beta}}-{\bar{\beta}}\| \le \frac{\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)} {{\kappa}^2\lambda({\bar{\beta}},\beta^*;\|\cdot\|)} + \frac{\lambda({\bar{\beta}},\beta^*;\|\cdot\|)} {4\gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)}.
\end{eqnarray*}
The proof is complete in view of the assumed condition on ${\bar{\beta}}$, since ${\hat{\beta}}\in{\cal C}_{{\bar{\beta}},\beta^*}$.
\end{proof}
Condition (\ref{thm:GLM-1}) holds if $\sup_{\beta\in\Omega}\|\beta\|\le A$ and $2A \le \gamma_2({\bar{\beta}};1,{\cal C}_{{\bar{\beta}},\beta^*},\|\cdot\|)/ \{{\kappa}^2\lambda({\bar{\beta}},\beta^*;\|\cdot\|)\}$. This is a weaker condition than the one discussed after Corollary~\ref{cor:dual_certificate-oracle}, because the quantity $\lambda({\bar{\beta}},\beta^*;\|\cdot\|) \leq \inf_{\bar{u} \in \partial R({\bar{\beta}})} \|\nabla L(\beta^*)+\bar{u}\|_D $ is generally very small, which means that we allow a very large $A$. Under this relatively weak condition, Theorem~\ref{thm:GLM} gives an oracle inequality for generalized linear models that can be easily applied to common formulations such as logistic regression and Poisson regression.
\bibliographystyle{abbrv}
\section{Introduction} Cold Dark Matter (CDM) makes up 23\% of the energy of the Universe, as deduced from the WMAP measurements of the temperature anisotropies in the Cosmic Microwave Background, in combination with data on the Hubble expansion and the density fluctuations in the universe \citep{wmap}. The nature of the CDM is unknown, but one of the most popular explanations for it is the neutralino, a stable neutral particle predicted by Supersymmetry \citep{lspdm,jungman}. The neutralinos, usually denoted by $\chi$, are spin 1/2 Majorana particles, which can annihilate into pairs of Standard Model particles, but other candidates are possible as well. The only assumptions needed for this analysis are that DM particles were produced in thermal equilibrium with all other particles in the early Universe and that their number density $n_\chi$ decreased from the high value in the early universe to the present low number density by annihilation, as happened with the number density of baryons as well \citep{kolb}. The relic density of CDM is inversely proportional to $\langle\sigma v\rangle$, the averaged annihilation cross section $\sigma$ of DM particles multiplied by their relative velocity $v$. This inverse proportionality is obvious if one considers that a higher annihilation rate, given by $\langle\sigma v\rangle n_\chi$, would have reduced the relic density before freeze-out, i.e. the time when the expansion rate of the Universe, given by the Hubble constant, became equal to or larger than the annihilation rate. The relation can be written as \begin{equation} \Omega_\chi h^2=\frac{m_\chi n_\chi}{\rho_c}\approx \frac{2\cdot 10^{-27}~{\rm cm^3 s^{-1}}}{\langle\sigma v\rangle}\label{wmap}. \end{equation} The numerator in the last part of this equation was calculated with CalcHEP \citep{calchep} and found to be 30\% smaller than the value calculated in \citet{jungman}. For the present value of $\Omega h^2=0.113 \pm 0.009$, as measured by WMAP \citep{wmap}, the thermally averaged total cross section at the freeze-out temperature of $m_\chi/22$ must have been around $2\cdot 10^{-26} ~{\rm cm^3s^{-1}}$. This is a cross section typical for weak interactions and explains why the DM, unlike the baryons, does not cluster strongly in the centers of galaxies: the cross sections are simply too small for large energy losses when falling towards the center. Therefore the DM particles are generically called WIMPs, Weakly Interacting Massive Particles. All possible enhancements of the annihilation rate from the clustering of DM (usually called the boost factor) will be calculated with respect to this generic cross section, which basically only depends on the value of the Hubble constant. Note that $\langle\sigma v\rangle$, as calculated from Eq.~\ref{wmap}, is independent of the WIMP mass (except for logarithmic corrections), as can be shown by a detailed calculation \citep{kolb}. The stable decay and fragmentation products from Dark Matter Annihilation (DMA) are neutrinos, photons, protons, antiprotons, electrons and positrons. Of these, the protons and electrons disappear in the sea of many matter particles in the universe, but the photons and antimatter particles may be detectable above the background generated by particle interactions. Such searches for indirect Dark Matter detection have been actively pursued; see e.g. the reviews by \citet{bergstrom} and \citet{sumner} or more recently by \citet{Bertone:2004pz}. References to earlier work can be found in these reviews.
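As a quick numerical check of Eq.~(\ref{wmap}) with the WMAP value quoted above (our illustration only):
\begin{verbatim}
# <sigma v> implied by Eq. (1) for the measured relic density
omega_h2 = 0.113                    # WMAP value quoted in the text
sigma_v = 2e-27 / omega_h2          # cm^3 s^-1
print("<sigma v> = %.2e cm^3/s" % sigma_v)   # ~1.8e-26, i.e. about 2e-26
\end{verbatim}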
Gamma rays have the advantage that they point back to the source and do not suffer energy losses, so they are the ideal candidates to trace the dark matter density. The charged components interact with Galactic matter and are deflected by the Galactic magnetic field, so they do not point back to the source. A detailed distribution of gamma ray fluxes for energies between 0.03 and 10 GeV was obtained by the Energetic Gamma Ray Emission Telescope EGRET, one of the four instruments on the Compton Gamma Ray Observatory CGRO, which collected data during nine years, from 1991 to 2000. The diffuse component shows a clear excess of about a factor of two over the expected background from known nuclear interactions, inverse Compton scattering and bremsstrahlung. The excess was observed first by \citet{hunter}, while the most complete calculation of the background is encoded in the GALPROP program \citep{galprop1,galprop2}. This excess was shown to possess all the key features of dark matter annihilation (DMA), as discussed at various conferences and workshops \citep{deboer1,deboer2,deboer3,deboer4}. By fitting only the {\it shapes} of the background and DMA signal, the analysis becomes independent of the absolute normalization of the background. Therefore uncertainties in the background fluxes from e.g. the gas densities and cosmic ray fluxes are eliminated. Apart from fitting the shapes of the DMA signal and background, the present analysis also differs from previous ones \citep[see e.g.][]{previous1,previous2,previous3,previous4, previous5,previous6,previous7, previous8,previous9,previous10,previous11,previous12,previous13,previous14,previous15,previous16} in that the fluxes {\it and} the energy spectrum in {\it all} sky directions were considered simultaneously. This requires a complete reanalysis of the publicly available EGRET data, since the diffuse gamma ray data have been published only in a limited number of sky directions. Considering the excess in all sky directions with a sufficiently large resolution makes it possible to reconstruct the DM halo profile, which in turn can be used - in combination with the distribution of visible matter - to reconstruct the shape of the rotation curve. The absolute normalization of the DM density distribution or halo profile can be obtained by requiring that the local rotation velocity of our solar system at 8.3 kpc is 220 km/s. It will be shown that the rotation curve, as obtained from the EGRET excess of gamma rays, provides the first explanation for the peculiar change of slope in the rotation curve at around 11 kpc, indicating that the excess traces DM. The paper is organized as follows: Section \ref{fit} explains the fitting method, Section \ref{analysis} describes the analysis of the EGRET data, Section \ref{halo} describes the determination of the DM halo profile and the comparison with the Galactic rotation curve, Section \ref{objections} discusses possible objections to the DMA interpretation of the EGRET excess and Section \ref{summary} summarizes the results. \section{Fitting Method for Indirect Dark Matter Annihilation}\label{fit} As mentioned in the introduction, neutral particles play a very special role for indirect DM searches, since they point back to the source. Therefore the gamma rays provide a perfect means to reconstruct the halo profile of the DM by observing the intensity of the gamma ray emissions in the various sky directions.
Of course there are different sources of diffuse gamma rays in the Galaxy, so disentangling the annihilation signal is at first glance not easy. However, the spectral {\it shapes} of the diffuse gamma ray backgrounds and the DMA signal are well known from accelerator experiments, and it is precisely the shape which was well measured by the EGRET telescope. Furthermore, the shapes of the background and DMA signal are sufficiently different to disentangle their contributions to the data by leaving the absolute normalizations for background and DMA signal as free parameters in the fit. We discuss first the galactic backgrounds, then the DMA signal and finally the extragalactic background. The galactic backgrounds are: decays of $\pi^0$ mesons produced in nuclear interactions, contributions from inverse Compton scattering of electrons on photons and Bremsstrahlung from electrons in the Coulomb field of nuclei. The best estimate of the relative contributions is given by the GALPROP code \citep{galprop,galprop1,galprop2}, which parametrizes the gas densities, cross sections and energy spectra for all processes of interest and solves numerically the diffusion equation to obtain a complete solution for the density map of all primary and secondary nuclei. For this the so-called ``conventional'' GALPROP model was used, which assumes the spectra of protons and electrons, as measured locally in the solar system, to be representative for the whole Galaxy. For protons, which have negligible energy losses, this is a reasonable assumption; for electrons, which have larger energy losses from ionization and Bremsstrahlung, this assumption may be questioned. Therefore we have restricted the analysis to gamma ray energies above 0.07 GeV, in which case the $\pi^0$ component starts to become dominant: electron-induced gamma ray production is of the same order of magnitude as the nuclei-induced gamma ray production at this energy, but at 0.5 GeV the electron-induced component is already below 10\% for the inner Galaxy and below 25\% for other regions. Therefore one is not too sensitive to electron-induced gamma rays in the region of the EGRET excess, which is maximal at energies around 2-4 GeV. The ``conventional'' model is to be contrasted with the ``optimized'' model \citep{optimized}, in which the electron and proton spectra are ``optimized'' to explain the EGRET excess without DM by allowing them to deviate from the locally observed spectra. Even the freedom for both the proton and electron spectra does not lead to particularly good fits of the background to the EGRET data, if all sky directions are considered, as will be discussed in the next section. As mentioned before, only the shape of the background is needed for the fit, not the absolute normalization. Leaving the normalization free in the fit for a given sky direction is important, simply because one does not know the cosmic ray and gas densities to better than about 20\% for a given sky direction. On the other hand, one knows for given cosmic ray spectra the shape of the expected gamma rays quite well: for electron-induced gamma production these processes can be easily calculated, while for the nuclei-induced processes the gamma ray spectra are known from the scattering of beams of nuclei on fixed targets. In the Galaxy the ``beams'' are the cosmic rays, while the gas in the disk is the fixed target. The wealth of data on hadronic interactions has resulted in the so-called string fragmentation model, encoded e.g.
in the PYTHIA program from \citet{pythia}, which describes gamma ray production from nuclear interactions well. The small contribution from the electron-induced gamma rays was added to the dominant contribution from $\pi^0$ decays and the shape of the total background thus obtained was used for a given sky direction. The relative fraction of electron-induced and nuclei-induced gamma rays varies with sky direction and this fraction was taken from GALPROP, but a fit with a constant fraction yielded similar results for the DM profile. Given that we do not attempt to determine the absolute normalization, the analysis is not sensitive to the many GALPROP propagation parameters determining absolute density profiles of the Galactic components. Furthermore the propagation of the gamma rays is straightforward. The fitted normalization factor of the background in each direction was found to agree within errors with the values determined from the GALPROP code, as will be discussed later. \begin{figure} \begin{center} \includegraphics [width=0.5\textwidth,clip]{Plots/fig1.eps} \caption[]{ The expected shape of gamma ray spectra for the fragmentation of WIMPs into different final states (arbitrary normalization): heavy quark pairs (b,c), heavy gauge bosons (W,Z) and $\tau$ leptons. The annihilation of neutralinos, the preferred WIMP candidates, into light final states is helicity suppressed \citep{jungman}. \label{gamma_spectra}} \end{center} \end{figure} WIMPs are expected to annihilate into fermion-antifermion or gauge boson pairs, so a large fraction will result in quark-antiquark pairs, which produce typically 30-40 photons per annihilation in the fragmentation process (mainly from $\pi^0$ decays). However, the photons from DMA are expected to have a spectrum significantly different from the ones from nuclear interactions. This can be understood qualitatively as follows: the WIMPs are strongly non-relativistic, so they annihilate almost at rest. Therefore DM annihilates into almost mono-energetic pairs of particles with an energy approximately equal to the WIMP mass. This results in a rather energetic gamma ray spectrum with a sharp cut-off at the WIMP mass. Such gamma ray spectra from the fragmentation of mono-energetic quark pairs have been measured precisely at electron-positron colliders and the data are well described by the string fragmentation model mentioned above. The expected gamma ray spectra from this program are shown in Fig. \ref{gamma_spectra} for a WIMP mass of 100 GeV and several annihilation channels. The difference between the various channels is small, except for the $\tau$ final state, which has only a small $\pi^0$ multiplicity. The corresponding hard gamma ray spectrum from $\tau$ decays does not fit the data and therefore excludes a large annihilation fraction into $\tau$-pairs. In addition to the Galactic background (GB) one expects a contribution from the extragalactic background (EGBG). The origin of these gamma rays can be other galaxies, which may yield contributions similar to those of our Galaxy, or quite different sources like Active Galactic Nuclei (AGN), quasars or blazars. Since each extragalactic object has individual properties, it is difficult to predict the shape or the absolute value of this background component. Experimentally the EGBG can be obtained by subtracting from the EGRET data the Galactic contribution using the extrapolation method pioneered by \citet{sreekumar}.
Of course, the Galactic contribution includes a contribution from Galactic DM, which is dependent on the EGBG, so the EGBG can only be obtained in an iterative procedure, as was done by \citet{sander}. This contribution is taken to be of the same shape and same magnitude in all sky directions. It starts to become important towards the galactic poles, where both the galactic background and the DMA become small. \begin{table} \begin{center} \begin{tabular}{cccc} \hline Region & Longitude $l$ & Latitude $|b|$ & Description\\\hline A & 330-30 & 0-5 & Inner Galaxy\\ B & 30-330 & 0-5 & Disk without inner Galaxy\\ C & 90-270 & 0-10 & Outer Galaxy\\ D & 0-360 & 10-20 & Low longitude \\ E & 0-360 & 20-60 & High longitude\\ F & 0-360 & 60-90 & Galactic Poles\\ \hline \end{tabular} \end{center} \caption[skyregions]{The six sky regions mentioned in the text; the detailed halo profile was obtained from 180 independent sky directions, as described in the Appendix.} \label{t1} \end{table} \begin{figure} \begin{center} \includegraphics[width=0.32\textwidth]{Plots/fig2a.eps} \includegraphics[width=0.32\textwidth]{Plots/fig2b.eps} \includegraphics[width=0.32\textwidth]{Plots/fig2c.eps} \includegraphics[width=0.32\textwidth]{Plots/fig2d.eps} \includegraphics[width=0.32\textwidth]{Plots/fig2e.eps} \includegraphics[width=0.32\textwidth]{Plots/fig2f.eps} \caption[Spectrum of Conventional Model Including Dark Matter Annihilation]{Fit of the shapes of background and DMA signal to the EGRET data in the Galactic disk (top row, regions A, B, C from Table~\ref{t1}) and outside the Galactic disk (bottom row, regions D, E, F). The light shaded (yellow) areas indicate the background using the shape of the conventional GALPROP model, while the dark shaded (red) areas are the signal contribution from DMA for a 60 GeV WIMP mass. The individual shapes of background and DMA have been indicated by dashed lines, while the extragalactic background is given by the hatched area. The $\chi^2$ of the background is determined with an independent background-only fit, which yields a probability of practically zero, as can be estimated from the indicated $\chi^2$ values for the statistically independent regions. The fit including DM has a total probability around 0.8.} \label{excess} \end{center} \end{figure} \section{Analysis of EGRET data}\label{analysis} The EGRET data is publicly available as high resolution (0.5$\times$0.5 degree) sky maps from the NASA archive\footnote{NASA archive: http://cossc.gsfc.nasa.gov/archive/.}, which allows an analysis in many independent sky directions after convolution with the point-spread function, which is a function of energy and becomes important for gamma rays below 1 GeV. The data set with the known point sources subtracted has been used. The point sources are defined by a 5$\sigma$ enhancement above the diffuse background. In general these point sources are only a small fraction of the total gamma ray flux, so the analysis is not sensitive to the subtraction of point sources. There are only a few nearby strong sources, which dominate the flux in these directions, and the subtraction causes an additional systematic uncertainty. Therefore these directions have been excluded in the determination of the halo profile, which requires a fine scanning of all sky directions, as will be discussed in the next section. The contributions of the subtracted point sources have been indicated in the spectra for 180 independent sky directions, shown in the Appendix.
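Schematically, the fitting procedure of Section~\ref{fit} reduces, in each sky direction, to a two-template problem: the measured spectrum is modeled as $A\cdot{\rm GB}(E)+B\cdot{\rm DMA}(E)$ plus the fixed EGBG, with only the normalizations $A$ and $B$ free. A minimal sketch with nonnegative least squares (the template arrays are placeholders, not the actual GALPROP or fragmentation shapes):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_direction(flux, sigma, bg_shape, dma_shape, egbg):
    # fit flux(E) ~ A*bg_shape(E) + B*dma_shape(E) + egbg(E) with A,B >= 0,
    # weighting each energy bin by its error sigma; the EGBG is kept fixed
    y = (flux - egbg) / sigma
    M = np.column_stack([bg_shape / sigma, dma_shape / sigma])
    (A, B), _ = nnls(M, y)
    resid = flux - egbg - A * bg_shape - B * dma_shape
    return A, B, float(np.sum((resid / sigma) ** 2))   # chi^2
\end{verbatim}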
The EGRET telescope was carefully calibrated at SLAC with a quasi mono-energetic photon beam in the energy range of 0.02 to 10 GeV \citep{egret_cal}. The efficiency and calibration during flight were also carefully monitored \citep{egret_cal1}. Using Monte Carlo simulations the energy range was recently extended up to higher energies with a correspondingly larger uncertainty, mainly from the self-vetoing of the detector by the back-scattering from the electromagnetic calorimeter into the veto counters for highly energetic showers \citep{optimized,egret1}. Due to this uncertainty only data below 10 GeV were used in the fits discussed below. In total 8 energy ranges were used: 0.07-0.10 GeV; 0.10-0.15 GeV; 0.15-0.30 GeV; 0.30-0.50 GeV; 0.5-1.0 GeV; 1.0-2.0 GeV; 2.0-4.0 GeV; 4.0-10.0 GeV. The data points have been plotted at the geometric mean of the low and high endpoints of the bin, i.e. at $\sqrt{E_{low}E_{high}}$. With the 9 years of data taking with the EGRET telescope, 180 independent sky directions can be studied without statistical problems. However, the data is limited by systematic uncertainties, which have to be taken into account carefully. The overall normalization error is usually quoted as 15\%, but the relative point-to-point error is much smaller. The latter can be determined by fitting the energy spectrum with a polynomial: if a certain energy bin is left out of the fit, its variance with respect to the other energy points is found to be between 3 and 7\%. Therefore, if one fits only the shape of the spectrum with a free normalization parameter, only these relative errors between the energy points are the relevant ones, which were taken to be 7\%. Fitting the known shapes of the three contributions (GB, EGBG, DMA) to the EGRET data, as discussed before in section \ref{fit}, yielded astonishingly good fits, as shown in Fig. \ref{excess} for the 6 different sky directions given in Table \ref{t1}. For energies between 0.07 and 0.5 GeV the flux is dominated by the background, while above these energies a clear contribution from Dark Matter annihilation is needed. The excess in different sky directions can be explained by a single WIMP mass around 60 GeV and a single boost factor of about 100. The free normalization of the background comes out to be in reasonable agreement with the absolute predictions from the GALPROP propagation model of our Galaxy \citep{galprop,galprop1,galprop2}, as shown in Fig. \ref{bgscaling} for a fine binning of the skymaps. Thus the fitting method yields the correct normalization for the BG, while the high energy excess determines, for a given background shape, the normalization of the DMA. The excess in all sky directions has a similar shape, as demonstrated in Fig. \ref{diff} for the first five sky regions of Table \ref{t1}. The quality of the EGRET data can be appreciated from Fig. \ref{diff}, where only the statistical errors are plotted. They are only visible at large latitudes. The right hand side of Fig. \ref{diff} shows the plots for WIMP masses of 50 and 70 GeV; the 70 GeV WIMP mass clearly fits worse. WIMP masses below 50 GeV lead to a too low relic density, since in that case one hits the $Z^0$-resonance, and for WIMP masses below 40 GeV, i.e. on the other side of the resonance, the fit to the EGRET data becomes worse. Therefore a lower limit of 50 GeV is taken and the 95\% C.L. upper limit depends somewhat on the background model: 70 GeV for the shape of the conventional model and more like 100 GeV for the shape of the optimized model.
Therefore the WIMP mass is estimated to be between 50 and 100 GeV. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{Plots/fig3.eps} \caption[Sky Map of Background Scaling]{The ratio of the fitted background normalization and the absolute prediction of the conventional GALPROP model \citep{optimized} as function of latitude and longitude. For this plot a fine binning was used ($90\times 45$); for the whole sky the scaling is around 1, i.e. the background determination with our method is in good agreement with GALPROP, except for the disk region with latitudes below 50, where a systematic deviation of 20-30\% is seen. Since the density in the disk is known to be asymmetric, but GALPROP uses a symmetric parametrization, this discrepancy is understandable.} \label{bgscaling} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics [width=0.45\textwidth,clip]{Plots/fig4a.eps} \includegraphics [width=0.45\textwidth,clip]{Plots/fig4b.eps} \caption[]{ Left: The difference of the observed EGRET flux and fitted background for the various regions of Table \ref{t1}, i.e. the red area in Fig. \ref{excess}. One observes the same spectral shape for all regions, indicating a common source for the excess. Only the statistical errors have been plotted and the curves are fitted spline functions to guide the eye. Right: The influence of the WIMP mass on the spectrum: the light shaded (blue) curve shows the influence of a WIMP mass variation between 50 and 70 GeV. The lower (upper) solid line at the highest energies corresponds to the total flux for a 50 (70) GeV WIMP mass. \label{diff}} \end{center} \end{figure} Alternative explanations for the excess have been plentiful. Among them: locally soft electron and proton spectra, implying that in other regions of the Galaxy the spectra are harder, thus producing harder gamma ray spectra. A summary of these discussions has been given by \citet{optimized}, who find that hard proton spectra are incompatible with the antiproton yield and hard electron spectra are difficult to reconcile with the EGRET data up to 120 GeV. However, they find that by modifying the electron and proton injection spectra {\it simultaneously}, the description of the data can be improved by increasing the fluxes at high energies. The energy dependence at high energies is kept with the same slope as the locally measured spectra, which is required at least for protons, since the energy loss time of protons above a few GeV is of the order of the lifetime of the universe. Therefore it is hard to have strong inhomogeneities in the proton spectra at high energies. At low energies the fluxes are reduced by solar modulations, so here the spectral shapes of protons and electrons have large uncertainties and the shape can be optimized to fit the EGRET data. \begin{figure} \begin{center} \includegraphics[width=0.32\textwidth]{Plots/fig5a.eps} \includegraphics[width=0.32\textwidth]{Plots/fig5b.eps} \includegraphics[width=0.32\textwidth]{Plots/fig5c.eps} \includegraphics[width=0.32\textwidth]{Plots/fig5d.eps} \includegraphics[width=0.32\textwidth]{Plots/fig5e.eps} \includegraphics[width=0.32\textwidth]{Plots/fig5f.eps} \caption[Spectrum of Optimized Model]{Fit results of the shape of the optimized model to the EGRET data; regions and coding as for Fig. \ref{excess}, but without DMA contribution. 
The description of the excess above 1 GeV can be improved by adding a DM contribution, in which case the normalization of the background (solid line) becomes lower again and fits the low energy data points.} \label{excess1} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.32\textwidth]{Plots/fig6a.eps} \includegraphics[width=0.32\textwidth]{Plots/fig6b.eps} \includegraphics[width=0.32\textwidth]{Plots/fig6c.eps} \includegraphics[width=0.32\textwidth]{Plots/fig6d.eps} \includegraphics[width=0.32\textwidth]{Plots/fig6e.eps} \includegraphics[width=0.32\textwidth]{Plots/fig6f.eps} \caption[Spectrum of Optimized Model]{Fit results of the shape of the optimized model plus DMA to the EGRET data; regions and coding as for Fig. \ref{excess}. The fit is now as good as with the conventional background shape in Fig. \ref{excess}, but the boost factor is roughly a factor of three lower.} \label{excess2} \end{center} \end{figure} The problem with this ``optimized solution'' is however that the shape of the gamma spectra is improved but still not reproduced well, as shown in Fig. \ref{excess1} for the regions of Table \ref{t1}. But it is exactly the shape that was well measured by EGRET, because the relative errors between neighbouring energies are roughly half of the normalization error of 15\%. The probability, as calculated from the total $\chi^2/d.o.f.=110/42$ indicated for each region in Fig. \ref{excess1}, is below $10^{-7}$. The fact that the shape is not well fitted in the optimized model can also be seen from Fig. 9 in the original publication \citep{optimized}, which shows the longitudinal profile for various energy bins: above 1 GeV the prediction of the model is clearly too low, especially if one takes into account that the statistical errors in the Galactic plane are negligible, so the plotted errors of 15\% are correlated. This discrepancy above 1 GeV is observed in all directions, and this energy range is exactly where DMA contributes. Adding DM to the optimized model improves the fit probability from below $10^{-7}$ to 0.8, as shown in Fig. \ref{excess2}. Of course, the DM contribution is smaller than in the case of the conventional background in Fig. \ref{excess}, which results in a reduction of the boost factor by roughly a factor of three. Similar results are obtained for the shape proposed by \citet{kamae}. As with the optimized model, the absolute prediction of \citet{kamae} overshoots the low energy data and undershoots the high energy data, so if only the shape is fitted with a free normalization factor, the excess is clearly present. Their proposed contribution of diffractive pp-scattering reduces the excess by only 10-20\%, and if in addition the proposed harder proton spectrum is used (spectral index -2.5 instead of the locally measured -2.7), the excess can be reduced by 30\%. In all cases the proposed background shape from \citet{kamae} fits the data in the different regions considerably worse than the optimized model from \citet{optimized}, mainly because \citet{kamae} try to improve the fit by changing the proton spectrum only, while in the optimized model both the electron spectrum {\it and} the proton spectrum are modified.
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Plots/fig7a.eps} \includegraphics[width=0.4\textwidth]{Plots/fig7b.eps} \includegraphics[width=0.4\textwidth]{Plots/fig7c.eps} \includegraphics[width=0.4\textwidth]{Plots/fig7d.eps} \caption[Fits for Pseudo-Isothermal Profile]{The longitude distribution of diffuse Galactic gamma rays with energies below 0.5 GeV for different latitudes. The points represent the EGRET data. The contributions from the background and the almost negligible DMA for energies below 0.5 GeV have been indicated by the light (yellow) and dark (red) shaded areas, respectively. The free normalization of the background for each bin provides a perfect description of the low energy data.} \label{long_low} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Plots/fig8a.eps} \includegraphics[width=0.4\textwidth]{Plots/fig8b.eps} \includegraphics[width=0.4\textwidth]{Plots/fig8c.eps} \includegraphics[width=0.4\textwidth]{Plots/fig8d.eps} \caption[Fits for Pseudo-Isothermal Profile]{As in Fig. \ref{long_low}, but for EGRET data above 0.5 GeV, with DMA for an isothermal DM profile without ringlike substructures.} \label{long_high} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Plots/fig9.eps} \caption[Schematic Picture of a DM Ring]{Schematic picture of a DM ring with an elliptical shape and constant DM density $\rho_n$ along the ring. The definitions are also valid for the triaxial halo component, if the coordinate system is rotated by the appropriate angle $\phi_{n}$ towards the Galactic center and the short axis $c$ is modified by the ellipticity in the $z$-direction: $c=a/\varepsilon_z$.} \label{haloschema} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{Plots/fig10a.eps} \includegraphics[width=0.4\textwidth]{Plots/fig10b.eps} \includegraphics[width=0.4\textwidth]{Plots/fig10c.eps} \includegraphics[width=0.4\textwidth]{Plots/fig10d.eps} \caption[Fits for Pseudo-Isothermal Profile]{As in Fig. \ref{long_high}, but including DMA for an isothermal DM profile with ringlike substructures at 4 and 14 kpc.} \label{long_rings} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics [width=0.4\textwidth,clip]{Plots/fig11a.eps} \includegraphics [width=0.4\textwidth,clip]{Plots/fig11b.eps} \caption[]{The latitude distribution of diffuse gamma rays for longitudes $-30^\circ<l<30^\circ$ and two energy bins: $E_\gamma<0.5$ GeV (left) and $E_\gamma>0.5$ GeV (right). The points represent the EGRET data. The contributions from the background and the neutralino annihilation signal have been indicated by the light (yellow) and dark (red) shaded areas, respectively. The fitted normalization factor of the background comes out in reasonable agreement with the GALPROP prediction, while the fitted normalization of the DM contribution corresponds to a WIMP mass around 60 GeV with a boost factor of about 100.} \label{lat} \end{center} \end{figure} \section{Determination of the halo profile}\label{halo} From the excess in the various sky directions one can obtain the halo profile. However, this requires a finer sampling of the sky directions than discussed above.
Therefore the fits to the 6 regions were repeated for the following regions: the longitude distributions are split into bins of 8$^\circ$ for four different latitude ranges (absolute values of latitude: 0-5$^\circ$, 5-10$^\circ$, 10-20$^\circ$, 20-90$^\circ$), so one has $4\times45=180$ independent sky regions. The fit results for each of these regions are displayed in the Appendix, together with the fluxes from the point sources, which have been subtracted from the data. In most regions the point sources have a negligible contribution, except for some disk regions with strong pulsars (Crab at $l\approx -175^\circ$, Geminga at $l\approx -165^\circ$, Vela at $l\approx -98^\circ$ and Cygnus at $l\approx +80^\circ$). These regions have been left out of the fit for the halo parameters. The differential gamma flux in a direction forming an angle $\psi$ with the direction of the Galactic center is given by: \begin{equation} \phi_\chi(E,\psi)=\frac{\langle \sigma v\rangle}{4\pi} \sum_f \frac{dN_f}{dE} b_f \int_{\rm line~of~sight}B_l \frac{1}{2}\frac{\langle \rho_\chi\rangle^2}{M_\chi^2} dl_\psi \label{gammafluxcont} \end{equation} where $b_f$ is the branching ratio into the tree-level annihilation final state $f$, while $dN_f/d E$ is the differential photon yield for the final state $f$. The spectrum $dN_f/d E$ is similar for all hadronic final states, as discussed before in section \ref{fit}. The cross section $\langle\sigma v\rangle$ is taken from Eq. \ref{wmap} with the WMAP value for $\Omega h^2$ \citep{wmap}. One observes from this equation that for a given excess and a known WIMP mass (around 60 GeV from the spectrum of the excess, as discussed above), the only unknown quantity in the direction $\psi$ is the square of the averaged WIMP mass density $\rho_\chi$ multiplied by the boost factor, which represents the enhancement of the annihilation rate by the clustering of DM. The hierarchical formation of galaxies from small clumps leads to a spectrum of clump masses starting perhaps as low as $10^{-6}$ solar masses \citep{dokuchaev,moore}. The DMA is higher in the clumps than outside, since the local density is higher there and the DMA rate is proportional to the local density squared, i.e. $\langle\rho_{DM}^2\rangle$. This can be considerably larger than $\langle\rho_{DM}\rangle^2$, and the ratio is the enhancement or boost factor, i.e. $B=\langle\rho_{DM}^2\rangle/\langle\rho_{DM}\rangle^2$. The rotation curve is only sensitive to the total mass, i.e. $\langle\rho_{DM}\rangle$. The boost factor can be obtained from the fitted normalization and can be large, from a few tens to a few thousands, depending on the unknown details of the DM clustering. In general, the boost factor towards the Galactic center may be lower than in other directions because of the likely tidal disruption of small DM clusters by fly-bys of stars. In this case the flux is proportional to $B(r)\langle\rho(r)^n\rangle$, where $n$ can vary between 1 and 2 depending on the DM clustering: $n=2$ if there is no clustering and $n=1$ if DMA occurs predominantly in non-overlapping clusters. Consequently one has many alternatives to fit, which are outside the scope of the present paper. Therefore we concentrate on a boost factor independent of $r$ and $n=2$, which turns out to yield a good fit. If one assumes that the clustering is similar in all directions, i.e. the same boost factor in all directions, the DM density profile $\rho_\chi (r)$ can be determined from the excess in the various directions.
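For illustration, the line-of-sight integral of Eq. \ref{gammafluxcont} can be sketched numerically as follows (in Python); the pseudo-isothermal profile and all numbers are placeholders chosen only to show the geometry, not the fitted values:
\begin{verbatim}
import numpy as np

R_SUN = 8.3          # kpc, distance Sun - Galactic center
RHO0, A = 0.5, 5.0   # GeV/cm^3 and kpc, placeholder parameters

def rho(r):
    """Pseudo-isothermal profile, (alpha, beta, gamma) = (2, 2, 0),
    normalized to RHO0 at the solar radius."""
    return RHO0 * (1.0 + (R_SUN / A)**2) / (1.0 + (r / A)**2)

def los_integral(psi, l_max=100.0, n=2000):
    """Integrate rho^2 along the line of sight at angle psi [rad]
    with respect to the Galactic center direction."""
    l = np.linspace(0.0, l_max, n)   # kpc along the line of sight
    r = np.sqrt(R_SUN**2 + l**2 - 2.0 * R_SUN * l * np.cos(psi))
    return np.sum(rho(r)**2) * (l[1] - l[0])

# the integral towards the center exceeds the anticenter one
print(los_integral(0.0), los_integral(np.pi))
\end{verbatim}
The angular dependence of such integrals is what allows the excess in the various sky directions to constrain the profile.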
The most transparent way to do this is the following: search for a functional form of the profile as a function of the distance from the Galactic center and see which form yields the best fit. A survey of the optical rotation curves of 400 galaxies shows that the halo distributions of most of them can be fitted either with the Navarro-Frenk-White (NFW) or the pseudo-isothermal profile or both \citep{Jimenez:2002vy}. These halo profiles can be parametrized as follows: \begin{equation} \rho (r)=\rho_0\cdot \left(\frac{r}{a}\right)^{-\gamma}\left[1+ \left(\frac{r}{a}\right)^\alpha\right] ^{\frac{\gamma-\beta}{\alpha}}, \label{profile0} \end{equation} where $a$ is a scale radius and the slopes $\alpha$, $\beta$ and $\gamma$ can be roughly thought of as the radial dependence at $r\approx a$, $r\gg a$ and $r\ll a$, respectively. The cuspy NFW profile \citep{nfw} is defined by $(\alpha, \beta, \gamma)=(1,3,1)$ for a scale $a=10$ kpc, while the Moore profile with $\gamma=1.2$ is even more cuspy \citep{moore1}. The isothermal profile with $(\alpha, \beta, \gamma)=(2,2,0)$ has no cusp ($\gamma=0$), but a core which is taken to be the size of the inner Galaxy, i.e. $a=5$ kpc; $\beta=2$ implies a flat rotation curve. The EGRET excess towards the Galactic center does not show a cusp, but is perfectly flat near the center, as expected for a cored profile, so a cored isothermal profile was fitted to the excess in 180 sky directions. The fit results are shown in Fig. \ref{long_low} for gamma energies below 0.5 GeV and in Fig. \ref{long_high} for gamma energies above 0.5 GeV as a function of longitude for various latitudes, i.e. one determines the flux towards the Earth by looking around in a full circle either in the Galactic plane or at various angles above and below the disk. Clearly, the data are well described in all directions for data below 0.5 GeV with hardly any contribution from DMA, while for data above 0.5 GeV only the data towards the Galactic poles are reasonably well described by the background plus DM component (dark (red) contribution). Towards the Galactic center and Galactic anticenter the isothermal profile does not provide enough DM, as is obvious in the upper panels of Fig. \ref{long_high}. There are several reasons why this might be so. For example, the infall of a satellite galaxy into the gravitational potential of a larger galaxy can lead to toroidal ringlike DM overdensities \citep{hayashi}. When ringlike structures were added with the radii and the widths of the rings in and out of the plane as free parameters, the fit quickly converged to only two toroidal ringlike structures, namely at radii of 4 and 14 kpc. The enhanced gamma radiation at 14 kpc was already observed in the original discovery paper of the excess \citep{hunter} and called ``cosmic enhancement factor''. Note that the radius of the ring can be determined from the longitude profile in the plane of the Galaxy, i.e. at low latitudes, because the solar system is not at the center: if the density is constant along the ring, different segments of the ring yield different fluxes, which depend on the radius, orientation and ellipticity of the ring. The extent of the ring above the plane can be obtained from the longitude distribution at higher latitudes. It should be noted that the assumption of a constant density along the ring is not necessarily true.
However, the fit is not very sensitive to the ring density on the far side of the Galactic center, so only the ring segment nearest to the solar system is assumed to have a constant density; this assumption yields a reasonable fit. Although the overdensities from the infall of a satellite galaxy do not produce rings, but at most ringlike segments during each passage near the pericenter, the precession of the orbit can lead to various segments along the pericenter. Since the analysis prefers a non-zero density over an angular range close to 360 degrees, we continue to speak of ``rings'' of DM, although this does not at all imply perfect circularity. With the rings the total DM halo profile can be parametrized as: \begin{equation} \rho_\chi (\vec r) = \rho_0\cdot \left(\frac{\tilde r}{r_0}\right)^{-\gamma}\left[\frac{1+ \left(\frac{\tilde r}{a}\right)^\alpha}{1+\left(\frac{r_0}{a}\right)^\alpha}\right]^{\frac{\gamma-\beta} {\alpha}} + \sum_{n=1}^2 \rho_n \exp\left(-\frac{\left(\tilde{r}_{gc,n}-{R_n}\right)^2} {2\cdot\sigma_{R,n}^2}-\left\vert\frac{z}{\sigma_{z,n}}\right\vert \right)\label{profile1} \end{equation} with \begin{equation} \tilde{r} = \sqrt{x^2+\frac{y^2}{\varepsilon_{xy}^2}+\frac{z^2}{\varepsilon_z^2}}, \hspace*{1cm} \tilde{r}_{gc,n} = \sqrt{x_{(n)}^2+\frac{y_{(n)}^2}{\varepsilon_{xy,n}^2}}; \end{equation} $\varepsilon_{xy}$ and $\varepsilon_z$ ($\varepsilon_{xy,n}$) are the ellipticities of the triaxial halo profile and of the rings, respectively. The first term of Eq. \ref{profile1} has been modified with respect to Eq. \ref{profile0} in order that $\rho_0$ represents the DM density in the solar system, i.e. $\rho=\rho_0$ at $r=r_0$. Other degrees of freedom are the angles of the halo, $\phi_{gc}$, and of the rings, $\phi_n$, with respect to the Earth--Galactic-center axis, i.e. each component has its own coordinate system, which is rotated around the $z$-axis. The maximum WIMP density of a ring, $\rho_{n}$, is reached in the Galactic plane ($z = 0$) at a distance $\tilde{r}_{gc,n}=R_n$ from the Galactic center. Figure \ref{haloschema} shows a schematic picture of a ring with the different definitions. The radial width of the outer ring is taken to be different for the inner and outer side, as can happen for the infall of a satellite galaxy, which is disrupted most strongly near the pericenter, while the matter is distributed towards larger distances. Therefore the shape was modified to fall off to zero at radii smaller than the pericenter within a distance $d_{r}$, using two quadratic functions: $ \rho(r) = a\cdot (r-(R_n-d_r))^2~ \mbox{\rm for }~ (R_n-d_r)<r<(R_n-d_r/2)$ and $ \rho(r) = \rho_n-a\cdot (r-R_n)^2 ~ \mbox{\rm for } ~(R_n-d_r/2)<r<R_n.$ The parameters of the halo model and the boost factor are varied to minimize the following $\chi^2$ function: \begin{equation} \chi^2=\sum\limits_{i,j}\left({f^{i,j}\cdot\phi_{\mbox{\scriptsize{bg}}}^{i,j}+ B\cdot\phi_{\mbox{\scriptsize{dm}}}^{i,j}+\phi_{\mbox{\scriptsize{eg}}}- \phi_{\mbox{\scriptsize{exp}}}^{i,j}\over\sigma^{i,j}}\right)^2, \end{equation} where $i$ and $j$ denote the different bins in longitude and latitude and $f^{i,j}$ and $B$ are the normalization factors of the background and the DMA. Note that the boost factor $B$ and the extragalactic flux $\phi_{eg}$ were taken to be the same in all directions, i.e. independent of $i$ and $j$. The scaling factor $\rho_0$ of the ``spherical'' component (the isothermal profile) is not a free parameter, since it is scaled to get the rotation velocity at $r_0$ correct. This means that if the fit requires a more massive inner ring, the density of the spherical component is adjusted automatically.
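Schematically, the structure of this fit can be sketched as follows (Python); the flux arrays are random placeholders with the dimensions used in the analysis (180 sky directions times 8 energy bins), and only the roles of the free parameters matter here:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_dir, n_e = 180, 8
phi_bg = rng.uniform(0.5, 1.5, (n_dir, n_e))   # background shape
phi_dm = rng.uniform(0.0, 0.3, (n_dir, n_e))   # DMA shape
phi_exp = phi_bg + 0.1 * phi_dm \
          + rng.normal(0.0, 0.05, (n_dir, n_e))
sigma = 0.07 * phi_exp                 # 7% point-to-point errors

def chi2(params):
    f = params[:n_dir].reshape(-1, 1)  # one normalization per direction
    B, phi_eg = params[n_dir:]         # common boost factor and EGBG
    model = f * phi_bg + B * phi_dm + phi_eg
    return np.sum(((model - phi_exp) / sigma)**2)

x0 = np.concatenate([np.ones(n_dir), [0.1, 0.0]])
print(chi2(x0))   # to be minimized over f, B and the halo parameters
\end{verbatim}
In the real fit, $\phi_{dm}^{i,j}$ is recomputed from the halo parameters of Eq. \ref{profile1} at every step.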
Note that in total one fits 180 independent sky directions, each with 8 data points above 0.07 GeV, so one has a total of more than 1400 data points, which is enough to determine the halo parameters. In addition, the parameters of the different contributions are determined by completely independent data: the outer (inner) ring is determined by the flux {\it in} the plane of the disk away from (towards) the Galactic center, while the isothermal profile is mainly determined by the fluxes {\it outside} the Galactic disk. As a test of the convergence and angular resolution, a fit with 360 sky directions was performed as well, which yielded practically identical results. All sky directions are now well described by the basic isothermal profile plus the substructure in the form of two toroidal rings, as shown in Fig. \ref{long_rings} with the contributions of the two rings indicated separately. They clearly dominate at low latitudes, but are small for latitudes above 10 degrees. The latitude distributions are also well described, as shown in Fig. \ref{lat} for the direction towards the Galactic center. The fit results for the parameters are summarized in Table \ref{t2}. The errors in the parameters are mainly systematic, e.g. stemming from the assumption that the boost factor is the same in all directions. Determining these systematic errors is outside the scope of the present paper, but the qualitative picture of two ringlike substructures is independent of such details. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline parameter & halo without rings & halo with rings \\ \hline $\rho_0$ [GeV cm$^{-3}$] & 0.57 & 0.5 \\ $r_0$ [kpc] & 8.3 & 8.3 \\ $\alpha$ & 2 & 2 \\ $\beta$ & 2 & 2 \\ $\gamma$ & 0 & 0 \\ $a$ [kpc] & 5 & 5 \\ $\varepsilon_{xy}$ & 0.7 & 0.8 \\ $\varepsilon_{z}$ & 0.6 & 0.75 \\ $\phi_{gc}$ [$^\circ$] & 0 & 90 \\ $M_{200}$ [M$_\odot$] & $2.8\cdot 10^{12}$ & $3.4\cdot 10^{12}$ \\ $R_{200}$ [kpc] & 290 & 310 \\ \hline $\rho_1$ [GeV cm$^{-3}$] & - & 4.5 \\ $R_1$ [kpc] & - & 4.15 \\ $\sigma_{r,1}$ [kpc] & - & 4.15 \\ $\sigma_{z,1}$ [kpc] & - & 0.17 \\ $\varepsilon_{xy,1}$ & - & 0.8 \\ $\phi_1$ [$^\circ$] & - & -70 \\ $M_1$ [M$_\odot$] & - & $9.3\cdot 10^{9}$ \\ \hline $\rho_2$ [GeV cm$^{-3}$] & - & 1.85\\ $R_2$ [kpc] & - & 12.9 \\ $\sigma_{r,2}$ [kpc] & - & 3.3 \\ $d_{r}$ [kpc] & - & 4 \\ $\sigma_{z,2}$ [kpc] & - & 1.7 \\ $\varepsilon_{xy,2}$ & - & 0.95 \\ $\phi_2$ [$^\circ$] & - & -20 \\ $M_2$ [M$_\odot$] & - & $9\cdot 10^{10}$ \\ \hline \hline $\chi^2$/d.o.f. & 1206/157 & 144.2/157 \\ probability & 0 & $0.74$ \\ \hline \end{tabular} \end{center} \caption[Fit Results for Different Halo Models]{Fit results with and without the ringlike substructures. The parameters of the triaxial halo and of the two rings (subscripts 1 and 2) are given. The parameters $r_0, \alpha ,\beta ,\gamma , a$ are fixed by the definition of the pseudo-isothermal profile and $M_{200} , R_{200}, M_1, M_2$ are derived values, so the remaining 14 values have been optimized.} \label{t2} \end{table} The boost factor for the profile with rings is around 100, if one assumes the DM annihilation cross section at the WIMP decoupling temperature $m_\chi/22\approx 3$ GeV in the early universe to be still valid for the low kinetic energies in the present universe.
The annihilation cross section is independent of the center-of-mass energy if the annihilation proceeds via the exchange of spin-less particles, like Higgs bosons. But the annihilation cross section depends strongly on the momenta if the DMA proceeds via the exchange of a particle with spin, like the $Z$-boson. The resonant $Z$-exchange becomes dominant if the WIMP mass is close to half the $Z$-boson mass (around 45 GeV); then the annihilation cross section at the temperatures of the present universe is much smaller, thus requiring boost factors of $10^3$ or more for WIMP masses below 50 GeV. Unfortunately, such large boost factors are not necessarily excluded, so one cannot determine a lower limit on the WIMP mass from the boost factors alone. \subsection{Ring structure}\label{ring} \begin{figure} \begin{center} \includegraphics [width=0.4\textwidth,clip]{Plots/fig12a.eps} \includegraphics [width=0.4\textwidth,clip]{Plots/fig12b.eps} \includegraphics [width=0.4\textwidth,clip]{Plots/fig12c.eps} \includegraphics [width=0.4\textwidth,clip]{Plots/fig12d.eps} \caption[]{3D representations of the isothermal halo profile in the Galactic plane (xy-projection, top row) and perpendicular to the disk (xz-plane, bottom row), without (left) and with (right) toroidal ringlike structures at 4 and 14 kpc.} \label{profile} \end{center} \end{figure} \begin{figure}[] \begin{center} \includegraphics [width=0.7\textwidth,clip]{Plots/fig13.eps} \caption[]{Visualization of the DM (top) and baryonic matter (bottom) distribution of the Milky Way in an edge-on (left) and top (right) view of the disk. The blue dots mark the modified isothermal profile; the red ones mark the outer and the green ones the inner ring; cyan marks the exponential disk and pink the stellar bulge; the large yellow circle marks the position of the Sun; the density of points of each component is proportional to its mass fraction.} \label{scatter} \end{center} \end{figure} Fig. \ref{profile} displays the halo distribution in the disk (xy-plane) and perpendicular to the plane (xz-plane) in a 3D plot, while Fig. \ref{scatter} shows it in the projections, in comparison with the distribution of baryonic matter. The contributions from the inner and outer rings at radii of 4.2 and 14 kpc, respectively, can be clearly seen. The maximum densities of the inner (outer) ring are about a factor of 6 (7) higher than the isothermal profile at these maxima. The maximum density of the outer ring is at a radius of 14 kpc with a width of about 3.3 kpc in radius and 1.7 kpc perpendicular to the disk. These coordinates coincide with the ring of stars observed in the plane of the Galaxy at a distance of 14-18 kpc from the Galactic center \citep{newberg,ibata,yanny,rocha-pinto1}. These stars show a much smaller velocity dispersion (10-30 km/s) and a larger z-distribution than the thick disk, so the ring cannot be considered an extension of the disk. A viable alternative is the infall of a satellite galaxy \citep{yanny,helmi,rocha-pinto,penarrubia,penarrubia1}, for which one expects, in addition to the visible stars, a DM component. From the size of the ring and its peak density one can estimate the amount of DM in the outer ring to be around $9\cdot 10^{10}$ solar masses. Since the gamma-ray excess is best fitted by a ring covering the full 360$^\circ$, one can extrapolate the mass in the observed $100^\circ$ of visible stars to a total visible mass of $\approx 10^8-10^9$ solar masses \citep{yanny,ibata}, so the baryonic matter in the outer ring is only a small fraction of its total mass.
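As a rough cross-check of this number, the ring term of Eq. \ref{profile1} can be integrated analytically if one assumes a circular torus with a Gaussian radial and an exponential vertical profile (a sketch in Python; the unit conversion constant is standard, the simplifications are ours):
\begin{verbatim}
import numpy as np

# M ~ rho_n * (2*pi*R_n) * (sqrt(2*pi)*sigma_R) * (2*sigma_z),
# i.e. peak density times circumference times the profile integrals.
GEV_CM3_KPC3_IN_MSUN = 2.63e7       # 1 GeV/cm^3 * kpc^3 in solar masses

rho_n, R_n = 1.85, 12.9             # GeV/cm^3, kpc (outer ring, Table 2)
sigma_R, sigma_z = 3.3, 1.7         # kpc

M = rho_n * 2*np.pi*R_n * np.sqrt(2*np.pi)*sigma_R * 2*sigma_z
print(M * GEV_CM3_KPC3_IN_MSUN)     # ~1e11, consistent with ~9e10 above
\end{verbatim}
The small difference with respect to the fitted value comes from the ellipticity and the modified inner fall-off, both neglected here.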
The inner ring at 4.2 kpc, with a width of 4.2 kpc in radius and 0.2 kpc in $z$, is more difficult to interpret, since the density of the inner region is modified by adiabatic compression \citep{adiabatic,adiabatic1,adiabatic2} and by interactions between the bar and the halo \citep{weinberg,athanassoula}. However, it is interesting to note that its radius coincides with the ring of cold dense molecular hydrogen gas, which reaches a maximum density at 4.5 kpc and has a width around 2 kpc \citep{gordon,hunter}. At the same radius a toroidal structure of dust has been observed, which provides shelter against dissociating UV radiation and allows atomic hydrogen to coalesce at the dust surfaces into molecular hydrogen. Therefore a ring of molecular hydrogen suggests a gravitational potential well, in agreement with the EGRET excess in this region. \subsection{Comparison with rotation curve}\label{rotation} \begin{figure} \begin{center} \includegraphics [width=0.7\textwidth,clip]{Plots/fig14.eps} \caption[]{The rotation curve of the Galaxy for the DM halo profile of Fig. \ref{profile}. The data are from \citet{honma,brand,brand1,brand2,schneider}. The contributions from the individual mass components have been indicated. Note the negative contribution of the massive ring of DM at 14 kpc, which exerts an outward and hence negative force on a tracer inside that ring.} \label{rot} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.45\textwidth]{Plots/fig15a.eps} \includegraphics[width=0.45\textwidth]{Plots/fig15b.eps} \caption[Rotation Curves for Pseudo-Isothermal Profiles]{Rotation curve for the pseudo-isothermal profile without (left) and with (right) rings in the Galactic plane; the different data sets on rotation velocities are the same as in Fig. \ref{rot}, but for clarity a weighted average has been taken. One can see that the pseudo-isothermal profile without rings does not provide a good fit to the data, while the inclusion of the toroidal substructures provides a perfect fit. The rotation curves (solid black lines) are calculated in two perpendicular directions: along the long and medium axes of the triaxial halo, which both lie in the plane of the disk.} \label{rot1} \end{center} \end{figure} It is interesting to note that the present analysis is able to trace the mass distribution of DM in our Galaxy, since the mass is given by the WIMP number density $n_\chi$ times the WIMP mass $m_\chi$. The former is obtained from the gamma-ray flux, the latter from the gamma-ray spectrum. The relative contributions of the rings and the pseudo-isothermal profile are obtained from the intensity of the EGRET excess, and the absolute normalization of the total mass is obtained by requiring that the local rotation velocity of our solar system at 8.3 kpc is 220 km/s. For the WIMP mass of 60 GeV from the spectrum and the halo parameters from Table \ref{t2} we can immediately calculate the mass in the rings and the mass in the pseudo-isothermal profile. For the mass in the outer (inner) ring one finds a value around $9\cdot10^{10}$ ($9\cdot10^9$) solar masses, which is only a small fraction of the total mass of $3\cdot 10^{12}$ solar masses inside a radius $R_{200}$ of 310 kpc. The latter radius encloses the volume with an average overdensity of 200 times the critical density of the universe. However, the mass in the outer ring is about 50\% of the mass of the Galaxy inside its radius.
Therefore one expects a significant influence on the rotation curve, which should then show a minimum below and a maximum above this radius, since the rotation speed squared is proportional to the derivative of the potential. Note that the absolute value of the masses in the rings is not sensitive to the background model, since the absolute mass scale is set by the normalization to the rotation curve; a different background model, like the shape from the optimized model, will change the boost factor, but not the absolute masses. A minimum in the rotation curve inside the outer ring radius and a maximum outside this radius is indeed observed, as shown in Fig. \ref{rot}. The data were taken from \citet{honma} using HI gas, from \citet{brand, brand1,brand2} using HII gas and from \citet{schneider}, who use CO clouds. The contributions from each of the mass terms are shown separately. The baryonic matter distribution was taken from \citet{olling}. The basic explanation for the negative contribution from the outer ring is that a tracer star on the inside of the ring at 14 kpc feels an outward force from the ring, and thus a negative contribution to the rotation velocity. If one just takes the contributions from the visible matter and the isothermal profile without rings, the data cannot be described, as shown on the left hand side of Fig. \ref{rot1}. Here the data points were obtained from the ones in Fig. \ref{rot} by taking a weighted average. With the rings a perfect description is obtained, as shown on the right hand side of Fig. \ref{rot1}. Here two rotation curves were calculated: one along the long axis of the triaxial halo profile, which was found to be in the Galactic plane, and one perpendicular to this axis in the plane. Since the ellipticity is small ($\varepsilon_{xy}\approx\varepsilon_z\approx 0.8$, see Table \ref{t2}), the difference is small. Usually the rotation curves for inhomogeneous mass distributions are calculated by solving the Poisson equation, which yields the gravitational potential at a given position, $\Phi(r,\theta,\phi)$. The rotation velocity for a circular orbit at a radius $r$ can then be calculated by requiring that the resulting gravitational force on a tracer star equals the centrifugal force, i.e. \begin{equation} v^2/r=F_G/m\equiv d\Phi(r)/dr. \label{rc} \end{equation} However, the contribution from ringlike structures is not easily obtained from a standard Poisson solver, since these are usually optimized for spherical symmetries; they did not converge for the ringlike structures in our case. Therefore the derivative of the potential, instead of the potential itself, was calculated numerically, which is what is needed for the rotation curve and avoids an additional numerical differentiation. To be more precise, the following was done. The gravitational potential $\Phi$ of the Poisson equation can be written in spherical coordinates ($x=r\cos\phi \sin\theta,\ y=r\sin\phi \sin\theta,\ z=r\cos\theta $) as: \begin{equation} \Phi(r,\theta,\phi)=-G\int_0^\infty r'^2dr'\int_{-1}^{1}d\cos\theta' \int_0^{2\pi}d\phi'\frac{\rho(r',\theta',\phi')} {\sqrt{r^2+r'^2-2rr'\sin\theta\sin\theta'\cos(\phi-\phi') -2rr'\cos\theta\cos\theta'}} \label{phi1} \end{equation} or in the plane in the direction $\phi=0, \theta=\pi/2$: \begin{equation} \Phi(r,\pi/2,0)=-G\int_0^\infty r'^2dr'\int_{-1}^{1}d\cos\theta' \int_0^{2\pi}d\phi'\frac{\rho(r',\theta',\phi')}{\sqrt{r^2+r'^2 -2rr'\sin\theta'\cos(\phi-\phi')}} \label{phi2} \end{equation} Note that $\rho$ includes all masses and $G$ denotes the gravitational constant.
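Such a threefold integral can be evaluated numerically; the following rough sketch (Python, with $G=1$, a placeholder spherical density and a crude midpoint grid) uses the integrand for the derivative of the potential, anticipating the explicit expression of Eq. \ref{v2} below; a small softening parameter tames the $1/d^3$ singularity on a coarse grid:
\begin{verbatim}
import numpy as np

def rho(rp, ct, php):
    return 1.0 / (1.0 + rp**2)          # placeholder cored profile

def v2_over_r(r, r_max=50.0, n=60, soft=0.5):
    rp = np.linspace(1e-3, r_max, n)    # r'
    ct = np.linspace(-1.0, 1.0, n)      # cos(theta')
    php = np.linspace(0.0, 2*np.pi, n)  # phi'
    RP, CT, PHP = np.meshgrid(rp, ct, php, indexing="ij")
    ST = np.sqrt(1.0 - CT**2)
    d2 = r**2 + RP**2 - 2*r*RP*ST*np.cos(PHP) + soft**2
    num = RP**2 * rho(RP, CT, PHP) * (r - RP*ST*np.cos(PHP))
    dv = (rp[1]-rp[0]) * (ct[1]-ct[0]) * (php[1]-php[0])
    return np.sum(num / d2**1.5) * dv   # G = 1

r = 10.0
print(np.sqrt(r * v2_over_r(r)))        # circular velocity at r
\end{verbatim}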
The rotation velocity for a circular orbit at a radius $r$ can then be calculated in the direction $\phi=0, \theta=\pi/2$ (using Eq. \ref{rc}): \begin{equation} {v^2(r)\over r}=\frac{d\Phi(r)}{dr}=G\int_0^\infty r'^2dr'\int_{-1}^{1}d\cos\theta' \int_0^{2\pi}d\phi'\frac{\rho(r',\theta',\phi') (r-r'\sin\theta'\cos(\phi-\phi'))} {(r^2+r'^2-2rr'\sin\theta'\cos(\phi-\phi'))^{3/2}}. \label{v2} \end{equation} This threefold integral was evaluated numerically to obtain the contribution from all mass elements in the halo. The contributions of the bulge, the disk and the DM from the isothermal halo plus rings are indicated separately in Fig. \ref{rot}. The negative contribution from a ring reflects the fact that the derivative of the gravitational potential $\Phi$ changes its sign when crossing the density maximum of the ring, and so does the contribution to $v^2$ (see the term $r-r'\sin\theta'\cos(\phi-\phi')$ in the numerator of Eq. \ref{v2}). This implies an outward gravitational force exerted by the ring on a tracer inside the ring and an inward force on a tracer outside the ring. The hitherto mysterious change of sign of the slope near $r=1.3r_0$ then finds its natural explanation in the large ring of DM at $r=14$ kpc, whose mass is determined by the excess of energetic gamma rays. \subsection{Galactic parameters} From the baryonic density profile and the DM profile determined in this paper, one can determine the following basic properties of our Galaxy: \begin{itemize} \item The radius containing an average density of 200 times the critical density equals $R_{200}=310$ kpc. \item The total DM mass inside this radius is $M_{200}=3.0\cdot10^{12}M_\odot$, to be compared with a visible mass of $5.5\cdot10^{10}$ $M_\odot$. \item The inner (outer) ring contributes 0.3 (3) \% to the total DM mass. \item The fraction $f_d=M_{baryonic}/M_{200}=0.02$. \item The concentration parameter $c=R_{200}/a=310/5=63$. \end{itemize} It should be remembered that these parameters were obtained assuming an isothermal profile for the DM with a constant boost factor. Assuming a smaller boost factor in the bulge because of tidal disruption there will increase the DM mass in the center, since the flux is proportional to $B(r)\langle\rho^n\rangle$, as discussed before. This reduces the DM mass in the outer regions, so the numbers given above should be considered an upper limit; they are, however, well inside the range found for other galaxies with an isothermal profile \citep{Jimenez:2002vy} and consistent with earlier mass estimates of the Galaxy \citep{wilkinson}. \section{Possible objections to the DMA interpretation}\label{objections} The DMA interpretation of the EGRET excess would mean that DM is not so dark anymore, but visible through the 30-40 energetic gamma rays produced in each annihilation. This would be great, but are there more mundane explanations? Attempts to modify the electron and proton spectra from the locally measured spectra do not describe the shape of the EGRET data in all sky directions, as discussed in detail before by comparing the EGRET data with the ``optimized model''. Here we summarize some other possible objections. \begin{enumerate} \item Are the EGRET data reliable enough to draw such strong conclusions? The EGRET detector was calibrated in a quasi mono-energetic gamma-ray beam at SLAC, so its response is well known \citep{egret_cal}. Also the monitoring during the flight was done carefully \citep{egret_cal1}. We have only used data in the energy range between 0.07 and 10 GeV, where the efficiency is more or less flat.
So the 9-year flight provided accurate and reliable data; in particular, it would be hard to believe in an undetected calibration problem that would affect only the data above 0.5 GeV and fake the gamma-ray spectrum from the fragmentation of mono-energetic quarks. \item The gamma-ray spectrum above 0.07 GeV starts to be dominated by pp-interactions and is therefore strongly dependent on the proton energy spectrum. This cosmic ray spectrum was measured only locally in the solar neighborhood. Could a harder spectrum near the Galactic center, where protons can be accelerated by the many supernovae there, explain the EGRET excess? No: first of all, the diffusion times are much shorter than the energy loss times of protons with energies above a few GeV, so one expects the same energy spectrum everywhere. This is proven by the fact that the gamma-ray spectra for the Galactic center and the Galactic anti-center can be described by the {\it same} background shape. \item Is the background well enough known to provide evidence for DMA? The background is dominated by pp-collisions with a reasonably well known shape, and fitting the normalization yields a ``self-calibrating'' background. Trying to obtain a harder gamma-ray spectrum by hardening the proton spectrum usually increases not only the high-energy gamma rays, but also contributes to the low-energy part of the spectrum. Fitting this ``wrong'' shape with a free normalization then reduces the low-energy excess again and recovers the high-energy excess. Note that this ``self-calibration'' of the background also takes care of gas clouds, ringlike or asymmetric structures in the background, uncertainties in the absolute value of the total cross sections, etc. An alternative way of formulating the problem of models without DMA: if the shape of the EGRET excess can be explained perfectly in all sky directions by a gamma contribution originating from the fragmentation of mono-energetic quarks, it is very difficult to replace such a contribution by an excess from nuclei (quarks) or electrons with a steeply falling energy spectrum, especially if one takes into account that the spatial distribution of gamma rays from DMA is different from that of the gamma rays from the background. \item Is it possible to explain the excess in diffuse gamma rays with unresolved point sources? This is unlikely, first of all since the known point sources \citep{egret_cat} contribute only a small fraction of the diffuse gamma rays and the majority of the resolved sources have a rather soft spectrum, typically well below 1 GeV, as can be seen from the plots in the Appendix. If this part of the spectrum were dominated by unresolved sources, then the {\it diffuse} component below 1 GeV would become lower after subtracting the hypothetical unresolved sources, which in turn would lead to a lower normalization of the background and a correspondingly stronger excess for a fixed background shape. So arguing against DMA with unresolved sources goes in the wrong direction. \item One observes a ring of molecular hydrogen near the inner ring and a ring of atomic hydrogen near the outer ring. Could this excess of hydrogen not be responsible for the excess of gamma rays? No: our method of fitting only the shapes with a free normalization implies that this analysis is insensitive to density fluctuations of the background, which change the normalization, not the shape.
\item Is one not over-interpreting the EGRET data by fitting so many parameters for the different components: triaxial halo, inner ring and outer ring? No: first of all, the excess and the enhancement in a ringlike structure at 14 kpc were already discovered in the original paper by \citet{hunter}. What we did was simply to check whether the excess fits: a) a single WIMP mass in all directions; b) an isothermal DM profile plus the substructure; c) the Galactic rotation curve. The DM halo components are determined by {\it independent} sky directions: the outer ring parameters are determined mainly by 30 sky directions towards the Galactic anti-center, the inner ring parameters by ca. 15 sky directions towards the Galactic center and the triaxial halo parameters by ca. 130 sky directions out of the Galactic plane. The most remarkable thing is that all these independent sky directions show an excess, which can be explained by a single WIMP mass around 60 GeV. This is like having 180 independent experiments at an accelerator, all saying they see a significant excess of gamma rays corresponding to $\pi^0$ production from mono-energetic quarks. Asked what mass they need to describe the excess, they all say 60 GeV! \item The outer rotation curve of our Galaxy has large uncertainties from the distance $r_0$ between the Sun and the Galactic center and is determined with a different method than the inner rotation curve. Can this fake the good agreement between the rotation curve calculated from the gamma-ray excess and the measured rotation curve? The outer rotation curve indeed depends strongly on $r_0$, as shown by \citet{honma}, who varied $r_0$ between 7 and 8.5 kpc. At present one knows from the kinematics of the stars near the black hole at the center of our Galaxy that $r_0=8\pm 0.4$ kpc \citep{rnull}, so the distance is already reasonably well known. But whatever the value of $r_0$, the change in slope around $1.3r_0$ is always present, indicating that a ringlike DM structure is always needed. Furthermore, the outer rotation curve first shows the same decrease as the inner rotation curve and only then changes slope, so the different methods agree between $r_0$ and $1.3r_0$. \item How can one be sure that the outer ring originated from the tidal disruption of a rather massive satellite galaxy, so that one can expect an enhanced DM density in the ring? One finds three independent ringlike structures: stars, atomic hydrogen gas and an excess of gamma radiation. The stars show a scale height of several kpc and a low velocity dispersion, so they cannot be part of the Galactic disk. Therefore the infall of a satellite galaxy is the natural explanation. Since the tidal forces are proportional to $1/r^3$, the satellite will be disrupted most strongly at its pericenter, which can lead to DM density enhancements at the pericenter after a few orbits \citep{hayashi}. Some of the stars and gas may be caught in this potential well. All three structures are found at 14 kpc, with the stars all being old and more than 90\% of the mass being DM, as deduced from the strong EGRET excess at this radius. \item The outer ring at 14 kpc has a mass around $9\cdot10^{10}$ solar masses. This is around 50\% of the total mass inside the ring, and one may worry about the stability of the Milky Way disk after the infall of such a heavy galaxy. However, large spiral galaxies show bumps of similar size \citep{sofue1}, so it does not seem uncommon to have masses of this size forming ringlike structures.
Furthermore, the stars near the 14 kpc ring are all very old, so the infall of the satellite galaxy may have happened very early, in which case the disk might have grown after the infall. \item Is it not peculiar that a ringlike structure originating from the infall of a satellite galaxy lies in the plane of the Galaxy? No: in principle the infall can happen in all directions with respect to the plane, but the angular momenta of the inner halo and of a baryonic disk tend to align after a certain time by tidal torques \citep{bailin}. This explains the enhanced DMA in the disk, which runs contrary to the prejudice that DM should be distributed more or less spherically. \end{enumerate} \section{Summary}\label{summary} If Dark Matter is a thermal relic from the early Universe, then it is known to annihilate, since the small relic density measured nowadays requires a large reduction in its number density. The annihilation cross section can be obtained directly from its inverse proportionality to the relic density, the latter being well known from precision cosmology experiments \citep{wmap}. The annihilation into quark pairs will produce $\pi^0$ mesons during the fragmentation into hadrons, which in turn will decay into gamma rays. Since DM is cold, i.e. non-relativistic, the fragmenting quarks have an initial energy equal to the WIMP mass. The gamma spectrum from such mono-energetic quarks is well known from electron-positron colliders, which produce exactly such states. For heavy WIMP masses the gamma spectrum is considerably harder than the background spectrum, which originates mainly from $\pi^0$ mesons produced in pp-collisions of cosmic rays with the gas in the disk. Such an excess of hard gamma rays has indeed been observed by the EGRET satellite, and the relative contributions from the background and the DM annihilation signal can be obtained by fitting their different shapes with a free normalization factor for background and signal. The results of the analysis can be summarized as follows: \begin{itemize} \item By analyzing the EGRET data in 180 independent sky directions we find first of all an excess in each direction, as expected for DMA, and secondly, the spectral shape of the excess is the {\it same} in {\it each} direction and corresponds to a WIMP mass around 60 GeV. \item The {\it flux} of the excess determines the halo profile, i.e. the number density $n_\chi$ of the WIMPs. Together with the WIMP mass $m_\chi$ from the {\it spectrum} of the excess one can reconstruct the DM mass distribution ($=n_\chi\cdot m_\chi$) in the Galaxy, which in turn can be used, in combination with the visible matter, to calculate the rotation curve. The result explains the hitherto unexplained change of slope in the outer rotation curve. \end{itemize} The results mentioned above make no assumption on the nature of the Dark Matter, except that its annihilation produces hard gamma rays from quark fragmentation. The fitted normalization of the background flux comes out to be close to the absolute prediction of the conventional GALPROP propagation model of our Galaxy \citep{galprop2}, while the normalization of the DM signal corresponds to a boost factor from 20 upwards. Such a boost factor from the clustering of DM was calculated with respect to the annihilation cross section from Eq. \ref{wmap}, which is the cross section at the freeze-out temperature of a few GeV.
At the present time the temperature of the universe is much lower, which could reduce the annihilation cross section, thus increasing the boost factor. A good WIMP candidate is the neutralino of Supersymmetry. For a WIMP mass in the range of 50-100 GeV it has very much the properties of a spin-1/2 photon, which would imply that DM is the supersymmetric partner of the cosmic microwave background (CMB). The present analysis is perfectly consistent with such a scenario, if the supersymmetric partners of the quarks and leptons are around 1 TeV. Details about the connection with Supersymmetry have been discussed elsewhere \citep{deboer2,deboer3,deboer4,sander}. It should be emphasized that the excess of diffuse gamma rays has a statistical significance of at least 10$\sigma$ if compared with the conventional shape of the background. This, combined with all the features mentioned above, provides an intriguing hint that this excess a) is indeed indirect evidence for Dark Matter Annihilation and b) traces the DM in our Galaxy, as proven by the fact that we can reconstruct the rotation curve of our Galaxy from the gamma rays. \acknowledgements We thank I.V. Moskalenko, O. Reimer and A. Strong for sharing with us all their knowledge about our Galaxy and the EGRET data and for allowing us to use their implementation of the EGRET analysis software in the GALPROP program. We would also like to thank the EGRET Science Team for their hard work in collecting and calibrating the data and NASA for their support in making satellite data publicly available. This work was supported by the BMBF (Bundesministerium f\"ur Bildung und Forschung) via the DLR (Deutsches Zentrum f\"ur Luft- und Raumfahrt), a grant from the DFG (Deutsche Forschungsgemeinschaft, Grant 436 RUS 113/626/0-1), RFBR (Russian Foundation for Basic Research, Grant 05-02-17603), and the Heisenberg-Landau Program. \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} have achieved impressive results in image generation. By taking inspiration from the Turing test, a generator function is asked to fool a discriminator function which, in turn, tries to distinguish real samples from generated ones. GANs are known to generate very realistic images when trained properly. A special generation task is image-to-image translation, which learns to map each image from an input domain into an image in a (possibly different) output domain. In most real-world domains, there are no pairs of examples showing how to translate an image into a corresponding one in another domain, yielding the so-called UNsupervised Image-to-image Translation (UNIT) problem. In a UNIT problem, two independent sets of images belonging to two different domains (e.g. cats-dogs, male-female, summer-winter, etc.) are given and the task is to translate an image from one domain into the corresponding image in the other domain, even though no paired examples showing this mapping exist. Unfortunately, estimating a joint distribution of the images in the two domains from the distributions in the original single domains is known to have infinitely many possible solutions. Therefore, one possible strategy consists in mapping pairs of corresponding images to the same latent space using auto-encoders and then learning to reconstruct an image from its representation in latent space. Combining auto-encoders with GANs has been proposed in~\cite{rosca2017variational,li2017alice} and outstanding results on image translation have been reported by~\cite{zhu2017unpaired,liu2016coupled,liu2017unsupervised}. This paper proposes a general approach to visual generation and translation that combines learning capabilities with logic descriptions of the images that are generated. The generation problem is translated into a constraint satisfaction problem, where each constraint forces the generated image to have some predefined feature. A main advantage of this approach is to decouple the logic description level from the generative models. The logic layer is architecture-agnostic, allowing any deep-learning-based generator model to be injected into the logic layers. In particular, expressing the task using logic knowledge allows the involved classes to be easily extended to additional translation categories, as well as yielding an easier-to-understand learning scheme. The translations are then interleaved and jointly learned using the constraints generated by the framework, which allow truly realistic images to be obtained for different translation types. Integration of learning and logic reasoning has been studied in the past few years, but no framework has emerged as a generic interface layer. For example, Minervini et al.~\cite{minervini2017adversarial} correct the inconsistencies of an adversarial learner, but the employed methodology is limited in scope and defined ad hoc for the task. A fuzzy generalization of First Order Logic is used both by Hu et al.~\cite{hu2016harnessing} and by Logic Tensor Networks~\cite{serafini2016learning} to integrate logic and learning, but both approaches are limited to universally quantified FOL clauses with specific forms. Another line of research~\cite{rocktaschel2015injecting,demeester2016lifted} attempts to use logical background knowledge to improve the embeddings for Relation Extraction.
These works are also based on ad-hoc solutions that lack a common declarative mechanism that can be easily reused. Markov Logic Networks (MLN)~\cite{richardson2006markov} and Probabilistic Soft Logic (PSL)~\cite{kimmig2012short,bach2015hinge} are two probabilistic logics whose parameters are trained to determine the strength of the available knowledge in a given universe. MLN and PSL with their corresponding implementations have received a lot of attention, but they provide only a shallow integration with the underlying learning processes working on the low-level sensorial data. In MLN and PSL, a low-level learner is trained independently, then frozen and stacked with the AI layer providing a higher-level inference mechanism. The framework proposed in this paper instead allows the underlying learner to be improved directly, while also providing the higher-level integration with logic. TensorLog~\cite{cohen2016tensorlog} is a recent framework that reuses the deep-learning infrastructure of TensorFlow (TF) to perform probabilistic logical reasoning. However, TensorLog is limited to reasoning and does not allow the learners to be optimized while performing inference. This paper utilizes a novel framework, called LYRICS~\cite{marra2019lyrics} (Learning Yourself Reasoning and Inference with ConstraintS)\footnote{URL: https://github.com/GiuseppeMarra/lyrics .}, which is a TensorFlow~\cite{abadi2016tensorflow} environment based on a declarative language for integrating prior knowledge into machine learning. The proposed language generalizes frameworks like Semantic Based Regularization~\cite{diligenti2012bridging,diligenti2015semantic} to any learner trained using gradient descent. The presented declarative language provides a uniform platform to face both learning and inference tasks by requiring the satisfaction of a set of rules on the domain of discourse. The presented mechanism provides a tight integration of learning and logic, as any computational graph can be bound to a FOL predicate. In the experimental section, an image-to-image translation task is formulated using logic, including adversarial tasks with cycle consistency. The declarative approach allows an arbitrary number of translation tasks to be easily interleaved and jointly learned. \section{Constrained Learning and Reasoning} \label{sec:clare} \begin{table}[b] \centering \begin{tabular}{|c|c|c|c|} \hline \diagbox{formula}{t-norm} & {\bf G$\ddot{\mbox{o}}$del} & {\bf \L ukasiewicz} & {\bf Product} \\ \hline $\neg x$ & $1-x$ & $1-x$ & $1-x$ \\ \hline $x\wedge y$ & $\min\{x,y\}$ & $\max\{0,x+y-1\}$ & $x\cdot y$ \\ \hline $x\vee y$ & $\max\{x,y\}$ & $\min\{1,x+y\}$ & $x+y-x\cdot y$ \\ \hline $x\Rightarrow y$ & $x\leq y?1:y$ & $\min\{1,1-x+y\}$ & $x\leq y?1:y/x$ \\ \hline $x\Leftrightarrow y$ & $x=y?1:\min\{x,y\}$ & $1-|x-y|$ & $x=y?1:\min\{x/y,y/x\}$ \\ \hline \end{tabular} \vspace{0.1cm} \caption{Fundamental t-norms and their algebraic semantics.} \label{tab:tnorms} \end{table} In this paper, we consider a unified framework where both learning and inference tasks can be seen as constraint satisfaction problems. In particular, the constraints are assumed to be expressed by First-Order Logic (FOL) formulas and implemented in LYRICS, a software environment we developed that automatically converts FOL expressions into TensorFlow computational graphs. Given a set of task functions to be learned, the logical formalism allows high-level statements about the outputs of such functions to be expressed.
For instance, given a certain dataset, if any pattern $x$ has to belong to either a class $A$ or a class $B$, we may impose that $\forall x:\,f_A(x) \lor f_B(x)$ has to hold true, where $f_A$ and $f_B$ denote two classifiers. As shown in the remainder of this section, there are several ways to convert FOL into real-valued functions. Exploiting the fuzzy generalization of FOL originally proposed by Novak~\cite{novak1987first}, any FOL knowledge base is translated into a set of real-valued constraints by means of fuzzy logic operators. A \emph{t-norm fuzzy logic} \cite{hajek1998} can be used to transform these statements into algebraic expressions, where a t-norm is a commutative, monotone, associative $[0,1]$-valued operation that models the logical AND. Assuming that the logical negation $\neg x$ is converted by means of $1-x$, the algebraic semantics of the other connectives is determined by the choice of a certain t-norm. Different t-norm fuzzy logics have been proposed in the literature and we report in Table~\ref{tab:tnorms} the algebraic operations corresponding to the three fundamental continuous t-norm fuzzy logics: G$\ddot{\mbox{o}}$del, \L ukasiewicz and Product logic. In the following, we will indicate by $\Phi(f(\cal X))$ the algebraic translation of a certain logical formula involving the task functions collected in a vector $f$, and by ${\cal X}$ the available training data. The constraints are aggregated over a set of data by means of FOL quantifiers. In particular, the universal and existential quantifiers can be seen as a logical AND and OR, respectively, applied over each grounding of the data. Therefore, different quantifier semantics can be obtained depending on the selection of the underlying t-norm. For example, for a given logic expression $E\big(f({\cal X})\big)$ using the function outputs $f({\cal X})$ as atoms, the Product t-norm defines: \begin{equation}\label{eq:forall} \forall x_i\, E\big(f({\cal X})\big) \longrightarrow \displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big) \ , \end{equation} where ${\cal X}_i$ denotes the available sample for the $i$-th task function $f_i$. In the same way, the expression of the existential quantifier when using the G$\ddot{\mbox{o}}$del t-norm becomes the \textit{maximum} of the expression over the domain of the quantified variable: \[ \exists x_i\,E\big(f({\cal X})\big) \longrightarrow \displaystyle\max_{x_i \in {\cal X}_i} \; \Phi_E\big(f({\cal X}) \big) \ . \] Once the translations of the quantifiers are defined, they can be arbitrarily nested and combined into more complicated expressions. The conversion of formulas into real-valued constraints is carried out automatically in the framework we propose. Indeed, LYRICS takes as input the expressions defined using a declarative language and builds the constraints once the conversion functions to be exploited have been chosen. This framework is very general and it accommodates learning from examples as well as the integration with FOL knowledge.
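As an illustration of these conversions, the following minimal sketch (plain Python/NumPy rather than the actual LYRICS implementation) evaluates the degree of truth of a universally quantified formula under the connectives of Table~\ref{tab:tnorms}:
\begin{verbatim}
import numpy as np

def NOT(x):        return 1.0 - x
def AND(x, y):     return x * y                 # Product t-norm
def OR(x, y):      return x + y - x * y
def IMPLIES(x, y): return np.where(x <= y, 1.0, y / x)

def FORALL(t):     return np.prod(t)   # Product t-norm over the data
def EXISTS(t):     return np.max(t)    # Goedel t-norm over the data

f_A = np.array([0.9, 0.2, 0.7])   # outputs of classifier f_A on X
f_B = np.array([0.3, 0.9, 0.8])   # outputs of classifier f_B on X

# degree of truth of "forall x: A(x) or B(x)" on the sample
print(FORALL(OR(f_A, f_B)))
\end{verbatim}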
In general terms, the learning scheme we propose can be formulated as the minimization of the following cost function: \begin{equation} \begin{array}{rcl} C(f( {\cal X} )) &=& \displaystyle\sum_{h=1}^H \lambda_h \mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big) \ , \end{array} \label{eq:empirical_objective_function} \end{equation} where $\lambda_h$ denotes the weight of the $h$-th logical constraint and the function $\mathcal{L}$ represents any monotonically decreasing transformation of the constraints, conveniently chosen according to the problem under investigation. In particular, in this paper we exploit the following mappings \begin{equation} \begin{array}{l} \label{eq:L} {\bf (a)}\;\;\mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big)=1-\Phi_h \big(f({\cal X})\big),\\ {\bf (b)}\;\;\mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big)=-\log\Big(\Phi_h \big(f({\cal X})\big)\Big) \ . \end{array} \end{equation} When the mapping defined in Equation~\ref{eq:L}-{\bf (b)} is applied to a universally quantified formula as in Equation~\ref{eq:forall}, it yields the following constraint: \[ \mathcal{L}\left( \displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big)\right) = -\log \left(\displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big)\right) = \displaystyle\sum_{x_i \in {\cal X}_i} -\log\left( \displaystyle \Phi_E\big(f({\cal X})\big) \right) \ , \] which corresponds to a generalization of the cross-entropy loss to generic fuzzy-logic expressions; the cross-entropy loss is commonly used to force the fitting of the supervised data in deep learners. \begin{example}[From logic formulas to constraints] Let us consider the rule \[ \forall x \forall y ~ Married(x,y)\Rightarrow (Republican(x)\Leftrightarrow Republican(y)) \] where $Republican$ and $Married$ are a unary and a binary predicate indicating whether a certain person $x$ votes Republican and whether $x$ is married to a certain person $y$, respectively. The rule states that, if two persons are married, then they vote for the same party. From a learning point of view, enforcing such a rule allows us to exploit the manifold defined by the predicate $Married$ (possibly known) to improve the classification performance of the $Republican$ predicate by correlating the predictions on married pairs. In this case, the input of the predicates can be any vector of features representing a person (e.g. pixels of images, personal data), while the predicates are generally implemented as deep neural models (e.g. a convolutional neural network). The rule can be converted into a continuous loss function using e.g. the Product t-norm as reported in Table~\ref{tab:tnorms} and the previously reported semantics for the quantifiers: \[ \prod_{x,y \in {\cal X}} \min\left\{1, \frac{\min\{f_R(x)/f_R(y),f_R(y)/f_R(x)\}}{f_M(x,y)}\right\} \ , \] where $f_R,f_M$ are the functions approximating the predicates $Republican$ and $Married$, respectively, and $\cal X$ is the set of patterns representing the available sample of people\footnote{For simplicity we do not consider here the case $f_R(x)=f_R(y)=0$.}. The corresponding loss is obtained by applying Equation~\ref{eq:L}-{\bf (b)}: \[ \sum_{x,y \in {\cal X}} \max\left\{0,-\log\left( \frac{\min\{f_R(x)/f_R(y),f_R(y)/f_R(x)\}}{f_M(x,y)}\right)\right\} \ . \] \end{example}
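Continuing the example, the loss can be sketched as follows (Python/NumPy; the classifier outputs and the marriage relation are toy placeholder values, and a small $\epsilon$ regularizes the divisions and the logarithm):
\begin{verbatim}
import numpy as np

eps = 1e-6
f_R = np.array([0.9, 0.8, 0.1])       # Republican(x) for 3 people
f_M = np.array([[0., 1., 0.],         # Married(x, y), symmetric
                [1., 0., 0.],
                [0., 0., 0.]])

loss = 0.0
for i in range(3):
    for j in range(3):
        # Product-logic biconditional: min(x/y, y/x), capped at 1
        iff = min(1.0, f_R[i] / (f_R[j] + eps),
                       f_R[j] / (f_R[i] + eps))
        # Product-logic implication: Married => iff
        implies = min(1.0, iff / (f_M[i, j] + eps))
        loss += max(0.0, -np.log(implies + eps))
print(loss)   # small here, since the only married pair agrees
\end{verbatim}
Note how unmarried pairs ($f_M=0$) make the implication trivially true and contribute no loss.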
\] \end{example} \section{Generative Learning with Logic} \label{sec:generative} This section shows how the discriminative and generative parts of an image-to-image translation system can be formulated by merging logic and learning, yielding a setup that is easier to understand and extend. Let us assume we are given a set of images $\mathcal{I}$. There are two components of a translator framework. First, a set of \textit{generator} functions $g_{j}: \mathcal{I} \rightarrow \mathcal{I}$, which take as input an image representation and generate a corresponding image in the same output domain, depending on the semantics given to the task. Second, a set of \textit{discriminator} functions $d_i: \mathcal{I} \rightarrow [0,1]$ determining whether an input image $x\in \mathcal{I}$ belongs to class $i$ (i.e. stating whether or not an image has a given property); thus, they must be understood in a more general sense than in traditional GANs. Interestingly, all learnable FOL functions (i.e. functions mapping input elements into an output element) can be interpreted as generator functions and all learnable FOL predicates (i.e. functions mapping input elements into a truth value) can be interpreted as discriminator functions. The {\bf discriminator training} corresponds to enforcing the fitting of the supervised examples as: \begin{equation}\label{eq:discr1} \forall x\, S_i(x) \Rightarrow d_i(x), ~~ i = 1,2,\ldots\ \end{equation} where $S_i(x)$ is a given function returning true if and only if an image is a positive example for the $i$-th discriminator. These constraints allow the knowledge provided by the supervision (i.e. the $S_i(x)$) to be transferred into the discriminators, which play a similar role. However, the $d_i(x)$ functions are differentiable and can be exploited to train the generator functions. To this end, assuming that a given function has to generate an image with a certain property, we can force the corresponding discriminator function for that property to positively classify it. The {\bf generator training} for the $j$-th class can be performed by enforcing the generator to produce images that look like images of class $j$; this can be compactly expressed by the rule: \begin{equation}\label{eq:gen1} \forall x\, d_j(g_j(x)), ~~ j = 1,2,\ldots \end{equation} The logical formalism provides a simple way to describe complex behaviors of the generator functions by interleaving multiple positive or negative discriminative atoms (i.e. $d_i(g(x))$). By requiring that a given image should be classified as realistic, the GAN framework implements a special case of these constraints, where the required property is similarity with real images. Cycle consistency~\cite{zhu2017unpaired} is also commonly employed to impose that, by translating an image from one domain to another and then translating it back to the first one, we should recover the input image. Cycle consistency further restricts the number of possible translations. Assuming the semantics of the $i$-th generator is to generate images of class $i$, {\bf cycle consistency} can be naturally formulated as: \begin{equation}\label{eq:cycle} \forall x ~ S_i(x) \Rightarrow g_{i}(g_{j}(x)) = x~~ i=1,2,\ldots, ~~ j=1,2,\ldots \end{equation} Clearly, in complex problems, the chain of functions intervening in these constraints can be longer. \begin{figure}[th] \centering \includegraphics[width=0.3\linewidth]{3x3.jpeg} \caption{The pictures in the first column represent the input images.
The pictures in the second and third columns show the outputs of the functions \texttt{next} and \texttt{previous}, respectively, computed on the input image.} \label{fig:generation} \end{figure} The images in different domains are typically required to share the same latent space. Let us denote by $e:\mathcal{I} \rightarrow \mathbb{R}^n$ an encoding function mapping an image into a latent space. This encoding function must be learned jointly with the other functions during the learning phase. In this special case, the generators must be re-defined as decoder functions taking as input the latent representation of the images, namely: $g_{j}: \mathbb{R}^n \rightarrow \mathcal{I}$. The {\bf auto-encoding} constraints can be expressed using FOL as follows: \begin{equation} \label{eq:identity} \forall x~ S_i(x)\Rightarrow g_{i}(e(x)) = x, ~~ i=1,2,\ldots \end{equation} Up to now, the described constraints are very general and can be exploited in almost all generative translation tasks. However, the logical formalism (and the LYRICS environment) allows the enforcement of any complex knowledge available about the task at hand. We will see some examples in the following experiment. \subsubsection{Next and Previous Digits Generation} As a toy example, we show a task in which we are asked to learn two generative functions, $next$ and $previous$, which, given an image of a $0,1,2$ digit, will produce an image of the next and previous digit, respectively. In order to give each image a next and a previous digit in the chosen set, a circular mapping was used such that $0$ is the next digit of $2$ and $2$ is the previous digit of $0$. The functions $next$ and $previous$ are implemented by feedforward neural networks with one hidden layer of 50 neurons. Since the outputs of these functions are still images, the output size of the networks is equal to the input size. A single-hidden-layer RBF network with a 3-way softmax output layer is used to implement the $zero$, $one$ and $two$ discriminators, bound to the three outputs of the network, respectively. The RBF model, by constructing closed decision boundaries, allows the generated images to resemble the input ones. Finally, let $isZero$, $isOne$ and $isTwo$ be three given functions, defined on the input domain, returning $1$ only if an image is a $0$, $1$ or $2$, respectively. They play the role of the $S_i(x)$ in the general description. The idea behind this task is to learn the generative functions without giving any direct supervision to them, but simply requiring that the generation is consistent with the classification performed by some jointly learned classifiers. The problem can be described by the following constraints to learn the discriminators \[ \forall x\,isZero(x) \Rightarrow zero(x), \quad \forall x\,isOne(x) \Rightarrow one(x), \quad \forall x\,isTwo(x) \Rightarrow two(x) \] and the following constraints to express that the generation functions must return images that are correctly recognized by the discriminators.
\begin{equation*} \begin{array}{l} \forall x~zero(x) \Rightarrow one(next(x)) \land two(previous(x)) \\ \forall x~one(x) \Rightarrow two(next(x)) \land zero(previous(x)) \\ \forall x~two(x) \Rightarrow zero(next(x)) \land one(previous(x)) \end{array} \end{equation*} In addition, in order to force the generated images to be similar to at least one digit in the domain, we enforce the following constraints: \begin{equation*} \begin{array}{l} \forall x~\exists y~(isZero(x) \land isOne(y)) \Rightarrow next(x) = y \\ \forall x~\exists y~(isZero(x) \land isTwo(y))\Rightarrow previous(x) = y \\ \forall x~\exists y~(isOne(x) \land isTwo(y))\Rightarrow next(x) = y \\ \forall x~\exists y~(isOne(x) \land isZero(y))\Rightarrow previous(x) = y \\ \forall x~\exists y~(isTwo(x) \land isZero(y))\Rightarrow next(x) = y \\ \forall x~\exists y~(isTwo(x) \land isOne(y))\Rightarrow previous(x) = y \ . \end{array} \end{equation*} Finally, the cycle consistency constraints can be expressed by: \[ \forall x\, next(previous(x)) = x \qquad \forall x\, previous(next(x)) = x \ . \] We test this idea by taking a set of around $15000$ images of handwritten digits, obtained by extracting only the $0$, $1$ and $2$ digits from the MNIST dataset. The above constraints have been expressed in LYRICS and the model computational graphs have been bound to the predicates. Figure~\ref{fig:generation} shows an example of image translation using this schema, where the image on the left is an original MNIST image and the two images on the right are the outputs of the $next$ and $previous$ generators. Before proceeding, now that an example has been provided, we want to dwell on the possibilities of this approach. The declarative nature of the logical formalism and its subsequent translation into real-valued constraints, exploited as loss functions of an optimization problem, enable the construction of very complex generative problems by means of only a high-level semantic description. By exploiting models inherited from the literature, an end user can face very different problems with minimal implementation effort. In the following section, we show a real image-to-image translation task applying the general setup described in this section, including auto-encoders, GANs and cycle consistency. The declarative nature of the formulation makes it very easy to add an arbitrary number of translation problems and allows them to be learned jointly. \section{Experiments on Image Translation} \label{sec:gan} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{male_to_female.jpg} \caption{\textbf{Face Gender Translation: male to female.} The top row shows input male images, the bottom row shows the corresponding generated female images.} \label{fig:m2f} \end{figure*} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{female_to_male.jpg} \caption{\textbf{Face Gender Translation: female to male.} The top row shows input female images, the bottom row shows the corresponding generated male images.} \label{fig:f2m} \end{figure*} UNIT (unsupervised image-to-image translation) tasks assume that there are no pairs of examples showing how to translate an image into a corresponding one in another domain. Combining auto-encoders with GANs is the state-of-the-art solution for tackling UNIT generation problems~\cite{zhu2017unpaired,liu2016coupled,liu2017unsupervised}. In this section, we show how this adversarial setting can be naturally described and extended by the proposed logical and learning framework.
Furthermore, we show how the logical formulation allows a straightforward extension of this application to a greater number of domains. The CelebFaces Attributes dataset~\cite{liu2015faceattributes} was used to evaluate the proposed approach; in this dataset, celebrity face images are labeled with various attributes (gender, hair color, smiling, eyeglasses, etc.). Images are defined as 3D pixel tensors with values belonging to the $[0,1]$ interval. The first two dimensions represent width and height coordinates while the last dimension indexes the RGB channels. In particular, we used the \emph{Male} attribute to divide the entire dataset into the two input categories, namely male and female images. In the following, $S_M(x)$ and $S_F(x)$ (such that $\forall x ~ S_F(x) \Leftrightarrow \lnot S_M(x)$) are two given predicates, with $S_M(x)$ holding true if and only if an image $x$ is tagged with the \emph{Male} tag. Let $e$ be an encoding function mapping images into the latent domain ${\mathcal Z}=\mathbb{R}^n$. The encoders are implemented as multilayer convolutional neural networks with resblocks~\cite{he2016deep}, leaky-ReLU activation functions and instance normalization at each layer (see \cite{liu2017unsupervised} for a detailed description of the architecture). The generative functions $g_M$ and $g_F$ map vectors of the domain $\mathcal Z$ into images. These functions are implemented as multilayer transposed convolutional neural networks (also called ``deconvolutions'') with resblocks, leaky-ReLU activation functions and instance normalization at each layer. To implement the shared latent space assumption, $g_M$ and $g_F$ share the parameters of the first layer. The functions $d_M$ and $d_F$ are trained to discriminate whether an image is real or has been generated by the $g_M$ and $g_F$ generator functions. For example, if $x$ and $y$ are two images such that $S_M(x), S_F(y)$ hold true, then $d_M(x)$ should return $1$ while $d_M(g_M(e(y)))$ should return $0$. The problem can be described by the logical constraints, introduced in a general form in Section \ref{sec:generative}, that the encoding and generation functions need to satisfy. First, Equation~\ref{eq:identity} is used to enforce the encoder and generator of the same domain to compose to the identity, that is to map the input into itself: \begin{align} \forall x ~ S_M(x) \Rightarrow g_M(e(x)) = x \label{eq:l11} \\ \forall x ~ S_F(x) \Rightarrow g_F(e(x)) = x \label{eq:l12} \end{align} where the equality operator comparing two images in Equations \ref{eq:l11} and \ref{eq:l12} is bound to a continuous and differentiable function computing a pixel-by-pixel similarity between the images, defined as $1 -\tanh( \frac{1}{P}\sum_p |x_p - y_p|)$, where $x_p$ and $y_p$ are the $p$-th pixels of the $x$ and $y$ images and $P$ is the total number of pixels. Cycle consistency is also imposed, as described by Equation~\ref{eq:cycle}: \begin{align} \forall x ~ S_M(x) \Rightarrow g_M(e(g_F(e(x)))) = x \label{eq:cycle1} \\ \forall x ~ S_F(x) \Rightarrow g_F(e(g_M(e(x)))) = x \label{eq:cycle2} \end{align} where the same equality operator is used to compare the images.
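For concreteness, the similarity bound to the equality operator can be written in a few lines of NumPy (a minimal sketch under our own naming, not the actual LYRICS binding; here $P$ counts pixel values across the three channels):
\begin{verbatim}
import numpy as np

def image_equality(x, y):
    # Truth degree for "x = y" between two images, computed as
    # 1 - tanh(mean absolute pixel difference), as defined in the text.
    # x, y: arrays of shape (H, W, 3) with values in [0, 1].
    P = x.size                        # total number of pixel values
    return 1.0 - np.tanh(np.abs(x - y).sum() / P)

x = np.zeros((64, 64, 3))
print(image_equality(x, x))           # identical images -> 1.0
print(image_equality(x, x + 0.5))     # dissimilar images -> ~0.54
\end{verbatim}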
Finally, according to Equation~\ref{eq:gen1}, the generated images must fool the discriminators, so that they are detected as real: \begin{align} \forall x ~ S_M(x) \Rightarrow d_F(g_F(e(x)))\label{eq:adv_g1}\\ \forall x ~ S_F(x) \Rightarrow d_M(g_M(e(x)))\label{eq:adv_g2} \end{align} On the other hand, the discriminators must correctly separate real images from generated ones by the satisfaction of the following constraints, as stated by Equation~\ref{eq:discr1}: \begin{align} \forall x ~ S_M(x) \Rightarrow d_M(x) \land \lnot d_F(g_F(e(x)))\label{eq:adv_d1}\\ \forall x ~ S_F(x) \Rightarrow d_F(x) \land \lnot d_M(g_M(e(x))) \label{eq:adv_d2} \end{align} Using logical constraints allows us to give a clean and easy formulation of the adversarial setting. These constraints force the generation functions to produce samples that are categorized in the desired class by the discriminators. Moreover, the decoupling between the models used to implement the functions, which can be inherited from the previous literature, and the description of the problem makes it really straightforward to extend or transfer this setting. We implemented this mixed logical and learning task using LYRICS. The Product t-norm was selected to define the underlying fuzzy logic problem. This selection of the t-norm is particularly suited for this task because, as shown earlier, it defines a cross-entropy loss on the output of the discriminators, which is the loss that was used to train these models in their original setup. The $e$, $g_M$, $g_F$ functions are trained to the satisfaction of the constraints defined in \Cref{eq:l11,eq:l12,eq:cycle1,eq:cycle2,eq:adv_g1,eq:adv_g2}, while $d_M$ and $d_F$ are trained to satisfy \Cref{eq:adv_d1,eq:adv_d2}. Weight learning for the models was performed using the Adam optimizer with a fixed learning rate equal to $0.0001$. Some male-to-female and female-to-male translations are shown in Figures \ref{fig:m2f} and \ref{fig:f2m}, respectively. \subsubsection{Adding Eyeglasses} Given this setting, we can integrate a third domain into the overall problem by adding the corresponding constraints for this class. Let $S_E(x)$ be a given predicate holding true if and only if an image $x$ is tagged with the \emph{eyeglasses} tag in the dataset. Let $g_E(x)$ be the corresponding generator and $d_E(x)$ the corresponding discriminator for this property. The same network architectures as in the previous description are employed to implement $d_E$ and $g_E$.
The addition of this third class requires adding the following constraints for the generators, integrated with those for the male and female classes, \begin{align*} \forall x& ~ S_M(x) \Rightarrow d_E(g_E(e(x)))\\ \forall x& ~ S_F(x) \Rightarrow d_E(g_E(e(x)))\\ \forall x& ~ S_E(x) \Rightarrow g_E(e(x)) = x \\ \forall x& ~ S_M(x) \wedge S_E(x) \Rightarrow d_E(g_F(e(x))) \\ \forall x& ~ S_F(x) \wedge S_E(x) \Rightarrow d_E(g_M(e(x))) \\ \forall x& ~ S_M(x) \wedge S_E(x) \Rightarrow g_E(e(g_F(e(x)))) = g_F(e(x)) \\ \forall x& ~ S_F(x) \wedge S_E(x) \Rightarrow g_E(e(g_M(e(x)))) = g_M(e(x)) \\ \forall x& ~ S_M(x) \wedge \neg S_E(x) \Rightarrow g_M(e(g_E(e(x)))) = g_E(e(x)) \\ \forall x& ~ S_F(x) \wedge \neg S_E(x) \Rightarrow g_F(e(g_E(e(x)))) = g_E(e(x)) \end{align*} and the following for the discriminator: \begin{align*} \forall x& ~ S_E(x) \Rightarrow d_E(x)\\ \forall x& ~ S_M(x)\wedge\neg S_E(x) \Rightarrow \neg d_E(g_E(e(x)))\\ \forall x& ~ S_F(x)\wedge\neg S_E(x) \Rightarrow \neg d_E(g_E(e(x))) \end{align*} We note that, in this case, the eyeglasses class is mutually exclusive with neither the male nor the female class. This is the reason why we have to consider some constraints with a conjunction in their premises. In addition, we have to distinguish how the male and female generators behave in the presence of the eyeglasses attribute. In particular, we enforce that translating the gender attribute does not affect the presence of eyeglasses. Figure~\ref{fig:eyeglasses} shows some examples of the original face images and the corresponding generated images of the faces with added eyeglasses. As already said, the proposed approach is very general and can be exploited to manage possibly several attributes in a visual generation task, combining a high-level logical description with deep neural networks. \begin{figure}[t] \includegraphics[width=0.98\textwidth]{eyeglasses.jpg} \caption[Face Gender Translation: male/female to eyeglasses]{\textbf{Face Gender Translation: male/female to eyeglasses.} The top row shows input male/female images whereas the bottom row shows the corresponding generated faces with eyeglasses.} \label{fig:eyeglasses} \end{figure} \section{Conclusions} \label{sec:conclusion} This paper has presented a new general approach to visual generation combining logic descriptions of the target to be generated with deep neural networks. The most distinguishing property is the flexibility of describing new generation problems by simple logic descriptions, which makes it possible to attack very different problems. Instead of looking for specific hand-crafted cost functions, the proposed approach offers a general scheme for their construction that arises from t-norm theory. Moreover, the interleaving of different image translation tasks allows a knowledge base to be accumulated that can dramatically facilitate the construction of new translation tasks.
The experimental results show the flexibility of the proposed approach, which makes it possible to deal with realistic face translation tasks. \bibliographystyle{splncs04}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the near future new galaxy surveys will provide a large number of spectra, which will enable important measurements of galaxy properties. For example, the 2 degree field (hereafter 2dF) Galaxy Survey aims to collect 250,000 spectra. The integrated spectrum of a galaxy is a measure of its stellar composition and gas content, as well as its dynamical properties. Moreover, spectral properties often correlate fairly closely with galaxy morphology. Indeed, as the spectra are more directly related to the underlying astrophysics, they could prove a more robust classifier for evolutionary and environmental studies. Spectra can be obtained to larger redshifts than ground-based morphologies and, as 1-D data sets, are easier to analyse. Although the concept of spectral classification goes back to Humason (1936) and Morgan \& Mayall (1957), few uniform data sets are available and a number of different approaches to the problem are possible. Spectral classification is important for several practical and fundamental reasons. In order to derive luminosities corrected for the effects of redshift, the $k$-correction must be estimated for each galaxy. The rest-frame spectral energy distribution is needed, which can be obtained by matching the observed spectrum against templates of local galaxies. The proportion of sources in each class as a function of luminosity and redshift is of major interest. Apart from its relevance for environmental and evolutionary studies, new classes of objects may be discovered as outliers in spectral parameter space. Furthermore, by incorporating spectral features with other parameters (e.g. colour and velocity dispersion) an `H-R diagram for galaxies' can be examined with possible important implications for theories of galaxy formation. In this paper we explore the PCA and Artificial Neural Network (hereafter ANN) combination when applied to noisy galaxy spectra. PCA has been demonstrated to be a useful tool for spectral classification, with applications to stellar spectra (e.g. Murtagh \& Heck 1987; von Hippel et al. 1994), QSO spectra (Francis et al. 1992) and galaxy spectra (Sodr\'e \& Cuevas 1994; Sodr\'e \& Cuevas 1996; Connolly et al. 1995). ANNs have been used for classification of images (Storrie-Lombardi et al. 1992; Naim et al. 1995; Lahav et al. 1995) and stellar spectra (Storrie-Lombardi et al. 1994) along with a variety of other astronomical applications. Other approaches have also been taken, such as analysing the weight of specific components in each galaxy spectrum (Zaritsky et al. 1995). This approach is similar to the PCA technique, but the templates do not form an orthogonal set, although they can be chosen specifically to highlight certain characteristics in the spectra, such as young stars or emission lines. However, this approach does not allow for spectral variations extending outside the scope of the predetermined template set. In this paper we use PCA on the complete spectra of the data set as opposed to some other spectral analyses which use specific measured quantities from the spectra (e.g. line strengths). We prefer to use the complete data so that we are not restricted to a set of predetermined measures. By using all the available data, the S/N inherent in the method is increased. ANNs, originally suggested as simplified models of the human brain, are computer algorithms which provide a convenient general-purpose framework for classification (e.g. Hertz et al. 1991). 
ANNs are related to other statistical methods common in Astronomy and other fields. In particular ANNs generalize Bayesian methods, multi-parameter fitting, PCA, Wiener filtering and regularisation methods (e.g. Lahav et al. 1996). We take the approach of using a fairly small set of high S/N spectra, and degrade them using the parameters of the 2dF system on the AAT. This produces simulated spectra for a range of possible noise levels, which allows us to quantify the effect of the increasing noise and put limits on the success rates we hope to achieve for the spectral classification. In section 2 we describe the data set and show examples of the simulations. In section 3 we utilize the technique of Principal Component Analysis to compress the data set and to extract the `real' information from the noisy spectra, leading to section 4, where we look at the spectral reconstructions based on the PCA, highlighting the ideal methods to use. In section 5 we use an Artificial Neural Network to operate on the results from the PCA, and demonstrate the level of classification success attained by this method. We end with a discussion of the results and the conclusions of the investigation. \section[]{Data} The spectra used in this investigation are taken from the spectrophotometric atlas of galaxies (Kennicutt 1992) and represent the integrated spectra of local galaxies. They have been selected to demonstrate a wide range of spectral signatures. Most of the spectra have 5-8\AA \ resolution but a few have been observed giving a lower resolution of 10-25\AA . The spectra cover the wavelength range from 3650-7100\AA , although for the purposes of this paper, we are left with a slightly shorter range when the simulation process has been applied (see Appendix A). More details of the observations are given in Kennicutt (1992). For the purposes of this paper the spectra have been split into two groups. The `Normal26' spectra have been selected as being representative spectra for galaxies of normal morphological type, i.e. galaxies which conform simply to the Hubble classification scheme (Hubble 1936). The `Unusual29' spectra comprise the remainder of the galaxies observed by Kennicutt. These spectra include peculiar and starburst galaxies and also galaxies with Seyfert nuclei. \begin{table} \caption{Morphological groups. Each group is given a name, a number G (for use in section 5), the T-Types covered by the group and the percentage of the galaxies in the ESO catalogue which fall into that group.} \begin{tabular}{llcl} \hline {\em Group} & {\em G} & {\em T-Types} & {\em ESO \%}\\ \hline \hline {\bf E,S0} & {1} & $T\leq 0.5$ & 21.0\\ {\bf Sa} & {2} & $0.5<T\leq 2.5$ & 16.9\\ {\bf Sb} & {3} & $2.5<T\leq 4.5$ & 20.9\\ {\bf Scd} & {4} & $4.5<T\leq 8.5$ & 30.1\\ {\bf Irr} & {5} & $8.5<T$ & 11.1\\ \end{tabular} \end{table} The Normal26 spectra have been split into five broad groups, based on their visual morphology. These groups can be seen in Table 1, which also gives the percentage of galaxies falling into each group for the ESO catalogue (Lauberts \& Valentijn 1989). We decided to bin the data in this way so that there are a number of spectra in each group and the ANN is not over-trained to recognize a specific spectrum for a particular class. With the small data set that we have, this is still a risk, but the combination of this binning and using many noisy deviates of each spectrum helps to alleviate the problem.
Unfortunately the `Irr' group is not well represented in the Normal26 sample, since all but two of these galaxies fall into the Unusual29 set. \begin{figure} \label{fig1} \psfig{figure=fig1.ps,width=3.4in,height=3.4in} \caption{Simulated spectra based on NGC 3379 (E0).} \end{figure} The Unusual29 spectra have also been tentatively placed in these 5 groups, based on their visual morphology where it is defined. Spectra without a defined morphology, or purely labeled as peculiar, have been placed in the final bin. Table 2 summarizes the data set. A notes section is also given for the Unusual29 spectra, which specifies why they have been categorized as unusual. Two of the spectra were found to adversely bias the PCA, and so have been removed from the normal set of spectra. In the case of NGC 1569 this is due to serious galactic reddening being evident in the continuum. In the case of MK 487 the reason is less obvious, but visual inspection of the spectrum (Kennicutt 1992) indicates an erratic continuum. \begin{table*} \caption{The selection of galaxy spectra from Kennicutt (1992).} \begin{tabular}{lllp{1cm}llll} \hline \multicolumn{3}{c}{\bf Normal26} && \multicolumn{4}{c}{\bf Unusual29} \\ \hline \hline {\em Galaxy} & {\em Morphology} & {\em Group} && {\em Galaxy} & {\em Morphology} & {\em Group} & {\em Notes}\\ \hline \hline NGC3379 & E0 & E,S0 && MK487 & Im & Irr & Odd (see text) \\ NGC4472 & E1/S0 & E,S0 && NGC1569 & Sm/Im & Irr & Reddened \\ NGC4648 & E3 & E,S0 && NGC4670 & SB pec & Irr & Peculiar \\ NGC4889 & E4 & E,S0 && NGC3034 & I0 & Irr & Peculiar\\ NGC3245 & S0 & E,S0 && NGC3077 & I0 & Irr & Peculiar\\ NGC3941 & SB0/a & E,S0 && NGC5195 & I0 pec & Irr & Peculiar\\ NGC4262 & SB0 & E,S0 && NGC6240 & I0 pec & Irr & Peculiar\\ NGC5866 & S0 & E,S0 && NGC3310 & Sbc pec & Sb & Global Starburst\\ NGC1357 & Sa & Sa && NGC3690 & Sc pec & Scd & Global Starburst\\ NGC2775 & Sa & Sa && NGC6052 & Sm pec & Irr & Global Starburst\\ NGC3368 & Sab & Sa && UGC6697 & S pec & Irr & Global Starburst\\ NGC3623 & Sa & Sa && NGC2798 & Sa pec & Sa & Starburst Nucleus\\ NGC1832 & SBb & Sb && NGC3471 & Sa & Sa & Starburst Nucleus\\ NGC3147 & Sb & Sb && NGC5996 & SBd & Irr & Starburst Nucleus\\ NGC3627 & Sb & Sb && NGC7714 & S pec & Irr & Starburst Nucleus\\ NGC4750 & Sbpec & Sb && MK35 & pec & Irr & Peculiar\\ NGC2276 & Sc & Scd && MK59 & SBm/Im & Irr & HII Region\\ NGC4775 & Sc & Scd && MK71 & SBm & Irr & HII Region\\ NGC5248 & Sbc & Sb && NGC3516 & S0 & E,S0 & Seyfert I\\ NGC6217 & SBbc & Sb && NGC5548 & Sa & Sa & Seyfert I\\ NGC2903 & Sc & Scd && NGC7469 & Sa & Sa & Seyfert I\\ NGC4631 & Sc & Scd && NGC3227 & Sb & Sb & Seyfert II\\ NGC6181 & Sc & Scd && NGC6764 & SBb & Sb & Seyfert II\\ NGC6643 & Sc & Scd && MK3 & S0 & E,S0 & Seyfert II\\ NGC4449 & Sm/Im & Irr && MK270 & S0 & E,S0 & Seyfert II\\ NGC4485 & Sm/Im & Irr && NGC1275 & E pec & E,S0 & Peculiar\\ & & && NGC3303 & pec & Irr & Peculiar\\ & & && NGC3921 & S0 pec & E,S0 & Peculiar\\ & & && NGC4194 & Sm pec & Irr & Peculiar\\ \end{tabular} \end{table*} \begin{figure} \label{fig2} \psfig{figure=fig2.ps,width=3.4in,height=3.4in} \caption{Simulated spectra based on NGC 3627 (Sb).} \end{figure} In order to accurately test the methods for spectral reconstruction and classification it is first necessary to produce a set of galaxy spectra which resembles the spectra received from large redshift surveys. 
Details for the 2dF system (Taylor 1994) on the Anglo-Australian Telescope are used to degrade the Normal26 spectra, to simulate spectra from objects with a range of $b_{\rm J}$ magnitudes. Appendix A gives details about the spectral simulation procedure, detailing how the system response function, sky spectrum, fibre size and galaxy magnitude are incorporated into the simulations. It should be noted that it is difficult to predict the exact performance of the 2dF system, and observations will obviously differ due to conditions; hence the simulations remain approximate, but they demonstrate the level and variation of noise across the spectrum. We do not deal with the effect of aperture bias (the fact that a fibre can only sample a small area of a bright galaxy) in this paper, but acknowledge that it may cause discrepancies between the morphological and spectral type determined for a galaxy. Zaritsky et al. (1995) find that in general aperture bias would not constitute a large effect for the majority of galaxies, but they stress that it may still pose a problem in some cases. Figures 1-3 show examples of the simulated spectra for an elliptical galaxy, a spiral galaxy and an irregular emission line galaxy. In each case the original spectrum and simulations at a $b_{\rm J}$ magnitude of 19, 20 and 21 are shown. We refer to the original spectra (from Kennicutt 1992) in the following sections as the `clean' spectra, and we consider them not to contain noise. This is a reasonable assumption, since figures 1-3 show that the low S/N in the noisy simulations makes any noise in the original spectra negligible. Figures 1-3 indicate the importance of the emission lines for spectral classification at low S/N, and also reveal how the additional noise due to sky lines can produce false features such as those seen in figure 2, at $b_{\rm J}=20$ and $b_{\rm J}=21$. For this method, the spectra must be compared in their rest frame, so we have used the redshifts (from Kennicutt 1992) for the galaxies to de-redshift the spectra (see Appendix A for the full procedure). The accuracy of the redshifts and resolution of the spectra determine the number of wavelength points which are used for the PCA. \begin{figure} \label{fig3} \psfig{figure=fig3.ps,width=3.4in,height=3.4in} \caption{Simulated spectra based on NGC 4485 (Im).} \end{figure} \begin{figure} \label{fig4} \psfig{figure=fig4.ps,width=3.4in,height=3.4in} \caption{The variation in signal to noise per 8\AA \ resolution element measured for simulated spectra as a function of $b_{\rm J}$.} \end{figure} A set of 900 spectra is produced in this way for each $b_{\rm J}$ magnitude. These 900 are based upon the Normal26 spectra, but each spectrum is simulated $N_{\rm s}$ times, where $N_{\rm s}$ is selected so that the final 900 spectra have the same morphological distribution as the ESO catalogue, as given in Table 1. Since our initial data set is limited, this set of 900 spectra does not contain all the variation in a true observed set of spectra, so we acknowledge that this analysis may lead to an optimistic rate of classification; nevertheless, it demonstrates the methods we wish to use on more extensive sets of observed spectra. The S/N per 8\AA \ resolution element averaged over all wavelengths for all 900 spectra is calculated when a set of spectra is produced for a given $b_{\rm J}$.
Figure 4 shows a plot of this $\langle S/N \rangle$ against $b_{\rm J}$ computed in this way, which can be used to associate the $b_{\rm J}$ magnitudes used in this paper with a general S/N spectrum from any source. \section[]{Principal Component Analysis} Principal Component Analysis (hereafter PCA) is a technique for both data compression and analysis (Murtagh \& Heck 1987) which can be employed to assess variations in galaxy spectra. By identifying the linear combination of input parameters with maximum variance, a set of new axes (principal components) is derived. A mathematical description is given in Appendix B. Computationally, we use the technique of Singular Value Decomposition to find the eigenvectors (or principal components) of the covariance matrix. The question which arises is how to create the ideal set of principal components (or PCs) for galaxy spectra. The Normal26 spectra provide a useful data set, but they are not a representative sample of galaxy spectra in general. Ideally it would be best to define the PCs from the observed galaxy spectra of a large survey, but these data would be noisy and it is not clear at first how the noise would affect the PCA. Possibly filtering the spectra to reduce the noise level could improve the analysis, but this will lead to a loss of information. As explained in section 2, a set of 900 clean spectra is created, based upon the Normal26 galaxies, which are sampled in 4\AA \ bins resulting in 768 wavelength bins for each spectrum. PCA is now conducted on the $(768\times768)$ covariance matrix using the techniques outlined in Appendix B. It should be noted that the speed of the PCA algorithm is dependent only on the number of wavelength bins used and not on the number of spectra (we chose 900 purely to produce a large morphologically weighted data set with many random variations). \subsection[]{Variance Scaling and normalization} In the analysis which follows, the spectra are all normalized to have the same total flux over the wavelength range considered, then the mean spectrum is subtracted from each of the input spectra. No further scaling is used. We did examine the possibility of scaling each input flux to unit-variance across the sample of spectra. This method is sometimes recommended for PCA analyses since it places each input on an equal footing. This would be advantageous when considering object attributes which are fundamentally different, for example if we were basing a classification system on galaxy colour, image ellipticity and OII equivalent width. For spectra, the problem is slightly different since all the inputs are fluxes for different wavelengths, hence the relative strengths of the inputs are important and should be retained in the analysis. Francis et al. (1992) investigate this problem for PCA on QSO spectra and choose not to scale by the variance. Scaling by the variance means that the PCs for galaxy spectra at different noise levels are radically different, since at high S/N the PCA is sensitive to well-correlated small features, but at low S/N these features are lost. Having chosen not to use this scaling, we find the PCs only vary slightly with noise level; hence the PCs are intrinsic to the galaxy spectra themselves and not to particular observing conditions. In this case the PCs relate to both the magnitude and the correlation of the features, being chiefly concerned with regions of the spectra where the signal is strongest, so that they are not swamped by noise in the continuum.
We also did the analysis with variance scaling, but found it gave a less reliable final classification, so we do not discuss this further. Other possibilities are a normalization of each spectrum such that the integrated flux is the same, or alternatively such that the sum of the squares of the fluxes across the spectrum is unity (unit scalar product). This second case has certain mathematical benefits since it means that each spectrum can be represented by a unit vector in the parameter space. Connolly et al. (1995) consider these possibilities, but they find their results are not greatly affected by the choice of normalization. Hence for simplicity, we opt purely for a normalization to equal integrated flux. The other operation we perform is to subtract the mean spectrum from each of the spectra in the set. This centres the points in the PC space about the origin, and makes the PCs easier to interpret. \subsection[]{The meaning of the principal components} \begin{figure} \label{fig5} \psfig{figure=fig5.ps,width=3.4in,height=3.4in} \caption{Principal components for the Normal26 spectra and for the entire set, without additional noise.} \end{figure} Figure 5 shows the mean and the first 3 PCs for the data set based upon the Normal26 spectra and Figure 6 shows an enlargement of the first PC indicating the important features. These are computed without noise being added to the spectra, but when noise is added, the PC axes change very little. We quantify this effect in more detail in section 4. We find that the first PC accounts for 87\% of the variance in the set of 900 clean spectra based on the Normal26 spectra. When the set is simulated at $b_{\rm J}=22$ $(S/N\sim2.5)$ the plot of the first PC is qualitatively the same, but only accounts for 11\% of the variance, since the noise is producing large amounts of uncorrelated variance. Let us consider the meaning of the correlations which have been found in the Normal26 spectra. It can be seen that the first PC represents the correlation in the strength of the emission lines with the young stellar component. It shows that the OII (3727), OIII (4959 and 5007), $H\alpha$ (6563) and $H\beta$ (4861) lines are all linked with a blue continuum, demonstrating the effect of the ionizing photons from young stars exciting the interstellar medium, resulting in strong emission lines. The second PC allows for a range of ionization levels of the galaxies, since the oxygen and hydrogen lines are anti-correlated. The third PC indicates numerous other correlations between absorption and emission features. A parallel study involving PCA with the Kennicutt sample of galaxies (Sodr\'e \& Cuevas 1996) finds similar principal components. \begin{figure} \label{fig6} \psfig{figure=fig6.ps,width=3.4in,height=3.4in} \caption{The first principal component for the Normal26 spectra, indicating some of the important features.} \end{figure} \begin{figure} \label{fig7} \psfig{figure=fig7.ps,width=3.4in,height=3.4in} \caption{The projection onto the first principal component plotted against T-Type for the Normal26 spectra.} \end{figure} \begin{figure} \label{fig8} \psfig{figure=fig8.ps,width=3.4in,height=3.4in} \caption{Projections onto the first 3 PCs plotted against one another for the Normal26 spectra.} \end{figure} It is hoped that the amplitude of a small number of these eigenspectra in any particular spectrum will be sufficient to spectrally classify the galaxies.
Since the projections onto the eigenspectra use information from the entire spectrum, this provides a much less noisy measure than comparing the strength of particular features. A simple indication that this may be true can be seen in Figure 7, which indicates a correlation between the projection (see Appendix B) of each of the Normal26 spectra onto the first PC and the morphological type of the galaxy on the T-Type system (de Vaucouleurs 1959; de Vaucouleurs 1963). A simple fit to this relation (as shown by the dotted line) allows the projection onto the first PC to be used to classify the galaxies into a specific T-Type. This is used later to provide a comparison to the classifications using the ANN. Further evidence of an underlying spectral sequence can be seen in Figure 8, where the projections onto the first three PCs are plotted against each other, with the symbols representing different morphological types. Segregation in this plot indicates that the PCs are capable of differentiating between galaxy morphologies and that the Hubble sequence is clearly evident in the Normal26 spectra and not just their visual appearance on the sky. The Hubble sequence appears as a combination of PC1 and PC2, indicating that although one spectral parameter is sufficient to explain the sequence up to Sc galaxies, a second parameter (allowing for variation in ionization) becomes very important for late-type and irregular galaxies, where strong star formation leads to high ionization levels. In their study, Sodr\'e \& Cuevas (1996) found that stellar synthesis models (Bruzual \& Charlot 1995) with different ages and star formation rates form a similar sequence when projected onto PCs derived from the Kennicutt (1992) sample of galaxies. Figure 5 also shows the mean and first three PCs for the entire data set, including the Normal26 and the Unusual29 spectra. These are derived by the same method as described above. The morphological mix from Table 1 is again used, but note that this does not necessarily represent a true spectral mix. This means that the galaxies with unusual features are over-represented, but this allows their effect on the PCA to be seen. In this case, the first PC is entirely due to the emission line strength, since the PCA is dominated by the emission line objects. The young stellar continuum is evident in the second PC along with anti-correlations between the major emission lines. In this respect the first PC from the Normal26 spectra is evident as a combination of the first and second PCs of the entire data set. The third PC is now quite different, and displays the broad hydrogen lines characteristic of the galaxies with Seyfert nuclei. \section[]{Reconstructions from noisy spectra} We now proceed to investigate the effect of noise on the PCA technique. It would be useful to perform PCA on a large set of observed spectra, such as the 2dF spectra, since this set would contain all the possible variation in galaxy spectra and is representative of the local universe. However these observed spectra would be noisy and this may affect the location of the principal components. Using the spectral simulations we can assess the nature of this effect, by measuring the ability of the PCs to reconstruct the original spectrum. Appendix B explains how a reconstruction of the original data can be found using only a small number of PCs.
Taking a set of noisy simulated spectra we can reconstruct them and define the total Residue $R$ for the set as $$ R={1 \over NM}\sum_{ij}(S^{r}_{ij}-S_{ij})^2 , \eqno (1) $$ where the sum is over all $N$ spectra and $M$ wavelengths, $S^{r}_{ij}$ is the flux of the $i$th reconstructed spectrum at wavelength $j$, and $S_{ij}$ is the flux of the original clean $i$th spectrum at wavelength $j$. This reconstruction technique suggests the useful ability of PCA to disregard the noise, which is assumed to be uncorrelated. The major correlations in the signal are selected by the PCA, and the noise only interferes with the later PCs, such that a reconstruction using the most significant PCs can eliminate much of this uncorrelated noise. To demonstrate this effect, Figure 9 shows a plot of $R$ against the number of PCs used in the reconstruction. Seven different cases are considered and these are listed in Table 3. In each case two sets of spectra are used. One set is used to define the principal components and is labeled `Spectra$_{\rm PCA}$'. The other set is reconstructed to find the Residue $R$ and is labeled `Spectra$_{\rm REC}$'. This reflects the possibility of defining the principal components prior to the observations using a smaller set of high-quality spectra. However this would probably entail using a limited data set, so Table 3 also contains a case where only half of the Normal26 spectra have been used to define the PCs. Each set contains 900 spectra mimicking the morphological mix given in Table 1. There is one other possibility considered in Table 3, that of filtering the spectra prior to the reconstruction, and we have used the technique of Wiener filtering to demonstrate this effect (see section 4.2). \subsection[]{Reconstruction results} Referring to Figure 9, we can see that the smallest $R$ value is obtained using the noise-free spectra for the Spectra$_{\rm PCA}$ and the Spectra$_{\rm REC}$ sets (line a). This is to be expected and represents the ideal condition, but one which is not available for a real observation since the underlying signal is not known. A set of clean spectra, such as the spectra we are using, could be used to form the PCs for use with observed noisy spectra, as lines b and e demonstrate. However the line d indicates that when only half of the spectra are used in the PCA the result is considerably worse, which suggests that a small set of spectra should not be considered representative of a larger ensemble. Therefore it would be better to derive the PCs from the noisy spectra themselves. This condition is shown by lines c and g for two different noise levels. It can be seen that for $b_{\rm J}$=19 (line c) about 8 PCs still contain meaningful information, but the later PCs are merely reconstructing the noise (as indicated by a rise in $R$). This means that the optimal reconstruction is found by limiting the PCs used to the point at which $R$ is found to be a minimum, and this then represents the entire meaningful information that can be extracted from the spectrum. Line g indicates that for very noisy data only the first PC is still meaningful.
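For concreteness, the normalization, PCA compression and truncated reconstruction described above, together with the residue of equation (1), can be sketched in a few lines of Python (an illustrative sketch of our own, not the code used for this analysis; the SVD stands in for an explicit diagonalization of the covariance matrix, as in Appendix B):
\begin{verbatim}
import numpy as np

def pca_reconstruct(noisy, n_pcs):
    # noisy: (N, M) array of N spectra sampled at M wavelength bins.
    # Normalize to equal integrated flux and subtract the mean
    # spectrum, then keep only the first n_pcs eigenspectra.
    S = noisy / noisy.sum(axis=1, keepdims=True)
    mean = S.mean(axis=0)
    D = S - mean
    _, _, Vt = np.linalg.svd(D, full_matrices=False)  # rows = PCs
    pcs = Vt[:n_pcs]
    proj = D @ pcs.T                # projections onto the eigenspectra
    return proj @ pcs + mean        # truncated reconstruction

def residue(recon, clean):
    # Total residue R of equation (1), comparing the reconstructions
    # with the (equally normalized) clean spectra.
    C = clean / clean.sum(axis=1, keepdims=True)
    return np.mean((recon - C) ** 2)
\end{verbatim}
Scanning \texttt{n\_pcs} and locating the minimum of the residue reproduces the kind of curves shown in Figure 9.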
\begin{table} \caption{The seven PCA combinations used for analysis of reconstruction errors, with the notation for Figure 9.} \begin{tabular}{llc} \hline Spectra$_{\rm PCA}$ & Spectra$_{\rm REC}$ & Notation in Figure 9\\ \hline \hline Clean & Clean & a\\ Clean & $b_{\rm J}=19$ & b\\ $b_{\rm J}=19$ & $b_{\rm J}=19$ & c\\ Clean, half data set & Clean & d\\ Clean & $b_{\rm J}=22$ & e\\ $b_{\rm J}=22$ & $b_{\rm J}=22$ filtered & f\\ $b_{\rm J}=22$ & $b_{\rm J}=22$ & g\\ \end{tabular} \end{table} \begin{figure} \label{fig9} \psfig{figure=fig9.ps,width=3.4in,height=3.4in} \caption{Reconstruction errors for different sets of spectra. See Table 3 for explanation of line labels.} \end{figure} \begin{figure} \label{fig10} \psfig{figure=fig10.ps,width=3.4in,height=3.4in} \caption{The optimal number of PCs for spectral reconstruction as a function of $b_{\rm J}$.} \end{figure} So given a set of noisy spectra, it is reasonable to perform PCA on the spectra themselves, but to acknowledge the fact that only a certain number of the PCs are useful, with this number depending on the S/N of the spectra. If a set of high S/N spectra are available, using the PCs from these may extract more of the information (see lines e and g), but in this case the assumption that the noisy spectra can be well described by the PCs from the clean spectra must be made, and (as line d shows) this is not always true. Figure 10 indicates how the optimal number of PCs (i.e. the number which gives a minimum in $R$) varies with $b_{\rm J}$ when Spectra$_{\rm PCA}$ and Spectra$_{\rm REC}$ are both simulated at $b_{\rm J}$. The exact normalization of this graph depends on the specific data set being considered, including factors such as the number of spectra, and the wavelength sampling. For a very large set of data from a big redshift survey it may be expected that more PCs would be significant. \subsection[]{Wiener Filtering} An alternative method for extracting the meaningful information from the spectra is Wiener filtering in Fourier space (see Press et al. 1992 for a full description). This involves a smooth truncation of modes in a data-independent basis, as opposed to the PCA which involves a sharp truncation in a basis which is adapted to the data. In Fourier space, let $S(k)$ be the true spectrum of a galaxy; then the observed spectrum $O(k)$ is given by $$O(k)=S(k)+N(k) , \eqno (2) $$ where $N(k)$ is the Fourier transform of $n(\lambda)$ (the noise at each wavelength). Let us define a linear filter in Fourier space, $W(k)$, by $$S_{\rm r}(k)=O(k)W(k) , \eqno (3) $$ where $S_{\rm r}(k)$ is the best reconstruction of $S(k)$. By a least-squares minimization with respect to $W(k)$ we find $$W(k)=\frac{|S(k)|^2}{|S(k)|^2+|N(k)|^2} , \eqno (4) $$ where we have ignored terms involving $S(k)N(k)$ since the noise and signal are considered to be uncorrelated. From equation (4), we define $W(k)$ as the Wiener filter. For this method we must first assume an underlying signal $S(k)$ in the spectrum. To do this we have formed 5 templates, one for each of the groups given in Table 1, by taking the mean of the Normal26 spectra in that class. We then take each noisy spectrum and compare it to the 5 templates and look for the best template in the least-squares sense. This template is then used as the prior for the Wiener filtering. In addition this template matching method is a simple method of classification where a galaxy spectrum can be allocated to the group whose template it matches best.
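The filtering and template-matching steps can be sketched as follows (our own minimal Python version of equations (2)-(4), assuming the noise power is known or estimated separately):
\begin{verbatim}
import numpy as np

def wiener_filter(observed, template, noise_power):
    # Wiener filtering in Fourier space, equations (2)-(4). The
    # template (the best-fitting group mean) stands in for the unknown
    # signal S, and noise_power is |N(k)|^2 (scalar or per-mode array).
    O = np.fft.rfft(observed)
    S = np.fft.rfft(template)
    W = np.abs(S) ** 2 / (np.abs(S) ** 2 + noise_power)   # eq. (4)
    return np.fft.irfft(W * O, n=len(observed))           # eq. (3)

def best_template(observed, templates):
    # Least-squares template matching: allocate the spectrum to the
    # group whose mean spectrum fits it best.
    chi2 = [np.sum((observed - t) ** 2) for t in templates]
    return int(np.argmin(chi2))
\end{verbatim}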
We use this template matching later as a comparison to the classifications from the ANN. Wiener filtering can be seen as an alternative technique to produce a reconstruction from a noisy spectrum and it is interesting to compare the PCA reconstructions and the Wiener reconstructions. This comparison can be seen in Figure 11 for different levels of noise. It can be seen that the Wiener filtering reduces the noise, but also smoothes the signal, such that at low S/N the features are lost and only the rough continuum shape remains. In comparison the PCA reconstructions retain much more of the information in the spectrum, producing a reasonable reconstruction of the spectrum to a $b_{\rm J}$ of 22. At low S/N the noise causes some spurious effects, but many of the distinguishing spectral features, in this case $H\alpha$ and the 4000\AA \ break, are retained. In order to make this reconstruction, the noisy spectrum is assumed to be characteristic of the set of spectra used to produce the principal components. This means that the reconstruction conforms to the correlations laid down by the PCA. For some data sets it is possible that PCA would not provide a good description of the data and Wiener filtering in Fourier space would be the preferred method. Ideally a very large set of spectra is required for the PCA, such that the complete range of spectral possibilities is encompassed, and we hope to apply the techniques given here to such data sets in the future. \begin{figure*} \label{fig11} \psfig{figure=fig11.ps,width=7.0in,height=4.5in,angle=270} \caption{Comparison of spectral reconstructions for NGC3627 (Sb). a) The simulations for $b_{\rm J}$=20 and $b_{\rm J}$=22. b) Wiener filtering of the noisy spectra using a group template (see text). c) PCA reconstructions of the noisy spectra based upon 8 PCs derived from the Normal26 spectra.} \end{figure*} Referring back to Figure 9, line f is the result of first Wiener filtering the spectra before projecting onto the PCs. This removes much of the noise so that the line does not rise so rapidly, but the action of the filtering also removes much of the meaningful signal in the spectrum, so that the reconstruction is never as good as the single PC reconstruction based on the noisy data (line g). \subsection[]{Combining Wiener filtering and PCA} As an aside, we can see in the previous section that the PCA, which takes into account correlations between the features, produces a far superior reconstruction to the Wiener filter used in Fourier space, but we have also noted that PCA reconstructions of noisy data should be restricted to only the first few PCs, since the noise interferes with the later PCs. The PCA works better than the Fourier representation because the PCA axes are chosen specifically to represent the data, whereas the Fourier axes are a generalized orthogonal set and not specific to the data being considered. The Wiener filter is used to produce a smooth cutoff of the Fourier modes so that the noisy modes are reduced in weight. Such a procedure could also be used with the PCA where, instead of truncating after a determined number of PCs, a filter is used which merely reduces the weight of the later PCs. In this paper, we actually need a direct truncation of the PCs, since we want to minimize the number of inputs to the ANN (hence reducing the number of free parameters of the network), but for a general spectral reconstruction the filtered PCA is a promising idea.
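Such a soft truncation could look as follows (a speculative Python sketch of the idea only; it is not implemented in this paper, and the per-component noise variance is assumed to be estimated separately):
\begin{verbatim}
import numpy as np

def filtered_pca_reconstruct(D, pcs, noise_var):
    # D: (N, M) mean-subtracted spectra; pcs: (K, M) eigenspectra;
    # noise_var: estimated noise variance contributed to each
    # projection. Each projection is down-weighted by a Wiener-style
    # factor instead of being sharply truncated after a fixed number
    # of PCs.
    proj = D @ pcs.T                              # (N, K) projections
    sig_var = np.maximum(proj.var(axis=0) - noise_var, 0.0)
    weights = sig_var / (sig_var + noise_var)     # soft weights [0,1]
    return (proj * weights) @ pcs
\end{verbatim}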
\section[]{Spectral classification with an artificial neural network} Figures 7 and 8 show that the visual morphology and the spectrum of a galaxy are related and that this relation is embodied in the projection onto the PCs. This suggests that a useful method for the classification of galaxy spectra is to associate each spectrum with a morphological type. This would allow the morphology of galaxies to be examined at far greater distances and with less subjectivity than conventional examination of galaxy images. We have trained an ANN to assign morphological classifications to galaxies, based on their spectra as represented by the projections onto a small number of PCs. For each spectrum, the ANN produces an output classification which is a non-linear function of the inputs. The form of the function is parameterized by a set of weights which are adjusted so that the output matches the known classifications for a training set. To be precise, the effect of training the ANN is to perform a minimization across the ANN weights vector ({\bf w}) given a set of inputs ${\bf x}_i$ for the $i$th galaxy (e.g. the spectrum as represented by the PCs) and known outputs $T_i$ (e.g. the morphological group). This is done by minimizing the cost function $$ E = {1 \over 2} \sum_i [T_i - F({\bf w}, {\bf x}_i) ] ^2, \eqno (5) $$ where the non-linear function $F({\bf w}, {\bf x})$ represents the network and the summation is over the training set of spectra. \begin{figure} \label{fig12} \psfig{figure=fig12.ps,width=3.4in,height=4.0in} \caption{The percentage of ANN classifications which agree with the known morphological types for 5 and 3 classes, based upon the Normal26 spectra. Also shown is the success of $\chi^{2}$ template matching and classification based solely on PC1. The lower graph indicates the variation in $\delta(group)$ against $b_{\rm J}$ for classifications from the ANN, and PC1 alone.} \end{figure} Once the weights are set, the training is complete and the ANN can be used to classify the complete galaxy sample. A full description of the ANN as a tool for data analysis is given in Appendix C (for further detail see Lahav et al. 1996). We used a quasi-Newton ANN code with the network architecture designed to allow the projections onto the first 8 PCs (derived from a set of spectra simulated at $b_{\rm J}=19.0$) to be used as input to the net and a single output being the morphological group. Between the input and output layers we chose a single hidden layer with 5 nodes, which provides a level of non-linearity in the classification. We experimented with different numbers of nodes and hidden layers and decided upon the 8:5:1 setup as the simplest architecture which gave consistently successful results. Simpler architectures were not reliable and more complex nets failed to improve the results. The output from the ANN could then be scaled and binned to give the five classes as defined in Table 1. The training process involved weight decay, which acts as a regularisation during the training, preventing erratic variations in the weights. The quasi-Newton minimization and the use of weight decay are discussed in more detail in Appendix C. We chose to use 8 PCs based upon the results in section 4, which indicate this to be a reasonable compression of the data for the S/N levels we considered. We produced a set of 900 simulated spectra at each of 9 values of $b_{\rm J}$ between 18 and 22, resulting in a total set of 8100 spectra.
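For illustration, an 8:5:1 network with weight decay can be sketched as follows (a toy Python version of our own; the paper uses a quasi-Newton minimizer, whereas plain gradient descent is used here only to keep the sketch short):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# An 8:5:1 feed-forward network: 8 PC projections in, 5 sigmoid
# hidden nodes, one sigmoid output rescaled to the group range
# 0.5-5.5 as described in the text.
W1 = rng.normal(scale=0.1, size=(8, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.1, size=(5, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)           # hidden layer activations
    s = sigmoid(h @ W2 + b2).ravel()   # raw output in (0, 1)
    return h, s, 0.5 + 5.0 * s         # scaled output in (0.5, 5.5)

def train_step(X, T, lr=0.01, decay=1e-4):
    # One gradient-descent step on the cost of equation (5), with
    # weight decay as the regularizer.
    global W1, b1, W2, b2
    h, s, out = forward(X)
    d2 = (out - T) * 5.0 * s * (1.0 - s)   # dE/d(output pre-activation)
    gW2 = h.T @ d2[:, None] + decay * W2
    gb2 = d2.sum()
    d1 = (d2[:, None] @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ d1 + decay * W1
    gb1 = d1.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
\end{verbatim}
Repeatedly calling \texttt{train\_step} on a training subset of the simulated spectra mimics the training loop described next.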
One third of this set was then repeatedly submitted to the ANN as a training set until the error between the ANN classification and the known morphological types of the galaxies began to converge. The `trained' net was then used to classify the complete set of 8100 spectra onto a continuous scale defined by the group number G as given in Table 1. In this way the scaled output from the ANN was a single number in the range 0.5 to 5.5 and an output group was found by allocating the galaxy to the nearest group bin. These classifications could be compared with the known types of the galaxies to give a level of success at each magnitude. The ANN is trained, and spectra classified, ten times using this method to give a mean and standard deviation for the percentage of galaxies allocated to the correct group, and these results can be seen in Figure 12. In addition, if only a 3 group classification is required, the Sa, Sb and Scd groups can be combined to give one large group of spirals. The success rate for this 3 group binning is also shown in Figure 12. A further measure of success is the $\delta(group)$ statistic given by $$\delta(group)=\sqrt{{1 \over N}{\sum_{i}(G_{\rm i}-A_{\rm i})^2}} , \eqno (6) $$ where the sum is over the N spectra (in this case a set of 900), $G_{\rm i}$ is the actual group to which the galaxy belongs, as defined in Table 1, and $A_{\rm i}$ is the neural net classification on a continuous scale from 0.5 to 5.5. These results are encouraging, showing that the morphological variation in the Normal26 galaxies is well represented in their spectra and that the PCA/ANN technique is capable of extracting this information even with very noisy spectra. Figure 12 includes several other lines for comparison. Two of the lines refer to the classification success when a classification based solely upon PC1 is used. A simple relation between T-Type and PC1 is assumed (as indicated by the line on Figure 7) and the results are shown using 5 and 3 groups. $\delta(group)$ was also calculated for this method so that it can be compared with the ANN. The classification based solely on PC1 is found not to be ideal, although it is stable to high noise levels (since PC1 is the most meaningful correlation in the data and should not be greatly affected by noise). It is clear that the ANN, using more of the principal components, is capable of better classifications than the single PC result. The later components are affected by noise to a greater extent, so the ANN classification does fall as the noise level increases. The other two lines on Figure 12 refer to a classification based on $\chi^2$ template matching, from the procedure in section 4.2. This indicates a reasonable level of success, but is unable to capitalize on the extra information at high S/N which allows the ANN to refine the classifications. Since the template matching gives a discrete group output, $\delta(group)$ is not calculated for this method. \section[]{Agreement of morphological and spectral types} We now have an ANN which has been trained to relate the spectral type and morphological type of normal galaxies, using the projections onto the first eight PCs (derived from the Normal26 morphologically weighted sample). We use this to classify the Normal26 and the Unusual29 spectra without noise, to gain an indication of the agreement between spectral and morphological type. The results can be seen in Figure 13. The Normal26 spectra form a reasonable sequence, with a degree of scatter in each group.
This scatter is related to the $\delta(group)$ statistic plotted in Figure 12 (but note that $\delta(group)$ is summed over a complete morphological sample of 900 spectra at a particular noise level). Some overlap between the groups can be seen, verifying the fact that we are dealing with a sequence in galaxy type and not discrete classes. The agreement between morphological and spectral type indicated in Figure 13 substantiates the conclusions of the PCA analysis (Figures 7 and 8) which suggested strong links between spectra and morphology. It is reassuring to see that the traditional Hubble classification system is telling us about stellar and gas content in addition to the morphology of the galaxy. As expected, the Unusual29 spectra do not conform to this morphology-spectrum relation. In general, the unusual spectra which have been morphologically classified into groups 1 to 4 produce a higher spectral class from the ANN. This is due to the presence of starbursts, Seyfert nuclei and emission features which increase the `activity' in these spectra, above that of a normal galaxy for that class. The unusual spectra in morphological group 5 contain galaxies with T-Types greater than 8.5, which include irregular and peculiar types along with extreme emission galaxies. The irregular emission line objects are classified correctly as being extreme in spectral class (group 5), but the peculiar galaxies reveal a range of spectral features, such that they are classified into a variety of spectral classes. We can investigate some particular cases which have been highlighted in Figure 13, and look at the spectra and comments given in Kennicutt (1992). The morphologically peculiar galaxy NGC 3077 has been spectrally classified by the ANN as a late Sb galaxy (ANN output group 3.15) which broadly agrees with the comments given in Kennicutt (1992) that this spectrum is similar to that of a normal Sc. In contrast, the spectrum of NGC 5195 has been spectrally classified as an elliptical by the ANN (ANN output group 1.25), whereas Kennicutt (1992) comments that it resembles an old stellar population with weak emission lines, or an `E+A' galaxy. From a sample of this size, we cannot say that the ANN is telling us anything very new, but it can be seen that the ANN is producing a consistent spectral classification which broadly agrees with a visual analysis of the spectra. \begin{figure} \label{fig13} \psfig{figure=fig13.ps,width=3.4in,height=3.4in} \caption{The agreement of spectral and morphological classifications for the Normal26 spectra and the Unusual29 spectra.} \end{figure} It can clearly be seen in Figure 13 that the Unusual29 galaxies show very little agreement between morphological and spectral type, so in an observed sample, it would be useful to separate the normal from the unusual spectra. As Figure 7 shows, the Hubble sequence is clearly evident in relations between the PCs, so it is reasonable to assume that the unusual spectra do not have this uniformity; for example, they may show discrepant emission and absorption features, or indicate strong ionization from a Seyfert nucleus without the presence of a young stellar population. To test this hypothesis, we have taken the Normal26 and Unusual29 spectra, simulated them at different noise levels, but without any morphological weighting, and run the PCA routine on the entire set. We then train the neural net (using 8 PCs) to output 0 if a galaxy is one of the Normal26, or 1 if the spectrum is a member of the Unusual29.
When the ANN is trained, we ask it to reclassify the galaxies itself into these two bins. We find that at $b_{\rm J}=19.7$ about 95\% of the spectra have been classified correctly in this way. \section{Discussion} We demonstrate in this paper that the combination of the PCA technique and the ANN analysis produces a useful classification tool. The PCA is useful in three ways in our technique: (i) It allows a transformation of the data to a more useful set of axes, to reveal segregation in the sample. (ii) It allows a reconstruction of a low S/N spectrum. (iii) It provides an economic set of input parameters for the ANN. We have shown that a limited number of PCs convey the underlying information in the spectra, and beyond a certain number of PCs (defined by the noise level) the PCs do not contribute useful information. When the data is restricted to the Normal26 galaxies, the projections onto the first few PCs reveal segregation in the data which is chiefly due to the morphological variation in the spectra. If the Unusual29 spectra are included, the variance in the data is due to parameters other than purely the morphological type. We show that the PCA is best executed on the observed spectra themselves, even if they are noisy, since the PCs are then highly relevant to that data set. The alternative approach of projecting the noisy data onto PCs derived from a small set of high quality spectra can be used, but in this case, the PCs do not necessarily describe the entire range in the larger set of observed spectra. The results from the ANN suggest that a good agreement between spectral and morphological type can be attained for the Normal26 spectra and that a better classification can be made using this approach than a simple $\chi^2$ fit to a set of templates. We have also shown that classification information is not restricted to the first principal component, such that a classification based purely on this is not very successful. As expected, little agreement is seen between the morphology and spectra of the Unusual29 galaxies. One way to proceed would be to separate these spectra from the main sample, and then to analyse only the normal galaxies with reference to the Hubble sequence. We show that it is also possible to use an ANN to make such a distinction. This would leave a set of unusual spectra which could be classified separately, or analysed using an alternative method, such as cluster analysis in the PC space, or one of a variety of unsupervised data analysis methods which look for trends or groupings in the data. The ability to highlight unusual spectra would also prove useful in detecting spectra with bad sky subtraction, inaccurate fluxing, or incorrect redshift determinations, so that these could be dealt with separately. The small data set used in this analysis means we are unable to draw rigorous conclusions as to the full variation of galaxy spectra. We hope to remedy this situation in the near future with similar analyses of larger observed data sets from existing redshift surveys and spectroscopic environmental studies. We intend to use the results of this paper as the basis for classifying the spectra from the 2dF Galaxy Redshift Survey. We have shown that a five class classification is obtainable to the proposed magnitude limit of the survey ($b_{\rm J}=19.7$), but this paper also demonstrates that it is not necessarily advantageous to restrict the classification to discovering the morphological types of the galaxies.
It may be better to extend the classification into classes based entirely on the spectral type. The PCA alone can reveal such subsets in the data and will provide a powerful tool when used on a large data set. For the ANN to operate well, a number of the spectra need to be used as a training set. Several options are available, such as using morphological classifications from those images bright enough to be classified by eye, a manual analysis of the high-quality spectra or the use of population synthesis models. A recent investigation (Sodr\'e \& Cuevas 1996) has successfully related the positions of observed spectra and model spectra on the PC1/PC2 plane, and this suggests that an ANN trained with model spectra may be able to provide interesting insights into the physical factors determining galaxy spectra. \section{Conclusion} We have demonstrated a method for the classification of low S/N spectra using simulations based upon the set of galaxy spectra presented in Kennicutt (1992). We have developed the simulations to resemble spectra from the 2dF Galaxy Redshift Survey and show that reliable classifications, with more than 90\% of the normal galaxies correctly classified, can be expected to the magnitude limit of the survey ($b_{\rm J}=19.7$). This may be optimistic, since our small data set does not encompass the full variation in galaxy spectra, but our results strongly suggest that the methods in this paper will provide an interesting analysis technique when the 2dF Galaxy Survey spectra are available. We have explored the effect of noise on the Principal Component Analysis of spectra and demonstrate that an ANN is a useful tool for the classification of noisy spectral data. We show that the ANN classification is more successful than either a $\chi^{2}$ template matching approach or a classification based solely on the projection onto the first principal component. We have also investigated the agreement of spectral and morphological type and discussed a method to separate normal from unusual galaxy spectra. \section*{Acknowledgments} The authors would like to thank C. Bailer-Jones, R.S. Ellis, P.J. Francis, J.S. Heyl, M.J. Irwin, A. Naim, L. Sodr\'e, Jr., M.C. Storrie-Lombardi, T. von Hippel, and the 2dF Galaxy Survey collaborators for useful discussions concerning spectral classification. We would also like to thank B. Ripley for making the ANN code available.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The lack of evidence of supersymmetry at the LHC has spurred additional interest in light dark matter (DM) candidates such as axions, dark photons, and other hidden sector entities~\cite{Essig2012, LightDarkSectors, DarkSectors, Nelson2011, Holdom1986}. The search for these hypothesized interactions requires detectors with sub-eV energy resolution and threshold, which has motivated R\&D efforts to build detectors with single charge detection capabilities~\cite{Romani2018, Tiffenberg_17prl_Sensei}. Using these detectors to set new DM constraints or to make a discovery requires accurate detector models and simulations. These models and simulations must include the detector properties (crystal orientation, intrinsic purity, operating conditions, etc.), as well as the effects of known backgrounds (radioactivity, leakage current, etc.). Recently developed SuperCDMS HVeV detectors provide the sensitivity necessary for modern experiments to search for light dark matter. The HVeV detector makes use of the Neganov-Trofimov-Luke (NTL) effect~\cite{Neganov1985, Luke1988} by applying a bias voltage between opposite faces of a high-purity Si substrate. This voltage biasing scheme converts ionization energy created by a single event into an amplified phonon signal that is then read out using superconducting sensors on one face of the detector. Early experiments showed that HVeV detectors provide charge quantized output signals when illuminated with 1.91 eV photons~\cite{Romani2018}. While the observed event histogram peaks corresponding to integer numbers of $e^-h^+$\ pairs detected were Gaussian, sub-gap infrared photons (SGIR) added significant ``fill-in'' between the quantized peaks. The same detector was later run with an improved fiber optic setup and IR-absorbing windows that confirmed the initial SGIR hypothesis~\cite{CDMS2018_DMSearch}. But even with the improved optical system there remained an estimated 3\% ``fill-in'' between quantized energy peaks that was attributed to a combination of charge trapping and impact ionization in the Si substrate. Charge trapping occurs when, e.g., an electron (or hole) falls into a vacancy and gets stuck; this reduces the total number of event related electrons (or holes) traversing the crystal, leading to low energy tails on the histogram peaks. Impact ionization occurs when a charge moving through the crystal has sufficient energy to liberate an additional charge that is loosely bound in the crystal; this process increases the total number of charges traversing the detector and produces high energy tails on the histogram peaks. This paper describes experiments performed with this detector to study charge leakage, charge trapping and impact ionization probabilities for HVeV detectors based on recently developed first-order models~\cite{Ponce2019}. The experiments described below used a SuperCDMS silicon HVeV detector. The detector consists of a $1{\times}1{\times}0.4$~cm$^3$ high-purity Si crystal (0.93~g) patterned with quasiparticle-trap-assisted electro-thermal-feedback transition-edge sensors (QETs), and an Al parquet pattern \cite{Romani2018}. The detector was cooled to 30~mK in a dilution refrigerator and the QET sensors were voltage biased at $\sim$22\% of their normal state resistance. The bias conditions corresponded to a sensor bias power of 0.17 pW for stable operation within the tungsten TES superconducting-to-normal transition. 
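For orientation, the NTL amplification underlying this scheme can be summarized by the standard relation (written here for reference, with notation of our own choosing)
\begin{eqnarray*}
E_{\rm phonon} = E_{\rm recoil} + N_{eh}\, e\, |\Delta V|,
\end{eqnarray*}
where $N_{eh}$ is the number of electron-hole pairs drifted across the bias $\Delta V$. A single 1.91~eV photon that creates one $e^-h^+$\ pair at a 140~V bias therefore produces $\approx 1.91 + 140 \approx 142$~eV of phonon energy, and each additional pair adds another $\sim$140~eV, which is what makes the single-pair quantization resolvable.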
A single mode fiber optic was used to illuminate the Al parquet side of the detector with 650~nm (1.91~eV) photons from a pulsed laser at an adjustable repetition rate. Coarse control of the laser intensity at the detector was achieved using combinations of external optical attenuators (OA) at room temperature. Fine control of the intensity was achieved by changing the laser output power and pulse width. The HVeV Si substrate was ``neutralized'' at the start of the experiment by grounding the metal films on both sides of the detector (QET sensors and Al parquet) and pulsing the laser at 200~Hz with a relatively high intensity ($\sim$3$\times$10$^{16}$ photons per pulse) for $\sim$16 hours. Physics data were collected using a fixed laser pulse width of 200~ns, $-$80~dB OA and a combination of two Si crystal bias voltages: $\pm$140~V, and four laser intensities: ``zero'' (no photons, 0.5~Hz, 20~$\mu$W), ``high'' ($\sim$0.5 photons per pulse, 200~Hz, 2000~$\mu$W), ``medium'' ($\sim$0.05 photons per pulse, 200~Hz, 200~$\mu$W), and ``low'' ($\sim$0.025 photons per pulse, 2000~Hz, 20~$\mu$W) for a total of eight configurations. At each of the two Si crystal biases used, the laser intensity was cycled in a specific order and time distribution, given by: 9.1\% zero, 30.3\% high, 30.3\% medium, and 30.3\% low intensity. Prior to each acquisition (data collected using a single configuration during one cycle), the Si crystal was pre-biased at +($-$)160~V for one minute, followed by reducing the crystal bias to +($-$)140~V for one minute. Data were recorded in a semi-continuous mode at a sample rate of 625~kHz, using a trace length of $\sim$1.68~s (2$^{20}$ samples) triggered by the internal TTL of the laser. We purposefully discarded the one laser-induced event in each ``zero'' intensity trace. A total live-time of 15.4 (9.6) hours before cuts was collected at a detector polarity of +($-$)140~V over 27 ($<$18) hours of real-time. An aggressive raw-time cut was applied to remove all traces that contained a high-energy event. This was needed to avoid processing real signals that ride on the tail of a high-energy pulse or that get distorted in the electronics because of a DC voltage baseline shift in the QET readout caused by the energetic event. The raw-time cut reduced the total live-time by $\sim$70-75\%. \begin{figure}[ht!] \begin{center} \includegraphics[height=3in]{Arrival_Time_Histogram.png} \caption{\footnotesize (color online) (\textit{Top}) Scatter plot of event arrival times relative to the laser pulse trigger. Events in which photons from the laser were absorbed show up in a cluster (green highlight). Events outside this range correspond to laser pulses where no photons were absorbed in the detector. The gray regions mark the events used to study the leakage rates in the background. (\textit{Bottom}) Histogram of the \textit{top} scatter plot showing how the first and last 16~$\mu$s have edge effects due to the search window. The non-highlighted region was excluded in this analysis.} \label{Arrival_Time_Histogram} \end{center} \end{figure} An optimal filter (OF) was generated from a 1~ms pulse template and noise PSD derived from each acquisition. The OF was inverse Fourier transformed to carry out the analysis in the time domain by convolving the transformed OF with the full trace to get an OF amplitude as a function of time. The laser TTL signal was used to identify ``laser events''.
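A minimal sketch of this time-domain OF amplitude estimate is given below (Python; the array names and the overall normalization are ours, and the exact scale factors depend on the FFT conventions used):
\begin{verbatim}
# Sketch of the time-domain optimal filter (OF) amplitude estimate.
# Assumes: `template` is the unit-amplitude pulse template zero-padded to
# the trace length, `noise_psd` the noise PSD on the matching rfft grid,
# and `trace` one 2**20-sample trace.
import numpy as np

def of_amplitude_vs_time(trace, template, noise_psd):
    """OF amplitude as a function of time shift, via FFT convolution."""
    s = np.fft.rfft(template)
    phi = np.conj(s) / noise_psd               # frequency-domain OF
    norm = np.sum(np.abs(s) ** 2 / noise_psd)  # unit response to template
    return len(trace) * np.fft.irfft(phi * np.fft.rfft(trace)) / norm

def largest_pulse(amps, trigger, half_window):
    """Largest OF amplitude within +/- half_window samples of a trigger."""
    lo = trigger - half_window
    shift = lo + int(np.argmax(amps[lo:trigger + half_window]))
    return amps[shift], shift - trigger        # amplitude, arrival time
\end{verbatim}
At the 625~kHz sampling rate, the $\pm$80~$\mu$s window described next corresponds to a \verb|half_window| of 50 samples.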
We associated the largest amplitude pulse within $\pm$~80~$\mu$s centered on the laser TTL trigger as the time-shifted OF amplitude and the corresponding position as the relative arrival time for the ``laser event'' (regardless of whether a true energy deposition occurs within that time period). Pulse pile-up was removed by applying a flat $\chi^2$ cut, which had a passing fraction of 99\% at the quantized laser peaks. There was a slight drift in detector gain of $\sim -5$\% over the course of 27 hours of real-time for the +140~V crystal bias data. The detector stability over long periods of time enabled us to use the high-intensity laser data sets to calibrate all data sets in the same cycle: zero, high, medium, low. A quadratic calibration of the form $ax(1 + bx)$ was performed using the centroids from Gaussian fits to the 1, 2, and 3 $e^-h^+$\ pair peaks. The non-linearity, $b$, was on the order of 3\%, which was consistent with prior measurements using more peaks at higher intensity~\cite{CDMS2018_DMSearch}. \begin{figure}[ht!] \begin{center} \includegraphics[height=3in]{Background.png} \caption{\footnotesize (color online) (\textit{Top}) Background spectra (multi-colored lines) for the eight configurations and the fit for the high laser intensity with $-$140~V substrate bias. The spectra were normalized by the reduced total live-time. (\textit{Middle}) Residuals for the fit normalized by the counting statistics of each bin. Bins with zero counts were artificially set to zero. (\textit{Bottom}) The measured bulk (blue) and surface (green) leakage probabilities at +140~V (circles) and $-$140~V (diamonds) are shown to the right of the solid line; the corresponding weighted averages and standard deviations are shown to the left of the line.} \label{bkgd} \end{center} \end{figure} Figure~\ref{Arrival_Time_Histogram} (top) shows the scatter plot of calibrated time-shifted OF amplitudes versus relative arrival times for the +140~V bias high-intensity data. Events where laser photons were absorbed cluster between $-$16 and 16~$\mu$s (green shade). Only noise/leakage events appeared outside the green shaded region. The sudden increase in noise/leakage events in the first and last 16~$\mu$s of the 160~$\mu$s-wide window (Figure~\ref{Arrival_Time_Histogram}, bottom) was attributed to leakage events outside the search window. These events were discarded from the main analysis. This cut disproportionately affects 0~$e^-h^+$\ pair event statistics, which was accounted for by adding a fit parameter to the 0~$e^-h^+$\ pair amplitude. The events in the gray region of Figure~\ref{Arrival_Time_Histogram} were used to generate the corresponding background spectrum for each configuration. Events in the combined (green + gray) shaded regions (i.e., a 128~$\mu$s search window) were used to determine the impact ionization and charge trapping probabilities for this detector.
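For reference, the quadratic calibration step described above can be sketched as follows (Python; the centroid values are hypothetical placeholders, not our measured values):
\begin{verbatim}
# Sketch of the quadratic energy calibration a*x*(1 + b*x), anchored on
# Gaussian centroids of the 1, 2 and 3 e-h pair peaks. Values are made up.
import numpy as np
from scipy.optimize import curve_fit

def calib(x, a, b):
    return a * x * (1.0 + b * x)

centroids = np.array([0.97, 1.92, 2.84])  # hypothetical raw OF amplitudes
pairs = np.array([1.0, 2.0, 3.0])         # true peak positions (pairs)
(a, b), _ = curve_fit(calib, centroids, pairs, p0=(1.0, 0.0))
# calibrated = calib(raw_amplitudes, a, b), in units of e-h pairs;
# multiply by the single-pair phonon energy to express the axis in eVt.
\end{verbatim}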
We model our leakage current background, $B(x)$, as a noise peak with a continuous distribution of bulk leakage and quantized surface leakage~\cite{Ponce2019}: \begin{eqnarray} & &B(x) = \frac{\textrm{L}_0 N e^{-\frac{(x - c_0)^2}{2\sigma^2}}}{\sqrt{2\pi\sigma^2}}\left(\frac{\left(1 - \mathrm{erf}\left(\frac{x - c_0}{\sqrt{2\sigma^2}}\right)\right)}{2}\right)^{N-1}\nonumber\\ & &+ \frac{\textrm{L}_{\textrm{Surf}}}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x - c_1)^2}{2\sigma^2}}\nonumber\\ & &+ \frac{\textrm{L}_{\textrm{Bulk}}}{2(c_1 - c_0)}\left(\mathrm{erf}\left(\frac{x - c_0}{\sqrt{2\sigma^2}}\right) - \mathrm{erf}\left(\frac{x - c_1}{\sqrt{2\sigma^2}}\right)\right) \end{eqnarray} where $N$ is the effective number of independent measurements within the OF search window, $\sigma$ is the detector resolution, $\textrm{L}_{\textrm{Bulk}}$ is the bulk leakage probability, $\textrm{L}_{\textrm{Surf}}$ is the surface leakage probability, $\textrm{L}_0 = (1 - \textrm{L}_{\textrm{Bulk}} - \textrm{L}_{\textrm{Surf}})$, and $c_{0}$ ($c_{1}$) is the centroid of the quantized 0$^{th}$ (1$^{st}$) $e^-h^+$\ pair peak. The inclusion of $c_0$ in the first term was due to an offset introduced by the time-shifted OF. \begin{figure}[ht!] \begin{center} \includegraphics[height=3in]{Sample_Imp_Trap_Fitting.png} \caption{\footnotesize (color online) (\textit{Top}) Spectrum of laser-induced events (green) after cuts ($\sim$4 minutes), with analytical fit (black line) that includes charge leakage, charge trapping and impact ionization. (\textit{Bottom}) Residuals normalized by the bin counting statistics. Bins with zero counts were artificially set to zero.} \label{Sample_Imp_Trap_Fitting} \end{center} \end{figure} The observed background as a function of eVt (the total phonon energy in eV produced by an event) for all eight configurations is shown in Figure~\ref{bkgd}. The spectra were normalized by the reduced total live-time (number of events times the search window length of 128~$\mu$s). No significant change in the background was observed throughout the full 48 hour period of data taking, as evidenced by the nominally identical profiles shown in Figure~\ref{bkgd} (top). Figure~\ref{bkgd} (middle) shows that the residuals (gray circles) for the $-$140~V high-intensity data fit (top panel, black curve) lie mostly within 2$\sigma$ of the bin uncertainty, indicating a good fit to our model. Bins with zero counts were artificially set to zero. Figure \ref{bkgd} (bottom) shows the fitted bulk (blue) and surface (green) leakage probabilities for the two crystal bias polarities: +140~V (circles) and $-$140~V (diamonds). The bulk leakage data at $\pm$140~V varied over a narrow range, with the zero-intensity values significantly lower than the other fits. This discrepancy may be due to the laser TTL signal introducing electronic cross talk; however, much effort was invested to mitigate such effects and no cross talk was observed when averaging over 100 traces. We observed a weighted bulk leakage event probability (blue points, left of solid black line) of 0.132~$\pm$~0.023\% at +140~V and 0.113~$\pm$~0.022\% at $-$140~V and concluded that the bulk leakage does not depend on the crystal bias polarity. \begin{figure}[ht!] \begin{center} \includegraphics[height=3in]{Impact_Trapping_Parameters.png} \caption{\footnotesize (color online) (\textit{Top}) Charge trapping and (\textit{bottom}) impact ionization probabilities for all acquisitions taken over the course of two days (right of the solid black line).
The weighted average and standard deviations are shown to the left of the black solid line, with the individual $\pm$140~V data plotted to the right of the solid line, separated by the dashed black line. Values were fitted while holding the bulk and surface leakage probabilities fixed using the background spectrum for each crystal bias and laser intensity (Figure~\ref{bkgd}, bottom, left of solid line).} \label{itparam} \end{center} \end{figure} The surface leakage data at +140~V were statistically equivalent, while the $-$140~V data varied with some overlapping uncertainties. We observed a weighted surface leakage event probability (green points, left of solid black line) of 0.087~$\pm$~0.001\% for the +140~V data and 0.101~$\pm$~0.007\% for the $-$140~V data. The difference indicates a very small dependence on crystal polarity, although this may also be indicative of the lower statistics for the $-$140~V data. The bulk and surface leakage terms for each configuration (right side of solid line in bottom plot) were used as fixed parameters in the later fit of the impact ionization and trapping probabilities. We used the model outlined in Equation~3 of Ponce et al.~\cite{Ponce2019} and assumed that the interaction of a single $e^-h^+$\ pair with the crystal has some constant probability of inducing impact ionization (effectively generating an additional charge), charge trapping (effectively removing a charge), or leaving the original charges to move through the crystal unhindered (resulting in a quantized signal). In our analysis, the individual peaks $^{(m)}h(x)$ were convolved with the detector Gaussian response, scaled by the appropriate Poisson probabilities for the laser intensity, and summed together with the background. The fitted model was \begin{equation} M(x) = \kappa{}P_0(\lambda)\cdot{}B(x) + \sum_{m = 1}^{m_{max}}P_m(\lambda)(^{(m)}h\circledast{}G(\sigma))(x) \end{equation} where $\kappa$ accounts for the relative arrival time cut, $G(\sigma)$ is the normalized Gaussian function, and $P_m(\lambda)$ is the Poisson probability for peak ``m'' with an average of $\lambda$. A sample fit for a +140~V high-intensity data set is shown in Figure \ref{Sample_Imp_Trap_Fitting}. The residuals show several points outside the 2$\sigma$ threshold, which may be indicative of pulse pile-up very close to the laser TTL trigger. A time sequence of the measured charge trapping and impact ionization probabilities for all acquisitions is shown to the right of the vertical black line in Figure~\ref{itparam}. The wide measurement distributions and large uncertainties for the medium and low laser intensity data come from the inherently poor statistics. The weighted averages and standard deviations were in agreement and no dependence on the system configuration was observed. Thus, the probabilities for both holes and electrons to traverse the crystal were nominally equal. Combining all the data, we measure a charge trapping probability of 0.713~$\pm$~0.093\% and an impact ionization probability of 1.576~$\pm$~0.110\%. A 0.93 gram SuperCDMS HVeV detector was operated in a semi-continuous mode and used to demonstrate the use of a time-domain OF to analyze data. Triggered pulses could be identified based on the OF-estimated arrival time to within 32~$\mu$s. Data from outside this 32~$\mu$s window were used to obtain a background spectrum that was modeled to first order as the combination of a continuous bulk leakage current and a quantized surface leakage current. The model was found to be in good agreement with the full data set.
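A numerical sketch of the background model of Eq.~(1), and of how it enters the Poisson-weighted spectral model of Eq.~(2), is given below (Python; the energy axis is taken in units of $e^-h^+$\ pairs, and the peak shapes $^{(m)}h$ are idealized as pure quantized peaks, i.e., with trapping and impact ionization switched off, so this is a scaffold rather than the full first-order model of Ref.~\cite{Ponce2019}):
\begin{verbatim}
# Sketch of the background model, Eq. (1), and the spectral model, Eq. (2),
# with idealized quantized peaks. All parameter values are placeholders.
import numpy as np
from scipy.special import erf
from scipy.stats import poisson

def background(x, L_surf, L_bulk, sigma, c0, c1, n_eff):
    """Eq. (1): noise peak + quantized surface + continuous bulk leakage."""
    L0 = 1.0 - L_bulk - L_surf
    g0 = np.exp(-(x - c0)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    g1 = np.exp(-(x - c1)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    noise = (L0 * n_eff * g0
             * ((1 - erf((x - c0) / np.sqrt(2 * sigma**2))) / 2)**(n_eff - 1))
    bulk = L_bulk / (2 * (c1 - c0)) * (erf((x - c0) / np.sqrt(2 * sigma**2))
                                       - erf((x - c1) / np.sqrt(2 * sigma**2)))
    return noise + L_surf * g1 + bulk

def model(x, lam, kappa, sigma, m_max=5, **bkg):
    """Eq. (2): Poisson-weighted peaks convolved with G(sigma) + background."""
    out = kappa * poisson.pmf(0, lam) * background(x, sigma=sigma, **bkg)
    for m in range(1, m_max + 1):
        gm = np.exp(-(x - m)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
        out += poisson.pmf(m, lam) * gm   # idealized peak at m pairs
    return out
\end{verbatim}
In the real fit, the delta-function peaks are replaced by the first-order trapping and impact-ionization shapes of Ref.~\cite{Ponce2019}, which add the low- and high-energy tails to each quantized peak.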
A simple impact ionization and charge trapping model for a single $e^-h^+$\ pair~\cite{Ponce2019} was then used to fit the detector response to six setup configurations (three non-zero laser intensities, two crystal bias polarities). By fixing the bulk and surface leakage parameters, the impact ionization and charge trapping probabilities for the HVeV detector were successfully measured. This work was supported in part by the U.S. Department of Energy and by the National Science Foundation. This document was prepared by using resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. SLAC is operated under Contract No. DE-AC02-76SF00515 with the U.S. Department of Energy. The authors are also especially grateful to the staff of the Varian Machine Shop at Stanford University for their assistance in machining the parts used in this experiment. \bibliographystyle{apsrev4-1-JHEPfix-autoEtAl}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec: intro} The successful detection of gravitational waves supplied the last missing piece in the experimental verification of general relativity \cite{Abbott1,Abbott2,Abbott3,Abbott4,Abbott5}. Therefore, general relativity has become the most successful theory of gravity so far. However, there are still many theoretical problems that cannot be explained by general relativity, such as how to explain the hierarchy between the Planck scale and the electroweak scale \cite{NimaArkani-Hamed,LRandall1,LRandall2} and how to quantize gravity \cite{A.Shomer}. In addition, the phenomena observed in experiments, such as the accelerated expansion of the Universe \cite{A.G.Riess} and the flat rotation curves of galaxies \cite{V.C.Rubin}, cannot be explained by general relativity. For these reasons, many modified theories of gravity have been considered \cite{NimaArkani-Hamed,LRandall1,LRandall2,Brans.Dicke,Horndeski,H.A.Buchdahl,C.Skordis,Timothy Clifton} in the hope of answering the questions that general relativity cannot. A well-defined modified theory of gravity should be stable. Ostrogradsky's research pointed out that when a Lagrangian contains higher-order (second order or higher) time derivatives of dynamical variables, its corresponding Hamiltonian is usually unbounded from both above and below \cite{M.Ostrogradsky,R.P.Woodard}. It is generally believed that this unbounded Hamiltonian will lead to an instability of the theory called the Ostrogradsky instability \cite{R.P.Woodard,H.Motohashi,A.Ganz}. Therefore, a modified gravity theory with the Ostrogradsky instability is generally considered to be pathological and should be avoided. Adding an additional scalar field is one way to modify gravity. Theories obtained in this way are called scalar-tensor theories. In Ref. \cite{Z.Chen}, a tentative indication of scalar transverse gravitational waves was reported. If this is further confirmed in the future, it will strongly suggest that the gravity theory describing our world should have a scalar degree of freedom. In order to avoid the Ostrogradsky instability, we give priority to those theories that yield second-order field equations. In the metric formalism, the most general scalar-tensor theory that yields second-order field equations is Horndeski theory \cite{Horndeski}. However, Refs. \cite{P.Creminelli,C.D.Kreisch,Y.Gong} pointed out that the observation of the speed of tensor gravitational waves in the Universe by the gravitational wave event GW170817, together with the gamma ray burst GRB170817A, severely constrains the possible parameter space of metric Horndeski theory. Specifically, GW170817 and GRB170817A require the tensor gravitational wave speed $c_g$ to satisfy \cite{B.P.Abbott000,B.P.Abbott111} \begin{eqnarray} \label{cg} -3\times10^{-15} \leq \frac{c_g}{c}-1 \leq 7\times10^{-16}. \end{eqnarray} This shows that, to very high precision, the tensor gravitational wave speed in the Universe is equal to the fundamental constant $c$ (the speed of light). Considering that the cosmic background is also evolving during gravitational wave propagation, the most economical and natural assumption suggested by this observation seems to be the following: in any cosmological background, tensor gravitational waves always propagate at the speed of light.
However, the only subclasses of metric Horndeski theory satisfying this assumption are~\cite{P.Creminelli,R.Kase} \begin{eqnarray} S=\int d^4x \sqrt{-g} \big[ K(\phi,X)-G_{3}(\phi,X){\Box}\phi+G_{4}(\phi)R \big]. \end{eqnarray} This constraint limits the application of scalar-tensor theories. Therefore, we expect to find scalar-tensor theories beyond the metric Horndeski framework. There are also many studies using GW170817 to constrain modified gravity theories~\cite{Y.Gong1,A.Gumrukcuoglu,Y.Gong2,Y.Cai,J.Oost,Y.Gong3,L.Shao}. Further analysis shows that not all higher derivative theories have the Ostrogradsky instability. A higher derivative theory without the Ostrogradsky instability is required to satisfy the degeneracy condition \cite{H. Motohashi2,A.Ganz,D.Langlois}. In the metric formalism, the scalar-tensor theory with higher derivatives but without the Ostrogradsky instability is called degenerate higher-order scalar-tensor (DHOST) theory \cite{D. Langlois,J.BenAchour,D.Langlois,J.Gleyzes,M.Crisostomi1,T. Kobayashi}. In addition to DHOST theory, considering the teleparallel framework is another way to go beyond the metric Horndeski framework. In teleparallel Horndeski theory, established by Bahamonde \textit{et al}., metric Horndeski theory is included in the teleparallel framework as one of many subclasses \cite{S.Bahamonde1,S.Bahamonde2}. Considering scalar-tensor theories in the Palatini formalism may be another way to go beyond the metric Horndeski framework. There have been some works on scalar-tensor gravity in the Palatini formalism \cite{U.Lindstrom,F.Bauer,M.Li,T.Markkanen,L.Jarv,K.Aoki1,A.Kozak,K.Shimada,R.Jinno,K.Aoki2,Helpin,Helpin2,M.Kubota,Y.Dong}. Cosmology in Palatini-Horndeski theory is different from that in metric Horndeski theory, and their stability properties are different \cite{Helpin}. Unlike in metric Horndeski theory, in some regions of parameter space the connection of Palatini-Horndeski theory introduces new degrees of freedom \cite{Helpin}. In addition, the polarization modes of gravitational waves in Palatini-Horndeski theory are different from those in metric Horndeski theory \cite{Y.Dong}. Thus, it seems that Palatini-Horndeski theory may be genuinely different from metric Horndeski theory. However, it is necessary to further investigate the possible parameter space of Palatini-Horndeski theory. In this paper, we will find the possible subclasses of Palatini-Horndeski theory that satisfy the following condition: the speed of tensor gravitational waves is the speed of light in any spatially flat cosmological background. In Sec. \ref{sec: 2}, we will review Palatini-Horndeski theory. In Sec. \ref{sec: 3}, we will discuss the Ostrogradsky instability in Palatini-Horndeski theory for the evolution of a spatially flat Universe. In Sec. \ref{sec: 4}, we will obtain the speed of tensor gravitational waves in the spatially flat cosmological background and constrain the parameter space. The conclusion will be given in Sec. \ref{sec: 5}. We will use natural units in this paper. Greek alphabet indices $(\mu,\nu,\lambda,\rho)$ and Latin alphabet indices $(i,j,k,l)$ range over spacetime indices $(0,1,2,3)$ and space indices $(1,2,3)$, respectively. \section{Palatini-Horndeski theory} \label{sec: 2} In the Palatini formalism, the connection is independent of the metric. Therefore, it is necessary to take the variations of the action with respect to the metric and the connection independently.
The Riemann tensor $\tilde{R}^{\mu}_{\ \nu\rho\sigma}$ and the Ricci tensor $\tilde{R}_{\mu\nu}$ in the Palatini formalism are defined as \begin{eqnarray} \label{Riemann} \tilde{R}^{\rho}_{~\mu\lambda\nu} &=&\partial_{\lambda}\Gamma^{\rho}_{\mu\nu} -\partial_{\nu}\Gamma^{\rho}_{\mu\lambda} +\Gamma^{\rho}_{\sigma\lambda}\Gamma^{\sigma}_{\mu\nu} -\Gamma^{\rho}_{\sigma\nu}\Gamma^{\sigma}_{\mu\lambda}, \\ \label{Ricci} \tilde{R}_{\mu\nu}~~ &=&\partial_{\lambda}\Gamma^{\lambda}_{\mu\nu} -\partial_{\nu}\Gamma^{\lambda}_{\mu\lambda} +\Gamma^{\lambda}_{\sigma\lambda}\Gamma^{\sigma}_{\mu\nu} -\Gamma^{\lambda}_{\sigma\nu}\Gamma^{\sigma}_{\mu\lambda}. \end{eqnarray} Furthermore, we assume that the connection is torsion-free: $\Gamma^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\nu\mu}$. The action of Palatini-Horndeski theory is defined as follows: \begin{eqnarray} \label{action} S\; = \; \int d^4x \sqrt{-g} \;\Bigl(\mathcal{L}_{2}+\mathcal{L}_{3}+\mathcal{L}_{4}+\mathcal{L}_{5}\Bigr) \label{actioneq}, \end{eqnarray} where \begin{eqnarray} \label{L2} \mathcal{L}_{2}&=&K(\phi,X), \\ \label{L3} \mathcal{L}_{3}&=&-G_{3}(\phi,X)\tilde{\Box}\phi, \\ \label{L4} \mathcal{L}_{4}&=&G_{4}(\phi,X)\tilde{R} +G_{4,X}(\mathnormal{\phi},X) \left[\left({\tilde{\Box}\phi}\right)^{2} -\left(\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\phi\right) \left(\tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}\phi\right)\right], \\ \label{L5} \nonumber \mathcal{L}_{5}&=&G_{5}(\phi,X)\left(\tilde{R}_{\mu\nu} -\frac{1}{2}\mathnormal{g}_{\mu\nu}\tilde{R}\right) \tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}\phi \\ \nonumber &-&\frac{1}{6}G_{5,X}(\phi,X) \left[ \left(\tilde{\Box}\phi\right)^{3}-3\tilde{\Box}\phi \left(\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\phi\right) \left(\tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}\phi\right)\right. \\ &+&2\left.\left(\tilde{\nabla}^{\lambda}\tilde{\nabla}_{\rho}\phi\right) \left(\tilde{\nabla}^{\rho}\tilde{\nabla}_{\sigma}\phi\right) \left(\tilde{\nabla}^{\sigma}\tilde{\nabla}_{\lambda}\phi\right) \right]. \end{eqnarray} Here, $\tilde{\Box}=\tilde{\nabla}^{\mu}\tilde{\nabla}_{\mu}$, $X=-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi$, and $ K, G_{3}, G_{4}$ and $G_{5}$ are real analytic functions of the variables $\phi$ and $X$. To distinguish them from the corresponding quantities in the metric formalism, we add a tilde to the quantities defined in the Palatini formalism. A comma in a subscript denotes a partial derivative, e.g., $G_{4,X}\equiv \partial G_4 /\partial X$. In the Palatini formalism, the compatibility condition $\tilde{\nabla}_{\lambda}g_{\mu\nu}=0$ is generally no longer valid. Therefore, the definitions of $\tilde{\Box}\phi$, $\tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}\phi$ and $\tilde{\nabla}^{\mu}\tilde{\nabla}_{\nu}\phi$ in the action (\ref{action}) are not unique in the Palatini formalism \cite{Helpin,M.Kubota}. In this paper, we take the definitions in Ref. \cite{Y.Dong}: \begin{eqnarray} \nonumber \label{nabla define} \tilde{\Box}\phi &=& g^{\mu\nu}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\phi, \\ \tilde{\nabla}^{\mu}\tilde{\nabla}^{\nu}\phi &=& g^{\mu\rho}\tilde{\nabla}_{\rho}(g^{\nu\sigma}\tilde{\nabla}_{\sigma}\phi), \\ \tilde{\nabla}^{\mu}\tilde{\nabla}_{\nu}\phi &=& g^{\mu\rho}\tilde{\nabla}_{\rho}\tilde{\nabla}_{\nu}\phi. \nonumber \end{eqnarray} \section{The Ostrogradsky instability} \label{sec: 3} Since the action (\ref{action}) of Palatini-Horndeski theory contains second-order time derivatives of the scalar field $\phi$, one may think that Palatini-Horndeski theory has the Ostrogradsky instability.
However, in this section, an analysis of the degeneracy condition in the case of the evolution of a spatially flat Universe shows that it cannot be taken for granted that Palatini-Horndeski theory must have the Ostrogradsky instability. If a theory is Ostrogradsky stable, then it must be Ostrogradsky stable in the special case of the evolution of a spatially flat Universe. For a spatially flat Universe, the metric $g_{\mu\nu}$ is the spatially flat Friedmann-Robertson-Walker (FRW) metric, and the connection $\Gamma^{\lambda}_{\mu\nu}$ and the scalar field $\phi$ are functions of time only: \begin{eqnarray} \label{background} ds^2=-N(t)^2 dt^2+a(t)^2\delta_{ij}dx^idx^j, \quad \Gamma^{\lambda}_{\mu\nu}=\Gamma^{\lambda}_{\mu\nu}(t), \quad \phi=\phi(t). \end{eqnarray} We consider a connection with spatial isotropy, that is, the components of the connection $\Gamma^{\lambda}_{\mu\nu}$ are invariant under spatial rotations. This condition further restricts the values of the connection components. Specifically, under a spatial rotation, the components of the connection transform in the same way as those of a third-order tensor, which requires that the connection $\Gamma^{\lambda}_{\mu\nu}$ satisfies \cite{A.Minkevich,D.Iosifidis} \begin{eqnarray} \label{Gamma background} &\Gamma^{0}_{0i}=\Gamma^{i}_{00}=0, \quad \Gamma^{0}_{ij}=\Gamma^{0}_{11} \delta_{ij}, \quad \Gamma^{i}_{0j}=\Gamma^{1}_{01} \delta^{i}_{j}, \nonumber \\ &\Gamma^{i}_{jk} =\Gamma^{1}_{23} \delta^{il} \varepsilon_{ljk} =\Gamma^{1}_{23} \delta^{il} \varepsilon_{l(jk)} =0. \end{eqnarray} Here, $\delta_{ij}$ is the Kronecker delta, and $\varepsilon_{ijk}$ is the Levi-Civita tensor. It can be seen that among the components of the connection, only $\Gamma^{0}_{00}, \Gamma^{0}_{11}=\Gamma^{0}_{22}=\Gamma^{0}_{33}$ and $\Gamma^{1}_{01}=\Gamma^{2}_{02}=\Gamma^{3}_{03}$ can be nonzero. One may want to set $N(t)=1$ at the level of the action, and then obtain the evolution equations by varying the variables $\left(a,\phi,\Gamma^{0}_{00},\Gamma^{0}_{11},\Gamma^{1}_{01}\right)$. However, this would miss one equation \cite{H.MotohashiT.SuyamaK.Takahashi}. Thus, in order to obtain the complete evolution equations, $N$ should be kept in the action. By substituting Eqs.
(\ref{background}) and (\ref{Gamma background}) into the action (\ref{action}), we obtain the action that describes the evolution of a spatially flat Universe: \begin{eqnarray} \label{Universe action} S\; = \; \int dt\int d^3x~ L\left(\ddot{\phi},\dot{\phi},\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right), \end{eqnarray} where \begin{eqnarray} \label{L} \frac{L}{a^3 N}&=& K +\frac{3G_3\Gamma^{0}_{11}\dot{\phi}}{a^2} -\frac{G_3}{N^2}\left(\Gamma^{0}_{00}\dot{\phi}-\ddot{\phi}\right) \nonumber \\ &+& \frac{3G_4}{a^2}\left( \Gamma^{0}_{00}\Gamma^{0}_{11}+\Gamma^{0}_{11}\Gamma^{1}_{01}+\dot{\Gamma}^{0}_{11} \right) -\frac{3G_4}{N^2}\left( \Gamma^{0}_{00}\Gamma^{1}_{01}-{\Gamma^{1}_{01}}^2-\dot{\Gamma}^{1}_{01} \right) \nonumber \\ &+& \frac{3G_5\Gamma^{1}_{01}\dot{\phi}}{2a^2N^4} \left[ N^2\left( \Gamma^{0}_{00}\Gamma^{0}_{11}+\Gamma^{0}_{11}\Gamma^{1}_{01}+\dot{\Gamma}^{0}_{11} \right) +3a^2\left( -\Gamma^{0}_{00}\Gamma^{1}_{01}+{\Gamma^{1}_{01}}^2+\dot{\Gamma}^{1}_{01} \right) \right] \nonumber \\ &+& \frac{3G_5}{2a^2N^5} \left[ N^2\left( \Gamma^{0}_{00}\Gamma^{0}_{11}+\Gamma^{0}_{11}\Gamma^{1}_{01}+\dot{\Gamma}^{0}_{11} \right) +a^2\left( \Gamma^{0}_{00}\Gamma^{1}_{01}-{\Gamma^{1}_{01}}^2-\dot{\Gamma}^{1}_{01} \right) \right] \nonumber \\ &\times& \left[ -2\dot{N}\dot{\phi}+N\left(\Gamma^{0}_{00}\dot{\phi}+\ddot{\phi}\right) \right] +\frac{G_{4,X}\dot{\phi}}{a^4N^5} \left[ 9N^5{\Gamma^{0}_{11}}^2\dot{\phi} \right. \nonumber \\ &-& \left. 3a^2N^3\Gamma^{0}_{11} \left( 2\Gamma^{0}_{00}\dot{\phi}+\Gamma^{1}_{01}\dot{\phi}-2\ddot{\phi} \right) +2a^4\left( N\Gamma^{0}_{00}-\dot{N} \right) \left( \Gamma^{0}_{00}\dot{\phi}-\ddot{\phi} \right) \right] \nonumber \\ &-& \frac{G_{5,X}}{2a^6N^7}\dot{\phi} \left[ -11N^7{\Gamma^{0}_{11}}^3\dot{\phi}^2-3a^4N^2\Gamma^{0}_{11} \left( N\left(2\Gamma^{0}_{00}+\Gamma^{1}_{01}\right)-2\dot{N} \right) \dot{\phi}\left(\Gamma^{0}_{00}\dot{\phi}-\ddot{\phi}\right) \right. \nonumber \\ &+& \left. 9a^2N^5{\Gamma^{0}_{11}}^2\dot{\phi} \left( \Gamma^{0}_{00}\dot{\phi}+\Gamma^{1}_{01}\dot{\phi}-\ddot{\phi} \right) +2a^6\left(N\Gamma^{0}_{00}-\dot{N}\right)\left(-\Gamma^{0}_{00}\dot{\phi}+\ddot{\phi}\right)^2 \right]. \end{eqnarray} Here and below, a dot denotes the derivative of the corresponding quantity with respect to time, and $K,G_3,G_4,G_5,G_{4,X}$, and $G_{5,X}$ are functions of $(\phi,\frac{\dot{\phi}^2}{2N^2})$. Because $L$ is only a function of $t$ in the current case, we can now omit $\int d^3x$ and consider $L$ itself as a Lagrangian. By using a Lagrange multiplier $\pi$ to impose the constraint $\dot{\phi}=s$, the Lagrangian $L$ in (\ref{L}) is equivalent to the following Lagrangian $\tilde{L}$, which contains only the dynamical variables and their first-order time derivatives: \begin{eqnarray} \label{eq L} &\tilde{L}\left(\dot{s},s,\dot{\phi},\phi,\dot{N},N,a,\pi,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right) \nonumber\\ &=L\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right)+\pi \left(\dot{\phi}-s\right). \end{eqnarray} By varying the Lagrangian (\ref{eq L}), two kinds of equations can be obtained.
The first kind of equations include \begin{eqnarray} \label{dl/dpi} \frac{\partial{\tilde{L}}}{\partial{\pi}} &=&\dot{\phi}-s =0, \\ \label{dl/da} \frac{\partial{\tilde{L}}}{\partial{a}} &=&\frac{\partial{L}}{\partial{a}} \left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11}, {\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right) =0, \\ \label{dl/dgamma000} \frac{\partial{\tilde{L}}}{\partial{\Gamma^{0}_{00}}} &=&\frac{\partial{L}}{\partial{\Gamma^{0}_{00}}} \left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11}, {\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right) =0. \end{eqnarray} They are constraints among the variables $\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right)$. The second kind of equations are \begin{eqnarray} \frac{d}{dt}\frac{\partial{\tilde{L}}}{\partial{\dot{N}}}-\frac{\partial{\tilde{L}}}{\partial{N}} =0,\quad \frac{d}{dt}\frac{\partial{\tilde{L}}}{\partial{\dot{s}}}-\frac{\partial{\tilde{L}}}{\partial{s}} =0,\quad \frac{d}{dt}\frac{\partial{\tilde{L}}}{\partial{\dot{\phi}}} -\frac{\partial{\tilde{L}}}{\partial{\phi}} =0,\label{E-L eq1} \\ \frac{d}{dt}\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{0}_{11}}} -\frac{\partial{\tilde{L}}}{\partial{{\Gamma}^{0}_{11}}} =0,\quad \frac{d}{dt}\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{1}_{01}}} -\frac{\partial{\tilde{L}}}{\partial{{\Gamma}^{1}_{01}}} =0.\qquad\quad \label{E-L eq2} \end{eqnarray} They are the Euler-Lagrange equations. In addition, the canonical momenta are defined as \begin{eqnarray} \label{Pphi} P_{\phi}&=&\frac{\partial{\tilde{L}}}{\partial{\dot{\phi}}} = \pi, \\ \label{PN} P_{N}&=&\frac{\partial{\tilde{L}}}{\partial{\dot{N}}} =\frac{\partial{L}}{\partial{\dot{N}}}\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right), \\ \label{Ps} P_{s}&=&\frac{\partial{\tilde{L}}}{\partial{\dot{s}}} =\frac{\partial{L}}{\partial{\dot{s}}}\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right), \\ \label{PGamma011} P_{\Gamma^{0}_{11}}&=&\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{0}_{11}}} =\frac{\partial{L}}{\partial{\dot{\Gamma}^{0}_{11}}}\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right), \\ \label{PGamma101} P_{\Gamma^{1}_{01}}&=&\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{1}_{01}}} =\frac{\partial{L}}{\partial{\dot{\Gamma}^{1}_{01}}}\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right). \end{eqnarray} In order to analyze the Ostrogradsky stability, it is necessary to introduce the Hamiltonian formalism of the theory.
For this, take the total differential of $\tilde{L}$ \begin{eqnarray} \label{dL} d\tilde{L} &=&\frac{\partial{\tilde{L}}}{\partial{\dot{s}}} d{\dot{s}} +\frac{\partial{\tilde{L}}}{\partial{{s}}} d{{s}} +\frac{\partial{\tilde{L}}}{\partial{\dot{\phi}}} d{\dot{\phi}} +\frac{\partial{\tilde{L}}}{\partial{{\phi}}}d{{\phi}} +\frac{\partial{\tilde{L}}}{\partial{\dot{N}}} d{\dot{N}} +\frac{\partial{\tilde{L}}}{\partial{{N}}} d{{N}} +\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{0}_{11}}} d{\dot{\Gamma}^{0}_{11}} \nonumber \\ &+&\frac{\partial{\tilde{L}}}{\partial{{{\Gamma}^{0}_{11}}}} d{{{\Gamma}^{0}_{11}}} +\frac{\partial{\tilde{L}}}{\partial{\dot{\Gamma}^{1}_{01}}}d\dot{\Gamma}^{1}_{01} +\frac{\partial{\tilde{L}}}{\partial{{\Gamma}^{1}_{01}}} d{\Gamma}^{1}_{01} +\frac{\partial{\tilde{L}}}{\partial{\pi}} d\pi+\frac{\partial{\tilde{L}}}{\partial{a}} da +\frac{\partial{\tilde{L}}}{\partial{{\Gamma}^{0}_{00}}}d{\Gamma}^{0}_{00}. \end{eqnarray} Using Eqs.~(\ref{dl/dpi})-(\ref{E-L eq2}) and the definitions of the canonical momenta (\ref{Pphi})-(\ref{PGamma101}), the expression (\ref{dL}) is equivalent to \begin{eqnarray} \label{dH} && d\left(P_s\dot{s}+P_\phi\dot{\phi}+P_N\dot{N}+P_{{\Gamma}^{0}_{11}}\dot{\Gamma}^{0}_{11} +P_{{\Gamma}^{1}_{01}}\dot{\Gamma}^{1}_{01}-\tilde{L} \right) \nonumber \\ &=&\dot{s}dP_{s} -\dot{P_{s}}ds +\dot{\phi}dP_{\phi} -\dot{P_{\phi}}d\phi +\dot{N}dP_{N} -\dot{P}_{N}dN \nonumber \\ &+&\dot{\Gamma}^{0}_{11}dP_{{{\Gamma}^{0}_{11}}} -\dot{P}_{{\Gamma}^{0}_{11}}d{{\Gamma}^{0}_{11}} +\dot{\Gamma}^{1}_{01}dP_{{{\Gamma}^{1}_{01}}} -\dot{P}_{{\Gamma}^{1}_{01}}d{{\Gamma}^{1}_{01}}. \end{eqnarray} This allows the Hamiltonian of the theory to be defined as \begin{eqnarray} \label{H} H=P_s\dot{s}+P_\phi\dot{\phi}+P_N\dot{N}+P_{{\Gamma}^{0}_{11}}\dot{\Gamma}^{0}_{11}+P_{{\Gamma}^{1}_{01}}\dot{\Gamma}^{1}_{01}-\tilde{L}. \end{eqnarray} If we can use the first kind of equations (\ref{dl/dpi})-(\ref{dl/dgamma000}) and the definitions of the canonical momenta (\ref{Pphi})-(\ref{PGamma101}) to express the variables $\left(\dot{s},\dot{\phi},\dot{N},\dot{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{0}_{00},a,\pi\right)$ as functions of the independent variables $\left(s,P_{s},\phi,P_{\phi},N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right)$, so as to express the Hamiltonian $H$ as a function of the independent variables $\left(s,P_{s},\phi,P_{\phi},N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right)$, then using (\ref{dH}), we can obtain Hamilton's equations, which are equivalent to the Euler-Lagrange equations (\ref{E-L eq1}) and (\ref{E-L eq2}): \begin{equation} \label{Hamilton's equations} \begin{aligned} \dot{s}&=\frac{\partial{H}}{\partial{P_{s}}},\quad~~~~~~ \dot{P_s}=-\frac{\partial{H}}{\partial{{s}}}; \\ \dot{\phi}&=\frac{\partial{H}}{\partial{P_{\phi}}},\quad ~~~~~\, \dot{P_\phi}=-\frac{\partial{H}}{\partial{{\phi}}}; \\ \dot{N}&=\frac{\partial{H}}{\partial{P_{N}}},\quad~~~~~ \dot{P}_{N}=-\frac{\partial{H}}{\partial{{N}}}; \\ \dot{\Gamma}^{0}_{11}&=\frac{\partial{H}}{\partial{P_{\Gamma^{0}_{11}}}},\quad ~ \dot{P}_{\Gamma^{0}_{11}}=-\frac{\partial{H}}{\partial{{\Gamma}^{0}_{11}}};\\ \dot{\Gamma}^{1}_{01}&=\frac{\partial{H}}{\partial{P_{\Gamma^{1}_{01}}}},\quad ~ \dot{P}_{\Gamma^{1}_{01}}=-\frac{\partial{H}}{\partial{{\Gamma}^{1}_{01}}}.
\end{aligned} \end{equation} The implicit function theorem gives a sufficient condition for the following proposition: the variables $\left(\dot{s},\dot{\phi},\dot{N},\dot{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{0}_{00},a,\pi \right)$ can be locally expressed in terms of the variables $\left(s,P_{s},\phi,P_{\phi},N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right)$. We denote the set of all variables $\left(\dot{s},\dot{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},\dot{N},{\Gamma}^{0}_{00},a,s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right)$ by $\mathcal{X}$. This theorem states that for a solution $x_0 \in \mathcal{X}$ of the variables satisfying Eqs.~(\ref{dl/da}), (\ref{dl/dgamma000}), and (\ref{PN})-(\ref{PGamma101}), if the value of $\mathcal{K}\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right)$ is not zero at $x_0$, where \begin{eqnarray} \label{K} \mathcal{K} =\Large {\begin{vmatrix} \frac{{\partial}^2 L}{{\partial{a}}^2} & \frac{{\partial}^2 L}{\partial{a}\partial{\dot{s}}} & \frac{{\partial}^2 L}{\partial{a}\partial{\dot{\Gamma}^{0}_{11}}}& \frac{{\partial}^2 L}{\partial{a}\partial{\dot{\Gamma}^{1}_{01}}}& \frac{{\partial}^2 L}{\partial{a}\partial{{\Gamma}^{0}_{00}}}& \frac{{\partial}^2 L}{\partial{a}\partial{\dot{N}}} \\ \frac{{\partial}^2 L}{\partial{\dot{s}}{\partial{a}}} & \frac{{\partial}^2 L}{{\partial{\dot{s}}}^2} & \frac{{\partial}^2 L}{\partial{\dot{s}}\partial{\dot{\Gamma}^{0}_{11}}}& \frac{{\partial}^2 L}{\partial{\dot{s}}\partial{\dot{\Gamma}^{1}_{01}}}& \frac{{\partial}^2 L}{\partial{\dot{s}}\partial{{\Gamma}^{0}_{00}}}& \frac{{\partial}^2 L}{\partial{\dot{s}}\partial{\dot{N}}} \\ \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{0}_{11}}{\partial{a}}} & \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{0}_{11}}{\partial{\dot{s}}}} & \frac{{\partial}^2 L}{{\partial{\dot{\Gamma}^{0}_{11}}}^2}& \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{0}_{11}}\partial{\dot{\Gamma}^{1}_{01}}}& \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{0}_{11}}\partial{{\Gamma}^{0}_{00}}}& \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{0}_{11}}\partial{\dot{N}}} \\ \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{1}_{01}}{\partial{a}}} & \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{1}_{01}}{\partial{\dot{s}}}} & \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{1}_{01}}\partial{\dot{\Gamma}^{0}_{11}}}& \frac{{\partial}^2 L}{{\partial{\dot{\Gamma}^{1}_{01}}}^2}& \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{1}_{01}}\partial{{\Gamma}^{0}_{00}}}& \frac{{\partial}^2 L}{\partial{\dot{\Gamma}^{1}_{01}}\partial{\dot{N}}} \\ \frac{{\partial}^2 L}{\partial{{\Gamma}^{0}_{00}}{\partial{a}}} & \frac{{\partial}^2 L}{\partial{{\Gamma}^{0}_{00}}{\partial{\dot{s}}}} & \frac{{\partial}^2 L}{\partial{{\Gamma}^{0}_{00}}\partial{\dot{\Gamma}^{0}_{11}}}& \frac{{\partial}^2 L}{\partial{{\Gamma}^{0}_{00}}\partial{\dot{\Gamma}^{1}_{01}}}& \frac{{\partial}^2 L}{{\partial{{\Gamma}^{0}_{00}}}^2}& \frac{{\partial}^2 L}{\partial{{\Gamma}^{0}_{00}}\partial{\dot{N}}} \\ \frac{{\partial}^2 L}{\partial{\dot{N}}\partial{a}} & \frac{{\partial}^2 L}{\partial{\dot{N}}\partial{\dot{s}}}& \frac{{\partial}^2 L}{\partial{\dot{N}}\partial{\dot{\Gamma}^{0}_{11}}}& \frac{{\partial}^2 L}{\partial{\dot{N}}\partial{\dot{\Gamma}^{1}_{01}}}& \frac{{\partial}^2 L}{\partial{\dot{N}}\partial{{\Gamma}^{0}_{00}}}& \frac{{\partial}^2 L}{{\partial{\dot{N}}^2}} \end{vmatrix}}, \end{eqnarray} then there
exists a neighbourhood $\mathcal{O}\subseteq\mathcal{X}$ of $x_0$, such that the following relationships can be solved for all points in $\mathcal{O}$ that satisfy Eqs.~(\ref{dl/da}), (\ref{dl/dgamma000}), and (\ref{Ps})-(\ref{PGamma101}): \begin{eqnarray} \dot{s} &=&\dot{s}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right), \nonumber \\ a &=&a\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right), \nonumber \\ \dot{N} &=&\dot{N}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right), \nonumber \\ \Gamma^{0}_{00} &=&\Gamma^{0}_{00}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right), \label{sol} \\ \dot{\Gamma}^{0}_{11} &=&\dot{\Gamma}^{0}_{11}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right), \nonumber \\ \dot{\Gamma}^{1}_{01} &=&\dot{\Gamma}^{1}_{01}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11}, P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right). \nonumber \end{eqnarray} Thus, for variables satisfying Eqs.~(\ref{dl/dpi})-(\ref{dl/dgamma000}) and (\ref{PN})-(\ref{PGamma101}) in $\mathcal{O}$, using the relationships (\ref{sol}), the Hamiltonian $H\left(s,P_{s},\phi,P_{\phi},N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right)$ can be locally expressed as \begin{eqnarray} \label{H(p,q)} H &=&P_{\phi}s +P_{s}\dot{s}\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right) \nonumber \\ &+& P_{{\Gamma}^{0}_{11}}\dot{\Gamma}^{0}_{11} \left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right) \nonumber \\ &+&P_{{\Gamma}^{1}_{01}}\dot{\Gamma}^{1}_{01} \left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}},{\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right) \nonumber \\ &-& L\left(s,P_{s},\phi,N,P_{N},{\Gamma}^{0}_{11},P_{{\Gamma}^{0}_{11}}, {\Gamma}^{1}_{01},P_{{\Gamma}^{1}_{01}}\right). \end{eqnarray} Note that $P_{\phi}$ can take any real value and appears only in the first term on the right-hand side of Eq.~(\ref{H(p,q)}). Therefore, if we take a point in $\mathcal{O}$ with $s\neq0$, we can see that the Hamiltonian $H$ is unbounded from both above and below, so the theory has the Ostrogradsky instability. According to the above discussion, it can be seen that a necessary condition, which we call the degeneracy condition, for Palatini-Horndeski theory to be Ostrogradsky stable is that the value of $\mathcal{K}$ at any variables $\left(\dot{s},s,\phi,\dot{N},N,a,\Gamma^{0}_{00},\dot{\Gamma}^{0}_{11},{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},{\Gamma}^{1}_{01}\right)$ satisfying Eqs. (\ref{dl/da}) and (\ref{dl/dgamma000}) is always $0$ \cite{A.Ganz}. One might want to use this degeneracy condition to rule out some unstable classes in Palatini-Horndeski theory. However, when substituting the Lagrangian (\ref{L}) into the definition of $\mathcal{K}$ in (\ref{K}), we are surprised to find \begin{eqnarray} \label{K=0} \mathcal{K}=0. \end{eqnarray} This shows that all parameter spaces of Palatini-Horndeski theory satisfy the degeneracy condition. Although this does not mean that all parameter spaces in the theory are Ostrogradsky stable, it shows that Palatini-Horndeski theory does not develop the Ostrogradsky instability as easily as one might expect.
In fact, $\mathcal{K}=0$ means that there is at least one constraint on the phase space of the theory \cite{A.Ganz}. A further analysis of this constraint is necessary to determine definitively whether Palatini-Horndeski theory has the Ostrogradsky instability. Such an analysis appears very involved. However, we will see in Sec.~\ref{sec: 5} that the parameter space of Palatini-Horndeski theory compatible with GW170817 is Ostrogradsky stable. \section{The speed of tensor gravitational waves} \label{sec: 4} In this section, we will calculate the speed of tensor gravitational waves propagating in a spatially flat cosmological background and find the possible subclasses of Palatini-Horndeski theory that satisfy the following condition: the speed of tensor gravitational waves is the speed of light in any spatially flat cosmological background. In addition to the gravitational field, an ideal fluid matter field is also present in the spatially flat Universe. Therefore, in addition to the gravitational action (\ref{action}), we add an action $S_m$ describing the ideal fluid to the total action \begin{eqnarray} \label{action tot} S_{tot}=S+S_m. \end{eqnarray} Here, $S$ is defined by (\ref{action}). In the Palatini formalism, $S_m$ is a function of the metric and the matter field only, and it is independent of the connection. Varying the action $S_m$ with respect to $g_{\mu\nu}$, we obtain \begin{eqnarray} \label{delta Sm} \delta S_m=-\frac{1}{2}\int d^4x \sqrt{-g} T^{\mu\nu} \delta{g_{\mu\nu}}. \end{eqnarray} Here, $T^{\mu\nu}$ is the energy-momentum tensor of the ideal fluid: \begin{eqnarray} \label{T_munu} T^{\mu\nu}=(P+\epsilon)u^{\mu}u^{\nu}+Pg^{\mu\nu}, \end{eqnarray} where $\epsilon$ is the matter energy density and $P$ is the matter pressure. The four-velocity $u^{\mu}$ satisfies $u^{0}=\frac{1}{N}, u^{i}=0$. By substituting Eqs. (\ref{background}) and (\ref{Gamma background}) into the action (\ref{action tot}), and varying the action (\ref{action tot}) with respect to $N,a,\phi,\Gamma^{0}_{00},\Gamma^{0}_{11}$ and $\Gamma^{1}_{01}$, we obtain the background equations: \begin{eqnarray} \label{BG EQ with P} & \frac{d}{dt}\frac{\partial{{L}}}{\partial{\dot{N}}}-\frac{\partial{{L}}}{\partial{N}}+a^3\epsilon=0,\quad \frac{\partial{L}}{\partial{a}}-3a^2 P=0,\quad \frac{d^2}{{dt}^2}\frac{\partial{{L}}}{\partial{\ddot{\phi}}} -\frac{d}{dt}\frac{\partial{{L}}}{\partial{\dot{\phi}}} +\frac{\partial{{L}}}{\partial{{\phi}}}=0, \nonumber \\ & \frac{\partial{L}}{\partial{\Gamma^{0}_{00}}}=0, \quad \frac{d}{dt}\frac{\partial{{L}}}{\partial{\dot{\Gamma}^{0}_{11}}} -\frac{\partial{{L}}}{\partial{{\Gamma}^{0}_{11}}}=0, \quad \frac{d}{dt}\frac{\partial{{L}}}{\partial{\dot{\Gamma}^{1}_{01}}} -\frac{\partial{{L}}}{\partial{{\Gamma}^{1}_{01}}}=0. \end{eqnarray} Here, $L$ is defined by (\ref{L}). Because the specific expressions of the background equations (\ref{BG EQ with P}) are lengthy but straightforward to obtain, we do not list them in this paper. In the following, we take $N(t)=1$. In order to study the tensor gravitational waves, we need the linear perturbation equations for the tensor perturbations on the spatially flat cosmological background. Since the metric and the connection are independent in the Palatini formalism, they should be perturbed independently: \begin{eqnarray} \label{perturb} g_{\mu\nu} \rightarrow g_{\mu\nu}+h_{\mu\nu}, \quad \Gamma^{\lambda}_{\mu\nu} \rightarrow \Gamma^{\lambda}_{\mu\nu}+\Sigma^{\lambda}_{\mu\nu}.
\end{eqnarray} We keep the part of the perturbations that describes tensor gravitational waves: \begin{eqnarray} \label{tensor perturb} & h_{00}=h_{0i}=0,\quad h_{ij}=H_{ij},\quad \Sigma^{0}_{00}=\Sigma^{0}_{0i}=\Sigma^{i}_{00}=0,\nonumber \\ &\Sigma^{0}_{ij}=A_{ij},\quad \Sigma^{i}_{0j}=B^i_j,\quad \Sigma^{i}_{jk}=\partial^{i}C_{jk}+\partial_{(j}D^{i}_{k)}. \end{eqnarray} Here, $H_{ij}, A_{ij}, B_{ij}, C_{ij}$ and $D_{ij}$ are symmetric transverse traceless tensors. They satisfy \begin{eqnarray} \label{symmetric transverse traceless} &H_{ij}=H_{ji},~ A_{ij}=A_{ji},~ B_{ij}=B_{ji},~ C_{ij}=C_{ji},~ D_{ij}=D_{ji},\nonumber \\ &H_{i}^{i}=A_{i}^{i}=B_{i}^{i}=C_{i}^{i}=D_{i}^{i}=0,\\ &\partial^{i}H_{ij}=\partial^{i}A_{ij}=\partial^{i}B_{ij}=\partial^{i}C_{ij}=\partial^{i}D_{ij}=0. \nonumber \end{eqnarray} Only in this paragraph do we use $\delta^{ij}$ ($\delta_{ij}$) to raise and lower indices. In Appendix \ref{app: B}, we give the decomposition of the connection and explain why the perturbations describing the tensor gravitational waves are given by Eq.~(\ref{tensor perturb}). Without loss of generality, we take the gravitational waves to propagate in the $+z$ direction. It can then be seen from (\ref{tensor perturb}) that the possibly non-vanishing components of the perturbations $h_{\mu\nu}$ and $\Sigma^{\lambda}_{\mu\nu}$ are \begin{eqnarray} \label{h,Sigma,per} h_{12},\quad h_{11}=-h_{22},\quad \Sigma^{0}_{11}=-\Sigma^{0}_{22},\quad \Sigma^{0}_{12},\quad \Sigma^{1}_{01}=-\Sigma^{2}_{02}, \quad\nonumber \\ \Sigma^{1}_{02}=\Sigma^{2}_{01},\quad \Sigma^{1}_{13}=-\Sigma^{2}_{23}, \quad \Sigma^{1}_{23}=\Sigma^{2}_{13},\quad \Sigma^{3}_{11}=-\Sigma^{3}_{22},\quad \Sigma^{3}_{12}. \end{eqnarray} By expanding the action (\ref{action tot}) to second order in the perturbations (\ref{h,Sigma,per}) and varying it with respect to the perturbations, we can obtain the linear perturbation equations describing the tensor gravitational waves. These equations are lengthy but straightforward to obtain, so they are not listed here. Next, we will use these equations to obtain the speed of the tensor gravitational waves. Before that, we take metric Horndeski theory as an example to demonstrate how to obtain the speed of tensor gravitational waves from the linear perturbation equation. In metric Horndeski theory, the linear perturbation equation describing the tensor gravitational waves is given by \cite{R.Kase}: \begin{eqnarray} \label{metric Horndeski eq} \ddot{h}+b(t)\dot{h}-\frac{{c_t}^2(t)}{a^2(t)}\Delta h=0, \end{eqnarray} where $h$ is the component $h_{11}$ or $h_{12}$, $b$ and $c_t$ are functions of time, and $\Delta$ is the Laplace operator. For $h(t,z)$ propagating along the $+z$ direction, we make a Fourier transform: \begin{eqnarray} \label{Fourier transform} h(t,z)=\int dk_3\, f_{k_3}(t)\, e^{-ik_{3}z}. \end{eqnarray} By substituting Eq. (\ref{Fourier transform}) into Eq. (\ref{metric Horndeski eq}), and using the linearity of Eq. (\ref{metric Horndeski eq}), we obtain the following equation: \begin{eqnarray} \label{f_k_3} \ddot{f}_{k_3}+b(t)\dot{f}_{k_3}+\frac{{c_t}^2(t)}{a^2(t)} k_3^2 f_{k_3}=0. \end{eqnarray} This allows us to consider only the case with a single spatial wave vector $k_3$: \begin{eqnarray} \label{h only k3} h=f(t) e^{-i {k_3} z}, \end{eqnarray} where $f(t)$ can always be expressed as \begin{eqnarray} \label{f(t)} f(t)=F(t) e^{i {k_0}(t) t}.
\end{eqnarray} Here, $F$ is the modulus of $f(t)$ and ${k_0}(t)\,t$ is its argument; therefore, $F$ and ${k_0}$ are real. Suppose the gravitational wave is observed near time $t_0$ with observation duration $\Delta T$; that is, the observation time satisfies $t \in [t_0-\frac{\Delta T}{2},t_0+\frac{\Delta T}{2}]$. The duration $\Delta T$ is of the same order of magnitude as the period of the gravitational wave, and during this time the amplitude and phase of the gravitational wave change very little: \begin{eqnarray} \label{approximate 1} \Delta T \sim \frac{2\pi}{k_{0}} \sim \frac{1}{k_{0}}, \quad \dot{F}\Delta T \ll F, \quad \dot{k_0}\Delta T \ll k_0. \end{eqnarray} Thus, $h=F(t)e^{i[{k_0(t) t-k_3 z}]}$ can be approximated as a plane gravitational wave near $t_0$: \begin{eqnarray} \label{plane gravitational wave} h=F(t_0)e^{i[k_0(t_0) t-k_3 z]}. \end{eqnarray} For the evolution of the cosmic background, the changes of $a,b$ and $c_t$ in Eq. (\ref{metric Horndeski eq}) during this period are also small: \begin{eqnarray} \label{approximate 2} \quad \dot{a}\Delta T \ll a, \quad \dot{b}\Delta T \ll b, \quad \dot{c_t}\Delta T \ll c_t. \end{eqnarray} So Eq. (\ref{metric Horndeski eq}) near $t_0$ can be approximated as \begin{eqnarray} \label{approximate metric Horndeski eq} \ddot{h}+b(t_0)\dot{h}-\frac{{c_t}^2(t_0)}{a^2(t_0)}\Delta h=0. \end{eqnarray} By substituting Eq. (\ref{plane gravitational wave}) into Eq. (\ref{approximate metric Horndeski eq}), we obtain \begin{eqnarray} \label{k0 k3 demo} -{k_0}^2(t_0)+i b(t_0) k_{0}(t_0)+\frac{{c_t}^2(t_0)}{a^2(t_0)} {k_3}^2=0. \end{eqnarray} The gravitational waves we can observe have large $k_0$ and $k_3$, which makes the terms linear in the wave vector components (collectively denoted by $k$) in the above equation negligible compared with the terms quadratic in $k$. Thus, by Eq. (\ref{k0 k3 demo}), the relationship between $k_0$ and $k_3$ satisfies \begin{eqnarray} \label{k0 k3} -{k_0}^2(t_0)+\frac{{c_t}^2(t_0)}{a^2(t_0)} {k_3}^2=0. \end{eqnarray} Rewriting (\ref{plane gravitational wave}) as \begin{eqnarray} \label{plane gravitational wave2} h=F(t_0)e^{ik_0(t_0) t}e^{-i\left(\frac{k_3}{a(t_0)}\right) \left(a(t_0)z\right)}, \end{eqnarray} and using Eq. (\ref{k0 k3}), we see that the tensor gravitational wave speed $c_g$ at time $t_0$ is \begin{eqnarray} \label{cg} c_g(t_0)=\frac{a(t_0)k_0(t_0)}{k_3}=\frac{a(t_0)}{k_3} \frac{c_t(t_0)}{a(t_0)} k_3=c_t(t_0). \end{eqnarray} The speed (\ref{cg}) obtained by this method is the same as that of Refs. \cite{R.Kase,T.Kobayashi}. Similarly to the above analysis, for Palatini-Horndeski theory near a certain time, we approximate the coefficients of the perturbations in the linear perturbation equations by constants independent of time. In addition, we approximate the perturbations (\ref{h,Sigma,per}) by plane gravitational waves: \begin{eqnarray} \label{per plane gravitational waves} h_{\mu\nu}=\bar{h}_{\mu\nu} e^{i\left(k_0 t-k_3 z\right)},\quad \Sigma^{\lambda}_{\mu\nu}=\bar{\Sigma}^{\lambda}_{\mu\nu} e^{i\left(k_0 t-k_3 z\right)}. \end{eqnarray} Here, $\bar{h}_{\mu\nu}$ and $\bar{\Sigma}^{\lambda}_{\mu\nu}$ are amplitudes. As in the above example of metric Horndeski theory, by substituting (\ref{per plane gravitational waves}) into the approximated linear perturbation equations, we obtain a system of linear equations with the amplitudes $\bar{h}_{\mu\nu}$ and $\bar{\Sigma}^{\lambda}_{\mu\nu}$ as the variables.
These equations can be written in matrix form: \begin{eqnarray} \label{matrix form} AX=0, \end{eqnarray} where $A$ is a $10 \times 10$ matrix depending on $k_0$ and $k_3$, and $X$ is a column vector composed of the components of the amplitudes $\bar{h}_{\mu\nu}$ and $\bar{\Sigma}^{\lambda}_{\mu\nu}$. The specific expression of $A$ is lengthy but straightforward to obtain, so it is not listed in this paper. Equation (\ref{matrix form}) has a nontrivial gravitational wave solution if and only if \begin{eqnarray} \label{det A=0} \det (A)=0. \end{eqnarray} As in the above example, we consider $k_0$ and $k_3$ to be large. Thus, the lower-power terms of $k$ in $\det(A)$ are ignored, and only the highest-power terms of $k$ are retained. We denote the retained part of $\det(A)$ by $\mathcal{A}$. Therefore, from the equation \begin{eqnarray} \label{mathcal{A}} \mathcal{A}(k_0,k_3)=0, \end{eqnarray} we can obtain the relationship between $k_0$ and $k_3$, and hence solve for the speed of tensor gravitational waves. Now, we calculate the speed of tensor gravitational waves propagating in a spatially flat cosmological background. We divide the parameter space of Palatini-Horndeski theory into two classes. \textbf{\emph{Class \uppercase\expandafter{\romannumeral1}}}: $G_{5}(\phi,X)=0$. In this class, by solving Eq. (\ref{mathcal{A}}), we find that the tensor gravitational wave speed $c_g$ is given by \begin{eqnarray} \label{class1 cg} c_g^2=\frac{a^2 k_0^2}{k_3^2}=\left(1+\frac{1}{2} \frac{G_{4,X}}{G_4} {\dot{\phi}}^2\right)^2. \end{eqnarray} Thus, the condition that the tensor gravitational wave speed is always the speed of light in any spatially flat cosmological background requires \begin{eqnarray} \label{class1 cg condition} {G_{4,X}} {\dot{\phi}}^2=0 \end{eqnarray} for any spatially flat cosmological background. In this class, by solving the background equations (\ref{BG EQ with P}), we find that $\big(\ddot{a},\dddot{\phi},\Gamma^{0}_{00},{\Gamma}^{1}_{01},{\Gamma}^{0}_{11}\big)$ can be expressed as functions of $\big(a,\dot{a},\phi,\dot{\phi},\ddot{\phi},P,\epsilon\big)$. Further considering the equation of state and the energy conservation equation of the ideal fluid, we can also use $\big(P,a,\dot{a}\big)$ to express $\big(\dot{P},\dot{\epsilon}\big)$. Therefore, as long as we know $\big(a,\dot{a},\phi,\dot{\phi},\ddot{\phi},P,\epsilon\big)$, we can obtain $\big(\ddot{a},\dddot{\phi},\Gamma^{0}_{00},{\Gamma}^{1}_{01},{\Gamma}^{0}_{11},\dot{P},\dot{\epsilon}\big)$. This determines the initial value conditions of the background equations (\ref{BG EQ with P}). The specific expressions of these variables are lengthy but straightforward to obtain, so we have not listed them. Considering that there are different equations of state for different types of matter, $P$ and $\epsilon$ can be considered as independent variables. Therefore, the condition that the tensor gravitational wave speed is the speed of light in any spatially flat cosmological background is equivalent to the following condition: at any values of the variables $\big(a,\dot{a},\phi,\dot{\phi},\ddot{\phi},P,\epsilon\big)$, condition (\ref{class1 cg condition}) always holds. This requires that $G_{4,X}$ vanishes identically. In this way, we find that in Class {\uppercase\expandafter{\romannumeral1}}, only the subclass \begin{eqnarray} \label{class1 cg condition fin} G_5=0,\quad G_{4,X}=0 \end{eqnarray} satisfies the condition that the tensor gravitational wave speed is the speed of light in any spatially flat cosmological background.
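The reduction from $\det(A)=0$ to $\mathcal{A}(k_0,k_3)=0$ used above can be illustrated schematically (in Python with \texttt{sympy}): rescale $(k_0,k_3)\rightarrow(\lambda k_0,\lambda k_3)$ and keep the leading power of $\lambda$ in the determinant. The $2\times2$ matrix below is a toy stand-in of our own for the $10\times10$ matrix $A$, built from the metric Horndeski dispersion relation (\ref{k0 k3 demo}); solving the resulting $\mathcal{A}=0$ reproduces $c_g=c_t$ for this toy case:
\begin{verbatim}
# Schematic version of the reduction det(A) = 0 -> calA(k0, k3) = 0.
import sympy as sp

k0, k3, a, b, ct = sp.symbols('k0 k3 a b c_t', positive=True)
lam = sp.symbols('lambda', positive=True)  # bookkeeping scale k -> lam k

# toy 2x2 stand-in for the 10x10 amplitude matrix A
A = sp.Matrix([[-k0**2 + sp.I*b*k0 + ct**2*k3**2/a**2, b*k0],
               [sp.I*b*k3, -k0**2 + ct**2*k3**2/a**2]])

# rescale (k0, k3) -> (lam k0, lam k3); keep the leading power of lam
detA = sp.expand(A.det().subs({k0: lam*k0, k3: lam*k3}))
deg = sp.degree(sp.Poly(detA, lam))
calA = detA.coeff(lam, deg)
print(sp.factor(calA))  # (a**2*k0**2 - c_t**2*k3**2)**2 / a**4
# calA = 0 gives k0 = c_t k3 / a, i.e. c_g = a k0 / k3 = c_t
\end{verbatim}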
\textbf{\emph{Class \uppercase\expandafter{\romannumeral2}}}: $G_{5}(\phi,X)\neq0$. In this class, by solving the background equations (\ref{BG EQ with P}), we find that $\big(\dot{a},\dddot{\phi},\dot{\Gamma}^{0}_{00},\dot{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01}\big)$ can be expressed as functions of $\big(a,\phi,\dot{\phi},\ddot{\phi},{\Gamma}^{0}_{00},{\Gamma}^{0}_{11},{\Gamma}^{1}_{01},P,\epsilon\big)$. Further considering the equation of state and the energy conservation equation of the ideal fluid, we can also use $\big(P,a,\dot{a}\big)$ to express $\big(\dot{P},\dot{\epsilon}\big)$. Therefore, as long as we know $\big(a,\phi,\dot{\phi},\ddot{\phi},{\Gamma}^{0}_{00},{\Gamma}^{0}_{11},{\Gamma}^{1}_{01},P,\epsilon\big)$, we can obtain $\big(\dot{a},\dddot{\phi},\dot{\Gamma}^{0}_{00},\dot{\Gamma}^{0}_{11},\dot{\Gamma}^{1}_{01},\dot{P},\dot{\epsilon}\big)$. This determines the initial value conditions of the background equations (\ref{BG EQ with P}). The specific expressions of these variables are very lengthy, so we do not list them. By substituting these expressions into Eq. (\ref{mathcal{A}}) and solving it, we can obtain the tensor gravitational wave speed expressed in terms of the variables $\big(a,\phi,\dot{\phi},\ddot{\phi},{\Gamma}^{0}_{00},{\Gamma}^{0}_{11},{\Gamma}^{1}_{01},P,\epsilon\big)$. In fact, the tensor gravitational wave speed is not unique in this class: there are two possible solutions, $c_{g1}$ and $c_{g2}$. These two speeds are generally different; however, when the matter pressure $P=0$, we have $c_{g1}=c_{g2}$. The specific expressions of $c_{g1}$ and $c_{g2}$ are very lengthy, so we do not list them. The square of the first speed, $c_{g1}^2$, can be expressed as a fraction \begin{eqnarray} \label{class2 cg1} c_{g1}^2=\frac{\mathcal{N}}{\mathcal{D}}. \end{eqnarray} It can be seen that $c_{g1}=1$ is equivalent to the vanishing of the difference between the numerator $\mathcal{N}$ and the denominator $\mathcal{D}$ on the right-hand side of Eq. (\ref{class2 cg1}): \begin{eqnarray} \label{cg1 numerator-denominator} M\equiv{\mathcal{N}}-{\mathcal{D}}=0. \end{eqnarray} By expanding the brackets, $M$ can be expressed as a polynomial in the variables $\big(a,\phi,\dot{\phi},\ddot{\phi},{\Gamma}^{0}_{00},{\Gamma}^{0}_{11},{\Gamma}^{1}_{01},P,\epsilon\big)$. If we require the tensor gravitational wave speed $c_{g1}$ to be the speed of light in any spatially flat cosmological background, then for any values of the variables $\big(a,\phi,\dot{\phi},\ddot{\phi},{\Gamma}^{0}_{00},{\Gamma}^{0}_{11},{\Gamma}^{1}_{01},P,\epsilon\big)$, this polynomial must vanish. We notice that in this polynomial, the term containing $(\Gamma^{0}_{00})^4\epsilon$ is \begin{eqnarray} \label{cg1 000^4} 6 a^{14} (G_5)^5 {\dot{\phi}}^5 \Big( 5 G_{5,X} {\dot{\phi}}^2 -2 {G_5} \Big) (\Gamma^{0}_{00})^4\epsilon. \end{eqnarray} Therefore, the above condition requires \begin{eqnarray} G_5=0 ~~\text{or}~~G_5=\frac{5}{2}{\dot{\phi}}^2G_{5,X}. \label{CassIIcondition1} \end{eqnarray} Substituting the condition $G_5=\frac{5}{2}{\dot{\phi}}^2G_{5,X}$ into Eq. (\ref{cg1 numerator-denominator}), we notice that the term containing $({\Gamma^{1}_{01}})^4\epsilon$ in the resulting polynomial is \begin{eqnarray} \label{cg1 101^4} -\frac{1171875}{16} a^{14} (G_{5,X})^6 {\dot{\phi}}^{17} (\Gamma^{1}_{01})^4\epsilon. \end{eqnarray} Therefore, the above condition further requires $G_{5,X}=0$. Combining this with condition (\ref{CassIIcondition1}), we have $G_5=0$. However, this contradicts the assumption $G_5 \neq 0$ defining this class.
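The coefficient argument used above, namely that a polynomial in independent background variables vanishes identically only if each of its coefficients vanishes, can be sketched as follows (in Python with \texttt{sympy}; the polynomial \texttt{M} below is a hypothetical toy stand-in for $\mathcal{N}-\mathcal{D}$ that contains the analogue of the term (\ref{cg1 000^4})):
\begin{verbatim}
# Extract the coefficient of (Gamma000)**4 * epsilon in a toy polynomial.
import sympy as sp

G000, eps, phid, G5, G5X, a = sp.symbols(
    'Gamma000 epsilon phidot G5 G5X a')

# toy stand-in for M = N - D, containing the analogue of the term
# 6 a^14 G5^5 phidot^5 (5 G5X phidot^2 - 2 G5) (Gamma000)^4 epsilon
M = (6*a**14*G5**5*phid**5*(5*G5X*phid**2 - 2*G5)*G000**4*eps
     + a**2*G5*phid*G000*eps)

coeff = sp.Poly(M, G000, eps).coeff_monomial(G000**4 * eps)
print(sp.factor(coeff))
# requiring coeff = 0 for all phidot forces G5 = 0 or
# G5 = (5/2) phidot^2 G5X, as in Eq. (CassIIcondition1)
\end{verbatim}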
For the second solution, $c_{g2}$, the same analysis shows that the condition $c_{g2}=1$ also requires $G_5=0$. To sum up, for Palatini-Horndeski theory, the only part of the parameter space in which the tensor gravitational wave speed is the speed of light in any spatially flat cosmological background is $G_{4,X}=0$ and $G_5=0$. \section{Conclusion} \label{sec: 5} In this paper, we calculated the speed of tensor gravitational waves in a spatially flat cosmological background. It is worth noting that we found two possible speeds of tensor gravitational waves in Class \uppercase\expandafter{\romannumeral2}. This is due to the additional degrees of freedom introduced by the tensor perturbations of the connection. This seems to imply that if two tensor gravitational waves with different speeds are observed in the future, the theory of gravitation describing our world may need to be formulated in the Palatini formalism. However, if we further require the tensor gravitational wave speed to be the speed of light $c$ in any spatially flat cosmological background, then only \begin{eqnarray} \label{action fin} S\left(g,\Gamma,\phi\right) = \int d^4x \sqrt{-g}\big[K(\phi,X)-G_{3}(\phi,X){\tilde{\Box}}\phi+G_{4}(\phi)\tilde{R} \big] \end{eqnarray} is left as the possible action in the above two subclasses of Palatini-Horndeski theory. Reference \cite{Helpin} pointed out that the action (\ref{action fin}) in the Palatini formalism is actually equivalent to the following action in the metric formalism: \begin{eqnarray} \label{KGB} S\left(g,\phi\right) = \int d^4x \sqrt{-g}\big[\bar{K}(\phi,X)-G_{3}(\phi,X){{\Box}}\phi+G_{4}(\phi){R}\big]. \end{eqnarray} Here, \begin{eqnarray} \bar{K}=K+\left(-2G_3 G_{4,\phi}+3 G_{4,\phi}^2-\frac{2}{3} G_3^2\right)\frac{X}{G_4}. \end{eqnarray} It can be seen that the action (\ref{action fin}) in the Palatini formalism actually still belongs to metric Horndeski theory. Therefore, the parameter space of Palatini-Horndeski theory compatible with GW170817 does not have the Ostrogradsky instability. It should be noted that the action (\ref{KGB}) is the only subclass of metric Horndeski theory that is compatible with the condition that the tensor gravitational wave speed is the speed of light $c$ in any spatially flat cosmological background. Finally, it should be pointed out that this does not mean that scalar-tensor gravity in the Palatini formalism cannot go beyond the framework of metric Horndeski theory, because the Palatini-Horndeski theory considered in this paper is not the most general scalar-tensor gravity in the Palatini formalism. A more general discussion requires a more general action, which is left for future work. \section*{Acknowledgments} We would like to thank Yu-Peng Zhang for useful discussions. This work is supported in part by the National Key Research and Development Program of China (Grant No. 2020YFC2201503), the National Natural Science Foundation of China (Grants No. 11875151 and No. 12047501), the 111 Project (Grant No. B20063), the Department of Education of Gansu Province: Outstanding Graduate ``Innovation Star'' Project (Grant No. 2022CXZX-059), and Lanzhou City's scientific research funding subsidy to Lanzhou University.
\section{Introduction} Randomized controlled trials are the gold standard for evaluating the effects of interventions. However, they are not always available; for example, for evaluating a treatment strategy for HIV-positive patients, a randomized controlled trial would force patients to take the treatment or to be off the treatment regardless of their health status. Observational studies are useful in these settings. In observational studies, there often is time-dependent confounding by indication: some covariates are predictors of both the subsequent treatment and the outcome, and are also affected by the past treatment history. Then, standard methods adjusting for the covariate history are fallible and can lead to bias (\citealp{robins1992g}; \citealp{robins2000marginal}; \citealt{robins2000marginalstructural}). Coarse structural nested mean models (\citealp{robins1998correction}) provide a useful tool to estimate treatment effects from longitudinal observational data. \citet{lok2012impact} developed a time-dependent version of coarse structural nested mean models and applied it to investigate the impact of the timing of combination antiretroviral treatment initiation on the effect of one year of treatment in HIV-positive patients. Their semiparametric method leads to an infinite number of unbiased estimating equations and a huge class of consistent and asymptotically normal estimators. An optimal estimator within this class was derived, under well-specified models for the treatment effect, treatment initiation, and a nuisance regression outcome model, in an unpublished 2014 technical report available from the second author. The key assumption lies in a well-specified model for the treatment effect. However, no guidance exists on how to specify the treatment effect model, and model misspecification may lead to biased estimators, preventing valid inference. The main contribution of this article is to derive a goodness-of-fit test statistic for testing correct specification of the treatment effect model. The key insight is that with a correctly-specified treatment effect model we have more unbiased estimating equations than parameters, which results in overidentification of the parameters. Overidentification restrictions tests, also called Sargan tests or $J$-tests (\citealp{sargan1958estimation} and \citealp{hansen1982large}), are widely used in the econometric literature; however, they seem to have gone largely unnoticed in the biostatistics literature. The standard overidentification restrictions test, given by the minimized value of the generalized method of moments (\citealp{newey1994large}; \citealp{imbens1995information}) criterion function, has a chi-squared limiting distribution, with degrees of freedom equal to the number of overidentification restrictions. In most situations, the minimum of the generalized method of moments criterion is obtained by a continuous iterative procedure that updates the parameter estimates until convergence \citep{hansen1996finite}. \citet{arellano1991some} showed that the test statistic based on one-step estimates other than the optimal generalized method of moments estimates is not robust and tends to over-reject even in large samples. Our test procedure is different from the standard overidentification restrictions tests in this regard.
We do not obtain parameter estimates by minimizing an objective function; rather, we obtain them by solving the optimal estimating equations, with the number of equations equal to the number of parameters. The overidentifying restrictions are used only for testing, not for estimation. This difference allows us to greatly reduce the computational burden. Our simulation studies show that our test statistic has correct size for large samples under the scenarios we considered. Another merit of the overidentification restrictions test is that no bootstrap is needed to compute the test statistic, which could be valuable with the large samples that are increasingly common. \section{Motivating problem and basic setup \label{sec:Motivating-Data}} \subsection{The motivating problem} Combination antiretroviral treatment is the standard initial treatment for HIV, and has considerably reduced the morbidity and mortality in HIV-positive patients. In the HIV literature, findings imply that there are key early events, during acute and early infection, in the pathogenesis of HIV infection that determine the long-term pace of disease progression \citep{hecht2006multicenter}. However, there is no strong evidence on when to start treatment in patients in the acute and early stages of infection. It is important to understand the effect of initiating treatment at different times during the course of HIV infection. This investigation relies on an observational study, where we emulate a counterfactual experiment using causal models. \subsection{The Acute Infection and Early Disease Research Program} The Acute Infection and Early Disease Research Program study is a multicenter, observational cohort study of $1762$ HIV-positive patients diagnosed during acute and early infection (\citealp{hecht2006multicenter}). Dates of infection were estimated based on a stepwise algorithm that uses clinical and laboratory data (\citealp{smith2006lack}). We included patients with CD4 count and viral load measured within $12$ months of the estimated date of infection, which resulted in $1696$ patients. Let $m$ denote the number of months between the estimated date of infection and combination antiretroviral treatment initiation ($m=0,\ldots,11$), where $0$ indicates the estimated date of infection. We are interested in evaluating the impact of $m$ on the effect of one year of treatment. \subsection{Notation} Let $Y_{k}$ be the patient's CD4 count at month $k$ since the estimated date of infection $(k=0,\ldots,K+1\equiv24)$, and let $L_{m}$ be a vector of covariates measured at month $m$, including age, gender, race, injection drug use, CD4 count and viral load. Let $A_{m}$ be one if the patient was on treatment at month $m$ and zero otherwise. We assume that once treatment is started, it is never discontinued. We use overbars to denote a variable history; for example, $\bar{A}_{m}$ is the treatment history until month $m$. Let $T$ be the actual treatment initiation time. The patients are assumed to be an independent sample from a larger population \citep{rubin1978bayesian}, and for notational simplicity we drop the subscript $i$ for patients. To handle missing covariates, if $L$ is missing at month $m$, $L_{m}$ was coded as ``missing''. For intermediate missing outcomes, we imputed $Y_{k}$ by interpolation; if the outcome was missing just prior to onset of treatment, we imputed $Y_{k}$ by carrying the last observation forward.
Let $X\equiv(\bar{A}_{K},\bar{L}_{K},\bar{Y}_{K+1})$ denote the patient's full record. Until Section 6 we assume all patients are followed up until month $K+1$. Let $Y_{k}^{(m)}$ be the CD4 count at month $k$, possibly counterfactual, had the patient started treatment at month $m$. Let $Y_{k}^{(\infty)}$ be the CD4 count at month $k$ had the patient never started treatment during the course of follow-up. We assume the patient's observed outcome $Y_{k}$ is equal to the potential outcome $Y_{k}^{(m)}$ for $m$ equal to the actual treatment initiation time $T$; that is, if $k>T$, $Y_{k}=Y_{k}^{(T)}$ and if $T\geq k$, $Y_{k}=Y_{k}^{(\infty)}$. We make the assumption of no unmeasured confounding (\citealp{robins1992g}): \begin{equation} Y_{k}^{(\infty)}\amalg A_{m}\mid\bar{L}_{m},\bar{A}_{m-1}\ (k=m+1,\ldots,m+12),\label{eq:NUC} \end{equation} where $\amalg$ denotes ``is independent of'' \citep{dawid1979conditional}. This assumption holds if $\bar{L}_{m}$ contains all prognostic factors for $Y_{k}^{(\infty)}$ that affect the treatment decision at month $m$. For example, if patients with lower CD4 counts initiated treatment earlier, assumption (\ref{eq:NUC}) would fail to hold if $\bar{L}_{m}$ did not include the history of the CD4 count. \subsection{Coarse structural nested mean model} We model the treatment effect, comparing treatment starting at month $m$ to never starting, among the subgroup of patients with covariate history $\bar{l}_{m}$ and $T=m$, as \begin{equation} E\{Y_{k}^{(m)}-Y_{k}^{(\infty)}\mid\bar{L}_{m}^{(\infty)}=\bar{l}_{m},T=m\}=\gamma_{m,\psi}^{k}(\bar{l}_{m})\ (k=m,\ldots,m+12),\label{eq:SNMM} \end{equation} where $\psi$ is the parameter in the treatment effect model. From now on, we consider $\gamma_{m,\psi}^{k}(\bar{l}_{m})=(\psi_{1}+\psi_{2}m)(k-m)1_{(m\leq k)},$ with $(k-m)$ the duration of treatment from month $m$ to month $k$. We restrict the range of $k$ from $12$ to $K+1$, whereby we avoid making extra modeling assumptions beyond those necessary to estimate $\gamma_{m}^{m+12}(\bar{l}_{m})$, in order to gain robustness. In particular, $\gamma_{m,\psi}^{m+12}(\bar{l}_{m})$ quantifies the effect of one year of treatment if combination antiretroviral treatment was initiated at month $m$, among the subgroup of patients with covariate history $\bar{l}_{m}$. If the outcome is the CD4 count and $\gamma_{m,\psi}^{m+12}(\bar{l}_{m})>0,$ the effect of one year of treatment is beneficial. $12\psi_{1}$ quantifies the effect of one year of treatment if treatment was started at the estimated date of infection, and $\psi_{2}$ quantifies the increase of the effect of treatment for each month of delay after the estimated date of infection. Under this model, the treatment effect is homogeneous. In practice, the treatment effect may vary among different groups; for example, male and female patients may have different responses to combination antiretroviral treatment. We can then extend the model as $\gamma_{m,\psi}^{k}(\bar{l}_{m})=(\psi_{1}+\psi_{2}m+\psi_{3}\mathrm{gender})(k-m)1_{(m\leq k)},$ where $\psi_{3}$ quantifies the magnitude and direction of the impact of gender. For $k=12,\ldots,K+1$, define $H(k)=Y_{k}-\gamma_{T}^{k}(\bar{L}_{T})$.
As proved in \citet{robins1992g}, \citet{lok2004estimating} and \citet{lok2012impact}, $H(k)$ mimics a counterfactual outcome $Y_{k}^{(\infty)}$ in the sense that for $k=12,\ldots,K+1$ and $m=k-12,\ldots,k-1$, \begin{equation} E\{H(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}\}=E\{Y_{k}^{(\infty)}\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}\},\label{eq:imp1ofNUC} \end{equation} since by subtracting from the observed $Y_{k}$ the average effect of treatment, we obtain a quantity that mimics the outcome had the patient not been treated. The implication of (\ref{eq:imp1ofNUC}) and the assumption of no unmeasured confounding is that for $k=12,\ldots,K+1$ and $m=k-12,\ldots,k-1$, \begin{equation} E\{H(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}\}=E\{H(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\},\label{eq:assumption} \end{equation} which plays a key role for estimation. \subsection{The conditional probability of treatment initiation} We use a pooled logistic regression to model the probability of treatment initiation at month $m$, conditional on the past history, $p_{\theta}(m)\equiv P(A_{m}=1\mid\bar{A}_{m-1}=\bar{0},\bar{L}_{m};\theta)=1_{(\bar{A}_{m-1}=\bar{0})}1_{\mathrm{visit}}(m)/[1+\exp\{-\theta^{T}f(\bar{L}_{m})\}],$ where $1_{\mathrm{visit}}(m)$ is an indicator of whether a visit took place at month $m$, and $f(\bar{L}_{m})$ is some function of $\bar{L}_{m}$. Let $J_{\mathrm{trt}(\theta)}(X)$ denote the estimating function for $\theta_{0}$. \subsection{The unbiased estimating equation and optimal estimation} Model (\ref{eq:SNMM}) cannot be fit by standard regression methods because it involves potential outcomes. However, one can obtain consistent estimates by constructing unbiased estimating equations based on (\ref{eq:assumption}) \citep{lok2012impact}: for any measurable, bounded function $q_{m}^{k}:\bar{\mathcal{L}}_{m}\rightarrow\mathbb{R}^{p}$, $k=12,\ldots,K+1$, $m=k-12,\ldots,k-1$, let \[ G_{(\psi,\theta,q)}(X)\equiv\sum_{k=12}^{K+1}\sum_{m=k-12}^{k-1}q_{m}^{k}(\bar{L}_{m})[H_{\psi}(k)-E\{H_{\psi}(k)|\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}]\{A_{m}-p_{\theta}(m)\}. \] We use empirical process notation throughout. We let $P$ denote the probability measure induced by $X$ and let $P_{n}$ denote the empirical measure induced by $X_{1},...,X_{n}$. Given a measurable function $f:\mathcal{X}\mapsto\mathbb{R}$, we write $P_{n}f(X)=n^{-1}\sum_{i=1}^{n}f(X_{i})$ and $Pf(X)$ for the expectation under $P$. Then \begin{equation} P_{n}\{\begin{array}{cc} G_{(\psi,\theta,q)}(X)^{T} & J_{\mathrm{trt}(\theta)}(X)^{T}\end{array}\}^{T}=0\label{eq:ee} \end{equation} are the stacked unbiased estimating equations for both the parameter $\psi$ and the (nuisance) parameter $\theta$. For simplicity, we will suppress the dependence of the estimating functions on $X$; for example, $P_{n}G_{(\psi,\theta,q)}$ is shorthand for $P_{n}G_{(\psi,\theta,q)}(X)$. Sometimes, we also drop the dependence on the parameters. In theory, $q$ can be chosen arbitrarily; however, it largely influences the precision of the resulting estimator. To derive the optimal estimating equation, and therefore the optimal estimator, we assume that for $k,s=12,\ldots,K+1$ and $m$ with $m=\max(k-12,s-12),\ldots,\min(k-1,s-1,K)$, \begin{equation} \mathrm{cov}\{H(k),H(s)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}\}\amalg A_{m}\mid\bar{L}_{m},\bar{A}_{m-1}.\label{eq:Homo} \end{equation} This assumption is a natural extension of (\ref{eq:assumption}).
This would be true under \citet{robins1998structural}'s rank preservation assumption and $(Y_{k}^{(\infty)},Y_{s}^{(\infty)})\amalg A_{m}\mid\bar{L}_{m},\bar{A}_{m-1}$. However, the rank preservation assumption is unlikely to hold in practice, and assumption (\ref{eq:Homo}) is weaker. The optimal estimating equations, within the class of $P_{n}G_{(\psi,\theta,q)}$ indexed by $q$ for any measurable and bounded functions $q_{m}^{k}$, can be obtained by finding $q^{\mathrm{opt}}$ that satisfies $E\{\partial/\partial\psi^{T}G_{(\psi_{0},\theta_{0},q)}\}=E\{G_{(\psi_{0},\theta_{0},q)}G_{(\psi_{0},\theta_{0},q^{\mathrm{opt}})}^{T}\}$ for any $q$ \citep{newey1994large}. Then, under (\ref{eq:Homo}), \begin{eqnarray} & & q_{m}^{\mathrm{opt}}(\bar{L}_{m})^{T}\equiv\{\mathrm{var}(H_{m}\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0})\}^{-1}\nonumber \\ & & \times\left\{ E\left(\frac{\partial}{\partial\psi}H_{m}\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}=1\right)-E\left(\frac{\partial}{\partial\psi}H_{m}\mid\bar{L}_{m},\bar{A}_{m}=\bar{0}\right)\right\} ,\label{eq:opt} \end{eqnarray} where $q_{m}^{\mathrm{opt}}=(q_{m}^{\mathrm{opt},l},\ldots,q_{m}^{\mathrm{opt},r})$ with $l=\max(m+1,12)$ and $r=\min(m+12,K+1)$, which are informed by the fact that $m+1\leq k\leq m+12$ and $12\leq k\leq K+1$; $H_{m}=\{H_{\psi}(l),\ldots,H_{\psi}(r)\}^{T}$; $\mathrm{var}(H_{m}\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0})$ is a matrix with elements $\Gamma_{ks}^{m}\equiv\mathrm{cov}\{H_{\psi}(k),H_{\psi}(s)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$; and $E(\partial/\partial\psi H_{m}\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}=1)-E(\partial/\partial\psi H_{m}\mid\bar{L}_{m},\bar{A}_{m}=\bar{0})=(\Delta_{m}^{l},\ldots,\Delta_{m}^{r})^{T}$ with $\Delta_{m}^{k}\equiv E\{\partial/\partial\psi H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},A_{m}\}-E\{\partial/\partial\psi H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$. \begin{remark}\label{rmk1}For the optimal estimating equations, defined by (\ref{eq:ee}) and (\ref{eq:opt}), we address two issues. One issue is that $E\{H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ and $q^{\mathrm{opt}}$ depend on $\psi$. Following \citet{lok2012impact}, we use a preliminary consistent estimate $\hat{\psi}_{p}$ to replace $\psi_{0}$ in $E\{H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ and $q^{\mathrm{opt}}$. The rationale for this replacement is that the estimating equations are unbiased for any fixed value of $\hat{\psi}_{p}$ when $p_{\theta}(m)$ is correct. If $\gamma_{m,\psi}^{k}$ is linear in $\psi$, $\Delta_{m}^{k}$ does not depend on $\psi$. One choice of $\hat{\psi}_{p}$ is the optimal estimator if $q_{m}^{k}$ is only non-zero for $k=m+12$. Another issue is that $\Delta_{m}^{k}$ and $E\{H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$, even with $\hat{\psi}_{p}$ in lieu of $\psi$, depend on the true unknown distribution. We will use parametric models to approximate these quantities. Let $E\{H_{\psi_{p}}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ be parametrized by $\xi_{1}$; for example, $E_{\xi_{1}}\{H_{\psi_{p}}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ is a linear regression model with covariates $\bar{L}_{m}$. Likewise, let $\Delta_{m}^{k}$ be parametrized by $\xi_{2}$.
Denote the estimating functions for $\xi_{1}$, $\xi_{2}$ and $\psi_{p}$ by $J_{1(\xi_{1},\psi_{p})}$, $J_{2(\xi_{2})}$ and $G_{p(\psi_{p},\xi_{2})}$. Since $G_{p}$ depends on $\Delta_{m}^{m+12}$, it is also a function of $\xi_{2}$. \end{remark} \section{Asymptotic results of optimal estimators\label{sec:Asymptotic-Results}} We present the consistency and asymptotic normality results for the optimal estimator. These results are the building blocks to derive the goodness-of-fit test statistic. \begin{theorem}[Consistency] \label{Thm1} Let $G_{(\psi,\psi_{p},\xi,\theta)}^{*}$ be the optimal estimating functions \[ G_{(\psi,\psi_{p},\xi,\theta)}^{*}=\sum_{k=12}^{K+1}\sum_{m=k-12}^{k-1}q_{m,\psi_{p},\xi_{2}}^{k,\mathrm{opt}}(\bar{L}_{m})[H_{\psi}(k)-E_{\xi_{1}}\{H_{\psi_{p}}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}]\{A_{m}-p_{\theta}(m)\}, \] and let $U_{(\psi,\psi_{p},\xi,\theta)}=\{\begin{array}{ccccc} G_{(\psi,\psi_{p},\xi_{1},\xi_{2},\theta)}^{*} & G_{p(\psi_{p},\xi_{2})} & J_{1(\xi_{1},\psi_{p})} & J_{2(\xi_{2})} & J_{\mathrm{trt}(\theta)}\end{array}\}^{T}$ be the system of estimating functions stacking all estimating functions together, where $G_{p}$, $J_{1}$ and $J_{2}$ are defined in Remark \ref{rmk1}, and $J_{\mathrm{trt}}$ is defined in (\ref{eq:ee}). Let $(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})$ be the solution to the estimating equations $P_{n}U_{(\psi,\psi_{p},\xi,\theta)}=0$. The true parameter values are $\psi_{0}$, $\xi_{0}$ and $\theta_{0}$. Under the regularity conditions (C1)--(C2) specified in the Supplementary Material, if the treatment effect model $\gamma_{m,\psi}^{k}(\bar{L}_{m})$ is well specified, and either $E_{\xi_{1}}\{H_{\psi}(k)|\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ or $p_{\theta}(m)$ is well specified, then $\hat{\psi}-\psi_{0}\rightarrow0$ in probability, as $n\rightarrow\infty$. \end{theorem} \begin{theorem}[Asymptotic normality] Under the regularity conditions (C1)--(C5) specified in the Supplementary Material, $n^{1/2}(\hat{\psi}-\psi_{0})\rightarrow N_{p}\left(0,\Sigma_{\psi}\right)$ in distribution, as $n\rightarrow\infty$, where $p$ is the dimension of $\psi_{0}$ and $\Sigma_{\psi}$ is the $p\times p$ upper left block of $\{P\,\partial/\partial(\psi,\psi_{p},\xi,\theta)U\}^{-1}P(UU^{T})[\{P\,\partial/\partial(\psi,\psi_{p},\xi,\theta)U\}^{-1}]^{T}$. \end{theorem} \begin{remark} In the statistics literature, estimators solving unbiased estimating equations are often called $Z$-estimators. The theory of consistency and asymptotic normality of $Z$-estimators is well established; see for example Theorem 5.9 and Section 5.3 in \citet{van2000asymptotic}. We skip the detailed proof but explain the regularity conditions needed to guarantee consistency in the Supplementary Material. From Theorem \ref{Thm1}, the functional form of $\gamma_{m,\psi}^{k}$ must be correctly specified. In contrast, the estimator remains consistent for $\psi$ if either $E_{\xi_{1}}\{H_{\psi}(k)|\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ or $p_{\theta}(m)$ is well specified, but not necessarily both. The estimator is doubly robust (\citealp{robins2001inference}; \citealp{van2003unified}). The functional form of the nuisance models can be selected on the basis of the observed data, as well as the literature and subject knowledge specific to the application setting. Later in this article, we provide a more specific illustration in the context of our example.
\end{remark} \section{Goodness-of-fit test\label{sec:Goodness-of-Fit-Test}} The consistency, asymptotic normality, and double robustness of the estimators rely on a key assumption, namely that the treatment effect model is well specified. Misspecification of the treatment effect model causes bias in the parameter estimates and invalidates the asymptotic results. We now develop tests for model specification based on overidentification restrictions tests. Conceptually, for a well specified model, a new set of unbiased estimating functions, other than the optimal ones that are used for estimation, evaluated at the optimal estimators, should be asymptotically concentrated at zero. This asymptotic behavior leads to the following theorem: \begin{theorem}[Goodness-of-Fit Test] Let the treatment effect model be $\gamma_{m,\psi}^{k}(\bar{l}_{m})$ and $H_{\psi}(k)=Y_{k}-\gamma_{T,\psi}^{k}(\bar{l}_{T})$. Choose a set of functions $\{\tilde{q}_{m}^{k}(\bar{L}_{m})\in\mathbb{R}^{\nu},k=12,\ldots,K+1,m=k-12,\ldots,k-1\}$ that are different from the optimal choice $q_{m}^{k,\mathrm{opt}}$. Let \[ \tilde{G}_{(\psi,\psi_{p},\xi,\theta)}=\sum_{k=12}^{K+1}\sum_{m=k-12}^{k-1}\tilde{q}_{m,\xi_{2}}^{k}(\bar{L}_{m})[H_{\psi}(k)-E_{\xi_{1}}\{H_{\psi_{p}}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}]\{A_{m}-p_{\theta}(m)\}. \] Let $(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})$ be as in Theorem \ref{Thm1}. The null hypothesis is $H_{0}$: $\gamma_{m}^{k}(\bar{l}_{m})$ is well specified, and either $E_{\xi_{1}}\{H_{\psi}(k)\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ or $p_{\theta}(m)$ is well specified. Under $H_{0}$ and the regularity conditions (C1)--(C10) specified in the Supplementary Material, the goodness-of-fit test statistic \begin{equation} GOF=n\{P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}\}^{T}\hat{\Sigma}^{-1}P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}\rightarrow\chi^{2}(\nu),\label{eq:chisq} \end{equation} in distribution, as $n\rightarrow\infty$, where $\Sigma$ is the variance of $\Phi_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}$, with $\Phi_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}$ the asymptotic linear representation of $\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$, which is a linear combination of $G^{*}$, $\tilde{G}$, $G_{p}$, $J_{1}$, $J_{2}$ and $J_{\mathrm{trt}}$, defined in (6) in the Supplementary Material, and $\hat{\Sigma}$ is the sample variance of $\Phi_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$. \end{theorem} We state the key steps of the proof, with details in the Supplementary Material. To establish the asymptotic distribution of $n^{1/2}P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$, and therefore that of $GOF$, a key step is to linearise $n^{1/2}P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$ as $n^{1/2}P_{n}\Phi_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}$ for some $\Phi$, whereby we can apply the standard central limit theorem. To this end, the Lipschitz condition (C7) implies that the functions $\tilde{G}_{(\psi,\psi_{p},\xi,\theta)}$ form a Donsker class.
Using Lemma 19.24 of \citet{van2000asymptotic}, we have \begin{eqnarray} \sqrt{n}(P_{n}-P)\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})} & = & \sqrt{n}(P_{n}-P)\tilde{G}_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}+o_{p}(1).\label{eq:donsker} \end{eqnarray} Next, we apply a Taylor expansion to $P\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$ on the left-hand side of (\ref{eq:donsker}) and use the fact that $P\tilde{G}_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}=0$ on the right-hand side of (\ref{eq:donsker}). Finally, we can express $\Phi_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0})}$, the asymptotic linear representation of $\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta})}$, as a linear combination of $G^{*}$, $\tilde{G}$, $G_{p}$, $J_{1}$, $J_{2}$ and $J_{\mathrm{trt}}$. \begin{remark}[Double Robustness] The goodness-of-fit test statistic is doubly robust in the sense that for (\ref{eq:chisq}) to hold we only require that either $E_{\xi_{1}}\{H_{\psi}(k)|\bar{L}_{m},\bar{A}_{m-1}=\bar{0}\}$ or $p_{\theta}(m)$ is well specified, not necessarily both. This property adds protection against possible misspecification of the nuisance models. \end{remark} \begin{remark}The standard overidentification restrictions test is $\min_{\psi}nP_{n}V_{(\psi)}^{T}\{\hat{\Sigma}(\psi)\}^{-1}P_{n}V_{(\psi)},$ where $V_{(\psi)}\equiv(\begin{array}{cc} G_{(\psi,\hat{\psi}_{p},\hat{\xi}_{1},\hat{\xi}_{2},\hat{\theta})}^{*} & \tilde{G}_{(\psi,\hat{\psi}_{p},\hat{\xi},\hat{\theta})}\end{array})^{T}$ and $\Sigma(\psi)$ is the asymptotic variance of $P_{n}V_{(\psi)}$. In most situations, the minimum is obtained by a continuous iterative procedure that updates the parameter estimates, that is, $\hat{\psi}^{(t+1)}=\arg\min_{\psi}nP_{n}V_{(\psi)}^{T}\{\hat{\Sigma}(\hat{\psi}^{(t)})\}^{-1}P_{n}V_{(\psi)}$ until convergence \citep{hansen1996finite}. Our test procedure does not need any iterative procedure, which simplifies the calculation. \end{remark} \begin{remark}[Choosing $\tilde{q}$]Just as a naive choice of $q$ in the estimating equations may lead to an estimator with large variance and thus useless inference, an arbitrary choice of $\tilde{q}$ may lead to the goodness-of-fit test lacking power. We propose the following procedure to choose $\tilde{q}$, which is powerful in certain circumstances. Suppose we have two models to choose from for the treatment effect. Let the null model be $\gamma_{\psi}^{*}$, which is the treatment effect model we are testing for, and let the other model be an alternative model $\tilde{\gamma}_{\psi}$. We can derive $\Delta^{*}$, $q^{*\mathrm{opt}}$, $\tilde{\Delta}$, and $\tilde{q}^{\mathrm{opt}}$ as in (\ref{eq:opt}) with $\gamma_{\psi}^{*}$ and $\tilde{\gamma}_{\psi}$. Note that $q^{*\mathrm{opt}}$ is used for the optimal estimation of the parameters in the null model. Then, candidates for $\tilde{q}$ are $\Delta^{*}$, $\tilde{\Delta}$, $\tilde{q}^{\mathrm{opt}}$, or any subvector of these that is not included in $q^{*\mathrm{opt}}$. Our simulation study shows that the goodness-of-fit test with $\tilde{q}^{\mathrm{opt}}$ is the most powerful among this set of candidates in detecting the alternative model. \end{remark} \section{Extension of the goodness-of-fit test in the presence of censoring\label{sec:Extension-of-GOF-censoring}} We use the Inverse-Probability-of-Censoring-Weighting technique (\citealp{robins1995analysis}; \citealp{hernan2005structural}; \citealp{lok2012impact}) to accommodate patients lost to follow-up.
Let $C_{p}=0$ indicate that a patient remains in the study at month $p$. Following \citet{lok2012impact}, we assume that censoring is missing at random; that is, $(\bar{L},\bar{A})\amalg C_{k+1}\mid\bar{L}_{k},\bar{A}_{k},\bar{C}_{k}=\bar{0}$, whereby we have $P(A_{m}=1\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0},\bar{C}_{m}=\bar{0})=P(A_{m}=1\mid\bar{L}_{m},\bar{A}_{m-1}=\bar{0})$, and $p_{\theta}(m)$ does not depend on censoring. Define the Inverse-Probability-of-Censoring-Weighting version of the estimating functions $G^{*c}$ and $\tilde{G}^{c}$ using weights $W_{m,\eta}^{k}=1/\{\prod_{p=m+1}^{k}P_{\eta}(C_{p}=0\mid\bar{L}_{p-1},\bar{A}_{p-1},\bar{C}_{p-1}=\bar{0})\}$; see the Supplementary Material for details. In the calculation of the weights, we use a pooled logistic regression model to estimate $P_{\eta}(C_{p}=0\mid\bar{L}_{p-1},\bar{A}_{p-1},\bar{C}_{p-1}=\bar{0})$. We assume the censoring model is well specified, with estimating functions $J_{\mathrm{cen}(\eta)}$. Similarly, we have the Inverse-Probability-of-Censoring-Weighting version of the estimating function for the preliminary estimator $\hat{\psi}_{p}$, denoted by $G_{p}^{c}$. For the nuisance regression outcome models, the regression was restricted to patients still in follow-up, using weighted regression analysis with the censoring weights. Define the goodness-of-fit test statistic as \[ GOF^{c}=n\{P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta},\hat{\eta})}^{c}\}^{T}(\hat{\Sigma}^{c})^{-1}P_{n}\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta},\hat{\eta})}^{c}, \] where $\hat{\Sigma}^{c}$ is the sample variance of $\Phi_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta},\hat{\eta})}^{c}$, with $\Phi_{(\psi_{0},\psi_{0},\xi_{0},\theta_{0},\eta_{0})}^{c}$ the asymptotic linear representation of $\tilde{G}_{(\hat{\psi},\hat{\psi}_{p},\hat{\xi},\hat{\theta},\hat{\eta})}^{c}$, defined by (10) in the Supplementary Material. As proved in the Supplementary Material, subject to regularity conditions, $GOF^{c}$ has an asymptotic chi-squared distribution, with degrees of freedom equal to the dimension of $\tilde{G}^{c}$. \section{Simulations\label{sec:Simulations}} The simulation designs were based on the Acute Infection and Early Disease Research Program database, but we did not consider censoring. Following an unpublished 2014 technical report available from the second author, we first generated the CD4 count outcomes $Y_{k}^{(\infty)}$ under no treatment, followed by a treatment initiation time $T$, and lastly the observed outcomes $Y_{k}$ ($k=6,\ldots,30$), as follows: (i) In each sample, $2$ groups were simulated: injection drug users ($10\%$) and patients who never used drugs ($90\%$), and then $\log Y_{6}^{(\infty)}\sim N(6\cdot0,0\cdot4^{2})$ for injection drug users, and $N(6\cdot6,0\cdot5^{2})$ for non-injection drug users. For $k\geq6$, $Y_{k+1}^{(\infty)}=-10+Y_{k}^{(\infty)}+\epsilon_{k+1},$ where $\epsilon_{k}\sim N(0,\sigma_{k}^{2})$ with $\sigma_{k}=52\cdot375-1\cdot625k$ for $k=7,\ldots,19$ and $\sigma_{k}=21\cdot5$ for $k=20,\ldots,30$; (ii) $T$ was generated by a logistic regression model $\mathrm{logit}\{P(T=m\mid T\geq m,\bar{L}_{m})\}=-2\cdot4-0\cdot42\mathrm{injdrug}-0\cdot0035Y_{m}^{(\infty)}-0\cdot026m$, where $\mathrm{injdrug}$ is an indicator of being an injection drug user; and (iii) $Y_{k}=Y_{k}^{(\infty)}+\gamma_{T}^{k}(\bar{L}_{T})$. We considered different models for $\gamma_{m}^{k}$.
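For concreteness, the following minimal sketch (in Python with \texttt{numpy}; function and variable names, and the month range for treatment initiation, are our own choices) implements the data-generating steps (i)--(iii) for a single sample, using the Scenario (a) truth $\gamma_{m}^{k}=(25-0\cdot7m)(k-m)$ defined below:
\begin{verbatim}
# Minimal sketch of simulation steps (i)-(iii); names are ours.
import numpy as np

rng = np.random.default_rng(0)

def simulate_one(n):
    # (i) untreated CD4 trajectories Y_k^(infinity), k = 6, ..., 30
    injdrug = rng.binomial(1, 0.10, n)
    logY6 = np.where(injdrug == 1,
                     rng.normal(6.0, 0.4, n), rng.normal(6.6, 0.5, n))
    Y = np.zeros((n, 31))
    Y[:, 6] = np.exp(logY6)
    for k in range(7, 31):
        sigma = 52.375 - 1.625 * k if k <= 19 else 21.5
        Y[:, k] = -10 + Y[:, k - 1] + rng.normal(0, sigma, n)
    # (ii) treatment initiation time T from the pooled logistic model;
    # the range m = 6, ..., 29 is our assumption
    T = np.full(n, np.inf)                     # inf = never treated
    for m in range(6, 30):
        lp = -2.4 - 0.42 * injdrug - 0.0035 * Y[:, m] - 0.026 * m
        start = np.isinf(T) & (rng.random(n) < 1 / (1 + np.exp(-lp)))
        T[start] = m
    # (iii) observed outcomes under the Scenario (a) truth:
    # Y_k = Y_k^(infinity) + (25 - 0.7 T)(k - T) 1(k > T)
    k = np.arange(31)[None, :]
    dur = np.clip(k - T[:, None], 0, None)
    slope = np.where(np.isinf(T), 0.0, 25 - 0.7 * T)[:, None]
    return Y + slope * dur, T, injdrug

Yobs, T, injdrug = simulate_one(1000)
\end{verbatim}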
The performance of the test statistics was assessed by their ability (i) to confirm the adequacy of a model that agrees with the data-generating model (type-I error) and (ii) to reject a misspecified model (power). The model under the null hypothesis $H_{0}$, upon which the goodness-of-fit statistic is based, and the alternative hypothesis $H_{a}$ were specified as $H_{0}:\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m)$ versus $H_{a}:\gamma_{m,\psi}^{k}\neq(\psi_{1}+\psi_{2}m)(k-m).$ Six scenarios regarding the true treatment effect model, $H_{0}$, and a parametric specification of $H_{a}$ were specified as follows: Scenario (a): True: $\gamma_{m,\psi}^{k}=(25-0\cdot7m)(k-m)$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m)$, and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}m^{2})(k-m)$; Scenario (b): True: $\gamma_{m,\psi}^{k}=(25-0\cdot7m)(k-m)$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}\mathrm{injdrug})(k-m)$, and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}m^{2})(k-m)$; Scenario (c): True: $\gamma_{m,\psi}^{k}=(35-1\cdot1m+0\cdot04m^{2})(k-m)$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m)$, and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}m^{2})(k-m)$; Scenario (d): True: $\gamma_{m,\psi}^{k}=(35-1\cdot1m+0\cdot04k^{2})(k-m)$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m),$ and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}m^{2})(k-m)$; Scenario (e): True: $\gamma_{m,\psi}^{k}=(25-m+0\cdot03m^{2})(k-m)$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m)$, and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{3}+\psi_{4}m)(k-m)^{3/2}$; Scenario (f): True: $\gamma_{m,\psi}^{k}=(10-1\cdot1m)(k-m)^{3/2}$, $H_{0}:$ $\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m),$ and $H_{a}:$ $\gamma_{m,\psi}^{k}=(\psi_{3}+\psi_{4}m)(k-m)^{3/2}$. Specifically, in Scenarios (a) and (b), $H_{0}$ is correctly specified. In Scenarios (c)--(f), $H_{0}$ is misspecified with different degrees of departure from the true model. In Scenarios (c) and (d), $H_{0}$ is nested in the parametric specification of $H_{a}$. In Scenarios (e) and (f), $H_{0}$ is not nested in the parametric specification of $H_{a}$. Type-I error and power were estimated by the frequency of rejecting $H_{0}$ over $1,000$ simulated datasets. We considered the following choices of $\tilde{q}$: (i) $\tilde{q}_{m}^{k}\equiv1$, which is a naive choice for comparison; (ii) $\tilde{q}_{m}^{k}=\tilde{\Delta}_{m}^{k}$; and (iii) $\tilde{q}_{m}^{k}=\tilde{q}_{m}^{\mathrm{opt},k}$. Choices (ii) and (iii) were derived under the parametric specification of $H_{a}$. The optimal estimator was obtained by solving (\ref{eq:ee}) with (\ref{eq:opt}). In (\ref{eq:ee}), the treatment initiation model was fitted by a logistic regression model adjusting for $Y_{m}$, injection drug use, and month, restricted to patients and visits with $\bar{A}_{m-1}=\bar{0}$. Thus, the treatment initiation model was correctly specified. $E\{H_{\psi_{p}}(k)\mid\bar{A}_{m-1}=\bar{0},\bar{L}_{m}\}$ was fitted by a linear regression model adjusting for $\mathrm{CD4}_{m}$ and $(k-m)$, restricted to patients and visits with $\bar{A}_{m-1}=\bar{0}$. The covariates were motivated by (\ref{eq:imp1ofNUC}) and the data generating mechanism. The nuisance models in $q_{m}^{\mathrm{opt}}$ are specified in the Supplementary Material for simplicity of presentation; they do not affect the double robustness of the estimator.
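Given the fitted parameters, the test statistic itself is the simple quadratic form (\ref{eq:chisq}). The following generic sketch (in Python; \texttt{Phi} stands for the per-subject values of the asymptotic linear representation $\Phi$ evaluated at the fitted parameters, which an implementation would compute from the fitted estimating functions) computes $GOF$ and its chi-squared $p$-value:
\begin{verbatim}
# Generic quadratic-form test: GOF = n Gbar' Sigma^{-1} Gbar ~ chi2(nu).
import numpy as np
from scipy import stats

def gof_statistic(Phi):
    """Phi: (n, nu) array of per-subject influence-function values."""
    n, nu = Phi.shape
    Gbar = Phi.mean(axis=0)               # P_n Gtilde
    Sigma = np.cov(Phi, rowvar=False)     # sample variance of Phi
    gof = n * Gbar @ np.linalg.solve(Sigma, Gbar)
    return gof, stats.chi2.sf(gof, df=nu)

# toy usage with nu = 2 restrictions, simulated under the null
Phi = np.random.default_rng(1).normal(size=(1000, 2))
gof, pval = gof_statistic(Phi)
\end{verbatim}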
In addition to the goodness-of-fit test statistic, we considered an elaborated-model-fitting-and-testing approach, which combines the null model with the parametric specification of the alternative model and tests the significance of the parameters corresponding to the alternative model. From Scenarios (a) and (b) in Table \ref{tab:1}, where the treatment effect model under $H_{0}$ is correctly specified, the goodness-of-fit test procedure with all choices of $\tilde{q}$ controls the type-I error for $n=1,000$ and $n=2,000$. This suggests that the chi-squared distribution derived in this article provides an accurate approximation to the finite sample behavior of the goodness-of-fit test statistic for these sample sizes. From Scenarios (c)--(f), where the treatment effect model is not correctly specified, the goodness-of-fit test procedure with the optimal $\tilde{q}_{m}^{k}$ derived under the parametric specification of $H_{a}$ is the most powerful, and as the sample size increases, the power increases, confirming the theoretical results. From Scenarios (c) and (d), the goodness-of-fit test procedure and the elaborated-model-fitting-and-testing approach are comparable when testing nested models. In both scenarios, the goodness-of-fit test procedure is slightly more powerful than the elaborated-model-fitting-and-testing approach for $n=500$ and $n=1,000$, a difference that is no longer apparent for $n=2,000$. For Scenarios (e) and (f), the null treatment effect model is not nested in the parametric specification of $H_{a}$. Under Scenario (e), the goodness-of-fit test statistic with $\tilde{q}_{m}^{\mathrm{opt},k}$ shows more power than the elaborated-model-fitting-and-testing approach, likely because the elaborated-model-fitting-and-testing approach fits a larger model and loses power.
Under Scenario (f), the goodness-of-fit test statistic with $\tilde{q}_{m}^{\mathrm{opt},k}$ is slightly more powerful than the elaborated-model-fitting-and-testing approach for $n=500$, and both approaches have high power to reject the null model in the other cases.

\begin{table}[H]
\centering{}{\scriptsize \protect\caption{Type-I error estimates and power estimates ($\times100$) for testing the null model $H_{0}$ by the proposed goodness-of-fit (GOF) test statistic with $\tilde{q}$ being $1$, $\tilde{\Delta}$, and $\tilde{q}^{\mathrm{opt}}$, and by the elaborated-model-fitting-and-testing (EMFT) approach, over $1,000$ simulations under Scenarios (a)--(f) \label{tab:1}}}{\scriptsize \par}
{\scriptsize
\begin{tabular}{ccccccccc}
 & \multicolumn{4}{c}{Type-I error estimates in Scenario (a)} & \multicolumn{4}{c}{Type-I error estimates in Scenario (b)}\tabularnewline
 & \multicolumn{3}{c}{GOF} & EMFT & \multicolumn{3}{c}{GOF} & EMFT\tabularnewline
$n$\textbackslash{}$\tilde{q}$ & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & \tabularnewline
$500$ & $5\cdot3$ & $4\cdot3$ & $5\cdot2$ & $4\cdot9$ & $9\cdot1$ & $9\cdot8$ & $12\cdot3$ & $13\cdot5$\tabularnewline
$1000$ & $4\cdot5$ & $5\cdot7$ & $5\cdot6$ & $5\cdot4$ & $5\cdot3$ & $4\cdot4$ & $5\cdot4$ & $5\cdot7$\tabularnewline
$2000$ & $4\cdot8$ & $4\cdot4$ & $5\cdot2$ & $5\cdot3$ & $5\cdot2$ & $4\cdot4$ & $5\cdot2$ & $5\cdot3$\tabularnewline
 & \multicolumn{4}{c}{Power estimates in Scenario (c)} & \multicolumn{4}{c}{Power estimates in Scenario (d)}\tabularnewline
 & \multicolumn{3}{c}{GOF} & EMFT & \multicolumn{3}{c}{GOF} & EMFT\tabularnewline
$n$\textbackslash{}$\tilde{q}$ & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & \tabularnewline
$500$ & $15$ & $29$ & $59$ & $56$ & $90$ & $96$ & $97$ & $83$\tabularnewline
$1000$ & $28$ & $55$ & $89$ & $84$ & $100$ & $100$ & $100$ & $97$\tabularnewline
$2000$ & $52$ & $88$ & $99$ & $99$ & $100$ & $100$ & $100$ & $100$\tabularnewline
 & \multicolumn{4}{c}{Power estimates in Scenario (e)} & \multicolumn{4}{c}{Power estimates in Scenario (f)}\tabularnewline
 & \multicolumn{3}{c}{GOF} & EMFT & \multicolumn{3}{c}{GOF} & EMFT\tabularnewline
$n$\textbackslash{}$\tilde{q}$ & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & & $1$ & $\tilde{\Delta}_{m}^{k}$ & $\tilde{q}_{m}^{\mathrm{opt},k}$ & \tabularnewline
$500$ & $12$ & $25$ & $49$ & $28$ & $93$ & $99$ & $100$ & $96$\tabularnewline
$1000$ & $24$ & $53$ & $73$ & $54$ & $100$ & $100$ & $100$ & $100$\tabularnewline
$2000$ & $48$ & $79$ & $91$ & $80$ & $100$ & $100$ & $100$ & $100$\tabularnewline
\end{tabular}}
\end{table}

\section{Application\label{sec:Application-to-Initiating}}

We applied the proposed goodness-of-fit test to study how the timing of combination antiretroviral treatment initiation after infection predicts the effect of one year of treatment in HIV-positive patients, using the Acute Infection Early Disease Research Program database. We started with a simple null model for the treatment effect, $H_{0}:\gamma_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m)(k-m)$, and conducted directed tests against alternative models, testing whether possible effect modifiers should be added to the model.
In the HIV literature, it has been found that there may be gender differences in immunologic response to combination antiretroviral treatment: early studies suggested that clinical disease progression was more rapid in women than in men under combination antiretroviral treatment \citep{friedland1991survival,bozzette1998care}; conversely, more recent studies have shown that women have better immunologic outcomes than men on treatment \citep{maman2012gender,maskew2013gender}. It has also been shown that older age is associated with a poorer CD4 count increase under combination antiretroviral treatment \citep{maman2012gender,maskew2013gender}. Injection drug use has been found to be associated with reduced effectiveness of combination antiretroviral treatment \citep{poundstone2001differences}. As suggested by this literature, we considered tests directed at three variables: gender, age, and injection drug use. For the test directed at a variable $Z$, we calculated the goodness-of-fit test statistic with $\tilde{q}$ being the optimal form derived from the parametric specification of the alternative model $\tilde{\gamma}_{m,\psi}^{k}=(\psi_{1}+\psi_{2}m+\psi_{3}Z)(k-m)1_{(k>m)}$.

The nuisance models were specified on the basis of the observed data, the clinical literature, and subject-matter knowledge. For the censoring model, we used a logistic regression model adjusting for the square root of the current CD4 count ($\mathrm{CD4}_{m}^{1/2}$), current log viral load, gender, age, injection drug use (injdrug), month, squared month, and whether a patient was treated, as discussed in \citet{krishnan2011incidence} and \citet{lok2010long}. For the treatment initiation model, we used a logistic regression model including $\mathrm{CD4}_{m}^{1/2}$, current log viral load, gender, age, injection drug use, month, days since last visit, an indicator of first visit, an indicator of second visit, and race, as discussed in \citet{lok2014opt}. For $E_{\xi_{2}}\{H(k)\mid\bar{L}_{m},\bar{A}_{m}=\bar{0}\}$, we used a regression model adjusting for $\mathrm{CD4}_{m}$, $\mathrm{CD4}_{m}^{3/4}(k-m)$, $\mathrm{CD4}_{m}^{3/4}\mathrm{age}(k-m)$, $\mathrm{CD4}_{m}^{3/4}\mathrm{race}(k-m)$, $\mathrm{CD4}_{m}^{3/4}\mathrm{injdrug}(k-m)$, whether there is a CD4 slope measure, $\mathrm{CD4slope}_{m}(k-m)^{1/2}$, $(6-m)^{+}$, and $(6^{2}-m^{2})^{+}$, where $a^{+}\equiv a\times1(a>0)$. The model was motivated by (\ref{eq:imp1ofNUC}). The inclusion of $\mathrm{CD4}_{m}$, $\mathrm{CD4}_{m}^{3/4}(k-m)$, and $\mathrm{CD4}_{m}^{3/4}\mathrm{age}(k-m)$ was suggested by a stochastic model of $Y_{ik}^{(\infty)}$ for each patient $i$ over time $k$, $\{Y_{ik}^{(\infty)}\}^{1/4}=a_{i}+bk+\gamma_{1}\mathrm{age}+\gamma_{2}\mathrm{age}\,k+\phi W_{ik}+\epsilon_{ik}$, where $a_{i}$ is a normal random effect, $W_{ik}$ is a Brownian motion process, $\epsilon_{ik}$ is normal with mean zero and constant variance, and $a_{i}$, $W_{ik}$, $\epsilon_{ik}$ and $\mathrm{age}$ are independent \citep{taylor1994stochastic}. Other covariates were suggested in \citet{taylor1998does} and \citet{may2009cd4}.
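The stochastic model above is straightforward to simulate. The sketch below is our illustration only; the parameter values are arbitrary placeholders, not estimates from the AIEDRP data. It generates untreated trajectories on the quarter-root scale and back-transforms them to the CD4 scale.
\begin{verbatim}
import numpy as np

def simulate_untreated_cd4(n_patients, n_months, age, rng,
                           a_mean=4.5, a_sd=0.3, b=-0.02,
                           gamma1=0.0, gamma2=-0.001,
                           phi=0.05, eps_sd=0.1):
    # {Y_ik}^{1/4} = a_i + b k + g1 age + g2 age k + phi W_ik + eps_ik,
    # a_i: normal random effect; W_ik: Brownian motion at integer months;
    # eps_ik: independent mean-zero normal errors.
    k = np.arange(n_months)
    a = rng.normal(a_mean, a_sd, (n_patients, 1))
    w = np.cumsum(rng.normal(0.0, 1.0, (n_patients, n_months)), axis=1)
    eps = rng.normal(0.0, eps_sd, (n_patients, n_months))
    y4 = a + b*k + gamma1*age[:, None] + gamma2*age[:, None]*k + phi*w + eps
    return np.maximum(y4, 0.0)**4   # back-transform to the CD4 scale

rng = np.random.default_rng(1)
cd4 = simulate_untreated_cd4(100, 13, rng.uniform(20, 60, 100), rng)
\end{verbatim}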
Table \ref{tab:The-AIEDRP-data:} shows the results from fitting the optimal estimator of the null treatment effect model, along with the goodness-of-fit tests directed at gender, age, and injection drug use. The $p$-values are all greater than $0\cdot05$. To avoid multiple testing problems, we did not consider other tests; the three tests were specified prior to the actual calculations.

The results show a benefit of combination antiretroviral treatment; for example, starting treatment at the estimated date of infection would lead to an expected added improvement in CD4 count of $12\hat{\psi}_{1}=299$ $\text{cells/mm}^{3}$ after a year of therapy. Delaying treatment initiation during acute and early infection may diminish the CD4 count gain associated with one year of treatment ($\hat{\psi}_{2}<0$); however, this result is not statistically significant.

\begin{table}[H]
\centering{}{\scriptsize \protect\caption{\label{tab:The-AIEDRP-data:}The Acute Infection Early Disease Research Program data: the optimal estimator fitting the null treatment effect model: point estimates ($95\%$ confidence intervals based on the asymptotic normality result), along with goodness-of-fit statistics (Statistic), associated degrees of freedom (DF), and $p$-values for the adequacy of the null model, testing whether gender, age, or injection drug use should be added to the model}}{\scriptsize \par}
\begin{tabular}{cccc}
\multicolumn{2}{c}{$\hat{\psi}_{1}$ ($95\%$ CI)} & \multicolumn{2}{c}{$\hat{\psi}_{2}$ ($95\%$ CI)}\tabularnewline
\multicolumn{2}{c}{$24\cdot88(21\cdot61,28\cdot15)$} & \multicolumn{2}{c}{$-0\cdot48(-1\cdot47,0\cdot52)$}\tabularnewline
\multicolumn{4}{c}{Goodness-of-fit test}\tabularnewline
 & Statistic & DF & $p$-value\tabularnewline
Test directed at gender & $0\cdot99$ & $1$ & $0\cdot32$\tabularnewline
Test directed at age & $0\cdot80$ & $1$ & $0\cdot37$\tabularnewline
Test directed at injection drug use & $2\cdot93$ & $1$ & $0\cdot09$\tabularnewline
\end{tabular}
\end{table}

\section{Discussion\label{sec:Discussion}}

The goodness-of-fit test procedure presented in this article is broadly applicable in the causal inference literature. The testing procedure can also be developed for traditional structural nested mean models \citep{robins1994correcting}, beyond the time-dependent coarse structural nested mean models considered in this article, and for marginal structural models \citep{robins2000marginal}, because both approaches yield overidentification of the parameters. For the Inverse-Probability-of-Censoring-Weighting estimator of marginal structural models, unbiased estimating equations are $P_{n}q(V)\{Y-\mu(\bar{A},V)\}w(\bar{A}\mid\bar{L})=0,$ where $Y$ is the outcome at the end of the study, $\bar{A}$ is the treatment history, $V$ is a subset of the baseline covariates, $\mu(\bar{a},V)\equiv E(Y^{\bar{a}}\mid V)$ is the marginal structural model, with $Y^{\bar{a}}$ the counterfactual outcome had every individual received the treatment $\bar{a}$, and $w(\bar{A}\mid\bar{L})$ is the inverse of the conditional probability of receiving the actual treatment $\bar{A}$ given $\bar{L}$. These equations are unbiased for most choices of $q(V)$, leading to a large class of unbiased estimating equations. The literature on structural nested mean models and marginal structural models concentrates on estimation and efficiency; little attention has been given to goodness-of-fit tests. Our test procedure for the treatment effect model can be developed in these contexts in the same manner.
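To make the estimating equations displayed above concrete, here is a minimal sketch; this is our illustration, not the implementation behind the reported results. For a linear marginal structural model $\mu(\bar{a},V)=\theta^{\mathrm{T}}x(\bar{a},V)$ with $q(V)$ taken equal to the regressors, solving $P_{n}q(V)\{Y-\mu(\bar{A},V)\}w(\bar{A}\mid\bar{L})=0$ reduces to weighted least squares; in practice the weights $w$ would be estimated from a model for treatment given covariate history.
\begin{verbatim}
import numpy as np

def ipw_msm_linear(X, y, w):
    # Solves sum_i w_i x_i (y_i - x_i' theta) = 0, the empirical
    # version of P_n q(V){Y - mu(A,V)} w(A|L) = 0 with q = x:
    # theta = (X' W X)^{-1} X' W y.
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
\end{verbatim}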
Our goodness-of-fit test procedure can also deal with treatments of the form ``initiate treatment when the CD4 count first drops below $x$''. Especially in resource-limited countries, and historically also in the US, initiation of combination antiretroviral treatment has been decided on the basis of a CD4 count threshold. \citet{orellana2010dynamic_a} proposed dynamic regime marginal structural models and \citet{lok2007optimalstart} used structural nested mean models to compare dynamic treatment regimes of this form simultaneously and to estimate the optimal one. Given the popularity of these methods, the development of goodness-of-fit tests in these settings will be useful for model diagnostics and will help protect causal estimates from bias introduced by misspecification of the treatment effect model.

\section*{Acknowledgements}

We are grateful to the patients who volunteered for AIEDRP, to the AIEDRP study team, and to Susan Little, Davey Smith, and Christy Anderson for their help and advice in interpreting the AIEDRP database. We would like to thank James Robins and Victor DeGruttola for insightful and fruitful discussions. This work was sponsored by the Milton Fund, the Career Incubator Fund award from the Harvard School of Public Health, and NIH grants NIAID R01AI100762, R01 AI51164, R37 AI032475, AI43638, AI74621, and AI36214.

\section*{Supplementary material}

Supplementary material available at \textit{Biometrika} online includes regularity conditions, the proof of Theorem 3, the derivation of the goodness-of-fit test statistic in the presence of censoring, and the nuisance regression outcome models used in the simulation.

\bibliographystyle{plainnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The purpose of this paper is to prove the following theorem: \begin{rteorema}\label{cucu} Let $A$ be an excellent (in fact $J-2$) ring and let $N\subseteq M$ be two finitely generated $A$-modules such that ${\rm dim}(M/N)\leq 1$. Then there exists an integer $s\geq 1$ such that, for all integers $n\geq s$ and for all ideals $I$ of $A$, \begin{eqnarray*} I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)\, . \end{eqnarray*} \end{rteorema} This result is a variation of a theorem of Duncan and O'Carroll \cite{do}: maximal ideals are replaced by arbitrary ideals, at the price of the hypothesis ${\rm dim}(M/N)\leq 1$, which is unavoidable, as an example of Wang shows \cite{wang1}. Moreover, it provides a partial positive answer to the question raised by Huneke in Conjecture 1.3 of \cite{huneke1}. We begin by recalling what are called uniform Artin-Rees properties. Let $A$ be a noetherian ring, $I$ an ideal of $A$ and $N\subseteq M$ two finitely generated $A$-modules. The usual Artin-Rees lemma states that there exists an integer $s\geq 1$, depending on $N$, $M$ and $I$, such that for all $n\geq s$, \begin{eqnarray*} I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)\, . \end{eqnarray*} In particular, $I^{n}M\cap N\subseteq I^{n-s}N$. As in \cite{huneke1}, let us say that the pair $(N,M)$ has the {\em uniform Artin-Rees property} with respect to a set of ideals $\mathcal{W}$ of $A$, with {\em uniform number} $s$ ($s$ depending on $(N,M;\mathcal{W})$), if, for every ideal $I$ in $\mathcal{W}$ and for all $n\geq s$, $I^{n}M\cap N\subseteq I^{n-s}N$; the pair has the {\em strong} uniform Artin-Rees property, with {\em strong uniform number} $s$, if $I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)$. Clearly, if $s$ is a (strong) uniform number for $(N,M,\mathcal{W})$ and $t\geq s$, then $t$ is also a (strong) uniform number for $(N,M,\mathcal{W})$. The minimum of all such (strong) uniform numbers will be denoted by $s=s(N,M,\mathcal{W})$ and called ``the'' (strong) uniform number for $(N,M,\mathcal{W})$. If $\mathcal{W}$ is the set of all ideals of $A$, we delete the phrase ``with respect to $\mathcal{W}$'' and simply write $s=s(N,M)$. Eisenbud and Hochster \cite{eh} ask whether a pair $(N,M)$ has the uniform Artin-Rees property with respect to the set of maximal ideals of $A$. O'Carroll \cite{ocarroll1} proves that if $A$ is excellent, then every pair $(N,M)$ of finitely generated $A$-modules has the uniform Artin-Rees property with respect to the set of maximal ideals, and Duncan and O'Carroll \cite{do} generalize this result to the strong uniform Artin-Rees property. Later, O'Carroll \cite{ocarroll2} shows the strong uniform Artin-Rees property with respect to the set of principal ideals of a noetherian ring $A$. Nevertheless, the strong uniform Artin-Rees property cannot hold for the class of all ideals of $A$. Indeed, Wang \cite{wang1} shows that if $(A,\mathfrak{m})$ is a 3-dimensional regular local ring, $\mathfrak{m}=(x,y,z)$, $I_{k}=(x^{k},y^{k},x^{k-1}y+z^{k})$ and $J=(z)$, then there does not exist an $s\geq 1$ such that, for all $n\geq s$ and for all $k\geq 1$, $I^{n}_{k}\cap J=I^{n-s}_{k}(I^{s}_{k}\cap J)$. Remark that ${\rm dim}(A/J)=2$; thus, in this sense, Theorem \ref{cucu} cannot be improved. On the other hand, Huneke \cite{huneke1} shows the uniform Artin-Rees property with respect to the class of all ideals of a noetherian ring $A$ when $A$ is either essentially of finite type over a noetherian local ring, or a ring of characteristic $p$ that is module-finite over $A^{p}$, or essentially of finite type over $\mathbb{Z}$.
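Before going on, it may help to fix ideas with a trivial example (this illustration is ours and is not needed in what follows). Take $A=k[x]$, $I=(x)$, $M=A$ and $N=(x^{a})$, $a\geq 1$. Then, for all $n\geq a$, \begin{eqnarray*} I^{n}M\cap N=(x^{n})\cap (x^{a})=(x^{n})=(x^{n-a})(x^{a})=I^{n-a}(I^{a}M\cap N)\, , \end{eqnarray*} whereas, for $s<a$ and $n=s+1\leq a$, $I^{n}M\cap N=(x^{a})\neq (x^{a+1})=I^{n-s}(I^{s}M\cap N)$. Hence $s=a$ is the minimal strong uniform number for the pair $(N,M)$ with respect to $\{ I\} $, and it grows with $N$. Since ${\rm dim}(M/N)=0$, Theorem \ref{cucu} applies; in fact, here $s=a$ works uniformly for all ideals of $A$, all of them being principal.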
In the same paper \cite{huneke1}, Huneke conjectures that this theorem remains true for excellent noetherian rings of finite Krull dimension. Thus Theorem \ref{cucu} gives a partial positive answer to this conjecture. Since the strong uniform Artin-Rees property is not true in general, Huneke \cite{huneke2} asks for classes of ideals where the strong uniform Artin-Rees property holds. If $A$ is regular local and $J$ is an ideal of $A$, does there exist an $s\geq 1$ such that, for all $n\geq s$ and for every ideal $I$ of $A$ whose image in $A/J$ is generated by a system of parameters, $I^{n}\cap J=I^{n-s}(I^{s}\cap J)$? In fact, Lai \cite{lai} proves that this property is equivalent to the Relation-Type Conjecture, stated by Huneke, and proved by Wang \cite{wang2} for rings with finite local cohomology. The {\em relation type} of an ideal $I$, ${\rm rt}(I)$, is the largest degree of any minimal homogeneous system of generators of the ideal defining the Rees algebra of $I$. The Relation-Type Conjecture asks whether there is an integer $s\geq 1$ such that, for every parameter ideal $I$ of a complete local equidimensional noetherian ring $A$, the relation type of $I$ satisfies ${\rm rt}(I)\leq s$. In order to prove Theorem \ref{cucu} we generalize this relationship between the strong uniform Artin-Rees property and the existence of uniform bounds for the relation type. First we define ${\rm rt}(I;M)$, the relation type of an ideal $I$ with respect to an $A$-module $M$ (Section \ref{ja}). Then we consider ${\rm grt}(M)={\rm sup}\{ {\rm rt}(I;M)\mid I \mbox{ ideal of } A\} $, the supremum (possibly infinite) of all relation types of ideals $I$ of $A$ with respect to $M$, and call it the {\em global relation type} of the $A$-module $M$. We prove: \begin{rteorema} Let $A$ be a commutative ring, $\mathcal{W}$ a set of ideals of $A$, $I\in \mathcal{W}$ and $N\subseteq M$ two $A$-modules. Let $s(N,M;\mathcal{W})$ denote the strong uniform number for the pair $(N,M)$ with respect to $\mathcal{W}$. Then $s(N,M;\{ I\} )\leq {\rm rt}(I;M/N)\leq {\rm max}({\rm rt}(I;M),s(N,M;\{ I\}))$. In particular, $s(N,M;\mathcal{W})\leq {\rm sup}\{ {\rm rt}(J;M/N)\mid J\in \mathcal{W}\}$ and $s(N,M)\leq {\rm grt}(M/N)$. \end{rteorema} We thus ask whether a module has finite global relation type. A very special case is already known: for a commutative (not necessarily noetherian or local) domain $A$, ${\rm grt}(A)=1$ is equivalent to $A$ being a Pr\"ufer domain \cite{costa} and, more generally, the commutative rings with ${\rm grt}(A)=1$ are known to be the rings of weak dimension at most one \cite{planas1}. Thus, for a noetherian local ring $A$, ${\rm grt}(A)=1$ if and only if $A$ is a discrete valuation ring or a field. Our guide here is the following celebrated theorem of Cohen and Sally \cite{cohen}, \cite{sally}: for a commutative noetherian local ring $(A,\mathfrak{m},k)$, ${\rm sup}\{ \mu (I)\mid I \mbox{ ideal of } A\} <\infty $ is equivalent to ${\rm dim}\, A\leq 1$, where $\mu (I)={\rm dim}_{k}(I/\mathfrak{m}I)$ stands for the minimum number of generators of $I$. We then prove the expected analogous result, replacing $\mu (I)$ with the relation type ${\rm rt}(I)$ of $I$. Concretely: \begin{rteorema}\label{main} Let $A$ be an excellent (in fact $J-2$) ring. The following conditions are equivalent: \begin{itemize} \item[$(i)$] ${\rm grt}(M)<\infty $ for every finitely generated $A$-module $M$. \item[$(ii)$] ${\rm grt}(A)<\infty $.
\item[$(iii)$] There exists an $r\geq 1$ such that ${\rm rt}(I)\leq r$ for every three-generated ideal $I$ of $A$. \item[$(iv)$] There exists an $r\geq 1$ such that $(x^{r}y)^{r}\in (x^{r+1},y^{r+1})(x^{r+1},y^{r+1},x^{r}y)^{r-1}$ for all $x,y\in A$. \item[$(v)$] ${\rm dim}\, A\leq 1$. \end{itemize} \end{rteorema} The paper is organized as follows. Section \ref{ja} is dedicated to recalling some definitions and properties of the module of effective relations of a graded algebra. In order to prove Theorem 2, we need to generalize them from graded algebras to graded modules. Once all the machinery is introduced, we prove Theorem 2 in Section \ref{rtim}. In Section \ref{1dim} we prove that rings of finite global relation type have dimension one or less, and in Section \ref{0dim} we show that zero dimensional modules over noetherian rings have finite global relation type. This constitutes half of the proof of Theorem 3. In Section \ref{jor}, we first consider the local case and reduce to Cohen-Macaulay modules. Then we give a new proof, now for modules, of a well-known result for rings (see, for instance, \cite{trung2}): if $I$ is an $\mathfrak{m}$-primary ideal of a one dimensional Cohen-Macaulay local ring $A$ and $M$ is a maximal Cohen-Macaulay $A$-module, then ${\rm rt}(I;M)\leq e(A)$; that is, the relation type of $I$ with respect to $M$ is bounded above by the multiplicity of $A$. We conclude that one dimensional finitely generated modules over noetherian local rings have finite global relation type. Section \ref{final} completes all the proofs. Throughout, $A$ denotes a commutative ring with unity. All tensor products are over $A$ unless otherwise specified. The dimension of a ring or module always means Krull dimension. One of the main tools in this note is the module of effective relations of a graded algebra or module. In order to recall some of its general properties we will often refer to \cite{planas2}. \section{Preliminaries}\label{ja} Let $A$ be a commutative ring. By a {\em standard} $A$-algebra we mean a commutative graded $A$-algebra $U=\oplus _{n\geq 0}U_{n}$ with $U_{0}=A$ and such that $U$ is generated as an $A$-algebra by the elements of $U_{1}$. Put $U_{+}=\oplus _{n>0}U_{n}$, the {\em irrelevant ideal} of $U$. If $E=\oplus _{n\geq 0}E_{n}$ is a graded $U$-module and $r\geq 0$ is an integer, we denote by $F_{r}(E)$ the submodule of $E$ generated by the elements of degree at most $r$. Put (possibly infinite) $s(E)={\rm min}\{ r\geq 1\mid E_{n}=0 \mbox{ for all } n\geq r+1\} $. Remark that we are only interested in the case $s(E)\geq 1$. Since for all $n\geq 1$, $(E/U_{+}E)_{n}=E_{n}/U_{1}E_{n-1}$, then for all $r\geq 1$, the following three conditions are equivalent: $F_{r}(E)=E$; $s(E/U_{+}E)\leq r$; and $E_{n}=U_{1}E_{n-1}$ for all $n\geq r+1$. If $f:V\rightarrow U$ is a surjective graded morphism of standard $A$-algebras, we denote by $E(f)$ the graded $A$-module $E(f)={\rm ker}f/V_{+}{\rm ker}f=\oplus _{n\geq 1}{\rm ker}f_{n}/V_{1}{\rm ker}f_{n-1}=\oplus _{n\geq 1}E(f)_{n}$. The following is an elementary but very useful fact (Lemma 2.1 \cite{planas2}): if $f:V\rightarrow U$ and $g:W\rightarrow V$ are two surjective graded morphisms of standard $A$-algebras, then there exists a graded exact sequence of $A$-modules $E(g)\rightarrow E(f\circ g)\buildrel g\over \rightarrow E(f)\rightarrow 0$. In particular, $s(E(f))\leq s(E(f\circ g))\leq {\rm max}(s(E(f)),s(E(g)))$. Moreover, if $V$ and $W$ are two symmetric algebras, then $E(g)_{n}=0$ and $E(f\circ g)_{n}=E(f)_{n}$ for all $n\geq 2$.
Let $U$ be a standard $A$-algebra, let ${\bf S}(U_{1})$ be the symmetric algebra of $U_{1}$ and let $\alpha :{\bf S}(U_{1})\rightarrow U$ be the surjective graded morphism of standard $A$-algebras induced by the identity on $U_{1}$. The {\em module of effective} $n$-{\em relations} of $U$ is defined to be $E(U)_{n}=E(\alpha )_{n}={\rm ker}\alpha _{n}/U_{1}{\rm ker}\alpha _{n-1}$ (for $n=0,1$, $E(U)_{n}=0$). Put $E(U)=\oplus _{n\geq 2}E(U)_{n}=\oplus _{n\geq 2}E(\alpha )_{n}=E(\alpha )= {\rm ker}\alpha /{\bf S}_{+}(U_{1}){\rm ker}\alpha$. The {\em relation type} of $U$ is defined to be ${\rm rt}(U)=s(E(U))$, that is, ${\rm rt}(U)$ is the minimum positive integer $r\geq 1$ such that the effective $n$-relations are zero for all $n\geq r+1$. A {\em symmetric presentation} of $U$ is a surjective graded morphism of standard $A$-algebras $f:V\rightarrow U$, where $V={\bf S}(V_{1})$ is the symmetric $A$-algebra of the $A$-module $V_{1}$ (for instance, $V_{1}=U_{1}$ and $f_{1}=1$, or $f_{1}:V_{1}\rightarrow U_{1}$ a free presentation of $U_{1}$). Using Lemma 2.1 in \cite{planas2} one deduces that $E(U)_{n}=E(f)_{n}$ for all $n\geq 2$ and $s(E(U))=s(E(f))$. Thus the module of effective $n$-relations and the relation type of a standard $A$-algebra are independent of the chosen symmetric presentation. For an ideal $I$ of $A$, the module of effective $n$-relations and the relation type of $I$ are defined to be $E(I)_{n}=E(\rees{I})_{n}$ and ${\rm rt}(I)={\rm rt}(\rees{I})$, where $\rees{I}=\oplus _{n\geq 0}I^{n}t^{n}\subset A[t]$ is the {\em Rees algebra of} $I$. If ${\rm rt}(I)<\infty $ (for instance, if $A$ is noetherian), then ${\rm rt}(I)={\rm rt}(\agr{I})$, where $\agr{I}=\oplus _{n\geq 0}I^{n}/I^{n+1}$ is the {\em associated graded ring of} $I$ (Proposition 3.3 \cite{planas2}). \medskip Let us now extend the classical notion of relation type of an ideal to the relation type of an ideal with respect to a module. Some of the results we present here are a straightforward generalization of former results. We thus will skip some details. \begin{definicio}{\rm Let $U=\oplus _{n\geq 0}U_{n}$ be a standard $A$-algebra and $F=\oplus _{n\geq 0}F_{n}$ a graded $U$-module. We will say $F$ is a {\em standard} $U$-module if $F$ is generated as an $U$-module by the elements of $F_{0}$, that is, $F_{n}=U_{n}F_{0}$ for all $n\geq 0$. In particular, $F_{n}=U_{1}F_{n-1}$ for all $n\geq 1$. }\end{definicio} \begin{exemples}{\rm Some of the most interesting standard modules for our purposes are the following: \begin{itemize} \item[$(1)$] If $U$ is a standard $A$-algebra and $M$ is an $A$-module, then $U\otimes M$ is a standard $U$-module ($M$ in degree zero). If $U_{1}=A^{\oplus n}$ is a finitely generated free $A$-module and $U={\bf S}(U_{1})$ is the symmetric algebra of $U_{1}$, then $U\otimes M=A[T_{1},\ldots ,T_{n}]\otimes M=M[T_{1},\ldots ,T_{n}]$. \item[$(2)$] $\reesw{I}{M}=\oplus _{n\geq 0}I^{n}M$, the {\em Rees module of an ideal} $I$ {\em of} $A$ {\em with respect to an} $A$-{\em module} $M$, is a standard $\rees{I}$-module. \item[$(3)$] $\agrw{I}{M}=\oplus _{n\geq 0}I^{n}M/I^{n+1}M$, the {\em associated graded module of an ideal} $I$ {\em of} $A$ {\em with respect to an} $A$-{\em module} $M$, is a standard $\agr{I}$-module. \end{itemize} }\end{exemples} Let $U=\oplus _{n\geq 0}U_{n}$ be a standard $A$-algebra and $F=\oplus _{n\geq 0}F_{n}$, $G=\oplus _{n\geq 0}G_{n}$ be two graded $U$-modules. 
If $\varphi :G\rightarrow F$ is a surjective graded morphism of $U$-modules, we denote by $E(\varphi )$ the graded $A$-module $E(\varphi )={\rm ker}\varphi /U_{+}{\rm ker}\varphi ={\rm ker}\varphi _{0}\oplus (\oplus _{n\geq 1}{\rm ker}\varphi _{n}/U_{1}{\rm ker}\varphi _{n-1})=\oplus _{n\geq 0}E(\varphi )_{n}$. The following is a generalization of Lemma 2.1 in \cite{planas2}: \begin{lema}\label{gener} If $\varphi :G\rightarrow F$ and $\psi :H\rightarrow G$ are two surjective graded morphisms of graded $U$-modules, then there exists a graded exact sequence $E(\psi )\rightarrow E(\varphi \circ \psi )\buildrel \psi \over \rightarrow E(\varphi )\rightarrow 0$ of $A$-modules. In particular, $s(E(\varphi ))\leq s(E(\varphi \circ \psi ))\leq {\rm max}(s(E(\varphi )),s(E(\psi )))$. Moreover, if $H={\bf S}(P)\otimes Q$ is the tensor product of the symmetric algebra of the $A$-module $P$ with the $A$-module $Q$, $G={\bf S}(M)\otimes N$ is the tensor product of the symmetric algebra of the $A$-module $M$ with the $A$-module $N$ and $\psi =f\otimes h$ where $f:{\bf S}(P)\rightarrow {\bf S}(M)$ is induced by an epimorphism $f_{1}:P\rightarrow M$ and $h:Q\rightarrow N$ is also surjective, then $E(\psi )_{n}=0$ and $E(\varphi \circ \psi )_{n}=E(\varphi )_{n}$ for all $n\geq 2$. \end{lema} \noindent {\em Proof}. To deduce the existence of the exact sequence we proceed as in Lemma 2.1 in \cite{planas2}. For the second assertion, consider the following commutative diagram of exact rows: \begin{picture}(330,125)(-5,0) \put(200,100){\makebox(0,0){\mbox{\footnotesize ${\bf \Lambda}_{2}(U_{1})\otimes {\bf S}_{n-2}(P)\otimes Q$}}} \put(300,100){\makebox(0,0){\mbox{\footnotesize ${\bf \Lambda}_{2}(U_{1})\otimes {\bf S}_{n-2}(M)\otimes N$}}} \put(370,100){\makebox(0,0){$0$}} \put(245,100){\vector(1,0){8}} \put(347,100){\vector(1,0){13}} \put(100,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes {\rm ker}\psi _{n-1}$}}} \put(200,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes {\bf S}_{n-1}(P)\otimes Q$}}} \put(300,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes {\bf S}_{n-1}(M)\otimes N$}}} \put(370,60){\makebox(0,0){$0$}} \put(135,60){\vector(1,0){22}} \put(240,60){\vector(1,0){20}} \put(340,60){\vector(1,0){20}} \put(40,20){\vector(1,0){30}} \put(135,20){\vector(1,0){30}} \put(235,20){\vector(1,0){30}} \put(325,20){\vector(1,0){35}} \put(30,20){\makebox(0,0){$0$}} \put(100,20){\makebox(0,0){{\footnotesize ${\rm ker}\psi _{n}$}}} \put(200,20){\makebox(0,0){{\footnotesize ${\bf S}_{n}(P)\otimes Q$}}} \put(300,20){\makebox(0,0){{\footnotesize ${\bf S}_{n}(M)\otimes N$}}} \put(370,20){\makebox(0,0){$0$}} \put(100,50){\vector(0,-1){20}} \put(200,50){\vector(0,-1){20}} \put(200,50){\vector(0,-1){16}} \put(300,50){\vector(0,-1){20}} \put(300,50){\vector(0,-1){16}} \put(170,40){\makebox(0,0){{\footnotesize $\partial _{1,n}^{P}\otimes 1_{Q}$}}} \put(330,40){\makebox(0,0){{\footnotesize $\partial _{1,n}^{M}\otimes 1_{N}$}}} \put(200,90){\vector(0,-1){20}} \put(300,90){\vector(0,-1){20}} \put(170,80){\makebox(0,0){{\footnotesize $\partial _{2,n}^{P}\otimes 1_{Q}$}}} \put(330,80){\makebox(0,0){{\footnotesize $\partial _{2,n}^{M}\otimes 1_{N}$}}} \put(250,110){\makebox(0,0){\mbox{\footnotesize $1\otimes \psi _{n-2}$}}} \put(250,70){\makebox(0,0){\mbox{\footnotesize $1\otimes \psi _{n-1}$}}} \put(250,10){\makebox(0,0){\mbox{\footnotesize $\psi _{n}$}}} \end{picture} \noindent where $\partial _{2,n}^{P}((x\wedge y)\otimes z)=y\otimes xz-x\otimes yz$ and $\partial _{1,n}^{P}(x\otimes t)=xt$, $x,y\in U_{1}$, $z\in {\bf 
S}_{n-2}(P)$, $t\in {\bf S}_{n-1}(P)$ ($\partial ^{M}$ is defined analogously). By Theorem 2.4 in \cite{planas2}, the right and middle columns are exact sequences for all $n\geq 2$ and, by the snake lemma, the sequences ${\rm ker}(\partial _{1,n}^{P}\otimes 1_{Q})\rightarrow {\rm ker}(\partial _{1,n}^{M}\otimes 1_{N})\rightarrow E(\psi )_{n}\rightarrow 0$ are exact for all $n\geq 2$. Since $1\otimes \psi _{n-2}$ is surjective, $E(\psi )_{n}=0$ for all $n\geq 2$. \vrule height4pt width3pt depth2pt \medskip \begin{definicio}{\rm Let $U$ be a standard $A$-algebra and $F$ be a standard $U$-module. Let ${\bf S}(U_{1})$ be the symmetric algebra of $U_{1}$ and let $\alpha :{\bf S}(U_{1})\rightarrow U$ be the surjective graded morphism of standard $A$-algebras induced by the identity on $U_{1}$. Let $\gamma :{\bf S}(U_{1})\otimes F_{0}\buildrel \alpha \otimes 1\over \rightarrow U\otimes F_{0}\rightarrow F$ be the composition of $\alpha \otimes 1$ with the structural morphism. Since $F$ is a standard $U$-module, $\gamma $ is a surjective graded morphism of graded ${\bf S}(U_{1})$-modules. The {\em module of effective} $n$-{\em relations} of $F$ is defined to be $E(F)_{n}=E(\gamma )_{n}= {\rm ker}\gamma _{n}/U_{1}{\rm ker}\gamma _{n-1}$ (for $n=0$, $E(F)_{n}=0$). Put $E(F)=\oplus _{n\geq 1}E(F)_{n}=\oplus _{n\geq 1}E(\gamma )_{n}=E(\gamma )= {\rm ker}\gamma /{\bf S}_{+}(U_{1}){\rm ker}\gamma$. The {\em relation type} of $F$ is defined to be ${\rm rt}(F)=s(E(F))$; that is, ${\rm rt}(F)$ is the minimum positive integer $r\geq 1$ such that the effective $n$-relations are zero for all $n\geq r+1$. A {\em symmetric presentation} of a standard $U$-module $F$ is a surjective graded morphism of standard $V$-modules $\varphi :G\rightarrow F$, with $\varphi :G=V\otimes M\buildrel f\otimes h\over \rightarrow U\otimes F_{0}\rightarrow F$, where $f:V\rightarrow U$ is a symmetric presentation of the standard $A$-algebra $U$, $h:M\rightarrow F_{0}$ is an epimorphism of $A$-modules and $U\otimes F_{0}\rightarrow F$ is the structural morphism. Using Lemma \ref{gener}, one deduces that $E(F)_{n}=E(\varphi )_{n}$ for all $n\geq 2$ and $s(E(F))=s(E(\varphi ))$. Thus the module of effective $n$-relations and the relation type of a standard $U$-module are independent of the chosen symmetric presentation. For an ideal $I$ of $A$ and an $A$-module $M$, the module of effective $n$-relations and the relation type of $I$ with respect to $M$ are defined to be $E(I;M)_{n}=E(\reesw{I}{M})_{n}$ and ${\rm rt}(I;M)={\rm rt}(\reesw{I}{M})$. }\end{definicio} \begin{observacio}\label{prougranm} {\rm The following are simple, but useful remarks: \begin{itemize} \item[$(1)$] If $U$ is a standard $A$-algebra, then $U$ is a standard $U$-module. Moreover, the modules of effective $n$-relations of $U$ as a standard $A$-algebra and as a standard $U$-module are equal, $E_{A\mbox{\footnotesize -alg}}(U)_{n}=E_{U\mbox{\footnotesize -mod}}(U)_{n}$. Thus ${\rm rt}_{A\mbox{\footnotesize -alg}}(U)={\rm rt}_{U\mbox{\footnotesize -mod}}(U)$. In particular, if $I$ is an ideal of $A$, then $E(I;A)_{n}=E(I)_{n}$ and ${\rm rt}(I;A)={\rm rt}(I)$. \item[$(2)$] If $f:V\rightarrow U$ is a surjective graded morphism of standard $A$-algebras and $F$ is a standard $U$-module, then $F$ is a standard $V$-module. Moreover, $E_{U\mbox{\footnotesize -mod}}(F)_{n}=E_{V\mbox{\footnotesize -mod}}(F)_{n}$ for all $n\geq 2$ and ${\rm rt}_{U\mbox{\footnotesize -mod}}(F)={\rm rt}_{V\mbox{\footnotesize -mod}}(F)$.
\item[$(3)$] If $\varphi :R\rightarrow A$ is a surjective homomorphism of rings, $U$ is a standard $A$-algebra and $F$ is a standard $U$-module, then $V=R\oplus U_{+}$ is a standard $R$-algebra and $F$ is a standard $V$-module. Moreover, $E(U)_{n}=E(V)_{n}$ for all $n\geq 2$ and ${\rm rt}(U)={\rm rt}(V)$. Analogously, $E_{U\mbox{\footnotesize -mod}}(F)_{n}=E_{V\mbox{\footnotesize -mod}}(F)_{n}$ for all $n\geq 2$ and ${\rm rt}_{U\mbox{\footnotesize -mod}}(F)={\rm rt}_{V\mbox{\footnotesize -mod}}(F)$. \item[$(4)$] If $\varphi :G\rightarrow F$ is a surjective graded morphism of standard $U$-modules such that ${\rm ker}\varphi _{n}=0$ for all $n\geq t$, then $E(G)_{n}=E(F)_{n}$ for all $n\geq t+1$ and ${\rm rt}(G)\leq \mbox{max}({\rm rt}(F),t)$. If $t=1$, then ${\rm rt}(G)={\rm rt}(F)$. For instance, if $I$ and $J$ are two ideals of $A$, ${\rm rt}(I/I\cap J)={\rm rt}((I+J)/J)$. \item[$(5)$] Let $F$ be a standard $U$-module, $\underline{x}=\{ x_{i}\} $ a (possibly infinite) set of generators of the $A$-module $U_{1}$ and $\underline{T}=\{ T_{i}\} $ a set of as many variables over $A$ as $\underline{x}$ has elements. Take $V_{1}=\oplus _{i}AT_{i}$, $V={\bf S}(V_{1})=A[\underline{T}]$, $G=V\otimes F_{0}=F_{0}[\underline{T}]$ and $\varphi :G\rightarrow F$ defined by $\varphi (\sum y_{i}T_{i})=\sum x_{i}y_{i}$. Clearly $\varphi $ is a symmetric presentation of $F$. Thus, ${\rm rt}(F)=1$ if and only if ${\rm ker}\varphi $ is generated by linear forms. If $I=(\underline{x})$ is an ideal of $A$ and $M$ is an $A$-module, then ${\rm rt}(I;M)=1$ if and only if the kernel of the surjective graded morphism $\varphi :M[\underline{T}]\rightarrow \reesw{I}{M}$, $\varphi (\sum y_{i}T_{i})=\sum x_{i}y_{i}$, is generated by linear forms. We say $I$ is an {\em ideal of linear type with respect to} $M$ if ${\rm rt}(I;M)=1$ (\cite{hsv}, p.~106; \cite{trung1}, p.~41). \end{itemize} }\end{observacio} \noindent {\em Proof}. $(1)$ follows from the definitions. $(2)$ is a consequence of Lemma \ref{gener}. For the proof of $(3)$, consider $\alpha _{V}:{\bf S}^{R}(U_{1})\rightarrow V$, the surjective graded morphism of standard $R$-algebras, $\alpha _{U}:{\bf S}^{A}(U_{1})\rightarrow U$, the surjective graded morphism of standard $A$-algebras, and $f:V\rightarrow U$ and $g:{\bf S}^{R}(U_{1})\rightarrow {\bf S}^{A}(U_{1})$ the natural surjective graded morphisms extending $\varphi $. Since $f\circ \alpha _{V}=\alpha _{U}\circ g$ and $f_{n}$ and $g_{n}$ are isomorphisms for all $n\geq 1$, then $E(V)_{n}=E(\alpha _{V})_{n}=E(f\circ \alpha _{V})_{n}=E(\alpha _{U})_{n}=E(U)_{n}$. For the rest of $(3)$, it is sufficient to apply the tensor product $-\otimes F_{0}$. In order to prove $(4)$, let $\psi :H\rightarrow G$ be a symmetric presentation of $G$. By hypothesis, ${\rm ker}\psi_{n}={\rm ker}(\varphi \circ \psi )_{n}$ for all $n\geq t$. Hence $E(G)_{n}=E(\psi )_{n}=E(\varphi \circ \psi )_{n}=E(F)_{n}$ for all $n\geq t+1$. If $n\geq {\rm max}(t,{\rm rt}(F))+1$, then $E(G)_{n}=E(F)_{n}=0$. Thus ${\rm rt}(G)\leq {\rm max}\left( {\rm rt}(F), t\right) $. Take $G=\rees{I/I\cap J}$, $F=\rees{(I+J)/J}$ and $\varphi :G\rightarrow F$ the natural surjective graded morphism with $\varphi _{0}:A/I\cap J\rightarrow A/J$ and $\varphi _{n}:(I^{n}+I\cap J)/(I\cap J)\buildrel \simeq \over \rightarrow (I^{n}+J)/J$ an isomorphism for all $n\geq 1$. Applying consecutively $(1)$, $(4)$, $(3)$ and $(1)$, $E_{A/I\cap J\mbox{\footnotesize -alg}}(G)_{n}= E_{G\mbox{\footnotesize -mod}}(G)_{n}= E_{G\mbox{\footnotesize -mod}}(F)_{n}= E_{A/J\mbox{\footnotesize -alg}}(F)_{n}$.
Finally, $(5)$ follows from definitions. \vrule height4pt width3pt depth2pt \medskip Let us now modify Theorem 2.4 in \cite{planas2} to modules: \begin{proposicio}\label{febrer} Let $U$ be a standard $A$-algebra and let $F$ be a standard $U$-module. For each integer $n\geq 2$, there exists a complex of $A$-modules \begin{eqnarray*} {\bf \Lambda}_{2}(U_{1})\otimes F_{n-2}\buildrel \partial _{2,n}\over \longrightarrow U_{1}\otimes F_{n-1}\buildrel \partial _{1,n}\over \longrightarrow F_{n}\, , \end{eqnarray*} defined by $\partial _{2,n}((x\wedge y)\otimes z)=y\otimes xz-x\otimes yz$ and $\partial _{1,n}(x\otimes t)=xt$ and whose homology is $E(F)_{n}$. \end{proposicio} \noindent {\em Proof}. By Theorem 2.4, there exists ${\bf \Lambda} _{2}(U_{1})\rightarrow U_{1}\otimes U_{1}\rightarrow U_{2}\rightarrow 0$, a complex of $A$-modules defined by $\partial _{2}(x\wedge y)=y\otimes x-x\otimes y$ and $\partial _{1}(x\otimes t)=xt$. Applying the tensor product $-\otimes F_{n-2}$ and considering the structural morphisms $U_{i}\otimes F_{j}\rightarrow F_{i+j}$ we get the complex. Let ${\bf S}(U_{1})$ be the symmetric algebra of $U_{1}$ and let $\alpha :{\bf S}(U_{1})\rightarrow U$ be the surjective graded morphism of standard $A$-algebras induced by the identity on $U_{1}$. Let $\gamma :{\bf S}(U_{1})\otimes F_{0}\buildrel \alpha \otimes 1\over \rightarrow U\otimes F_{0}\rightarrow F$ be the composition of $\alpha \otimes 1$ with the structural morphism. Consider now, for each $n\geq 2$, the following commutative diagram of exact rows: \begin{picture}(330,125)(-5,0) \put(200,100){\makebox(0,0){\mbox{\footnotesize ${\bf \Lambda}_{2}(U_{1})\otimes {\bf S}_{n-2}(U_{1})\otimes F_{0}$}}} \put(300,100){\makebox(0,0){\mbox{\footnotesize ${\bf \Lambda}_{2}(U_{1})\otimes F_{n-2}$}}} \put(370,100){\makebox(0,0){$0$}} \put(250,100){\vector(1,0){15}} \put(330,100){\vector(1,0){30}} \put(100,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes {\rm ker}\gamma _{n-1}$}}} \put(200,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes {\bf S}_{n-1}(U_{1})\otimes F_{0}$}}} \put(300,60){\makebox(0,0){\mbox{\footnotesize $U_{1}\otimes F_{n-1}$}}} \put(370,60){\makebox(0,0){$0$}} \put(135,60){\vector(1,0){22}} \put(240,60){\vector(1,0){30}} \put(330,60){\vector(1,0){30}} \put(40,20){\vector(1,0){30}} \put(135,20){\vector(1,0){30}} \put(235,20){\vector(1,0){35}} \put(325,20){\vector(1,0){35}} \put(30,20){\makebox(0,0){$0$}} \put(100,20){\makebox(0,0){{\footnotesize ${\rm ker}\gamma _{n}$}}} \put(200,20){\makebox(0,0){{\footnotesize ${\bf S}_{n}(U_{1}) \otimes F_{0}$}}} \put(300,20){\makebox(0,0){{\footnotesize $F_{n}$}}} \put(370,20){\makebox(0,0){$0$}} \put(100,50){\vector(0,-1){20}} \put(200,50){\vector(0,-1){20}} \put(200,50){\vector(0,-1){16}} \put(300,50){\vector(0,-1){20}} \put(300,50){\vector(0,-1){16}} \put(170,40){\makebox(0,0){{\footnotesize $\partial _{1,n}^{S}\otimes 1_{F_{0}}$}}} \put(330,40){\makebox(0,0){{\footnotesize $\partial _{1,n}^{F}$}}} \put(200,90){\vector(0,-1){20}} \put(300,90){\vector(0,-1){20}} \put(170,80){\makebox(0,0){{\footnotesize $\partial _{2,n}^{S}\otimes 1_{F_{0}}$}}} \put(330,80){\makebox(0,0){{\footnotesize $\partial _{2,n}^{F}$}}} \put(250,110){\makebox(0,0){\mbox{\footnotesize $1\otimes \gamma _{n-2}$}}} \put(250,70){\makebox(0,0){\mbox{\footnotesize $1\otimes \gamma _{n-1}$}}} \put(250,10){\makebox(0,0){\mbox{\footnotesize $\gamma _{n}$}}} \end{picture} \noindent By Theorem 2.4 in \cite{planas2}, the middle column is exact. 
Thus ${\rm ker}(\partial _{1,n}^{S}\otimes 1_{F_{0}})={\rm im}(\partial _{2,n}^{S}\otimes 1_{F_{0}})$. Hence, $(1\otimes \gamma _{n-1})({\rm ker}(\partial _{1,n}^{S}\otimes 1_{F_{0}}))={\rm im}((1\otimes \gamma _{n-1})\circ (\partial _{2,n}^{S}\otimes 1_{F_{0}}) )={\rm im}(\partial _{2,n}^{F})$. Using the snake lemma, we conclude that $E(F)_{n}={\rm ker}(\partial _{1,n}^{F})/{\rm im}(\partial _{2,n}^{F})$. \vrule height4pt width3pt depth2pt \medskip \begin{observacio}\label{prouper} {\rm As a corollary of Proposition \ref{febrer} we have (see also 3.1, 3.2 and 3.3 in \cite{planas2}): \begin{itemize} \item[$(1)$] Let $U$ be a cyclic standard $A$-algebra generated by a degree one form $x\in U_{1}$. If $F$ is a standard $U$-module, then $E(F)_{n}=(0:x)\cap F_{n-1}$ and ${\rm rt}(F)={\rm min}\{ r\geq 1\mid (0:_{F}x^{r+1})=(0:_{F}x^{r})\}$. If $U=\rees{I}$ is the Rees algebra of a principal ideal $I=(x)$ of $A$ and $F=\reesw{I}{M}$ is the Rees module of $I$ with respect to a module $M$, then $E(I;M)_{n}=(0:x)\cap I^{n-1}M$ and ${\rm rt}(I;M)={\rm min}\{ r\geq 1\mid (0:_{M}x^{r+1})=(0:_{M}x^{r})\}$. \item[$(2)$] If $\varphi :A\rightarrow B$ is a homomorphism of rings, $U$ is a standard $A$-algebra and $F$ is a standard $U$-module, then $U\otimes B$ is a standard $B$-algebra and $F\otimes B$ is a standard $U\otimes B$-module. Moreover, ${\rm rt}_{U\otimes B\mbox{\footnotesize -mod}}(F\otimes B)\leq {\rm rt}_{U\mbox{\footnotesize -mod}}(F)$. If $\varphi $ is flat, $E_{U\otimes B\mbox{\footnotesize -mod}}(F\otimes B)=E_{U\mbox{\footnotesize -mod}}(F)\otimes B$. In particular, ${\rm rt}(F)={\rm sup}\{ {\rm rt}(F_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Spec}(A)\} ={\rm sup}\{ {\rm rt}(F_{\mathfrak{m}})\mid \mathfrak{m}\in {\rm Max}(A)\} $. \item[$(3)$] If $U$ is a standard $A$-algebra, $F$ is a standard $U$-module and $J\subseteq {\rm Ann}_{A}(F_{0})$, then $U\otimes A/J$ is a standard $A/J$-algebra, $F\otimes A/J=F$ is a standard $U\otimes A/J$-module, $E_{U\mbox{\footnotesize -mod}}(F)_{n}=E_{U\otimes A/J\mbox{\footnotesize -mod}}(F)_{n}$ and ${\rm rt}_{U\mbox{\footnotesize -mod}}(F)={\rm rt}_{U\otimes A/J\mbox{\footnotesize -mod}}(F)$. \item[$(4)$] If ${\rm rt}(I;M)<\infty $ (for instance, if $A$ is noetherian and $M$ is a finitely generated $A$-module), then ${\rm rt}(I;M)={\rm rt}(\agrw{I}{M})$. In particular, if $J\subset I$, then ${\rm rt}(\reesw{I}{M}\otimes A/J)={\rm rt}(I;M)$. \end{itemize} }\end{observacio} \section{Proof of Theorem 2}\label{rtim} \begin{definicio}{\rm Let ${\rm grt}(M)={\rm sup}\{ {\rm rt}(I;M)\mid I \mbox{ ideal of } A\} $ denote the supremum (possibly infinite) of all relation types of ideals $I$ of $A$ with respect to the $A$-module $M$, and let us call it the {\em global relation type} of $M$. Remark that: \begin{itemize} \item[$(1)$] If $M=A$, ${\rm grt}(A)={\rm sup}\{ {\rm rt}(I)\mid I \mbox{ ideal of } A\} $. We will prove that for an excellent ring $A$, ${\rm grt}(A)<\infty $ is equivalent to ${\rm dim}\, A\leq 1$. \item[$(2)$] Since ${\rm rt}(I;M)={\rm sup}\{ {\rm rt}(I_{\mathfrak{p}};M_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Spec}(A)\} = {\rm sup}\{ {\rm rt}(I_{\mathfrak{m}};M_{\mathfrak{m}})\mid \mathfrak{m}\in {\rm Max}(A)\}$, then ${\rm grt}(M)={\rm sup}\{ {\rm grt}(M_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Spec}(A)\} = {\rm sup}\{ {\rm grt}(M_{\mathfrak{m}})\mid \mathfrak{m}\in {\rm Max}(A)\}$. \item[$(3)$] When it is necessary to specify the base ring, we will write ${\rm grt}(M)={\rm grt}_{A}(M)$ when considering $M$ as an $A$-module.
For instance, if $J\subseteq {\rm Ann}_{A}(M)=\{ x\in A\mid xM=0\} $, then $\reesw{(I+J)/J}{M}=\reesw{I}{M}$. Thus ${\rm rt}(I;M)={\rm rt}((I+J)/J;M)$ and ${\rm grt}_{A}(M)={\rm grt}_{A/J}(M)$. \end{itemize} }\end{definicio} \noindent {\bf Theorem 2} {\em Let $A$ be a commutative ring, $\mathcal{W}$ a set of ideals of $A$, $I\in \mathcal{W}$ and $N\subseteq M$ two $A$-modules. Let $s(N,M;\mathcal{W})$ denote the strong uniform number for the pair $(N,M)$ with respect to $\mathcal{W}$. Then $s(N,M;\{ I\} )\leq {\rm rt}(I;M/N)\leq {\rm max}({\rm rt}(I;M),s(N,M;\{ I\}))$. In particular, $s(N,M;\mathcal{W})\leq {\rm sup}\{ {\rm rt}(J;M/N)\mid J\in \mathcal{W}\}$ and $s(N,M)\leq {\rm grt}(M/N)$.} \bigskip \noindent {\em Proof}. Let $F=\reesw{I}{M/N}$, $G=\reesw{I}{M}$ and $H={\bf S}(I)\otimes M$. Let $\varphi :G\rightarrow F$ be the surjective graded morphism of standard ${\bf S}(I)$-modules defined by $\varphi _{n}:G_{n}=I^{n}M\rightarrow I^{n}M/(I^{n}M\cap N)=(I^{n}M+N)/N=F_{n}$ and let $\gamma :H\rightarrow G$ be induced by the natural graded morphism $\alpha :{\bf S}(I)\rightarrow \rees{I}$. By Lemma \ref{gener}, $s(E(\varphi ))\leq s(E(\varphi \circ \gamma ))\leq {\rm max}(s(E(\varphi )),s(E(\gamma )))$. But $s(E(\varphi \circ \gamma ))={\rm rt}(I;M/N)$ and $s(E(\gamma ))={\rm rt}(I;M)$. Finally, since $E(\varphi )_{n}=(I^{n}M\cap N)/I(I^{n-1}M\cap N)$, then $s(E(\varphi ))= s(N,M;\{ I\} )$. In particular, $s(N,M;\{ I\} )\leq {\rm rt}(I;M/N)\leq {\rm sup}\{ {\rm rt}(J;M/N)\mid J\in \mathcal{W}\}$ and, taking the supremum over all ideals $I$ of $\mathcal{W}$, $s(N,M;\mathcal{W})\leq {\rm sup}\{ {\rm rt}(J;M/N)\mid J\in \mathcal{W}\}$. \vrule height4pt width3pt depth2pt \begin{corollari} {\sc Artin-Rees Lemma}. Let $A$ be a commutative ring, $I$ an ideal of $A$ and $N\subseteq M$ two $A$-modules. If ${\rm rt}(I;M/N)<\infty $ then $s(N,M;\{ I\} )<\infty $. In particular, if $A$ is noetherian and $M$ is finitely generated, there exists an integer $s\geq 1$ such that, for all integers $n\geq s$, $I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)$. \end{corollari} \begin{corollari}\label{princip} {\sc O'Carroll \cite{ocarroll1}}. Let $A$ be a noetherian ring and let $M$ be a finitely generated $A$-module. Then ${\rm sup}\{ {\rm rt}((x);M)\mid x\in A\} <\infty$. In particular, if $N\subset M$, there exists an integer $s\geq 1$ such that, for all integers $n\geq s$ and for all $x\in A$, $x^{n}M\cap N=x^{n-s}(x^{s}M\cap N)$. \end{corollari} \noindent {\em Proof}. Following very closely the proof of O'Carroll in \cite{ocarroll1}, let $0=Q_{1}\cap \ldots \cap Q_{r}$ be a minimal primary decomposition of $0$ in $M$, with $r_{M}(Q_{i})=r(Q_{i}:M)=\mathfrak{p}_{i}\in {\rm Spec}(A)$, and let $s\geq 1$ be an integer such that, for all $i=1,\ldots ,r$, $\mathfrak{p}_{i}^{s}M\subseteq Q_{i}$. Then, for all $x\in A$, ${\rm rt}((x);M)\leq s$. Indeed, if $x\in \mathfrak{p}_{i}$, then $x^{n+s}\in \mathfrak{p}_{i}^{n+s}$ and $(Q_{i}:x^{n+s})=M$. If $x\not\in \mathfrak{p}_{i}$, then $x^{n+s}\not\in \mathfrak{p}_{i}$ and $(Q_{i}:x^{n+s})=Q_{i}$. Therefore, for all $n\geq 0$, $(0:x^{n+s})=(\cap _{i}Q_{i}:x^{n+s})= \cap _{i}(Q_{i}:x^{n+s})=\cap _{x\not\in \mathfrak{p}_{i}}Q_{i}$. In particular, $(0:x^{s+1})=(0:x^{s})$ and ${\rm rt}((x);M)\leq s$. We finish by applying Theorem 2. \vrule height4pt width3pt depth2pt \begin{observacio}{\rm Let $A$ be a noetherian ring and let $M$ be a finitely generated $A$-module. Let ${\rm grt}^{i}(M)={\rm sup}\{ {\rm rt}(I;M)\mid \mu (I)\leq i\}$. By Corollary \ref{princip}, ${\rm grt}^{1}(M)<\infty $.
Using the example of Wang \cite{wang1} and Theorem 2, we know that ${\rm grt}^{3}(M)$ might be infinite. We do not know whether ${\rm grt}^{2}(M)$ is finite. }\end{observacio} \section{Rings of finite global relation type have dimension one}\label{1dim} \begin{observacio}\label{rest05} {\rm Let $A$ be a commutative ring and let $r\geq 1$ denote an integer. Consider the following conditions: \begin{itemize} \item[$(a)$] ${\rm rt}(I)\leq r$ for every three-generated ideal $I$ of $A$. \item[$(b)$] $E(I)_{r+1}=0$ for every three-generated ideal $I$ of $A$. \item[$(c)$] $(x,y)(x,y,z)^{r}:z^{r+1}=(x,y)(x,y,z)^{r-1}:z^{r}$ for all $x,y,z\in A$. \item[$(d)$] $(x^{r}y)^{r}\in (x^{r+1},y^{r+1})(x^{r+1},y^{r+1},x^{r}y)^{r-1}$ for all $x,y\in A$. \end{itemize} Then $(a)\Rightarrow (b)\Rightarrow (c)\Rightarrow (d)$. }\end{observacio} \noindent {\em Proof}. Implication $(a)\Rightarrow (b)$ follows from the definitions. Implication $(b)\Rightarrow (c)$ holds in general: if $I$ is generated by $x_{1},\ldots ,x_{d}$ and if $E(I)_{n}=0$, then $(x_{1},\ldots ,x_{d-1})I^{n-1}:x_{d}^{n} =(x_{1},\ldots ,x_{d-1})I^{n-2}:x_{d}^{n-1}$ (Lemma 4.2 \cite{planas2}). Finally, $(d)$ follows from $(c)$ by taking $x,y,z$ to be $x^{r+1},y^{r+1},x^{r}y$. \vrule height4pt width3pt depth2pt \medskip In order to prove $(d)\Rightarrow {\rm dim}\, A\leq 1$, let us recall some definitions. Elements $x_{1},\ldots ,x_{m}$ of an ideal $J$ of $A$ are called $J$-{\em independent} if every form in $A[T_{1},\ldots ,T_{m}]$ vanishing at $x_{1},\ldots ,x_{m}$ has all its coefficients in $J$. If $I=(x_{1},\ldots ,x_{m})$ and $I\subset J$, then $x_{1},\ldots ,x_{m}$ are $J$-independent if and only if the natural graded morphism of standard $(A/J)$-algebras $(A/J)[X_{1},\ldots ,X_{m}]\rightarrow \rees{I}\otimes (A/J)$ is an isomorphism ($X_{1},\ldots ,X_{m}$ being algebraically independent over $A/J$). If $(A,\mathfrak{m})$ is noetherian local, then the maximum number of $\mathfrak{m}$-independent elements in $\mathfrak{m}$ is equal to ${\rm dim}\, A$ \cite{valla}. \begin{proposicio}\label{rest1} Let $A$ be a noetherian ring. If there exists an integer $r\geq 1$ such that $(x^{r}y)^{r}\in (x^{r+1},y^{r+1})(x^{r+1},y^{r+1},x^{r}y)^{r-1}$ for all $x,y\in A$, then ${\rm dim}\, A\leq 1$. \end{proposicio} \noindent {\em Proof}. Since the hypothesis localizes, we may assume that $(A,\mathfrak{m},k)$ is a noetherian local ring. Suppose ${\rm dim}\, A\geq 2$. Then there exist two $\mathfrak{m}$-independent elements $x,y$. In particular, if $I=(x,y)$, then $\overline{\alpha}:k[X,Y]\rightarrow \rees{I}/\mathfrak{m}\rees{I}$ defined by $\overline{\alpha}(X)=x+\mathfrak{m}I$, $\overline{\alpha}(Y)=y+\mathfrak{m}I$ is a graded isomorphism of standard $k$-algebras. By hypothesis, $(x^{r}y)^{r}\in (x^{r+1},y^{r+1})(x^{r+1},y^{r+1},x^{r}y)^{r-1}$, which is generated by the elements $x^{(i+1)(r+1)+lr}y^{j(r+1)+l}$ and $x^{i(r+1)+lr}y^{(j+1)(r+1)+l}$, $i,j,l\geq 0$, $i+j+l=r-1$. The $k$-vector space isomorphism $\overline{\alpha}_{r(r+1)}$ ensures that $(X^{r}Y)^{r}$ belongs to the $k$-vector space spanned by the elements $X^{(i+1)(r+1)+lr}Y^{j(r+1)+l}$ and $X^{i(r+1)+lr}Y^{(j+1)(r+1)+l}$, $i,j,l\geq 0$, $i+j+l=r-1$. Since all of them are elements of a $k$-basis of $k[X,Y]_{r(r+1)}$, either $(X^{r}Y)^{r}=X^{(i+1)(r+1)+lr}Y^{j(r+1)+l}$ or $(X^{r}Y)^{r}= X^{i(r+1)+lr}Y^{(j+1)(r+1)+l}$, for some $i,j,l\geq 0$ with $i+j+l=r-1$. But it is not difficult to see that there are no integers $i,j,l\geq 0$ satisfying either equation.
\vrule height4pt width3pt depth2pt \begin{observacio}{\rm The underlying idea in the proof of Proposition \ref{rest1} is that, for any two $\mathfrak{m}$-independent elements $x,y$ of $A$, there exist no $r$-relations $T_{1}f(T_{1},T_{2},T_{3})+T_{2}g(T_{1},T_{2},T_{3})-T_{3}^{r}$, with $f,g$ forms of degree $r-1$, among the three ordered elements $x^{r+1},y^{r+1},x^{r}y$. In particular, $T_{1}^{r}T_{2}-T_{3}^{r+1}$ must be an effective $(r+1)$-relation among the three ordered elements $x^{r+1},y^{r+1},x^{r}y$ (since any form of degree $r$ dividing $T_{1}^{r}T_{2}-T_{3}^{r+1}$ would have to contain $T_{3}^{r}$ as an additive term). }\end{observacio} \begin{observacio}{\rm There exist (necessarily non noetherian) local rings with ${\rm grt}(A)<\infty $ but ${\rm dim}\, A\geq 2$. For example, a valuation ring $A$ is Pr\"ufer, thus ${\rm grt}(A)=1$ \cite{costa}, \cite{planas1}, but its dimension need not be one or less. }\end{observacio} \section{Artinian modules have finite global relation type}\label{0dim} \begin{proposicio}\label{artim} Let $A$ be a commutative ring, $I$ an ideal of $A$ and $M$ an $A$-module. If $I^{s}M=0$ for some $s\geq 1$, then ${\rm rt}(I;M)\leq s$. If ${\rm rt}(I)=1$ and $I$ is finitely generated, then $I\neq 0$ if and only if $I^{s}\neq 0$ for all $s\geq 1$. If $(A,\mathfrak{m})$ is artinian local, then ${\rm grt}(M)<\infty $, and ${\rm grt}(A)=1$ if and only if $A$ is a field. If $A$ is an artinian ring, then ${\rm grt}(M)<\infty $. \end{proposicio} \noindent {\em Proof}. If $I^{s}M=0$ and $\varphi :G\rightarrow \reesw{I}{M}$ is a symmetric presentation of $\reesw{I}{M}$, then ${\rm ker}\varphi _{n}=G_{n}$ for all $n\geq s$. Thus, for all $n\geq s+1$, $E(I;M)_{n}=G_{n}/V_{1}G_{n-1}=0$ and ${\rm rt}(I;M)\leq s$. In order to prove the second assertion we may suppose that $(A,\mathfrak{m},k)$ is local. If ${\rm rt}(I)=1$, the natural graded morphism of standard $k$-algebras ${\bf S}^{k}(I/\mathfrak{m}I)\rightarrow \rees{I}/\mathfrak{m}\rees{I}$ is an isomorphism. If $I\neq 0$, then ${\bf S}^{k}(I/\mathfrak{m}I)$ is a polynomial ring in $\mu (I)$ variables, thus $I^{s}/\mathfrak{m}I^{s}\neq 0$ for all $s\geq 1$ and $I^{s}\neq 0$ since $I$ is finitely generated. If $(A,\mathfrak{m})$ is artinian local, there exists an integer $s\geq 1$ such that $I^{s}M\subseteq \mathfrak{m}^{s}M=0$ for every ideal $I$ of $A$. Thus ${\rm grt}(M)\leq s$. Moreover, if ${\rm grt}(A)=1$, then ${\rm rt}(\mathfrak{m})=1$ and $\mathfrak{m}^{s}=0$. Hence $\mathfrak{m}=0$ and $A$ is a field. If $A$ is artinian, it has a finite number of maximal ideals and, since ${\rm grt}(M)={\rm sup}\{ {\rm grt}(M_{\mathfrak{m}}) \mid \mathfrak{m}\in {\rm Max}(A)\} $, then ${\rm grt}(M)<\infty $. \vrule height4pt width3pt depth2pt \begin{observacio}{\rm The minimum integer $s\geq 1$ such that $I^{s}=0$, for a nilpotent ideal $I$, is not necessarily equal to its relation type. For example, take $I=(x,y)\subset A= k\lbrack\!\lbrack X,Y\rbrack\!\rbrack /(X^{n},Y^{n})$, where $x,y$ denote the classes of $X,Y$ in $A$. Then $I^{2n-1}=0$, $I^{2n-2}\neq 0$ and ${\rm rt}(I)=n$. Indeed, since $yI^{n-2}:x^{n-1}\varsubsetneq yI^{n-1}:x^{n}=A$, then $E(I)_{n}\neq 0$ and ${\rm rt}(I)\geq n$. Moreover, since $(0:y)\cap I^{p-1}=(x^{p-n}y^{n-1})= x((0:y)\cap I^{p-2})$ for all $p\geq n+1$, then $E(I)_{p}=0$ for all $p\geq n+1$ and ${\rm rt}(I)\leq n$ (Proposition 4.5 \cite{planas2}). }\end{observacio} \begin{observacio}{\rm There exist (necessarily non noetherian) local rings with ${\rm dim}\, A=0$ but ${\rm grt}(A)=\infty $.
For example, $A=k[T_{1},\ldots ,T_{m},\ldots ]/(T_{1}^{2},\ldots ,T_{m}^{m+1},\ldots )$, with $k$ a field, is a zero dimensional local ring. If $t_{m}$ denotes the residue class of $T_{m}$, then $(0:t_{m}^{m})\varsubsetneq (0:t_{m}^{m+1})=A$ and ${\rm rt}((t_{m}))=m+1$. }\end{observacio} \section{Proof of Theorem 3 in the local case}\label{jor} We first need to reduce to Cohen-Macaulay local rings and modules. \begin{lema}\label{pasquom} Let $A$ be a noetherian ring, $I$ an ideal of $A$ and $N\subseteq M$ two finitely generated $A$-modules such that $I^{t}N=0$ for a certain integer $t\geq 1$. Then ${\rm rt}(I;M)\leq {\rm rt}(I;M/N)+t$. In particular, if $I$, $J$ are two ideals of $A$ such that $I^{t}J=0$ for a certain integer $t\geq 1$, then ${\rm rt}(I)\leq {\rm rt}((I+J)/J)+t$. \end{lema} \noindent {\em Proof}. Let $s=s(N,M;\{ I\})$ be the strong uniform number for the pair $(N,M)$ with respect to the set of ideals $\{ I\} $. If $n\geq s+t$, then $I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)\subseteq I^{n-s}N\subseteq I^{t}N=0$. Let $F=\reesw{I}{M/N}$, $G=\reesw{I}{M}$ and $\varphi:G\rightarrow F$ be defined by $\varphi _{n}:G_{n}=I^{n}M\rightarrow I^{n}M/(I^{n}M\cap N)=(I^{n}M+N)/N=F_{n}$. We have ${\rm ker}\varphi _{n}=I^{n}M\cap N=0$ for all $n\geq s+t$. Therefore, using Remark \ref{prougranm} and Theorem 2, ${\rm rt}(I;M)={\rm rt}(G)\leq {\rm max}({\rm rt}(F),s+t)\leq {\rm max}({\rm rt}(I;M/N),{\rm rt}(I;M/N)+t)={\rm rt}(I;M/N)+t$. \vrule height4pt width3pt depth2pt \begin{corollari}\label{longfinm} Let $(A,\mathfrak{m})$ be a noetherian local ring, $M$ a finitely generated $A$-module and $N\subseteq M$ a submodule of finite length. Then ${\rm grt}(M)\leq {\rm grt}(M/N)+{\rm length}(N)$. \end{corollari} \noindent {\em Proof}. If ${\rm length}(N)=t$, then $I^{t}N\subseteq \mathfrak{m}^{t}N=0$ for every ideal $I$ of $A$. Thus, by Lemma \ref{pasquom}, ${\rm rt}(I;M)\leq {\rm rt}(I;M/N)+t\leq {\rm grt}(M/N)+t$. Taking the supremum over all ideals $I$ of $A$, we get ${\rm grt}(M)\leq {\rm grt}(M/N)+t$. \vrule height4pt width3pt depth2pt \medskip \noindent The next lemma is a generalization to modules of a well-known result for rings (see, for instance, \cite{trung2}). \begin{lema}\label{mprim} Let $(A,\mathfrak{m})$ be a one dimensional Cohen-Macaulay local ring and let $M$ be a maximal Cohen-Macaulay module. If $I$ is an $\mathfrak{m}$-primary ideal of $A$, then ${\rm rt}(I;M)\leq e(A)$. \end{lema} \noindent {\em Proof}. Applying the tensor product $-\otimes A[t]_{\mathfrak{m}[t]}$, we may assume that the residue field $k=A/\mathfrak{m}$ is infinite. By Theorem 1.1 in \cite{sally}, $\mu (I)\leq e(A)=e$ and $\mu (I^{e})\leq e< {e+1\choose 1} $. By Theorem 2.3 in \cite{sally}, there exists $y_{0}\in I$ such that $I^{e}=y_{0}I^{e-1}$. In particular, for all $n\geq e$, $I^{n}=y_{0}I^{n-1}$. Since $\mathfrak{m}\not\subset Z(M)$, we may moreover choose $y_{0}\not\in Z(M)$. Consider the complex of $A$-modules: \begin{eqnarray*} {\bf \Lambda}_{2}(I)\otimes I^{n-2}M\buildrel \partial _{2,n}\over \longrightarrow I\otimes I^{n-1}M\buildrel \partial _{1,n}\over \longrightarrow I^{n}M\longrightarrow 0\, , \end{eqnarray*} where $\partial _{2,n}((x\wedge y)\otimes z)=y\otimes xz-x\otimes yz$ and $\partial _{1,n}(x\otimes t)=xt$, with $x,y\in I$, $z\in I^{n-2}M$ and $t\in I^{n-1}M$. By Proposition \ref{febrer}, $E(I;M)_{n}={\rm ker}\partial _{1,n}/{\rm im}\partial _{2,n}$. Let us see that ${\rm ker}\partial _{1,n}={\rm im}\partial _{2,n}$ for all $n\geq e+1$.
Indeed, since $I^{n-1}M=y_{0}I^{n-2}M$ for $n\geq e+1$, any element of ${\rm ker}\partial _{1,n}$ can be written as $u=\sum x_{i}\otimes y_{0}z_{i}$, $x_{i}\in I$, $z_{i}\in I^{n-2}M$. Then $0=\partial _{1,n}(u)=y_{0}\sum x_{i}z_{i}$ and, since $y_{0}\not\in Z(M)$, $\sum x_{i}z_{i}=0$. Therefore, if $v=\sum (y_{0}\wedge x_{i})\otimes z_{i}\in {\bf \Lambda}_{2}(I)\otimes I^{n-2}M$, then $\partial _{2,n}(v)=\sum x_{i}\otimes y_{0}z_{i}-\sum y_{0}\otimes x_{i}z_{i}=u$. So $E(I;M)_{n}=0$ for all $n\geq e+1$ and ${\rm rt}(I;M)\leq e(A)$. \vrule height4pt width3pt depth2pt \begin{notacions}\label{conv} {\rm Let $(A,\mathfrak{m})$ be a one dimensional noetherian local ring. Denote by $\mathfrak{q}_{1},\ldots ,\mathfrak{q}_{s}$ the minimal primary components of $(0)$. If $A$ is Cohen-Macaulay, $(0)=\mathfrak{q}_{1}\cap \ldots \cap \mathfrak{q}_{s}$ is a minimal primary decomposition of $(0)$. If $A$ is not Cohen-Macaulay, there exists an $\mathfrak{m}$-primary ideal $\mathfrak{q}_{s+1}$ such that $(0)=\mathfrak{q}_{1}\cap \ldots \cap \mathfrak{q}_{s}\cap \mathfrak{q}_{s+1}$ is a minimal primary decomposition of $(0)$. Let $n\geq 1$ be the minimum integer such that $\mathfrak{n}(A)^{n}=0$. Let $n_{i}\geq 1$ be the minimum integer such that $\mathfrak{p}_{i}^{n_{i}}\subset \mathfrak{q}_{i}$, $\mathfrak{p}_{i}=r(\mathfrak{q}_{i})$, $i=1,\ldots ,s$. For each $1\leq i_{1}<\ldots <i_{l}\leq s$, denote $t_{i_{1},\ldots ,i_{l}}={\rm max}\{ n_{i}\mid i\neq i_{1},\ldots ,i_{l}\} $ (with the convention that the maximum over the empty set is zero) and $e(A)$ the multiplicity of $A$. Finally, set ${\rm brt}(A)={\rm max}\{ n, e(A/(\mathfrak{q}_{i_{1}}\cap \ldots \cap \mathfrak{q}_{i_{l}}))+ t_{i_{1},\ldots ,i_{l}} \mid 1\leq i_{1}<\ldots <i_{l}\leq s \}$, which is finite. }\end{notacions} \begin{proposicio}\label{bo} Let $(A,\mathfrak{m})$ be a one dimensional noetherian local ring and $J={\rm H}^{0}_{\mathfrak{m}}(A)$. Let $M$ be a one dimensional finitely generated $A$-module and $N={\rm H}^{0}_{\mathfrak{m}}(M)$. Then ${\rm grt}(A)\leq {\rm brt}(A/J)+{\rm length}(J)$ and ${\rm grt}(M)\leq {\rm brt}(A/J)+{\rm length}(N)$. If $A$ and $M$ are Cohen-Macaulay, ${\rm grt}(M)\leq {\rm brt}(A)$. \end{proposicio} \noindent {\em Proof}. Since ${\rm length}(N)=t<\infty$, by Corollary \ref{longfinm}, ${\rm grt}(M)\leq {\rm grt}(M/N)+t$. Since $JM\subseteq N$, $J\subseteq {\rm Ann}_{A}(M/N)$ and ${\rm grt}_{A}(M/N)={\rm grt}_{A/J}(M/N)$. We may thus assume that $A$ is a one dimensional Cohen-Macaulay ring and $M$ is a maximal Cohen-Macaulay module. Let us prove ${\rm grt}(M)\leq {\rm brt}(A)$. If $I\subset \mathfrak{n}(A)$, $I^{n}\subset \mathfrak{n}(A)^{n}=0$ and ${\rm rt}(I;M)\leq n$ (Proposition \ref{artim}). If $I\not\subset \mathfrak{n}(A)$, let $1\leq i_{1}<\ldots <i_{l}\leq s$ be all the subindexes $i_{j}$ such that $I\not\subset \mathfrak{p}_{i_{j}}$. Set $J_{i_{1},\ldots ,i_{l}}=\mathfrak{q}_{i_{1}}\cap \ldots \cap \mathfrak{q}_{i_{l}}$. Then $I^{t_{i_{1},\ldots ,i_{l}}}J_{i_{1},\ldots ,i_{l}}\subseteq \mathfrak{q}_{1}\cap \ldots \cap \mathfrak{q}_{s}=0$ and, by Lemma \ref{pasquom}, ${\rm rt}(I;M)\leq {\rm rt}(I;M/J_{i_{1},\ldots ,i_{l}}M)+t_{i_{1},\ldots ,i_{l}}={\rm rt}((I+J_{i_{1},\ldots ,i_{l}})/J_{i_{1},\ldots ,i_{l}};M/J_{i_{1},\ldots ,i_{l}}M)+t_{i_{1},\ldots ,i_{l}}$. But $(I+J_{i_{1},\ldots ,i_{l}})/J_{i_{1},\ldots ,i_{l}}$ is an $\mathfrak{m}/J_{i_{1},\ldots ,i_{l}}$-primary ideal of the one dimensional Cohen-Macaulay local ring $A/J_{i_{1},\ldots ,i_{l}}$ and $M/J_{i_{1},\ldots ,i_{l}}M$ is a maximal Cohen-Macaulay module.
Therefore, by Lemma \ref{mprim}, ${\rm rt}((I+J_{i_{1},\ldots ,i_{l}})/J_{i_{1},\ldots ,i_{l}};M/J_{i_{1},\ldots ,i_{l}}M)\leq e(A/J_{i_{1},\ldots ,i_{l}})$. \vrule height4pt width3pt depth2pt \begin{exemple}\label{doman} {\rm Let $(A,\mathfrak{m})$ be a one dimensional noetherian local ring. If $A$ is reduced, then ${\rm grt}(A)\leq e(A)+1$. If $A$ is a domain, then ${\rm grt}(A)\leq e(A)$. }\end{exemple} \noindent {\em Proof}. Since $A$ is Cohen-Macaulay, by Proposition \ref{bo}, ${\rm grt}(A)\leq {\rm brt}(A)$. Following the notations in \ref{conv}, if $A$ is reduced, $n=n_{1}=\ldots =n_{s}=1$, $t_{i_{1},\ldots ,i_{l}}=1$ for all $(i_{1},\ldots ,i_{l})\neq (1,\ldots ,s)$ and $t_{1,\ldots ,s}=0$. Since $e(A/J_{i_{1},\ldots ,i_{l}})\leq e(A)$, ${\rm brt}(A)\leq e(A)+1$. If $A$ is a domain, then $n=1$, $n_{1}=1$, $t_{1}=0$ and ${\rm brt}(A)=e(A)$. \vrule height4pt width3pt depth2pt \begin{exemple}\label{localtg} {\rm Let $k$ be a field and $g\geq 1$ an integer. Set $R=k[t^{g+1},t^{g+2},\ldots ,t^{2g+1}]\subset k[t]$ ($t$ a variable over $k$), $\mathfrak{n}=(t^{g+1},t^{g+2},\ldots ,t^{2g+1})$, $A=R_{\mathfrak{n}}$ and $\mathfrak{m}=\mathfrak{n}R_{\mathfrak{n}}$. Then $(A,\mathfrak{m},k)$ is a one dimensional noetherian local domain and ${\rm grt}(A)=e(A)=g+1$. }\end{exemple} \noindent {\em Proof}. By Example \ref{doman}, ${\rm grt}(A)\leq e(A)$. For all $n\geq 1$, $\mathfrak{m}^{n}=(t^{(g+1)n},\ldots ,t^{(g+1)n+g})$, $\mu (\mathfrak{m}^{n})=g+1$ and $e(A)=g+1$. For $n\geq 2$, take $I=(t^{g+1},t^{g+2})$ and $J_{g,n-1}=t^{g+1}I^{n-2}:t^{(g+2)(n-1)}$. Remark that $J_{g,n-1}\subseteq J_{g,n}$ and that $E(I)_{n}=0$ if and only if $J_{g,n-1}=J_{g,n}$ (Proposition 4.5 \cite{planas2}). If $g=1$, then $I=\mathfrak{m}$, $E(I)_{2}\neq 0$ and $2\leq {\rm rt}(I)\leq e(A)=2$. Suppose $g\geq 2$. Then $J_{g,1}=t^{g+1}:t^{g+2}=\mathfrak{m}$. Moreover, $t^{(g+2)g}\not\in t^{g+1}I^{g-1}$ and $\mathfrak{m}\subseteq J_{g,g}\varsubsetneq A$. Furthermore, $J_{g,g+1}=A$. Thus, $E(I)_{n}=0$ for all $2\leq n\leq g$ and $E(I)_{g+1}\neq 0$. Hence $g+1\leq {\rm rt}(I)\leq e(A)=g+1$, ${\rm rt}(I)=g+1$ and ${\rm grt}(A)=g+1$. Remark that $\mathfrak{m}^{n}=t^{g+1}\mathfrak{m}^{n-1}$ for all $n\geq 2$. So the reduction number of $\mathfrak{m}$ is ${\rm rn}(\mathfrak{m})=1$ and $1<{\rm rt}(\mathfrak{m})\leq {\rm rn}(\mathfrak{m})+1=2$ \cite{trung2} while ${\rm grt}(A)=g+1$. \vrule height4pt width3pt depth2pt
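The reduction $\mathfrak{m}^{n}=t^{g+1}\mathfrak{m}^{n-1}$ invoked at the end of the proof can also be tested mechanically on exponent sets. The following Python sketch (ours; it truncates all exponent sets at a finite bound, so it is a sanity test rather than a proof) performs the check for a sample value of $g$:
\begin{verbatim}
from itertools import combinations_with_replacement

def semigroup(g, bound):
    # numerical semigroup generated by g+1, ..., 2g+1, truncated at bound
    gens = list(range(g + 1, 2 * g + 2))
    S, frontier = {0}, {0}
    while frontier:
        frontier = {s + a for s in frontier for a in gens
                    if s + a <= bound} - S
        S |= frontier
    return S, gens

def power_ideal(g, n, bound):
    # exponents of the monomial ideal m^n: sums of n generators, plus S
    S, gens = semigroup(g, bound)
    sums = {sum(c) for c in combinations_with_replacement(gens, n)}
    return {e + s for e in sums for s in S if e + s <= bound}

g, bound = 3, 150
for n in range(2, 6):
    lhs = power_ideal(g, n, bound)
    rhs = {g + 1 + e for e in power_ideal(g, n - 1, bound)
           if g + 1 + e <= bound}
    assert lhs == rhs  # m^n = t^(g+1) m^(n-1), so rn(m) = 1
print("reduction checks passed")
\end{verbatim}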
\begin{exemple}\label{eigrt}{\rm Let $k$ be a field, $a\geq 1$ a positive integer and $A=k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{a}Y)$. Then $A$ is a one dimensional complete intersection local ring with ${\rm grt}(A)={\rm brt}(A)=a+1$. }\end{exemple} \noindent {\em Proof}. By Proposition \ref{bo}, ${\rm grt}(A)\leq {\rm brt}(A)$. Let $x,y$ denote the residue classes of $X,Y$ and let $\mathfrak{m}=(x,y)$ be the maximal ideal of $A$. Since $\mu (\mathfrak{m}^{n})=a+1$ for all $n\geq a$, the multiplicity of $A$ is $e(A)=a+1$. The minimal primary decomposition of $(0)$ is $(0)=\mathfrak{q}_{1}\cap \mathfrak{q}_{2}$, $\mathfrak{q}_{1}=(x^{a})$, $\mathfrak{q}_{2}=(y)$. Following the notations in \ref{conv}, $\mathfrak{p}_{1}=(x)$, $\mathfrak{p}_{2}=(y)$, $\mathfrak{n}(A)=(xy)$, $n=n_{1}=a$, $n_{2}=1$, $t_{1}=n_{2}=1$, $t_{2}=n_{1}=a$, $t_{1,2}=0$. Moreover, $A/\mathfrak{q}_{1}= k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{a})$ and $e(A/\mathfrak{q}_{1})=a$; $A/\mathfrak{q}_{2}= k\lbrack\!\lbrack X\rbrack\!\rbrack $ and $e(A/\mathfrak{q}_{2})=1$. Therefore, ${\rm brt}(A)=a+1$. On the other hand, $x((0:y)\cap \mathfrak{m}^{a-1})=(x^{a+1})\varsubsetneq (x^{a})=(0:y)\cap \mathfrak{m}^{a}$. Thus $E(\mathfrak{m})_{a+1}\neq 0$ and ${\rm rt}(\mathfrak{m})\geq a+1$ (Proposition 4.5 \cite{planas2}). \vrule height4pt width3pt depth2pt \section{Final proofs}\label{final} \begin{lema}\label{quiqm} {\rm Let $(A,\mathfrak{m})$ be a one dimensional Cohen-Macaulay local ring with a unique minimal prime $\mathfrak{p}$ and let $n\geq 1$ be such that $\mathfrak{p}^{n}=0$. If $M$ is a maximal Cohen-Macaulay $A$-module, then ${\rm grt}(M)\leq {\rm max}\{ n,e(A)\} ={\rm brt}(A)$. Moreover, if $A/\mathfrak{p}$ is a discrete valuation ring, then ${\rm grt}(M)\leq {\rm max}\{ n,\sum _{i=0}^{n-1}\mu (\mathfrak{p}^{i})\} $. }\end{lema} \noindent {\em Proof}. By Proposition \ref{bo}, ${\rm grt}(M)\leq {\rm brt}(A)$. If $I\subseteq \mathfrak{p}$, then $I^{n}\subseteq \mathfrak{p}^{n}=0$ and ${\rm rt}(I;M)\leq n$. If $I\not\subset \mathfrak{p}$, then $I$ is an $\mathfrak{m}$-primary ideal of a one dimensional Cohen-Macaulay local ring. Hence, by Lemma \ref{mprim}, ${\rm rt}(I;M)\leq e(A)$. Remark that ${\rm brt}(A)={\rm max}\{ n, e(A)\} $. If, moreover, $A/\mathfrak{p}$ is a discrete valuation ring, there exists $u\in A$ such that $\mathfrak{m}=uA+\mathfrak{p}$. Thus, for $r\geq n$, $\mathfrak{m}^{r}=\sum _{i=0}^{n-1}u^{r-i}\mathfrak{p}^{i}$ and for $r\gg 1$, $e(A)=\mu (\mathfrak{m}^{r})=\mu (\sum _{i=0}^{n-1}u^{r-i}\mathfrak{p}^{i})\leq \sum _{i=0}^{n-1}\mu (\mathfrak{p}^{i})$. \vrule height4pt width3pt depth2pt \begin{exemple}{\rm Let $k$ be a field, $a,b\geq 1$ two positive integers and $A=k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{a},X^{b}Y)$. Then $A$ is a one dimensional noetherian local ring with ${\rm grt}(A)=a$. Moreover, if $a\leq b$, then $J={\rm H}^{0}_{\mathfrak{m}}(A)=0$ and ${\rm brt}(A)=a$. If $a>b$, then $J={\rm H}^{0}_{\mathfrak{m}}(A)\neq 0$ and ${\rm brt}(A/J)+{\rm length}(J)=a$. }\end{exemple} \noindent {\em Proof}. Let $x,y$ denote the residue classes of $X,Y$ and let $\mathfrak{m}=(x,y)$ be the maximal ideal of $A$. Remark that ${\rm rt}((x))=a\leq {\rm grt}(A)$. If $a\leq b$, $A=k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{a})$ is a one dimensional Cohen-Macaulay ring with the unique minimal prime $(x)$. By Lemma \ref{quiqm}, $a\leq {\rm grt}(A)\leq {\rm brt}(A)={\rm max}\{ a, e(A)\} =a$. If $a>b$, then, for every ideal $I$ of $A$, $I(x^{a-1})\subseteq (x,y)(x^{a-1})\subseteq (x^{a},x^{b}y)=0$. By Lemma \ref{pasquom}, ${\rm rt}(I)\leq {\rm rt}((I+(x^{a-1}))/(x^{a-1}))+1\leq {\rm grt}(A/(x^{a-1}))+1$. But $A/(x^{a-1})=k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{a-1},X^{b}Y)$. Repeating the same argument, we get $a\leq {\rm grt}(A)\leq {\rm grt}(A/(x^{a-(a-b)}))+(a-b)={\rm grt}(k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{b}))+(a-b)=b+(a-b)=a$. On the other hand, $J={\rm H}^{0}_{\mathfrak{m}}(A)=(0:\mathfrak{m}^{a-b})=(x^{b})$ and ${\rm length}(J)=a-b$. $A/J=k\lbrack\!\lbrack X, Y\rbrack\!\rbrack /(X^{b})$ and, as before, ${\rm brt}(A/J)=b$. Thus, ${\rm brt}(A/J)+{\rm length}(J)=b+(a-b)=a$. \vrule height4pt width3pt depth2pt
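The computation of $J={\rm H}^{0}_{\mathfrak{m}}(A)$ and of its length can be checked on monomials as well. The following Python sketch (ours; the $y$-degree is truncated, which is harmless here because no monomial involving $y$ is annihilated by a power of $\mathfrak{m}$) verifies ${\rm length}(J)=a-b$ for sample values of $a>b$:
\begin{verbatim}
# A = k[[X,Y]]/(X^a, X^b Y): a monomial x^i y^j vanishes iff
# i >= a, or j >= 1 and i >= b.
a, b = 7, 3

def is_zero(i, j):
    return i >= a or (j >= 1 and i >= b)

def killed_by_m_power(i, j, s):
    # x^i y^j is annihilated by m^s iff every x^(i+p) y^(j+s-p) vanishes
    return all(is_zero(i + p, j + s - p) for p in range(s + 1))

basis = [(i, 0) for i in range(a)] + \
        [(i, j) for i in range(b) for j in range(1, 3 * a)]
J = [m for m in basis if killed_by_m_power(*m, a)]
print(sorted(J))        # the classes of x^b, ..., x^(a-1)
print(len(J) == a - b)  # length(J) = a - b
\end{verbatim}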
\bigskip \noindent {\bf Theorem 3} {\em Let $A$ be an excellent (in fact $J-2$) ring. The following conditions are equivalent: \begin{itemize} \item[$(i)$] ${\rm grt}(M)<\infty $ for every finitely generated $A$-module $M$. \item[$(ii)$] ${\rm grt}(A)<\infty $. \item[$(iii)$] There exists an $r\geq 1$ such that ${\rm rt}(I)\leq r$ for every three-generated ideal $I$ of $A$. \item[$(iv)$] There exists an $r\geq 1$ such that $(x^{r}y)^{r}\in (x^{r+1},y^{r+1})(x^{r+1},y^{r+1},x^{r}y)^{r-1}$ for all $x,y\in A$. \item[$(v)$] ${\rm dim}\, A\leq 1$. \end{itemize}} \noindent {\em Proof}. Implications $(i)\Rightarrow (ii)$ and $(ii)\Rightarrow (iii)$ are obvious, $(iii)\Rightarrow (iv)$ is Remark \ref{rest05} and $(iv)\Rightarrow (v)$ is Proposition \ref{rest1}. Let us prove $(v)\Rightarrow (i)$. Let $A$ be an excellent ring with ${\rm dim}\, A\leq 1$ and let $M$ be a finitely generated $A$-module. If ${\rm dim}\, M=0$, then, by Proposition \ref{artim}, ${\rm grt}(M)={\rm grt}_{A/{\rm Ann}_{A}(M)}(M)<\infty $. Therefore, we may assume ${\rm dim}\, A=1$ and ${\rm dim}\, M=1$. Let ${\rm Min}(A)=\{ \mathfrak{p}_{1},\ldots ,\mathfrak{p}_{r}\} $ be the set of minimal primes of $A$ and let ${\rm Ass}(A)={\rm Min}(A)\cup \{ \mathfrak{m}_{1},\ldots ,\mathfrak{m}_{s}\} $, $\mathfrak{m}_{i}\in {\rm Max}(A)$, be the set of associated primes of $A$. Since ${\rm Ass}(A)$ is finite, by Propositions \ref{artim} and \ref{bo}, $\alpha ={\rm max}\{ {\rm grt}(M_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Ass}(A)\} <\infty$. Analogously, $\alpha ^{\prime}={\rm max}\{ {\rm grt}(M_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Ass}(M)\} <\infty$. If $r\geq 2$, for each $1\leq i_{1}<\ldots <i_{l}\leq r$ with $l\geq 2$ consider $\Gamma _{i_{1},\ldots,i_{l}}=V(\mathfrak{p}_{i_{1}}+\ldots +\mathfrak{p}_{i_{l}})$ and $\Gamma =\cup _{1\leq i_{1}<\ldots <i_{l}\leq r}\Gamma _{i_{1},\ldots ,i_{l}}$. If $r=1$, take $\Gamma =\emptyset$. In any case, $\Gamma $ is a closed finite subset of ${\rm Spec}(A)$. By Proposition \ref{bo}, $\gamma ={\rm max}\{ {\rm grt}(M_{\mathfrak{m}})\mid \mathfrak{m}\in \Gamma \} <\infty$. Let $\Sigma ={\rm Sing}(A/\mathfrak{p}_{1})\cup \ldots \cup {\rm Sing}(A/\mathfrak{p}_{r})$, ${\rm Sing}(A/\mathfrak{p}_{i})=\{ \mathfrak{m}\in {\rm Max}(A)\mid \mathfrak{m}\supset \mathfrak{p}_{i} \mbox{ and } A_{\mathfrak{m}}/\mathfrak{p}_{i}A_{\mathfrak{m}} \mbox{ is not regular }\} $. By hypothesis, ${\rm Sing}(A/\mathfrak{p}_{i})$ is a closed subset of ${\rm Spec}(A)$. In particular, ${\rm Sing}(A/\mathfrak{p}_{i})$ and $\Sigma$ are finite. Again by Proposition \ref{bo}, $\sigma ={\rm max}\{ {\rm grt}(M_{\mathfrak{m}})\mid \mathfrak{m}\in \Sigma \} <\infty$. Now, take $\mathfrak{m}\in {\rm Max}(A)$, $\mathfrak{m}\not\in {\rm Ass}(A)\cup {\rm Ass}(M)\cup \Gamma \cup \Sigma$. Thus, $A_{\mathfrak{m}}$ is a one dimensional Cohen-Macaulay local ring, $M_{\mathfrak{m}}$ is a maximal Cohen-Macaulay $A_{\mathfrak{m}}$-module, $\mathfrak{m}$ contains exactly one minimal prime $\mathfrak{p}\in {\rm Min}(A)$, $\mathfrak{m}\supset \mathfrak{p}$, and $A_{\mathfrak{m}}/\mathfrak{p}A_{\mathfrak{m}}$ is a discrete valuation ring. Since $A$ is noetherian, there exists an integer $n\geq 1$ such that $\mathfrak{n}(A)^{n}=0$. Thus $\mathfrak{p}^{n}A_{\mathfrak{m}}=0$. By Lemma \ref{quiqm}, ${\rm grt}(M_{\mathfrak{m}})\leq {\rm max}\{ n,\sum _{i=0}^{n-1}\mu (\mathfrak{p}^{i}A_{\mathfrak{m}})\} \leq {\rm max}\{ n,\sum _{i=0}^{n-1}\mu (\mathfrak{p}^{i})\}$. If $\mu =\sum _{i=0}^{n-1}\mu (\mathfrak{p}^{i})$, then ${\rm grt}(M) ={\rm sup}\{ {\rm grt}(M_{\mathfrak{p}})\mid \mathfrak{p}\in {\rm Spec}(A)\} \leq {\rm max}\{ n,\mu ,\alpha ,\alpha ^{\prime}, \gamma ,\sigma \} <\infty $. \vrule height4pt width3pt depth2pt \begin{observacio} {\rm There exist (necessarily not $J-2$) noetherian rings with ${\rm dim}\, A\leq 1$, but ${\rm grt}(A)=\infty $.
For example, take $k$ a field and $R=k[t_{1}^{2},t_{1}^{3},t_{2}^{3},t_{2}^{4},t_{2}^{5},\ldots , t_{g}^{g+1},t_{g}^{g+2},\ldots ,t_{g}^{2g+1},\ldots ]$. The ideals $\mathfrak{p}_{g}=(t_{g}^{g+1},t_{g}^{g+2},\ldots ,t_{g}^{2g+1})$ are prime ideals of height 1. Let $S$ be the multiplicatively closed set $S=R-\cup \mathfrak{p}_{g}$ and $A=S^{-1}R$. Let $\mathfrak{m}_{g}=S^{-1}\mathfrak{p}_{g}$. Since all prime ideals of $R$ contained in $\cup \mathfrak{p}_{g}$ are contained in some $\mathfrak{p}_{g}$, $A$ is a one dimensional noetherian domain with maximal ideals $\mathfrak{m}_{g}$ \cite{sv}. By Example \ref{localtg}, ${\rm grt}(A_{\mathfrak{m}_{g}})=g+1$. Thus ${\rm grt}(A)=\infty $. Remark that ${\rm Sing}(A)={\rm Spec}(A)-\{ (0)\}$, so $A$ is not $J-2$.}\end{observacio} \noindent {\bf Theorem 1} {\em Let $A$ be an excellent (in fact $J-2$) ring and let $N\subseteq M$ be two finitely generated $A$-modules such that ${\rm dim}(M/N)\leq 1$. Then there exists an integer $s\geq 1$ such that, for all integers $n\geq s$ and for all ideals $I$ of $A$, \begin{eqnarray*} I^{n}M\cap N=I^{n-s}(I^{s}M\cap N)\, . \end{eqnarray*}} \noindent {\em Proof}. Since ${\rm grt}_{A}(M/N)={\rm grt}_{A/J}(M/N)$ for $J={\rm Ann}_{A}(M/N)$, we can suppose that $A$ is an excellent ring with ${\rm dim}\, A\leq 1$. Thus, by Theorems 2 and 3, $s(N,M)\leq {\rm grt}(M/N)<\infty$. \vrule height4pt width3pt depth2pt \bigskip \noindent {\sc Acknowledgement}. I would like to thank J. \`Alvarez and J. Masdemont for valuable comments. I wish to express my gratitude to J.M. Giral and L. O'Carroll for their attention and interesting conversations regarding this paper. This work was partially supported by the UPC-PR9712 grant.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} On-chip devices based on plasmonics offer the potential to control and process signals at the microscale, where surface plasmon polaritons (SPPs) are exploited as information carriers. However, the short lifetime of SPPs remains one of the major bottlenecks in plasmonics. Circumventing this problem would be key to applications in nanoscale signal processing, information transfer or quantum memories. So far, the efforts to achieve longer SPP lifetimes have focused on thin films, where long-range SPP modes can be excited \cite{Berini2009}. Another challenge in plasmonics relates to novel, controllable ways to influence the propagation velocity of SPPs, in order to eventually build compact delay lines for plasmons. Methods of slowing down SPPs are usually based on structured metallic surfaces \cite{Kocabas2009} or gratings \cite{Sondergaard2006, Chen2008}. None of these ways is really tunable: the plasmonic velocity depends on the geometry of the structure which, once fabricated, can hardly be modified. In this work we propose to exploit electromagnetically induced transparency (EIT) \cite{Harris1997} both for plasmonic slow-down and for plasmonic lifetime enhancement. For this purpose we propose to use thin film geometries, where long-range SPPs can be created. Such SPPs can propagate over considerable distances reaching up to centimeters \cite{Berini2009}, which makes them suitable for processing with external means \cite{Konopsky2006}. One such means takes advantage of the tunability offered by EIT \cite{Yannopapas2009,Du2012,Shen2014,Ziemkiewicz2017}. EIT has already been proposed for slowing down SPPs near the interface between a dielectric and an active, negative-index metamaterial \cite{Kamli2008}. \begin{figure}[ht!] \centering \fbox{\includegraphics[width=\linewidth]{uklad4.png}} \caption{A probe beam incident on a silver film between a glass substrate and an EIT medium excites propagating surface plasmon polaritons. The inset shows an atomic $\Lambda$ system enabling realization of the EIT phenomenon.} \label{fig:setup} \end{figure} In the EIT phenomenon, the dispersive properties of a medium can be externally tuned through illumination with a moderately strong laser beam, referred to as the control beam. With such illumination it is possible to almost cancel the absorption of a weaker resonant probe. As a result the probe, instead of being efficiently damped, can propagate at a reduced group velocity which depends on the control field's intensity \cite{Harris1997, Fleischhauer2005}. In this work we propose to make use of these tunable dispersive properties to reduce the velocity of SPPs and significantly increase their lifetime. An exemplary setup where such a slowdown could be realized consists of a silver thin film on top of a glass substrate, with coupling and decoupling gratings (Fig.~\ref{fig:setup}). The probe beam is incident at an angle $\theta$ chosen such that it excites a pair of SPPs propagating along the thin film. This pair consists of a long-range and a short-range SPP (LRSPP and SRSPP, respectively), characterized by different group velocities and propagation distances \cite{Berini2009}. Instead of embedding the setup in air or a dielectric, we suggest placing it in a tunable EIT medium, with the control beam co-linear with the propagation path of the SPPs along the film. We show analytically and verify numerically that using EIT could enable slow but long-lived plasmons.
\section{Electromagnetically induced transparency} Before we proceed to discuss the details of the scheme, we introduce the basics of the EIT phenomenon. \begin{figure}[htbp] \centering \fbox{\includegraphics[width=0.5\linewidth]{rechi.png}\includegraphics[width=0.5\linewidth]{imchi.png}} \caption{Real (a) and imaginary (b) part of the electric susceptibility of the EIT medium as a function of probe detuning $\delta$ and control field $\Omega$.} \label{fig:chi} \end{figure} The simplest EIT can be realized in an atomic medium whose energy structure can be approximated by the three-level $\Lambda$ configuration with the excited state $a$, a fully populated ground state $b$ and a side state $c$ (inset of Fig.~\ref{fig:setup}). The transition frequencies between the states $i$ and $j$ are denoted as $\omega_{ij}$. The control beam drives the $c \leftrightarrow a$ transition, while the probe couples the levels $b$ and $a$. Once the control beam coupling two empty states creates coherence in the medium, the probe beam can propagate at a reduced velocity and almost without absorption \cite{Fleischhauer2005}. A quantitative description of the modified dispersive response is given through the electric susceptibility $\chi(\delta,\Omega)$ of the $\Lambda$ medium \begin{equation} \label{eq:chi} \chi(\delta,\Omega) = -\frac{N|\mu_{ab}|^2}{\hbar\epsilon_0}\frac{\delta-\delta_{c}+i\gamma_{bc}}{(\delta+i\gamma_{ab})(\delta-\delta_{c}+i\gamma_{bc})-|\Omega|^2}, \end{equation} where $\delta = \omega-\omega_{ab}$ and $\delta_c=\omega_c-\omega_{ac}$ represent the detuning from the atomic transition resonance of the probe and control beams, centered at $\omega$ and $\omega_c$, respectively. The symbol $\gamma_{ij}$ stands for the decoherence rate at the $i \leftrightarrow j$ transition, $N$ is the atomic density, $\mu_{ab}$ is the transition dipole moment, $\hbar$ is the reduced Planck's constant, $\Omega$ is related to the control field $\mathbf{E}_c$ by \mbox{$\Omega=\frac{\mathbf{E}_c.\mu_{ac}}{\hbar}$}, and $\epsilon_0$ stands for the vacuum electric permittivity. The imaginary part of the susceptibility [Fig.~\ref{fig:chi}(b)] shows a prominent dip centered around the atomic resonance, referred to as the transparency window. Its width can be externally tuned: it is proportional to the control field $\Omega$. The dispersion, i.e. the susceptibility's real part [Fig.~\ref{fig:chi}(a)], is directly responsible for the probe's group velocity \begin{equation} v_g(\omega,\Omega) =\frac{c}{n(\omega,\Omega)+\omega \frac{d n}{d \omega}}, \label{eq:velocity} \end{equation} where in this case $n(\omega,\Omega)=\sqrt{1+\Re[\chi(\omega,\Omega)]}$, but it will be modified in the presence of the metallic film. Please note that the group velocity can be tuned via the intensity of the control field. The dispersion inside the transparency window becomes normal and linear to a good approximation, with $\Re[\chi(\omega,\Omega)]\sim \frac{1}{\Omega^2}$, so that its slope increases for decreasing control field \cite{Fleischhauer2005}. This means that the probe pulse travels at a reduced group velocity (with respect to the vacuum speed of light $c$), leading to its slowdown even by several orders of magnitude \cite{Hau1999, Marangos1999}, or to a possible complete stop \cite{Phillips2001,Liu2001}.
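For concreteness, Eq.~(\ref{eq:chi}) and Eq.~(\ref{eq:velocity}) are easy to evaluate numerically. The following Python sketch is our own illustration, using the sodium parameters quoted in the next section; it assumes that the rates given there in MHz and kHz are angular frequencies (factors of $2\pi$ included) and evaluates the prefactor $N|\mu_{ab}|^{2}/\hbar\epsilon_{0}$ directly from those values:
\begin{verbatim}
import numpy as np

hbar, eps0, c = 1.0545718e-34, 8.8541878e-12, 2.99792458e8
N = 1e12 * 1e6                 # atomic density (m^-3)
mu_ab = 1.72e-29               # transition dipole moment (C m)
g_ab, g_bc = 2*np.pi*60e6, 2*np.pi*68e3

def chi(delta, Omega, delta_c=0.0):
    # electric susceptibility of the Lambda medium, Eq. (eq:chi)
    pref = N * mu_ab**2 / (hbar * eps0)
    num = delta - delta_c + 1j * g_bc
    den = (delta + 1j*g_ab)*(delta - delta_c + 1j*g_bc) - abs(Omega)**2
    return -pref * num / den

def v_group(omega, delta, Omega, h=1e4):
    # Eq. (eq:velocity) with a numerical derivative of n(omega)
    n = lambda d: np.sqrt(1.0 + chi(d, Omega).real)
    dn = (n(delta + h) - n(delta - h)) / (2.0 * h)
    return c / (n(delta) + omega * dn)

omega = 2 * np.pi * c / 589e-9
for Om in (2*np.pi*1e8, 2*np.pi*5e8):
    print(f"Omega/2pi = {Om/(2*np.pi):.1e} Hz,"
          f" v_g/c = {v_group(omega, 0.0, Om)/c:.2e}")
\end{verbatim}
On resonance the resulting group velocity scales as $\Omega^{2}$, reflecting the tunability discussed above.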
\section{Plasmonic slowdown} We now proceed to discuss the details of the scenario proposed in this work. Adjusting the thickness of the silver layer and the incidence angle, it is possible to excite SPPs propagating along the metallic film. This is achieved with the TM-polarized probe beam illuminating the setup. In the case of a thin metal layer with thickness $d$ much smaller than the optical wavelength, $d \ll \lambda$, the probe creates two distinct SPP modes, called long-range and short-range surface plasmon polaritons. As compared to SPPs on a thick metal layer, the described plasmons, especially the long-range modes, are characterized by very long, nanosecond lifetimes and propagation distances on the order of millimetres \cite{Konopsky2006,Yi2007,Park2009,Berini2009}. This key characteristic of thin film plasmons is crucial in the practical realization of our scheme, allowing for a macroscopic sample size, as compared to other schemes based on a typical Kretschmann configuration \cite{Kamli2008,Shen2014}. EIT conditions are created with a TE-polarized control beam parallel to the metallic film (see Fig.~\ref{fig:setup}). In the situation when the two lower states $b$ and $c$ of the $\Lambda$ system are hyperfine-split metastable states, Doppler broadening has no adverse effect on EIT even at room temperatures, provided that one uses co-propagating probe and control beams \cite{Harris1997, Alzetta2004,Fleischhauer2005}. The role of the control beam is solely to modify the optical properties of the EIT medium, i.e. to change the propagation conditions for the probe SPPs. In the presence of the control field $\Omega$, the dispersive properties of the EIT medium are modified as described above and given by Eq.~(\ref{eq:chi}). The resulting dispersive properties of the probe SPP modes follow from the boundary conditions on the glass-metal and EIT-metal interfaces \cite{Economou1969,Raether1988,Pitarke2006} \begin{eqnarray}\label{eq:disp} \frac{\kappa_m\tanh(\kappa_m\frac{d}{2})}{\epsilon_m} = \frac{-\kappa_{eit}}{\epsilon_{eit}} = \frac{-\kappa_g}{\epsilon_g},\nonumber\\ \frac{\kappa_m\coth(\kappa_m\frac{d}{2})}{\epsilon_m} = \frac{-\kappa_{eit}}{\epsilon_{eit}} = \frac{-\kappa_g}{\epsilon_g}, \end{eqnarray} where $\kappa_j = \sqrt{k^2 - \epsilon_j\frac{\omega^2}{c^2}}$ is the transverse decay constant in the $j$-th medium, $k$ is the wave vector component parallel to the surface, $j\in\{eit,m,g\}$ corresponds respectively to the EIT medium, metal and glass, $\epsilon_j$ are the relative electric permittivities, and \mbox{$\epsilon_{eit}(\omega,\Omega) = 1+\chi(\omega,\Omega)$}. For a quantitative discussion, we consider the following parameters: a probe field with $\lambda = 589$ nm exciting a pair of SPPs on a silver layer ($\epsilon_m = -13.3 + 0.883 i$) \cite{Palik1998} with thickness $d=32.7$ nm. The metal film is surrounded by glass of $\epsilon_g = 2.2$ and by the EIT medium. The latter consists of sodium vapour, with typical densities in the range of $N=10^{10} - 10^{12}~\mathrm{cm}^{-3}$, illuminated with the control field $\Omega$ typically of several hundred MHz. The chosen wavelength $\lambda$ corresponds to the sodium D2 line ($3^{2}S_{1/2}\rightarrow 3^{2}P_{3/2}$). The upper state $a$ would simply be the $|3^{2}P_{3/2}, F=0, m_F=0\rangle$ hyperfine sublevel, while the lower states would correspond to symmetric and antisymmetric superpositions, respectively: $b,c \sim |3^{2}S_{1/2}, F=1,m_F = -1\rangle \pm |3^{2}S_{1/2}, F=1,m_F = +1\rangle$ \cite{Steck}. This is necessary for them to be coupled with linearly polarized light, as required by our scenario: the $c \leftrightarrow a$ transition is driven with TE-polarized light (the control field), while the probe of TM polarization couples the states $b$ and $a$ [Fig.~\ref{fig:setup}].
We emphasize that such an EIT scenario has been demonstrated experimentally for sodium vapours at room temperature \cite{Alzetta2004}. The electric dipole moment of the $a \leftrightarrow b$ transition is $\mu_{ab} = 1.72 \times 10^{-29}$ Cm. The most important decoherence mechanism in sodium is spontaneous emission, $\gamma_{ab, ac} = 60$ MHz, whose influence is much greater than the impact of collisions with cell walls. These collisions can be further reduced with an addition of buffer gas, which is a standard procedure in EIT experiments. On the contrary, the dephasing $\gamma_{bc}$ between the lower states corresponds to an electric-dipole-forbidden transition, for which spontaneous emission is negligible, so this decoherence rate is small and determined by atomic movement. Only the atoms coherently prepared by the control beam contribute to EIT. In the heated cell, the time an atom spends on average within the volume illuminated by the control beam can be estimated as $T=15$ $\mu$s \cite{Dziczek2005}. An additional depolarizing process is related to collisions \cite{Dziczek2005} with the metallic film that may occur at time scales of $T^\prime = 0.1$ ms. The total depolarizing rate $\gamma_{bc}=T^{-1}+{T^\prime}^{-1} \approx 68$ kHz is small with respect to both the other decoherence processes in the EIT medium and the losses in the metal. Finally, please note that all the important features of EIT remain observable even when $\gamma_{bc} \neq 0$, provided that the control field satisfies $|\Omega|^2 \gg \gamma_{ab} \gamma_{bc}$ \cite{Fleischhauer2005}, which is relatively easily achieved. The width of the transparency window is limited by the Stark shift of atomic energy levels, which happens around \hbox{$\Omega > \gamma_{ab} \approx 500$ MHz}. \begin{figure}[htbp] \centering \fbox{\includegraphics[width=\linewidth]{grupowa.png}} \caption{Group velocities of SPPs as a function of the probe detuning $\delta$ and control field $\Omega$. The inset shows a cut for $\delta=0$.} \label{fig:group_velocity} \end{figure} Solving Eqs.~(\ref{eq:disp}) numerically, one can obtain the group velocity $v_g = \frac{d \omega}{d k}$ as a function of tunable parameters such as the control field $\Omega$ and the density of the EIT medium $N$. We plot the resulting group velocity for the set of data given above as a function of the probe detuning $\delta$ from the $b \leftrightarrow a$ atomic transition and the control field $\Omega$ (Fig.~\ref{fig:group_velocity}). Naturally, LRSPPs (SRSPPs) are characterized by group velocities enhanced (decreased) with respect to SPPs in the standard Kretschmann configuration (black line). The influence of the EIT medium can be recognized from the characteristic pattern corresponding to the transparency window in Fig.~\ref{fig:chi}(b). For frequencies around $\delta=0$, the group velocity is reduced to $10^{-4}$ of the vacuum speed of light (see inset). The velocity can be tuned by optical means, i.e. by adjustment of the electric field of the control beam $\Omega$.
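To make the procedure concrete, the following Python sketch (ours) solves the tanh branch of Eqs.~(\ref{eq:disp}) for real $k$; as simplifying assumptions, metal losses are neglected (only ${\rm Re}\,\epsilon_m$ is kept) and the film is treated as symmetrically clad by the EIT medium, so that a single real root can be bracketed. The group velocity then follows from a finite difference of $k(\omega)$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

hbar, eps0, c = 1.0545718e-34, 8.8541878e-12, 2.99792458e8
eps_m, d = -13.3, 32.7e-9   # Re(eps_m) of silver at 589 nm, film thickness
N, mu_ab = 1e12 * 1e6, 1.72e-29
g_ab, g_bc = 2*np.pi*60e6, 2*np.pi*68e3
w_ab = 2 * np.pi * c / 589e-9
Omega = 2 * np.pi * 2e8

def eps_eit(w):
    pref = N * mu_ab**2 / (hbar * eps0)
    dlt = w - w_ab
    chi = -pref*(dlt + 1j*g_bc)/((dlt + 1j*g_ab)*(dlt + 1j*g_bc) - Omega**2)
    return 1.0 + chi.real

def k_spp(w):
    ke2, km2 = eps_eit(w) * (w/c)**2, eps_m * (w/c)**2
    def f(k):
        km, ke = np.sqrt(k*k - km2), np.sqrt(k*k - ke2)
        return km * np.tanh(km * d / 2) / eps_m + ke / eps_eit(w)
    k0 = np.sqrt(ke2)
    return brentq(f, k0 * (1 + 1e-12), 10 * k0)

dw = 2 * np.pi * 1e5
vg = 2 * dw / (k_spp(w_ab + dw) - k_spp(w_ab - dw))
print(f"long-range-like mode: v_g/c = {vg/c:.2e}")
\end{verbatim}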
\begin{figure}[htbp] \centering \fbox{\includegraphics[width=\linewidth]{wynik1d.png}} \caption{Group velocity of SPPs obtained analytically by solving Eq.~(\ref{eq:disp}) and numerically using FDTD. The insets show distributions of the electric field component perpendicular to the metallic surface for the LR- and SRSPP modes.} \label{fig:numeryka} \end{figure} Our analytical findings are verified by a direct comparison with numerically obtained results for the group velocity reduction (Fig.~\ref{fig:numeryka}), where excellent agreement is achieved. We have used the finite-difference time-domain (FDTD) method, based directly on Maxwell's equations in 2D, complemented with suitable dispersion models for the metal and the EIT medium \cite{Ziemkiewicz2015,Ziemkiewicz2017}. Please note that the asymptotic value of the group velocity obtained for large control fields, $v_g \approx 0.8632~c$, would be the SPP velocity in the absence of the EIT medium. Due to simulation time and stability constraints, we numerically investigated the case of very low atomic medium densities, $N = 10^9$ cm$^{-3}$, resulting in a slowdown by up to two orders of magnitude. As one can see from Eqs.~(\ref{eq:chi}-\ref{eq:velocity}), since $\chi \sim N$, the results are approximately scaled by the factor $1/\sqrt{N}$, which allows us to anticipate smaller group velocities for enhanced densities $N$, in accordance with the analytical findings. Application of the numerical method enables an illustration of the field distributions corresponding to the propagation of LR- and SRSPPs along the thin film (see insets). Please note that a considerable fraction of the field enters the EIT medium. \begin{figure}[htbp] \centering \fbox{a)\includegraphics[width=0.48\linewidth]{tau2b.png}b)\includegraphics[width=0.48\linewidth]{tau3b.png}} \caption{LRSPP lifetime as a function of $v_g$ (a) and $\Omega$ (b). The linear regression plot in (a) was fitted to the points corresponding to $v_g<0.1c$. } \label{fig:tau} \end{figure} The reduced group velocity is directly related to a strongly enhanced plasmonic lifetime. For a strong velocity reduction $v_g\ll c$ we have $\epsilon + \omega\frac{\partial \epsilon}{\partial \omega} \gg 1$, and obtain \begin{equation}\label{eq:2c} v_g \approx \frac{2c}{\epsilon + \omega\frac{\partial \epsilon}{\partial \omega} }. \end{equation} The probe beam energy density depends on its electric field $E$ \cite{Landau1960} \begin{equation}\label{eq:w} W = \frac{\partial (\omega \epsilon)}{\partial \omega}|E|^2 = (\epsilon + \omega\frac{\partial \epsilon}{\partial \omega} ) |E|^2 \approx \frac{2c}{v_g} |E|^2. \end{equation} For an SPP in the form of a pulse of finite duration $\delta t$, the volume $V$ occupied by the SPP is proportional to its group velocity, i.e. $V \sim v_g \delta t$. Therefore, the total field energy $\int_V W d^3x$ is independent of $v_g$. The power absorbed in the metal, which is the dominant source of losses in the system, is expressed as \begin{equation}\label{eq:P} P = \int_V \sigma |E_{m}|^2 d^3x \sim \sigma \frac{v_g }{2c} \int_V W d^3x \sim v_g, \end{equation} where $\sigma$ is the conductivity of the metal, $E_m \sim E$ is the field inside the metal, and we have used Eq.~(\ref{eq:w}). Therefore, the absorbed power is proportional to the group velocity, and can be greatly reduced under EIT conditions.
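Combining Eq.~(\ref{eq:w}) with Eq.~(\ref{eq:P}) makes the resulting scaling of the lifetime explicit (an order-of-magnitude estimate, with constants of order unity suppressed):
\begin{equation*}
\tau \sim \frac{\int_V W\, d^3x}{P} \sim \frac{\int_V W\, d^3x}{\sigma \frac{v_g}{2c}\int_V W\, d^3x} = \frac{2c}{\sigma v_g} \propto \frac{1}{v_g}.
\end{equation*}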
In other words, longer lifetimes originate from the fact that a reduced group velocity under EIT conditions corresponds to energy transfer from the SPP field to atomic excitations. This prevents the metal from absorbing the field. This is the reason for the increased plasmonic lifetimes $\tau$, compared in Fig.~\ref{fig:tau}(a) to the lifetimes $\tau_0$ in the absence of the EIT medium. As expected, we see an almost linear dependence of the lifetime on the inverse of the group velocity for significant slowdowns. The results are extrapolated to the regime of smaller group velocities achieved for atomic densities typically used in EIT experiments. The obtained correlation between SPP lifetime and group velocity agrees with the findings of Derrien \emph{et al.}~\cite{Derrien2016}, where a moderate lifetime enhancement has been discussed at various metal-air interfaces. It should be stressed that the increased lifetime is crucial for the effective slowdown of SPPs; the slowdown by two orders of magnitude results in $\tau \approx 1$~ns and a correspondingly reduced plasmon spectral width \hbox{$\Gamma \approx 1$~GHz}, which is sufficiently narrow to fit inside the transparency window. We have examined the same lifetime enhancement as a function of the control field $\Omega$ [Fig.~\ref{fig:tau}(b)], illustrating the optical tunability of the SPP lifetime. Decreasing the control field leads to a narrower transparency window, which results in lower group velocities and longer lifetimes. The numerical results are compared to the theoretical relation given by Eq.~(\ref{eq:w}) and Eq.~(\ref{eq:P}), where the approximation (\ref{eq:2c}) has not been used. In such a case, one obtains a nonlinear relation between the SPP lifetime and the control field \begin{equation} \tau \sim 1 + \chi(\omega,\Omega) + \omega\frac{\partial \chi(\omega,\Omega)}{\partial \omega} \end{equation} which is a fairly good match to the numerical results over the whole group velocity range. \section{Conclusions} We have proposed to exploit the EIT phenomenon for a significant reduction of the plasmonic propagation velocity and a corresponding lifetime enhancement. Both the group velocity and the lifetime can be optically tuned by changing the control field. We have discussed an exemplary setup feasible to realize our predictions: a silver thin film placed on a glass substrate and surrounded by sodium vapours, where the EIT is performed. This finds potential applications in tunable technologies for nanoscale information processing both at the classical and the quantum level, in nanodevices that could be switched between their operational modes, or even in quantum memories for plasmons.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} The wide availability of image-capturing devices and storage platforms has increased end-users' interest in the creation and storage of images. These platforms also help users share their images, allowing modifications that match their needs, such as making them look better. Consequently, the demand for manipulating images online or offline is increasing. However, manipulation of images is non-trivial for non-experts who lack an understanding of the underlying principles of both photo-editing tools and image processing techniques. To simplify the process of image manipulation, automatically changing various aspects of images is an interesting direction to explore. In earlier research, different ways of automatic image manipulation were examined. Initially, approaches transformed grayscale images into color images~\cite{zhang:2016} or transferred their style~\cite{gatys:2016} to match well-known artworks. A few approaches~\cite{liang:2017} took the desired object category in an image as input and then learned to change the object by modifying its appearance or geometric structure. Another direction of interest~\cite{zhu:2016} manipulates images by projecting them onto an image manifold with various user scribbles as input. It has been further extended to handle various domains in the context of paired~\cite{isola:2017} and unpaired~\cite{zhuunpair:2017} image-to-image translation without hand-engineered loss functions. There also exist further variations of image manipulation; we refer to the recent surveys~\cite{mogadala:2019} for details. Although the aforementioned research has achieved promising results, manipulating images according to user intention remains difficult, as those methods allow minimal or no control over the image generation. To address this, recent approaches have leveraged text-to-image generation~\cite{reed:2016} techniques and manipulated images using natural language descriptions~\cite{dong:2017,nam:2018}. To be specific, the focus is on modifying visual attributes of an object in an image, where the visual attributes are characterized by the color and the texture of the object. Figure~\ref{fig:immanip} outlines the goal of the task by showing samples of images manipulated by different models. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Figs/Intro.pdf} \caption{Samples of output images obtained by manipulating input images using natural language descriptions. Existing methods (a) SISGAN~\cite{dong:2017} and (b) TAGAN~\cite{nam:2018} produce reasonable results, but fail to preserve text-irrelevant contents. Our methods (c) Single-scale and (d) Multi-scale accurately manipulate images according to the text while preserving the background.}\label{fig:immanip} \end{figure*} Nevertheless, current natural language based image manipulation approaches still suffer from several problems. \begin{itemize} \item First, they fail to properly attend to the locations that need to be modified, and they alter the basic structure of the image (e.g., the layout and pose of the object) during generation. \item Second, they generate lower resolution images (e.g., 64 $\times$ 64 or 128 $\times$ 128), while recent works~\cite{karras:2017} show that higher resolution images (e.g., 256 $\times$ 256) are preferred by humans due to their improved quality and stability.
\end{itemize} Therefore, in this paper, we overcome the limitations of earlier approaches by proposing a novel Two-sided Attentive Conditional Generative Adversarial Network (TEA-cGAN) for generating manipulated images while preserving other contents such as the background. In particular, in our generator we compute a matching score between every sub-region of the image and every word of the natural language description. To be specific, an \textit{attention map} consisting of matching scores is constructed and used together with the image input features. This helps local word-level (i.e., fine-grained) features to attend to a specific type of visual attribute and to separate text-relevant areas of the image from irrelevant ones. Similarly, the discriminator decides whether the image is real or fake by accumulating fine-grained matching scores with attention. Using the feedback from the discriminator, our generator adapts itself to generate manipulated images. To the best of our knowledge, none of the previous works applies attention within the generator of a conditional Generative Adversarial Network (cGAN) for fine-grained image manipulation with natural language; so far, attention has been used for text-to-image generation~\cite{xu:2018} or applied only to the discriminator~\cite{nam:2018}. The main contributions of this work are summarized as follows. \begin{itemize} \item We propose the novel architecture TEA-cGAN for image manipulation with natural language by leveraging fine-grained attention in a conditional GAN, both in the generator and in the discriminator. \item We build the generator at two different scales to support the generation of sharper and higher-resolution images. \item We thoroughly evaluate our approach on two datasets containing different types of images. \end{itemize} \section{Two-sided Attentive Conditional Generative Adversarial Network (TEA-cGAN)} \label{sec:attcgan} Let $\mathbf{I}$, $\mathbf{T}$, $\mathbf{\hat{T}}$ denote an image, a positive natural language description matching the image, and a mismatching description that does not correctly describe the image, respectively. Given an image $\mathbf{I}$ and a target mismatching text $\mathbf{\hat{T}}$, our aim is to manipulate $\mathbf{I}$ according to $\mathbf{\hat{T}}$ so that the visual attributes of the manipulated image $\mathbf{\hat{I}}$ match the description $\mathbf{\hat{T}}$ while preserving other information (e.g., the background). We use a Generative Adversarial Network (GAN)~\cite{goodfellow:2014} as our framework, in which the generator is trained to produce $\mathbf{\hat{I}} = G(\mathbf{I},\mathbf{\hat{T}})$. In the following, we describe the generator, the discriminator, and the objective of our TEA-cGAN in detail. \subsection{Generator} \label{ssec:generator} The generator is an encoder-decoder architecture with attention, inspired by the plain text-to-image generation approach~\cite{xu:2018}. We design two variants of it: (i) Single-scale and (ii) Multi-scale. In the following, each of them is presented separately. \subsubsection{Single-scale } \label{ssec:ssfa} We first encode the input image to a feature representation with an image encoder. Then, only the final output representation of the image encoder is used in combination with the fine-grained word-level features arising from the natural language description. This is done to focus on and modify only the text-relevant regions of an image while leaving other regions untouched.
The structure of our Single-scale model is shown in Figure~\ref{fig:singlescale}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figs/single-scale-generator.pdf} \caption{The architecture of our Single-scale generator. It has an encoder-decoder structure; the text is injected into the image generation process using an attention mechanism.}\label{fig:singlescale} \end{figure*} The image encoder is a convolutional network that represents $\mathbf{I}$ by a tensor $V = (v_1, \ldots, v_N) \in \mathbb{R}^{M \times N}$, where $M$ denotes the number of feature maps and $N$ denotes the spatial dimension \footnote{A spatial dimension normally consists of a height $H$ and a width $W$; we use $N=H\times W$ for simplicity of notation}. The natural language description encoder generates a representation of $\mathbf{\hat{T}}$ (or $\mathbf{T}$), denoted by $W \in \mathbb{R}^{D \times L}$, using a bidirectional long short-term memory (LSTM)~\cite{hochreiter:1997}, where $w_i \in \mathbb{R}^D$ represents the $i$-th word in the description, obtained by concatenating the forward and backward LSTM hidden states, and $L$ is the length of the description. We jointly process $V$ and $W$ to compute a matching score between $v_i$ and $w_j$, so that different sub-regions of the image are generated conditioned on the words relevant to those sub-regions. \begin{equation} w'_{j} = \mathcal{U}w_j \end{equation} where $w'_{j}$ represents a bilinear projection of $w_j$ into the $v_i$ space with a projection matrix $\mathcal{U} \in \mathbb{R}^{M \times D}$. Further, we apply word-level attention over the $w'_{j}$ to reduce the impact of less relevant words. Our attention is a softmax across the $L$ words and is computed by Equation~\ref{eqn:attn}. The word-context feature $v'_{i}$ is the corresponding linear combination of the $w'_{j}$, given by Equation~\ref{eqn:weighattn}. \begin{equation} \label{eqn:attn} \alpha_{ij} = \frac{\exp(v_{i}^Tw'_{j})}{\sum_{k=1}^{L} \exp(v_{i}^Tw'_{k}) } \end{equation} \begin{equation} \label{eqn:weighattn} v'_{i} = \sum_{j=1}^{L} \alpha_{ij} w'_{j} \end{equation} Here, $V' = (v'_{1},\ldots,v'_{N})$ denotes the word-context features of the entire image. Finally, $V$ and $V'$ are concatenated and fed into several residual blocks for further processing. The processed features are then transformed into an image by the image decoder.
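For concreteness, the following is a minimal PyTorch sketch of the word-level attention of Equations~\ref{eqn:attn} and \ref{eqn:weighattn}; the module name, tensor shapes, and batching conventions are our own assumptions and are not taken from a released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

# Hedged sketch of the word-level attention defined above. Assumed shapes:
# V: (B, M, N) image features, W: (B, D, L) word features.
class WordAttention(torch.nn.Module):
    def __init__(self, M, D):
        super().__init__()
        self.U = torch.nn.Linear(D, M, bias=False)   # w'_j = U w_j

    def forward(self, V, W):
        Wp = self.U(W.transpose(1, 2))               # (B, L, M) projected words
        scores = torch.bmm(Wp, V)                    # (B, L, N): v_i^T w'_j
        alpha = F.softmax(scores, dim=1)             # softmax across the L words
        Vp = torch.bmm(Wp.transpose(1, 2), alpha)    # (B, M, N): v'_i
        return torch.cat([V, Vp], dim=1)             # fed to the residual blocks
\end{verbatim}
In the Multi-scale variant described next, the same computation (there denoted $\mathcal{F}^{attn}$) would be applied at each scale $V_i$ inside the \textit{AttnFusion} module.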
\subsubsection{Multi-scale } \label{ssec:msfa} The main issue with the Single-scale architecture is that it may fail to focus on the right locations in images. Note that $v_i$ encodes the information of a sub-region of the image whose size depends on the receptive field of the last convolutional layer. If the receptive field is too small, $v_i$ may fail to provide the information necessary for computing the matching score used by the attention. However, if the receptive field is too large, the corresponding sub-regions may contain features irrelevant to the attention. In the extreme case where the receptive field equals the size of the image, the attention does not make much sense. Hence, we are motivated to utilize features from different scales. The structure of our Multi-scale model is shown in Figure~\ref{fig:multiscale}. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figs/multi-scale-generator.pdf} \caption{The architecture of our Multi-scale generator. We utilize image features of different scales while generating images. For each scale, the text information is injected into the network by an attention mechanism, as shown in \textit{AttnFusion}.}\label{fig:multiscale} \end{figure*} To be specific, we use a set of tensors $(V_m,\ldots, V_1)$ \footnote{We intentionally use reversed indexing to simplify the notation in the following equations.} constituting features of image $\mathbf{I}$ at different scales, extracted by different convolutional layers, and an embedding matrix $W$ consisting of the word embeddings of the natural language description (the same $W$ as defined in the Single-scale model). We start by computing the word-context feature $\mathcal{F}^{attn}(V_1, W)$ using the image features $V_1$ obtained from the last convolutional layer. This is done in the same manner as in the Single-scale model. The result is then upsampled spatially using nearest-neighbour upsampling, and the resulting feature is treated as our first hidden state $h_0$. Furthermore, we develop an \textit{attention fusion} (AttnFusion) module to fuse the attention information from different scales when generating new hidden states. The structure of the \textit{attention fusion} module is shown in the sub-box of Figure~\ref{fig:multiscale}. Specifically, the hidden states $h_i$ are computed as follows: \begin{equation} h_{0} = NN_{\uparrow}(\mathcal{F}^{attn}(V_1, W)) \end{equation} \begin{equation} \begin{split} h_{i} &=AttnFusion(h_{i-1}, V_{i}, W) \\ &=NN_{\uparrow}(Conv(\mathcal{F}^{attn}(V_i, W) \circ h_{i-1})) \end{split} \end{equation} where $NN_{\uparrow}$ denotes nearest-neighbour upsampling and $\circ$ represents the spatial concatenation of two feature tensors. Note that $\mathcal{F}^{attn}$ needs to be adapted accordingly to fit the spatial dimension of $V_i$. Further, the final hidden state is concatenated with the visual features and fed into residual blocks for further processing. Finally, an image decoder transforms the processed features back into an image. Architecture-wise, the convolutional operations in the generator have no biases, and we do not apply batch normalization to the output layer of the generator. Also, in the Multi-scale generator, we move the last convolutional layer of the residual blocks to the second layer of the image encoder, as this improves the quality of image generation. \subsection{Discriminator} \label{ssec:discriminator} The main aim of our research is to incorporate attention into the generator. Hence, for both the Single-scale and the Multi-scale model we apply a discriminator in line with Nam et al.~\cite{nam:2018}; however, we fine-tune it. As in the generator, the convolutional operations have no biases, and we do not apply batch normalization to the input layer of the discriminator. We use a conditional and an unconditional score in the discriminator, where the conditional score is influenced by both the input image and the description, while the unconditional score is influenced only by the input image. \subsection{Objective} \label{ssec:objective} Let $D(I,T)$ denote the output (score) of the discriminator, which considers both the quality of image $I$ and the matching between $I$ and a text description $T$. A higher score indicates higher image quality and a better match \footnote{Generally, the score is normalized to $[0,1]$}. In addition, inspired by \cite{zhang:2018}, we also introduce the unconditional score $D(I)$ to assess the image quality alone.
We use a factor $\gamma_1$ to balance the influence of the conditional and unconditional losses. The discriminator's objective $\mathcal{L}_D$ is defined as follows: \begin{equation} \label{eqn:objdisc} \begin{split} \mathcal{L}_D &= \mathbb{E}_{\mathbf{I} \sim p_{data}} [\log{D(\mathbf{I})}] \\ &+ \mathbb{E}_{\mathbf{I}, \mathbf{\hat{T}} \sim p_{data}}[\log{(1 - D(G(\mathbf{I},\mathbf{\hat{T}})))}] \\ &+\mathbb{E}_{\mathbf{I}, \mathbf{T} \sim p_{data}} [\gamma_1 \log{D(\mathbf{I},\mathbf{T})}] \\ &+ \mathbb{E}_{\mathbf{I}, \mathbf{\hat{T}} \sim p_{data}} [\gamma_1 \log{(1 - D(G(\mathbf{I},\mathbf{\hat{T}}),\mathbf{\hat{T}}))}] \end{split} \end{equation} The mismatching text $\mathbf{\hat{T}}$ is randomly sampled from the dataset independently of $\mathbf{I}$. As in the text-to-image task, we feed both positive and negative examples ($(\mathbf{I},\mathbf{T})$ and $G(\mathbf{I},\mathbf{\hat{T}})$, respectively) to the discriminator so that it judges not only the image quality but also the matching. Note that we want to maximize this objective; in training, we minimize $-\mathcal{L}_D$. Further, the generator objective $\mathcal{L}_G$ is defined as follows: \begin{equation} \begin{split} \mathcal{L}_G & = \mathbb{E}_{\mathbf{I}, \mathbf{\hat{T}} \sim p_{data}}[\log{D(G(\mathbf{I},\mathbf{\hat{T}}))}] \\ &+ \mathbb{E}_{\mathbf{I}, \mathbf{\hat{T}} \sim p_{data}}[\gamma_1 \log{D(G(\mathbf{I},\mathbf{\hat{T}}),\mathbf{\hat{T}})}]\\ &+ \gamma_2 L_{R} \end{split} \end{equation} where $\gamma_2$ is a hyperparameter that controls the influence of the auxiliary loss. We wish the generator to keep the background intact while manipulating the image, since the descriptions only target the main objects in the images. In other words, we want the generator to make minimal changes to the input images. Hence, we add the image reconstruction loss $L_R$: for positive pairs, i.e.\ an image $\mathbf{I}$ and a matching text $\mathbf{T}$, the generator should reconstruct $\mathbf{I}$, and changes to the original image are penalized. We use the $L_1$ loss (Equation~\ref{eqn:l1}) in training. \begin{equation} \label{eqn:l1} L_{R} = \lVert \textbf{I} - G(\textbf{I},\textbf{T}) \rVert_1 \end{equation} Our TEA-cGAN is trained by alternately minimizing the discriminator and generator objectives.
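To make the training procedure concrete, the following is a minimal PyTorch sketch of the two objectives above; the separate callables \texttt{D\_u} and \texttt{D\_c} for the unconditional and conditional discriminator scores, and all function names, are our own assumptions rather than the authors' released code.
\begin{verbatim}
import torch

# Hedged sketch of the discriminator and generator losses defined above.
# D_u(I) and D_c(I, T) are assumed to return scores in (0, 1); a small eps
# guards the logarithms.
def d_loss(D_u, D_c, G, I, T, T_mis, gamma1=10.0, eps=1e-8):
    fake = G(I, T_mis).detach()              # no gradient into the generator
    return -(torch.log(D_u(I) + eps).mean()
             + torch.log(1 - D_u(fake) + eps).mean()
             + gamma1 * torch.log(D_c(I, T) + eps).mean()
             + gamma1 * torch.log(1 - D_c(fake, T_mis) + eps).mean())

def g_loss(D_u, D_c, G, I, T, T_mis, gamma1=10.0, gamma2=3.0, eps=1e-8):
    fake = G(I, T_mis)
    rec = torch.abs(I - G(I, T)).mean()      # L1 reconstruction loss L_R
    adv = (torch.log(D_u(fake) + eps).mean()
           + gamma1 * torch.log(D_c(fake, T_mis) + eps).mean())
    return -adv + gamma2 * rec
\end{verbatim}
Training would alternate a gradient step on \texttt{d\_loss} (updating only the discriminator) with a step on \texttt{g\_loss} (updating only the generator), with $\gamma_1$ and $\gamma_2$ set as reported in Section~\ref{ssec:impl}.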
\section{Experimental Setup} \subsection{Datasets} We evaluated our approaches by conducting experiments on modified versions of the Caltech-200 bird dataset (CUB)~\cite{wah:2011} and the Oxford-102 flower dataset~\cite{nilsback:2008}. For each image in the original datasets, 10 captions describing the colors of different parts of the birds or flowers were collected via crowdsourcing by~\cite{reedlearning:2016}. More details about the datasets are given in Table~\ref{table:oxford-102-dataset} and Table~\ref{table:cub-dataset}. \begin{table} \small \centering \caption{\label{table:oxford-102-dataset} Splits of the ``Oxford-102'' dataset with image descriptions.} \begin{tabular}{l | c c c} \toprule Split & Images & Captions per Image & Classes \\ \midrule Training &5,878 & 10 & 82 \\ Validation &1,156 & 10 & 20 \\ Test & 1,155 & 10 & 20 \\ \midrule Total & 8,189 &10 & 102 \\ \bottomrule \end{tabular} \end{table} \begin{table} \small \centering \caption{ \label{table:cub-dataset} Splits of the ``CUB'' dataset with image descriptions.} \begin{tabular}{l | c c c} \toprule Split & Images & Captions per Image & Classes \\ \midrule Training & 8,855 & 10 & 150 \\ Test & 2,933 & 10 & 50\\ \midrule Total & 11,788 & 10 & 200\\ \bottomrule \end{tabular} \end{table} \subsection{Evaluation} Automatic evaluation of GAN approaches is tricky. Although the Inception Score~\cite{salimans:2016} and the Fr{\'e}chet Inception Distance (FID)~\cite{heusel:2017} are used as quantitative measures for evaluating generated images, they cannot be used in our case, since the generated images do not have ground-truth labels (more detail in Section~\ref{subsec: inception}). Similar to earlier approaches~\cite{dong:2017,nam:2018}, we conduct a human evaluation to rank our proposed approaches and existing models on two different aspects: \begin{itemize} \item \textit{Accuracy}: Do the generated images match the description while preserving the background of the input image? \item \textit{Naturalness}: Do the generated images look realistic? \end{itemize} We also calculate the reconstruction losses (L$_1$ and L$_2$) per pixel to measure the error in reconstructing the input image while keeping the background intact. \subsection{Implementation} \label{ssec:impl} We implemented our proposed approaches using PyTorch 1.1.0\footnote{\url{https://pytorch.org/}}. Word embeddings of the natural language descriptions are initialized with fastText\footnote{\url{https://github.com/facebookresearch/fastText}} vectors, and data augmentation is applied to the input images by random flipping and cropping. The weight $\gamma_2$ of the reconstruction loss is set to 2 for the Single-scale generator and to 3 for the Multi-scale generator (for all resolutions), while the weight $\gamma_1$ is set to 10. A batch size of 128 is used for generating images at a resolution of $128 \times 128$, and a batch size of 32 for a resolution of $256 \times 256$. We trained all our models for 600 epochs with the Adam optimizer~\cite{kingma:2014}, using a learning rate of 0.0002 and a momentum term of 0.5. The learning rate is decayed by a factor of 0.5 every 100 epochs. \section{Experimental Results} \label{sec:exp} We conducted experiments at different levels to evaluate our proposed models. First, a quantitative analysis is performed by calculating the reconstruction losses. To further validate our results, we conducted a human study on the ``CUB'' dataset to assess the \textit{Accuracy} and \textit{Naturalness} of the generated images by ranking the models. We then provide qualitative results of the generated images, comparing different methods\footnote{We use 128 $\times$ 128 to have a fair comparison with other methods.}, as well as higher-resolution images from our Multi-scale model. Finally, we show visualizations of the attention and further analyze the impact of text interpolation and the contributions of the components of the model. \subsection{Quantitative Results} \label{ssec:quantres} To compare the models' ability to preserve text-irrelevant content (e.g., the background), we first calculate the reconstruction losses using the images along with their natural language descriptions from both the ``CUB'' and ``Oxford-102'' datasets. In Table~\ref{table:recoeval}, we see that our TEA-cGAN-Multi-scale model has the lowest reconstruction losses (L$_1$ and L$_2$), indicating that it is the preferred choice for keeping the content of the original image intact. \begin{table*} \centering \caption{\label{table:recoeval} L$_1$ and L$_2$ loss (pixel-level) on the CUB and Oxford-102 test datasets.
Lower is better.} \begin{tabular}{lcccc} \toprule & \multicolumn{2}{c}{CUB} & \multicolumn{2}{c}{Oxford-102} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} Method & L$_1$ & L$_2$ & L$_1$ & L$_2$ \\ \cmidrule(r){1-1} \cmidrule(r){2-3} \cmidrule(r){4-5} SISGAN~\cite{dong:2017} & 0.51 & 0.20 & 0.53 & 0.19 \\ TAGAN~\cite{nam:2018} & 0.46 & 0.15 & 0.48 & 0.16\\ \midrule TEA-cGAN-Single-Scale (Ours) & 0.37 & 0.10 & 0.50 & 0.17 \\ TEA-cGAN-Multi-Scale (Ours) & \textbf{0.25} & \textbf{0.05} & \textbf{0.33} & \textbf{0.07} \\ \bottomrule \end{tabular} \end{table*} We then performed a human evaluation on the CUB dataset by ranking the images generated by the different models based on (i) accuracy and (ii) naturalness. For the evaluation, we randomly selected 8 images and 8 texts from the test set and produced 64 outputs from the CUB dataset for each method. We resized all output images to 128 $\times$ 128 to ensure a fair comparison and to prevent the users from judging the images by their resolution. Table~\ref{table:quanteval} shows the results of the study as average ranking values. \begin{table*} \centering \caption{\label{table:quanteval} Accuracy and naturalness average ranking values evaluated by users on the CUB test dataset. Lower is better.} \begin{tabular}{lcc} \toprule & \multicolumn{2}{c}{CUB} \\ \cmidrule(r){2-3} Method & Accuracy & Naturalness \\ \cmidrule(r){1-1} \cmidrule(r){2-3} SISGAN~\cite{dong:2017} & 3.94 & 3.97 \\ TAGAN~\cite{nam:2018} & 2.4 & 2.6 \\ \midrule TEA-cGAN-Single-Scale (Ours) & 1.97 & 1.94 \\ TEA-cGAN-Multi-Scale (Ours) & \textbf{1.69} & \textbf{1.49} \\ \bottomrule \end{tabular} \end{table*} To further verify the study results, we conducted chi-square tests. We want to test whether the models and the rankings are dependent; the null hypothesis is that these two variables are independent. We compared our TEA-cGAN-Multi-scale model with all other models, and we also compared our TEA-cGAN-Single-scale model with SISGAN~\cite{dong:2017} and TAGAN~\cite{nam:2018}. Accuracy and naturalness were tested separately. In total, we conducted 10 significance tests, and all of them yield a $p$-value smaller than $10^{-18}$. Even with Bonferroni correction, the $p$-values remain far below $5\%$, so we reject the null hypothesis. Tables~\ref{table:Chi_acc} and \ref{table:Chi_natural} present all $p$-values computed for the models used in the evaluation, i.e., SISGAN, TAGAN, TEA-cGAN-Single-scale (Single-scale), and TEA-cGAN-Multi-scale (Multi-scale). \begin{table} \caption{\label{table:Chi_acc} Cross-comparison of the models: accuracy.} \centering \begin{tabular}{lc} \toprule Model & Accuracy \\ & ($p$-value) \\ \midrule SISGAN \& Multi-scale & $2.49 \times 10^{-148}$ \\ TAGAN \& Multi-scale & $3.95 \times 10^{-29}$ \\ Single-scale \& Multi-scale & $2.77 \times 10^{-11}$ \\ SISGAN \& Single-scale & $2.04 \times 10^{-151}$ \\ TAGAN \& Single-scale & $1.14 \times 10^{-19}$ \\% [1ex] adds vertical space \bottomrule \end{tabular} \end{table} \begin{table} \caption{\label{table:Chi_natural} Cross-comparison of the models:
naturalness.} \centering \begin{tabular}{lc} \toprule Model & Naturalness \\ & ($p$-value)\\ \midrule SISGAN \& Multi-scale& $1.22 \times 10^{-159}$ \\ TAGAN \& Multi-scale & $1.52 \times 10^{-69}$ \\ Single-scale \& Multi-scale & $6.85 \times 10^{-16} $ \\ SISGAN \& Single-scale & $2.44\times 10^{-156}$ \\ TAGAN \& Single-scale & $1.17\times 10^{-32}$ \\ \bottomrule \end{tabular} \end{table} \subsection{Qualitative Results} \label{ssec:qualres} Figure~\ref{fig:qualcomp} shows a qualitative comparison of our models with a strong baseline (TAGAN) at a resolution of 128 $\times$ 128. We observe that the baseline can generate an image matching the natural language description; however, it fails to preserve the text-irrelevant content and is likely to generate an image whose layout differs from the original. In contrast, our method keeps the background intact and transfers only the visual attributes given in the text. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Figs/128comp.pdf} \caption{Qualitative comparison of our proposed methods with the closest baseline, TAGAN~\cite{nam:2018}. In many cases, our proposed TEA-cGAN-Multi-scale method qualitatively outperforms the baseline methods.}\label{fig:qualcomp} \end{figure} We further use our TEA-cGAN-Multi-scale model to generate higher-resolution images, i.e., 256 $\times$ 256. In Figure~\ref{fig:qualmultiscale}, we show 128 $\times$ 128 and 256 $\times$ 256 resolution images generated by TEA-cGAN-Multi-scale side by side to illustrate the difference. At both resolutions, our model effectively disentangles text-irrelevant content, such as the background, from the visual attributes that need to be changed. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Figs/256res.pdf} \caption{Samples of the 128 $\times$ 128 and 256 $\times$ 256 resolution images generated with our TEA-cGAN-Multi-scale model.}\label{fig:qualmultiscale} \end{figure} \subsection{Text Interpolation} \label{ssec:sent_interpolation} To verify that the TEA-cGAN-Multi-scale model generalizes rather than memorizing the text, we conducted a text interpolation experiment for the generator. The idea is to fix the input image, select two sentences from the test set, encode them into embeddings, and perform a linear interpolation between them. Since we use individual word representations rather than a single sentence representation, we restrict the two sentences to the same length. Figure~\ref{fig:sentint} shows that the TEA-cGAN-Multi-scale model generates images for the interpolated text while preserving the contents of the original image. This supports our hypothesis that the model does not memorize the texts but learns latent information useful for generalization. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{Figs/sentint.pdf} \caption{Text interpolation results with TEA-cGAN-Multi-scale. Upper left: the original image. Bottom: images generated according to the linear interpolation of the two sentences. All generated images have 128 $\times$ 128 resolution.}\label{fig:sentint} \end{figure} \subsection{Attention Visualization} \label{ssec:attvis} Images generated with TEA-cGAN-Single-scale are visualized with heat maps in Figure~\ref{fig:attnvis}. We observe that for some words our model correctly attends to the corresponding locations in the images. Note that it is normal that not every word attends to a location in the image.
First, stop words have no corresponding locations. Second, a phrase like ``white belly'' refers to a single visual attribute, so the model does not need to attend to the corresponding location twice; in such cases, our model tends to assign the attention to the word describing the color (``white''). We observe a positive correlation between attention quality and image quality. For example, in simple scenes (Figure~\ref{fig:attnvis}, top picture), our model attends to the object itself rather than the background and generates promising images, whereas if the background is cluttered and the object that needs to be manipulated is barely visible, our model fails to alter the visual attributes (Figure~\ref{fig:attnvis}, bottom picture). \begin{figure} \centering \includegraphics[width=0.65\textwidth]{Figs/rescheck.pdf} \caption{Attention visualization (generator) of each word w.r.t.\ the image with TEA-cGAN-Single-scale. Bright areas indicate locations the model focuses on while generating the image. \textit{Top}: success case; \textit{Bottom}: failure case.}\label{fig:attnvis} \end{figure} \subsection{Inception Score} \label{subsec: inception} Our task is similar to text-to-image generation~\cite{reed:2016, zhang2017stackgan, zhang:2018}; however, we cannot use an automatic evaluation measure such as the Inception Score~\cite{salimans:2016} to estimate the quality of the generated images, because it is unsuitable for our task. In the following, we provide more details. To calculate the Inception Score on a set of images, we need to apply an image classifier, such as the Inception network~\cite{salimans:2016}, to the images. The Inception Score is designed on the assumption that if a generated image looks realistic, the classifier should be able to classify it easily and accurately, i.e., the label distribution should have a low entropy. However, this is only true if the generator tries to generate images belonging to classes seen by the classifier. Text-to-image generation fulfills this requirement, for example, by generating birds that look similar to birds in the CUB dataset. However, our generator modifies the bird, and the resulting bird does not belong to any class in the CUB dataset. For example, if our generator changes the color of a crow from black to red, then the resulting bird cannot be classified. If we apply a classifier trained on the CUB dataset, it should fail to classify the modified images, no matter how realistic they look. The resulting label distribution should have higher entropy, which causes a low Inception Score; however, this does not imply that the image quality is low. To verify this claim, we compare the label distributions of real and generated images. As shown in Figure~\ref{fig:inception_res}, we use our TEA-cGAN-Multi-scale model to generate three images based on an input image according to different descriptions. We use the fine-tuned Inception model \footnote{\url{https://github.com/hanzhanggit/StackGAN-inception-model}} to classify the images. We observe that the model classifies the input image correctly with high confidence ($\approx 80\%$), which results in a unimodal distribution (see Figure~\ref{fig:inception_res}(b)). However, the model outputs a distribution close to uniform when classifying the generated image. This is reasonable, as this non-existent bird does not belong to any class. Figure~\ref{fig:inception_res}(c) further shows that the same holds for different descriptions.
Although the generated images look realistic to a human, in our scenario the label distributions are more or less random; therefore, the Inception Score is not consistent with human perception. \begin{figure*} \centering \includegraphics[width=0.65\textwidth]{Figs/inception_res.pdf} \caption{(a): Input image along with the three images generated according to different descriptions. (b) Label distributions of the input image and the first generated image. (c) Label distributions of the three generated images.}\label{fig:inception_res} \end{figure*} \section{Related Work} \label{sec:related} We present related work from some of the most closely aligned areas. \subsection{Text-to-Image Generation} \label{ssec:rwttoimgen} Initially, alignDRAW~\cite{mansimov:2015} was introduced to iteratively draw patches on a canvas while attending to the relevant words in the description. Later, visual concepts were translated from characters to pixels~\cite{reedlearning:2016} with a conditional GAN, which was further improved~\cite{reed:2016} by taking instructions about what content should be drawn at which location, achieving high-quality image generation. To generate high-resolution images, several GANs were stacked together in StackGAN~\cite{zhang:2018} using a global sentence representation, which helped to generate images of different sizes. To overcome the bottleneck of a global sentence representation, the attention-based AttnGAN~\cite{xu:2018} was introduced to capture fine-grained details in different sub-regions of the image by attending to the relevant words of the natural language description. Recently, ControlGAN~\cite{licontrollable:2019} was proposed to effectively synthesise high-quality images by controlling parts of the image generation according to natural language descriptions. Our work leverages ideas from AttnGAN; however, we use attention in both the generator and the discriminator for semantic image manipulation. \subsection{Image-to-Image Translation} \label{ssec:rwimtoimgen} Several ideas have been explored for image-to-image translation. Paired~\cite{isola:2017}, unpaired~\cite{zhuunpair:2017}, and style-transfer~\cite{gatys:2016} approaches based on GANs have been proposed in recent years. Paired approaches, which use image pairs as training examples, were applied to various tasks such as generating images from sketches~\cite{sangkloy:2017}. Unpaired approaches, which do not use image pairs, are learned using coupled GANs~\cite{liu:2016} and cross-modal scene networks~\cite{aytar:2017} with a weight-sharing strategy for learning a common representation. A few approaches~\cite{yi:2017} also used unsupervised techniques for image-to-image translation. Our work differs from direct image-to-image translation in that we condition the generator on both the image and the natural language description. \subsection{Interactive Image Manipulation} \label{ssec:rwimmanp} Instead of using a single natural language sentence to manipulate images, another interesting approach is to build an interactive system that generates an image iteratively. A variation of this is image manipulation via natural language dialogue~\cite{cheng:2018}. \section{Conclusion and Future Work} In this paper, we proposed TEA-cGAN, which can manipulate images with a natural language description. We created two different scales of feature aggregation in the generator by leveraging attention.
We found this helpful for locating the description-relevant contents of the original image in a more fine-grained manner. Our experiments showed that our approach outperforms existing methods both quantitatively and qualitatively at 128 $\times$ 128 resolution, and it even generates higher-resolution images, i.e., 256 $\times$ 256, for a richer experience. In future work, we would like to manipulate images which contain multiple objects per image. \section{Acknowledgements} Aditya Mogadala is supported by the German Research Foundation (DFG) as part of SFB1102. \bibliographystyle{cas-model2-names} \section{Introduction} Two class files, namely \file{cas-sc.cls} and \file{cas-dc.cls}, were written for typesetting articles submitted to journals in Elsevier's Complex Article Service (CAS) workflow. \subsection{Usage} \begin{enumerate} \item \file{cas-sc.cls} for single column journals. \begin{vquote} \documentclass[<options>]{cas-sc} \end{vquote} \item \file{cas-dc.cls} for double column journals. \begin{vquote} \documentclass[<options>]{cas-dc} \end{vquote} \end{enumerate} Both class files have an option \verb+longmktitle+ to handle long front matter. \section{Front matter} \begin{vquote} \title [mode = title]{This is a specimen $a_b$ title} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \author[1,3]{CV Radhakrishnan}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \cormark[1] \fnmark[1] \ead{cvr_1@tug.org.in} \ead[url]{www.cvr.cc, cvr@sayahna.org} \end{vquote} \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam, The Netherlands} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{CV Rajagopal}[% role=Co-ordinator, suffix=Jr, ] \fnmark[2] \ead{cvr3@sayahna.org} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} \address[2]{Sayahna Foundation, Jagathy, Trivandrum 695014, India} \author[1,3]{Rishi T.} \cormark[2] \fnmark[1,3] \ead{rishi@stmdocs.in} \ead[URL]{www.stmdocs.in} \address[3]{STM Document Engineering Pvt Ltd., Mepukada, Malayinkil, Trivandrum 695571, India} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the first author footnote. but is common to third author as well.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \end{vquote} \begin{vquote} \nonumnote{This note has no numbers. In this work we demonstrate $a_b$ the formation Y\_1 of a new type of polariton on the interface between a cuprous oxide slab and a polystyrene micro-sphere placed on the slab. } \begin{abstract}[S U M M A R Y] This template helps you to create a properly formatted \LaTeX\ manuscript. \noindent\texttt{\textbackslash begin{abstract}} \dots \texttt{\textbackslash end{abstract}} and \verb+\begin{keyword}+ \verb+...+ \verb+\end{keyword}+ which contain the abstract and keywords respectively. Each keyword shall be separated by a \verb+\sep+ command.
\end{abstract} \begin{keywords} quadrupole exciton \sep polariton \sep \WGM \sep \BEC \end{keywords} \maketitle \end{vquote} \begin{figure} \includegraphics[width=\textwidth]{sc-sample.pdf} \caption{Single column output (classfile: cas-sc.cls).} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{dc-sample.pdf} \caption{Double column output (classfile: cas-dc.cls).} \end{figure} \subsection{Title} The \verb+\title+ command has the following options: \begin{enumerate} \item \verb+title:+ Document title \item \verb+alt:+ Alternate title \item \verb+sub:+ Sub title \item \verb+trans:+ Translated title \item \verb+transsub:+ Translated sub title \end{enumerate} \begin{vquote} \title[mode=title]{This is a title} \title[mode=alt]{This is a alternate title} \title[mode=sub]{This is a sub title} \title[mode=trans]{This is a translated title} \title[mode=transsub]{This is a translated sub title} \end{vquote} \subsection{Author} The \verb+\author+ command has the following options: \begin{enumerate} \item \verb+auid:+ Author id \item \verb+bioid:+ Biography id \item \verb+alt:+ Alternate author \item \verb+style:+ Style of author name (e.g., chinese) \item \verb+prefix:+ Prefix (e.g., Sir) \item \verb+suffix:+ Suffix \item \verb+degree:+ Degree \item \verb+role:+ Role \item \verb+orcid:+ ORCID \item \verb+collab:+ Collaboration \item \verb+anon:+ Anonymous author \item \verb+deceased:+ Deceased author \item \verb+twitter:+ Twitter account \item \verb+facebook:+ Facebook account \item \verb+linkedin:+ LinkedIn account \item \verb+plus:+ Google plus account \item \verb+gplus:+ Google plus account \end{enumerate} \begin{vquote} \author[1,3]{Author Name}[type=editor, auid=000,bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910, facebook=<facebook id>, twitter=<twitter id>, linkedin=<linkedin id>, gplus=<gplus id>] \end{vquote} \subsection{Various Marks in the Front Matter} The front matter becomes complicated due to the various kinds of notes and marks attached to the title and author names. Marks in the title are denoted by a star ($\star$) mark, footnotes by superscripted Arabic numerals, and the corresponding author by an asterisk (*) mark. \subsubsection{Title marks} A title mark can be entered by the command \verb+\tnotemark[<num>]+, and the corresponding text can be entered with the command \verb+\tnotetext[<num>]+ \verb+{<text>}+. An example: \begin{vquote} \title[mode=title]{Leveraging social media news to predict stock index movement using RNN-boost} \tnotemark[1,2] \tnotetext[1]{This document is the results of the research project funded by the National Science Foundation.} \tnotetext[2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \verb+\tnotetext+ and \verb+\tnotemark+ can appear anywhere in the front matter, but must come before the \verb+\maketitle+ command. \subsubsection{Author marks} Author names can have many kinds of marks and notes: \begin{vquote} footnote mark : \fnmark[<num>] footnote text : \fntext[<num>]{<text>} affiliation mark : \author[<num>] email : \ead{<emailid>} url : \ead[url]{<url>} corresponding author mark : \cormark[<num>] corresponding author text : \cortext[<num>]{<text>} \end{vquote} \subsubsection{Other marks} At times, authors want footnotes which leave no marks in the author names. The note text shall be listed as part of the front matter notes. The class files provide \verb+\nonumnote+ for this purpose.
The usage is \begin{vquote} \nonumnote{<text>} \end{vquote} \noindent and it should be entered anywhere before the \verb+\maketitle+ command to take effect. \subsection{Abstract and Keywords} The abstract shall be entered in an environment that starts with \verb+\begin{abstract}+ and ends with \verb+\end{abstract}+. Longer abstracts spanning more than one page are also possible, even in double column mode; the \verb+longmktitle+ option must be invoked in the class loading line for this to work smoothly. The keywords are enclosed in a \verb+{keywords}+ environment. \begin{vquote} \begin{abstract} This is a abstract. \lipsum[3] \end{abstract} \begin{keywords} First keyword \sep Second keyword \sep Third keyword \sep Fourth keyword \end{keywords} \end{vquote} \section{Main Matter} \subsection{Tables} \subsubsection{Normal tables} \begin{vquote} \begin{table} \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLL@{} } \toprule Col 1 & Col 2\\ \midrule 12345 & 12345\\ 12345 & 12345\\ 12345 & 12345\\ \bottomrule \end{tabular*} \end{table} \end{vquote} \subsubsection{Span tables} \begin{vquote} \begin{table*}[width=.9\textwidth,cols=4,pos=h] \caption{This is a test caption.} \begin{tabular*}{\tblwidth}{@{} LLLLLL@{} } \toprule Col 1 & Col 2 & Col 3 & Col4 & Col5 & Col6 & Col7\\ \midrule 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ 12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\ \bottomrule \end{tabular*} \end{table*} \end{vquote} \subsection{Figures} \subsubsection{Normal figures} \begin{vquote} \begin{figure} \centering \includegraphics[scale=.75]{Fig1.pdf} \caption{The evanescent light - $1S$ quadrupole coupling ($g_{1,l}$) scaled to the bulk exciton-photon coupling ($g_{1,2}$). The size parameter $kr_{0}$ is denoted as $x$ and the \PMS is placed directly on the cuprous oxide sample ($\delta r=0$, See also Fig. \protect\ref{FIG:2}).} \label{FIG:1} \end{figure} \end{vquote} \subsubsection{Span figures} \begin{vquote} \begin{figure*} \centering \includegraphics[width=\textwidth,height=2in]{Fig2.pdf} \caption{Schematic of formation of the evanescent polariton on linear chain of \PMS. The actual dispersion is determined by the ratio of two coupling parameters such as exciton-\WGM coupling and \WGM-\WGM coupling between the microspheres.} \label{FIG:2} \end{figure*}\end{vquote} \subsection{Theorem and theorem like environments} The CAS class files provide a few hooks to format theorems and theorem-like environments with ease. All commands and options that are used with the \verb+\newtheorem+ command work in exactly the same manner. The class files provide three commands to format theorem or theorem-like environments: \begin{enumerate} \item The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style, with an italicized font for the theorem statement, bold weight for the theorem heading, and the theorem number typeset at the right of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses.
Here is an example of the coding and output: \begin{vquote} \newtheorem{theorem}{Theorem} \begin{theorem}\label{thm} The \WGM evanescent field penetration depth into the cuprous oxide adjacent crystal is much larger than the \QE radius: \begin{equation*} \lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1} \right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6 \mbox{ \AA} \end{equation*} \end{theorem} \end{vquote} \item The \verb+\newdefinition+ command does exactly the same thing as \verb+\newtheorem+ except that the body font is upright instead of italic. See the example below: \begin{vquote} \newdefinition{definition}{Definition} \begin{definition} The bulk and evanescent polaritons in cuprous oxide are formed through the quadrupole part of the light-matter interaction: \begin{equation*} H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s} \cdot {\bf p} \end{equation*} \end{definition} \end{vquote} \item The \verb+\newproof+ command helps to define proof and custom proof environments without counters, as shown in the example code. Given below is an example of a proof-of-theorem kind. \begin{vquote} \newproof{pot}{Proof of Theorem \ref{thm}} \begin{pot} The photon part of the polariton trapped inside the \PMS moves as it would move in a micro-cavity of the effective modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it can escape through the evanescent field. This evanescent field essentially has a quantum origin and is due to tunneling through the potential caused by dielectric mismatch on the \PMS surface. Therefore, we define the \emph{evanescent} polariton (\EP) as an evanescent light - \QE coherent superposition. \end{pot} \end{vquote} \end{enumerate} \subsection{Enumerated and Itemized Lists} The CAS class files provide extended list processing macros which make the usage a bit more user-friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. The coding and the typeset copy are shown below. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.' so that the item counter will be suffixed by a period as in the optional argument. \item If you provide a closing parenthesis to the number in the optional argument, the output will have closing parenthesis for all the item counters. \item You can use `(a)' for alphabetical counter and `(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \end{vquote} \begin{vquote} \item Another one before we close the third level. \end{enumerate} \item Third item in second level. \end{enumerate} \item All list items conclude with this step. \end{enumerate} \section{Biography} The \verb+\bio+ command has the following options: \begin{enumerate} \item \verb+width:+ Width of the author photo (default is 1in). \item \verb+pos:+ Position of author photo. \end{enumerate} \begin{vquote} \bio[width=10mm,pos=l]{tuglogo.jpg} \textbf{Another Biography:} Recent experimental \cite{HARA:2005} and theoretical \cite{DEYCH:2006} studies have shown that the \WGM can travel along the chain as "heavy photons".
Therefore the \WGM acquires the spatial dispersion, and the evanescent quadrupole polariton has the form (See Fig.\ref{FIG:3}): \endbio \end{vquote} \section[CRediT...]{CRediT authorship contribution statement} Give the authorship contribution after each author as \begin{vquote} \credit{Conceptualization of this study, Methodology, Software} \end{vquote} To print the details, use \verb+\printcredits+: \begin{vquote} \author[1,3]{V. {{\=A}}nand Rawat}[auid=000, bioid=1, prefix=Sir, role=Researcher, orcid=0000-0001-7511-2910] \end{vquote} \begin{vquote} \cormark[1] \fnmark[1] \ead{cvr_1@tug.org.in} \ead[url]{www.cvr.cc, www.tug.org.in} \credit{Conceptualization of this study, Methodology, Software} \address[1]{Indian \TeX{} Users Group, Trivandrum 695014, India} \author[2,4]{Han Theh Thanh}[style=chinese] \author[2,3]{T. Rishi Nair}[role=Co-ordinator, suffix=Jr] \fnmark[2] \ead{rishi@sayahna.org} \ead[URL]{www.sayahna.org} \credit{Data curation, Writing - Original draft preparation} . . . . . . . . . \printcredits \end{vquote} \section{Bibliography} For CAS categories, two reference models are recommended: \file{model1-num-names.bst} and \file{model2-names.bst}. The former formats the reference list and citations according to a numbered scheme, whereas the latter formats them in a name-date (author-year) style. Authors are requested to choose one of these according to the journal style. These \file{.bst} files are available for download at the following location: \url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} \hfill $\Box$ \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The primary motivation for this paper is the existence of unexpected commutativity properties discovered during the last 10 years for double interchange semigroups. Kock \cite[Proposition 2.3]{Kock2007} presents a $4 \times 4$ configuration for which associativity and the interchange law imply the equality of two monomials, with the same placement of parentheses and operation symbols, but with different permutations of the arguments. We display his result both algebraically and geometrically: \begin{equation} \label{Kockdiagram} \begin{array}{c} \begin{array}{l} ( a \,\Box\, b \,\Box\, c \,\Box\, d ) \,\blacksquare\, ( e \,\Box\, f \,\Box\, g \,\Box\, h ) \,\blacksquare\, ( i \,\Box\, j \,\Box\, k \,\Box\, \ell ) \,\blacksquare\, ( m \,\Box\, n \,\Box\, p \,\Box\, q ) \equiv \\ ( a \,\Box\, b \,\Box\, c \,\Box\, d ) \,\blacksquare\, ( e \,\Box\, g \,\Box\, f \,\Box\, h ) \,\blacksquare\, ( i \,\Box\, j \,\Box\, k \,\Box\, \ell ) \,\blacksquare\, ( m \,\Box\, n \,\Box\, p \,\Box\, q ) \end{array} \\[5mm] \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 6 mm, y = 6 mm ] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(2, 0)$) {$c$}; \node [Square] at ($(3, 0)$) {$d$}; \node [Square] at ($(0,-1)$) {$e$}; \node [Square] at ($(1,-1)$) {$f$}; \node [Square] at ($(2,-1)$) {$g$}; \node [Square] at ($(3,-1)$) {$h$}; \node [Square] at ($(0,-2)$) {$i$}; \node [Square] at ($(1,-2)$) {$j$}; \node [Square] at ($(2,-2)$) {$k$}; \node [Square] at ($(3,-2)$) {$\ell$}; \node [Square] at ($(0,-3)$) {$m$}; \node [Square] at ($(1,-3)$) {$n$}; \node [Square] at ($(2,-3)$) {$p$}; \node [Square] at ($(3,-3)$) {$q$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 6 mm, y = 6 mm ] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(2, 0)$) {$c$}; \node [Square] at ($(3, 0)$) {$d$}; \node [Square] at ($(0,-1)$) {$e$}; \node [Square] at ($(1,-1)$) {$g$}; \node [Square] at ($(2,-1)$) {$f$}; \node [Square] at ($(3,-1)$) {$h$}; \node [Square] at ($(0,-2)$) {$i$}; \node [Square] at ($(1,-2)$) {$j$}; \node [Square] at ($(2,-2)$) {$k$}; \node [Square] at ($(3,-2)$) {$\ell$}; \node [Square] at ($(0,-3)$) {$m$}; \node [Square] at ($(1,-3)$) {$n$}; \node [Square] at ($(2,-3)$) {$p$}; \node [Square] at ($(3,-3)$) {$q$}; \end{tikzpicture} \end{array} \end{array} \end{equation} Note the transposition of $f$ and $g$. We use the symbol $\equiv$ as an abbreviation for the statement that the equation holds for all values of the arguments. This interplay between algebra and geometry underlies all the results in this paper. DeWolf \cite[Proposition 3.2.4]{DeWolf2013} used a similar argument with 10 variables to prove that the operations coincide in every cancellative double interchange semigroup. Bremner \& Madariaga used computer algebra to show that nine variables is the smallest number for which such a commutativity property holds. 
We display one of their results \cite[Theorem 4.1]{BM2016}; note the transposition of $e$ and $g$: \begin{equation} \label{BMdiagram} \begin{array}{c} \begin{array}{l} ( ( a \,\Box\, b ) \,\Box\, c ) \,\blacksquare\, ( ( ( d \,\Box\, ( e \,\blacksquare\, f ) ) \,\Box\, ( g \,\blacksquare\, h ) ) \,\Box\, i ) \equiv \\ ( ( a \,\Box\, b ) \,\Box\, c ) \,\blacksquare\, ( ( ( d \,\Box\, ( g \,\blacksquare\, f ) ) \,\Box\, ( e \,\blacksquare\, h ) ) \,\Box\, i ) \end{array} \\[5mm] \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 0.625 mm, y = 0.625 mm ] \draw ( 0, 0) -- (48, 0) ( 0,24) -- (24,24) ( 0,48) -- (48,48) ( 6,36) -- (24,36) ( 0, 0) -- ( 0,48) (12, 0) -- (12,48) (24, 0) -- (24,48) (48, 0) -- (48,48) ( 6,24) -- ( 6,48) (24,24) -- (48,24) ( 6,12) node {$a$} (18,12) node {$b$} (36,12) node {$c$} ( 3,36) node {$d$} ( 9,30) node {$e$} ( 9,42) node {$f$} (18,30) node {$g$} (18,42) node {$h$} (36,36) node {$i$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[ draw = black, x = 0.625 mm, y = 0.625 mm ] \draw ( 0, 0) -- (48, 0) ( 0,24) -- (24,24) ( 0,48) -- (48,48) ( 6,36) -- (24,36) ( 0, 0) -- ( 0,48) (12, 0) -- (12,48) (24, 0) -- (24,48) (48, 0) -- (48,48) ( 6,24) -- ( 6,48) (24,24) -- (48,24) ( 6,12) node {$a$} (18,12) node {$b$} (36,12) node {$c$} ( 3,36) node {$d$} (18,30) node {$e$} ( 9,42) node {$f$} ( 9,30) node {$g$} (18,42) node {$h$} (36,36) node {$i$}; \end{tikzpicture} \end{array} \end{array} \end{equation} Even though the operations are associative, we fully parenthesize monomials so that the algebraic equation corresponds exactly with its geometric realization. In this paper, we begin the classification of commutativity properties with 10 variables which are not consequences of the known results with nine variables. Following Loday \& Vallette \cite{LV2012}, we say \emph{algebraic operad} to mean an operad in the symmetric monoidal category of vector spaces over a field $\mathbb{F}$. When we say simply \emph{operad}, we mean a symmetric algebraic operad generated by two binary operations. For an earlier reference on operads and their applications, see Markl, Shnider \& Stasheff \cite{MSS2002}. For a more recent reference which emphasizes the algorithmic aspects, see Bremner \& Dotsenko \cite{BD2016}. The most common contemporary application of rectangular partitions is to VLSI (very large scale integration): the process of producing integrated circuits by the combination of thousands of transistors into a single silicon chip \cite{KLMH2011}. In microelectronics, block partitions are called \emph{floorplans}: schematic representations of the placement of the major functional components of an integrated circuit. Finding optimal floorplans subject to physical constraints leads to NP-hard problems of combinatorial optimization. An important subset consists of \emph{sliceable} floorplans; these are similar to our dyadic partitions, except that we require the bisections to be exact. Many NP-hard problems have polynomial time solutions in the sliceable case. However, sliceable floorplans are defined to exclude the possibility that four subrectangles intersect in a point, thus rendering the interchange law irrelevant. \section{Overview of results} We recall basic definitions and results from the theory of algebraic operads. We consider seven algebraic operads, each generated by two binary operations. \subsection{Nonassociative operads} The first four have nonassociative operations. 
\begin{definition} \label{freeoperad} $\mathbf{Free}$ is the free operad generated by operations ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$: we identify the basis monomials in arity $n$ with the set $\mathbb{B}_n$ of all labelled rooted complete binary plane trees with $n$ leaves (\emph{tree monomials} for short), where \emph{labelled} means that we assign an operation to each internal node (including the root), and we choose a bijection between the leaf nodes and the argument symbols $\arg_1, \dots, \arg_n$. For $n = 1$ we have only one tree, with no root and one leaf labelled $x_1$. The partial compositions $\circ_i$ in this operad are defined as usual. (We avoid using $\circ$ as an operation symbol since it conflicts with partial composition.) \end{definition} \begin{algorithm} We recall in the general case the recursive algorithm for \emph{converting a tree $T$ to a word $\mu(T)$}. Let $\mathbf{O}$ be the free nonsymmetric operad generated by operations $\Omega = \{ \, \omega_i \mid i \in I \, \}$ indexed by the set $I$, with arities assigned by the function $a\colon I \to \mathbb{N}$. The tree $\vert$ with one (leaf) node may be identified with the word $x_1$. Each operation $\omega_i$ can be identified with either (i) the word $\mu(T_i) = \omega_i(x_1,\dots,x_{a(i)})$, or (ii) the planar rooted tree $T_i$ with root labelled $\omega_i$ and $a(i)$ leaves labelled $x_1, \dots, x_{a(i)}$ from left to right. This defines $\mu(T)$ for trees with exactly one internal node (the root). Now let $T$ be a tree with at least two internal nodes (counting the root), with root labelled $\omega_i$ and with $a(i)$ children (which are leaves or roots of subtrees) denoted $T_1, \dots, T_{a(i)}$. By induction we may assume that $\mu(T_1), \dots, \mu(T_{a(i)})$ have been defined, and so we set $\mu( T ) = \omega_i( \mu(T_1), \dots, \mu( T_{a(i)} ) )$, with the subscripts of the variables changed to produce the identity permutation. By induction, this defines $\mu(T)$ for every basis tree $T$ in the free operad $\mathbf{O}$. \end{algorithm} \begin{definition} \label{interoperad} $\mathbf{Inter}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\mathrm{I} = \langle \boxplus \rangle$ generated by the interchange law (also called exchange law, medial law, entropic law): \begin{equation} \label{intlaw} \boxplus\colon\, ( a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, b ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( c \,{\,\scalebox{.67}{$\vartriangle$}\,}\, d ) - ( a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( b \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, d ) \equiv 0. \end{equation} Example \ref{exampleinterchange} gives the geometrical explanation of our symbol for this relation. \end{definition} \begin{definition} \label{bpoperad} $\mathbf{BP}$ is the operad of \emph{block} (or \emph{rectangular}) partitions of the (open) unit square $I^2$, $I = (0,1)$. To be precise, a block partition $P$ of $I^2$ is determined by a finite set $C$ of open line segments contained in $I^2$ such that $P = I^2 \setminus \bigcup C$ is the disjoint union of open subrectangles $(x_1,x_2) \times (y_1,y_2)$, called \emph{empty blocks}. The segments in $C$, which are called \emph{cuts}, must be either horizontal or vertical, that is $H = (x_1,x_2) \times \{y_0\}$ or $V = \{x_0\} \times (y_1,y_2)$, where $0 \le x_0, x_1, x_2, y_0, y_1, y_2 \le 1$ with $x_1 < x_2$, $y_1 < y_2$.
We assume that the cuts are \emph{maximal} in the sense that if two elements $H, V \in C$ intersect then one is horizontal, the other is vertical, and $H \cap V$ is a single point. $\mathbf{BP}$ has two binary operations: the \emph{horizontal} (resp.~\emph{vertical}) operation $x \rightarrow y$ (resp.~$x \uparrow y$) translates $y$ one unit to the east (resp.~north), forms the union of $x$ and translated $y$ to produce a block partition of a rectangle of width (resp.~height) two, scales this rectangle horizontally (resp.~vertically) by one-half, and produces another block partition. The operadic analogues are as follows. If $x$ is a block partition with $m$ parts ordered $x_1, \dots, x_m$ in some way, then $x$ is an $m$-ary operation: for any other block partition $y$ with $n$ parts, the partial composition $x \circ_i y$ ($1 \le i \le m$) is the result of scaling $y$ to have the same size as $x_i$, and replacing $x_i$ by the scaled partition $y$, producing a new block partition with $m{+}n{-}1$ parts. Let $\,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\,$ and $\boxminus$ denote the two block partitions with two equal parts: the first has a vertical (resp.~horizontal) cut and represents the horizontal (resp.~vertical) operation; the parts are labelled 1, 2 in the positive direction, namely east (resp.~north). These two operations form a basis for the homogeneous space $\mathbf{BP}(2)$. The original operations are then defined as follows: \begin{equation} \label{bpoperations} \begin{array}{l} x \rightarrow y = ( \, \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \circ_1 x \, ) \circ_{m+1} y = ( \, \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \circ_2 y \, ) \circ_1 x, \\[0.5mm] x \uparrow y = ( \, \boxminus \circ_1 x \, ) \circ_{m+1} y = ( \, \boxminus \circ_2 y \, ) \circ_1 x. \end{array} \end{equation} $\mathbf{BP}$ is a set operad, but we make it into an algebraic operad in the usual way (see \S\ref{vectorsetoperads}): we define operations on elements and extend to linear combinations. \end{definition} \begin{definition} \label{dbpoperad} $\mathbf{DBP}$ is the unital suboperad of $\mathbf{BP}$ generated by $\mathbf{BP}(2)$, where unital means we include the unary operation represented by $I^2$, the block partition with one part. Thus $\mathbf{DBP}$ consists of the \emph{dyadic} partitions, meaning that every $P$ in $\mathbf{DBP}$ with $n{+}1$ parts comes from some $Q$ in $\mathbf{DBP}$ with $n$ parts by exact bisection of a part of $Q$ horizontally or vertically. The free double interchange magma is the algebra over $\mathbf{DBP}$ generated by $I^2$. \end{definition} \begin{algorithm} \label{defbinarypartition} For the general dimension $d \ge 1$, a \emph{dyadic block partition} $P$ with $k$ parts of the open unit $d$-cube $I^d$, $I = (0,1)$, is constructed by setting $P_1 \leftarrow \{ I^d \}$ and performing the following steps for $i = 1, \dots, k{-}1$: \begin{itemize}[leftmargin=*] \item Choose an element $B \in P_i$ and a coordinate axis $j \in \{1,\dots,d\}$. \item Set $c \leftarrow \tfrac12(a_j{+}b_j)$ where $(a_j,b_j)$ is the projection of $B$ onto the coordinate $x_j$. \item Set $\{ B', B'' \} \leftarrow B \setminus \{ \, x \in B \mid x_j = c \, \}$: the disjoint open blocks obtained from bisecting $B$ by the hyperplane $x_j = c$. \item Set $P_{i+1} \leftarrow ( \, P_i \setminus \{ B \} \, ) \sqcup \{ B', B'' \}$: in $P_i$, replace block $B$ with blocks $B'$, $B''$. \end{itemize} Finally, set $P \leftarrow P_k$. (A small Python sketch of this construction is given below.) \end{algorithm}
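To fix ideas, here is a minimal Python sketch of Algorithm~\ref{defbinarypartition}; the representation of a block as a $d$-tuple of open intervals, and all names, are our own illustrative choices.
\begin{verbatim}
import random

# Hedged sketch of the dyadic block partition construction above: a block is
# a d-tuple of open intervals (a, b); a partition is a list of such blocks.
def dyadic_partition(d, k, rng=random.Random(0)):
    P = [tuple((0.0, 1.0) for _ in range(d))]      # P_1 = { I^d }
    for _ in range(k - 1):
        B = P.pop(rng.randrange(len(P)))           # choose a block B in P_i
        j = rng.randrange(d)                       # choose a coordinate axis j
        a, b = B[j]
        c = 0.5 * (a + b)                          # exact bisection point
        P.append(B[:j] + ((a, c),) + B[j + 1:])    # block B'
        P.append(B[:j] + ((c, b),) + B[j + 1:])    # block B''
    return P                                       # P = P_k, with k blocks

print(dyadic_partition(2, 4))   # a dyadic block partition of I^2 with 4 parts
\end{verbatim}
For $d = 2$, every partition produced in this way is dyadic in the sense of Definition~\ref{dbpoperad} and lies in the image of the geometric realization map defined below.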
\end{algorithm} \begin{definition} \label{defsubrectangle} Let $P$ be a block partition of $I^2$ of arity $n$ determined by a set $C$ of line segments. Then $P = I^2 \setminus \bigcup C$ is the disjoint union of $n$ empty blocks $B_1, \dots, B_n$ and we indicate this by writing $P = \bigsqcup B_i$. Suppose that the open rectangle $R = (x_1,x_2) \times (y_1,y_2) \subseteq I^2$ admits a block partition (in the obvious sense) into the disjoint union of a subset $B_{i_1}, \dots, B_{i_m}$ of $m$ empty blocks from $P$. In this case we say that $R$ is a \emph{subrectangle} of $P$ of arity $m$. Every empty block $B_i$ is a subrectangle of $P$ of arity 1. \end{definition} \begin{definition} \label{defgeomap} The \emph{geometric realization map} $\Gamma\colon \mathbf{Free} \to \mathbf{BP}$ is the morphism of operads defined on tree monomials as follows: $\Gamma( \,\vert\, ) = I^2$, where $\vert$ is the (unique) tree with one node (a leaf), and recursively we define \begin{equation} \begin{array}{c} \Gamma( \, T_1 {\,\scalebox{.67}{$\vartriangle$}\,} T_2 \, ) = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(T_1)}; ( 15, 5 )*+{\Gamma(T_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 10, 0 ) = "3"; ( 10, 10 ) = "4"; ( 20, 0 ) = "5"; ( 20, 10 ) = "6"; { \ar@{-} "1"; "2" }; { \ar@{-} "3"; "4" }; { \ar@{-} "5"; "6" }; { \ar@{-} "1"; "5" }; { \ar@{-} "2"; "6" }; \end{xy}} = \Gamma(T_1) \rightarrow \Gamma(T_2), \\[4mm] \Gamma( \, T_1 {\,\scalebox{.67}{$\blacktriangle$}\,} T_2 \, ) = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(T_1)}; ( 5, 15 )*+{\Gamma(T_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 0, 20 ) = "3"; ( 10, 0 ) = "4"; ( 10, 10 ) = "5"; ( 10, 20 ) = "6"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "1"; "4" }; { \ar@{-} "2"; "5" }; { \ar@{-} "3"; "6" }; \end{xy}} = \Gamma(T_1) \uparrow \Gamma(T_2). \end{array} \end{equation} \end{definition} \begin{lemma} \label{lemma1} The image of $\Gamma$ is the operad $\Gamma( \mathbf{Free} ) = \mathbf{DBP}$ of dyadic block partitions. The kernel of $\Gamma$ is the operad ideal $\mathrm{ker}(\Gamma) = \langle \boxplus \rangle$ generated by the interchange law. Hence there is an operad isomorphism $\mathbf{Inter} \cong \mathbf{DBP}$. \end{lemma} \begin{proof} The first statement is clear, the second is Lemma \ref{nasrinslemma}, and the third is an immediate consequence of the first and second. \end{proof} \begin{notation} \label{gammanotation} We write $\mathrm{I} = \mathrm{ker}(\Gamma) = \langle \boxplus \rangle$, and $\gamma\colon \mathbf{Inter} \rightarrow \mathbf{DBP}$ for the isomorphism of Lemma \ref{lemma1}. Then the geometric realization map $\Gamma = \iota \circ \gamma \circ \chi$ factors through the natural surjection $\chi \colon \mathbf{Free} \twoheadrightarrow \mathbf{Inter}$ and the inclusion $\iota \colon \mathbf{DBP} \hookrightarrow \mathbf{BP}$: \begin{equation} \mathbf{Free} \twoheadrightarrow \mathbf{Free} / \mathrm{I} = \mathbf{Free} / \langle \boxplus \rangle = \mathbf{Inter} \xrightarrow{\;\gamma\;} \mathbf{DBP} \hookrightarrow \mathbf{BP}. \end{equation} See Figure \ref{bigpicture}. \end{notation} \begin{remark} We mention but do not elaborate on the similarity between (i) the straightforward $n$-dimensional generalizations of the operads $\mathbf{BP}$ and $\mathbf{DBP}$, and (ii) the much-studied operads $E_n$ which are weakly equivalent to the topological operads of little $n$-discs and little $n$-cubes. We refer the reader to McClure \& Smith \cite{MS2004} for further details and references.
\end{remark} \subsection{Associative operads} The last three operads have associative operations. \begin{definition} \label{assocboperad} $\mathbf{AssocB}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\mathrm{A} = \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$ generated by the associative laws for two operations: \begin{equation} \label{associativelaws} \begin{array}{l} \mathrm{A}_{\wedgehor}(a,b,c) = ( \, a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, b \, ) \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c - a \,{\,\scalebox{.67}{$\vartriangle$}\,}\, ( \, b \,{\,\scalebox{.67}{$\vartriangle$}\,}\, c \, ), \\ \mathrm{A}_{\wedgever}(a,b,c) = ( \, a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, b \, ) \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c - a \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, ( \, b \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, c \, ). \end{array} \end{equation} This \emph{two-associative} operad is denoted $\mathbf{2as}$ by Loday \& Ronco \cite{LR2006}. It is clumsy to regard the basis elements of $\mathbf{AssocB}$ as cosets of binary tree monomials in $\mathbf{Free}$ modulo the ideal $\mathrm{A} = \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$. We write $\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}$ and $\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}$ for the operations in $\mathbf{AssocB}$, where the bar indicates the quotient modulo $\mathrm{A}$. To be precise, for tree monomials $x, y \in \mathbf{Free}$, we define $\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}$ and $\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}$ by these equations: \begin{equation} \begin{array}{l} \left( \, x + \mathrm{A} \, \right) \,\overline{{\,\scalebox{.67}{$\vartriangle$}\,}}\, \left( \, y + \mathrm{A} \, \right) = \left( \, x \,{\,\scalebox{.67}{$\vartriangle$}\,}\, y \, \right) + \mathrm{A}, \\ \left( \, x + \mathrm{A} \, \right) \,\overline{{\,\scalebox{.67}{$\blacktriangle$}\,}}\, \left( \, y + \mathrm{A} \, \right) = \left( \, x \,{\,\scalebox{.67}{$\blacktriangle$}\,}\, y \, \right) + \mathrm{A}. \end{array} \end{equation} \end{definition} \begin{definition} \label{assocnboperad} $\mathbf{AssocNB}$ is an isomorphic copy of $\mathbf{AssocB}$ corresponding to the following change of basis. We write $\rho \colon \mathbf{AssocB} \to \mathbf{AssocNB}$ to represent rewriting a coset representative (a binary tree) as a nonbinary (= not necessarily binary) tree. The new basis consists of the disjoint union $\{ x_1 \} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ of the isolated leaf $x_1$ and two copies of the set $\mathbb{T}$ of all labelled rooted \emph{not necessarily binary} plane trees with at least one internal node (counting the root). We assume that each internal node has at least two children, and so every tree in $\mathbb{T}$ has at least two leaves. If $T$ is a tree in $\mathbb{T}$ with root $r$, then the \emph{level} $\ell(s)$ of any internal node $s$ is the length of the unique path from $r$ to $s$ in $T$. In $\mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,}$, the root $r$ of every tree $T$ has label ${\,\scalebox{.67}{$\vartriangle$}\,}$, and the label of an internal node $s$ is ${\,\scalebox{.67}{$\vartriangle$}\,}$ (resp.~${\,\scalebox{.67}{$\blacktriangle$}\,}$) if $\ell(s)$ is even (resp.~odd). In $\mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$, the labels of the internal nodes are reversed.
If $T$ is a tree in $\mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ with $n$ leaves (so $n \ge 2$), then we include the $n!$ trees obtained from all bijections between the leaves and the argument symbols $x_1, \dots, x_n$. Lemma \ref{assoclemma} gives a precise statement of the bijection between these two bases. For further information, see Loday \& Ronco \cite[\S5]{LR2006}. \end{definition} \begin{remark} If the choice of basis is not relevant, then we write $\mathbf{Assoc}$ to represent the operad $\mathbf{AssocB} \cong \mathbf{AssocNB}$. \end{remark} \begin{definition} \label{diaoperad} $\mathbf{DIA}$ is the quotient of $\mathbf{Free}$ by the operad ideal $\langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever}, \boxplus \rangle$. This is the operad governing double interchange \emph{algebras}, which possess two associative operations satisfying the interchange law. \end{definition} \subsection{Set and vector operads} \label{vectorsetoperads} The operads $\mathbf{Inter}$, $\mathbf{AssocB}$, $\mathbf{AssocNB}$, $\mathbf{DIA}$ are defined by relations of the form $v_1 - v_2 \equiv 0$ (equivalently $v_1 \equiv v_2$) where $v_1, v_2$ are cosets of tree monomials in $\mathbf{Free}$. We could therefore work entirely with set operads, since we never need to consider linear combinations. Vector spaces and sets are connected by a pair of adjoint functors: the forgetful functor sending a vector space $V$ to its underlying set, and the left adjoint sending a set $S$ to the free vector space on $S$ (the vector space with basis $S$). The connection between vector spaces and sets is reflected in the relation between Gr\"obner bases for operads and the theory of rewriting systems: if we compute a syzygy for two tree polynomials $v_1 - v_2$ and $w_1 - w_2$, then the common multiple of the leading terms cancels, and we obtain another difference of tree monomials; similarly, from a critical pair of rewrite rules $v_1 \mapsto v_2$ and $w_1 \mapsto w_2$, we obtain another rewrite rule\footnote{We thank Vladimir Dotsenko for this clarification.}. We state our main results in terms of set operads, but strictly speaking, we work with algebraic operads. A double interchange semigroup is a module over an operad $\mathbf{DIS}$ in the category of sets; the corresponding notion over the algebraic operad $\mathbf{DIA}$ is a double interchange algebra. Our main reason for using algebraic rather than set operads is that the former theory is much better developed. For more about the relation between set and algebraic operads, see Giraudo \cite[\S1.1.2]{Giraudo2016}. \subsection{Morphisms between operads} \label{morphisms} Our goal in this paper is to understand the operad $\mathbf{DIA}$, which is the quotient of $\mathbf{Free}$ by the operad ideal generated by the associative and interchange laws. We have no convenient normal form for the basis monomials of $\mathbf{DIA}$ (that is, no convenient way to choose a canonical representative for each equivalence class in the quotient operad). As we have just seen, there is a convenient normal form when we factor out associativity but not interchange. As we will see later (Lemma \ref{nasrinslemma}), there is also a convenient normal form when we factor out interchange but not associativity: the dyadic block partitions.
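\begin{remark} As a purely illustrative aside (not part of the formal development), the normal form modulo interchange is easy to compute. The following Python sketch, in which a tree monomial is encoded as a nested tuple whose first entry is \texttt{'H'} for ${\,\scalebox{.67}{$\vartriangle$}\,}$ or \texttt{'V'} for ${\,\scalebox{.67}{$\blacktriangle$}\,}$ (this encoding is our own convention), computes the geometric realization $\Gamma$ of Definition \ref{defgeomap} as a dictionary of empty blocks, and checks that the two sides of \eqref{intlaw} have the same realization; by Lemma \ref{nasrinslemma} below, two tree monomials have the same realization if and only if they are equivalent modulo the interchange law.
\begin{verbatim}
from fractions import Fraction

def realize(tree, x0=Fraction(0), y0=Fraction(0),
            w=Fraction(1), h=Fraction(1)):
    """Dyadic block partition of a rectangle: map each leaf label
    to its empty block (x0, y0, width, height)."""
    if isinstance(tree, str):               # a leaf: one empty block
        return {tree: (x0, y0, w, h)}
    op, left, right = tree
    if op == 'H':                           # horizontal operation ->
        out = realize(left, x0, y0, w/2, h)            # western half
        out.update(realize(right, x0 + w/2, y0, w/2, h))
    else:                                   # 'V': vertical operation
        out = realize(left, x0, y0, w, h/2)            # southern half
        out.update(realize(right, x0, y0 + h/2, w, h/2))
    return out

lhs = ('V', ('H', 'a', 'b'), ('H', 'c', 'd'))   # two sides of the
rhs = ('H', ('V', 'a', 'c'), ('V', 'b', 'd'))   # interchange law
assert realize(lhs) == realize(rhs)             # same partition: boxplus
\end{verbatim}
Exact dyadic arithmetic with \texttt{Fraction} makes the comparison of the two dictionaries literal equality rather than a floating-point test. \end{remark}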
Our approach will be to use the (tree) monomial basis of the operad $\mathbf{Free}$; to these monomials, we apply rewriting rules which express associativity of each operation (from right to left, or the reverse) and the interchange law between the operations (from black to white, or the reverse). These rewritings convert one tree monomial in $\mathbf{Free}$ to another tree monomial which is equivalent to the first modulo associativity and interchange. In other words, given an element $X$ of $\mathbf{DIA}$ represented by a tree monomial $T$ in $\mathbf{Free}$, these rewritings convert $T$ to another tree monomial $T'$ which is in the same inverse image as $T$ with respect to the natural surjection $\mathbf{Free} \twoheadrightarrow \mathbf{DIA}$. We must allow undirected rewriting because of the complicated way in which associativity and interchange interact: in order to pass from $T$ to $T'$, we may need to apply associativity from left to right, then apply interchange, and then apply associativity from right to left. We present a commutative diagram of operads and morphisms in Figure \ref{bigpicture}: \begin{itemize}[leftmargin=*] \item $\alpha$ is the natural surjection from $\mathbf{Free}$ onto $\mathbf{Assoc} = \mathbf{Free} / \mathrm{A}$. \item $\chi$ is the natural surjection from $\mathbf{Free}$ onto $\mathbf{Inter} = \mathbf{Free} / \mathrm{I}$. \item $\overline{\alpha}$ is the natural surjection from $\mathbf{Inter}$ onto $\mathbf{DIA} = \mathbf{Inter} / \langle\mathrm{A}_{\wedgehor}{+}\mathrm{I},\mathrm{A}_{\wedgever}{+}\mathrm{I}\rangle$. \item $\overline{\chi}$ is the natural surjection from $\mathbf{Assoc}$ onto $\mathbf{DIA} = \mathbf{Assoc} / \langle\boxplus{+}\mathrm{A}\rangle$. \item $\overline{\chi} \circ \alpha = \overline{\alpha} \circ \chi$: the diagram commutes. \item For $\gamma$, $\iota$, $\Gamma = \iota \circ \gamma \circ \chi$ see Definition \ref{defgeomap} and Notation \ref{gammanotation}. \item For $\rho$ see Definition \ref{assocnboperad}. \end{itemize} \begin{figure}[ht] \begin{adjustbox}{center} \setlength{\fboxsep}{12pt} \fbox{ \setlength{\fboxsep}{4pt} \setlength\fboxrule{1pt} \begin{xy} ( 0, 0 )*+{\mathbf{Free}} = "free"; ( 48, -36 )*+{\mathbf{BP}} = "bp"; ( 48, -24 )*+{\mathbf{DBP}} = "dbp"; ( 48, -12 )*+{\mathbf{Inter}} = "inter"; ( 48, 0 )*+{\circlearrowleft} = "x"; ( 48, 12 )*+{\mathbf{AssocB}} = "assocb"; ( 48, 24 )*+{\mathbf{AssocNB}} = "assocnb"; ( 96, 0 )*+{\boxed{\mathbf{DIA}}} = "dia"; { \ar@{->>}^{-/\langle\mathrm{A}_{\wedgehor},\mathrm{A}_{\wedgever}\rangle\quad}_{\alpha} "free"; "assocb" }; { \ar@{->>}^{-/\langle\boxplus\rangle}_{\chi} "free"; "inter" }; { \ar@/_6mm/@{.>}_{\Gamma} "free"; "bp" }; { \ar@{->}_{\gamma}^{\text{isomorphism}} "inter"; "dbp" }; { \ar@{->>}^{-/\langle\mathrm{A}_{\wedgehor}{+}\mathrm{I},\mathrm{A}_{\wedgever}{+}\mathrm{I}\rangle\qquad}_{\overline{\alpha}} "inter"; "dia" }; { \ar@{->}^{\rho}_{\text{isomorphism}} "assocb"; "assocnb" }; { \ar@{->>}^{-/\langle\boxplus{+}\mathrm{A}\rangle}_{\overline{\chi}} "assocb"; "dia" }; { \ar@{->}^{\text{inclusion}}_{\iota} "dbp"; "bp" }; \end{xy} } \end{adjustbox} \vspace{-10pt} \caption{Big picture of operads and morphisms for rewriting monomials} \label{bigpicture} \end{figure} \subsection{Diagram chasing and commutativity} \label{diagramchasing} By a \emph{monomial} $X$ in $\mathbf{DIA}$, we mean an equivalence class of (tree) monomials in $\mathbf{Free}$, modulo the equivalence relation generated by the relations $\mathrm{A}_{\wedgehor}$, $\mathrm{A}_{\wedgever}$, $\boxplus$. 
Thus $X$ is a nonempty subset of (tree) monomials in $\mathbf{Free}$, and a representative of $X$ is simply an element of $X$. We start with a monomial $X$ in $\mathbf{DIA}$ and choose a convenient representative (tree) monomial $T \in X$. To the tree monomial $T$, we freely apply any sequence of rewrite rules of the following two types: \begin{itemize}[leftmargin=*] \item \emph{Reassociating} in either direction (left to right or right to left) with respect to either operation, ${\,\scalebox{.67}{$\vartriangle$}\,}$ or ${\,\scalebox{.67}{$\blacktriangle$}\,}$: this means applying $\alpha$ to $T$ to obtain a unique element of $\mathbf{Assoc}$, namely the coset $\alpha(T) = T + \mathrm{A}$; rewriting the binary tree $T$ as a nonbinary tree as explained in Definition \ref{assocnboperad}; and applying $\alpha^{-1}$ by choosing a different binary tree $T'$ representing the same nonbinary tree: $T + \mathrm{A} = T' + \mathrm{A}$, i.e., $\alpha(T) = \alpha(T')$. \item \emph{Interchanging} in either direction, left to right or right to left (more precisely horizontal to vertical, or vertical to horizontal, i.e., white root to black root, or black root to white root): this means applying $\chi$ to $T$ to obtain a unique element of $\mathbf{Inter}$, namely the coset $\chi(T) = T + \mathrm{I}$; rewriting the binary tree $T$ as a dyadic block partition (Lemma \ref{nasrinslemma}: interchange may only be applied in an unambiguous way to a \emph{binary} tree); and applying $\chi^{-1}$ by choosing a different binary tree $T'$ representing the same dyadic block partition: $T + \mathrm{I} = T' + \mathrm{I}$, i.e., $\chi(T) = \chi(T')$. \end{itemize} The role played by the nonbinary trees when reassociating is analogous to the role played by the dyadic block partitions when interchanging: the actual rewriting of the coset representatives (the tree monomials in $\mathbf{Free}$) takes place using $\rho$ and $\rho^{-1}$ for associativity, and $\gamma$ and $\gamma^{-1}$ for the interchange law. We point out that: \begin{itemize}[leftmargin=*] \item applying associativity, $T \mapsto T' \in \alpha^{-1}(\alpha(T))$, changes the corresponding dyadic block partition, but does not change the nonbinary tree monomial; \item applying the interchange law, $T \mapsto T' \in \chi^{-1}(\chi(T))$, changes the corresponding nonbinary tree monomial, but does not change the dyadic block partition. \end{itemize} This rewriting process is unavoidable because we do not have a well-defined normal form for elements in $\mathbf{DIA}$, but we do have easily computable normal forms for elements of $\mathbf{Free}$, $\mathbf{Assoc} = \mathbf{Free} / \langle \mathrm{A}_{\wedgehor}, \mathrm{A}_{\wedgever} \rangle$ and $\mathbf{Inter} = \mathbf{Free} / \langle \boxplus \rangle$. We apply any number of these rewrite rules in any order, and stop if and when we obtain a tree monomial $T''$ \emph{identical} to the original monomial $T$ \emph{except} for the permutation of the arguments. The equality in $\mathbf{DIA}$ of the cosets of the tree monomials $T$ and $T''$ in $\mathbf{Free}$ is a multilinear commutativity relation for double interchange algebras, or equivalently for double interchange semigroups (since we have been working exclusively with basis monomials). 
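\begin{remark} The undirected rewriting just described is easy to mechanize. The following Python sketch (ours, illustrative only; the nested-tuple encoding is as in the earlier sketch) generates all single applications of associativity and of the interchange law to a tree monomial, and explores the resulting equivalence class by breadth-first search; the search terminates because there are only finitely many tree monomials of a given arity. The final assertion reproduces the chain of rewrites in Figure \ref{rewritingexample}.
\begin{verbatim}
from collections import deque

def rewrites(t):
    """All tree monomials obtained from t by one application of
    associativity (either operation) or the interchange law."""
    if isinstance(t, str):
        return
    op, l, r = t
    if isinstance(l, tuple) and l[0] == op:       # (ab)c -> a(bc)
        yield (op, l[1], (op, l[2], r))
    if isinstance(r, tuple) and r[0] == op:       # a(bc) -> (ab)c
        yield (op, (op, l, r[1]), r[2])
    other = 'H' if op == 'V' else 'V'
    if (isinstance(l, tuple) and l[0] == other and
            isinstance(r, tuple) and r[0] == other):
        yield (other, (op, l[1], r[1]), (op, l[2], r[2]))  # interchange
    for l2 in rewrites(l):                        # recurse into children
        yield (op, l2, r)
    for r2 in rewrites(r):
        yield (op, l, r2)

def orbit(t):
    """Equivalence class of t modulo associativity and interchange."""
    seen, queue = {t}, deque([t])
    while queue:
        for v in rewrites(queue.popleft()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

start = ('H', ('V', ('V', 'a', 'b'), 'c'), ('V', ('H', 'd', 'e'), 'f'))
end   = ('V', ('H', ('H', 'a', 'd'), 'e'), ('H', ('V', 'b', 'c'), 'f'))
assert end in orbit(start)    # the rewriting of Figure rewritingexample
\end{verbatim}
A commutativity relation for a monomial $T$ is detected by finding in the orbit of $T$ a monomial with the same underlying labelled tree but a different bijection between the leaves and the arguments. \end{remark}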
\begin{figure}[ht] \setlength{\fboxsep}{10pt} \[ \boxed{ \begin{array}{l} ( ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) \;\; [\in \mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\alpha$}} \; ( ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\rho$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocNB}] \\ \xrightarrow{\makebox[12mm]{$ \rho^{-1} $}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\alpha^{-1}$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) \;\; [\in \mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\chi$}} \; ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} f) + \mathrm{I} \;\; [\in \mathbf{Inter}] \\[8pt] \xrightarrow{\makebox[12mm]{$\gamma$}} \; \adjustbox{valign=m} {\begin{xy} ( 0, 0 ) = "1"; ( 0, 12 ) = "2"; ( 0, 24 ) = "3"; ( 12, 0 ) = "4"; ( 12, 12 ) = "5"; ( 12, 24 ) = "6"; ( 24, 0 ) = "7"; ( 24, 12 ) = "8"; ( 24, 24 ) = "9"; ( 0, 18 ) = "10"; ( 12, 18 ) = "11"; ( 18, 0 ) = "12"; ( 18, 12 ) = "13"; { \ar@{-} "1"; "3" }; { \ar@{=} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{-} "2"; "8" }; { \ar@{-} "3"; "9" }; { \ar@{-} "10"; "11" }; { \ar@{-} "12"; "13" }; ( 6, 6 )*+{a} = "a"; ( 6, 15 )*+{b} = "b"; ( 6, 21 )*+{c} = "c"; ( 15, 6 )*+{d} = "d"; ( 21, 6 )*+{e} = "e"; ( 18, 18 )*+{f} = "f"; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 0, 0 ) = "1"; ( 0, 12 ) = "2"; ( 0, 24 ) = "3"; ( 12, 0 ) = "4"; ( 12, 12 ) = "5"; ( 12, 24 ) = "6"; ( 24, 0 ) = "7"; ( 24, 12 ) = "8"; ( 24, 24 ) = "9"; ( 0, 18 ) = "10"; ( 12, 18 ) = "11"; ( 18, 0 ) = "12"; ( 18, 12 ) = "13"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{=} "2"; "8" }; { \ar@{-} "3"; "9" }; { \ar@{-} "10"; "11" }; { \ar@{-} "12"; "13" }; ( 6, 6 )*+{a} = "a"; ( 6, 15 )*+{b} = "b"; ( 6, 21 )*+{c} = "c"; ( 15, 6 )*+{d} = "d"; ( 21, 6 )*+{e} = "e"; ( 18, 18 )*+{f} = "f"; \end{xy}} \;\; [\in \mathbf{DBP}] \;\; \left( \!\! \begin{tabular}{c} double line \\ denotes root \\ operation \end{tabular} \!\! 
\right) \\ \xrightarrow{\makebox[12mm]{$\gamma^{-1}$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{I} \;\; [\in \mathbf{Inter}] \\ \xrightarrow{\makebox[12mm]{$\chi^{-1}$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) \;\; [\in \mathbf{Free}] \\ \xrightarrow{\makebox[12mm]{$\alpha$}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$ \rho $}} \; ( a {\,\scalebox{.67}{$\vartriangle$}\,} d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocNB}] \\ \xrightarrow{\makebox[12mm]{$ \rho^{-1} $}} \; ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} d ) {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) + \mathrm{A} \;\; [\in \mathbf{AssocB}] \\ \xrightarrow{\makebox[12mm]{$\alpha^{-1}$}} \; ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} d ) {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) {\,\scalebox{.67}{$\vartriangle$}\,} f) \;\; [\in \mathbf{Free}] \end{array} } \] \vspace{-15pt} \caption{Example of rewriting in free double interchange semigroups} \label{rewritingexample} \end{figure} \begin{example} In Figure \ref{rewritingexample} we present a simple exercise in rewriting, which does not lead to a commutativity property (such an example would require at least nine variables), but which should suffice to illustrate the preceding discussion. For clarity, we include in square brackets the position in Figure \ref{bigpicture} of the current object. We emphasize that we never work directly with elements of $\mathbf{DIA}$. \end{example} \section{Background in categorical algebra} Most of the operadic and geometric objects studied in this paper originate in category theory; we mention Mac Lane \cite{MacLane1965}, Kelly \& Street \cite{KS1974}, Street \cite{Street1996} as mathematical references, Kr\"omer \cite{Kromer2007} for historical and philosophical aspects. \subsection{Many binary operations} Many different structures may be regarded as extensions of the notion of semigroup to the case of $d \ge 2$ binary operations. (The results of this paper concern only $d = 2$.) We give definitions for the general case $d \ge 2$ only when this requires no more space than $d = 2$. \begin{definition} \label{defmtuplesemigroup} A $d$-\emph{tuple magma} is a nonempty set $S$ with $d$ binary operations $S \times S \to S$, denoted $(a,b) \mapsto a \star_i b$ for $1 \le i \le d$. A $d$-\emph{tuple semigroup} is a $d$-tuple magma in which every operation satisfies the associative law. A $d$-\emph{tuple interchange magma} is a $d$-tuple magma in which every pair of distinct operations satisfies the interchange law. 
A $d$-\emph{tuple interchange semigroup} is a $d$-tuple semigroup in which every pair of distinct operations satisfies the interchange law. (Some authors refer to the last structure simply as ``a $d$-tuple semigroup''.) \end{definition} \begin{example} \label{exampleinterchange} Double interchange magmas have operations $\rightarrow$ (horizontal) and $\uparrow$ (vertical) related by the interchange law, which expresses the equality of two sequences of bisections which partition a square into four smaller squares: \[ ( a \rightarrow b ) \uparrow ( c \rightarrow d ) \, \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0, 0.0)$) {$a$}; \node [Square] at ($(1, 0.0)$) {$b$}; \node [Square] at ($(0,-1.2)$) {$c$}; \node [Square] at ($(1,-1.2)$) {$d$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0, 0)$) {$a$}; \node [Square] at ($(1, 0)$) {$b$}; \node [Square] at ($(0,-1)$) {$c$}; \node [Square] at ($(1,-1)$) {$d$}; \end{tikzpicture} \end{array} \equiv \begin{array}{c} \begin{tikzpicture}[draw=black, x=6 mm, y=6 mm] \node [Square] at ($(0.0, 0)$) {$a$}; \node [Square] at ($(1.2, 0)$) {$b$}; \node [Square] at ($(0.0,-1)$) {$c$}; \node [Square] at ($(1.2,-1)$) {$d$}; \end{tikzpicture} \end{array} \equiv \; ( a \uparrow c ) \rightarrow ( b \uparrow d ). \] \end{example} \subsection{The Eckmann-Hilton argument} Structures with binary operations $\rightarrow$ and $\uparrow$ satisfying the interchange law arose during the late 1950s and early 1960s, in universal algebra, algebraic topology, and double category theory. The operations are usually associative, but even without this assumption, Eckmann \& Hilton \cite{EH1962} showed that if we allow them to possess unit elements $e_\rightarrow$ and $e_\uparrow$ then the interchange law forces the units to be equal: \begin{align*} e_\rightarrow = e_\rightarrow \rightarrow e_\rightarrow &= ( e_\rightarrow \uparrow e_\uparrow ) \rightarrow ( e_\uparrow \uparrow e_\rightarrow ) \\ &= ( e_\rightarrow \rightarrow e_\uparrow ) \uparrow ( e_\uparrow \rightarrow e_\rightarrow ) = e_\uparrow \uparrow e_\uparrow = e_\uparrow. \end{align*} From this, it further follows that each operation is the opposite of the other, and that the two operations coincide: if we write $e = e_\rightarrow = e_\uparrow$ then \begin{align*} & x \rightarrow y \equiv ( e \uparrow x ) \rightarrow ( y \uparrow e ) \equiv ( e \rightarrow y ) \uparrow ( x \rightarrow e ) \equiv y \uparrow x, \\ & y \uparrow x \equiv ( y \rightarrow e ) \uparrow ( e \rightarrow x ) \equiv ( y \uparrow e ) \rightarrow ( e \uparrow x ) \equiv y \rightarrow x. \end{align*} Thus there remains \emph{one commutative operation}, which is in fact also \emph{associative}: \[ ( a b ) c \equiv ( a b ) ( e c ) \equiv ( a e ) ( b c ) \equiv a ( b c ). \] Hence we assume that \emph{at most one} of the operations possesses a unit element. \begin{theorem} Let $S$ be a $d$-tuple interchange magma with operations $\star_1, \dots, \star_d$ for $d \ge 2$. If these operations have unit elements, then the units are equal, the operations coincide, and the remaining operation is commutative and associative. \end{theorem} \begin{proof} See the paper of Eckmann \& Hilton \cite{EH1962}, especially Theorem 3.33 (page 236), the definition of $\mathbf{H}$-structure (page 241), and Theorem 4.17 (page 244).
\end{proof} \subsection{Double categories} Extension of the notion of semigroup to sets with $d \ge 2$ operations received a strong impetus in the 1960s from different approaches to two-dimensional category theory: see Ehresmann \cite{Ehresmann1963} for double categories, B\'enabou \cite{Benabou1967} for bicategories, Kelly \& Street \cite{KS1974} for 2-categories. The survey by Street \cite{Street1996} also covers higher-dimensional categories; pasting diagrams \cite{Johnson1989,Power1991} and parity complexes \cite{Street1991} arose as extensions of the interchange law to higher dimensions. For our purposes, the most relevant concept is that of double category; we mention in particular the work of Dawson \& Par\'e \cite{DP1993}. The most natural example is the double category $\mathbf{Cat}$ which has small categories as objects, functors as 1-morphisms, and natural transformations as 2-morphisms; it has two associative operations, horizontal composition of functors and vertical composition of natural transformations, which satisfy the interchange law. \begin{definition} A \emph{double category} $\mathbf{D}$ is an ordered pair of categories $( \mathbf{D}_0, \mathbf{D}_1 )$, together with functors $e \colon \mathbf{D}_0 \to \mathbf{D}_1$ and $s, t \colon \mathbf{D}_1 \to \mathbf{D}_0$. In $\mathbf{D}_0$ we denote objects by capital Latin letters $A$, $A'$, \dots (the 0-cells of $\mathbf{D}$) and morphisms by arrows labelled with lower-case italic letters $u$, $v$, \dots (the vertical 1-cells of $\mathbf{D}$). In $\mathbf{D}_1$ we denote objects by arrows labelled with lower-case italic letters $h$, $k$, \dots (the horizontal 1-cells of $\mathbf{D}$), and the morphisms by lower-case Greek letters $\alpha$, $\beta$, \dots (the 2-cells of $\mathbf{D}$). If $A$ is an object in $\mathbf{D}_0$ then $e(A)$ is the (horizontal) identity arrow on $A$. (Recall by the Eckmann-Hilton argument that identity arrows may exist in only one direction.) The functors $s$ and $t$ are the source and target: if $h$ is a horizontal arrow in $\mathbf{D}_1$ then $s(h)$ and $t(h)$ are its domain and codomain, objects in $\mathbf{D}_0$. These three functors are related by the equation $s(e(A)) = t(e(A)) = A$ for every $A$. 
For 2-cells $\alpha$, $\beta$ horizontal composition $\alpha \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \beta$ and vertical composition $\alpha \boxminus \beta$ are defined by the following diagrams and satisfy the interchange law: \begin{equation*} \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 12, 0 )*+{C} = "c"; ( 24, 0 )*+{E} = "e"; ( 0, 12 )*+{B} = "b"; ( 12, 12 )*+{D} = "d"; ( 24, 12 )*+{F} = "f"; ( 6, 1 ) = "ac"; ( 6, 11 ) = "bd"; ( 18, 1 ) = "ce"; ( 18, 11 ) = "df"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}_{v} "c"; "d" }; { \ar@{->}_{w} "e"; "f" }; { \ar@{->}_{h} "a"; "c" }; { \ar@{->}_{k} "c"; "e" }; { \ar@{->}^{\ell} "b"; "d" }; { \ar@{->}^{m} "d"; "f" }; { \ar@{=>}_{\alpha} "ac"; "bd" }; { \ar@{=>}_{\beta} "ce"; "df" }; \end{xy} } \!\!\cong \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 18, 0 )*+{E} = "e"; ( 0, 12 )*+{B} = "b"; ( 18, 12 )*+{F} = "f"; ( 9, 1 ) = "ae"; ( 9, 11 ) = "bf"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}_{w} "e"; "f" }; { \ar@{->}_{k \circ h} "a"; "e" }; { \ar@{->}^{m \circ \ell} "b"; "f" }; { \ar@{=>}_{\alpha \,\adjustbox{valign=m}{\rotatebox{90}{$\boxminus$\;}}\, \beta} "ae"; "bf" }; \end{xy} } \quad \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 0, 12 )*+{B} = "b"; ( 0, 24 )*+{C} = "c"; ( 12, 0 )*+{D} = "d"; ( 12, 12 )*+{E} = "e"; ( 12, 24 )*+{F} = "f"; ( 6, 1 ) = "ad"; ( 6, 11 ) = "be-"; ( 6, 13 ) = "be+"; ( 6, 23 ) = "cf"; { \ar@{->}^{u} "a"; "b" }; { \ar@{->}^{v} "b"; "c" }; { \ar@{->}_{w} "d"; "e" }; { \ar@{->}_{x} "e"; "f" }; { \ar@{->}_{h} "a"; "d" }; { \ar@{->}_{} "b"; "e" }; { \ar@{->}^{\ell} "c"; "f" }; { \ar@{=>}_{\alpha} "ad"; "be-" }; { \ar@{=>}_{\beta} "be+"; "cf" }; \end{xy} } \!\!\!\!\cong \adjustbox{valign=m}{% \begin{xy} ( 0, 0 )*+{A} = "a"; ( 0, 12 )*+{C} = "c"; ( 18, 0 )*+{D} = "d"; ( 18, 12 )*+{F} = "f"; ( 9, 1 ) = "h"; ( 9, 11 ) = "\ell"; { \ar@{->}_{h} "a"; "d" }; { \ar@{->}^{\ell} "c"; "f" }; { \ar@{->}^{v \circ u} "a"; "c" }; { \ar@{->}_{x \circ w} "d"; "f" }; { \ar@{=>}_{\alpha \boxminus \beta} "h"; "\ell" }; \end{xy} } \end{equation*} A monoid may be viewed as a category with one object; this restriction guarantees that all morphisms are composable. Similarly, a double interchange semigroup may be viewed as a double category with one object. If we retain associativity and omit interchange, then we obtain a sesquicategory; see Stell \cite{Stell1995}. \end{definition} \subsection{Endomorphism PROPs} In the category of vector spaces and linear maps over a field $\mathbb{F}$, the endomorphism PROP of $V$ is the bigraded direct sum \[ \mathbf{End}(V) = \bigoplus_{p, q \ge 0} \mathbf{End}(V)^{p,q} = \bigoplus_{p, q \ge 0} \mathbf{Lin}( V^{\otimes p}, V^{\otimes q} ), \] where $\mathbf{Lin}( V^{\otimes p}, V^{\otimes q} )$ is the vector space of all linear maps $V^{\otimes p} \longrightarrow V^{\otimes q}$. On $\mathbf{End}(V)$ there are two natural bilinear operations: \begin{itemize}[leftmargin=*] \item The horizontal product: for $f\colon V^{\otimes p} \longrightarrow V^{\otimes q}$ and $g\colon V^{\otimes r} \longrightarrow V^{\otimes s}$ we define the operation $\otimes\colon \mathbf{End}(V)^{p,q} \otimes \mathbf{End}(V)^{r,s} \longrightarrow \mathbf{End}(V)^{p+r,q+s}$ as follows: \[ f \otimes g \colon V^{\otimes (p+r)} \cong V^{\otimes p} \otimes V^{\otimes r} \longrightarrow V^{\otimes q} \otimes V^{\otimes s} \cong V^{\otimes (q+s)}. 
\] \item The vertical product: for $f\colon V^{\otimes p} \longrightarrow V^{\otimes q}$ and $g\colon V^{\otimes q} \longrightarrow V^{\otimes r}$ we define the operation $\circ \colon \mathbf{End}(V)^{p,q} \otimes \mathbf{End}(V)^{q,r} \longrightarrow \mathbf{End}(V)^{p,r}$ as follows: \[ g \circ f \colon V^{\otimes p} \longrightarrow V^{\otimes r}. \] \end{itemize} These two operations satisfy the interchange law. If $f\colon V \rightarrow V'$ and $g\colon W \rightarrow W'$ then $f \otimes g \colon V \otimes W \longrightarrow V' \otimes W'$ is defined by interchange between $\otimes$ and function \emph{evaluation}: $( f \otimes g )( v \otimes w ) \equiv f(v) \otimes g(w)$. If $f'\colon V' \rightarrow V''$ and $g'\colon W' \rightarrow W''$ then composition of tensor products of maps is defined by interchange between $\otimes$ and function \emph{composition}: $( f' \otimes g' ) \circ ( f \otimes g ) \equiv ( f' \circ f ) \otimes ( g' \circ g )$. \subsection{Tree sequences and Thompson's group} We consider the group of symmetries of the set of all dyadic partitions of the open unit interval $I = (0,1)$. \begin{definition} \label{defdyadicsubset} A number $x \in I$ is \emph{dyadic of level $b$} if $x = a 2^{-b}$ for positive integers $a, b$ where $a$ is \emph{odd} and $1 \le a \le 2^b{-}1$. A dyadic subset $C \subset I$ is a \emph{tree sequence} (or \emph{dyadic partition}) if $C$ is obtained from $I$ by a sequence of (exact) bisections of open subintervals. (Thus $C$ is the image of an unlabelled plane rooted complete binary tree under the one-dimensional geometric realization map.) For every $a 2^{-b} \in C$, exactly one of $a{-}1$, $a{+}1$ is twice an odd number, say $2a'$, and the other is divisible by 4. Then a dyadic subset is a tree sequence if and only if every $x = a 2^{-b} \in C$ with $b \ge 2$ satisfies $p(x) = a' 2^{-b+1} \in C$; that is, every $x \in C$ other than the root $\tfrac12$ has its tree parent in $C$. \end{definition} \begin{definition} Let $f$ be a homeomorphism of $[0,1]$ which fixes the endpoints and is piecewise linear. Assume that the subset of $(0,1)$ at which $f$ is not differentiable is a tree sequence, and that at all other interior points $f'(x)$ is a power of 2. The set of all such $f$ is a group under function composition, called \emph{Thompson's group} $F$. For further information, see Cannon et al.~\cite{CFP1996}. \end{definition} Let $A = \{ a_1, \dots, a_n \}$ and $B = \{ b_1, \dots, b_n \}$ be (strictly increasing) tree sequences of size $n$ partitioning $(0,1)$ into $n{+}1$ subintervals. We have $f(A) = B$ where $f \in F$ is linear on each subinterval and satisfies $f(a_i) = b_i$ for $1 \le i \le n$. Thus $F$ describes transformations from one rooted binary tree to another. Plane rooted complete binary trees with $n$ internal nodes are in bijection with association types for nonassociative products of $n{+}1$ factors. Hence, we may also regard $F$ as consisting of transformations from one association type to another; in this case, we call $f \in F$ a \emph{reassociation} of the parentheses. We display the bijection between tree sequences and association types for arities $\le 5$ in Figure \ref{treesequences}.
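\begin{remark} The interchange law in $\mathbf{End}(V)$ is a statement about matrices which can be checked numerically. In the following Python sketch (ours, illustrative only) linear maps are matrices, the horizontal product is the Kronecker product, and the vertical product is matrix composition; the asserted identity is the mixed-product property, that is, the interchange law $( f' \otimes g' ) \circ ( f \otimes g ) = ( f' \circ f ) \otimes ( g' \circ g )$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
f1, f2 = rng.random((4, 3)), rng.random((2, 4))  # f2 o f1 : R^3 -> R^2
g1, g2 = rng.random((5, 2)), rng.random((3, 5))  # g2 o g1 : R^2 -> R^3

horizontal = np.kron           # tensor product of linear maps
vertical = np.matmul           # composition of linear maps

lhs = vertical(horizontal(f2, g2), horizontal(f1, g1))
rhs = horizontal(vertical(f2, f1), vertical(g2, g1))
assert np.allclose(lhs, rhs)   # the interchange law in End(V)
\end{verbatim}
\end{remark}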
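\begin{remark} As a small illustration (ours) of the preceding paragraph: the reassociation $((ab)c)d \mapsto a(b(cd))$ is realized by the element of Thompson's group $F$ sending the tree sequence of the first association type to that of the second. The sketch below computes tree sequences by the one-dimensional geometric realization and builds the piecewise linear map; exact dyadic arithmetic makes it evident that all slopes are powers of $2$.
\begin{verbatim}
from fractions import Fraction

def treeseq(t, a=Fraction(0), b=Fraction(1)):
    """Cut points of the dyadic partition of (a,b) realizing an
    association type; t is a leaf (any string) or a pair (l, r)."""
    if not isinstance(t, tuple):
        return []
    m = (a + b) / 2
    return treeseq(t[0], a, m) + [m] + treeseq(t[1], m, b)

def thompson(A, B):
    """The f in F with f(A) = B, linear between breakpoints."""
    xs = [Fraction(0)] + A + [Fraction(1)]
    ys = [Fraction(0)] + B + [Fraction(1)]
    def f(t):
        for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
            if x0 <= t <= x1:
                return y0 + (t - x0) * (y1 - y0) / (x1 - x0)
    return f

A = treeseq(((('a', 'b'), 'c'), 'd'))   # ((ab)c)d: 1/8, 1/4, 1/2
B = treeseq(('a', ('b', ('c', 'd'))))   # a(b(cd)): 1/2, 3/4, 7/8
f = thompson(A, B)
assert [f(x) for x in A] == B           # slopes: 4, 2, 1/2, 1/4
\end{verbatim}
\end{remark}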
\begin{figure}[ht] {\footnotesize \[ \begin{array}{l@{\;}l@{\quad}l@{\;}l@{\,}} \line(1,0){128} & a & \line(1,0){64}\line(0,1){6}\line(1,0){64} & ab \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & (ab)c & \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & a(bc) \\ \line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & ((ab)c)d & \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & (a(bc))d \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & (ab)(cd) & \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & a((bc)d) \\ \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & a(b(cd)) \\ \line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & (((ab)c)d)e & \line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){64} & ((a(bc))d)e \\ \line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & ((ab)(cd))e & \line(1,0){32}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){64} & (a((bc)d))e \\ \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){64} & (a(b(cd)))e & \line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & ((ab)c)(de) \\ \line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32} & (a(bc))(de) & \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & (ab)((cd)e) \\ \line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & (ab)(c(de)) & \line(1,0){64}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){32} & a(((bc)d)e) \\ \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){32} & a((b(cd))e) & \line(1,0){64}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){16} & a((bc)(de)) \\ \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){16} & a(b((cd)e)) & \line(1,0){64}\line(0,1){6}\line(1,0){32}\line(0,1){6}\line(1,0){16}\line(0,1){6}\line(1,0){8}\line(0,1){6}\line(1,0){8} & a(b(c(de))) \end{array} \]} \vspace{-5mm} \caption{Tree sequences and association types} \label{treesequences} \end{figure} \section{Preliminary results on commutativity relations} \subsection{Lemmas on associativity and interchange} For $n \ge 1$, the tree monomial basis of $\mathbf{Free}(n)$ is the set $\mathbb{B}_n$ of all complete rooted binary plane trees with $n$ leaves, with internal nodes labelled ${\,\scalebox{.67}{$\vartriangle$}\,}$ or ${\,\scalebox{.67}{$\blacktriangle$}\,}$, and leaves labelled by a permutation of $x_1, \dots, x_n$ (Definition \ref{freeoperad}). 
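\begin{remark} The enumerations used later in this section are easy to confirm by machine. The following Python sketch (ours, purely illustrative; the encoding of trees by nested tuples is an assumption of the sketch) enumerates the trees in $\mathbb{B}_n$ with leaf labels ignored, merges nested nodes with equal labels to obtain the alternating nonbinary trees described next, and recovers both the $40$ monomials of arity $4$ appearing in the proof of Lemma \ref{nasrinslemma} and the large Schr\"oder numbers.
\begin{verbatim}
def btrees(n):
    """Complete binary plane trees with n unlabelled leaves and
    internal nodes coloured 'H' or 'V'."""
    if n == 1:
        return ['*']
    return [(op, l, r) for i in range(1, n)
            for l in btrees(i) for r in btrees(n - i)
            for op in ('H', 'V')]

def flatten(t):
    """Merge nested nodes with equal labels: the resulting nonbinary
    trees have alternating labels along every path from the root."""
    if t == '*':
        return '*'
    op = t[0]
    def args(s):
        if isinstance(s, tuple) and s[0] == op:
            return args(s[1]) + args(s[2])
        return [flatten(s)]
    return (op,) + tuple(args(t[1]) + args(t[2]))

assert len(btrees(4)) == 40            # basis of Free(4), labels ignored
assert [len({flatten(t) for t in btrees(n)}) for n in range(1, 6)] \
    == [1, 2, 6, 22, 90]               # large Schroeder numbers
\end{verbatim}
\end{remark}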
For $\mathbf{Assoc}$, either we use equivalence classes (under double associativity) of binary trees as basis monomials, or (more conveniently) we use the basis $\mathbb{NB} = \{ x_1 \} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\vartriangle$}\,} \sqcup \mathbb{T}_{\,\scalebox{.67}{$\blacktriangle$}\,}$ of labelled rooted (not necessarily binary) plane trees with \emph{alternating} labels ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ on internal nodes (Definitions \ref{assocboperad}, \ref{assocnboperad}). \begin{lemma} \label{assoclemma} A basis for $\mathbf{Assoc}(n)$ is the set $\mathbb{NB}_n$ of all trees in $\mathbb{NB}$ with $n$ leaves. \end{lemma} \begin{proof} We give an algorithm for converting a tree $T \in \mathbb{B}_n$ into a tree $\alpha(T) \in \mathbb{NB}_n$. We omit the (trivial but tedious) details of the proof that $\alpha$ is surjective, and that for any tree $U \in \mathbb{NB}_n$, the inverse image $\alpha^{-1}(U) \subseteq \mathbb{B}_n$ consists of a single equivalence class for the congruence on $\mathbb{B}_n$ defined by the consequences of the associativity relations $\mathrm{A}_{\wedgehor}$, $\mathrm{A}_{\wedgever}$ of equation \eqref{associativelaws}. We define $\alpha$ by the following diagrams, which indicate that for every $T \in \mathbb{B}_n$, and \emph{every} internal node labelled ${\,\scalebox{.67}{$\vartriangle$}\,}$, the subtree of $T$ with that node as root is rewritten as indicated, obtaining a tree $\alpha(T) \in \mathbb{NB}_n$: \[ \begin{array}{c@{\qquad\qquad}c} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 12 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; \end{xy} } & \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 8 )*+{T_1} = "t1"; ( 4, 8 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \\[9mm] \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} 
"root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow{\;\;\;\alpha\;\;\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 8 )*+{T_3} = "t3"; ( 12, 8 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; \end{xy} } & \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \xrightarrow[\text{no change}]{\;\alpha\;} \adjustbox{valign=m}{ \begin{xy} ( 6, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 2, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "l"; ( 10, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 0 )*+{T_1} = "t1"; ( 4, 0 )*+{T_2} = "t2"; ( 8, 0 )*+{T_3} = "t3"; ( 12, 0 )*+{T_4} = "t4"; { \ar@{-} "root"; "l" }; { \ar@{-} "root"; "r" }; { \ar@{-} "l"; "t1" }; { \ar@{-} "l"; "t2" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \end{array} \] Switching ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ throughout defines $\alpha$ for subtrees with roots labelled ${\,\scalebox{.67}{$\blacktriangle$}\,}$. \end{proof} Gu et al.~\cite{GLM2008} study various classes of binary trees whose internal nodes are labelled white or black; however, none of their results coincides with our situation. The Knuth rotation correspondence is similar but not identical to our bijection $\alpha$ between binary and nonbinary trees; see Ebrahimi-Fard \& Manchon \cite[\S2]{EFM2014}. For $n \ge 1$, consider the graph $G_n$ whose vertex set is $\mathbb{B}_n$; the size of this set is the large Schr\"oder numbers (OEIS A006318): 1, 2, 6, 22, 90, 394, 1806, \dots. In $G_n$ there is an edge joining tree monomials $v, w$ if and only if $w$ may be obtained from $v$ by one application of the interchange law. The number of \emph{isolated} vertices in $G_n$ \cite{BM2016} is the sequence (OEIS A078482) 1, 2, 6, 20, 70, 254, 948, \dots. This is also the number of planar guillotine partitions of $I^2$ which avoid a certain nondyadic block partition with four parts (equivalently, there is no way to apply the interchange law); see Asinowski et al.~\cite[\S6.2, Remark 1]{ABBMP2013}. \begin{notation} For monomials $m_1, m_2 \in \mathbf{Free}(n)$ with $n \ge 4$, we write $m_1 \equiv m_2$ if and only if $m_1$ and $m_2$ can be obtained from the two sides of the interchange law \eqref{intlaw} by the same sequence of partial compositions. We write $m_1 \sim m_2$ if and only if $\Gamma( m_1 ) = \Gamma( m_2 )$, where $\Gamma$ is the geometric realization map (Definition \ref{defgeomap}). \end{notation} \begin{lemma} \label{nasrinslemma} The equivalence relations $\sim$ and $\equiv$ coincide. That is, $\sim$ is generated by the consequences in arity $n$ of the interchange law \eqref{intlaw}. \end{lemma} \begin{proof} For $n = 1,2,3$, the map $\Gamma$ is injective, so there is nothing to prove. 
Now suppose that $n \ge 4$ and that $m_1, m_2 \in \mathbf{Free}(n)$ satisfy $m_1 \sim m_2$; thus for some dyadic block partition $P \in \mathbf{DBP}(n)$ we have $m_1, m_2 \in \Gamma^{-1}(P)$. For $n = 4$, the dihedral group of symmetries of the square acts on the basis of 40 tree monomials; the generators are replacing ${\,\scalebox{.67}{$\vartriangle$}\,}$ (resp.~${\,\scalebox{.67}{$\blacktriangle$}\,}$) by the opposite operation and transposing the operations. In the following argument, we omit the permutations of the indeterminates, but the reasoning remains valid for a symmetric operad. There are nine orbits, of sizes two (twice), four (five times), eight (twice). For each orbit, we choose an orbit representative and display its image under $\Gamma$ in Figure \ref{rectangularpartitions}. The dihedral group also acts in the obvious way on these nine dyadic block partitions. In every case except the first, the size of the orbit generated by the block partition equals the size of the orbit generated by the tree monomial. The first block partition $\boxplus$ is fixed by all 8 symmetries of the square, and the two monomials in $\Gamma^{-1}(\boxplus)$ are the two terms of the interchange law \eqref{intlaw}. This is the only failure of injectivity in arity 4. \begin{figure}[ht] \footnotesize \[ \begin{array}{c} \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,0) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0.375) -- (1.5,0.375) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.3125) -- (1.5,1.3125) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,1.125) -- (1.5,1.125) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,1.125) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.5625) -- (1.5,0.5625) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \\[3mm] \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (0.75,0.375) -- (0.75,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,0.375) -- (1.5,0.375) (0,0.75) -- (1.5,0.75) (0,1.5) -- (1.5,1.5) (0,0) -- (0,1.5) (0.75,0.75) -- (0.75,1.5) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0) -- (0,1.5) (0.75,0) -- (0.75,0.75) (0.375,0) -- (0.375,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \qquad \begin{tikzpicture} \draw (0,0) -- (1.5,0) (0,1.5) -- (1.5,1.5) (0,0.75) -- (1.5,0.75) (0,0.375) -- (0.75,0.375) (0,0) -- (0,1.5) (0.75,0) -- (0.75,0.75) (1.5,0) -- (1.5,1.5); \end{tikzpicture} \end{array} \] \vspace{-5mm} \caption{Orbit representatives for dihedral group in arity 4} \label{rectangularpartitions} \end{figure} Assume that $n \ge 5$ and that $\sim$ and $\equiv$ coincide on $\mathbf{Free}(k)$ for $k < n$.
Clearly any monomial $m \in \mathbf{Free}(n)$ has the form $m = m_1 \ast m_2$ for some $m_1 \in \mathbf{Free}(k_1)$, $m_2 \in \mathbf{Free}(k_2)$ where $k_1, k_2 < n$, $k_1 + k_2 = n$, and $\ast \in \{ {\,\scalebox{.67}{$\vartriangle$}\,}, {\,\scalebox{.67}{$\blacktriangle$}\,} \}$. Consider a dyadic block partition $P \in \mathbf{DBP}(n)$. There are three cases: \emph{Case 1}. Assume $P$ contains the horizontal bisection of $I^2$, but the two resulting parts do not both have vertical bisections. Let $x, y \in \mathbf{Free}(n)$ be tree monomials in $\Gamma^{-1}(P)$, so $x \sim y$. By assumption, we have $x = x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2$ where $x_i = x'_i {\,\scalebox{.67}{$\vartriangle$}\,} x''_i$ for at most one $i \in \{1,2\}$; and the same for $y$. Since $\Gamma$ is an operad morphism, it follows from $\Gamma( x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2 ) = \Gamma( y_1 {\,\scalebox{.67}{$\blacktriangle$}\,} y_2 )$ that $\Gamma( x_1 ) \uparrow \Gamma( x_2 ) = \Gamma( y_1 ) \uparrow \Gamma( y_2 )$. It is geometrically clear that $\Gamma( x_i ) = \Gamma( y_i )$ for $i \in \{1,2\}$, and this implies that $x_i$ and $y_i$ have the same arity $k_i$. Hence $x_i \sim y_i$, and by induction $x_i \equiv y_i$. Therefore $x \equiv y$. \emph{Case 2}. Assume $P$ contains the vertical bisection of $I^2$, but the two resulting parts do not both have horizontal bisections. The argument is the same as Case 1 with ${\,\scalebox{.67}{$\vartriangle$}\,}$ and ${\,\scalebox{.67}{$\blacktriangle$}\,}$ transposed; this leaves the interchange law \eqref{intlaw} unchanged. \emph{Case 3}. Assume $P$ contains both horizontal and vertical bisections of $I^2$. In addition to the possibilities in Cases 1 and 2, there are two different factorizations for each monomial $x, y \in \Gamma^{-1}(P)$ into products of four factors. Using both algebraic and geometric notation, we have: \begin{align*} & x = x_1 {\,\scalebox{.67}{$\blacktriangle$}\,} x_2 = ( z_1 {\,\scalebox{.67}{$\vartriangle$}\,} z_2 ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( z_3 {\,\scalebox{.67}{$\vartriangle$}\,} z_4 ) \stackrel{\boxplus}{=} ( z_1 {\,\scalebox{.67}{$\blacktriangle$}\,} z_3 ) {\,\scalebox{.67}{$\vartriangle$}\,} ( z_2 {\,\scalebox{.67}{$\blacktriangle$}\,} z_4 ) = y_1 {\,\scalebox{.67}{$\vartriangle$}\,} y_2 = y, \\ & \Gamma(x) = \adjustbox{valign=m} {\begin{xy} ( 15, 5 )*+{\Gamma(x_1)}; ( 15, 15 )*+{\Gamma(x_2)}; ( 10, 0 ) = "1"; ( 10, 10 ) = "2"; ( 10, 20 ) = "3"; ( 20, 0 ) = "4"; ( 20, 10 ) = "5"; ( 20, 20 ) = "6"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "1"; "4" }; { \ar@{-} "2"; "5" }; { \ar@{-} "3"; "6" }; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(z_1)}; ( 5, 15 )*+{\Gamma(z_3)}; ( 15, 5 )*+{\Gamma(z_2)}; ( 15, 15 )*+{\Gamma(z_4)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 0, 20 ) = "3"; ( 10, 0 ) = "4"; ( 10, 10 ) = "5"; ( 10, 20 ) = "6"; ( 20, 0 ) = "7"; ( 20, 10 ) = "8"; ( 20, 20 ) = "9"; { \ar@{-} "1"; "3" }; { \ar@{-} "4"; "6" }; { \ar@{-} "7"; "9" }; { \ar@{-} "1"; "7" }; { \ar@{-} "2"; "8" }; { \ar@{-} "3"; "9" }; \end{xy}} = \adjustbox{valign=m} {\begin{xy} ( 5, 5 )*+{\Gamma(y_1)}; ( 15, 5 )*+{\Gamma(y_2)}; ( 0, 0 ) = "1"; ( 0, 10 ) = "2"; ( 10, 0 ) = "3"; ( 10, 10 ) = "4"; ( 20, 0 ) = "5"; ( 20, 10 ) = "6"; { \ar@{-} "1"; "2" }; { \ar@{-} "3"; "4" }; { \ar@{-} "5"; "6" }; { \ar@{-} "1"; "5" }; { \ar@{-} "2"; "6" }; \end{xy}} = \Gamma(y).
\end{align*} If $x \sim y$ then either (i) the claim follows from the equivalence of factors in lower arity as in Cases 1 and 2, or (ii) the claim follows from an application of the interchange law in arity $n$ as indicated in the last two equations. \end{proof} For a generalization of Lemma \ref{nasrinslemma} to $d \ge 2$ nonassociative operations, see \cite{BD2017}. \subsection{Cuts and slices} Recall the notions of empty blocks and subrectangles in a block partition from Definitions \ref{bpoperad} and \ref{defsubrectangle}. \begin{definition} Let $P$ be a block partition of $I^2$ and $R$ a subrectangle of $P$. By a \emph{main cut} in $R$ we mean a horizontal or vertical bisection of $R$. Every subrectangle has at most two main cuts; the empty block is the only block partition with no main cut. Suppose that a main cut partitions $R$ into subrectangles $R_1$ and $R_2$. If either $R_1$ or $R_2$ has a main cut parallel to the main cut of $R$, this is called a \emph{primary cut} in $R$. This definition extends as follows: if the subrectangle $S$ of $R$ is one of the subrectangles obtained from a sequence of cuts all of which are parallel to a main cut of $R$ then a main cut of $S$ is a primary cut of $R$. In a given direction, we include the main cut of $R$ as a primary cut. Let $C_1, \dots, C_\ell$ be all the primary cuts of $R$ parallel to a given main cut $C_i$ of $R$ ($1 \le i \le \ell$) in their natural order (bottom to top, or left to right) so that there is no primary cut between $C_j$ and $C_{j+1}$ for $1 \le j \le \ell{-}1$. Define the artificial ``cuts'' $C_0$ and $C_{\ell+1}$ to be the bottom and top (or left and right) sides of $R$. We write $S_j$ for the $j$-th \emph{slice} of $R$ parallel to the given main cut; that is, the subrectangle between $C_{j-1}$ and $C_j$ for $1 \le j \le \ell{+}1$. \end{definition} \begin{definition} Let $m$ be a monomial of arity $n$ in the operad $\mathbf{Free}$. We say that $m$ \emph{admits a commutativity relation} if for some transposition $(ij) \in S_n$ ($i < j$), the following relation holds for the corresponding cosets in $\mathbf{DIA}$: \[ m(x_1,\dots,x_i,\dots,x_j,\dots,x_n) \equiv m(x_1,\dots,x_j,\dots,x_i,\dots,x_n). \] \end{definition} We emphasize (referring to the commutative diagram of Figure \ref{bigpicture}) that the proof of a commutativity property for the monomial $m$ consists of a sequence of applications of associativity and the interchange law starting from $m$ and ending with the same pattern of parentheses and operations but with a different permutation. \begin{proposition} \label{twomaincuts} Let $m$ be a tree monomial in $\mathbf{Free}$ which admits a commutativity relation. Assume that this commutativity relation is not the result of operad partial composition with a commutativity relation of lower arity, either from (i) a commutativity relation holding in a proper factor of $m$, or (ii) a commutativity relation holding in a proper quotient of $m$, by which we mean substitution of the same decomposable factor for the same indecomposable argument in both sides of a commutativity relation of lower arity. If $P = \Gamma(m)$ is the corresponding dyadic block partition of $I^2$ then $P$ contains both of the main cuts (horizontal and vertical); that is, it must be possible to apply the interchange law as a rewrite rule at the root of the tree monomial $m$. 
\end{proposition} \begin{proof} Any dyadic block partition $P$ of $I^2$ has at least one main cut, corresponding to the root of the tree monomial $m$ for which $P = \Gamma(m)$. Transposing the $x$ and $y$ axes if necessary (this corresponds to switching the horizontal and vertical operation symbols in the monomial $m$), we may assume that $P$ contains the vertical main cut, corresponding to the operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ in the monomial $m = m_1 {\,\scalebox{.67}{$\vartriangle$}\,} m_2$: \[ P = \begin{array}{|c|c|} \hline P_1 & P_2 \\ \hline \end{array} \] Let $P_1 = \Gamma(m_1)$ and $P_2 = \Gamma(m_2)$ be the dyadic block partitions of $I^2$ corresponding to $m_1$ and $m_2$. If both $P_1$ and $P_2$ have the horizontal main cut, then we are done, since these two cuts combine to produce the horizontal main cut for $P$: \[ P = \begin{array}{|c|c|} \hline P''_1 & P''_2 \\ \hline P'_1 & P'_2 \\ \hline \end{array} \] Otherwise, at most one of $P_1$ and $P_2$ has the horizontal main cut. Reflecting in the vertical line $x = \tfrac12$ if necessary (this corresponds to replacing the horizontal operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ by its opposite throughout the tree monomial $m$), we may assume that $P_1$ does \emph{not} have the horizontal main cut. Then either $P_1$ has the vertical main cut, or $P_1$ has no main cut (so that $P_1$ is an empty block): \[ P = \begin{array}{|c|c|c|} \hline P'_1 & P''_1 & P_2 \\ \hline \end{array} \qquad \text{or} \qquad P = \begin{array}{|c|c|} \hline P_1 & P_2 \\ \hline \end{array} \] In either case, $P_1$ is the union of $k \ge 1$ consecutive vertical slices $S_1, \dots, S_k$ from left to right, where we assume that $k$ is as large as possible so that the slices are as thin as possible. It follows that each of these vertical slices either is an empty block or has only one main cut which is horizontal: \[ P = \begin{array}{|c|c|c|c|} \hline S_1 & \cdots & S_k & P_2 \\ \hline \end{array} \] Therefore, in the monomial $m_1$ for which $P_1 = \Gamma( m_1 )$, each of these vertical slices corresponds either to an indecomposable indeterminate $x_j$ or to a decomposable element $t {\,\scalebox{.67}{$\blacktriangle$}\,} u$ whose root operation is the vertical operation (corresponding to the horizontal main cut). By assumption, $P_1$ does not have the horizontal main cut, and so at least one of the vertical slices $S_j$ does not have the horizontal main cut; we choose $j$ to be as small as possible, thereby selecting the leftmost vertical slice without the horizontal main cut: \[ P = \begin{array}{|c|c|c|c|c|c|} \hline S_1 & \cdots & x_j & \cdots & S_k & P_2 \\ \hline \end{array} \] By maximality of the choice of $k$, the vertical slice $S_j$ does not have the vertical main cut either. Thus $S_j$ has no main cut, and hence $S_j$ is the empty block, and so in the monomial $m_1$, the vertical slice $S_j$ corresponds to an indecomposable indeterminate $x_j$. Therefore $m$ must have the following form, where $v$ and/or $w$ may be absent (that is, $v {\,\scalebox{.67}{$\vartriangle$}\,} x_j {\,\scalebox{.67}{$\vartriangle$}\,} w$ may be $x_j {\,\scalebox{.67}{$\vartriangle$}\,} w$ or $v {\,\scalebox{.67}{$\vartriangle$}\,} x_j$ or simply $x_j$): \begin{equation} \label{vwequation} m = m_1 {\,\scalebox{.67}{$\vartriangle$}\,} m_2 = v {\,\scalebox{.67}{$\vartriangle$}\,} x_j {\,\scalebox{.67}{$\vartriangle$}\,} w {\,\scalebox{.67}{$\vartriangle$}\,} m_2.
\end{equation} (We may omit parentheses since the operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ is associative.) If both $v$ and $w$ are absent, then $m_1 = x_j$ and so $m = x_j {\,\scalebox{.67}{$\vartriangle$}\,} m_2$. In this case, it is clear that the only way in which the interchange law can be applied as a rewrite rule to $m$ is within the submonomial $m_2$. But this implies that any commutativity relation which holds for $m$ is a consequence of a commutativity relation for $m_2$, contradicting our assumption. If $x_j$ is not the only argument in $m_1$ then there is at least one factor $v$ or $w$ on the left or right side of $x_j$ in equation \eqref{vwequation}. We want to be able to apply the interchange law as a rewrite rule in a way which involves all of $m$; otherwise, any commutativity relation which holds for $m$ must be a consequence of a commutativity relation for a proper submonomial, contradicting our assumption. Let us write the monomial \eqref{vwequation} as a tree monomial; it has the form \begin{equation} \label{vwtree1} \adjustbox{valign=m}{ \begin{xy} ( 12, 12 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 0, 0 )*+{\boxed{v}} = "t1"; ( 8, 0 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w}} = "t3"; ( 24, 0 )*+{\boxed{m_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "t3" }; { \ar@{-} "root"; "t4" }; \end{xy} } \end{equation} We can apply the interchange law to this tree only in one of the following ways: within $v$, within $w$, within $m_2$, or (if both $w$ and $m_2$ have ${\,\scalebox{.67}{$\blacktriangle$}\,}$ at the root) using the root of \eqref{vwtree1} with $w$ and $m_2$. In the last case, we first rewrite \eqref{vwtree1} as follows: \begin{equation} \label{vwtree2} \adjustbox{valign=m}{ \begin{xy} ( 12, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 20, 8 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "r"; ( 0, 8 )*+{\boxed{v}} = "t1"; ( 8, 8 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w}} = "t3"; ( 24, 0 )*+{\boxed{m_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \end{equation} After applying the interchange law, we obtain a tree of the following form: \begin{equation} \label{vwtree3} \adjustbox{valign=m}{ \begin{xy} ( 12, 16 )*+{{\,\scalebox{.67}{$\vartriangle$}\,}} = "root"; ( 20, 8 )*+{{\,\scalebox{.67}{$\blacktriangle$}\,}} = "r"; ( 0, 8 )*+{\boxed{v}} = "t1"; ( 8, 8 )*+{x_j} = "t2"; ( 16, 0 )*+{\boxed{w'}} = "t3"; ( 24, 0 )*+{\boxed{m'_2}} = "t4"; { \ar@{-} "root"; "t1" }; { \ar@{-} "root"; "t2" }; { \ar@{-} "root"; "r" }; { \ar@{-} "r"; "t3" }; { \ar@{-} "r"; "t4" }; \end{xy} } \end{equation} Thus, no matter how we apply the interchange law to \eqref{vwtree1}, the fact that $x_j$ is a child of the root ${\,\scalebox{.67}{$\vartriangle$}\,}$ remains unchanged. Hence any commutativity relation for $m$ must be a consequence of a commutativity relation of lower arity (either as factor or as quotient), contradicting our original assumption. Therefore both $P_1$ and $P_2$ have the horizontal main cut, and hence $P$ has both main cuts. \end{proof} \subsection{Border blocks and interior blocks} \begin{definition} Let $m$ be a (tree) monomial in the operad $\mathbf{Free}$ which admits a commutativity relation transposing the indeterminates $x_i$ and $x_j$ ($i \ne j$). If $B = \Gamma(m)$ is the labelled block partition of $I^2$ corresponding to $m$ then $B$ is called a \emph{commutative block partition}. 
The two empty blocks corresponding to $x_i$ and $x_j$ are called \emph{commuting empty blocks}. \end{definition} \begin{definition} Let $B$ be a block partition of $I^2$ consisting of the empty blocks $R_1$, $\dots$, $R_k$. If the closure of $R_i$ has empty intersection with the four sides of the closure $\overline{I^2}$ then $R_i$ is an \emph{interior block}, otherwise $R_i$ is a \emph{border block}. \end{definition} \begin{lemma} \label{borderlemma} Suppose that $B_1 = \Gamma(m_1)$ and $B_2 = \Gamma(m_2)$ are two labelled dyadic block partitions of $I^2$ such that $m_1 \equiv m_2$ in every double interchange semigroup; hence this equivalence must be the result of applying associativity and the interchange law. (This is more general than a commutativity relation for a dyadic block partition.) Then any interior (respectively border) block of $B_1$ remains an interior (respectively border) block in $B_2$. \end{lemma} \begin{proof} It is clear from the geometric realizations that neither associativity nor the interchange law can change an interior block to a border block or conversely. \end{proof} \begin{lemma} \label{interiorlemma} Let $B = \Gamma(m)$ be a commutative block partition. Then the two commuting empty blocks must be interior blocks. \end{lemma} \begin{proof} Let $R_1$, \dots, $R_\ell$ be the empty blocks from left to right along the north side of $I^2$. It is clear from the geometric realization that neither associativity nor the interchange law can change the order of $R_1$, \dots, $R_\ell$. The same applies to the other three sides. Since neither rewriting can move a block from one side of the square to another, or from the border to the interior (Lemma \ref{borderlemma}), a commutativity relation can never transpose a border block with any other empty block; hence the two commuting empty blocks must both be interior. \end{proof} \section{Commutative block partitions in arity 10} \begin{lemma} \label{arity10slices} Let $B = \Gamma(m)$ be a commutative block partition of arity 10. Then $B$ has at least two and at most four parallel slices in either direction (horizontal or vertical). \end{lemma} \begin{proof} Proposition \ref{twomaincuts} shows that $B$ contains both main cuts; since $B$ contains 10 empty blocks, $B$ has at most five parallel slices (four primary cuts) in either direction. But if there are four primary cuts in one direction and the main cut in the other direction, then we have 10 empty blocks, each of which is a border block, contradicting Lemma \ref{interiorlemma}. \end{proof} \begin{lemma} \label{arity10blocks} Let $B = \Gamma(m)$ be a commutative block partition of any arity. Then $B$ has both main cuts by Proposition \ref{twomaincuts}, and hence $B$ consists of the union of four square quarters $A_1$, \dots, $A_4$ (in the NW, NE, SW, SE corners respectively). If one of these quarters has an empty block which is interior to $B$, then that quarter contains at least three empty blocks. If one of these quarters has two empty blocks which are both interior to $B$, then that quarter contains at least four empty blocks. Hence $B$ contains at least seven empty blocks. \end{lemma} \begin{proof} If one of the four subrectangles has only two empty blocks then these two blocks were created by a main cut, and hence both of them are border blocks in $B$. Similarly, if one of the rectangles has only three empty blocks, then either these three blocks are three parallel slices (in which case all three are border blocks in $B$) or these three blocks were created by a main cut in one direction followed by the main cut in the other direction in one of the blocks formed by the first main cut (in which case only one of the three blocks is an interior block in $B$).
Lemma \ref{interiorlemma} shows that $B$ has at least two interior blocks, and these can occur either in two different subrectangles or in the same subrectangle. For different subrectangles, $B$ contains at least $3+3+1+1$ empty blocks, and for the same subrectangle, $B$ contains at least $4+1+1+1$ empty blocks. \end{proof} \begin{proposition} \label{atleast8proposition} A commutative block partition $B$ has at least eight empty blocks. \end{proposition} \begin{proof} Proposition \ref{twomaincuts} shows that $B$ must have both horizontal and vertical main cuts. Lemma \ref{borderlemma} shows that an interior block cannot commute with a border block, and Lemma \ref{interiorlemma} shows that the two commuting blocks must in fact both be interior, so $B$ must have at least two interior empty blocks. The proof of Lemma \ref{arity10blocks} shows that the number of empty blocks in $B$ is at least seven, with the minimum occurring if and only if the two interior blocks lie in the same quarter $A_i$. Reflecting in the horizontal and/or vertical axes if necessary, we may assume that the NW quarter $A_1$ contains two empty blocks which are interior to $B$ and contains only the horizontal main cut (otherwise we reflect in the NW-SE diagonal). Figure \ref{atleast8} shows the three partitions with seven empty blocks satisfying these conditions. \begin{figure}[ht] \begin{center} \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0.75,1.9) -- (1.5,1.9) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.75) node {$b$} (1.1,2.1) node {$e$} (0.375,1.8) node {$c$} (1.1,1.7) node {$d$} (0.75,2.625) node {$f$} (2.25,2.25) node {$g$}; \end{tikzpicture} \qquad \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (0.35,1.5) -- (0.35,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.7500,0.750) node {$a$} (2.2500,0.750) node {$b$} (0.1875,1.875) node {$c$} (0.5625,1.925) node {$d$} (1.1250,1.875) node {$e$} (0.7500,2.625) node {$f$} (2.2500,2.250) node {$g$}; \end{tikzpicture} \qquad \begin{tikzpicture}[ draw = black, x = 10 mm, y = 10 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.1,1.5) -- (1.1,2.25) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.7500,0.750) node {$a$} (2.2500,0.750) node {$b$} (0.3750,1.875) node {$c$} (0.9375,1.925) node {$d$} (1.3125,1.875) node {$e$} (0.7500,2.625) node {$f$} (2.2500,2.25) node {$g$}; \end{tikzpicture} \end{center} \vspace{-4mm} \caption{Three block partitions for proof of Proposition \ref{atleast8proposition}} \label{atleast8} \end{figure} \noindent Consider the monomial corresponding to the first partition in Figure \ref{atleast8}. We may apply the interchange law only where two orthogonal cuts intersect at a point which is interior to both; that is, at a plus $+$ configuration. We may apply associativity only where we have $( - \ast - ) \ast -$ or $- \ast ( - \ast - )$.
At each step, there is only one possible rewriting that may be applied; we underline the three (associativity) or four (interchange law) factors involved: \begin{align*} ( \underline{a} {\,\scalebox{.67}{$\vartriangle$}\,} \underline{b} ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( \underline{( ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} f )} {\,\scalebox{.67}{$\vartriangle$}\,} \underline{g} ) &\equiv ( \underline{a} {\,\scalebox{.67}{$\blacktriangle$}\,} ( \underline{( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) )} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{f} ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} g ) \\ &\equiv ( \underline{( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) )} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{f} ) {\,\scalebox{.67}{$\vartriangle$}\,} ( \underline{b} {\,\scalebox{.67}{$\blacktriangle$}\,} \underline{g} ) \\ &\equiv ( ( a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} ( d {\,\scalebox{.67}{$\blacktriangle$}\,} e ) ) ) {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} g ). \end{align*} Similar calculations apply to the second and third block partitions. In this way, we have computed the entire equivalence class of the original monomial subject to rewriting using associativity and the interchange law. From this we see that no block partition with seven empty blocks admits a commutativity relation. \end{proof} The method used for the proof of Proposition \ref{atleast8proposition} can be extended to show that a dyadic block partition with eight empty blocks cannot be commutative. In this case, we have the following subcases: (i) one quarter $A_i$ has five empty blocks, and each of the other three quarters is a single empty block; (ii) one quarter $A_i$ has four empty blocks, another $A_j$ has two empty blocks, and each of the other two quarters is a single empty block (here we distinguish two subsubcases, depending on whether $A_i$ and $A_j$ share an edge or only a corner); (iii) two quarters $A_i$ and $A_j$ each have three empty blocks, and each of the other two quarters is a single empty block (with the same two subsubcases). This provides a completely different proof, independent of machine computation, of one of the main results in \cite{BM2016}; we omit the (rather lengthy) details. In what follows, we write $B$ for a commutative block partition with 10 empty blocks. Lemma \ref{interiorlemma} shows that the commuting blocks must be interior blocks, and Lemma \ref{arity10slices} shows that $B$ has either two, three, or four parallel slices in either direction. Thus, if $B$ has three (respectively four) parallel slices in one direction, then the commuting blocks must be in the middle slice (respectively the middle two slices). Without loss of generality, interchanging horizontal and vertical if necessary, we may assume that these parallel slices are vertical. \subsection{Four parallel vertical slices} In this case we have the vertical and horizontal main cuts, and two additional vertical primary cuts.
Applying horizontal associativity if necessary, this gives the following configuration: \[ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00); \end{tikzpicture} \] This configuration has eight empty blocks, all of which are border blocks. We need two more cuts to create two interior blocks. Applying vertical associativity if necessary in the second slice from the left, and applying a dihedral symmetry of the square if necessary, we are left with three possible configurations: \begin{equation} \label{configsABC} A\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (1.50,0.75) -- (2.25,0.75); \end{tikzpicture} } \quad\quad B\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \end{tikzpicture} } \quad\quad C\colon \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0.000,0.00) -- (0.000,3.00) (0.750,0.00) -- (0.750,3.00) (1.500,0.00) -- (1.500,3.00) (2.250,0.00) -- (2.250,3.00) (3.000,0.00) -- (3.000,3.00) (0.000,0.00) -- (3.000,0.00) (0.000,1.50) -- (3.000,1.50) (0.000,3.00) -- (3.000,3.00) (0.750,2.25) -- (1.500,2.25) (1.125,1.50) -- (1.125,2.25); \end{tikzpicture} } \end{equation} \subsubsection{Configuration $A$} We present simultaneously the algebraic and geometric steps in the proof of a new commutativity relation. We label the empty blocks in the initial configuration as follows: \[ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,0.75) -- (2.25,0.75) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.25,0) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.15) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \end{tikzpicture} \] We show that this partition admits a commutativity relation transposing $d$ and $g$. We refer the reader to Figure \ref{bigpicture} and \S\ref{diagramchasing} as an aid to understanding the proof. In the following list of monomials, we indicate the four factors $w, x, y, z$ taking part in each application of the interchange law. We omit parentheses in products using the same operation two or more times; the factors $w, x, y, z$ make clear how we reassociate such products between two consecutive applications of the interchange law. 
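As an aside, the rewriting step itself is straightforward to mechanize. The following Python sketch (an illustration only, not part of the proof, using our own ad hoc encoding) represents a tree monomial as a nested tuple \texttt{(left, op, right)}, where the hypothetical labels \texttt{'H'} and \texttt{'V'} stand for the horizontal operation ${\,\scalebox{.67}{$\vartriangle$}\,}$ and the vertical operation ${\,\scalebox{.67}{$\blacktriangle$}\,}$, and applies the interchange law at the root whenever it matches:
\begin{verbatim}
# Illustrative sketch: tree monomials as nested tuples (left, op, right).
# 'H' = horizontal operation, 'V' = vertical operation.

def interchange_at_root(m):
    """Apply (a *2 b) *1 (c *2 d) -> (a *1 c) *2 (b *1 d) at the root;
    return None if the left-hand pattern does not match."""
    left, op1, right = m
    if not (isinstance(left, tuple) and isinstance(right, tuple)):
        return None
    a, op2, b = left
    c, op3, d = right
    if op2 == op3 and op2 != op1:   # both children carry the other operation
        return ((a, op1, c), op2, (b, op1, d))
    return None

m = (('a', 'V', 'b'), 'H', ('c', 'V', 'd'))
print(interchange_at_root(m))
# (('a', 'H', 'c'), 'V', ('b', 'H', 'd'))
\end{verbatim}
Since both sides of the interchange law correspond to the same block partition, such a rewriting never changes the geometric realization; combined with reassociations, repeated rewriting of this kind (at the root and inside subterms) is exactly what generates the equivalence classes discussed in this paper.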
The diagrams which appear after the list of monomials represent the same steps in geometric form; in each application of the interchange law as the rewrite rule $( a \star_2 b ) \star_1 ( c \star_2 d ) \mapsto ( a \star_1 c ) \star_2 ( b \star_1 d )$ where $\{ \star_1, \star_2 \} = \{ {\,\scalebox{.67}{$\vartriangle$}\,}, {\,\scalebox{.67}{$\blacktriangle$}\,} \}$, we indicate the root operation $\star_1$ by a thick line and the child operations $\star_2$ by dotted lines: \begin{align} &\text{factors}\; w, x, y, z & & \text{result of application of interchange law} \notag \\ \midrule &\text{initial configuration} & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \notag \\[-2pt] &f {\,\scalebox{.67}{$\blacktriangle$}\,} g, h , i, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\blacktriangle$}\,} g {\,\scalebox{.67}{$\blacktriangle$}\,} i ){\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I1} \tag{I1} \\[-2pt] &f, g{\,\scalebox{.67}{$\blacktriangle$}\,} i, h , j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I2} \tag{I2} \\[-2pt] &a, b, c, d {\,\scalebox{.67}{$\blacktriangle$}\,} e & &((a {\,\scalebox{.67}{$\blacktriangle$}\,} c){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I3} \tag{I3} \\[-2pt] &a, c, b {\,\scalebox{.67}{$\blacktriangle$}\,} d, e & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I4} \tag{I4} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d), c {\,\scalebox{.67}{$\vartriangle$}\,} e, f {\,\scalebox{.67}{$\vartriangle$}\,} h, (g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\vartriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} h )){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j) \label{I5} \tag{I5} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b 
{\,\scalebox{.67}{$\blacktriangle$}\,} d), f {\,\scalebox{.67}{$\vartriangle$}\,} h , c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,}(g {\,\scalebox{.67}{$\blacktriangle$}\,} i), j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e{\,\scalebox{.67}{$\vartriangle$}\,}(g {\,\scalebox{.67}{$\blacktriangle$}\,} i))){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \label{I6} \tag{I6} \\[-2pt] &a, b {\,\scalebox{.67}{$\blacktriangle$}\,} d, c {\,\scalebox{.67}{$\vartriangle$}\,} e, g {\,\scalebox{.67}{$\blacktriangle$}\,} i & &((a {\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g {\,\scalebox{.67}{$\blacktriangle$}\,} i)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \label{I7} \tag{I7} \\[-2pt] &a, c {\,\scalebox{.67}{$\vartriangle$}\,} e, b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g, i & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i)){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I8} \tag{I8} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g), c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i, f {\,\scalebox{.67}{$\vartriangle$}\,} h , j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} h ) ){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} i {\,\scalebox{.67}{$\vartriangle$}\,} j) \label{I9} \tag{I9} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g), f {\,\scalebox{.67}{$\vartriangle$}\,} h, c {\,\scalebox{.67}{$\vartriangle$}\,} e, i{\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g)){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} e) ){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I10} \tag{I10} \\[-2pt] &a, b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g, c, e & &((a {\,\scalebox{.67}{$\blacktriangle$}\,} c){\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} g{\,\scalebox{.67}{$\blacktriangle$}\,} e ){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I11} \tag{I11} \\[-2pt] &a, c, b {\,\scalebox{.67}{$\blacktriangle$}\,} 
d, g{\,\scalebox{.67}{$\blacktriangle$}\,} e & &((a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d )){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e) ){\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I12} \tag{I12} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d ), c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e), f {\,\scalebox{.67}{$\vartriangle$}\,} h, i {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\vartriangle$}\,}(b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} h ) ){\,\scalebox{.67}{$\blacktriangle$}\,} (( c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e) {\,\scalebox{.67}{$\vartriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I13} \tag{I13} \\[-2pt] &a, (b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h, c {\,\scalebox{.67}{$\vartriangle$}\,} (g{\,\scalebox{.67}{$\blacktriangle$}\,} e), i {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} (((b {\,\scalebox{.67}{$\blacktriangle$}\,} d ){\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I14} \tag{I14} \\[-2pt] &b {\,\scalebox{.67}{$\blacktriangle$}\,} d, f {\,\scalebox{.67}{$\vartriangle$}\,} h, i , j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I15} \tag{I15} \\[-2pt] & b, d {\,\scalebox{.67}{$\blacktriangle$}\,} i, f {\,\scalebox{.67}{$\vartriangle$}\,} h, j & &(a {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)){\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I16} \tag{I16} \\[-2pt] &a, c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e), b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h, (d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &(a {\,\scalebox{.67}{$\vartriangle$}\,} b {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} h){\,\scalebox{.67}{$\blacktriangle$}\,} ((c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I17} \tag{I17} \\[-2pt] &a {\,\scalebox{.67}{$\vartriangle$}\,} b, f {\,\scalebox{.67}{$\vartriangle$}\,} h, c 
{\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e),(d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I18} \tag{I18} \\[-2pt] &f, h, d {\,\scalebox{.67}{$\blacktriangle$}\,} i, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\blacktriangle$}\,} d {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \label{I19} \tag{I19} \\[-2pt] &f {\,\scalebox{.67}{$\blacktriangle$}\,} d, i,h, j & &((a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,}(c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} d) {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \label{I20} \tag{I20} \end{align} The same sequence of rewritings has the following geometric representation: \begin{align*} & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,0.75) -- (2.25,0.75) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.15) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I1}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[ very thick] (2.25,0) -- (2.25,3); \draw[dotted] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I2}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (2.25,0) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0,0) -- (0,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$d$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0.75,0) -- (0.75,3); \draw[very thick] (0,1.5) -- (1.5,1.5); \end{tikzpicture}_{\eqref{I3}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- 
(3,3) (0.75,0.75) -- (1.5,0.75) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (1.5,1.5) -- (3,1.5) (2.25,0) -- (2.25,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.125) node {$d$} (1.125,2.25) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (1.5,1.5) ; \draw[ very thick] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I4}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (2.25,0) -- (2.25,3) (0.75,0) -- (0.75,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.125) node {$d$} (1.125,2.25) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (3,1.5) ; \draw[ very thick] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I5}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0.75,0) -- (0.75,1.5) (0,0) -- (0,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.75,1.5) -- (0.75,3) (0.375,1.5) -- (0.375,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.875) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,1.125) node {$d$} (0.6,2.25) node {$e$} (1.125,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (0,1.5) -- (3,1.5); \draw[dotted] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I6}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (1.5,0) -- (1.5,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.875) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,1.125) node {$d$} (0.6,2.25) node {$e$} (1.125,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[ very thick] (0,1.5) -- (1.5,1.5); \draw[ dotted] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I7}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (1.5,1.5) -- (3,1.5) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,0.9) node {$d$} (0.6,2.25) node {$e$} (1.125,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (1.5,1.5); \draw[ very thick] (0.75,0) -- (0.75,3); \end{tikzpicture}_{\eqref{I8}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.375,1.5) -- (0.375,3) (0.75,1.5) -- (0.75,3) (0.75,0) -- (0.75,1.5) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.185,2.25) node {$c$} (1.25,0.9) node {$d$} (0.6,2.25) node {$e$} (1.125,2.25) node 
{$i$} (2.625,2.25) node {$j$}; \draw[very thick] (1.5,0) -- (1.5,3); \draw[ dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I9}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.25,0.9) node {$d$} (1.25,2.25) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I10}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,1.15) -- (1.5,1.15) (0,0) -- (0,3) (1.5,0) -- (1.5,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.25,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.3) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.25,0.9) node {$d$} (1.25,2.25) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0.75,0) -- (0.75,3); \draw[very thick] (0,1.5) -- (1.5,1.5); \end{tikzpicture}_{\eqref{I11}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (0.75,0) -- (0.75,3); \draw[dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I12}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,0.75) -- (1.5,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.375) node {$b$} (1.85,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[very thick] (1.5,0) -- (1.5,3); \draw[dotted] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I13}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,0.75) -- (1.85,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (2.25,0) -- (2.25,1.5) (3,0) -- (3,3) (1.85,0) -- (1.85,1.5) (2.25,1.5) -- (2.25,3) (1.5,1.5) -- (3,1.5) (0.75,0.75) node {$a$} (1.65,0.6) node {$b$} (2.15,0.75) node {$f$} (1.125,1.85) node {$g$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.65,1.25) node {$d$} (1.125,2.555) node {$e$} (1.85,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[ very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I14}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0,0) -- (0,3) (3,0) -- (3,3) (1.5,0) -- (1.5,3) (1.5,0.75) -- (2.25,0.75) (0,1.5) -- (3,1.5) (0.75,2.25) -- (1.5,2.25) (0.75,1.5) -- (0.75,3) (2.625,0) -- (2.625,1.5) (0.75,0.75) node {$a$} (1.875,0.375) node {$b$} 
(0.375,2.25) node {$c$} (1.875,1.125) node {$d$} (1.125,2.555) node {$e$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I15}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,2.25) -- (2.25,2.25) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.625,0) -- (2.625,1.5) (0,1.5) -- (1.5,1.5) (0.75,0.75) node {$a$} (1.875,0.75) node {$b$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (0.375,2.25) node {$c$} (1.875,1.85) node {$d$} (1.125,2.555) node {$e$} (1.85,2.555) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,1.5) -- (3,1.5); \draw[ very thick] (2.25,0) -- (2.25,3); \end{tikzpicture}_{\eqref{I16}} $} \\ & \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (1.5,2.25) -- (2.25,2.25) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (2.625,0) -- (2.625,1.5) (2.25,0) -- (2.25,3) (0.75,0.75) node {$a$} (1.875,0.75) node {$b$} (2.4375,0.75) node {$f$} (1.125,1.85) node {$g$} (2.8125,0.75) node {$h$} (0.375,2.25) node {$c$} (1.875,1.85) node {$d$} (1.125,2.555) node {$e$} (1.85,2.555) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (0,1.5) -- (3,1.5); \draw[ very thick] (1.5,0) -- (1.5,3); \end{tikzpicture}_{\eqref{I17}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (2.25,0) -- (2.25,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$g$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,0) -- (1.5,3); \draw[ very thick] (0,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I18}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,3) -- (3,3) (0.75,2.25) -- (1.5,2.25) (1.5,2.25) -- (2.25,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0,1.5) -- (1.5,1.5) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.75) node {$f$} (1.875,1.875) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$g$} (1.125,2.625) node {$e$} (1.875,2.625) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (2.25,0) -- (2.25,3); \draw[ very thick] (1.5,1.5) -- (3,1.5); \end{tikzpicture}_{\eqref{I19}} $} \; \adjustbox{valign=m}{$ \begin{tikzpicture}[ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (1.5,1.5) (0,3) -- (3,3) (2.25,0) -- (2.25,3) (1.5,0.75) -- (2.25,0.75) (0.75,2.25) -- (1.5,2.25) (0,0) -- (0,3) (0.75,0) -- (0.75,3) (1.5,0) -- (1.5,3) (3,0) -- (3,3) (0.375,0.75) node {$a$} (1.125,0.75) node {$b$} (1.875,0.375) node {$f$} (1.875,1.1) node {$d$} (2.625,0.75) node {$h$} (0.375,2.25) node {$c$} (1.125,1.875) node {$g$} (1.125,2.625) node {$e$} (1.875,2.25) node {$i$} (2.625,2.25) node {$j$}; \draw[dotted] (1.5,1.5) -- (3,1.5); \draw[ very thick] (2.25,0) -- (2.25,3); \end{tikzpicture}_{\eqref{I20}} $} \end{align*} \begin{theorem} In every double interchange semigroup, the following 
commutativity relation holds for all values of the arguments $a, \dots, j$: \begin{align*} & ((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (d {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} g){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \equiv {} \\ & ((a {\,\scalebox{.67}{$\vartriangle$}\,} b){\,\scalebox{.67}{$\blacktriangle$}\,} (c {\,\scalebox{.67}{$\vartriangle$}\,} (g {\,\scalebox{.67}{$\blacktriangle$}\,} e))){\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\blacktriangle$}\,} d){\,\scalebox{.67}{$\vartriangle$}\,} h ){\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \end{align*} \end{theorem} \subsubsection{Configuration $B$} For configuration $B$ in display \eqref{configsABC}, we label only the two blocks which transpose in the commutativity relation. The required applications of associativity and the interchange law can easily be reconstructed from the diagrams: \begin{align*} & \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \draw (1.125,1.875) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,1.50) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,2.25) -- (2.25,2.25) (0.75,0.75) -- (1.50,0.75) (2.625,1.50) --(2.625,3.00); \draw (1.875,1.875) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,1.50) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,0.750) -- (2.25,0.750) (0.75,0.75) -- (1.50,0.75) (2.625,1.50) --(2.625,3.00); \draw (1.875,1.1) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (1.50,0.750) -- (2.25,0.750) (0.75,0.75) -- (1.50,0.75); \draw (1.875,1.1) node {$c$} (1.125,1.1) node {$g$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (1.50,0.75) -- (2.25,0.75); \draw (1.875,1.1) node {$c$} (1.125,1.875) node {$g$}; \end{tikzpicture} } \\[1mm] & \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- 
(0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,1.50) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75) (0.375,0.00) --(0.375,1.50); \draw (1.125,1.875) node {$g$} (1.125,1.1) node {$c$}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,1.50) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,1.875) -- (1.50,1.875) (0.375,0.00) --(0.375,1.50); \draw (1.125,2.05) node {\scalebox{.67}{$g$}} (1.125,1.65) node {\scalebox{.67}{$c$}}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,1.875) -- (1.50,1.875); \draw (1.125,2.05) node {\scalebox{.67}{$g$}} (1.125,1.65) node {\scalebox{.67}{$c$}}; \end{tikzpicture} } \quad \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 6 mm, y = 6 mm ] \draw (0.00,0.00) -- (0.00,3.00) (0.75,0.00) -- (0.75,3.00) (1.50,0.00) -- (1.50,3.00) (2.25,0.00) -- (2.25,3.00) (3.00,0.00) -- (3.00,3.00) (0.00,0.00) -- (3.00,0.00) (0.00,1.50) -- (3.00,1.50) (0.00,3.00) -- (3.00,3.00) (0.75,2.25) -- (1.50,2.25) (0.75,0.75) -- (1.50,0.75); \draw (1.125,1.875) node {$g$} (1.125,1.1) node {$c$}; \end{tikzpicture} } \end{align*} \begin{theorem} In every double interchange semigroup, the following commutativity relation holds for all values of the arguments $a, \dots, j$: \[ \begin{array}{l} ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} ( g {\,\scalebox{.67}{$\blacktriangle$}\,} h ) ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j ) ) \equiv {} \\[1mm] ( ( a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} g ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( f {\,\scalebox{.67}{$\vartriangle$}\,} ( c {\,\scalebox{.67}{$\blacktriangle$}\,} h ) ) ) {\,\scalebox{.67}{$\vartriangle$}\,} ( ( d {\,\scalebox{.67}{$\vartriangle$}\,} e ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j ) ) \end{array} \] \end{theorem} \subsubsection{Configuration $C$} For configuration $C$ in display \eqref{configsABC}, recall that applying the interchange law does not change the partition (only the monomial representing the partition), and applying associativity can be done only horizontally to the entire configuration or vertically to the second slice from the left. None of these operations transposes the two smallest empty blocks, so we obtain no commutativity relation. \subsection{Three parallel horizontal slices} In this subsection we consider horizontal rather than vertical slices, since this makes it a little easier to follow the discussion. We do not claim to have discovered all possible commutativity relations with three parallel slices, since the number of cases is very large. 
However, we determine 32 commutativity relations, 16 of which are new, and 16 of which follow immediately from one of the known arity nine relations \cite{BM2016}. Moreover, the 16 new relations may all be obtained from a single relation by applying associativity and the automorphism group of the square (the dihedral group of order 8). Up to these symmetries, we are left with the following two cases. \emph{Case 1}: The horizontal slices have 2, 6, 2 empty blocks, labelled as follows: \begin{center} \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (3,2.25) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (1.125,1.5) -- (1.125,2.25) (2.25,1.5) -- (2.25,2.25) (1.875,1.5) -- (1.875,2.25) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.75) node {$b$} (0.375,1.875) node {$c$} (0.9375,1.875) node {$d$} (1.3125,1.875) node {$e$} (1.6875,1.875) node {$f$} (2.0625,1.875) node {$g$} (2.625,1.875) node {$h$} (0.75,2.625) node {$i$} (2.25,2.625) node {$j$}; \end{tikzpicture} \end{center} The two commuting empty blocks could be any two of $d$, $e$, $f$, $g$. But in this configuration, it is easy to see that no sequence of applications of associativity and the interchange law will change the order of these four blocks. \emph{Case 2}: The horizontal slices have 2, 5, 3 empty blocks. There are two subcases, depending on whether the third horizontal slice has two vertical cuts, or one vertical cut and one horizontal cut. In the latter subcase, we label the blocks as follows: \begin{equation} \label{oneofeach} \adjustbox{valign=m}{ \begin{tikzpicture} [ draw = black, x = 8 mm, y = 8 mm ] \draw (0,0) -- (3,0) (0,1.5) -- (3,1.5) (0,3) -- (3,3) (0,2.25) -- (3,2.25) (1.5,0.75) -- (3,0.75) (0,0) -- (0,3) (0.75,1.5) -- (0.75,2.25) (1.5,0) -- (1.5,3) (2.25,1.5) -- (2.25,2.25) (1.92,1.5) -- (1.92,2.25) (3,0) -- (3,3) (0.75,0.75) node {$a$} (2.25,0.375) node {$b$} (2.25,1.125) node {$c$} (0.375,1.875) node {$d$} (1.6875,1.875) node {$f$} (2.0625,1.875) node {$g$} (2.625,1.875) node {$h$} (1.125,1.875) node {$e$} (0.75,2.625) node {$i$} (2.25,2.625) node {$j$}; \end{tikzpicture} } \end{equation} \begin{theorem} In every double interchange semigroup, the following commutativity relation holds for all values of the arguments $a, \dots, j$: \[ \begin{array}{l} (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (((f {\,\scalebox{.67}{$\vartriangle$}\,} g) {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \equiv {} \\[1mm] (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (((g {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \end{array} \] \end{theorem} \begin{proof} We list applications of interchange; the other details are self-explanatory: \begin{align*} & (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} g
{\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)){\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\blacktriangle$}\,} i) ) {\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f)) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c) ) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} c) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e)) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) ) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e)) {\,\scalebox{.67}{$\blacktriangle$}\,} i ) {\,\scalebox{.67}{$\vartriangle$}\,} ( (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \\ 
&\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g )) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h)) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g ) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} (f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h {\,\scalebox{.67}{$\blacktriangle$}\,} j) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,} ((d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g ) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} b) {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f {\,\scalebox{.67}{$\vartriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv ((a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f) {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} ( i {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\blacktriangle$}\,} (d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f ) {\,\scalebox{.67}{$\blacktriangle$}\,} i ) {\,\scalebox{.67}{$\vartriangle$}\,} ((b {\,\scalebox{.67}{$\blacktriangle$}\,} c) {\,\scalebox{.67}{$\blacktriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} ((( d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f){\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} (h {\,\scalebox{.67}{$\blacktriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (( d {\,\scalebox{.67}{$\vartriangle$}\,} e {\,\scalebox{.67}{$\vartriangle$}\,} g {\,\scalebox{.67}{$\vartriangle$}\,} f{\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} (i {\,\scalebox{.67}{$\vartriangle$}\,} j)) \\ &\equiv (a {\,\scalebox{.67}{$\vartriangle$}\,} (b {\,\scalebox{.67}{$\blacktriangle$}\,} c)) {\,\scalebox{.67}{$\blacktriangle$}\,} (( d {\,\scalebox{.67}{$\vartriangle$}\,} e) {\,\scalebox{.67}{$\blacktriangle$}\,} i) {\,\scalebox{.67}{$\vartriangle$}\,} ((g {\,\scalebox{.67}{$\vartriangle$}\,} f{\,\scalebox{.67}{$\vartriangle$}\,} h) {\,\scalebox{.67}{$\blacktriangle$}\,} j) \end{align*} The proof is complete. 
\end{proof} In the subcase with two vertical cuts in the third slice, the corresponding diagram is the same as \eqref{oneofeach} except that the lower right block containing $b$ and $c$ is rotated 90 degrees clockwise. We obtain a commutativity relation for this block partition, but this relation is easily seen to be a consequence of identity 3992 from \cite{BM2016}. \section{Concluding remarks} In this final section we briefly mention possible directions for future research. \subsubsection*{Mixed structures} We have studied two binary operations, both associative or both nonassociative, related by the interchange law. More generally, for $p, q \ge 0$ let $I = \{ 1, \dots, p{+}q \}$; choose subsets $J \subseteq I$ and $K \subseteq \{ \, \{k,\ell\} \mid k, \ell \in I, \, k \ne \ell \, \}$. Let $S$ be a set with $p{+}q$ binary operations, $p$ associative $\star_1$, \dots, $\star_p$ and $q$ nonassociative $\star_{p+1}$, \dots, $\star_{p+q}$, satisfying the interchange law between $\star_j$ and itself for $j \in J$, and between $\star_k$ and $\star_\ell$ for $\{k,\ell\} \in K$. The operads we have studied in this paper correspond to $(p,q) = (0,2)$ or $(2,0)$ with $J = \emptyset$ and $K = \{\{1,2\}\}$. \subsubsection*{Higher arity interchange laws} We have studied only binary operations. More generally, let $S$ be a nonempty set, $M_p(S)$ the set of all $p$-ary operations $f\colon S^p \to S$, and $X = ( x_{ij} )$ a $p \times q$ array with entries in $S$. If $f \in M_p(S)$ and $g \in M_q(S)$ then we may apply $f$, $g$ to $X$ either by applying $g$ to each row vector, obtaining a $p \times 1$ column vector, and then applying $f$; or the reverse. If the results are equal then $f$, $g$ satisfy the $p \times q$ interchange law (we also say that $f$, $g$ commute): \[ \begin{array}{l} f( g( x_{11}, \dots, x_{1q} ), \dots, g( x_{p1}, \dots, x_{pq} ) ) \equiv \\ g( f( x_{11}, \dots, x_{p1} ), \dots, f( x_{1q}, \dots, x_{pq} ) ). \end{array} \] For a concrete illustration, take $p = q = 2$ and let both $f$ and $g$ be integer addition: both sides evaluate to $x_{11} + x_{12} + x_{21} + x_{22}$, so addition commutes with itself. In contrast, $f = \max$ and $g = {+}$ do not commute: for $X = \left( \begin{smallmatrix} 1 & 0 \\ 0 & 1 \end{smallmatrix} \right)$, applying $g$ to the rows and then $f$ gives $\max(1{+}0,\, 0{+}1) = 1$, while applying $f$ to the columns and then $g$ gives $\max(1,0) + \max(0,1) = 2$. Since $f$ acts on columns and $g$ on rows, we may write $f( X g ) \equiv ( f X ) g$, showing that interchange may be regarded as a form of associativity. \subsubsection*{Higher dimensions} We have studied structures with two operations, corresponding to the horizontal and vertical directions in two dimensions. Most of our constructions make sense for any number of dimensions $d \ge 2$. One obstacle for $d \ge 3$ is that the monomial basis for $\mathbf{Assoc}$ consisting of nonbinary trees with alternating white and black internal nodes ($\mathbf{AssocNB}$) does not generalize in a straightforward way. \subsubsection*{Associativity for two operations} With more than one operation, there are various forms of associativity; we have considered only the simplest: each operation is individually associative. The operations may also associate with each other in various ways: black-white associativity, $( a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} c \equiv a {\,\scalebox{.67}{$\vartriangle$}\,} ( b {\,\scalebox{.67}{$\blacktriangle$}\,} c )$; total associativity (black-white and white-black); compatibility (every linear combination of the operations is associative); diassociativity (black-white and the two bar identities).
\subsubsection*{Variations on the interchange law} In universal algebra, the interchange law is called the \emph{medial} identity; it has a close relative, the \emph{paramedial} identity, in which the outer arguments transpose: $( a {\,\scalebox{.67}{$\vartriangle$}\,} b ) {\,\scalebox{.67}{$\blacktriangle$}\,} ( c {\,\scalebox{.67}{$\vartriangle$}\,} d ) = ( d {\,\scalebox{.67}{$\blacktriangle$}\,} b ) {\,\scalebox{.67}{$\vartriangle$}\,} ( c {\,\scalebox{.67}{$\blacktriangle$}\,} a )$. In general, one considers $d$ operations of arity $n$, and relations $m_1 \equiv m_2$ where $\{ m_1, m_2 \}$ is an unordered pair of monomials, each containing $w$ occurrences of the operations and hence of arity $N = 1+w(n{-}1)$, in which $m_1$ has the identity permutation of $N$ distinct variables and $m_2$ has some nonidentity permutation. Of greatest interest are those relations which have the greatest symmetry: that is, the corresponding unordered pair generates an orbit of minimal size under the action of the wreath product $S_d \ltimes (S_n)^d$ of the symmetric group $S_d$ permuting the operations with the group $(S_n)^d$ permuting the arguments of the operations. \subsubsection*{$N$-ary suboperads of binary operads} To conclude, we mention a different point of view on commutativity for double interchange semigroups. In general, let $\mathbf{O}$ be a symmetric operad generated by binary operations satisfying relations of arity $\ge 3$. An algebra over $\mathbf{O}$ is called an $\mathbf{O}$-\emph{algebra}; the most familiar cases are associative, alternative, pre-Lie, diassociative, dendriform, etc. We propose the following definition of an $N$-\emph{tuple} $\mathbf{O}$-\emph{system} for all $N \ge 3$: an algebra over the suboperad $\mathbf{O}^{(N)} \subset \mathbf{O}$ generated by the $S_N$-module $\mathbf{O}(N)$ of all $N$-ary operations in $\mathbf{O}$. In particular, consider the operad $\mathbf{DIA}$ generated by two associative operations satisfying the interchange law. Previous results \cite{BM2016} show that $\mathbf{DIA}(N)$ is a direct sum of copies of the regular $S_N$-module if and only if $N \le 8$. The generators of $\mathbf{DIA}$ have no symmetry, but the generators of $\mathbf{DIA}^{(N)}$ have symmetry for $N \ge 9$.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Cognitive deficit in older adults is one of the biggest global public health challenges in elderly care. Approximately 5.2 million people aged 65 and older suffered from some form of cognitive impairment in the United States in 2012 \cite{stat12}. Dementia is one of the major causes of cognitive impairment and is more acute among the population aged 85 and older (50\%) \cite{stat12}. However, the costs (financial and time) of health care and long-term care for individuals with Alzheimer's (a special form of dementia) or other dementias are substantial. For example, during 2016, about 15.9 million family members and friends in the United States provided 18.2 billion hours of unpaid assistance to those with cognitive impairments, a contribution to the nation valued at \$230.1 billion. On the other hand, total payments for all individuals with any form of cognitive impairment are estimated at \$259 billion. Total annual payments for health care, long-term care and hospice care for people with Alzheimer's or other dementias are projected to increase from \$259 billion in 2017 to more than \$1.1 trillion in 2050. Among these costs, a significant portion is related to clinical and diagnostic tests \cite{stat17}. Although clinical and diagnostic tests have become more precise in identifying dementia, studies have shown a high degree of under-recognition, especially in the early stages. Yet there are many advantages to obtaining an early and accurate diagnosis when cognitive symptoms are first noticed: identifying the root cause of impairment can slow its progression, and in some cases symptoms are reversible and curable. With the proliferation of emerging ubiquitous computing technologies, many mobile and wearable devices have become available to capture the continuous functional and physiological behavior of older adults. Wearable sensors are now capable of estimating the number of steps taken, physical activity levels, sleep patterns and physiological outcomes (heart rate, skin conductance) of older adults \cite{sano15}. Ambient sensors also help capture the movement patterns of objects and humans for activity and behavior recognition \cite{dawadi14,dawadi15}. Researchers have also demonstrated correlations between cognitive impairment and everyday task performance \cite{dawadi14, akl15,alam16} as well as physiological symptoms \cite{alam16,sano15}. Although current studies have shown some success in IoT-assisted cognitive health assessment in individual domains, several challenges remain in developing and validating a fully automated multi-modal assessment model. \begin{enumerate} \item \emph{Real-time IoT System}: A real-time IoT system must include continuous and fault-tolerant data streaming among the central hub, wearable sensors and ambient sensors regardless of the network communication protocol (WiFi, Ethernet, Bluetooth etc.), a capability not available in existing research. \item \emph{Multi-modal Context Fusion}: Though several offline clinically validated cognitive health assessment tools exist \cite{wai03, starling99, krapp07, yesavage82, zung71}, there is no universally accepted method for IoT-assisted automatic cognitive health assessment in a smart home environment that can fuse multi-modal sensor contexts altogether.
For example, some researchers have shown that ambient-sensor-based Activities of Daily Living (ADL) sequence patterns can signify the cognitive health status of older adults \cite{akl15, dawadi15}. Researchers have also shown that wearable Electrodermal Activity pattern analysis may indicate cognitive status \cite{sano15}. However, for validating IoT-based cognitive health assessment, self-reported surveys, clinical diagnoses and observation-based tools have been used only individually by prior researchers \cite{akl15, dawadi15, sano15, alam16}. \end{enumerate} To address the aforementioned challenges in automating cognitive health assessment, \emph{AutoCogniSys} targets (i) reproducibility of our model in any smart home system consisting of ambient motion sensors, wearable accelerometer (ACC) sensors, and wearable Electrodermal Activity (EDA) and Photoplethysmography (PPG) sensors, used individually or as combined streams; (ii) context awareness based on ambient motion sensors and wearable ACC sensors for any type of activity, including hand gestural, postural and complex ADLs; and (iii) high accuracy, i.e., a recall rate of over 90\% with less than a 5\% false positive rate. More specifically, \emph{AutoCogniSys} extends our existing work \cite{alam16} in three dimensions. \emph{(1) True Automation:} We first investigate the correlations of cognitive impairment with human activities and stress, where we manually label activities, extract the corresponding physiological sensor (EDA and PPG) features of each activity, and use statistical methods to find correlations. Then, we propose automatic complex activity recognition based on a Hierarchical Dynamic Bayesian Network (HDBN) model, fine-grained extraction of physiological sensor features and finally machine learning classification of cognitive impairment. \emph{(2) Noise Elimination:} We characterize different types of noise in ACC, EDA and PPG sensors, propose extensive signal processing techniques to remove them, and show that significant improvement can be achieved in cognitive impairment classification. \emph{(3) Implementation and Evaluation:} Finally, we design and implement the IoT system and analytic methods, and minimize human involvement to automate our proposed cognitive health assessment approach, considering effective smart home sensor customization and deployment; data collection, screening, cleaning and filtering; feature computation, normalization and classification; and activity model training. \textbf{Research Questions:} \emph{AutoCogniSys} consequently tackles the following key research questions. $\bullet$ Can we simultaneously detect the periodic rhythms of both hand gestures and postural activities from a wrist-worn ACC sensor signal for a diverse population (people performing the same activity in different ways, e.g., walking normally, with a walker or on a stretcher)? If so, how can we incorporate the hand gesture, posture and ambient sensor data streams to help improve the ADL recognition models? $\bullet$ How can we exploit micro-activity features in noise-free physiological sensor signal processing to automate the cognitive health assessment process? What are the critical roles of clinical survey and technology-guided assessment methodologies, and their inter-relationships, in automating the different intermediate steps of the cognitive health assessment process?
To tackle these, we make the following \textbf{key contributions}: $\bullet$ We employ an extensive signal deconvolution technique that, in conjunction with machine learning, facilitates wrist-worn ACC-based multi-label (hand gestural and postural) activity recognition for a diverse population. We then leverage the multi-label context sets together with ambient and object sensor signals for complex activity recognition based on an HDBN model. $\bullet$ We propose a novel collaborative filter for EDA signal processing by postulating the signal as a mixture of three components, a \emph{tonic phase}, a \emph{phasic phase} and \emph{motion artifacts}, and employ a convex optimization technique to filter out the motion artifacts. We also propose a novel PPG signal processing technique that filters out the inherent motion artifacts and noise using an improved Periodic Moving Average Filtering (PMAF) technique. $\bullet$ We design and prototype an IoT system consisting of multiple devices (wearable wrist band, IP camera, object and ambient sensors) connected with a central hub via WiFi, Ethernet and Bluetooth communication protocols. We collected data from 22 older adults living in a continuing care retirement community center in a very natural setting (IRB \#HP-00064387). $\bullet$ Finally, we employ statistical and machine learning techniques to jointly correlate the activity performance metrics and stress (EDA and PPG) features, which helps achieve up to 93\% accuracy in detecting cognitive impairment status. We evaluate \emph{AutoCogniSys} against 5 clinically validated offline assessment tools as ground truth. \section{Related Works} \emph{AutoCogniSys} builds on previous work on wearable-device-based low-level (postural and hand gestural) activity recognition and its integration with ambient sensors to recognize complex ADLs, the underlying signal processing, and applications to automating cognitive health assessment. \subsection{Wearable Sensor Signal Processing} Wearable sensors are of two types: physical and physiological. Physical sensor (accelerometer, gyroscope etc.) signal values change with the movement of the sensor device. Physiological sensor values change with the physiological condition of the body: EDA changes with stress, and PPG changes with heart rate. However, physical movements also impose noise on physiological sensor signals, known as \emph{motion artifacts}. \subsubsection{Physiological Signal Processing} Continuous and discrete decomposition of EDA, and time and frequency domain analytics of PPG signals, have been investigated before to extract relevant physiological features from signals contaminated with noise and motion artifacts \cite{alam16}. \cite{setz10} denoised and classified EDA from cognitive load and stress with accuracy higher than 80\%. Though motion artifact removal techniques such as exponential smoothing \cite{hern11} and low-pass filters \cite{poh10, hernandez14} provide significant improvement in filtering EDA signals, wavelet transforms offer more sophisticated refinement for any kind of physiological sensor, such as electroencephalogram \cite{krish06, zikov02}, electrocardiogram \cite{erc06,alfa08}, and PPG \cite{lee03}. \cite{chen15} proposed a stationary wavelet transform (SWT) based motion artifact removal technique. `cvxEDA' proposed a convex optimization technique considering EDA as a mixture of white Gaussian noise, tonic and phasic components, where the white Gaussian noise includes motion artifacts and external noise \cite{greco16}.
\emph{AutoCogniSys} intelligently combines SWT and `cvxEDA' to remove noise and motion artifacts from the EDA signal. On the other hand, it is more difficult to remove motion artifacts from the PPG signal due to its quasi-periodic nature \cite{wang13}. Researchers have proposed different methods, such as frequency analytics \cite{garde13,wang13}, statistical analytics \cite{peng14} and digital filters \cite{lee10}, to reduce noise and motion artifacts in PPG. \emph{AutoCogniSys} uses the Periodic Moving Average Filter (PMAF) in this regard \cite{lee07}. \subsubsection{Physical Sensor Signal Processing} ACC-based hand gesture recognition has been explored by several researchers in the past, using discrete hidden Markov models \cite{liu10}, artificial neural networks \cite{arce11}, and weighted naive Bayes with dynamic time warping \cite{mace13}. Akl et al. proposed an 18-gesture-dictionary-based Support Vector Machine (SVM) classifier \cite{akl11}. Wrist-worn ACC-based postural activity recognition approaches have been proposed using Decision Trees, Random Forests, Support Vector Machines, K-Nearest Neighbors, Naive Bayes and deep neural networks \cite{gj14, wang16}; the accuracy stagnates at 85\% using the SVM method \cite{martin16}. However, none of these past works proposed a technique that can recognize multiple body contexts from a single body-worn ACC sensor, or that works efficiently across diverse postures, say walking normally, with a walker, with a double walker or in a wheelchair. Our proposed 8-hand-gesture recognition technique, assisted by a sparse-deconvolution method, improves classification performance on both normal and diverse postures. Moreover, we incorporate hand gestures and postures in conjunction with ambient sensors into a single-inhabitant HDBN model \cite{alam16b}, which provides significant improvement in complex activity recognition. \subsection{Cognitive Health Assessment} Smart home environments have been used for automated health monitoring and assessment in the ageing population before \cite{dawadi14, gong15, akl15, dawadi15}. `SmartFABER' proposed a non-intrusive sensor network for continuous smart home environmental sensor data acquisition, together with a novel hybrid statistical and knowledge-based technique to analyze the data and estimate behavioral anomalies for early detection of mild cognitive impairment \cite{riboni16}. \cite{skubic15} presented an example of an unobtrusive, continuous monitoring system for assessing early health changes to alert caregivers about potential signs of health hazards. Though prior research used sequences of ambient motion sensor streams as complex activity components in activity-based health assessment \cite{dawadi14, gong15, akl15, dawadi15}, we additionally include a wearable wrist-band with a built-in ACC sensor to detect hand gestures and posture, augmenting the ambient sensor readings to help recognize complex activities and assess the cognitive health of older adults. Additionally, we propose the intelligent use of physiological features of the skin, obtained through physiological sensor signal (EDA, PPG) processing during daily activity tasks, and incorporate context-awareness to automate cognitive health assessment, which has not been explored before.
\begin{figure}[!htb] \begin{center} \epsfig{file=flowchart.pdf,height=1.6in, width=3.5in} \caption{Overall flow of \emph{AutoCogniSys} pipeline.} \label{fig:overview} \end{center} \end{figure} \section{Overall Architecture} We first investigated existing IoT-based cognitive health care frameworks covering the relevant aspects of wearable (physical, physiological) and ambient (passive infrared and object) sensor signal computing. \emph{AutoCogniSys} comprises three modules: (i)~sensing, (ii)~processing, and (iii)~analysis. The `sensing' module consists of clinical assessment tools (surveys, observation and clinical backgrounds) and sensing signals (ambient and wearable sensors). The `processing' module comprises three sub-modules: a)~clinical assessment feature extraction from assessment tools; b)~ambient sensor feature extraction; and c)~wearable sensor processing (noise removal, segmentation, feature extraction, classification etc.). The `analysis' module comprises machine learning and statistical analytics-based score prediction of cognitive impairment. Automating each module's functionality and the inter- and intra-modular transactions without human interference is what we call {\it true automation} of cognitive health assessment. Fig.~\ref{fig:overview} shows the overall flow of \emph{AutoCogniSys}, which is discussed in detail in the following sections. \subsection{Demographic Ground Truth Data Collection} Currently, in standard clinical practice and research, the most accurate evaluations for cognitive health assessment are one-to-one observation and supervision tasks/questionnaires for monitoring an individual's functional abilities and behavior \cite{resnick15}. In the first stage of this pilot study, we investigated the current literature and carefully chose clinically proven functional and behavioral health assessment survey tools \cite{resnick15}. To cross-check the survey-based evaluations, we also chose clinically validated observation-based behavioral assessment methods. First, following the resident's consent, our clinical research evaluator collects demographic and descriptive data (age, gender, race, ethnicity, marital status, education and medical comorbidities). The evaluator performs two types of clinical assessments: (1) \emph{Observation based}, where the resident's cognition is assessed using the Saint Louis University Mental Status (SLUMS) scale \cite{wai03}; and (2) \emph{Survey based}, where five widely used and clinically well validated surveys are taken into account: (a) \emph{Yale Physical Activity Survey} \cite{starling99}; (b) \emph{Lawton Instrumental Activities of Daily Living}; (c) \emph{Barthel Index of Activities of Daily Living} \cite{krapp07}; (d) \emph{Geriatric Depression Rating scale} \cite{yesavage82}; and (e) \emph{Zung Self-Rating Anxiety scale} \cite{zung71}. \subsection{Smart Environment Creation} For an ideal IoT-based system, instrumenting and deploying at each participant's natural living environment warrants assembling a flexible set of hardware and software interfaces to ease the system configuration, setup, and network discovery processes. The sensor system placed in the residences of volunteers needs to meet several specific physiological signal and activity monitoring needs. We must also ensure that the devices are reliable, with potential for re-deployment, and appear unintimidating to the participants.
Inspired by the above requirements, we developed a real testbed IoT system, {\it SenseBox}, by customizing the Cloud Engine PogoPlug Mobile base station firmware to integrate WiFi (connecting ambient and object sensors) and Bluetooth (connecting the wristband) protocols. The smart home components are as follows: (i) a PogoPlug base server with a continuous power supply, (ii) 3 binary Passive Infrared sensors in three different rooms (kitchen, living room and bedroom) to capture room-level occupancy, (iii) 7 binary object sensors attached to the closet door, entry door, telephone, broom, laundry basket, trash can and trash box, (iv) three IP cameras in appropriate positions to collect ground truth data and (v) an Empatica E4 \cite{empatica} wrist-band (integrated sensors: PPG at 64 Hz, EDA at 4 Hz, body temperature at 1 Hz and a triaxial ACC at 32 Hz) on the participant's dominant hand. \section{Activity Recognition} We aim to detect hand gestural and postural activities from a single wrist-worn ACC sensor and insert these into an HDBN graphical model, in conjunction with ambient and object sensor values, for complex activity recognition. We formulate the recognition problem as an activity tuple $\langle gesture,posture,ambient,object \rangle$. Though Alam et al. provide significant performance improvement for single wrist-worn ACC sensor aided, 18-hand-gesture based postural activity recognition in a lab environment \cite{alam17}, that approach faces practical challenges in a real-time smart environment with older adults due to the diversity of their postures. For example, some older adults use a walker, double walking sticks or a wheelchair, in which case collecting 18 hand gestures and corresponding postural activities for training requires enormous effort and care. To reduce the complexity of ground truth labeling and the subsequent state-space explosion in the graphical model (HDBN), we propose a rotational normalization method that merges hand gestures differing only in direction, forming an 8-hand-gesture model. Moreover, our proposed Feature Weighted Naive Bayes (FWNB) classifier significantly improves on the sparse-deconvolution method proposed by Alam et al., as well as on recognition in diverse postural environments. \begin{figure}[!htb] \begin{center} \epsfig{file=hand_gestures.pdf,height=0.5in, width=3in} \vspace{-.2in} \caption{8 hand gesture dictionary with direction} \label{fig:hand_gestures} \vspace{-.2in} \end{center} \end{figure} \subsection{Hand Gesture Recognition} \label{sec:hand_gesture} \emph{AutoCogniSys} proposes an 8-gesture dictionary (as shown in Fig.~\ref{fig:hand_gestures}) and a Feature Weighted Naive Bayesian (FWNB) framework for building, modeling and recognizing hand gestures. The method comprises the following steps. (i) \emph{Preprocessing:} the 3-axis data provided by the wrist-worn ACC sensor are passed through a 0.4 Hz low-pass filter to remove data drift. (ii) \emph{Rotation normalization:} normalizing the rotation of hand gestures provides greater accuracy and allows for more realistic, orientation-independent motion. We first find the best-fit plane of the acceleration vectors: if the motion lies in a single plane, the acceleration vectors of a closed shape should on average lie in that plane. Then we combine all acceleration segments between points of inflection into one single vector, called the reference vector, which gives the general direction of the user's motion. After that, each vector is normalized relative to the reference vector (a sketch of this step is given below). This normalization merges many of the previously considered 18 hand gestures, resulting in a reduced dictionary of 8 gestures. (iii) \emph{Feature Weighted Naive Bayesian model:} the Naive Bayes classifier is a lightweight and efficient technique for hand gesture recognition. We extract 12 ACC features \cite{alam17} and calculate a weight for each feature type based on the similarity of the feature measures of the trained gestures (0$<$weight$<$1). When recognizing gestures, the proximity of each feature measure to the average trained feature measure of each gesture type is computed from a normal distribution. The proximity value is then multiplied by the feature weight calculated in the training phase. All of these products are added together, and the system predicts the gesture type with the greatest value as the user's gesture. The learning data points should include static postural activities (such as sitting or lying) to avoid unexpected noise in the wrist-worn ACC sensor. In the final hand gesture dictionary, we save the reference vector as our signal dictionary.
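To make step (ii) above concrete, the following minimal Python sketch shows one plausible realization of the rotation-normalization step (our illustration, not the exact implementation): the best-fit plane is obtained from the singular value decomposition of the centered samples, and the in-plane coordinates are rotated so that the reference direction is aligned with a fixed axis. The construction of the reference vector from inflection-point segments is simplified here to a plain vector sum.
\begin{verbatim}
# Minimal sketch of rotation normalization for 3-axis ACC samples.
# Assumptions: acc is an (n, 3) array of low-pass-filtered samples;
# the reference vector is approximated by the sum of in-plane vectors.
import numpy as np

def rotation_normalize(acc):
    centered = acc - acc.mean(axis=0)
    # Best-fit plane: spanned by the two leading right-singular vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane = centered @ vt[:2].T          # (n, 2) in-plane coordinates
    ref = plane.sum(axis=0)              # general direction of motion
    theta = np.arctan2(ref[1], ref[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rot = np.array([[c, -s], [s, c]])
    return plane @ rot.T                 # samples relative to the reference
\end{verbatim}
Gestures that differ only in their direction of motion then collapse to the same normalized trace, which is how the 18-gesture dictionary reduces to 8.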
\subsection{Postural Activity Recognition} In a normal lab environment, the wrist-worn ACC sensor signal is a mixture (convolution) of the actual hand gesture and postural activity relevant signals \cite{alam17}. \emph{AutoCogniSys} improves on this idea by reducing the number of hand gestures to 8 (as shown in Fig.~\ref{fig:hand_gestures}) using rotation normalization, and the number of postural activities to 4 (walking, sitting, standing and lying). We then use the sparse-deconvolution method (with 31\% signal reconstruction error) to obtain the Approximately Sparse Factor. The entire process is summarized below. {\it Building Deconvolution Method:} We first consider the wrist-worn ACC sensor signals (3-axis values) as a convolution of hand gesture and postural activity effects and build a deconvolution framework. The framework takes a known signal (hand gesture effects) and an equalizer parameter ($\lambda$) as input and produces an Approximately Sparse Factor signal (postural activity effects) as output. For the 3-axis ACC signals, we need to learn 3 associated equalizer parameters for each hand gesture. Moreover, each equalizer parameter is involved with 4 postural activities, resulting in a total of 96 ($8\times 3\times 4$) equalizer parameters to learn. {\it Learning Classification Model:} We use the Approximately Sparse Factor signal to extract 12 statistical features, and an SVM with sequential minimal optimization (SMO) \cite{cao06} for postural activity recognition. {\it Prediction Model:} After recognizing the hand gestures following the method explained in Sec.~\ref{sec:hand_gesture}, we take the corresponding reference vector as the known signal and extract the Approximately Sparse Factor signals using the corresponding 3 equalizer parameters ($\lambda$) in the sparse-deconvolution method. We then apply feature extraction and the previously learned SMO-based SVM classifier \cite{cao06} to classify the final postural activity. Fig.~\ref{fig:deconvolution} illustrates a single-axis example of the deconvolution. \begin{figure}[!htb] \begin{center} \epsfig{file=deconvolution.pdf,height=1.6in, width=3in} \vspace{-.15in} \caption{Sample deconvolution example of the X-axis: the raw x-axis accelerometer signal, the reference vector of the sample gesture and the extracted corresponding ASF signal of walking.} \label{fig:deconvolution} \end{center} \vspace{-.15in} \end{figure}
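The deconvolution step can be sketched as follows; this is a simplified rendering in which we stand in for the sparse deconvolution of \cite{alam17} with a plain $\ell_2$-regularized least-squares solve, and the equalizer parameter plays the role of the learned per-gesture, per-axis $\lambda$.
\begin{verbatim}
# Simplified deconvolution sketch: given the known gesture reference
# signal h and one observed ACC axis y = h * x + noise, recover an
# approximately sparse factor x by regularized least squares.
import numpy as np
from scipy.linalg import toeplitz

def deconvolve(y, h, lam):
    n = len(y)
    col = np.concatenate([h, np.zeros(n - len(h))])
    H = toeplitz(col, np.concatenate([[h[0]], np.zeros(n - 1)]))
    # Solve (H^T H + lam * I) x = H^T y
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
\end{verbatim}
In the actual pipeline, the recovered factor feeds the 12 statistical features and the SMO-based SVM described above.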
\subsection{Complex Activity Recognition} We build an HDBN-based complex activity recognition framework for the single-inhabitant smart home scenario \cite{alam16b}, taking advantage of the detected hand gestural and postural activities along with the ambient and object sensor streams. We first obtain instantaneous hand gestural and postural activities from our models above, together with motion sensor and object sensor readings from our IoT system at every time instant, generating a 4-level hierarchy for the HDBN model. Considering the context set $\langle gestural, postural, ambient,object\rangle$ as a hierarchical activity structure (extending the 2-level HDBN of \cite{alam16b}), we build the complex activity recognition model for the single-inhabitant scenario. Finally, we infer the most likely sequence of complex activities (and their time boundaries) using the well-known Expectation Maximization (EM) algorithm \cite{dempster77} for training and the Viterbi algorithm \cite{forney73} for run-time inference. \section{Automatic Activity Features Estimation} The effects of cognitive ability on daily activity performance have been studied before \cite{dawadi14,akl15}. These studies experimentally and clinically validated that cognitive impairment substantially reduces daily activity performance, so that activity performance can be computed as an indicator of the cognitive status of older adults. The standard activity features refer to completeness of task (TC), sequential task ability (SEQ), interruption avoidance capability (INT) etc. In the current behavioral science literature, these activity features carry specific definitions based on the sub-tasks involved in a complex activity \cite{dawadi14,akl15}. Completeness of task refers to how many sub-tasks are missed by the participant. Sequential task ability refers to how many sub-task sequences are missed relative to the gerontologist-defined standard sequence of sub-tasks for the particular complex activity. Interruption avoidance capability refers to how many times the participant stops or interleaves while performing any sub-task. The final goal of activity features estimation is to provide an overall task score, which is proportional to the participant's functional ability in performing daily activities. Our behavioral science team, comprising a nursing professor, a gerontologist and retirement community caregivers, carefully discussed, optimized and chose 87 sub-tasks in total for 13 complex activities. Each sub-task comprises sequential occurrences of hand gestural and postural activities. However, no prior researchers have considered hand gestures for activity features estimation, owing to the complexity of multi-modal wearable and ambient sensor synchronization and multi-label activity classification \cite{dawadi14,akl15}. \emph{AutoCogniSys} exploits single wrist-worn sensor based hand gesture and postural activity recognition and proposes an activity features (TC, SEQ and INT) estimation method that includes these two contexts in conjunction with object and ambient sensor features, providing a significant improvement in cognitive health assessment of older adults (a toy illustration of these measures follows).
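As referenced above, the following toy sketch illustrates how the three measures can be computed for one complex activity from an observed sub-task sequence against the standard sequence; the sub-task names are illustrative only, not from our protocol.
\begin{verbatim}
# Toy computation of TC, SEQ and INT for one complex activity.
def activity_scores(observed, standard, interruptions):
    tc = sum(1 for s in standard if s not in observed)   # missed sub-tasks
    pos = {s: i for i, s in enumerate(observed)}
    # sequencing errors: adjacent standard pairs performed out of order
    seq = sum(1 for a, b in zip(standard, standard[1:])
              if a in pos and b in pos and pos[a] > pos[b])
    return {"TC": tc, "SEQ": seq, "INT": interruptions}

standard = ["fill kettle", "boil water", "add tea", "pour tea"]
observed = ["fill kettle", "add tea", "boil water", "pour tea"]
print(activity_scores(observed, standard, interruptions=1))
# -> {'TC': 0, 'SEQ': 1, 'INT': 1}
\end{verbatim}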
\subsection{Machine Learning Based Complex Activity Features Estimation} In the current cognitive health assessment literature, complex activity features can be defined as $\langle TC,SEQ,INT,TS\rangle$. We use a supervised method to estimate TC, SEQ and INT, and an unsupervised method to estimate TS. We first formulate the automated scoring as a supervised machine learning problem in which learning algorithms learn a function that maps the $\langle${\it hand gesture, posture, object, ambient sensor}$\rangle$ feature set to the direct observation scores. We use a bagging ensemble method to learn the mapping function and an SMO-based SVM \cite{cao06} as the base classifier. The learner combines the base classifier predictions by averaging the bootstrapped individual numeric predictions and generates, for each data point, the highest-probability label. We train three classifiers, with the observations as ground truth for the TC, SEQ and INT scores, and test on the testing dataset. We derive the unsupervised score using a dimensionality reduction technique on each feature set. First, we take all features of each activity and apply an optimal discriminant analysis technique as a dimensionality reduction process \cite{zhang09}, reducing the feature set to a single-dimensional value which represents the automated task completeness score of the particular user activity. A min-max normalization is then applied to give the variables a uniform range, using the equation $z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}$, where $x=\{x_1,\ldots,x_n\}$ and $z_i$ is the $i^{th}$ normalized value. The final single-dimensional score represents the machine-learning-based TS score. \section{Physiological Sensor Signals Processing} The autonomic nervous system (ANS) regulates the body's physiological activities, including heart rate, skin gland secretion, blood pressure, and respiration. The ANS is divided into sympathetic (SNS) and parasympathetic (PNS) branches. While the SNS mobilizes the body's resources for action under arousal conditions, the PNS calms the body to help it regain a steady state. Mental arousal (e.g., stress or anxiety) activates the sweat glands, so that skin conductance increases under SNS activation and decreases under PNS activation. Instantaneous heart rate behaves similarly: a higher heart rate reflects SNS activation and a lower one PNS activation. EDA and PPG sensors are widely used to estimate the instantaneous skin conductance and heart rate respectively \cite{alam16}. \subsection{EDA Sensor Signal Processing} EDA is the property of the human body that causes continuous variation in the electrical characteristics of the skin, varying with the state of the sweat glands. There are three types of arousal: \emph{cognitive, affective and physical}. \emph{Cognitive} arousal occurs when a person tries to solve a problem using her cognitive ability. \emph{Affective} arousal occurs when a person is worried, frightened or angry, whether performing daily activities or at rest. \emph{Physical} arousal, in contrast, is related to brain commands to move body parts; it is imposed on the total arousal as an artifact, called a \emph{motion artifact}. In addition, there is always some noise due to weather conditions (temperature, humidity etc.) and device motion.
Motion artifacts can be the prime cause of contamination of physiological signals recorded while performing daily activities, and they must be removed. \emph{AutoCogniSys} proposes an EDA sensor signal processing method consisting of three steps: (i) noise and motion artifact removal, (ii) separation of the tonic and phasic components (explained later) from the contamination-free EDA signal and (iii) feature extraction on the response window. \subsubsection{Motion Artifacts Removal} There are many types of motion artifacts, but the unusual steep rise is the most common one associated with EDA signals recorded during daily activities \cite{edel67}. We use the well-known steep-rise noise reduction technique, SWT \cite{chen15}. We first consider the EDA signal as a mixture of a slowly varying tonic and a fast-varying phasic component; i.e., the SWT coefficients are modeled as a mixture of two Gaussian components, phasic (close-to-zero-valued signal) and tonic (high-rising signal). After expanding the EDA signal into multiple levels of scaling and wavelet coefficients, we adaptively choose a threshold at each level based on a statistical estimate of the wavelet coefficients' distribution, and apply it to the wavelet coefficients of all levels. Finally, an inverse wavelet transform is applied to the thresholded wavelet coefficients to obtain the artifact-free EDA signal. Fig.~\ref{fig:eda_artifact_removal} shows a sample of the raw and motion-artifact-free EDA signal. \begin{figure}[!htb] \begin{center} \vspace{-.1in} \epsfig{file=eda_signal_artifact.pdf,height=1.6in, width=3.5in} \caption{Dashed line represents the noisy EDA signal and solid red line represents the \emph{AutoCogniSys} proposed motion artifact free EDA signal} \label{fig:eda_artifact_removal} \end{center} \end{figure} \subsubsection{Convex Optimization Technique to EDA Deconvolution} After the motion artifact removal, we consider the EDA signal of $N$ samples as the sum of three components: a slow tonic driver ($t$), a fast (compact, bursty) non-negative sparse phasic driver ($r$) and a remainder error term ($\epsilon_r$): \begin{equation} \label{eq:eda_signal} y = t + r + \epsilon_r \end{equation} The additive error $\epsilon_r$ is white Gaussian noise. The central problem associated with the deconvolution method is to recover the tonic component $t$ from the above equation. \cite{greco16} showed that EDA signal deconvolution (separation of the tonic, phasic and noise terms) is a quadratic optimization problem, and defined the tonic component as \begin{equation} \label{eq:tonic} t = Bl + Cd, \end{equation} where $B$ is a tall matrix whose columns are cubic $B$-spline basis functions, $l$ is the vector of spline coefficients, $C$ is an $N\times 2$ matrix, and $d$ is a $2\times 1$ vector with the offset and slope coefficients of the linear trend. These quantities are obtained from the optimization problem \begin{eqnarray} && \mbox{minimize} \quad \frac{1}{2} {\|Mq + Bl + Cd - y\|}^2_2 + \alpha {\|Aq\|}_1 + \frac{\lambda}{2} {\|l\|}^2_2 \\ && \mbox{subject to} \quad Aq \geq 0 \nonumber \end{eqnarray} where $M$ and $A$ are tridiagonal matrices and $q$ is an auxiliary variable. Solving this problem yields the optimal values of $\{q,l,d\}$, from which the tonic component is obtained via equation~\ref{eq:tonic}. The remainder of equation~\ref{eq:eda_signal}, $r+\epsilon_r$, is considered a mixture of white Gaussian noise ($\epsilon_r$) and the fast-varying phasic component ($r$). We employ a Butterworth low-pass filter (5 Hz) and Hanning smoothing with window size 4 (optimal) to remove $\epsilon_r$ from the phasic component ($r$).
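For concreteness, the optimization problem above can be transcribed almost verbatim into CVXPY. The sketch below uses toy placeholder matrices for $M$, $A$, $B$ and $C$ (in practice these are the tridiagonal operators, spline basis and drift terms of \cite{greco16}), so it illustrates the structure of the problem rather than a production implementation.
\begin{verbatim}
# Schematic CVXPY transcription of the cvxEDA objective.
# M, A, B, C below are toy placeholders, not the true operators.
import numpy as np
import cvxpy as cp

N = 200
y = np.random.default_rng(0).standard_normal(N).cumsum() * 0.01
M = np.eye(N); A = np.eye(N)                 # stand-ins for tridiagonal ops
B = np.ones((N, 1))                          # stand-in for the spline basis
C = np.column_stack([np.ones(N), np.arange(N) / N])  # offset and slope
alpha, lam = 8e-4, 1e-4

q = cp.Variable(N); l = cp.Variable(1); d = cp.Variable(2)
obj = 0.5 * cp.sum_squares(M @ q + B @ l + C @ d - y) \
      + alpha * cp.norm1(A @ q) + (lam / 2) * cp.sum_squares(l)
prob = cp.Problem(cp.Minimize(obj), [A @ q >= 0])
prob.solve()
tonic = B @ l.value + C @ d.value            # t = Bl + Cd, as above
\end{verbatim}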
\subsection{PPG Signal Processing} PPG is mainly used for measuring the oxygen saturation of the blood and blood volume changes in the skin. An ideal PPG signal processing pipeline must contain the following steps: noise and motion artifact removal, heart rate detection, heart rate variability estimation and feature extraction. \subsubsection{PPG Signal Noise and Motion Artifacts Removal} Like the EDA signal, the PPG signal is contaminated with motion artifacts and noise. However, unlike the EDA signal, PPG is quasiperiodic in the time domain \cite{mete30}. We use the Periodic Moving Average Filter (PMAF) to remove motion artifacts and noise \cite{lee07}. We first segment the PPG signal on periodic boundaries and then average the $m^{th}$ samples of each period. After filtering the input PPG signal with a 5-Hz $8^{th}$-order Butterworth low-pass filter, we estimate the maximum and minimum value of each period. The mean of each period is obtained from the maximum and minimum values by applying the zero crossing method. These mean points help determine the boundaries of each period. Then, interpolation or decimation is performed to ensure that each period has the same number of samples \cite{lee07}. \subsubsection{Heart Rate and Heart Rate Variability Estimation} We first apply PMAF to the PPG signal to remove noise and motion artifacts, refine the PPG by smoothing the signal using a 1-dimensional Gaussian filter and convolution, calculate the first derivative of the convolved signal and finally find the differences between consecutive peak values, which constitute the HRV \cite{sel08}. The total number of peaks (R-peaks or beats) per minute is the Heart Rate (HR), measured in beats per minute. HRV and HR are inversely related: mental arousal that increases HR should decrease HRV in the time segment window. Fig.~\ref{fig:ppg_artifact_removal} shows a sample of the noisy and filtered PPG signals and their corresponding instantaneous heart rates. \begin{figure}[!htb] \vspace{-.1in} \begin{center} \epsfig{file=ppg_artifact_removal.pdf,height=1.4in, width=3.5in} \vspace{-.15in} \caption{Top figure illustrates the noisy signal (dotted line) and filtered signal from PPG sensor based on our filtering method. Bottom figure illustrates instant heart rate calculated from noisy signal (dotted line) and filtered signal} \label{fig:ppg_artifact_removal} \end{center} \vspace{-.15in} \end{figure} \subsection{Physiological Sensor Signal Feature Extraction} Using the above methods, we remove the noise and motion artifacts from the EDA and PPG signals and generate two time series from the EDA (tonic and phasic components) and one time series from the PPG (HRV). We then segment each time series based on our previously detected complex activities, such that each response window starts and ends with the starting and ending points of each complex activity. We extract 7 statistical time-series features for EDA (as shown in Table~\ref{tab:eda_features}) and 8 features for HRV (Table~\ref{tab:hrv_features}) within the response window; a sketch of the HRV computation is given below.
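As referenced above, a minimal sketch of the HR/HRV computation from a filtered PPG trace follows; it assumes scipy's generic peak detector as the beat finder (the E4 PPG is sampled at 64 Hz) and computes a few of the Table~\ref{tab:hrv_features} features.
\begin{verbatim}
# Minimal HR/HRV sketch from a filtered PPG trace (fs = 64 Hz).
import numpy as np
from scipy.signal import find_peaks

def hrv_features(ppg, fs=64.0):
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # >= 0.4 s apart
    rr = np.diff(peaks) / fs * 1000.0                   # RR intervals (ms)
    diff = np.diff(rr)
    return {
        "meanRR": rr.mean(),
        "SDNN": rr.std(ddof=1),
        "SDSD": diff.std(ddof=1),
        "RMSSD": np.sqrt(np.mean(diff ** 2)),
        "NN50": int(np.sum(np.abs(diff) > 50)),
        "pNN50": float(np.mean(np.abs(diff) > 50)),
        "HR": 60000.0 / rr.mean(),                      # beats per minute
    }
\end{verbatim}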
\begin{table}[!t] \begin{center} \renewcommand{\arraystretch}{1} \caption{EDA Features Within The Response Window} \begin{scriptsize} \label{tab:eda_features} \begin{tabular}{|c|l|} \hline \bfseries Features& \bfseries Description\\ \hline nSCR & Number of SCRs within response window (wrw)\\ \hline Latency & Response latency of first significant SCR wrw\\ \hline AmpSum & Sum of SCR-amplitudes of significant SCRs wrw\\ \hline SCR & Average phasic driver wrw\\ \hline ISCR & Area (i.e. time integral) of phasic driver wrw\\ \hline PhasicMax & Maximum value of phasic activity wrw\\ \hline Tonic & Mean tonic activity wrw\\ \hline \end{tabular} \end{scriptsize} \end{center} \end{table} \begin{table}[!t] \begin{center} \renewcommand{\arraystretch}{1} \vspace{-.3in} \caption{Heart Rate Variability Features} \label{tab:hrv_features} \begin{scriptsize} \begin{tabular}{|c|l|} \hline \bfseries Feature& \bfseries Description\\ \hline $\overline{RR}$&Mean RR intervals\\ \hline SDNN&Standard deviation of RR intervals\\ \hline SDSD&Std of successive RR interval differences\\ \hline RMSSD&Root mean square of successive differences\\ \hline NN50&\#successive intervals differing more than 50 ms\\ \hline pNN50&Relative amount of NN50\\ \hline HRVTI&Total number of RR intervals/height of the histogram\\ \hline TINN&Width of RR histogram through triangular interpolation\\ \hline \end{tabular} \end{scriptsize} \end{center} \end{table} \section{Experimental Evaluation} In this section, we describe our data collection, the available benchmark dataset, the baseline methods and our evaluation. \subsection{Datasets and Baseline Methods} We validate and compare \emph{AutoCogniSys} with baseline methods on both publicly available and our collected datasets. \subsubsection{RCC Dataset: Collection and Ground Truth Annotation} For the Retirement Community Center (RCC) dataset, we recruited 22 participants (19 females and 3 males) with ages ranging from 77 to 93 (mean 85.5, std 3.92) in a continuing care retirement community, with the appropriate institutional IRB approval and signed consent. The gender diversity of the recruited participants reflects the gender distribution (85\% female and 15\% male) of the retirement community facility. A trained gerontology graduate student evaluator works with the participants to complete the surveys. Participants are given a wrist band to wear on their dominant hand, and concurrently another trained IT graduate student sets up the IoT system in the participant's own living environment (setup time 15-30 minutes). The participants are instructed to perform 13 \emph{complex ADLs}. Another project member remotely monitors the sensor readings, videos and system failure status. The entire session lasts 2-4 hours depending on the participant's physical and cognitive ability. We follow the standard protocol for annotating demographics and activities specified in the IRB. Two graduate students annotate the activities (postural, gestural and complex), while the observed activity performances are computed by the evaluator. Two more graduate students validate the annotations against the videos. Overall, we annotate 13 complex activities (291 samples in total), 8 hand gestures (43,561 samples in total) and 4 postural activities (43,561 samples in total) across participants. Annotating postural and complex activities from the recorded videos poses no difficulty.
However, annotating hand gestures is extremely difficult in our scenario. We therefore use a video-based hand tracker that can track and sketch wrist movements from a video episode \cite{hugo14}. This sketching helps significantly in identifying which particular hand gesture is being performed in a given time segment. \subsubsection{EES Datasets: EDA and PPG Sensor Datasets} We use the Eight-Emotion Sentics (EES) dataset to validate the physiological signal processing approaches proposed by \emph{AutoCogniSys} \cite{picard01}. The dataset consists of measurements of four physiological signals (PPG/blood volume pulse, electromyogram, respiration and skin conductance/EDA) over eight affective states (neutral, anger, hate, grief, love, romantic love, joy, and reverence). The recordings were taken from a single participant once a day, in sessions lasting around 25 minutes, over 20 days. We consider only PPG and EDA for all of the affective states in our study. \subsubsection{Baseline Methods} Though no framework has ever combined all of these modalities into real-time automated cognitive health assessment, we evaluate the performance of \emph{AutoCogniSys} by comparing its components individually with up-to-date relevant works. For hand gesture and postural activity recognition, we consider the method proposed in \cite{alam17} as the baseline. For complex activity recognition, we compare our hand gesture and postural activity classifier aided HDBN model with the three-level Dynamic Bayesian Network framework of \cite{zhu12}. For activity performance estimation, activity performance based cognitive health assessment, and EDA and PPG based cognitive health assessment, we consider the method proposed in \cite{alam16} as the baseline. \subsection{Activity Recognition Evaluation} The standard definition of \emph{accuracy} in any classification problem is $\frac{TP+TN}{TP+TN+FP+FN}$, where $TP, TN, FP$ and $FN$ denote true positives, true negatives, false positives and false negatives. For the complex activity recognition evaluation, we additionally consider the \emph{start/end duration error} as a performance metric, which can be explained as follows: suppose the true duration of ``cooking'' is 30 minutes (10:05 AM - 10:35 AM) and our algorithm predicts 29 minutes (10:10 AM - 10:39 AM). Then the start/end duration error is 9 minutes ($|$5 minutes delayed start$|$ + $|$4 minutes hastened end$|$), an overall error of 30\% (9/30=0.3). We measure cross-participant accuracy using the leave-two-participants-out method: we hold out two participants' data points from the entire dataset, train our proposed classification models, test the model accuracy on the data points of the two held-out participants, and repeat the process over the entire dataset (a sketch follows).
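As referenced above, this protocol maps directly onto scikit-learn's group-based splitter. In the following hedged sketch, the feature matrix \texttt{X}, labels \texttt{y} and per-sample participant ids are assumed to come from the earlier pipeline stages, and \texttt{SVC} stands in for the SMO-based SVM of \cite{cao06}.
\begin{verbatim}
# Leave-two-participants-out cross-validation sketch.
import numpy as np
from sklearn.model_selection import LeavePGroupsOut
from sklearn.svm import SVC

def cross_participant_accuracy(X, y, participant_ids):
    scores = []
    for tr, te in LeavePGroupsOut(n_groups=2).split(X, y, participant_ids):
        clf = SVC().fit(X[tr], y[tr])
        scores.append(clf.score(X[te], y[te]))
    return float(np.mean(scores))
\end{verbatim}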
\begin{figure*}[!htb] \begin{minipage}{0.45\textwidth} \begin{center} \epsfig{file=hand_gesture_accuracy.pdf,height=1.6in, width=3in} \caption{Feature Weighted Naive Bayes (FWNB) classification accuracy comparisons with baseline approaches (graphical signatures of all hand gestures are shown).} \label{fig:hand_gesture_accuracy} \end{center} \end{minipage} \begin{minipage}{0.29\textwidth} \begin{center} \vspace{-.12in} \epsfig{file=posture_accuracy_normal.pdf,height=1.6in, width=2.1in} \caption{4-class postural level activity recognition performance and comparisons with baseline method} \label{fig:posture_accuracy_normal} \end{center} \end{minipage} \begin{minipage}{0.25\textwidth} \begin{center} \vspace{-.12in} \epsfig{file=posture_accuracy_extended.pdf,height=1.6in, width=2.1in} \caption{6-class diverse postural activity recognition framework accuracy comparisons with the baseline approach.} \label{fig:posture_accuracy_extended} \end{center} \end{minipage} \end{figure*} Fig.~\ref{fig:hand_gesture_accuracy} displays the 8-gesture recognition accuracy of the Feature Weighted Naive Bayes (FWNB) classifier compared with the baseline methods, clearly showing that our method outperforms the baseline (5\% improvement) with an overall accuracy of 92\% (FP rate 6.7\%) on the RCC dataset. For postural activity recognition, our method achieves 91\% accuracy (FP rate 9.5\%) on the RCC dataset, significantly outperforming the baseline approach (8\% improvement). When we expand the postural activities of the RCC dataset to 3 diverse `walking' postures, `normal walking', `walking with walker' and `walking with single stick', the accuracy goes down to 88\% (FP 7.9\%). Fig.~\ref{fig:posture_accuracy_normal} and Fig.~\ref{fig:posture_accuracy_extended} illustrate the 4-class and extended 6-class postural classifier accuracies respectively, showing that \emph{AutoCogniSys} outperforms the baseline for each postural activity as well as overall (8\% and 7\% improvement respectively). For complex activity classification, we train our HDBN model on the RCC dataset. Our leave-two-participants-out method results in an accuracy of 85\% (FP rate 3.6\%, precision 84.2\%, recall 84.5\%, ROC area 98.2\%) with a start/end duration error of 9.7\%. We run the entire evaluation for the baseline complex activity recognition algorithm as well; it achieves an overall accuracy of 78\% (FP rate 5.2\%, precision 79.6\%, recall 78.5\%, ROC area 82.7\%), clearly lower than our approach. Fig.~\ref{fig:complex_activity_roc} and Fig.~\ref{fig:complex_activity_accuracy} illustrate the ROC curve and the per-activity recognition accuracy compared with the baseline method, showing that our framework outperforms the baseline (7\% improvement). Fig.~\ref{fig:complex_activity_accuracy} also shows that the inclusion of postural activity improves the final complex activity recognition (4\% improvement).
\begin{figure} [!htb] \begin{minipage}{0.15\textwidth} \begin{center} \epsfig{file=complex_activity_roc.pdf,height=1.4in, width=1.1in} \caption{ROC curve for complex activity recognition} \label{fig:complex_activity_roc} \end{center} \end{minipage} \begin{minipage}{0.33\textwidth} \begin{center} \epsfig{file=complex_activity_accuracy.pdf,height=1.4in, width=2.3in} \caption{Complex ADLs recognition accuracy improvement and comparison with baseline \cite{zhu12} and HMM based method} \label{fig:complex_activity_accuracy} \end{center} \end{minipage} \end{figure} \subsection{Quantification of Performance Score} To characterize both the qualitative and quantitative health assessment performance scores, we start with four different feature groups spanning functional and physiological health measures: (i) observation based activity features, (ii) automatic activity performance features, (iii) EDA features and (iv) PPG features. For \emph{observation based activity features}, we design a complex activity set comprising multiple subtasks involving task {\it interruption, completion and sequencing}. Participants are instructed to perform the complex activities while the trained evaluator observes the aforementioned functional activity performance measures. Each incorrect attempt at a performance measure is assigned one point; thus a higher score reflects lower functional activity performance \cite{dawadi14}. For \emph{automatic activity performance features}, we first detect hand gestural and postural activities. Then, we feed the low-level activity contexts (gestural and postural), combined with the ambient contexts (object and ambient motion sensor readings), into the single-inhabitant HDBN model \cite{alam16b} to recognize complex activities. The complex activity recognition framework provides both activity labels and activity windows (start-end points). We then extract features from the object sensor, ambient sensor, gestural activity and postural activity events for each activity window. The features are the number of occurrences, mean number of occurrences, consecutive 1, 2, 3, $\ldots$ 20 occurrences, top 10, 20, 30, $\ldots$, 90 percentiles etc. (29 features in total). For \emph{physiological features}, we first detect the 13 complex activities using the HDBN algorithm, which provides activity labels and activity windows (start-end points); we then apply noise reduction and motion artifact removal, extract 7 EDA features and 8 HRV features for each activity, and take their mean over time (minutes) to obtain a set of 15 (7+8) complex activity physiological features for each participant. In summary, we extract 3 observation based activity features, 29 automatic activity performance features, 7 EDA features and 8 HRV features. \subsection{Physiological Signal Processing Performance Evaluation} A standard evaluation should use both experimental and publicly available datasets to confirm the advantage of the novel approaches. We first evaluate our physiological signal processing techniques on a publicly available dataset (the EES dataset \cite{picard01}) by detecting 8 human emotions. Then, in the next section, we evaluate our methods in assessing the cognitive health status of older adults using the RCC dataset. For EDA, we first apply the SWT method to remove motion artifacts and noise. We then use the cvxEDA method to separate the tonic and phasic components of the EDA, extract the 7 EDA features on a sliding window of 4 seconds, and finally feed the 7 EDA features into an SMO-based SVM algorithm \cite{cao06}.
We use 10-fold cross validation to classify the eight emotions, achieving 87\% overall accuracy (FP rate 6\%). For PPG, we first apply our proposed PMAF based noise and motion artifact removal technique. We then calculate the HRV and perform time-domain feature extraction to obtain the 8 HRV features on a sliding window of 4 seconds. We feed these features into an SMO-based SVM algorithm \cite{cao06}. Our 10-fold cross validation shows an accuracy of 79\% (FP rate 11.5\%) in detecting the 8 emotions of the EES dataset. Fig.~\ref{fig:ees_eda} and Fig.~\ref{fig:ees_ppg} clearly show that the EDA and PPG signal processing techniques proposed by \emph{AutoCogniSys} significantly improve the accuracy over the baseline \cite{alam16} method (10\% and 12\% improvement). \begin{figure}[!htb] \begin{minipage}{0.24\textwidth} \begin{center} \epsfig{file=ees_eda.pdf,height=1.2in, width=1.8in} \caption{(EES Dataset) EDA features based eight-emotion classification accuracy comparison with the baseline method} \label{fig:ees_eda} \end{center} \end{minipage} \begin{minipage}{0.23\textwidth} \begin{center} \epsfig{file=ees_ppg.pdf,height=1.2in, width=1.7in} \caption{(EES Dataset) PPG features based eight-emotion classification accuracy comparison with the baseline method} \label{fig:ees_ppg} \end{center} \end{minipage} \end{figure} \subsection{Evaluation of Performance Scores} The feature subsets used in the experimentation, for observation and survey based clinical assessments and for technology-guided physiological and activity initiated health assessments, are listed in Table~\ref{tab:feature_subset}. Among our 6 demographic surveys, we find significant differences in terms of cognition only for the SLUMS Score (S-Score). Based on this, we divide our participant pool into three groups, \emph{Not Cognitively Impaired (NCI), Mildly Cognitively Impaired (MCI) and Cognitively Impaired (CI)}, with $5$, $7$ and $10$ participants respectively. \begin{table}[!t] \begin{scriptsize} {\centering \renewcommand{\arraystretch}{.6} \caption{Feature Subsets} \label{tab:feature_subset} \begin{tabular}{|l|L{5.5cm}|} \hline \bfseries Feature& \bfseries Description\\ \hline Observation & Task Completeness (TC), Sequencing (SEQ), Interruptions (INT)\\ \hline Survey & SLUMS Score (S-Score), ZUNG Score (Z-Score), IADL Score (I-Score), Yale Score (YPAS), Barthel Score (B-Score), GDS Score (G-Score)\\ \hline EDA and HRV & 7 and 8 Features\\ \hline Activity Performance& Supervised (TC, SEQ, INT), Unsupervised\\ \hline Arousal& EDA and HRV features of each complex activity window\\ \hline \end{tabular} } \end{scriptsize} \end{table} \begin{figure}[!htb] \begin{center} \epsfig{file=group_correlation.pdf,height=1in, width=3.3in} \caption{Group correlation analysis ($r$-values) based on the \emph{AutoCogniSys} proposed method. NCI, MCI and CI denote the not cognitively impaired, mildly cognitively impaired and cognitively impaired groups; TC, INT, SEQ, EDA and HRV denote task completeness, interruption scores, sequencing scores, electrodermal activity features and heart rate variability features.} \label{fig:group_correlation} \end{center} \vspace{-.2in} \end{figure}
TC, INT, SEQ, EDA and HRV represent task completeness scores, interruption scores, sequencing scores, electrodermal activity features and heart rate variability features} \label{fig:group_correlation} \end{center} \vspace{-.2in} \end{figure} \begin{figure}[!htb] \begin{center} \epsfig{file=group_correlation_baseline.pdf,height=1in, width=3.3in} \caption{Group correlation analysis ($r$-values) based on the baseline \cite{alam16} method} \label{fig:group_correlation_baseline} \vspace{-.25in} \end{center} \end{figure} \subsection{Statistical Correlation Analysis of Cognitive Health} We used Pearson correlation coefficients with significance at $p<0.05$ (*) for individual features and partial correlation coefficients with significance at $p<0.005$ (**) for group-of-features correlation analysis. Fig. \ref{fig:group_correlation} and Fig. \ref{fig:group_correlation_baseline} show the group correlation analysis results based on the proposed \emph{AutoCogniSys} framework and the baseline \cite{alam16} framework, respectively. The figures clearly show that our proposed framework improves the correlation with the ground truth. \subsection{Machine Learning Classification of Cognitive Health} We evaluate machine learning classifiers that predict the cognitive status of older adults using both individual modalities and combined features. We use a leave-two-participants-out method to train and test classification accuracy. We first choose the individual activity features (machine-learning-based interruption scores, sequencing scores and unsupervised scores) and their combination to train and test cognitive impairment status classification with the SMO-based SVM algorithm \cite{cao06}. The classification accuracies are 72\%, 69\%, 76\% and 83\%, respectively. Then we consider the 7 EDA-activity features and the 8 HRV-activity features individually in the training and testing phases of the SMO-based SVM algorithm \cite{cao06}, resulting in 85\% and 80\% accuracy, respectively. \begin{figure}[!htb] \begin{minipage}{0.24\textwidth} \begin{center} \epsfig{file=combined_classification.pdf,height=1.2in, width=1.7in} \vspace{-.15in} \caption{Comparison of individual and combined classification accuracies with the baseline method for cognitive impairment status detection} \label{fig:combined_classification} \end{center} \end{minipage} \begin{minipage}{0.23\textwidth} \begin{center} \epsfig{file=each_activity_cognitive_assessment.pdf,height=1.2in, width=1.7in} \caption{Machine learning based cognitive health assessment accuracy for each complex activity in terms of activity, EDA and HRV features.} \label{fig:each_activity_cognitive_assessment} \end{center} \end{minipage} \end{figure} For the combined classifier, we first applied sequential forward feature selection to find the best combinations of 1--3 features for the cognitive impairment groups (NCI, MCI and CI) in terms of combined activity features (29 features), EDA-activity features (7 features) and HRV-activity features (8 features); a sketch of this procedure is given below. Our final combined classifier (the SMO-based SVM algorithm \cite{cao06}) provides an accuracy of {\bf 93\%} in detecting the cognitive impairment status of older adults. Fig. \ref{fig:combined_classification} shows our proposed individual and combined methods outperform the baseline \cite{alam16} significantly (13\% improvement). Fig. \ref{fig:each_activity_cognitive_assessment} shows the cognitive impairment status prediction accuracy for each modality (activity feature, EDA and HRV) per individual complex activity.
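The following sketch illustrates the evaluation loop, assuming scikit-learn; \texttt{SVC} again stands in for the SMO-based SVM, and the feature matrix \texttt{X}, labels \texttt{y} and participant ids \texttt{groups} are placeholders.
\begin{verbatim}
# Sequential forward selection of up to 3 features, scored by
# leave-two-participants-out cross-validation (assumes scikit-learn;
# X, y, groups are placeholders; SVC stands in for the SMO-based SVM).
from sklearn.model_selection import LeavePGroupsOut, cross_val_score
from sklearn.svm import SVC

def ltpo_accuracy(X, y, groups, cols):
    cv = LeavePGroupsOut(n_groups=2)      # leave two participants out
    return cross_val_score(SVC(kernel="rbf"), X[:, cols], y,
                           groups=groups, cv=cv).mean()

def forward_select(X, y, groups, max_feats=3):
    chosen, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(chosen) < max_feats:
        cand = max(remaining,
                   key=lambda j: ltpo_accuracy(X, y, groups, chosen + [j]))
        acc = ltpo_accuracy(X, y, groups, chosen + [cand])
        if acc <= best:
            break                         # stop when no candidate helps
        chosen.append(cand)
        remaining.remove(cand)
        best = acc
    return chosen, best
\end{verbatim}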
\subsection{Discussion} If we exclude the postural activities from automated activity performance scoring, we find a reduced statistical correlation with the original task completeness performance for the \{NCI, MCI, CI\} participant groups (INT 0.53*, SEQ 0.21' and unsupervised 0.49'). However, if we skip our proposed motion artifact removal stage, we find reduced statistical correlations for the \{NCI, MCI\} and \{MCI, CI\} participant groups (EDA and HRV correlations of \{0.51*, -0.51*\} and \{-0.53*, 0.46\}, respectively). To test the impact of our proposed motion artifact removal on EDA signals more rigorously, we choose 5 random participants, engage an expert annotator to mark motion artifact segments in each participant's first 30 minutes of complex activity data using the recorded video, and apply both the baseline method and ours to detect motion artifact segments. While the baseline method achieves 75.5\% (FP rate 20.3\%) accuracy in detecting motion artifact segments, \emph{AutoCogniSys} outperforms it, achieving 89.9\% (FP rate 8.9\%) accuracy. In terms of user experience, we observed 100\% acceptance of wearing the wristband, 71\% acceptance of consenting to camera use, and a 0\% failure rate in collecting continuous data. \section{Conclusion} We propose \emph{AutoCogniSys}, an IoT-inspired design approach that combines a smart home embedded with wearable and ambient sensors, extensive signal processing, machine learning algorithms and statistical analytics to automate cognitive health assessment in terms of complex activity performance and physiological responses to daily events. Additionally, our postural activity detection approach for a diverse population, together with improved activity performance measurement and artifact removal from physiological sensor signals, facilitates automated cross-sectional cognitive health assessment of older adults. Our evaluation of each modality (physical, physiological and ambient) and each activity mode shows that even a single mode (say, a single activity and a single sensor) can provide a significantly improved cognitive health assessment measure.
\section{Introduction} Photon pairs from nonlinear optics are so far the only resource to have distributed quantum entanglement over more than a few kilometers~\cite{ursin-2007-3,1367-2630-11-8-085002,Dynes:09,Inagaki:13,Ma2012Quantum-,Herbst17112015}, a critical link in future quantum networks, and are well-suited for use in multi-port quantum interferometers for sensing, simulation and computation, both as pairs directly and for heralded single photons~\cite{Metcalf:2013aa,Crespi:2013aa,Tillmann:2013aa,Carolan14082015}. Entangled photon pairs have also been used in quantum teleportation~\cite{Takesue:15,Valivarthi:2016aa,Sun:2016aa} and entanglement swapping~\cite{Takesue:09,PhysRevA.85.032337,Jin:2015aa}. These applications require that the reduced spectral state of each photon is pure: mixedness of the photon states leads to reduced visibility of the interference of independent photons, and therefore lower-quality final states. Parametric down-conversion (PDC) and four-wave mixing (FWM) are the most common sources of photon pairs, and these photons usually possess spectral anti-correlation leading to mixedness of the reduced state of each photon. This frequency entanglement can be useful for some applications~\cite{PhysRevA.65.053817}, but is catastrophic for multi-photon interference or entanglement-swapping experiments. A convenient solution is narrowband filtering of both photons, which casts each into a single spectral mode, removing entanglement in favor of the spectral purity of each photon. Both FWM sources~\cite{Sharping:06,1367-2630-13-6-065005,chalco2011,W.2013On-chip-,Sun:2016aa} and PDC sources ~\cite{PhysRevA.81.021801,Bruno:2014aa,Valivarthi:2016aa,PhysRevLett.117.210502,Chen:17,Vergyris:2016aa} often use filters much narrower than the photon bandwidths. But is spectral filtering compatible also with high {\em pair-symmetric heralding efficiency} (PSHE), defined as the product of signal and idler heralding efficiencies? In contrast to heralded single photon sources where only one photon requires high heralding efficiency, we consider photon pair sources where both photons must be generated in spectrally pure states and with high efficiency, such that both may be used for interference experiments. High heralding efficiency is critical for scaling experiments and communications to many photons and higher rates~\cite{1367-2630-12-9-093027,PhysRevLett.117.210502,Chen:17} due to the exponential increase in losses with number of photons, and also of fundamental importance: for reaching scalability in optical quantum computing~\cite{PhysRevLett.100.060502,PhysRevA.81.052303,Jennewein:2011aa}, in device-independent quantum cryptography~\cite{PhysRevLett.98.230501,PhysRevA.86.032325}, and for tests of local causality with entangled photons~\cite{LHFVienna,PhysRevLett.115.250402}. Our results are especially important for applications that require both high pair-symmetric heralding efficiency and multi-source interference: interference of pair sources to produce large entangled states~\cite{Pan2000Experime,1367-2630-12-9-093027,PhysRevLett.117.210502}, entanglement swapping~\cite{PhysRevLett.80.3891,Takesue:09,PhysRevA.85.032337,Herbst17112015,Jin:2015aa}, heralded noiseless qubit amplification~\cite{Kocsis2013Heralded,2015arXiv150703210B}, quantum repeater networks~\cite{Azuma:2015aa,PhysRevA.92.022357,Krovi:2016aa}, and certain multiphoton phase estimation schemes~\cite{0295-5075-82-2-24001,Xiang:2013aa}. 
Here we show that, for photon pair sources with spectral correlation or anti-correlation, increasing the spectral purity by filtering comes at a direct cost of decreasing the pair-symmetric heralding efficiency. This tradeoff is based only on the joint spectral intensity (JSI) of the photons, not on the underlying physics that produce a specific JSI, meaning our results are applicable to both PDC and FWM, and to pulsed and continuous-wave pumps. We find a significant drop in achievable PSHE even with ideal filters. We quantify this tradeoff by introducing the symmetrized fidelity of the photon pairs to two spectrally pure single photons, and show that it is bounded well below one for spectrally-correlated sources. This is supported by an experiment using a lithium niobate photon-pair source, where we vary filter parameters, and find that heralding efficiency necessarily decreases as purity increases. Similar results could be obtained for spatial correlation and spatial filtering, but here we focus on a single spatial mode. Previous investigations of filtering in PDC and FWM have largely focused on heralded single photons, where the herald{\em ing} photon is strongly filtered and the herald{\em ed} photon is unfiltered, allowing both high spectral purity and high single-sided heralding efficiency~\cite{Aichele:2002aa,Neergaard-Nielsen:07,1367-2630-12-6-063001,PhysRevA.83.053843}. The effect of filtering on continuous-variable photon states has been studied~\cite{PhysRevA.90.023823}, as has the effect of self- and cross-phase modulation on filtered photon pairs~\cite{PhysRevA.94.063855}. Recent theoretical work has also included spatial entanglement and purity with spatial and spectral filters~\cite{PhysRevA.91.013819,PhysRevA.94.069901}, showing again high single-sided heralding efficiency and purity. This is in contrast to source engineering methods, which achieve intrinsically pure states by controlling the dispersion and pump bandwidth~\cite{PhysRevA.56.1627,PhysRevA.64.063815,PhysRevLett.100.133601,prespw,Garay-Palmett:07,Halder:09,Levine:10,PhysRevLett.106.013603,Gerrits:11,Fang:13,Harder:13,Fortsch:2013aa,Bruno:14,Weston:2016aa}. Some schemes with tight spectral and time filtering can even outperform this source engineering, when considering production rates as well as purity~\cite{PhysRevA.82.043826,PhysRevA.84.033844}. Furthermore, in contrast to spectral filtering after generation, placing the nonlinear medium in a cavity of carefully engineered length and finesse can in principle produce spectrally pure states without loss of heralding efficiency~\cite{Chuu_2012,1367-2630-17-7-073039}. In most cases, however, filters are still needed for single-mode operation, as the phasematching bandwidth covers multiple longitudinal modes of the cavity~\cite{Jeronimo-Moreno2010,Neergaard-Nielsen:07,PhysRevLett.101.190501,PhysRevLett.102.063603,PhysRevLett.110.220502}; fortunately, for narrowband pumps and filters, these modes do not contribute to a decrease in heralding efficiency because each filter intersects just one cavity mode.
For the case where both photons are to be used from non-engineered and non-cavity sources, hints that filtering is incompatible with high PSHE have appeared numerous times~\cite{PhysRevA.56.1627,PhysRevA.64.063815,PhysRevLett.100.133601,Bock:16} and a simple model for heralding efficiency after filtering was developed in~\cite{PhysRevA.92.012329}, but so far no experiments have directly studied the impact of filtering on purity and heralding efficiency simultaneously, and no previous studies have found the fundamental limits to symmetrized fidelity we present here. \section{Spectrally-filtered photon pairs} One can get a feeling for the intrinsic tradeoff between reduced-state spectral purity and heralding efficiency from \cref{fig.intro}. It shows the joint spectral intensity of an example photon pair state, overlaid with narrowband filters on each photon, labeled signal and idler. To achieve a spectrally pure state, the JSI that remains after filtering must be uncorrelated between the two photons, either a circle, or an ellipse along the vertical or horizontal axis. But for high PSHE, the two-photon amplitudes transmitted by each filter individually must overlap, otherwise signal photons will pass the filter without the corresponding idler and vice versa. \begin{figure}[!h] \centerline{\includegraphics[width=0.9\linewidth]{Fig1.pdf}} \caption{Photon pair production and filtering (top), resulting in joint spectral intensity with spectral correlation between signal and idler photons (bottom). The signal and idler filters are overlaid, and the JSI and marginal photon spectra remaining after filtering dictate the reduced-state spectral purity and heralding efficiency of the photons. The phasematching function with angle $\theta$ is multiplied by the pump envelope (which always has angle \SI{45}{\degree}) to produce the total JSI. Thus the overall angle of the JSI is somewhere between \SI{45}{\degree} and $\theta$.} \label{fig.intro} \end{figure} An uncorrelated JSI, fully contained within both filters is only possible for certain ranges of the phasematching angle, namely $\theta\in\left[\SI{90}{\degree},\SI{180}{\degree}\right]$, and with a pump bandwidth optimized for the phasematching bandwidth. But these conditions are precisely those for which filtering is not required, since there are no underlying spectral correlations in this condition. Furthermore, achieving a phasematching angle in this range is nontrivial, as it requires the group velocity of the pump to be between that of the signal and idler. This source engineering is only possible~\cite{Zhang2012Heralded} in PDC for very specific wavelength ranges in birefringent crystals~\cite{prespw,1367-2630-10-9-093011,Laudenbach:16}. It is easier to arrange in FWM since it occurs naturally for normal dispersion with the pump between the signal and idler frequencies (here the frequencies are rather close, necessitating narrow filtering for pump removal), or by pumping near the zero-dispersion wavelength~\cite{PhysRevLett.102.123603,1367-2630-13-6-065009} or using birefringent fibers~\cite{Smith:09}. As a concrete example we consider waveguided type-II PDC wherein the photons are emitted in a single spatial mode (such that spatial variables do not play a role) but with different polarizations. These sources can be easily transformed to entangled-pair sources with Sagnac~\cite{PhysRevA.73.012316} or Mach-Zehnder~\cite{PhysRevLett.105.253601} interferometers. 
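To make the group-velocity condition concrete, the phasematching angle can be evaluated directly from the inverse group velocities $k^\prime = n_g/c$ using the $\arctan$ relation for $\theta$ given below; the sketch uses illustrative group indices, not values for a specific crystal.
\begin{verbatim}
# Phasematching angle from inverse group velocities k' = n_g / c
# (illustrative group indices only, not values for a real material).
import numpy as np

c = 2.998e8   # m/s

def phasematch_angle(ng_pump, ng_signal, ng_idler):
    kp, ks, ki = ng_pump / c, ng_signal / c, ng_idler / c
    # Reduce to [0, 180) degrees, the range used for theta here.
    return np.degrees(np.arctan2(kp - ks, kp - ki)) % 180

# theta in [90, 180] requires the pump group velocity to lie between
# those of signal and idler (numerator and denominator of opposite sign):
print(phasematch_angle(2.25, 2.20, 2.30))   # -> 135.0
\end{verbatim}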
At low enough pump powers to stay in the single-pair regime, the spectral properties of PDC are governed by the joint spectral amplitude $f\left(\omega_s, \omega_i\right)$ for signal and idler frequencies $\omega_s$ and $\omega_i$, giving rise to the photon pair state~\cite{PhysRevA.56.1627} \begin{align} \ket{\psi} = \iint d\omega_s d\omega_i f\left(\omega_s, \omega_i\right) \mathcal{F}_s(\omega_s)\mathcal{F}_i(\omega_i)\ket{\omega_s}\ket{\omega_i}, \label{eq.state} \end{align} where $\ket{\omega_{s/i}}$ is a single photon at frequency $\omega_{s/i}$ with the polarization of the signal/idler mode and $\mathcal{F}_s(\omega_s)$ and $\mathcal{F}_i(\omega_i)$ are spectral filters on the signal and idler photons respectively. The joint spectral intensity is $\left|f\left(\omega_s, \omega_i\right)\mathcal{F}_s(\omega_s)\mathcal{F}_i(\omega_i)\right|^2$, and the filters can be of any shape: we consider square and Gaussian filters. We model the joint spectral amplitude around central frequencies $\omega_{s0}$ and $\omega_{i0}$ by \begin{align}\label{eq.jsa} f\left(\omega_s, \omega_i\right) = N \exp\left({\frac{-\left(\omega_s-\omega_{s0}+\omega_i-\omega_{i0}\right)^2}{4\sigma_p^2}}\right)\\ \times\mathrm{sinc}\left(\frac{\left(\left[\omega_s-\omega_{s0}\right]\sin{\theta} + \left[\omega_i-\omega_{i0}\right]\cos{\theta}\right)}{2\sigma_{pm}}\right)\nonumber. \end{align} The pump and phasematching bandwidths are $\sigma_p$ and $\sigma_{pm}$, respectively; $N$ is a normalization term; and the phasematching angle~\cite{Smith:09} is $\theta = \arctan\left(\frac{k_p^\prime - k_s^\prime}{k_p^\prime - k_i^\prime}\right)$, where $k_x^\prime$ is the frequency derivative of the wavenumber $k$ of mode $x$. Thus the nonlinear material, waveguide characteristics, and wavelengths can all be chosen to determine the phasematching angle. \section{Heralding efficiency and reduced-state spectral purity} We define the signal photon's {\em filter heralding efficiency} as the probability that the signal photon passes its filter given that the idler photon has passed its filter, and vice versa for the idler photon's filter heralding efficiency. These efficiencies will be less than one whenever the JSIs passed by each filter individually do not match~\cite{PhysRevA.91.013819,PhysRevA.92.012329}. Defining the probability that both photons pass their filters as $\Gamma_{both}$ and the probability that each passes individually as $\Gamma_{s/i}$, we find the signal's filter heralding efficiency is $\eta_{f,s} = \frac{\Gamma_{both}}{\Gamma_{i}}$, and the idler's is $\eta_{f,i} = \frac{\Gamma_{both}}{\Gamma_{s}}$. Then we define the pair-symmetric heralding efficiency as $PSHE = \eta_{f,s}\eta_{f,i}$. Of course this is only the contribution of filtering to the PSHE; optical losses will lower the PSHE further. The spectral purity of the reduced state of either photon given that both photons have passed their respective filters (corresponding to the relevant case of coincident detection) is~\cite{1367-2630-12-11-113052} $P = \mathrm{Tr}\left(\rho_s^2\right)$, where \begin{align}\label{eq.rhos} \rho_s&=\mathrm{Tr}_i\left(\ket{\psi}\bra{\psi}\right)\\\nonumber & = \iiint d\omega_i d\omega_s d\omega_s^\prime f\left(\omega_s, \omega_i\right)f^*\left(\omega_s^\prime, \omega_i\right)\\\nonumber &\times \mathcal{F}_s(\omega_s)\mathcal{F}_s(\omega_s^\prime)\mathcal{F}_i(\omega_i)^2\ket{\omega_s}\bra{\omega_s^\prime} \end{align} is the reduced density matrix. 
The purity can be taken for either signal or idler as there is no other degree of freedom (e.g. spatial) that would allow different purities for each mode and we are always considering that the photons are detected in coincidence. \begin{figure}[h] \centerline{\includegraphics[width=\linewidth]{Fig2.pdf}} \caption{Theoretical filter heralding efficiency for signal (purple solid) and idler (purple dashed), combined PSHE (purple dotted), and spectral purity (blue) versus filter bandwidth, for flat-top filters with the same bandwidth for signal and idler, showing the intrinsic tradeoff between purity and efficiency. The corresponding thin grey curves are the analytic results for Gaussian filters. Some representative JSIs are shown below their corresponding filter bandwidths (the leftmost is very small on this scale).} \label{fig.filt} \end{figure} Taking the JSI of \cref{fig.intro} (with pump bandwidth \SI{0.42}{\nano\meter}, phasematching bandwidth \SI{0.46}{\nano\meter}, and $\theta=\SI{60.5}{\degree}$, matching the experiment below), we calculate the filter heralding efficiencies and spectral purity versus filter bandwidth, with the signal and idler filter bandwidths taken to be equal. As seen in \cref{fig.filt}, as soon as the filters are narrow enough to increase the purity, the filter heralding efficiency starts to drop. The filters are ideal flat-top filters with perfect transmission in the passband and perfect blocking otherwise. This is an idealization of real dense wavelength-division multiplexing filters, chosen to highlight the intrinsic physical effects of filtering rather than the technical effects. In fact, real filters lead to even stronger reductions in heralding efficiency due to nonuniformities, slow rolloff, and nonunit transmission. Gaussian filters (thin grey curves) show worse performance for both purity and heralding efficiency, with the improved purity at large filter bandwidths due to the removal of sinc lobes under the Gaussian approximation of the JSI. The kink in \cref{fig.filt} around \SI{3}{\nano\meter} filter bandwidth in the idler heralding efficiency is due to the asymmetry of the JSI~\cite{0953-4075-46-5-055501}. Even though both filters are varied equally, the JSI is tipped slightly towards the idler axis; above the kink the filtering is dominated by the idler filter, while below it both filters contribute. \begin{figure}[h] \centerline{\includegraphics[width=\linewidth]{Fig3.pdf}} \caption{Calculated symmetrized fidelity $\sqrt{F_sF_i}$ (thick red) versus phasematching angle, after optimizing the pump bandwidth (black solid) and the signal (black dot-dash) and idler (black dash) filter bandwidths. The maximum achievable fidelity is independent of the crystal length but the optimal bandwidth values change to accommodate the different phasematching bandwidths. A few crystal types~\cite{snlo} for degenerate type II PDC to \SI{1550}{\nano\meter} are overlaid (filled star), while for degenerate type 0 and type I, the phasematching angle is always \SI{45}{\degree} (except with engineered dispersion for example in microstructured fibers~\cite{Garay-Palmett:07}, or for noncollinear PDC~\cite{1367-2630-12-9-093027}). With nondegenerate photons and other wavelengths (see three examples at \SI{800}{\nano\meter}, empty star) many different angles can be reached~\cite{Laudenbach:16}.
Below the plot are unfiltered JSIs at \SI{45}{\degree} intervals.} \label{fig.theta} \end{figure} To quantify the combined effect of filtering on heralding efficiency and purity we introduce the symmetrized fidelity $F=\sqrt{F_sF_i}$, where $F_{s/i}$ is the fidelity for the signal/idler to a pure single-photon state $\ket{1}_{s/i} = \int d\omega g_{s/i}(\omega)\hat{a}_{s/i}^\dagger(\omega) \ket{0}$ after heralding by the idler/signal and including the vacuum component caused by filtering losses. We symmetrize the fidelity in this way rather than taking just the signal or idler fidelity to capture the effects of filtering on both photons together. The spectral function $g_{s/i}(\omega)$ is optimized for each photon to maximize the fidelity, as it is not directly given by any eigenvector of the reduced density matrix~\cref{eq.rhos}. The individual fidelities are \begin{align} F_{s} &=\eta_{f,s}\times\underset{g_s(\omega)}{\max}\bra{1}_s\rho_{s}\ket{1}_{s},\\\nonumber F_{i} &= \eta_{f,i}\times\underset{g_i(\omega)}{\max}\bra{1}_i\rho_{i}\ket{1}_{i}. \end{align} Either $F_s$ or $F_i$ can be made to approach one by filtering, but in general not both simultaneously. Using the Gaussian approximation developed in the Supplemental Material which allows analytic solutions, we find the symmetrized fidelity to be related to the purity and heralding efficiency by \begin{align} F = \sqrt{\eta_{f,s}~\eta_{f,i}}~\frac{2P}{1+P}. \end{align} By optimizing the pump and filter bandwidths for each phasematching angle we bound the maximum value of symmetrized fidelity available by filtering, as shown in \cref{fig.theta}. The maximum is independent of the phasematching bandwidth (here chosen as \SI{1.5}{\nano\meter}), though the optimal pump and filter bandwidths change. For our lithium niobate (LN) crystal with $\theta=\SI{60.5}{\degree}$ the maximum is $F=0.57$. By contrast, sources with $\theta\in\left[\SI{90}{\degree},\SI{180}{\degree}\right]$ can have $F\rightarrow 1$ even without filtering, as the optimal filter bandwidth goes to infinity. This shows clearly the futility of filtering for reduced-state spectral purity in PDC: the conditions in which filters are needed are only where filtering cannot recover perfect fidelity due to lowered heralding efficiency. Of course without filters in these conditions the fidelity to a pure single photon would be even lower. We stress that this fidelity bound is generic for all PDC and FWM sources (with JSIs described by the pump-times-phasematching model), and is thus a very powerful tool in source design. Finally, to show the sharpness of these effects we vary the filter bandwidths independently and set the pump and phasematching bandwidths to \SI{0.38}{\nano\meter} and \SI{1.5}{\nano\meter} respectively, which for $\theta=\SI{60.5}{\degree}$ allows an optimal symmetrized fidelity. As shown in \cref{fig.filters}, the best filter heralding efficiencies for the signal photon have the largest signal filter and the smallest idler filter; and vice versa for the idler photon. However the largest purity requires small filters on both arms, resulting in a symmetrized fidelity that varies slowly over filter bandwidth and never exceeds 0.57, falling to zero as either filter gets too narrow. 
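The quantities plotted above can be reproduced numerically from the joint spectral amplitude of \cref{eq.jsa}; the following is a minimal sketch (with detunings in units of the phasematching bandwidth) that computes the filter heralding efficiencies, the purity via a singular-value (Schmidt) decomposition of the filtered amplitude, and the symmetrized fidelity through the Gaussian-model relation $F = \sqrt{\eta_{f,s}\eta_{f,i}}\,2P/(1+P)$. Grid ranges and bandwidths are illustrative.
\begin{verbatim}
# Discretize the JSA model above, apply Gaussian filters, and compute
# filter heralding efficiencies, purity and symmetrized fidelity.
# Frequencies are detunings from the central frequencies (arb. units).
import numpy as np

def jsa(ws, wi, sp, spm, theta):
    pump = np.exp(-(ws + wi) ** 2 / (4 * sp ** 2))
    arg = ws * np.sin(theta) + wi * np.cos(theta)
    pm = np.sinc(arg / (2 * np.pi * spm))  # np.sinc(x) = sin(pi x)/(pi x)
    return pump * pm

w = np.linspace(-8, 8, 501)
WS, WI = np.meshgrid(w, w, indexing="ij")
F = jsa(WS, WI, sp=1.0, spm=1.0, theta=np.radians(60.5))
F /= np.sqrt(np.sum(np.abs(F) ** 2))       # normalize the unfiltered state

def analyze(F, sig_s, sig_i):
    Fs = np.exp(-WS ** 2 / (4 * sig_s ** 2))  # Gaussian amplitude filters
    Fi = np.exp(-WI ** 2 / (4 * sig_i ** 2))
    G_both = np.sum(np.abs(F * Fs * Fi) ** 2)
    G_s = np.sum(np.abs(F * Fs) ** 2)         # signal filter alone
    G_i = np.sum(np.abs(F * Fi) ** 2)         # idler filter alone
    eta_s, eta_i = G_both / G_i, G_both / G_s
    sv = np.linalg.svd(F * Fs * Fi, compute_uv=False)
    sv /= np.sqrt(np.sum(sv ** 2))
    P = np.sum(sv ** 4)                       # reduced-state purity
    fid = np.sqrt(eta_s * eta_i) * 2 * P / (1 + P)
    return eta_s, eta_i, P, fid

print(analyze(F, 0.5, 0.5))  # narrow filters: higher P, lower efficiency
\end{verbatim}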
\begin{figure}[htp] \centerline{\includegraphics[width=\linewidth]{Fig4.pdf}} \caption{Filter heralding efficiencies for (a) signal and (b) idler as a function of signal and idler filter bandwidths, along with (c) reduced-state spectral purity and (d) symmetrized fidelity to a spectrally pure single photon. Here we use Gaussian filters, with bandwidths given by the full width at half-maximum (FWHM). While the heralding efficiencies and purity range individually over $[0,1]$, the symmetrized fidelity is reasonably constant around its mean value of 0.45, and never surpasses 0.57.} \label{fig.filters} \end{figure} \section{Experiment}\label{sec.exp} To confirm the tradeoff between purity and PSHE, we measured the heralding efficiency of signal and idler photons and the joint spectral intensities of a photon-pair source under various filtering conditions. The source (\cref{fig.intro}) was a \SI{21}{\milli\meter} type-II periodically-poled lithium niobate waveguide, fiber pigtailed on both ends~\cite{al.:2016aa} and pumped by a Ti:Sapphire pulsed laser of wavelength \SI{778}{\nano\meter}. The laser had a pulse width of \SI{3.0}{\pico\second} FWHM, nearly transform-limited to \SI{0.42}{\nano\meter} FWHM spectral bandwidth, and \SI{5}{\micro\watt} coupled power resulting in a production of $\sim\num{0.02}$~pairs/pulse before filtering. Calculations for lithium niobate predict a phasematching angle of \SI{60.5}{\degree} and bandwidth \SI{0.46}{\nano\meter}. The output of the source was coupled to a WaveShaper 4000 (Finisar Corp.) which was used to separate the nondegenerate photons (central wavelengths \SI{1562}{\nano\meter} and \SI{1549}{\nano\meter}) and define their spectral filters. We characterize the heralding efficiency for each filter setting using the Klyshko method~\cite{0049-1748-10-9-A09} such that $\eta_s = \frac{C}{S_i},~~\eta_i = \frac{C}{S_s}$, where $C$ is the number of coincidences, $S_{s/i}$ are the numbers of signal/idler singles, and $\eta_{s/i}$ are the total heralding efficiencies. Then we extract the filter heralding efficiency by dividing out the heralding efficiency $\eta_{max,~s/i}$ when the filters are set to maximum bandwidth, which accounts for nonunit coupling and detector efficiencies. Thus the filter heralding efficiencies are \begin{align} \eta_{f,s}= \frac{C}{S_i~\eta_{max,~s}},~~\eta_{f,i} = \frac{C}{S_s~\eta_{max,~i}}. \end{align} We confirmed that the peak filter transmission is independent of the WaveShaper's filter bandwidth, ensuring that the reduction in heralding efficiency is due to the fundamental tradeoff rather than technical imperfections (see plot in Supplemental Material). We characterized the purity by measuring a joint spectral intensity with a time-of-flight spectrometer~\cite{Avenhaus:09}, assuming a constant phase of the joint spectrum~\cite{Gerrits:11}, and calculating $P = \mathrm{Tr}\left(\rho_a^2\right)$, where $\rho_a$ is the reduced spectral density matrix of the signal or idler photon~\cite{1367-2630-12-11-113052}. Using the JSI as an indicator of purity can be limited by artificial smoothing from limited spectrometer resolution and spectral phases that are not identifiable with intensity measurements. Thus we have employed as high a resolution as possible, and verified numerically that the expected phases due to pump chirp are negligible.
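The extraction of the filter heralding efficiencies from raw counts is simple enough to state as code; the counts below are invented for illustration only.
\begin{verbatim}
# Klyshko heralding efficiencies from counts, with optical losses
# divided out using a reference run at maximum filter bandwidth
# (all counts below are illustrative, not measured values).
def klyshko(C, S_s, S_i):
    return C / S_i, C / S_s                   # (eta_s, eta_i)

eta_max_s, eta_max_i = klyshko(C=9000, S_s=60000, S_i=62000)
eta_s, eta_i = klyshko(C=1200, S_s=30000, S_i=31000)

eta_f_s, eta_f_i = eta_s / eta_max_s, eta_i / eta_max_i
print(eta_f_s, eta_f_i, eta_f_s * eta_f_i)    # last value: PSHE from filtering
\end{verbatim}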
We show in \cref{fig.filter} the joint spectral intensities after filtering and the corresponding purities, calculated with an additional time filter of twice the filter bandwidth to reduce technical noise from our laser's instability and limited timing resolution of our spectrometer. \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{Fig5.pdf}} \caption{Measured joint spectral intensities of photon pairs, from nearly unfiltered (\SI{10}{\nano\meter} bandwidth) to strongly filtered and spectrally pure (\SI{0.2}{\nano\meter} bandwidth). The axis labels $\Lambda$ give the distance from the central wavelength.} \label{fig.filter} \end{figure} \begin{figure}[h] \centerline{\includegraphics[width=\linewidth]{Fig6.pdf}} \caption{Experimental filter heralding efficiency, PSHE, and spectral purity (a) and symmetrized fidelity (b) versus filter bandwidth (points), with theoretical prediction (curves). The experimental data are shown with error bars from Poissonian statistics smaller than symbol size. Adding artificial jitter to the theoretical JSI makes the purity agree with the experiment for large filters, and thus this jitter has also been applied to the theoretical calculation for symmetrized fidelity.} \label{fig.exp} \end{figure} The purity, filter heralding efficiencies, and symmetrized fidelity are plotted in \cref{fig.exp} and correspond reasonably well with the predictions after accounting for the asymmetry of our measured JSI. The limited resolution of our fiber spectrometer due to detector timing jitter tends to increase measured purities for large filters, as it rounds off sharp features of the JSI. Adding our experimental detector timing jitter of \SI{120}{\pico\second} to the theoretical JSI makes the predicted purity match the experiment for large filters. The remaining mismatch in the symmetrized fidelity could be due to small ripples in the WaveShaper transmission. The overall trend is clear: the increase in purity comes at a direct cost of heralding efficiency, and the fidelity of the signal and idler states to pure single photons cannot reach unity by filtering. \section{Conclusion} We have shown that spectral filtering of down-converted photons to increase the reduced-state spectral purity can lead to intrinsically low pair-symmetric heralding efficiencies, and cannot increase the symmetrized fidelity to a pure single photon beyond strong, general bounds. Our results suggest that, if high heralding efficiency of photon pairs is important, source engineering is required to generate spectrally decorrelated states, and for noise reduction only broadband filters should be used. The problem of reduced efficiency could also be avoided with carefully-designed cavities~\cite{Jeronimo-Moreno2010}, or more general time-frequency filtering~\cite{PhysRevA.82.043826} to directly select single spectral-temporal modes~\cite{Eckstein:11}. For example, without the reduction of heralding efficiency from narrowband filtering, the rate of 10-photon entanglement in two recent experiments~\cite{PhysRevLett.117.210502,Chen:17} could have been increased by a factor of 10 (counting only reduction of heralding efficiencies) or a factor 100 (counting all filtering losses). For heralded photon sources, care must be taken when filtering the heralded photon so as not to decrease its heralding efficiency unnecessarily. 
Finally, the analytic expressions we developed will be useful in designing the next generation of photon pair sources, as they allow optimization of the spectral purity and heralding efficiency with and without filtering. It would be interesting in future work to design the optimal filter shape that minimizes the purity-efficiency tradeoff, or maximizes the symmetrized fidelity. \begin{acknowledgments} We thank Vahid Ansari for providing WaveShaper code, and Viktor Quiring and Raimund Ricken for sample preparation. We acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Marie Curie Initial Training Network PICQUE (Photonic Integrated Compound Quantum Encoding, grant agreement no. 608062, funding Program: FP7-PEOPLE-2013-ITN, http://www.picque.eu), the DFG (Deutsche Forschungsgemeinschaft, grant no. SI 1115/4-1 and Gottfried Wilhelm Leibniz-Preis). \end{acknowledgments} \section*{Supplemental Material} \subsection*{Analytic calculations of filter heralding efficiency, purity, and fidelity} Here we find the filter heralding efficiency, purity, and fidelity of an arbitrary (but Gaussian) photon-pair joint spectrum, given Gaussian signal and idler filters of unit transmission and central frequencies that match the photons' central frequencies of $\omega_{s0/i0}$. The amplitude bandwidths for the pump ($\sigma_p$) and phasematching ($\sigma_{pm}$) are related to the FWHM intensity bandwidths that would be measured in the lab by \begin{align*} &\sigma_p = \sigma_{p_{FWHM}}/(2\sqrt{2\ln2})\approx0.425\sigma_{p_{FWHM}},\\ &\sigma_{pm} = \sqrt{0.193}\sigma_{pm_{FWHM}}/(2\sqrt{2\ln2})\approx0.187\sigma_{pm_{FWHM}}, \end{align*} where all bandwidths have units rad Hz. These can be converted to wavelength using the central pump wavelength for $\sigma_p$ and the central photon wavelength for $\sigma_{pm}$. The filter heralding efficiency can be derived from the Klyshko heralding efficiency~\cite{0049-1748-10-9-A09} $\eta_{s/i} = \frac{C}{S_{i/s}}$, where $C$ are the number of coincidences and $S_{i/s}$ the number of singles in the idler/signal arm per integration time. Lumping all optical losses into $\eta_{opt}$ and keeping the filtering losses separate, the coincidences and singles are $C=n\eta_{opt}^2\Gamma_{both}$ and $S_{s/i}=n\eta_{opt}\Gamma_{s/i}$, where $n$ is the number of photon pairs produced, $\Gamma_{both}$ is the probability that both photons pass their respective filters, and $\Gamma_{s/i}$ is the probability that the signal/idler photon passes its filter. Thus the signal's filter heralding efficiency is \begin{align} \eta_{f,s} = \frac{\eta_s}{\eta_{opt}}= \frac{\Gamma_{both}}{\Gamma_{i}}, \end{align} and the idler's is \begin{align} \eta_{f,i} = \frac{\eta_i}{\eta_{opt}}=\frac{\Gamma_{both}}{\Gamma_{s}}. \end{align} To calculate the heralding efficiency, we find the probability that both photons are passed by the filter, then the probability that each photon is passed individually. 
The unfiltered state is given by $\ket{\psi} = \iint d\omega_s d\omega_i f\left(\omega_s, \omega_i\right)\ket{\omega_s}\ket{\omega_i}$, so the coincidence probability is \begin{align} &\Gamma_{unfilt} = \left|\braket{\psi|\psi}\right|^2 \\ &= \iint d\omega_s d\omega_i \left|f\left(\omega_s, \omega_i\right) \right|^2\equiv 1.\nonumber \end{align} Following Ref.~\cite{0953-4075-46-5-055501}, we define $\Omega_s = \omega_s - \omega_{s0}$ and $\Omega_i = \omega_i - \omega_{i0}$ with $\omega_{s0}$ and $\omega_{i0}$ the respective central frequencies, approximate $\mathrm{sinc(x)}\approx\exp\left(-\alpha x^2\right)$ with $\alpha = 0.193$ in the joint spectral amplitude, add Gaussian filters with bandwidth $\sigma_{s/i} = \sigma_{s/i_{FWHM}}/(2\sqrt{2\ln2})$, and then neglect phase contributions, giving \begin{align} f\left(\omega_s, \omega_i\right)&\mathcal{F}_s(\omega_s)\mathcal{F}_i(\omega_i)\approx N \exp\left(-\frac{\left(\Omega_s+\Omega_i\right)^2}{4\sigma_p^2}\right)\\ &\times\exp\left(\frac{-\alpha\left(\Omega_s\sin{\theta} + \Omega_i\cos{\theta}\right)^2}{4\sigma_{pm}^2}\right)\\ \nonumber &\times\exp\left(-\frac{\Omega_s^2}{4\sigma_s^2} - \frac{\Omega_i^2}{4\sigma_i^2}\right)\nonumber. \end{align} Collecting the terms in the exponentials gives \begin{align} f\left(\omega_s, \omega_i\right)&\approx N\exp\left(-\frac{a}{4}\Omega_s^2 - \frac{b}{4}\Omega_i^2 - \frac{c}{2}\Omega_s\Omega_i\right), \end{align} with~\cite{0953-4075-46-5-055501} \begin{align*} a&= \frac{\alpha^2\sin^2{\theta}}{\sigma_{pm}^2} + \frac{1}{\sigma_p^2} + \frac{1}{\sigma_s^2}\\ b&= \frac{\alpha^2\cos^2{\theta}}{\sigma_{pm}^2} + \frac{1}{\sigma_p^2} + \frac{1}{\sigma_i^2}\\ c&= \frac{\alpha^2\cos{\theta}\sin{\theta}}{\sigma_{pm}^2} + \frac{1}{\sigma_p^2}. \end{align*} Then without filters, with $\tfrac{1}{\sigma_s^2} = \tfrac{1}{\sigma_i^2}=0$ (and the corresponding $a_0$ and $b_0$), the coincidence probability is \begin{align} \Gamma_{unfilt} = N^2\iint d\Omega_s d\Omega_i\exp\left(-\frac{a_0}{2}\Omega_s^2 - \frac{b_0}{2}\Omega_i^2 - c\Omega_s\Omega_i\right). \end{align} This integral can be evaluated using the multi-dimensional generalization of a Gaussian function with \begin{align} A = \begin{pmatrix} a_0 & c\\ c & b_0 \end{pmatrix} \end{align} as \begin{align} \Gamma_{unfilt} = N^2 \frac{2\pi}{\sqrt{a_0b_0-c^2}}, \end{align} giving the normalization \begin{align} N^2 = \frac{\sqrt{a_0b_0-c^2}}{2\pi}. \end{align} Now, the coincidence count probability with signal and idler filters is \begin{align} \Gamma_{both} &= N^2\iint d\Omega_s d\Omega_i\exp\left(-\frac{a}{2}\Omega_s^2 - \frac{b}{2}\Omega_i^2 - c\Omega_s\Omega_i\right)\\ \nonumber &= \sqrt{\frac{a_0b_0-c^2}{ab-c^2}}. \end{align} To find the marginal probabilities, we just set one of the filters to infinite bandwidth. Thus \begin{align} \Gamma_{i} = \sqrt{\frac{a_0b_0-c^2}{a_0b-c^2}}\\ \Gamma_{s} = \sqrt{\frac{a_0b_0-c^2}{ab_0-c^2}}. \end{align} Finally, the filter heralding efficiencies are \begin{align} \eta_{f,~s} = \frac{\Gamma_{both}}{\Gamma_{i}} = \sqrt{\frac{a_0b-c^2}{ab-c^2}}, \end{align} and \begin{align} \eta_{f,~i} = \frac{\Gamma_{both}}{\Gamma_{s}} = \sqrt{\frac{ab_0-c^2}{ab-c^2}}. \end{align} To find the purity we need the reduced density matrix for signal or idler. Since our photons are entangled only in this spectral degree of freedom and we only consider the case when both photons are detected, they will have the same purity, and either reduced density matrix will suffice. 
We consider here only the spectral purity, neglecting vacuum and higher-order photon components. We find \begin{align} &\rho_s = \mathrm{Tr}_i\left(\ket{\psi}\bra{\psi}\right)\\ &= \iiint d\Omega_i d\Omega_s d\Omega_s^\prime f\left(\Omega_s, \Omega_i\right)f^*\left(\Omega_s^\prime, \Omega_i\right)\ket{\Omega_s}\bra{\Omega_s^\prime}. \nonumber \end{align} Then the purity is \begin{align} P &= \mathrm{Tr}\left(\rho_s^2\right)\\\nonumber &= \iiiint d\Omega_s d\Omega_s^\prime d\Omega_i d\Omega_i^{\prime}\\\nonumber &\phantom{{}=1} \times f \left(\Omega_s, \Omega_i\right)f^* \left(\Omega_s^{\prime}, \Omega_i\right) f\left(\Omega_s^{\prime}, \Omega_i^\prime\right) f^*\left(\Omega_s, \Omega_i^{\prime}\right)\\ \nonumber &= N^4\frac{(2\pi)^2}{\sqrt{a^2b^2-abc^2}}\\ \nonumber &= \sqrt{\frac{\left(ab-c^2\right)^2}{a^2b^2-abc^2}}\\\nonumber &= \sqrt{\frac{ab-c^2}{ab}}.\nonumber \end{align} How do the purity and heralding efficiency depend on each other? To achieve high heralding efficiency requires $ab\rightarrow a_0b_0$, i.e. no filtering. To achieve high purity requires either $c=0$, i.e. source engineering to bring $\theta\in\left[\SI{90}{\degree},\SI{180}{\degree}\right]$ and matching the interaction length and pump bandwidth, or $ab\gg c^2$, i.e. strong filtering. But for $ab\gg c^2$, at least one of $a\gg |c|$ or $b\gg |c|$ must hold, implying at least one heralding efficiency tending to zero. To find the symmetrized fidelity we first consider the fidelity of the signal photon to an arbitrary Gaussian pure single photon state, after filtering and heralding by the (filtered) idler photon. The pure state is \begin{align} \ket{1_p} = \int d\Omega g(\Omega)\ket{\Omega}, \end{align} with \begin{align} g(\Omega) = \left(\frac{d}{2\pi}\right)^{\frac{1}{4}} \exp\left(-\frac{d}{4}\Omega^2\right). \end{align} The fidelity (in the sense of probabilities~\cite{Jozsa:1994aa}) is \begin{align} F_s=\bra{1_p}\rho\ket{1_p} \end{align} where $\rho = \left(1-\eta_{f,s}\right)\ket{0}\bra{0} + \eta_{f,s}\rho_s$, giving \begin{align} F_s &= \eta_{f,s}\iiint d\Omega_s d\Omega_s^\prime d\Omega_i \\\nonumber &\times f \left(\Omega_s, \Omega_i\right)f^* \left(\Omega_s^{\prime}, \Omega_i\right)g(\Omega_s)g(\Omega_s^\prime)\\\nonumber &= \eta_{f,s}\sqrt{\frac{4(ab-c^2)d}{b(a+d)^2-c^2(a+d)}}. \end{align} Differentiating with respect to $d$ to find the state which maximizes the fidelity gives \begin{align} d=\sqrt{\frac{a(ab-c^2)}{b}}, \end{align} for the maximum fidelity \begin{align} F_s &= \frac{2\eta_{f,s}}{1+\sqrt{\frac{ab}{ab-c^2}}}\\\nonumber &= \frac{2\eta_{f,s}P}{1+P}\\\nonumber &= \frac{2\sqrt{a_0b-c^2}}{\sqrt{ab-c^2}+\sqrt{ab}}. \end{align} A similar procedure for the idler yields \begin{align} F_i= \frac{2\sqrt{ab_0-c^2}}{\sqrt{ab-c^2}+\sqrt{ab}}. \end{align} Combining these for the symmetrized fidelity gives \begin{align} F = \sqrt{F_sF_i} = \sqrt{\eta_{f,s}\eta_{f,i}}\frac{2P}{1+P}. \end{align} Finally we consider the purity-efficiency factor~\cite{PhysRevA.91.013819} of both photons together, which allows analytic optimization over the filter bandwidths. The factor is \begin{align} PEF&=\sqrt{P\eta_{f,s}\times P\eta_{f,i}} = \left(\frac{\left(a_0b-c^2\right)\left(ab_0-c^2\right)}{a^2b^2}\right)^{\frac{1}{4}}\\\nonumber &=F\frac{1+P}{2}. \end{align} For $c\neq0$, the PEF can have in the best case any two of $\left\{P,\eta_{f,s},\eta_{f,i}\right\}$ approach 1, while the other approaches 0.
For the phasematching angles $\theta\in\left[\SI{90}{\degree},\SI{180}{\degree}\right]$ one can have $c\rightarrow0$, allowing a PEF of 1. When $c^2>\tfrac{a_0b_0}{2}$, which corresponds to $\theta\in\left(\SI{15}{\degree},\SI{75}{\degree}\right)$, the maximum value of the PEF after optimizing the filter bandwidths can be found as \begin{align} PEF_{max}=\sqrt{\frac{a_0b_0}{4c^2}}, \end{align} with optimal filter bandwidths defined by $a=\tfrac{2c^2}{b_0}$ and $b=\tfrac{2c^2}{a_0}$. In the special cases of $\theta=\SI{45}{\degree}$ or for narrowband pumps or phasematching, the PEF is upper-bounded by $\tfrac{1}{2}$ since $c^2\rightarrow a_0b_0$. The upper bound of the PEF for other phasematching angles depends on the angle and the pump and phasematching bandwidths, but is in general $<\tfrac{1}{\sqrt{2}}$ for $\theta\in\left(\SI{15}{\degree},\SI{75}{\degree}\right)$. \subsection*{WaveShaper Transmission} In the experiment, we confirmed by direct measurements of the WaveShaper that the applied spectral filters are nearly square, as shown in \cref{fig.waveshaper}. The transmission loss is about \SI{4.8}{\decibel} for the signal photon and \SI{4.4}{\decibel} for the idler. \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{WaveshaperTrans.pdf}} \caption{Measured transmission of the tunable filters for various square filter settings. The full set of signal and idler filter bandwidths is shown in corresponding colors in (a), while (b) shows a zoom of the signal filters, showing no change in peak transmission down to bandwidths $<\SI{0.2}{\nano\meter}$, which is limited by the resolution of our spectrometer.} \label{fig.waveshaper} \end{figure}
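For convenience, the closed-form expressions above can be collected into a short numerical sketch; the coefficients $a$, $b$, $c$ and the figures of merit are transcribed exactly as displayed, with the FWHM-to-amplitude conversions defined at the start of this section, and the example inputs are arbitrary.
\begin{verbatim}
# Closed-form Gaussian model of the Supplemental Material: from FWHM
# bandwidths (all in rad Hz) and the phasematching angle, compute the
# filter heralding efficiencies, purity, symmetrized fidelity and PEF.
import numpy as np

ALPHA = 0.193
K = 2 * np.sqrt(2 * np.log(2))   # FWHM -> amplitude bandwidth

def figures_of_merit(sp_f, spm_f, ss_f, si_f, theta):
    sp = sp_f / K
    spm = np.sqrt(ALPHA) * spm_f / K
    ss, si = ss_f / K, si_f / K
    # Coefficients a, b, c exactly as displayed above.
    a = ALPHA**2 * np.sin(theta)**2 / spm**2 + 1/sp**2 + 1/ss**2
    b = ALPHA**2 * np.cos(theta)**2 / spm**2 + 1/sp**2 + 1/si**2
    c = ALPHA**2 * np.cos(theta) * np.sin(theta) / spm**2 + 1/sp**2
    a0, b0 = a - 1/ss**2, b - 1/si**2         # unfiltered coefficients
    eta_s = np.sqrt((a0*b - c**2) / (a*b - c**2))
    eta_i = np.sqrt((a*b0 - c**2) / (a*b - c**2))
    P = np.sqrt((a*b - c**2) / (a*b))
    fid = np.sqrt(eta_s * eta_i) * 2 * P / (1 + P)
    pef = ((a0*b - c**2) * (a*b0 - c**2) / (a*b)**2) ** 0.25
    return eta_s, eta_i, P, fid, pef

# Example (arbitrary units): equal pump and phasematching bandwidths,
# theta = 60.5 degrees, filters at half the pump bandwidth.
print(figures_of_merit(1.0, 1.0, 0.5, 0.5, np.radians(60.5)))
\end{verbatim}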
\section{Introduction} In this paper, we study \emph{asymmetric all-or-nothing transforms}, which we define informally as follows. \begin{Definition} \label{def1} Suppose $s$ is a positive integer and $\phi : \Gamma^s \rightarrow \Gamma^s$, where $\Gamma$ is a finite set of size $v$ (called an \emph{alphabet}). Thus $\phi$ is a function that maps an input $s$-tuple $\mathbf{x} = (x_1, \dots , x_s)$ to an output $s$-tuple $\mathbf{y} = (y_1, \dots , y_s)$. Suppose $t_i$ and $t_o$ are integers such that $1 \leq t_i \le t_o \le s$. The function $\phi$ is a \emph{$(t_i,t_o,s,v)$-all-or-nothing transform} (or $(t_i,t_o,s,v)$-AONT) provided that the following properties are satisfied: \begin{enumerate} \item $\phi$ is a bijection. \item If any $s - t_o$ of the $s$ outputs $y_1, \dots , y_s$ are fixed, then the values of any $t_i$ inputs $x_i$ (for $1 \leq i \leq s$) are completely undetermined. \end{enumerate} \end{Definition} \begin{Remark} {\rm It is not difficult to see that $t_i \leq t_o$ if a $(t_i,t_o,s,v)$-AONT exists, as follows. If only $t_o$ outputs are unknown, then the number of possible values taken on by any subset of the inputs is at most $v^{t_o}$. Since a subset of $t_i$ inputs must be completely undetermined, we must have $v^{t_i} \leq v^{t_o}$, or $t_i \leq t_o$.} \end{Remark} All-or-nothing transforms (AONTs) were invented in 1997 by Rivest \cite{R}. Rivest's work concerned AONTs that are computationally secure. Some early papers on various generalizations of AONTs include \cite{Boyko,CDHKS,Desai}. Stinson \cite{St} introduced and studied all-or-nothing transforms in the setting of unconditional security. Further work focussing on the existence of unconditionally secure AONTs can be found in \cite{DES,EGS, ES,ES2,WCJ,ZZWG}. AONTs have had numerous applications in cryptography and information security; see \cite{Phd} for an overview. Rivest's original definition in \cite{R} corresponded to the special case $t_i= t_o = 1$. Most research since then has involved AONTs where $t_i = t_o = t$ for some positive integer $t$. (Such an AONT is often denoted as a $(t,s,v)$-AONT in the literature.) In such an AONT, knowing all but $t$ outputs leaves any $t$ inputs undetermined. Here we mainly consider AONTs where $t_i < t_o$. Such an AONT can be thought of as \emph{asymmetric} in the sense that the number of missing outputs is greater than the number of inputs about which we are seeking information. In general, asymmetric AONTs are easier to construct than AONTs in which $t_i = t_o$ because the requirements are weaker. The first example of asymmetric AONTs in the literature is apparently found in Stinson \cite[\S 2.1]{St}. We present this construction in Example \ref{Ex:even_bastion}. \begin{Example} \label{Ex:even_bastion} {\rm For $s$ even, a $(1,2,s,2)$-AONT exists as follows. Given $s$ inputs $x_1, \dots , x_s \in \ensuremath{\mathbb Z}_2$, define \[ \begin{array}{l} r = \displaystyle \sum_{i = 1}^s x_i\Botstrut{4}\\ y_i = r + x_i, \text{ for } 1 \leq i \leq s. \end{array} \] This yields the $s$ outputs $y_1, \dots , y_s$. The inverse transformation is computed as \[ \begin{array}{l} r' = \displaystyle \sum_{i = 1}^s y_i\Botstrut{4}\\ x_i = r' + y_i, \text{ for } 1 \leq i \leq s. \end{array} \] Suppose we are given $s-2$ of the $s$ outputs, so two outputs are missing. It is clear that each input depends on $s-1$ outputs: $x_i$ is a function of all the $y_j$'s, except for $y_i$. Thus, if two outputs are missing, then no values can be ruled out for $x_i$.
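As a quick computational check of this construction, the transform and its inverse can be written in a few lines (a sketch for illustration only):
\begin{verbatim}
# The (1,2,s,2)-AONT over Z_2 for even s: y_i = r + x_i with r the
# sum of the inputs; the inverse uses r' = sum of the outputs (= r).
def aont_forward(x):
    r = sum(x) % 2
    return [(r + xi) % 2 for xi in x]

def aont_inverse(y):
    r = sum(y) % 2            # equals the original r when s is even
    return [(r + yi) % 2 for yi in y]

x = [1, 0, 1, 1, 0, 0]        # s = 6
assert aont_inverse(aont_forward(x)) == x   # bijection check
# x_i = r' + y_i depends on every output except y_i, so with two
# outputs missing no single input can be determined.
\end{verbatim}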
$\blacksquare$ } \end{Example} We note that the construction given in Example \ref{Ex:even_bastion} only works for even $s$ (when $s$ is odd, the mapping is not invertible). A construction for odd values of $s$ will be given later (see Lemma \ref{OddBastion}). Karame et al.~\cite{BastionAONT} introduced \emph{Bastion}, which is a scheme for securely dispersing a document over multiple cloud storage services. Bastion involves encrypting a plaintext using counter mode and then applying a $(1,2,s,2)$-AONT to the resulting ciphertext blocks. The paper \cite{BastionAONT} considered a threat model where the adversary may have access to the key or use a backdoor to decrypt the ciphertext. To protect against these threats, assuming the adversary cannot access at least two of the parts, they suggest dividing the ciphertext into multiple parts and storing each part on a different server after applying the AONT. \subsection{Our Contributions} Our goal in this paper is to develop the basic mathematical theory of asymmetric AONTs. In Section \ref{security.sec}, we discuss a combinatorial approach to asymmetric AONTs, and we examine how different combinatorial definitions impact the security of the transforms. We also present some connections with other combinatorial structures such as orthogonal arrays and split orthogonal arrays. Section \ref{linear.sec} focusses on existence and bounds for linear asymmetric AONTs. We complete the solution of the existence problem for $t_i = 1$, as well as when $t_i = 2$ and $t_o = s-1$. Then we turn to cases where $t_i \geq 2$. We prove a general necessary condition for existence, and then we consider the case $t_i = 2$ in detail. New existence results are obtained from computer searches. Finally, Section \ref{summary.sec} is a brief summary. We note that many of the results in this paper were first presented in the PhD thesis of the first author \cite{Phd}. \section{Combinatorial Definitions and Security Properties} \label{security.sec} Definition \ref{def1} is phrased in terms of security properties, i.e., it specifies information about a subset of inputs that can be deduced if only a certain subset of the outputs is known. (As mentioned in the introduction, we are studying AONTs in the setting of unconditional security.) It is useful to employ a combinatorial description of AONTs in order to analyze them from a mathematical point of view. Combinatorial definitions of AONTs have appeared in numerous papers, beginning in \cite{St}. However, the connections between security definitions and combinatorial definitions turn out to be a bit subtle, as was recently shown by Esfahani and Stinson \cite{ES2}. First, as noted in \cite{ES2}, there are two possible ways to interpret the security requirement. In the original definition of AONT due to Rivest \cite{R}, as well as in Definition \ref{def1}, we only require that the values of any $t_i$ inputs are \emph{completely undetermined}, given the values of $s - t_o$ outputs. In other words, assuming that every possible input $s$-tuple occurs with positive probability, the probability that the $t_i$ specified inputs take on any specified possible values (given all but $t_o$ outputs) is positive. This notion is termed \emph{weak security} in \cite{ES2}. An alternative notion that is discussed in detail in \cite{ES2} is that of \emph{perfect security}. Here, we require that the \emph{a posteriori} distribution on any $t_i$ inputs, given the values of $s - t_o$ outputs, is identical to the \emph{a priori} distribution on the same inputs.
Thus, \emph{no information} about any $t_i$ inputs is revealed when $s - t_o$ outputs are known. The standard combinatorial definition for $(t,t,s,v)$-AONT (see, e.g., \cite{DES,EGS}) involves certain unbiased arrays. We review this definition now and discuss when weak or perfect security can be attained (the security may depend on the probability distribution defined on the input $s$-tuples). Then we generalize this approach to handle the slightly more complicated case of asymmetric AONTs. An \emph{$(N,k,v)$-array} is an $N$ by $k$ array, say $A$, whose entries are elements chosen from an alphabet $\Gamma$ of order $v$. Suppose the $k$ columns of $A$ are labelled by the elements in the set $C$. Let $D \subseteq C$, and define $A_D$ to be the array obtained from $A$ by deleting all the columns $c \notin D$. We say that $A$ is \emph{unbiased} with respect to $D$ if the rows of $A_D$ contain every $|D|$-tuple of elements of $\Gamma$ exactly $N / v^{|D|}$ times. Of course, this requires that $N$ is divisible by $v^{|D|}$. An AONT, say $\phi$, is a bijection from $\Gamma$ to $\Gamma$, where $\Gamma$ is a $v$-set. The \emph{array representation} of $\phi$ is a $(v^s,2s,v)$-array, say $A$, that is constructed as follows: For every input $s$-tuple $(x_1, \dots , x_s) \in \Gamma^s$, there is a row of $A$ containing the entries $x_1, \dots , x_s, y_1, \dots , y_s$, where $\phi(x_1, \dots , x_s) = (y_1, \dots , y_s)$. Our combinatorial definition of an AONT, Definition \ref{defunbiased}, involves arrays that are unbiased with respect to certain subsets of columns. This definition is an obvious generalization of previous definitions for $(t,t,s,v)$-AONTs from \cite{DES,EGS}. \begin{Definition} \label{defunbiased} A \emph{$(t_i,t_o,s,v)$-all-or-nothing transform} is a $(v^s,2s,v)$-array, say $A$, with columns labelled $1, \dots , 2s$, that is unbiased with respect to the following subsets of columns: \begin{enumerate} \item $\{1, \dots , s\}$, \item $\{s+1, \dots , 2s\}$, and \item $I \cup J$, for all $I \subseteq \{1,\dots , s\}$ with $|I| = t_i$ and all $J \subseteq \{s+1,\dots , 2s\}$ with $|J| = s-t_o$. \end{enumerate} \end{Definition} We interpret the first $s$ columns of $A$ as indexing the $s$ inputs and the last $s$ columns as indexing the $s$ outputs. Then, as mentioned above, properties 1 and 2 ensure that the array $A$ defines a bijection $\phi$. Property 3 guarantees that knowledge of any $s-t_o$ outputs does not rule out any possible values for any $t_i$ inputs. The following results concerning $(t,t,s,v)$-AONTs are from \cite{ES2}. \begin{Theorem} \label{sym.thm} Suppose $\phi : \Gamma^s \rightarrow \Gamma^s$ is a bijection, where $\Gamma$ is an alphabet of size $v$, and suppose $1 \leq t\leq s$. \vspace{-.2in}\\ \begin{enumerate} \item Suppose any input $s$-tuple occurs with positive probability. Then the mapping $\phi$ is a weakly secure AONT if and only if its array representation is a $(t,t,s,v)$-AONT. \item The mapping $\phi$ is a perfectly secure AONT if and only if its array representation is a $(t,t,s,v)$-AONT and every input $s$-tuple occurs with the same probability. \end{enumerate} \end{Theorem} When we turn to asymmetric AONTs, there is an additional subtlety, namely that we can obtain weak security for combinatorial structures that are weaker than the arrays defined in Definition \ref{defunbiased}. We can characterize asymmetric AONTs achieving weak security in terms of arrays that satisfy covering properties with respect to certain sets of columns. 
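The unbiasedness conditions of Definition \ref{defunbiased} are straightforward to verify by brute force for small parameters. The following sketch (illustrative only) builds the array representation of a map $\phi$ and checks all the required column subsets:
\begin{verbatim}
# Brute-force check of the unbiasedness conditions in the AONT
# definition, on the (v^s, 2s, v) array representation of a map phi
# (feasible for small s and v only).
from itertools import product, combinations
from collections import Counter

def array_representation(phi, s, alphabet):
    return [tuple(x) + tuple(phi(x)) for x in product(alphabet, repeat=s)]

def unbiased(rows, cols, v):
    # Every |cols|-tuple must occur, all with the same frequency.
    counts = Counter(tuple(row[c] for c in cols) for row in rows)
    return (len(counts) == v ** len(cols)
            and len(set(counts.values())) == 1)

def is_aont(phi, s, alphabet, t_i, t_o):
    v = len(alphabet)
    rows = array_representation(phi, s, alphabet)
    # Conditions 1 and 2: phi is a bijection.
    if not (unbiased(rows, list(range(s)), v)
            and unbiased(rows, list(range(s, 2 * s)), v)):
        return False
    # Condition 3: t_i input columns plus s - t_o output columns.
    return all(unbiased(rows, list(I) + [s + j for j in J], v)
               for I in combinations(range(s), t_i)
               for J in combinations(range(s), s - t_o))
\end{verbatim}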
As before, suppose $A$ is an $(N,k,v)$-array, whose entries are elements chosen from an alphabet $\Gamma$ of order $v$ and whose columns are labelled by the set $C$. Also, for $D \subseteq C$, define $A_D$ as before. We say that $A$ is \emph{covering} with respect to a subset of columns $D \subseteq C$ if the rows of $A_D$ contain every $|D|$-tuple of elements of $\Gamma$ \emph{at least once}. \begin{Remark} {\rm An array that satisfies the covering property for all subsets of $t$ columns is called a \emph{$t$-covering array}. Such arrays have many important applications, including software testing. See \cite[\S VI.10]{CD} for a brief survey of covering arrays. } \end{Remark} We state a few simple observations without proof. \begin{Lemma} \label{unbiased-covering.lem} Suppose $A$ is an $(N,k,v)$-array with columns labelled by $C$. \begin{enumerate} \item If $A$ is unbiased or covering with respect to $D \subseteq C$, then $N \geq v^{|D|}$. \item If $A$ is unbiased with respect to $D \subseteq C$, then $A$ is covering with respect to $D$. \item If $D \subseteq C$ and $N = v^{|D|}$, then $A$ is unbiased with respect to $D$ if and only if $A$ is covering with respect to $D$. \item If $A$ is unbiased or covering with respect to $D \subseteq C$, then $A$ is unbiased or covering (resp.) with respect to all $D' \subseteq D$. \end{enumerate} \end{Lemma} \begin{Definition} \label{weakAONT} A \emph{$(t_i,t_o,s,v)$-weak-all-or-nothing transform} is a $(v^s,2s,v)$-array, say $A$, with columns labelled $1, \dots , 2s$, that is covering with respect to the following subsets of columns: \begin{enumerate} \item $\{1, \dots , s\}$, \item $\{s+1, \dots , 2s\}$, and \item $I \cup J$, for all $I \subseteq \{1,\dots , s\}$ with $|I| = t_i$ and all $J \subseteq \{s+1,\dots , 2s\}$ with $|J| = s-t_o$. \end{enumerate} \end{Definition} We note that a $(t,t,s,v)$-weak-AONT is equivalent to a $(t,t,s,v)$-AONT. This follows immediately from Lemma \ref{unbiased-covering.lem}. However, a $(t_i,t_o,s,v)$-weak-AONT is not necessarily a $(t_i,t_o,s,v)$-AONT if $t_i < t_o$. Example \ref{weakAONT.exam} depicts a $(1,2,3,2)$-weak-AONT that is not a $(1,2,3,2)$-AONT. \begin{Example} \label{weakAONT.exam} {\rm We present a $(1,2,3,2)$-weak AONT over the alphabet $\{a,b\}$. The array representation of this AONT is as follows: \begin{center} \begin{tabular}{|c|c|c||c|c|c|} \hline $x_1$ & $x_2$ & $x_3$ & $y_1$ & $y_2$ & $y_3$ \\ \hline a & a & a & a & a & a \\ a & a & b & b & b & a \\ a & b & a & b & a & b \\ a & b & b & b & a & a \\ b & a & a & a & b & b \\ b & a & b & a & b & a \\ b & b & a & a & a & b \\ b & b & b & b & b & b \\\hline \end{tabular} \end{center} This array is biased with respect to various pairs of columns $(x_i, y_j)$. For example, we verify that this array is biased with respect to columns $x_1$ and $y_1$. Specifically, the ordered pairs $(a,a)$ and $(b,b)$ each occur once, but the ordered pairs $(a,b)$ and $(b,a)$ each occur three times. However, for all choices of $x_i$ and $y_j$, it can be verified that $A$ is covering with respect to the pair of columns $(x_i, y_j)$.} $\blacksquare$ \end{Example} The following theorem extends part of Theorem \ref{sym.thm} to the asymmetric case. Proofs are omitted, as they are essentially the same as the proofs in \cite{ES2}. \begin{Theorem} \label{asym.thm} Suppose $\phi : \Gamma^s \rightarrow \Gamma^s$ is a bijection, where $\Gamma$ is an alphabet of size $v$, and suppose $1 \leq t_i \leq t_o \leq s$.
\vspace{-.2in}\\ \begin{enumerate} \item Suppose any input $s$-tuple occurs with positive probability. Then the mapping $\phi$ is weakly secure if and only if its array representation is a $(t_i,t_o,s,v)$-weak-AONT. \item The mapping $\phi$ is perfectly secure if its array representation is a $(t_i,t_o,s,v)$-AONT and every input $s$-tuple occurs with the same probability. \end{enumerate} \end{Theorem} \begin{Remark} \label{converse.rem} The second part of Theorem \ref{sym.thm} is ``if and only if''. However, we do not know if the converse of the second part of Theorem \ref{asym.thm} is true when $t_i < t_o$. \end{Remark} \subsection{General Properties} In the rest of the paper, we focus on $(t_i,t_o,s,v)$-AONTs that satisfy Definition \ref{defunbiased}. These are the AONTs that are unbiased with respect to various subsets of columns. First, we record various general properties of these AONTs. Some of these results are generalizations of previous results pertaining to $(t,t,s,v)$-AONTs, and most of them follow easily from Lemma \ref{unbiased-covering.lem}. The following result was shown in \cite{WCJ} for the case $t_i = t_o$. The generalization to arbitrary $t_i\leq t_o$ is obvious. \begin{Theorem} \label{inverse1.cor} A mapping $\phi: \Gamma^s\to \Gamma^s$ is a $(t_i,t_o,s,v)$-AONT if and only if $\phi^{-1}$ is an $(s-t_o,s-t_i,s,v)$-AONT. \end{Theorem} \begin{proof} Interchange the first $s$ columns and the last $s$ columns in the array representation of the AONT $\phi$. \end{proof} An \emph{orthogonal array} \emph{OA$(t,k,v)$} is a $(v^{t},k,v)$-array, say $A$, that is unbiased with respect to any $t$ columns. The next theorem generalizes \cite[Corollary 35]{EGS}. \begin{Theorem} \label{OAthenAONT} If there exists an OA$(s,2s,v)$, then there exists a $(t_i,t_o,s,v)$-AONT for all $t_i$ and $t_o$ such that $1 \leq t_i \leq t_o \leq s$. \end{Theorem} \begin{proof} It suffices to show that an OA$(s,2s,v)$ satisfies the conditions of Definition \ref{defunbiased}. This follows immediately from Lemma \ref{unbiased-covering.lem} and the observation that \[ 1 \leq t_i + s - t_o \leq s\] for all $t_i$ and $t_o$ such that $1 \leq t_i \leq t_o \leq s$. \end{proof} Levenshtein \cite{Levenshtein} defined \emph{split orthogonal arrays} (or \emph{SOAs}) as follows. A split orthogonal array \emph{SOA$(t_1,t_2;n_1,n_2; v)$} is a $(v^{t_1+t_2},n_1+n_2,v)$-array, say $A$, that satisfies the following properties: \begin{enumerate} \item the columns of $A$ are partitioned into two sets, of sizes $n_1$ and $n_2$, respectively, and \item $A$ is unbiased with respect to any $t_1+t_2$ columns in which $t_1$ columns are chosen from the first set of $n_1$ columns and $t_2$ columns are chosen from the second set of $n_2$ columns. \end{enumerate} From the definition of split orthogonal arrays, we can immediately obtain the following theorem. \begin{Theorem} \label{SOA.thm} Suppose there exists a $(t_i, t_o, s, v)$-AONT. Then there exists an SOA$(t_i, s-t_o, s, s,v)$. \end{Theorem} \begin{proof} Consider the array representation of a $(t_i,t_o,s,v)$-AONT. Denote $n_1 = s$, $n_2=s$, $t_1 = t_i$ and $t_2 = s-t_o$. By property 3 of Definition \ref{defunbiased}, the array is unbiased with respect to any $s-t_o+t_i$ columns where $t_i$ columns are chosen from the first set of $s$ columns and $s - t_o$ columns are chosen from the second set of $s$ columns; in other words, fixing any $t_2$ outputs does not yield any information about any $t_1$ inputs. Therefore the array is an SOA$(t_i, s-t_o, s, s,v)$.
\end{proof} Theorems \ref{OAthenAONT} and \ref{SOA.thm} show that, in a certain sense, AONTs (symmetric and asymmetric) are ``between'' orthogonal arrays and split orthogonal arrays. More precisely, an OA$(s,2s,v)$ implies the existence of a $(t_i, t_o, s, v)$-AONT (for $1 \leq t_i \leq t_o \leq s$), which in turn implies the existence of an SOA$(t_i, s-t_o, s, s,v)$. \section{Linear Asymmetric AONTs} \label{linear.sec} Suppose $q$ is a prime power. If every output of a $(t_i,t_o,s,q)$-AONT over the alphabet $\Gamma = \ensuremath{\mathbb F}_q$ is an $\ensuremath{\mathbb F}_q$-linear function of the inputs, the AONT is a \emph{linear} $(t_i,t_o,s,q)$-AONT. Note that we will write a linear $(t_i,t_o,s,q)$-AONT in the form $\mathbf{y} = \mathbf{x} M^{-1}$, where $M$ is an invertible $s$ by $s$ matrix over $\ensuremath{\mathbb F}_q$ (as always, $\mathbf{x}$ is an input $s$-tuple and $\mathbf{y}$ is an output $s$-tuple). Of course it holds also that $\mathbf{x} = \mathbf{y}M$. \begin{Remark} \label{bastion-linear} The $(1, 2, s, 2)$-AONT described in Example \ref{Ex:even_bastion} (for even values of $s$) is a linear AONT, where $M$ is the $s$ by $s$ matrix with $0$'s on the diagonal and $1$'s elsewhere. When $s$ is even, $M$ is invertible and $M^{-1} = M$. \end{Remark} The following lemma generalizes \cite[Lemma 1]{DES}. \begin{Lemma} \label{linearAsymAONT} Suppose that $q$ is a prime power and $M$ is an invertible $s$ by $s$ matrix with entries from $\ensuremath{\mathbb F}_q$. Suppose $1 \leq t_i \leq t_o \leq s$. Then the function $\mathbf{y} = \mathbf{x} M^{-1}$ defines a linear $(t_i,t_o,s,q)$-AONT if and only if every $t_o$ by $t_i$ submatrix of $M$ has rank $t_i$. \end{Lemma} \begin{proof} Suppose $I,J \subseteq \{1, \dots , s\}$, $|I| = t_i$, $|J| = t_o$. Let ${\mathbf x'} = (x_i : i \in I)$. We have ${\mathbf x'} = {\mathbf y} M'$, where $M'$ is the $s$ by $t_i$ matrix formed from $M$ by deleting all columns not in $I$. Now assume that $y_j$ is fixed for all $j \not\in J$ and denote ${\mathbf y'} = (y_j : j \in J)$. Then we can write ${\mathbf x'} = {\mathbf y'} M'' + {\mathbf c}$, where $M''$ is the $t_o$ by $t_i$ submatrix of $M$ formed from $M$ by deleting all columns not in $I$ and all rows not in $J$, and ${\mathbf c}$ is a vector of constants. If $M''$ is of rank $t_i$, then ${\mathbf x'}$ is completely undetermined, in the sense that ${\mathbf x'}$ takes on all values in $(\ensuremath{\mathbb F}_q)^{t_i}$ as ${\mathbf y'}$ varies over $(\ensuremath{\mathbb F}_q)^{t_o}$. On the other hand, if $t' =\mathsf{rank}(M'') < t_i$, then ${\mathbf x'}$ can take on only $q^{t'}$ possible values, so the unbiasedness required in property 3 of Definition \ref{defunbiased} fails. \end{proof} The following corollaries pertain to the special case where $t_i = t_o = t$. \begin{Corollary} \cite{DES} Suppose $M$ is an invertible $s$ by $s$ matrix with entries from $\ensuremath{\mathbb F}_q$. Then $\mathbf{y} = \mathbf{x} M^{-1}$ defines a linear $(t,t,s,q)$-AONT if and only if every $t$ by $t$ submatrix of $M$ is invertible. \end{Corollary} \begin{Corollary} \label{inverse2.cor} Suppose that $\mathbf{y} = \mathbf{x} M^{-1}$ defines a linear $(t,t,s,q)$-AONT. Then $\mathbf{y} = \mathbf{x} M$ defines a linear $(s-t,s-t,s,q)$-AONT. \end{Corollary} \begin{Corollary} \label{linear-st} Suppose $M$ is an invertible $s$ by $s$ matrix with entries from $\ensuremath{\mathbb F}_q$. Then $\mathbf{y} = \mathbf{x} M^{-1}$ defines a linear $(t,t,s,q)$-AONT if and only if every $s-t$ by $s-t$ submatrix of $M^{-1}$ is invertible.
\end{Corollary} Another approach to construct asymmetric AONTs is to use $t$-AONTs or other asymmetric AONTs. The following results will present various such constructions. First, we generalize \cite[Theorem 20]{EGS}. \begin{Lemma} \label{Lem:AsymCofactor} If $1 \leq t_i \le t_o < s$, then the existence of a linear $(t_i,t_o,s,q)$-AONT implies the existence of a linear $(t_i,t_o,s-1,q)$-AONT. \end{Lemma} \begin{proof} Let $M$ be a matrix for a linear $(t_i,t_o,s,q)$-AONT. Since $M$ is invertible, if we calculate its determinant using the cofactor expansion of $M$ with respect to its first row, at least one of the $(s-1)\times (s-1)$ submatrices is invertible. Also, every $t_o\times t_i$ submatrix of $M$, and in particular every $t_o\times t_i$ submatrix of this invertible $(s-1)\times (s-1)$ submatrix, has rank $t_i$. Hence, the invertible submatrix defines a linear $(t_i,t_o,s-1,q)$-AONT. \end{proof} \begin{Lemma}\label{Lem:Asym_L_to} If $1 \leq t_i\le t_o \le s$, then the existence of a linear $(t_i,t_o,s,q)$-AONT implies the existence of a linear $(t_i,t^{\prime}_o,s,q)$-AONT for all $t^{\prime}_o$ such that $t_o \leq t^{\prime}_o \leq s$. \end{Lemma} \begin{proof} Consider the matrix representation of the linear $(t_i,t_o,s,q)$-AONT. Every $t^{\prime}_o$ by $t_i$ submatrix has rank $t_i$, because it contains a $t_o \times t_i$ submatrix, which has rank $t_i$. \end{proof} \begin{Lemma}\label{Lem:Asym_S_ti} If $1 \leq t_i\le t_o \le s$, then the existence of a linear $(t_i,t_o,s,q)$-AONT implies the existence of a linear $(t^{\prime}_i,t_o,s,q)$-AONT for any $t^{\prime}_i$ such that $1 \leq t^{\prime}_i \leq t_i \leq s$. \end{Lemma} \begin{proof} Every $t_o$ by $t^{\prime}_i$ submatrix extends to a $t_o$ by $t_i$ submatrix, which has full column rank $t_i$; hence the chosen $t^{\prime}_i$ columns are linearly independent and the $t_o$ by $t^{\prime}_i$ submatrix has rank $t^{\prime}_i$. \end{proof} \begin{Example} \label{E232} We observe that the existence of a linear $(t_i,t_o,s,q)$-AONT does not necessarily imply the existence of a linear $(t_i,t_i,s,q)$-AONT or a linear $(t_o,t_o,s,q)$-AONT. Consider the linear $(2,3,4,2)$-AONT presented by the following matrix \[ \left(\begin{array}{c c c c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 \end{array} \right). \] While every $3\times 2$ submatrix of the matrix above has rank $2$, a $(2,2,s,2)$-AONT does not exist if $s>2$, as was proven by D'Arco et al.\ \cite{DES}. Additionally, from Corollary \ref{inverse2.cor}, a linear $(3,3,4,2)$-AONT would be equivalent to a linear $(1,1,4,2)$-AONT. Since it was shown in \cite{St} that a $(1,1,4,2)$-AONT does not exist, we conclude that a linear $(3,3,4,2)$-AONT does not exist. $\blacksquare$ \end{Example} The main general construction for linear $(t,t,s,q)$-AONTs in \cite{DES} uses Cauchy matrices. We provide a generalization that applies to asymmetric AONTs. \begin{Theorem} Suppose $q \geq 2s$ is a prime power and $1 \leq t_i \leq t_o \leq s$. Then there exists a linear $(t_i,t_o,s,q)$-AONT. \end{Theorem} \begin{proof} In \cite[Theorem 2]{DES}, it was shown that a linear $(t,t,s,q)$-AONT exists if $q \geq 2s$ is a prime power and $1 \leq t \leq s$. Let $t_i = t_o = t$ and then apply Lemma \ref{Lem:Asym_S_ti}. This shows that there is a linear $(t'_i,t_o,s,q)$-AONT provided that $1 \leq t'_i \leq t_o \leq s$. \end{proof} \subsection{Linear $(1,t_o,s,q)$-AONT} We noted in Remark \ref{bastion-linear} that there exists a linear $(1, 2, s, 2)$-AONT for all even values of $s \geq 2$. In the next lemma, we show that linear $(1,2,s,2)$-AONTs exist for odd values of $s$. \begin{Lemma} \label{OddBastion} There is a linear $(1,2,s,2)$-AONT for any odd value of $s \geq 3$. \end{Lemma} \begin{proof} Suppose $s\geq 3$ is odd.
Let $M$ be the $s$ by $s$ matrix whose first subdiagonal consists of $0$'s, but all other entries are $1$'s. For example, when $s=5$, we have \[ M = \left(\begin{array}{c c c c c} 1 & 1 & 1 & 1 & 1\\ 0 & 1 & 1 & 1 & 1\\ 1 & 0 & 1 & 1 & 1\\ 1 & 1 & 0 & 1 & 1\\ 1 & 1 & 1 & 0 & 1 \end{array}\right).\] The matrix $M$ is invertible, and its inverse is the $s$ by $s$ matrix whose top right $(s-1)$ by $(s-1)$ submatrix is the identity matrix and whose first column and last row consist entirely of $1$'s. For example, when $s=5$, we have \[ M^{-1} = \left(\begin{array}{c c c c c} 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{array}\right).\] Further, any $2$ by $1$ submatrix of $M$ has rank $1$ because there is at most one occurrence of $0$ in each column of $M$. \end{proof} Recall that $\mathbf{y} = \mathbf{x} M^{-1}$ and $\mathbf{x} = \mathbf{y}M$. Given $s$ inputs $x_1, \dots , x_s \in \ensuremath{\mathbb Z}_2$, the above-discussed transform can be computed as follows: \[ \begin{array}{l} y_1 = \displaystyle\sum_{i = 1}^s x_i.\Botstrut{4}\\ y_i = x_{i-1} + x_{s}, \text{ for } 2\le i \le s. \end{array} \] This yields the $s$ outputs $y_1, \dots , y_s$. The inverse transform is computed as \[ \begin{array}{l} x_s = \displaystyle\sum_{i = 1}^s y_i \Botstrut{4} \\ x_i = x_s + y_{i+1}, \text{ for } 1\le i \le s-1. \end{array} \] Thus, computation of the transform or its inverse requires $2s-2$ addition operations in $\ensuremath{\mathbb Z}_2$ (i.e., exclusive-ors).
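These formulas translate directly into code. The following Python sketch is a direct transcription of the formulas above (the function names are our own); each direction needs $2s-2$ additions in $\ensuremath{\mathbb Z}_2$ as just noted:
\begin{verbatim}
def forward(x):
    # y = x * M^{-1} over Z_2, for odd s = len(x):
    # y_1 is the XOR of all inputs; y_i = x_{i-1} XOR x_s for 2 <= i <= s.
    s = len(x)
    y = [0] * s
    for xi in x:
        y[0] ^= xi
    for i in range(1, s):
        y[i] = x[i - 1] ^ x[s - 1]
    return y

def inverse(y):
    # x = y * M over Z_2: x_s is the XOR of all outputs;
    # x_i = x_s XOR y_{i+1} for 1 <= i <= s - 1.
    s = len(y)
    x = [0] * s
    for yi in y:
        x[s - 1] ^= yi
    for i in range(s - 1):
        x[i] = x[s - 1] ^ y[i + 1]
    return x
\end{verbatim}
For example, \texttt{forward([1,0,1,1,0])} returns \texttt{[1,1,0,1,1]}, and \texttt{inverse} applied to that output recovers \texttt{[1,0,1,1,0]}.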
\begin{Theorem} Suppose $q$ is a prime power and $1 \leq t_o \leq s$. Then there is a linear $(1,t_o,s,q)$-AONT unless $q=2$ and $t_o = 1$. Further, there does not exist any $(1,1,s,2)$-AONT. \end{Theorem} \begin{proof} When $q > 2$, it was shown in \cite[Corollary 2.3]{St} that there exists a linear $(1,1,s,q)$-AONT for all $s \geq 1$. Applying Lemma \ref{Lem:Asym_L_to}, there exists a linear $(1,t_o,s,q)$-AONT for all prime powers $q > 2$ and all $t_o$ and $s$ such that $1 \leq t_o \leq s$. We have also noted in Remark \ref{bastion-linear} that there exists a linear $(1, 2, s, 2)$-AONT for all even values of $s \geq 2$. Applying Lemma \ref{Lem:Asym_L_to}, there exists a linear $(1,t_o,s,2)$-AONT for all $t_o$ and $s$ such that $s$ is even and $2 \leq t_o \leq s$. From Lemmas \ref{Lem:Asym_L_to} and \ref{OddBastion}, there exists a linear $(1,t_o,s,2)$-AONT for all $t_o$ and $s$ such that $s$ is odd and $2 \leq t_o \leq s$. Finally, it was shown in \cite{St} that there does not exist any $(1,1,s,2)$-AONT. \end{proof} \subsection{Linear $(2,s-1,s,2)$-AONT} In this section, we consider linear $(2,s-1,s,2)$-AONTs. For even values of $s \geq 4$, we use the $(1,2,s,2)$-AONT from Remark \ref{bastion-linear}. This AONT is based on the $s$ by $s$ matrix $M$ with $0$'s on the diagonal and $1$'s elsewhere. We have already noted that this matrix is invertible. To show that it gives rise to a $(2,s-1,s,2)$-AONT, we need to show that any $s-1$ by $2$ submatrix has rank $2$. It can be observed that any choice of $s-1$ rows and two columns will contain at least $s-3 \geq 1$ occurrences of the row $(1,1)$ and at least one copy of the row $(0,1)$ or $(1,0)$, and hence has rank $2$. Therefore, we have proven the following. \begin{Lemma} \label{s-even.lem} For any even integer $s\ge 4$, there exists a linear $(2,s-1,s,2)$-AONT. \end{Lemma} Now we turn to odd values of $s$. \begin{Lemma} \label{s-odd.lem} For any odd integer $s\ge 5$, there is a linear $(2,s-1,s,2)$-AONT. \end{Lemma} \begin{proof} For an odd integer $s \geq 5$, define the $s$ by $s$ matrix $B_s$ to have $1$'s in the entries on the main diagonal, the last row and the last column, and $0$'s elsewhere. For example, the matrix $B_5$ is as follows: \[ \left(\begin{array}{c c c c c } 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{array} \right). \] Suppose we subtract rows $1, \dots , s-1$ of $B_s$ from row $s$. Then we obtain an upper triangular matrix with $1$'s on the main diagonal. This proves that $B_s$ is invertible. Now we prove that any $s-1$ by $2$ submatrix has rank two. First, consider columns $i$ and $s$, where $1 \leq i \leq s-1$. The $s$ rows of this submatrix contain two copies of $(1,1)$ and $s-2$ copies of $(0,1)$. Therefore, any $s-1$ rows still contain at least one copy of $(1,1)$ and at least one copy of $(0,1)$. This means that the $s-1$ by $2$ submatrix has rank $2$. Next, we consider columns $i$ and $j$, where $1 \leq i < j \leq s-1$. The $s$ rows of this submatrix contain one copy of each of $(0,1)$, $(1,0)$ and $(1,1)$. Therefore, any $s-1$ rows still contain at least two of the three pairs $(0,1)$, $(1,0)$ and $(1,1)$. This means that the $s-1$ by $2$ submatrix has rank $2$. \end{proof} \begin{Theorem} There is a linear $(2,s-1,s,2)$-AONT if and only if $s\geq 4$. \end{Theorem} \begin{proof} If a $(t_i,t_o,s,q)$-AONT exists, we must have $t_i \leq t_o$. Hence, $s \geq 3$ if a $(2,s-1,s,2)$-AONT exists. D'Arco et al.\ \cite{DES} proved that a linear $(2,2,3,2)$-AONT does not exist. For $s \geq 4$, Lemmas \ref{s-even.lem} and \ref{s-odd.lem} show that a linear $(2,s-1,s,2)$-AONT exists. \end{proof} \subsection{Linear $(t_i,t_o,s,q)$-AONT with $t_i \geq 2$} In this section, we study linear $(t_i,t_o,s,q)$-AONTs with $t_i \geq 2$. We first prove a general upper bound on $s$ as a function of $t_i$, $t_o$ and $q$. Then we consider the case $t_i=2$ in detail. \begin{Theorem} \label{T1} Suppose there exists a linear $(t_i,t_o,s,q)$-AONT with $2\leq t_i \leq t_o$. Then the following bound holds: \[ s \leq \frac{(t_o -1)(q^{t_i} - 1)}{(t_i -1)(q-1)}.\] \end{Theorem} \begin{proof} Fix any $t_i$ columns of the matrix $M$ and consider the resulting submatrix $M'$. Recall that any $t_o$ by $t_i$ submatrix of $M$ must have rank $t_i$. There are $q^{t_i}$ possible $t_i$-tuples for any given row of $M'$. We can replace an all-zero $t_i$-tuple with any other $t_i$-tuple, and it does not decrease the rank of any $t_o$ by $t_i$ submatrix in $M'$. Hence, we can assume that there is no all-zero $t_i$-tuple among the rows of $M'$. Therefore, there are $q^{t_i}-1$ possible rows in $M'$. For any two nonzero $t_i$-tuples, say $a$ and $b$, define $a\sim b$ if there is a nonzero element $\alpha \in \ensuremath{\mathbb F}_{q}$ such that $a = \alpha b$. Clearly $\sim$ is an equivalence relation, and there are $({q^{t_i}-1})/({q-1})$ equivalence classes, each of size $q-1$. Suppose the equivalence classes of rows are denoted by $\mathcal{E}_i$. Further, suppose there are $a_i$ rows from $\mathcal{E}_i$ in $M'$, for $1 \leq i \leq (q^{t_i}-1)/(q-1)$. The sum of the $a_i$'s is equal to $s$ and hence the average value of an $a_i$ is \[ \overline{a} = \frac{s(q-1)}{ q^{t_i}-1}.\] Let $L$ denote the sum of the $t_i -1$ largest $a_i$'s.
It is clear that \[ L \geq (t_i - 1)\overline{a} = \frac{s(t_i - 1)(q-1)}{ q^{t_i}-1}.\] Also, no $t_o$ rows of $M'$ can come from fewer than $t_i$ equivalence classes, since such rows would span a subspace of dimension less than $t_i$; hence the $t_i - 1$ largest equivalence classes contain at most $t_o - 1$ rows of $M'$ in total, that is, \[ L \leq t_o - 1.\] Hence, combining the two inequalities, we see that \[ s \leq \frac{(t_o -1)(q^{t_i} - 1)}{(t_i -1)(q-1)}.\] \end{proof} We now look at the case $t_i = 2$ in more detail. \begin{Theorem} \label{Thrm:2_to_AsymAONT_bound} Suppose there exists a linear $(2,t_o,s,q)$-AONT with $2 \leq t_o$. Then the following bound holds: \[s \leq \max \{ 1 + (t_o - 2)(q+1), 2 + (t_o - 1)(q-1)\}.\] \end{Theorem} \begin{proof} Consider an $s$ by $2$ submatrix $M'$ and let $a_0$ be the number of $(0,0)$ rows in this submatrix. We divide the proof into two cases. \begin{description} \item[case (1)] \mbox{\quad} \vspace{.1in}\\ Suppose $a_0 \geq 1$. We claim that $M'$ contains at most $t_o - a_0 - 1$ rows from any one equivalence class $\mathcal{E}_i$, where equivalence classes are as defined in the proof of Theorem \ref{T1}. This follows because $t_o - a_0$ rows from one equivalence class, together with the $a_0$ rows of $0$'s, would yield a $t_o$ by $2$ submatrix of rank at most $1$. Excluding the rows of $0$'s, there are $q+1$ possible equivalence classes of rows, so \[ s \leq a_0 + (t_o - a_0 - 1 )(q+1) \leq 1 + (t_o - 2)(q+1).\] \item[case (2)] \mbox{\quad} \vspace{.1in}\\ If we are not in case (1), then $a_0 = 0$ for \emph{every} $s$ by $2$ submatrix $M'$. There can be at most one $0$ in each row of $M$, so there are at most $s$ occurrences of $0$ in $M$. Therefore, there must be two columns in $M$ that contain a total of at most two $0$'s. We focus on this particular $s$ by $2$ submatrix $M'$. Let the number of $0$'s in $M'$ be denoted by $a$; we have noted that $a \leq 2$. In the $s - a$ rows that do not contain a $0$, there are at most $t_o - 1$ rows from any equivalence class $\mathcal{E}_i$. Note that we have excluded two $\mathcal{E}_i$'s, i.e., $(*,0)$ and $(0,*)$, so \[ s \leq a + (t_o - 1 )(q- 1) \leq 2 + (t_o - 1)(q-1).\] \end{description} Since one of the above two cases must hold, we have \[s \leq \max \{ 1 + (t_o - 2)(q+1), 2 + (t_o - 1)(q-1)\}.\] \end{proof} We note that \[ 1 + (t_o - 2)(q+1) < (t_o -1)(q+1)\] always, and that \[ 2 + (t_o - 1)(q-1) < (t_o - 1)(q+1)\] whenever $t_o \geq 3$ (with equality when $t_o = 2$), so \[ \max \{ 1 + (t_o - 2)(q+1), 2 + (t_o - 1)(q-1)\} < (t_o - 1)(q+1)\] for $t_o \geq 3$. Hence the bound from Theorem \ref{Thrm:2_to_AsymAONT_bound} improves Theorem \ref{T1} when $t_i = 2$ and $t_o \geq 3$, and coincides with it when $t_o = 2$. For positive integers $t_i$ and $t_o$, where $1 \leq t_i \leq t_o$, and a prime power $q$, define \[S(t_i,t_o,q)= \max \{s: \text{a linear } (t_i,t_o,s,q)\text{-AONT exists}\}.\] Note that $S(t_i,t_o,q) \geq t_o$ because the $t_o$ by $t_o$ identity matrix defines a linear $(t_i,t_o,t_o,q)$-AONT. \begin{Theorem} Suppose $1 \leq t_i \leq t_o$ and $q$ is a prime power. Then there exists a linear $(t_i,t_o,s,q)$-AONT for $t_o \leq s \leq S(t_i,t_o,q)$. \end{Theorem} \begin{proof} This is an immediate consequence of Lemma \ref{Lem:AsymCofactor}. \end{proof} We mainly consider cases where $2 \leq t_i < t_o$. However, before proceeding, we recall some previous results concerning the special case $t_i = t_o = 2$. Theorems \ref{T1} and \ref{Thrm:2_to_AsymAONT_bound} both assert that $S(2,2,q) \leq q+1$. However, the stronger result that $S(2,2,q) \leq q$ was previously shown in \cite[Theorem 14]{EGS}. There are also some known lower bounds on $S(2,2,q)$, which are recorded in the following theorem. \begin{Theorem} Suppose $q$ is a prime power. Then the following bounds hold.
\begin{enumerate} \item $\lfloor q/2 \rfloor \leq S(2,2,q) \leq q$. \item $q-1 \leq S(2,2,q) \leq q$ if $q = 2^n -1$ is prime, for some integer $n$. \item $S(2,2,q) = q$ if $q$ is prime. \end{enumerate} \begin{proof} 1.\ and 2.\ are shown in \cite{EGS}, while 3.\ is proven in \cite{WCJ}. \end{proof} The cases when $t_i < t_o$ have not received previous study in the literature. Theorems \ref{T1} and \ref{Thrm:2_to_AsymAONT_bound} provide upper bounds on $S(t_i,t_o,q)$. We evaluate some of these upper bounds for specific families of parameters in Table \ref{tab:ASYMAONT_bounds}. \begin{table} \caption{Examples of bounds from Theorems \ref{T1} and \ref{Thrm:2_to_AsymAONT_bound}.} \label{tab:ASYMAONT_bounds} \[ \begin{array}{c | c| c| c | c} t_i & q & t_o & \text{Upper bound on }S(t_i,t_o,q) & \text{Justification}\\ \hline \hline 2 & 2 & 2 & t_o+1 & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}}\\ 2 & 2 & \geq 3 & 3t_o-5 & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}}\\ \hline 2 & 3 & 2,3 & 2t_o & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}} \\ 2 & 3 & \geq 4 & 4t_o-7 & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}} \\\hline 2 & 4 & 2,3 & 3t_o-1 & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}} \\ 2 & 4 & \geq 4 & 5t_o-9 & \text{Theorem \ref{Thrm:2_to_AsymAONT_bound}} \\\hline 3 & 3 & \text{any} & \Topstrut{3}\Botstrut{1.5}\frac{13(t_o-1)}{2} & \text{Theorem \ref{T1}}\\\hline 3 & 4 & \text{any} & \Topstrut{3}\frac{21(t_o-1)}{2} & \text{Theorem \ref{T1}}\\\hline 3 & 5 & \text{any} & \Topstrut{3}\frac{31(t_o-1)}{2} & \text{Theorem \ref{T1}} \end{array} \] \end{table} We can also obtain lower bounds on $S(2,t_o,q)$, for specific choices of $t_o$ and $q$, from computer searches. The results of our searches are presented in Examples \ref{A242} to \ref{A237}. Table \ref{tab:ASYMAONT_CompRes} lists upper and lower bounds on $S(2,t_o,q)$, for some fixed values of $t_o$ and $q$. There are four cases where we can report exact values of $S(2,t_o,q)$. When $(t_o,q) = (3,2)$ and $(3,3)$, we have found examples that meet the upper bounds from Theorem \ref{Thrm:2_to_AsymAONT_bound}. For $(t_o,q) = (4,2)$ and $(5,2)$, the searches were run to completion and the exact values of $S(2,t_o,q)$ turn out to be strictly less than the bounds obtained from Theorem \ref{Thrm:2_to_AsymAONT_bound}, which are $S(2,4,2) \leq 7$ and $S(2,5,2) \leq 10$.
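A straightforward (non-optimised) way to verify a candidate matrix in such searches is to check the rank condition of Lemma \ref{linearAsymAONT} directly. The following Python sketch is our own illustration, restricted for simplicity to prime $q = p$ (searches over proper prime powers such as $\ensuremath{\mathbb F}_4$ additionally require finite field arithmetic):
\begin{verbatim}
from itertools import combinations

def rank_mod_p(rows, p):
    # Rank over F_p (p prime) by Gaussian elimination;
    # rows is a list of lists of integers.
    m = [[v % p for v in row] for row in rows]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], -1, p)          # modular inverse
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def is_linear_aont_matrix(M, t_i, t_o, p):
    # Criterion of the lemma above: M is invertible and every
    # t_o x t_i submatrix of M has (full column) rank t_i.
    s = len(M)
    if rank_mod_p(M, p) < s:
        return False
    return all(
        rank_mod_p([[M[r][c] for c in cols] for r in rows], p) == t_i
        for rows in combinations(range(s), t_o)
        for cols in combinations(range(s), t_i))
\end{verbatim}
For instance, applied to the matrix of Example \ref{E232} with $(t_i,t_o,p)=(2,3,2)$, it returns \texttt{True}. Combined with an exhaustive or randomised enumeration of invertible matrices, such a test yields lower bounds on $S(2,t_o,q)$ of the kind reported in Table \ref{tab:ASYMAONT_CompRes}.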
\begin{table} \caption{ Upper and lower bounds on $S(2,t_o,q)$} \label{tab:ASYMAONT_CompRes} \centering \begin{tabular}{ c| c| c | l || c| l} $t_o$& $q$ & lower bound & reference & upper bound & reference \\ \hline\hline 3 & 2 & 4 & Example \ref{E232} & 4 & Theorem \ref{Thrm:2_to_AsymAONT_bound} \\ 4 & 2 & 5 & Example \ref{A242} & 5 & exhaustive search \\ 5 & 2 & 8 & Example \ref{A252} & 8 & exhaustive search\\ 6 & 2 & $10$ & Example \ref{A262} &13& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 7 & 2 & $12$ & Example \ref{A272} & 16& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 8 & 2 & $13$ & Example \ref{A282} & 19 & Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ \hline 3 & 3 & 6 & Example \ref{A233}& 6& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 4 & 3 & $8$ & Example \ref{A243}& 9& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 5 & 3 & $9$ & Example \ref{A253}& 13& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 6 & 3 & $13$ & Example \ref{A263}& 17& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ \hline 3 & 4 & $6$ & Example \ref{A234}& 8& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 4 & 4 & $9 $ & Example \ref{A244}& 11& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 5 & 4 & $11 $ & Example \ref{A254}& 16& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ \hline 3 & 5 & $8 $ & Example \ref{A235}& 10& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ 4 & 5 & $10 $ & Example \ref{A245}& 14& Theorem \ref{Thrm:2_to_AsymAONT_bound}\\ \hline 3 & 7 & $8 $ & Example \ref{A237}& 14& Theorem \ref{Thrm:2_to_AsymAONT_bound} \\ \end{tabular} \end{table} \section{Discussion} \label{summary.sec} There are many open problems involving asymmetric AONTs. It would certainly be of interest to find improved necessary conditions and general constructions. The first cases are when $t_i = 2$. A starting point would be to close the gaps in the bounds reported in Table \ref{tab:ASYMAONT_CompRes}. As mentioned in Remark \ref{converse.rem}, it is unknown if the converse of part 2 of Theorem \ref{asym.thm} is true when $t_i < t_o$. We feel that this question is worthy of further study. \section*{Acknowledgements} We thank Bill Martin for helpful discussions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $F = (f_1,\ldots,f_r)$ with $f_j\in \mathbb{C}[x_1,\ldots,x_n]$ be a tuple of polynomials, and $r>0$. Introduce new variables $s = (s_1,\ldots, s_r)$ and fix a tuple of natural numbers $a = (a_1,\ldots, a_r) \in \mathbb{N}^r$ such that the product $f_1^{a_1}\ldots f_r^{a_r}$ admits zeros on $X=\mathbb{C}^n$. By definition, the {\it Bernstein-Sato ideal} $B_F^{a}$ consists of all polynomials $b(s)\in \ensuremath{\mathbb{C}}[s]$ such that $$b(s) F^s \in \ensuremath{\mathscr{D}}_X[s]F^{s+a}$$ where $F^s = f_1^{s_1}\cdots f_r^{s_r}$, $\ensuremath{\mathscr{D}}_X=\mathbb{C}[x]\langle\partial\rangle$ is the ring of algebraic differential operators on $X$, with $x=x_1,\ldots,x_n$, $\partial=\partial_1,\ldots,\partial_n$, and $\partial_i=\partial/\partial x_i$ for $i=1,\ldots,n$. Here $\ensuremath{\mathscr{D}}_X[s]F^{s+a}$ is the $\mathscr{D}_X[s]$-submodule of the free $\mathbb{C}[x,f^{-1},s]$-module $\mathbb{C}[x,f^{-1},s]F^{s+a}$ obtained by applying formally the operators in $\mathscr{D}_X[s]$ to the symbol $F^{s+a}$ by using the usual derivation rules, where $f=f_1\ldots f_r$. The zero locus of the ideal $B_F^a$ is denoted $$Z(B_F^a)\subseteq \ensuremath{\mathbb{C}}^r.$$ This construction extends easily to the case when $F:X\to \mathbb{C}^r$ is a morphism from a smooth affine complex algebraic variety, and also, by using analytic differential operators, to the case when $F:(X,x)\to (\mathbb{C}^r,0)$ is the germ of a holomorphic map of complex manifolds. The latter are the so-called {\it local Bernstein-Sato ideals} $B_{F,x}^a$, and the former, $B_F^a$, equals the intersection of all local $B_{F,x}^a$ for $x$ in the zero locus of $f$. In the classical case $r=1=a$, the ring $\ensuremath{\mathbb{C}}[s]$ is a principal ideal domain and the unique monic generator $b_f(s)$ of $B_F^{1}$ is called the {\it Bernstein-Sato polynomial} of $F=f$. One has: \begin{theorem} \label{thrmMoreA} (\cite[Theorem 1.1.1]{budur2020zeroII}) Let $F=(f_1,\ldots,f_r):X\rightarrow\mathbb{C}^r$ be a morphism of smooth complex affine irreducible algebraic varieties, or the germ at $x\in X$ of a holomorphic map on a complex manifold. Let $a\in\mathbb{N}^r$ be such that $\prod_{j=1}^rf_j^{a_j}$ is not invertible as a holomorphic function on $X$. Then: \begin{enumerate} \item Every irreducible component of $Z(B_F^{a})$ of codimension 1 is a hyperplane of type $l_1s_1+\ldots+l_rs_r+b=0$ with $l_j\in\mathbb{Q}_{\ge 0}$, $b\in\mathbb{Q}_{>0}$, and for each such hyperplane there exists $j$ with $a_j\ne 0$ such that $l_j>0$. \item Every irreducible component of $Z(B_F^{a})$ of codimension $>1$ can be translated by an element of $\mathbb{Z}^r$ inside a component of codimension 1. \end{enumerate} \end{theorem} For $r=1$ this is equivalent to the classical result that the roots of the Bernstein-Sato polynomial $b_f$ are negative rational numbers, due to Kashiwara \cite{kashiwara1976b}. The first part without the strict positivity of $l_j$ is due to Sabbah \cite{sabbah1987proximite} and Gyoja \cite{gyoja1993bernstein}. The second part for the case $a=(1,\ldots,1)$ is due to Maisonobe \cite{maisonobe2016filtration}, a completely different proof of which was given recently by van der Veer \cite{robin}. The first purpose of this paper is to further refine part (1) of the above theorem in terms of numerical data from log resolutions. Let $\mu:Y\to X$ be a strong log resolution of $f$.
This means that $\mu$ is a projective morphism that is an isomorphism over the complement of $D$, the divisor defined by $f$, such that $Y$ is smooth and $\mu^*D$ is a simple normal crossings divisor. The numerical data we refer to are the orders of vanishing $\ord_{E}(f_j)\in\mathbb{N}$ of $f_j$ along irreducible components $E$ of $\mu^*D$, and the orders of vanishing $k_E=\ord_{E}(\det \operatorname{Jac}(\mu))\in\mathbb{N}$ of the determinant of the Jacobian of $\mu$, also equal to the coefficients of the relative canonical divisor $K_\mu$ of $\mu$. We show: \begin{theorem}\label{thm: MainTheorem} Every irreducible component of $Z(B_F^a)$ of codimension $1$ is a hyperplane of the form $$\ord_{E}(f_1)s_1 + \cdots + \ord_{E}(f_r)s_r + k_E + c=0 $$ with $c\in\mathbb{Z}_{>0}$. \end{theorem} Without the term $k_E$, the statement was proven for $r=1$ by Kashiwara \cite{kashiwara1976b} and for $r\ge 1$ by \cite[Lemma 4.4.6]{budur2020zeroII}. The case $r=1$ of Theorem \ref{thm: MainTheorem} is due to Lichtin \cite{lichtin1989poles}, a new proof of which was given by Dirks-Musta\c{t}\u{a} \cite{DM}. If $r=1$, the upper bound $c< (n+a-1)N_E-k_E$, where $N_E=\ord_E(f)$, for $c$ as in Theorem \ref{thm: MainTheorem} can be deduced from \cite[Theorem 0.4]{S-m}. For $r>1$ the problem of finding an upper bound for $c$ is open. The second part of this paper contains a number of lower bounds for the Bernstein-Sato zero locus. Firstly, one has an easy multivariate generalisation of the fact that the Bernstein-Sato polynomial $b_f(s)$, which corresponds to the case $r=1=a$, always has $-1$ as a root. \begin{proposition}\label{prop1.3} Let $C$ be an irreducible component of $D$ such that $m:=\sum_{j=1}^{r}\ord_{C}(f_j)a_j\neq 0$. Then $\left(\sum_{j=1}^r \ord_{C}(f_j)s_j \right)+ c = 0$ determines an irreducible component of $Z(B_F^a)$ for $c = 1,\ldots, m$. \end{proposition} Further, we generalise the fact that the negatives of the jumping numbers of $f$ in $(0,\lct(f)+1)$ are roots of $b_{f}(s)$, \cite{ClassicalJump}, \cite[Theorem 2]{BSArbitraryVariety}. Recall that the log-canonical threshold $\lct(f)$ is the smallest jumping number of $f$. For any $\lambda\in \mathbb{R}_{\geq 0}^r$ the {\it mixed multiplier ideal sheaf} of $F^\lambda$ is given by $$\mathcal{J}(F^\lambda) = \mu_*\O_Y(K_\mu - \lfloor \sum_{j=1}^r \lambda_j \mu^* D_j \rfloor)$$ where $D_j$ denotes the divisor determined by $f_j$ and $\lfloor {-} \rfloor$ is the round-down of an $\mathbb{R}$-divisor. Associated to $\lambda$ is the region $$\mathcal{R}_F(\lambda):= \{\lambda'\in \mathbb{R}_{\geq 0}^r:\mathcal{J}(F^{\lambda}) \subseteq \mathcal{J}(F^{\lambda'})\}.$$ The {\it jumping walls of $F$} are given by the intersection of $\mathbb{R}^r_{>0}$ with the boundary of $\mathcal{R}_F(\lambda)$ for some $\lambda$. In the case $r=1$, these are the jumping numbers of $f$. By the definition of mixed multiplier ideals, each facet of a jumping wall, that is, a codimension-one face, is cut out by a hyperplane of the form $\sum_{j=1}^r\ord_E(f_j)s_j = k_E + c$ with $c\in \mathbb{Z}_{>0}$ and $E$ an irreducible component of $\mu^*D$. Thus facets of jumping walls can potentially determine irreducible components of $Z(B_F^a)$ by replacing $s_j$ with $-s_j$, by Theorem \ref{thm: MainTheorem}.
The log-canonical threshold, or rather the interval $[0,\lct (f)]$, is generalised by the {\it $\LCT$-polytope} $$ \LCT(F) := \bigcap_{E}\{\lambda\in \mathbb{R}_{\geq 0}^r : \sum_{j=1}^r \ord_E(f_j)\lambda_j \leq k_E + 1\}.$$ The facets of $\LCT(F)$ intersecting $\mathbb{R}_{>0}^r$ non-trivially are always jumping walls of $F$. Define the {\it $\KLT_{a}$-region} $$\KLT_{a}(F):= \bigcap_{E}\{\lambda\in \mathbb{R}_{\geq 0}^r: \sum_{j=1}^r \ord_E(f_j)(\lambda_j - a_j) < k_E + 1\}.$$ If $r=1$ then $\KLT_a(f)=[0,\lct(f)+a)$. We rephrase $\LCT(F)$ and $\KLT_a(F)$ in terms of log-canonical and Kawamata log-terminal singularities in \ref{sec: JumpingWall}. This shows that $\LCT(F)$ and $\KLT_a(F)$ are independent of the chosen resolution. \begin{theorem}\label{thm: JumpingWall} If a facet of a jumping wall of $F$ intersects $\KLT_{a}(F)$, then the facet determines an irreducible component of $Z(B_F^a)$. \end{theorem} This theorem was shown by \cite{cassou2011multivariable} for $Z(B_F^{1})$ when $f_1,\ldots, f_r$ are germs of plane curves. We employ the same method, which is essentially the one used in \cite{kollar1997singularities}, \cite[Theorem B]{ClassicalJump}. From \Cref{thm: JumpingWall} we deduce a generalisation of the fact that the largest root of the Bernstein-Sato polynomial $b_f(s)$ is equal to $-\lct(f)$ when $r=1$. \begin{corollary}\label{thm: LCT} Let $\sum_{j=1}^r \ord_E(f_j)s_j = k_E + 1$ define the affine span of a facet of $\LCT(F)$. Then $\sum_{j=1}^r \ord_{E}(f_j)s_j +k_E + 1=0$ defines an irreducible component of $Z(B_F^a)$ if there exists at least one $j$ with $a_j\neq 0$ and $\ord_{E}(f_j)\neq 0$. \end{corollary} This together with \Cref{thm: MainTheorem} implies the analogue of the maximality statement from the case $r=1=a$: the irreducible components of codimension one of $Z(B_F^a)$ originating from the $\LCT$-polytope are the closest to the origin with that slope. Starting with the Bernstein-Sato ideal, a generalized version of the log-canonical threshold was also introduced in \cite[3.2]{Lei}. It was shown in \cite[Theorem 5.6]{Lei} that $s_1+\ldots+s_r+n=0$ defines the irreducible component of $Z(B_F)$ closest to the origin with that slope, if $F$ is the complete factorization of a hyperplane arrangement $\prod_{i=1}^rf_i$, in which case one can also see that this component corresponds to a facet of $\LCT(F)$. Saito \cite{RealLogCan} also introduced a version of log-canonical thresholds and jumping numbers for real algebraic functions, called {\it real log-canonical threshold} and {\it real jumping numbers}. ``Real'' here refers to working over $\mathbb{R}$. Real jumping numbers, like the usual jumping numbers defined when the base field is $\mathbb{C}$, are positive rational numbers. It is shown in \cite{RealLogCan} that the negatives of small real jumping numbers are roots of Bernstein-Sato polynomials. Interesting about these real jumping numbers is that they do not have to agree with the usual jumping numbers. These results are of further interest due to applications to statistics \cite{Wa}. Mixed multiplier ideals and their jumping walls will be defined on real algebraic manifolds in \ref{sec: RealJump}. There are also the associated notions of a {\it $\RKLT_a$-region}, {\it $\RLCT$-polytope} and real Bernstein-Sato ideal. In Theorem \ref{thmRs} and Corollary \ref{corRs} we give the real analogs of Theorem \ref{thm: JumpingWall} and Corollary \ref{thm: LCT}, generalising Saito's results.
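To illustrate these notions in the simplest possible case, take $X=\mathbb{C}^2$ and $F=(x,y)$, so that $D=\{xy=0\}$ is already a simple normal crossings divisor and one can take $\mu$ to be the identity. The components $E_1=\{x=0\}$ and $E_2=\{y=0\}$ satisfy $k_{E_1}=k_{E_2}=0$, $\ord_{E_1}(f_1)=\ord_{E_2}(f_2)=1$, and $\ord_{E_1}(f_2)=\ord_{E_2}(f_1)=0$. Then $\mathcal{J}(F^\lambda)=(x^{\lfloor \lambda_1\rfloor}y^{\lfloor \lambda_2\rfloor})$, the jumping walls are the lines $\lambda_1=c$ and $\lambda_2=c$ with $c\in\mathbb{Z}_{>0}$, $\LCT(F)=[0,1]^2$, and $\KLT_{(1,1)}(F)=[0,2)^2$. From the functional equation $$\partial_x\partial_y\, x^{s_1+1}y^{s_2+1}=(s_1+1)(s_2+1)\,x^{s_1}y^{s_2}$$ one checks that $Z(B_F^{(1,1)})=\{s_1+1=0\}\cup\{s_2+1=0\}$. This agrees with Theorem \ref{thm: MainTheorem}, Theorem \ref{thm: JumpingWall} and Corollary \ref{thm: LCT}: the facets $\lambda_1=1$ and $\lambda_2=1$ span facets of $\LCT(F)$, meet $\KLT_{(1,1)}(F)$, and determine the components $s_1+1=0$ and $s_2+1=0$, while the facets $\lambda_1=2$ and $\lambda_2=2$ do not meet $\KLT_{(1,1)}(F)$, and correspondingly $s_1+2=0$ and $s_2+2=0$ are not components. A more substantial example is given in Section \ref{secEx}.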
For the proof of Theorem \ref{thm: MainTheorem} we follow the strategy of Kashiwara \cite{kashiwara1976b} and Lichtin \cite{lichtin1989poles}. The main problem for the case $r>1$ is that the $\mathscr{D}_X$-modules computing the Bernstein-Sato ideals are not holonomic anymore, and thus a new idea is needed. This is essentially the problem which has been surmounted using relative holonomic $\mathscr{D}$-modules in \cite{budur2020zeroI}, \cite{budur2020zeroII}, \cite{robin}, in order to provide a topological interpretation of $Z(B_F^a)$. The results of these papers are thus crucial for us. Relative holonomic $\mathscr{D}$-modules appeared as early as \cite{sabbah1987proximiteII} and are also recently studied in \cite{AnalyticDirectIm, FS}. Our main technical result is Lemma \ref{gradeBound}. The proofs of the other results mentioned above are straightforward and need no essential new ideas. In Section \ref{sec: DXS} we gather the main results on relative $\mathscr{D}$-modules we need. In Section \ref{secUB} we use these results to prove Theorem \ref{thm: MainTheorem}. In Section \ref{secLB} we prove the other results. In Section \ref{secEx} we give an example. \medskip \noindent {\bf Acknowledgement.} We were informed that L. Wu has also proven all the results in this paper. We thank L. Wu and P. Zhao for useful discussions. N. Budur was supported by the grants FWO G097819N and Methusalem METH/15/026. R. van der Veer was supported by an FWO PhD fellowship. \section{$\ensuremath{\mathscr{D}}_X[s]$-modules}\label{sec: DXS} This section provides preliminaries on the theory of $\ensuremath{\mathscr{D}}_X[s]$-modules such as direct images and homological properties, where $s=(s_1,\ldots,s_r)$. \subsection{Relative holonomic $\mathscr{D}$-modules}\label{sec: RelHol} Let $X$ be a smooth complex variety and let $R$ be a regular, commutative, finitely generated $\ensuremath{\mathbb{C}}$-algebra which is an integral domain. The {\it sheaf of relative differential operators} on $X$ is defined by $$\ensuremath{\mathscr{D}}_X^R := \ensuremath{\mathscr{D}}_X \otimes_\ensuremath{\mathbb{C}} R.$$ The order filtration $F_j \ensuremath{\mathscr{D}}_X$ on $\ensuremath{\mathscr{D}}_X$ extends to a filtration $F_j \ensuremath{\mathscr{D}}_X^R := F_j\ensuremath{\mathscr{D}}_X\otimes_\ensuremath{\mathbb{C}} R$ on $\ensuremath{\mathscr{D}}_X^R$. The graded objects for this filtration are denoted by $\ensuremath{\gr^{rel}\RelMinspace}$. Denote by $\pi_{T^*X}:T^*X \to X$ and $\pi_{\Spec R}: \Spec R \to \{\operatorname{pt}\}$ the projection maps onto $X$ and a point, respectively. Since $\ensuremath{\operatorname{gr}}\ensuremath{\mathscr{D}}_X \cong (\pi_{T^*X})_* \O_{T^*X}$ \cite[Chapter 2]{hotta2007d} it holds that $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R \cong (\pi_{T^* X}\times \pi_{\Spec R})_*\O_{T^*X\times \Spec R}$. Since $\ensuremath{\mathscr{D}}_X^R$ is a sheaf of non-commutative rings, one should distinguish between left and right $\ensuremath{\mathscr{D}}_X^R$-modules. We may also refer to a $\ensuremath{\mathscr{D}}_X^R$-module without specifying left or right if no confusion is possible. In these cases it is intended that the result holds in either case.
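For instance, in the setting of the introduction one has $X=\mathbb{C}^n$ and $R=\mathbb{C}[s_1,\ldots,s_r]$, so that $\ensuremath{\mathscr{D}}_X^R=\ensuremath{\mathscr{D}}_X[s]$. In this case $F_j\ensuremath{\mathscr{D}}_X^R$ consists of the operators of order at most $j$ in $\partial$ with coefficients polynomial in $x$ and $s$, and $$\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R\simeq\mathbb{C}[x_1,\ldots,x_n,\xi_1,\ldots,\xi_n,s_1,\ldots,s_r],$$ the coordinate ring of $T^*X\times\mathbb{C}^r$, where $\xi_i$ denotes the principal symbol of $\partial_i$.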
For any filtered $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ there is an associated sheaf of modules on $T^*X\times \Spec R$ given by $(\pi_{T^*X}\times \pi_{\Spec R})^{-1}(\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{M}})\allowbreak\otimes_{\pi^{-1}\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R} \O_{T^*X\times \Spec R}$. From now on we write $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R$ and $\ensuremath{\gr^{rel}\RelMinspace} \ensuremath{\mathscr{M}}$ for the corresponding sheaves on $T^*X\times \Spec R$. A filtration compatible with $F_\bullet \mathscr{D}_X^R$ on a $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ is said to be {\it good} if $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{M}}$ is a coherent $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R$-module. A quasi-coherent $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ locally admits a good filtration if and only if it is coherent \cite[Corollary D.1.2]{hotta2007d}; in fact, one can take this filtration to be global \cite[Proof of Theorem 2.1.3]{hotta2007d}. For a coherent $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ the support $\Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}}$ of $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{M}}$ in $T^*X\times \Spec R$ is independent of the chosen filtration \cite[Lemma D.3.1]{hotta2007d} and is called the {\it relative characteristic variety}. Equivalently, the relative characteristic variety is locally determined by the radical of the annihilator ideal of $\ensuremath{\gr^{rel}\RelMinspace} \ensuremath{\mathscr{M}}$ in $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{D}}_X^R$. \begin{Lemma}[{\cite[Lemma 3.2.2]{budur2020zeroI}}]\label{prop: SESBehaviourChrel} For any short exact sequence of coherent $\ensuremath{\mathscr{D}}_X^R$-modules $$0\to \ensuremath{\mathscr{M}}_1\to \ensuremath{\mathscr{M}}_2 \to \ensuremath{\mathscr{M}}_3 \to 0 $$ it holds that $\Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}}_2 = \Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}}_1 \cup \Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}}_3.$ \end{Lemma} \begin{Definition}\label{def: RelHolonomic} A coherent $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ is said to be {\it relative holonomic} if its relative characteristic variety is a finite union $\Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}} = \cup_w \Lambda_w \times S_w$ where $\Lambda_w\subseteq T^* X$ are irreducible conic Lagrangian subvarieties and $S_w\subseteq \Spec R$ are irreducible subvarieties. \end{Definition} \begin{Lemma}[{\cite[Lemma 3.2.4]{budur2020zeroI}}]\label{prop: SubquotientRelHol} Any subquotient of a relative holonomic module is relative holonomic. \end{Lemma} The functor which associates to a left $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ the right $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}\otimes_{\O_X}\omega_X$ is an equivalence of categories, where $\omega_X$ is the canonical invertible sheaf. The pseudoinverse associates $\ensuremath{\mathcal{H}om}_{\O_X}(\omega_X,\allowbreak\ensuremath{\mathscr{M}})$ to a given right module $\ensuremath{\mathscr{M}}$. Pick local coordinates $x_1,\ldots, x_n$ on $X$, that is, regular functions such that $dx_1,\ldots,dx_n$ are a local basis for $\Omega_{X}^1$. There is an induced local section $dx := dx_1\wedge \ldots \wedge dx_n$ for $\omega_X$.
For any left $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ one has a locally defined $\O_X\otimes R$-linear isomorphism $\ensuremath{\mathscr{M}} \to \ensuremath{\mathscr{M}}\otimes_{\O_X}\omega_X$ associating to any section $m$ the section $m^* = m dx$. This identification can be made compatible with the $\ensuremath{\mathscr{D}}_X^R$-module structures. That is, for any operator $P$ of $\ensuremath{\mathscr{D}}_X^R$ there is an {\it adjoint operator} $P^*$ such that $$(P\cdot m)^* = m^* \cdot P^*.$$ Indeed, for a vector field $\xi = \sum \xi_i \partial_i$ this is satisfied by setting $\xi^* = -\sum \partial_i \xi_i$. Iterating this extends the definition to differential operators of arbitrary order. \subsection{Direct image} Let $\mu:Y\to X$ be a morphism of varieties. The {\it direct image functor} on right $\ensuremath{\mathscr{D}}_Y$-modules is defined by $$\mu_+ \ensuremath{\mathscr{M}}:= R\mu_*(\ensuremath{\mathscr{M}}\otimes_{\ensuremath{\mathscr{D}}_Y}^L \ensuremath{\mathscr{D}}_{Y\to X}) $$ where $\ensuremath{\mathscr{D}}_{Y\to X}:= \O_Y\otimes_{\mu^{-1}\O_X}\mu^{-1}\ensuremath{\mathscr{D}}_X$ is the transfer $(\ensuremath{\mathscr{D}}_{Y},\mu^{-1}\ensuremath{\mathscr{D}}_X)$-bimodule. There is an induced $\ensuremath{\mathscr{D}}_Y^R$-module direct image functor. Indeed, consider a right $\ensuremath{\mathscr{D}}_Y^R$-module $\ensuremath{\mathscr{M}}$ and observe that multiplication by $r\in R$ is $\ensuremath{\mathscr{D}}_Y$-linear. By the functoriality of the $\ensuremath{\mathscr{D}}_Y$-module direct image it follows that there is an associated endomorphism on $\mu_+ \ensuremath{\mathscr{M}}$. This equips the direct image with a canonical structure of a complex of $\ensuremath{\mathscr{D}}_X^R$-modules. For $j\in \mathbb{Z}$, the cohomology sheaf $ H^j\mu_+\ensuremath{\mathscr{M}}$ is called the {\it $j$-th direct image}. Whenever $\mu$ is proper and $\ensuremath{\mathscr{M}}$ is coherent as a $\ensuremath{\mathscr{D}}_Y^R$-module it holds that $H^j\mu_+\ensuremath{\mathscr{M}}$ is coherent over $\ensuremath{\mathscr{D}}_X^R$ for any $j$. The proof of this statement is identical to the absolute case \cite[Theorem 2.5.1]{hotta2007d}. The following proposition may be established identically to the absolute case \cite[Theorem 4.4.1]{sabbah2011introduction}. \begin{proposition}\label{prop: DirectImageRelHol} Suppose that $\mu$ is proper and let $\ensuremath{\mathscr{M}}$ be a relative holonomic right $\ensuremath{\mathscr{D}}_Y^R$-module. Then $H^j\mu_+ \ensuremath{\mathscr{M}}$ is relative holonomic for any $j\in \mathbb{Z}$. \end{proposition} \subsection{Homological notions}\label{sec: HomNotion} Let $n = \dim X$ and $r = \dim R$. For some results in this section the distinction between left and right modules is relevant. Such results have been stated in terms of right $\ensuremath{\mathscr{D}}_X^R$-modules, which is the case we will need. It should be clear that these results have obvious analogues for left $\ensuremath{\mathscr{D}}_X^R$-modules. \begin{definition} Let $\ensuremath{\mathscr{M}}$ be a non-zero coherent $\ensuremath{\mathscr{D}}_X^R$-module. The smallest integer $j\geq 0$ such that $\Ex{\ensuremath{\mathscr{D}}_X^R}{j}(\ensuremath{\mathscr{M}},\ensuremath{\mathscr{D}}_X^R) \neq 0$ is called the {\it grade} of $\ensuremath{\mathscr{M}}$ and is denoted $j(\ensuremath{\mathscr{M}})$. If $\ensuremath{\mathscr{M}} = 0$ then $j(\ensuremath{\mathscr{M}})$ is said to be infinite.
\end{definition} \begin{definition}\label{def: BSIdeal} The {\it Bernstein-Sato ideal} of a $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ is given by $B_\ensuremath{\mathscr{M}}:=\Ann_R \ensuremath{\mathscr{M}}$. We denote by $Z(B_\ensuremath{\mathscr{M}})$ the zero locus of $B_\ensuremath{\mathscr{M}}$, that is, the reduced closed subscheme defined by the radical ideal of $B_\ensuremath{\mathscr{M}}$ in $\Spec R$. \end{definition} \begin{Lemma}[{\cite[Lemma 3.4.1]{budur2020zeroI}}]\label{prop: ChrelGrades} Let $\ensuremath{\mathscr{M}}$ be a relative holonomic $\ensuremath{\mathscr{D}}_X^R$-module. Then $$\dim \Ch^{rel}\RelMinspace \ensuremath{\mathscr{M}} + j(\ensuremath{\mathscr{M}}) = 2n + r.$$ \end{Lemma} \begin{Lemma}[{\cite[Lemma 3.2.2]{budur2020zeroI}}]\label{rem: GradeIFFBernsteinIdeal} Let $\ensuremath{\mathscr{M}}$ be a relative holonomic $\ensuremath{\mathscr{D}}_X^R$-module. Then $Z(B_\ensuremath{\mathscr{M}})$ is the projection of $\Ch^{rel}\RelMinspace\ensuremath{\mathscr{M}}$ on $\Spec R$. Hence, $j(\ensuremath{\mathscr{M}})=n+k$ if and only if $Z(B_\ensuremath{\mathscr{M}})$ has codimension $k$ in $\Spec R$. \end{Lemma} \begin{definition} A non-zero coherent $\ensuremath{\mathscr{D}}_X^R$-module $\ensuremath{\mathscr{M}}$ is said to be {\it $j$-pure} if $j(\ensuremath{\mathscr{N}}) = j(\ensuremath{\mathscr{M}}) = j$ for every non-zero submodule $\ensuremath{\mathscr{N}}$. \end{definition} \begin{Lemma}[{\cite[Lemma 3.4.2]{budur2020zeroI}}]\label{prop: Injective3.4.2} Let $\ensuremath{\mathscr{M}}$ be a $j$-pure relative holonomic $\ensuremath{\mathscr{D}}_X^R$-module and suppose that $b\in R$ is not contained in any minimal prime ideal of $R$ containing $B_\ensuremath{\mathscr{M}}$. Then there exists a good filtration on $\ensuremath{\mathscr{M}}$ such that multiplication by $b$ induces injective endomorphisms on $\ensuremath{\mathscr{M}}$ and $\ensuremath{\gr^{rel}\RelMinspace}\ensuremath{\mathscr{M}}$. \end{Lemma} \begin{corollary}\label{cor: injective} Let $\ensuremath{\mathscr{M}}$ be a relative holonomic $\ensuremath{\mathscr{D}}_X^R$-module with $R=\mathbb{C}[s_1,\ldots,s_r]$, $r>0$. (i) There exists a non-empty Zariski open subset $W(\ensuremath{\mathscr{M}})$ of the space $R_1$ of polynomials in $R$ of degree one such that every $\ell \in W(\ensuremath{\mathscr{M}})$ acts injectively on $\ensuremath{\mathscr{M}}$. (ii) One can assume, by shrinking $W(\ensuremath{\mathscr{M}})$ if $W(\ensuremath{\mathscr{M}})=R_1$, that there exists a Zariski closed proper subset $V(\ensuremath{\mathscr{M}})$ of $\mathbb{C}^r$ such that $$W(\ensuremath{\mathscr{M}})=\{\ell\in R_1\mid \ell\text{ does not vanish on any irreducible component of }V(\ensuremath{\mathscr{M}})\}.$$ \end{corollary} \begin{proof} Denote by $\ensuremath{\mathscr{M}}_i$ the largest submodule of $\ensuremath{\mathscr{M}}$ with $j(\ensuremath{\mathscr{M}}_i)\geq i\ge 0$. The modules $\ensuremath{\mathscr{M}}_i$ exist and form a decreasing sequence $$\ensuremath{\mathscr{M}}=\ensuremath{\mathscr{M}}_0\supset \ensuremath{\mathscr{M}}_1\supset\dots$$ by \cite[IV.1.6.(i) and IV.2.8]{B}. By Lemma \ref{prop: SubquotientRelHol}, $\ensuremath{\mathscr{M}}_i$ are also relative holonomic. Thus by Lemma \ref{rem: GradeIFFBernsteinIdeal}, $j(\ensuremath{\mathscr{M}}_i)\ge n$ for all $i$ with $\ensuremath{\mathscr{M}}_i\neq 0$, and $\ensuremath{\mathscr{M}}_{i}=0$ if $i>n+r$.
The successive quotients $\ensuremath{\mathscr{M}}_i/\ensuremath{\mathscr{M}}_{i+1}$ are either $0$ or pure of grade $i$, by \cite[Proposition 4.11]{robin}. Let $I$ denote the set of indices $i$ such that $\ensuremath{\mathscr{M}}_i/\ensuremath{\mathscr{M}}_{i+1}\neq 0$. If $i\in I$, let $V_i\subset \mathbb{C}^r$ be the zero locus of $B_{\ensuremath{\mathscr{M}}_i/\ensuremath{\mathscr{M}}_{i+1}}$, and let $W_i$ be the set of $\ell\in R_1$ that do not vanish on any irreducible component of $V_i$. Each $W_i$ is non-empty Zariski open in $R_1$ and every $\ell\in W_i$ acts injectively on $\ensuremath{\mathscr{M}}_i/\ensuremath{\mathscr{M}}_{i+1}$ by Lemma \ref{prop: Injective3.4.2}. If $i=n$, then $V_n=\mathbb{C}^r$ and $W_n=R_1$. If $n< i\le n+r$, then $V_i\subsetneq \mathbb{C}^r$ and $W_i$ might still be all of $R_1$. Set $W(\ensuremath{\mathscr{M}}):=\cap_{i\in I}W_i$. Then $W(\ensuremath{\mathscr{M}})$ is non-empty Zariski open in $R_1$ and every $\ell\in W(\ensuremath{\mathscr{M}})$ acts injectively on $\ensuremath{\mathscr{M}}$. This gives (i). If $W(\ensuremath{\mathscr{M}})\subsetneq R_1$, define $V(\ensuremath{\mathscr{M}}):=\cup_{i}V_i$ where the union runs over $i\in I$ such that $W_i\neq R_1$. It is clear that this satisfies (ii). \end{proof} \begin{corollary}\label{commutativity} Let $\mu:Y\to X$ be a morphism of smooth varieties, and let $\ensuremath{\mathscr{M}}$ be a relative holonomic $\ensuremath{\mathscr{D}}_Y^{R}$-module, with $R=\ensuremath{\mathbb{C}}[s_1,\dots,s_r]$, $r>0$. There exists a finite set of points $\{\beta_{ij}\in\mathbb{C}\}$ such that for every $\alpha$ in the non-empty Zariski open complement $\mathbb{C}^r\setminus\cup_{i,j}\{s_i-\beta_{ij}=0\}$, the natural morphism of $\mathscr{D}_X$-modules \begin{equation}\label{eqW} \left(H^0\mu_+ \ensuremath{\mathscr{M}}\right)\otimes _R R/\mathfrak{m}_\alpha \to H^0\mu_+(\ensuremath{\mathscr{M}}\otimes_R R/\mathfrak{m}_\alpha)\end{equation} is an isomorphism, where $\mathfrak{m}_\alpha=(s_1-\alpha_1,\ldots,s_r-\alpha_r)$ is the maximal ideal in $R$ of $\alpha$. \end{corollary} \begin{proof} First, let $\ell\in W(\ensuremath{\mathscr{M}})\subset R_1$ be a polynomial of degree one, with $W(\ensuremath{\mathscr{M}})$ as in Corollary \ref{cor: injective}. Then multiplication by $\ell$ on $\ensuremath{\mathscr{M}}$ followed by the direct image induces a long exact sequence of $\ensuremath{\mathscr{D}}_X^R$-modules $$0 \to H^0\mu_+\ensuremath{\mathscr{M}} \xrightarrow{\ell\cdot} H^0\mu_+ \ensuremath{\mathscr{M}} \to H^0\mu_+ \left(\ensuremath{\mathscr{M}}\otimes_{R} R/(\ell)\right) \to H^1\mu_+\ensuremath{\mathscr{M}} \xrightarrow{\ell\cdot} H^1\mu_+\ensuremath{\mathscr{M}}.$$ Thus $(H^0\mu_+ \ensuremath{\mathscr{M}})\otimes_{R} R/(\ell)$ is a $\mathscr{D}_X$-submodule of $H^0\mu_+ (\ensuremath{\mathscr{M}}\otimes_{R} R/(\ell))$. Their quotient is isomorphic to the kernel of $\ell$ on $H^1\mu_+\ensuremath{\mathscr{M}}$. We can assume further that $\ell\in W(\ensuremath{\mathscr{M}})\cap W(H^1\mu_+\ensuremath{\mathscr{M}})$ since the intersection is Zariski open and dense.
Then this kernel is zero, and hence $$\left(H^0\mu_+ \ensuremath{\mathscr{M}}\right)\otimes_{R} R/(\ell)\simeq H^0\mu_+(\ensuremath{\mathscr{M}}\otimes_{R} R/(\ell)).$$ By Corollary \ref{cor: injective}, we can assume $W(\ensuremath{\mathscr{M}})\cap W(H^1\mu_+\ensuremath{\mathscr{M}})$ is the set of $\ell\in R_1$ that do not vanish on any irreducible component of a Zariski closed proper subset $V(\ensuremath{\mathscr{M}})\cup V(H^1\mu_+\ensuremath{\mathscr{M}})$ of $\mathbb{C}^r$. Thus, there exists a finite set of points $\{\beta_{1j}\in\mathbb{C}\}$ such that $\ell=s_1-\alpha_1\in W(\ensuremath{\mathscr{M}})\cap W(H^1\mu_+\ensuremath{\mathscr{M}})$ for $\alpha_1\in\mathbb{C}\setminus\{\beta_{1j}\}_j$. If $r=1$, the above argument gives the claim. If $r>1$, we proceed by induction since $R/(s_1-\alpha_1) \simeq \mathbb{C}[s_2,\ldots,s_r]$. \end{proof} \section{Upper bounds}\label{secUB} We consider first the algebraic case of \Cref{thm: MainTheorem}. Since $B_F^a$ is the intersection of all local $B_{F,x}^a$, we may assume that $X$ is affine and admits local coordinates $x_1,\ldots, x_n$. Let $\mu:Y\to X$ be a strong log resolution of $f$, as in the introduction, and set $G = F\circ\mu$ and $g_j=f_j\circ\mu$. As in the introduction, we use the notation $\mathscr{D}_X[s]$ for $\mathscr{D}_X^R$ if $R=\mathbb{C}[s]$. \subsection{Translation to right modules} By the translation between left and right modules in \ref{sec: RelHol} the functional equation $P F^{s+a} = b(s) F^s$ may be restated as the equation $F^{s+a}dx \cdot P^* = b(s) F^s dx $ in $$\ensuremath{\mathscr{N}}:=\ensuremath{\mathscr{D}}_X[s] F^s \otimes_{\O_X}\omega_X = F^sdx\cdot \ensuremath{\mathscr{D}}_X[s].$$ Define $\ensuremath{\mathscr{M}}$ to be the submodule of $ \ensuremath{\mathscr{D}}_Y[s] G^s\otimes_{\O_Y}\omega_Y$ spanned by $G^s \mu^*(dx)$ over $\ensuremath{\mathscr{D}}_Y[s]$, $$ \ensuremath{\mathscr{M}}:= G^s \mu^*(dx)\cdot \ensuremath{\mathscr{D}}_Y[s]. $$ \begin{lemma} The right $\ensuremath{\mathscr{D}}_Y[s]$-module $\ensuremath{\mathscr{M}}$ is relative holonomic. \end{lemma} \begin{proof} The left $\ensuremath{\mathscr{D}}_Y[s]$-module $\ensuremath{\mathscr{D}}_Y[s]G^s$ is relative holonomic by \cite[R\'esultat 1]{maisonobe2016filtration}. Then the associated right module $\ensuremath{\mathscr{D}}_Y[s]G^s\otimes_{\O_Y} \omega_Y$ is also relative holonomic. Hence, Lemma \ref{prop: SubquotientRelHol} implies that the submodule $\ensuremath{\mathscr{M}}$ is also relative holonomic. \end{proof} \subsection{$\ensuremath{\mathscr{D}}_X[s]\langle t\rangle$-modules} Let $\ensuremath{\mathscr{D}}_X[s]\langle t \rangle$ denote the sheaf of rings obtained from $\ensuremath{\mathscr{D}}_X[s]$ by adding a new variable $t$ which commutes with sections of $\ensuremath{\mathscr{D}}_X$ and is subject to $s_j t = t(s_j +a_j)$ for every $j=1,\ldots,r$. The $\ensuremath{\mathscr{D}}_X[s]$-module $\ensuremath{\mathscr{N}}$ may be equipped with the structure of a right $\ensuremath{\mathscr{D}}_X[s]\langle t\rangle$-module by the action $$F^sdx \cdot P(x,\partial,s) \cdot t =F^{s + a}dx\cdot P(x,\partial, s + a).$$ In this formalism $B_F^a$ is the Bernstein-Sato ideal of $\ensuremath{\mathscr{N}}/ \ensuremath{\mathscr{N}} t$. An analogous $\ensuremath{\mathscr{D}}_Y[s] \langle t\rangle$-module structure can be given to $\ensuremath{\mathscr{M}}$.
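As an illustration of this formalism, suppose $r=1$, $X=\mathbb{C}$, $f=x$ and $a=1$, so that $\ensuremath{\mathscr{N}}=x^s dx\cdot\ensuremath{\mathscr{D}}_X[s]$ and $\ensuremath{\mathscr{N}} t=x^{s+1}dx\cdot\ensuremath{\mathscr{D}}_X[s]$. Taking the adjoint of the classical equation $\partial_x\cdot x^{s+1}=(s+1)x^{s}$ gives $x^{s+1}dx\cdot(-\partial_x)=(s+1)\,x^{s}dx$, that is, $$x^{s}dx\cdot t\,(-\partial_x)=(s+1)\,x^{s}dx.$$ Hence $(s+1)\ensuremath{\mathscr{N}}\subseteq\ensuremath{\mathscr{N}} t$, so $s+1\in B_{\ensuremath{\mathscr{N}}/\ensuremath{\mathscr{N}} t}=B_f^{1}$, recovering the Bernstein-Sato polynomial $b_x(s)=s+1$.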
\begin{Lemma}\label{lem: BernsteinSatoPolynomialUpstairs} The Bernstein-Sato ideal $B_{\ensuremath{\mathscr{M}}/\ensuremath{\mathscr{M}} t}$ contains a polynomial of the form $$b(s)=\prod_E \prod_{j=1}^N(\ord_{E}(g_1) s_1 + \cdots + \ord_{E}(g_r) s_r + k_E+ j)$$ where $E$ ranges over the irreducible components of $\mu^*D$, for some $N\in \mathbb{Z}_{\geq 0}$. \end{Lemma} \begin{proof} The proof is analogous to the one in \cite[Section 4]{lichtin1989poles}. We can reduce the claim to the local analytic Bernstein-Sato ideal at a point $y\in Y$ lying in the support of $\mu^*D$, since $B_{\ensuremath{\mathscr{M}}/\ensuremath{\mathscr{M}} t}$ is the intersection of the local analytic Bernstein-Sato ideals. Let $E_i$ with $i\in I$ be the local analytic irreducible components of $\mu^*D$ at $y$. We can assume that there are local analytic coordinates $z_1,\ldots,z_n$ where every $E_i$ with $i\in I$ is determined by some $z_{j_i}$. After relabeling, we may assume that $j_i = i$. In these local coordinates $$G^s = \prod_{i\in I} z_i^{\sum_{j=1}^r \ord_{E_i}(g_j)s_j}\qquad\text{ and }\qquad \mu^*(dx) = v \prod_{i\in I} z_i^{k_i} dz$$ where $v$ is a local unit. Let $$P = v^{-1}\left(\prod_{i\in I}(-\partial_i)^{\sum_{j=1}^r a_j\ord_{E_i}(g_j)}\right) v.$$ Then $$G^{s+a}\mu^*(dx) \cdot P = q(s) G^s \mu^*(dx) $$ where $$q(s) = \prod_{i\in I}\left(\sum_{j=1}^r \ord_{E_i}(g_j)s_j + \sum_{j=1}^r a_j\ord_{E_i}(g_j) + k_i\right)\cdots\left(\sum_{j=1}^r \ord_{E_i}(g_j)s_j + 1 + k_i \right).$$ \end{proof} The $\ensuremath{\mathscr{D}}_X$-linear endomorphism $t$ induces an endomorphism on $H^0\mu_+\ensuremath{\mathscr{M}}$. The relation $s_it = t(s_i +a_i)$ also holds on $H^0\mu_+\ensuremath{\mathscr{M}}$ due to the functoriality of the direct image. Hence $H^0\mu_+\ensuremath{\mathscr{M}}$ is equipped with the structure of a $\ensuremath{\mathscr{D}}_X[s]\langle t\rangle$-module. The surjection of right $\ensuremath{\mathscr{D}}_Y[s]$-modules $\ensuremath{\mathscr{D}}_Y[s]\to \ensuremath{\mathscr{M}}$ defined by $1\mapsto G^s \mu^*(dx)$ induces a morphism $H^0\mu_+ (\ensuremath{\mathscr{D}}_Y[s] )\to H^0\mu_+\ensuremath{\mathscr{M}}$. Observe that $H^0\mu_+ (\ensuremath{\mathscr{D}}_Y[s] ) = \mu_*(\ensuremath{\mathscr{D}}_{Y\to X}\otimes_\mathbb{C}\mathbb{C}[s])$ contains a global section corresponding to $1\otimes 1$. We write $u$ for the image of this section in $H^0\mu_+ \ensuremath{\mathscr{M}}$, and $\ensuremath{\mathscr{U}}$ for the right $\ensuremath{\mathscr{D}}_X[s]\langle t\rangle$-submodule generated by $u$. \begin{Lemma}\label{lem: SurjectionUF} There is a surjective morphism of right $\ensuremath{\mathscr{D}}_X[s]\langle t \rangle$-modules $\ensuremath{\mathscr{U}}\to \ensuremath{\mathscr{N}}$ sending $u$ to $F^sdx$. \end{Lemma} \begin{proof} This is analogous to the corresponding absolute result \cite[Chapter 5, p246]{bjork1979rings}. One must show that $(F^s dx)P = 0$ whenever $uP = 0$ for some differential operator $P$ over an open subset $V\subseteq X$. The resolution of singularities $Y\to X$ is an isomorphism over the complement of the divisor $D$ determined by $f$. This induces isomorphisms $\ensuremath{\mathscr{U}} \simeq H^0\mu_+ \ensuremath{\mathscr{M}} \simeq \ensuremath{\mathscr{N}}$ outside of $D$. It follows that the support of the coherent sheaf of $\O_V$-modules $\O_V ((F^s dx) P) $ lies in $D$. Thus $f^N ((F^s dx) P) = 0$ for some sufficiently large $N\geq 0$. Note that $f$ is a non-zero divisor of $\ensuremath{\mathscr{N}}(V)$. Therefore, $(F^s dx) P= 0$ on $V$ as desired.
\end{proof} \begin{lemma}\label{gradeBound} The module $(H^0\mu_+\ensuremath{\mathscr{M}})/\ensuremath{\mathscr{U}}$ is relative holonomic, and $j((H^0\mu_+\ensuremath{\mathscr{M}})/\ensuremath{\mathscr{U}})>n$. \end{lemma} \begin{proof} Let $\mathscr{L}=(H^0\mu_+\ensuremath{\mathscr{M}})/\ensuremath{\mathscr{U}}$. The fact that $\mathscr{L}$ is relative holonomic is clear since $\ensuremath{\mathscr{M}}$ is. The fact that $j(\mathscr{L})>n$ is equivalent to the fact that $\Ann_{\ensuremath{\mathbb{C}}[s]}\mathscr{L}\not=0$. By \cite[Theorem E]{robin}, for any $\alpha$ in the zero locus of $\Ann_{\ensuremath{\mathbb{C}}[s]}\mathscr{L}$, \begin{equation}\label{tensor} \mathscr{L}\otimes_{\ensuremath{\mathbb{C}}[s]}\frac{\ensuremath{\mathbb{C}}[s]}{\mathfrak{m}_\alpha}\not=0, \end{equation} where $\mathfrak{m}_\alpha$ is the maximal ideal corresponding to $\alpha$. Let $\ensuremath{\mathbb{C}}_\alpha=\ensuremath{\mathbb{C}}[s]/\mathfrak{m}_\alpha$. To prove that $\Ann_{\ensuremath{\mathbb{C}}[s]}\mathscr{L}\not=0$ it thus suffices to show that $\mathscr{L}\otimes_{\ensuremath{\mathbb{C}}[s]}\ensuremath{\mathbb{C}}_\alpha=0$ for some $\alpha\in\ensuremath{\mathbb{C}}^r$. We consider the exact sequence of $\mathscr{D}_X$-modules \begin{equation}\label{eqss}\ensuremath{\mathscr{U}}\otimes \ensuremath{\mathbb{C}}_\alpha\to \left(H^0\mu_+\ensuremath{\mathscr{M}}\right)\otimes\ensuremath{\mathbb{C}}_\alpha\to \mathscr{L}\otimes_{\ensuremath{\mathbb{C}}[s]}\ensuremath{\mathbb{C}}_\alpha\to 0.\end{equation} By Lemma \ref{commutativity}, $\left(H^0\mu_+\ensuremath{\mathscr{M}}\right)\otimes\ensuremath{\mathbb{C}}_\alpha=H^0\mu_+(\ensuremath{\mathscr{M}}\otimes\ensuremath{\mathbb{C}}_\alpha)$ for all $\alpha\in\mathbb{C}^r$ outside a finite union of hyperplanes of type $\{s_i-\beta_{ij}=0\}$ with $\beta_{ij}\in\mathbb{C}$. Among such $\alpha$, we now pick $\alpha\in\mathbb{Z}^r$ of the form $\alpha=\alpha'-\mathbf k$ with $\mathbf k=(k,\ldots,k)\in\mathbb{Z}^r$ for $k\in\mathbb{N}$ arbitrarily large with respect to a fixed $\alpha'\in\alpha+\mathbb{Z}^r$. We consider the diagram \[ \begin{tikzcd} U=X\setminus D \arrow[r,"j'"]\arrow[dr,"j"]& Y\arrow[d,"\mu"]\\ & X \end{tikzcd} \] where $j$ and $j'$ are the natural open embeddings. Our choice of $\alpha\in\mathbb{Z}^r$ implies the equality of regular holonomic right $\mathscr{D}_Y$-modules $$ \ensuremath{\mathscr{M}}\otimes_{\ensuremath{\mathbb{C}}[s]} \ensuremath{\mathbb{C}}_\alpha = (\mathscr{D}_Y[s]G^s\otimes_{\mathcal O_Y}\omega_Y)\otimes_{\ensuremath{\mathbb{C}}[s]}\ensuremath{\mathbb{C}}_\alpha, $$ corresponding to the regular holonomic left $\mathscr{D}_Y$-module $$\mathscr{D}_Y[s]G^s\otimes_{\mathbb{C}[s]}\mathbb{C}_\alpha\simeq \mathscr{D}_YG^\alpha =\mathcal O_Y[g^{-1}]= j'_+(\mathscr{D}_Ug^{-1})$$ with $g=\prod_{j=1}^rg_j$, whose de Rham complex is the perverse sheaf $Rj'_*\mathbb{C}_{U}[n]$. The first assertion can be checked locally, and the rest are well-known, see \cite[Theorem 2.5.1]{budur2020zeroI}. Since $j$ is an affine morphism, the derived direct image $R\mu_*(Rj'_*\mathbb{C}_{U}[n]) = Rj_*\mathbb{C}_U[n]$ is also perverse and hence equal to the perverse $0$-direct image ${}^pR^0\mu_*(Rj'_*\mathbb{C}_{U}[n])$. Equivalently, using the Riemann-Hilbert correspondence between regular holonomic $\mathscr{D}$-modules and perverse sheaves, there is an isomorphism of left $\mathscr{D}_X$-modules, $$ (H^0\mu_+)\mathcal O_Y[g^{-1}] \simeq \mathcal O_X[f^{-1}] \simeq \ensuremath{\mathscr{D}}_XF^s \otimes_{\mathbb{C}[s]} \mathbb{C}_\alpha.
$$ In terms of right $\mathscr{D}_X$-modules this gives $$H^0\mu_+(\ensuremath{\mathscr{M}}\otimes_{\mathbb{C}[s]}\mathbb{C}_\alpha) \simeq \ensuremath{\mathscr{N}}\otimes_{\mathbb{C}[s]}\mathbb{C}_\alpha \simeq F^\alpha dx\cdot\mathscr{D}_X.$$ Thus the first map in (\ref{eqss}) is the map $\ensuremath{\mathscr{U}}\otimes \ensuremath{\mathbb{C}}_\alpha=\ensuremath{\mathscr{D}}_{X}[s]u\otimes \ensuremath{\mathbb{C}}_\alpha\to F^\alpha dx\cdot\mathscr{D}_X$ that sends $u$ to $F^\alpha dx$. Hence this map is surjective. This shows that $\mathscr{L}\otimes_{\ensuremath{\mathbb{C}}[s]}\ensuremath{\mathbb{C}}_\alpha=0$ as required. \end{proof} \subsection{Proof of Theorem \ref{thm: MainTheorem} - algebraic case.} Let $\mathscr{L}=(H^0\mu_+\ensuremath{\mathscr{M}})/\ensuremath{\mathscr{U}}$. The Bernstein-Sato ideals $B_{\mathscr{L} t^n}$ form an increasing sequence of ideals in the Noetherian ring $\ensuremath{\mathbb{C}}[s]$. Hence there must exist some $N\geq 1$ such that $B_{\mathscr{L} t^n} = B_{\mathscr{L} t^{n+1}}$ for all $n\geq N$. By \Cref{gradeBound} the $\ensuremath{\mathscr{D}}_X[s]$-module $\mathscr{L}$ has grade $\ge n + 1$, so Lemma \ref{rem: GradeIFFBernsteinIdeal} provides some non-zero $q(s_1,\ldots,s_r) \in B_{\mathscr{L}}$. Then also $q\in B_{\mathscr{L} t^N}$. Observe that one has the relation $$q(s_1,\ldots,s_r)t = tq(s_1+a_1,\ldots, s_r + a_r).$$ In particular it follows that $q(s + a)\in B_{\mathscr{L} t^{N+1}}$. Due to the stabilisation $B_{\mathscr{L} t^N} = B_{\mathscr{L} t^{N+1}}$ it follows by iteration that $q(s+ja)\in B_{\mathscr{L} t^N}$ for any integer $j\geq 0$. Due to the estimate for the slopes in \Cref{thrmMoreA} it follows that we can pick some polynomial $r(s)$ which annihilates $\mathscr{L} t^N$ and such that $r(s+a)$ does not vanish on any codimension one irreducible component of $Z(B_{F,x}^a)$. We now follow closely \cite{kashiwara1976b} and \cite{lichtin1989poles}. Let $b(s)$ be the Bernstein-Sato polynomial for $\ensuremath{\mathscr{M}}/\ensuremath{\mathscr{M}} t$ provided by \Cref{lem: BernsteinSatoPolynomialUpstairs}. Notice that the action of $t$ is injective on $\ensuremath{\mathscr{M}}$. This means that the morphism $$\phi:\ensuremath{\mathscr{M}}\to \ensuremath{\mathscr{M}}: m_1\mapsto \text{the unique }m_2 \text{ such that }m_1b(s)=m_2t,$$ is well-defined and $\ensuremath{\mathscr{D}}_Y$-linear, and that $b(s)=t\circ \phi:\ensuremath{\mathscr{M}}\to\ensuremath{\mathscr{M}}$ as a morphism of $\ensuremath{\mathscr{D}}_Y$-modules. By functoriality we thus conclude that $b(s)=t\circ H^0\mu_+\phi$ as a morphism on $H^0\mu_+\ensuremath{\mathscr{M}}$. This implies that $$(H^0\mu_+\ensuremath{\mathscr{M}}) b(s) \subset H^0(\mu_+\ensuremath{\mathscr{M}}) t.$$ Set $B := \prod_{j=0}^{N} b(s+ ja)$. Then with a similar argument applied inductively we have that $(H^0\mu_+\ensuremath{\mathscr{M}}) B(s)\subset (H^0\mu_+\ensuremath{\mathscr{M}}) t^{N+1}$. 
Thus we have $$(H^0\mu_+\ensuremath{\mathscr{M}}) B(s)r(s+a)\subset(H^0\mu_+\ensuremath{\mathscr{M}}) t^{N+1}r(s+a)=(H^0\mu_+\ensuremath{\mathscr{M}}) t^{N}r(s)t.$$ Since $\mathscr{L} t^N=((H^0\mu_+\ensuremath{\mathscr{M}})t^N+\ensuremath{\mathscr{U}})/\ensuremath{\mathscr{U}}$ and $r$ annihilates $\mathscr{L} t^N$, we have $$((H^0\mu_+\ensuremath{\mathscr{M}})t^N+\ensuremath{\mathscr{U}})r\subset \ensuremath{\mathscr{U}},$$ and hence $$(H^0\mu_+\ensuremath{\mathscr{M}})t^Nr\subset ((H^0\mu_+\ensuremath{\mathscr{M}})t^N+\ensuremath{\mathscr{U}})r\subset \ensuremath{\mathscr{U}}.$$ In particular, since $\ensuremath{\mathscr{U}}\subset H^0\mu_+\ensuremath{\mathscr{M}}$ we have that $\ensuremath{\mathscr{U}} B(s)r(s+a)\subset \ensuremath{\mathscr{U}} t$, that is, $B(s)r(s+a)$ lies in the Bernstein-Sato ideal $B_{\ensuremath{\mathscr{U}}/\ensuremath{\mathscr{U}} t}$. By \Cref{lem: SurjectionUF} we have a $\ensuremath{\mathscr{D}}_X[s]\langle t\rangle$-linear surjection $\ensuremath{\mathscr{U}} \to \ensuremath{\mathscr{N}}$. Thus $\ensuremath{\mathscr{U}}/\ensuremath{\mathscr{U}} t$ surjects onto $\ensuremath{\mathscr{N}}/\ensuremath{\mathscr{N}} t$, and so $B(s)r(s+a)$ also annihilates $\ensuremath{\mathscr{N}}/\ensuremath{\mathscr{N}} t$. This implies that $Z(B_F^a)\subseteq Z(B(s)r(s+a))$. Since none of the irreducible components of $Z(r(s+a))$ is a codimension-one irreducible component of $Z(B_F^a)$, this gives the desired result. \hfill $\Box$ \subsection{The analytic case} The proof of \Cref{thm: MainTheorem} proceeds similarly in the local analytic case, that is, when the smooth affine variety $X$ is replaced with the germ of a complex manifold $(X,x)$, or equivalently, with a very small open ball $\Omega_x$ centered at $x$ in $X$. By \cite[3.6]{budur2020zeroI}, all the results we have used for relative holonomic $\mathscr{D}_X$-modules hold in the local analytic case. The log resolution $\mu:Y\to X=\Omega_x$ has the property that $Y$ admits a finite cover $\{Y_k\}$ by open subsets on each of which $g=f\circ\mu$ is locally a monomial. Relative holonomicity can be defined for any analytic $\mathscr{D}_Y[s]$-module admitting a good filtration on each $Y_k$, and by \cite[Theorem 1.17]{AnalyticDirectIm} the analytic direct image functor $\mu_+$ for such modules preserves relative holonomicity. Thus all the results from this section extend to the analytic setting. \section{Lower bounds}\label{secLB} \subsection{Proof of Proposition \ref{prop1.3}.} Let $x$ be a smooth point of $C$. We can assume that $x_1$ is a local equation for $C$ at $x$. Then locally at $x$, $f_j=x_1^{N_j}u_j$ with $N_j=\ord_C(f_j)$ and $u_j$ a locally invertible function. We assume $m=\sum_{j=1}^rN_ja_j$ is non-zero. One now easily computes that $B_{F,x}^a$ is the principal ideal generated by $$b(s)=\prod_{c=1}^m \left(\left(\sum_{j=1}^r N_js_j\right)+ c \right)$$ corresponding to the relation $$b(s)\prod_{j=1}^rf_j^{s_j}= \partial_1^m\left(\prod_{j=1}^r u_j^{a_j}\right)^{-1}\prod_{j=1}^rf_j^{s_j+a_j}.$$ \hfill $\Box$ \subsection{Jumping walls}\label{sec: JumpingWall} In this subsection we establish \Cref{thm: JumpingWall} on the relation between the jumping walls and $Z(B_F^a)$. By \cite[Corollary 3.12]{kollar1997singularities} one can rephrase the $\LCT$-polytope and $\KLT_a$-region as \begin{align*} \LCT(F) &= \{\lambda\in \mathbb{R}_{\geq 0}^r : (X,F^\lambda) \text{ is log-canonical}\}\\ \KLT_{a}(F)&= \{\lambda\in \mathbb{R}_{\geq 0}^r: (X,F^{\lambda - a}) \text{ is Kawamata log-terminal}\}.
\end{align*} For our purposes the analytical reformulation of Kawamata log-terminality from \cite[Proposition 3.20]{kollar1997singularities} is the most convenient, $$\KLT_a(F) = \{\lambda\in \mathbb{R}_{\geq 0}^r : \prod_{j=1}^r \abs{f_j}^{-2(\lambda_j-a_j)}\text{ is integrable near any }x\in X\}.$$ Similarly, the stalk of the mixed multiplier ideal sheaf $\mathcal{J}(F^\lambda)$ for $\lambda\in \mathbb{R}^r_{\geq 0}$ at any $x\in X$ is $$\mathcal{J}(F^\lambda)_x = \{\phi \in \O_{X,x}: \abs{\phi}^2 \prod_{j=1}^r \abs{f_j}^{-2\lambda_j} \text{ is integrable near }x\}.$$ \begin{proof}[Proof of Theorem \ref{thm: JumpingWall}.] Let $E$ be an irreducible component of $\mu^*D$. Suppose that the hyperplane $\{\sum_{j=1}^r \ord_E(g_j)s_j = k_E + c\}$ for some $c\in\mathbb{Z}_{>0}$ is the affine span of a facet $\sigma$ of a jumping wall of $F$ which intersects $\KLT_{a}(F)$. We show that $\sum_{j=1}^r \ord_E(g_j)s_j + k_E + c = 0$ determines an irreducible component of $Z(B_F^a)$. Note that the facet $\sigma$ must be contained in $\KLT_{a}(F)$. Let $\lambda$ be a point of $\sigma$. Then there must exist some $x\in D$ and $\phi\in \O_{X,x}\setminus \mathcal{J}(F^\lambda)_x$ such that $$\int \abs{\phi}^2\prod_{j=1}^r\abs{f_j}^{-2(\lambda_j -\varepsilon_j)}\psi dxd\bar x < \infty, \qquad \int \prod_{j=1}^r \abs{f_j}^{-2(\lambda_j - a_j)}\psi dxd\bar x< \infty$$ for any $\varepsilon \in \mathbb{R}_{>0}^r$ and any positive bump function $\psi$ supported on a sufficiently small neighbourhood of $x$, where $dx=dx_1\ldots dx_n$ for local coordinates $x_1,\ldots, x_n$ on $X$. Pick some $b(s) \in B_F^a$ and take the support of $\psi$ to be sufficiently small such that there exists some local differential operator $P$ with $b(s)F^s = PF^{s+a}$. By conjugation it follows that $\overline{b}(s) \overline{F}^s = \overline{P}\overline{F}^{s+a}$. Holomorphic and antiholomorphic differential operators commute, so $$\abs{b(s)}^2 \prod_{j=1}^r \abs{f_j}^{2s_j} = P\overline{P} \prod_{j=1}^r\abs{f_j}^{2(s_j + a_j)}.$$ Now assume that the real parts of all $2(s_j+a_j)$ are strictly greater than the order of $P$. Then $\abs{f_j}^{2(s_j+a_j)}$ has enough continuous derivatives to apply integration by parts. This yields that $$\abs{b(s)}^2 \int \prod_{j=1}^r\abs{f_j}^{2s_j}|\phi|^2 \psi dxd\bar x= \int \prod_{j=1}^r \abs{f_j}^{2(s_j+a_j)} P^*\overline{P^*}\abs{\phi}^2\psi dxd\bar x.$$ Viewing this as an equality of meromorphic functions of $s$, we conclude that the equality holds for arbitrary $s\in \mathbb{R}^r$ provided both integrals are finite. Now take $s = -\lambda + \varepsilon$ and let $\varepsilon$ tend to zero from above. Then, by dominated convergence, the integral on the right hand side converges to a finite number. On the other hand, since $\phi$ is not in $\mathcal{J}(F^\lambda)_x$, the integral on the left hand side tends to infinity by the monotone convergence theorem. This means that the equality is only possible if $b(s)$ vanishes at $(-\lambda_1,\ldots, -\lambda_r)$. Since the point $\lambda$ is arbitrary on $\sigma$, and $b(s)\in B_F^a$ is also arbitrary, we conclude that $\sum_j \ord_{E}(g_j)s_j + k_E + c=0$ determines an irreducible component of $Z(B_F^a)$. \end{proof} \begin{proof}[Proof of Corollary \ref{thm: LCT}.] A facet of $\LCT(F)$ is by definition a facet of a jumping wall of $F$. By \Cref{thm: JumpingWall} it is enough to show that $\{\sum_{j=1}^r \ord_E(g_j)s_j = k_E + 1\}$ intersects $\KLT_a(F)$. Let $\lambda$ be an interior point of this facet of $\LCT(F)$. It is enough to show that $F^{\lambda-a}$ is Kawamata log-terminal.
Let $E'$ be an irreducible component of $\mu^*D$. Then $\sum_j\ord_{E'}(g_j)\lambda_j\le k_{E'}+1$. Equality holds if and only if $E'$ determines the same facet of $\LCT(F)$ as $E$, that is, $$ \left\{\sum_{j=1}^r \ord_{E'}(g_j)s_j = k_{E'} + 1\right\} =\left\{\sum_{j=1}^r \ord_E(g_j)s_j = k_E + 1\right\}. $$ Let $I_E$ be the set of such $E'$. By assumption, there exists at least one $j$ with $\ord_{E}(g_j)\, a_j\neq 0$. Since every $E'\in I_E$ determines the same hyperplane as $E$, the same holds for every $E'\in I_E$. Thus for $E'\in I_E$ we have $\sum_j \ord_{E'}(g_j)\, a_j> 0$. Hence for all irreducible components $E'$ of $\mu^*D$ one has $$ \sum_j\ord_{E'}(g_j)(\lambda_j-a_j)<k_{E'}+1 $$ as claimed. \end{proof} \subsection{Real jumping walls}\label{sec: RealJump} Finally, we establish the real analogues of the results in \ref{sec: JumpingWall}. Let $X_\mathbb{R}$ be a real affine algebraic manifold. Let $F = (f_1,\ldots,f_r)$ be a tuple of real algebraic functions on $X_{\mathbb{R}}$. Fix $a\in \mathbb{Z}_{\geq 0}^r$ and assume that $\prod_{j=1}^r f_j^{a_j}$ is not invertible. The Bernstein-Sato ideal $B_{F}^a\subset \mathbb{R}[s]$, with $s=s_1,\ldots, s_r$, consists by definition of all polynomials $b(s)\in \mathbb{R}[s]$ such that $$b(s)F^s \in \ensuremath{\mathscr{D}}_{X_\mathbb{R}}[s]F^{s+a}$$ where $\ensuremath{\mathscr{D}}_{X_\mathbb{R}}$ denotes the ring of real algebraic differential operators on $X_\mathbb{R}$. If $F_\mathbb{C}$ is the complexification of $F$ on $X_\mathbb{C}=X_\mathbb{R}\otimes_\mathbb{R} \mathbb{C}$, it is easy to see that $B_{F}^a$ consists of all polynomials obtained by replacing the coefficients of $q(s)$ with their real parts, for $q(s)\in B_{F_\mathbb{C}}^a$. It is conjectured in \cite{B-ls} that $B_{F_\mathbb{C}}^a$ is generated by polynomials with coefficients in $\mathbb{Q}$, in which case the same polynomials would generate $B_F^a$. Since this conjecture is open, for now we can only conclude from Theorem \ref{thrmMoreA} the following: \begin{lemma}\label{lemXr} Let $X_\mathbb{R}$ be a real affine algebraic manifold. Let $F = (f_1,\ldots,f_r)$ be a tuple of real algebraic functions on $X_{\mathbb{R}}$. Let $F_\mathbb{C}$ be $F$ considered as a tuple with complex coefficients. Fix $a\in \mathbb{Z}_{\geq 0}^r$ and assume that $\prod_{j=1}^r f_j^{a_j}$ is not invertible. Then the codimension-one part of $Z(B_{F_\mathbb{C}}^a)$ in $\mathbb{C}^r$ consists of the complexification of the real codimension-one part of the zero locus $Z(B_F^a)$ in $\mathbb{R}^r$. \end{lemma} A similar comparison holds between the local Bernstein-Sato ideals $B_{F,x}^a$ and $B_{F_\mathbb{C},x}^a$ for $x\in X_\mathbb{R}$, where $B_{F,x}^a$ consists of all polynomials $b(s)\in \mathbb{R}[s]$ such that $$b(s)F^s \in \ensuremath{\mathscr{D}}_{X_{\mathbb{R}},x}[s]F^{s+a}$$ where $\ensuremath{\mathscr{D}}_{X_\mathbb{R},x}[s]$ denotes the ring of germs at $x$ of real analytic differential operators on $X_\mathbb{R}$. Moreover, as in the complex affine case, $B_F^a$ is the intersection of the $B_{F,x}^a$ for $x\in X_\mathbb{R}$. Denote by $\O_{X_\mathbb{R}}$ the sheaf of real analytic functions on $X_\mathbb{R}$. After Saito \cite{RealLogCan}, we define the {\it real mixed multiplier ideal sheaves} $\mathcal J_\mathbb{R}(F^\lambda)\subset \mathcal O_{X_\mathbb{R}}$ for $\lambda\in\mathbb{R}^r_{\ge 0}$ by setting $$ \mathcal{J}_{\mathbb{R}}(F^\lambda)(U): =\left\{\phi\in \mathcal O_{X_\mathbb{R}}(U) : \abs{\phi}\prod_{j=1}^r \abs{f_j}^{-\lambda_j} \text{ is locally integrable on }U\right\}.
$$ Let $\mu:Y_\mathbb{R}\to X_\mathbb{R}$ be a real log resolution of singularities for $f:=\prod_{j=1}^rf_j$, that is, $\mu^* f$ and $\mu^*dx_1\ldots dx_n$ are locally monomial up to multiplication by an invertible function, where $x_1,\ldots, x_n$ are local algebraic coordinates on $X_\mathbb{R}$. Since $X_\mathbb{R}$ is assumed to be the underlying real analytic manifold of a smooth scheme $X$ defined over $\mathbb{R}$, $Y_\mathbb{R}$ is the underlying real analytic manifold of a smooth scheme $Y$ obtained by blowing up $X$ successively along smooth centers defined over $\mathbb{R}$. Then the components of the divisor determined by $\mu^*f$ in $Y_\mathbb{R}$ are the non-empty real loci of the components of the divisor defined by $f$ in $Y$, see \cite[1.2]{RealLogCan}. As before, we write $k_{E}:=\ord_{E}(\det \operatorname{Jac}(\mu))\in\mathbb{N}$ for the order of vanishing of the determinant of the Jacobian of $\mu$ along an irreducible component $E$ of the simple normal crossings divisor determined by $\mu^*f$ in $Y_\mathbb{R}$. Fix some $x\in X_\mathbb{R}$ with $f(x)=0$. Associated to $\lambda\in\mathbb{R}^r_{\ge 0}$ is the region $$\mathcal{R}_{\mathbb{R},F,x}(\lambda) := \{\lambda' \in \mathbb{R}_{\geq 0}^r : \mathcal{J}_\mathbb{R}(F^\lambda)_x \subseteq \mathcal{J}_\mathbb{R}(F^{\lambda'})_x\}.$$ The {\it real jumping wall at $x$} associated to $\lambda$ is the intersection of the boundary of $\mathcal{R}_{\mathbb{R},F,x}(\lambda)$ with $\mathbb{R}_{>0}^r$. The {\it $\RLCT$-polytope at $x$} is the closure $\RLCT_x(F)$ of $\mathcal{R}_{\mathbb{R},F,x}(0)$. The stalk $\mathcal{J}_{\mathbb{R}}(F^\lambda)_x$ admits a characterization similar to the complex case, see \cite[Proposition 1]{RealLogCan}. It follows that the facets of the real jumping walls are cut out by hyperplanes of the form $\sum_{j=1}^r\ord_E(g_j)s_j = k_E + c$ with $c\in \mathbb{Z}_{>0}$, and the $\RLCT$-polytope is cut out by hyperplanes of the form $\sum_{j=1}^r\ord_E(g_j)s_j = k_E + 1$. Here, $E$ runs over all irreducible components of the divisor determined by $\mu^* f$ with $x\in\mu(E)$. The {\it $\RKLT_{a}$-region} is defined by $$\RKLT_{a,x}(F) := \{\lambda\in \mathbb{R}_{\geq 0}^r : \prod_{j=1}^r \abs{f_j}^{-(\lambda_j-a_j)}\text{ is integrable near }x\}.$$ The following theorem is proved in the same way as \Cref{thm: JumpingWall}. \begin{theorem}\label{thmRs} With the assumptions as in Lemma \ref{lemXr}, and with $x\in f^{-1}(0)\subset X_\mathbb{R}$, if a facet of a real jumping wall of $F$ at $x$ intersects $\RKLT_{a,x}(F)$, then it determines an irreducible component of $Z(B_{F,x}^a)$. \end{theorem} \begin{proof} Let $\sigma$ be a facet of a real jumping wall of $F$ at $x$ which intersects $\RKLT_{a,x}(F)$. The affine span of $\sigma$ must be a hyperplane of the form $\sum_{j=1}^r \ord_E(g_j)s_j = k_E + c$ with $c\in\mathbb{Z}_{>0}$, where $E$ is an irreducible component of the divisor determined by $\mu^* f$ with $x\in\mu(E)$. We show that $\sum_{j=1}^r \ord_E(g_j)s_j + k_E + c = 0$ determines an irreducible component of $Z(B_{F,x}^a)$. Let $\lambda$ be a point on $\sigma$.
Then there must exist $\phi\in \O_{X_\mathbb{R},x}\setminus \mathcal{J}_\mathbb{R}(F^\lambda)_x$ such that $$\int \abs{\phi}\prod_{j=1}^r\abs{f_j}^{-(\lambda_j -\varepsilon_j)}\psi dx < \infty, \qquad \int \prod_{j=1}^r \abs{f_j}^{-(\lambda_j - a_j)}\psi dx< \infty$$ for any $\varepsilon \in \mathbb{R}_{>0}^r$ and any positive bump function $\psi$ supported on a sufficiently small neighbourhood of $x$, where $dx=dx_1\ldots dx_n$ and $x_1,\ldots, x_n$ are local coordinates on $X_\mathbb{R}$ at $x$. Pick some $b(s) \in B_{F,x}^a$ and take the support of $\psi$ to be sufficiently small such that there exists some local differential operator $P\in\mathscr{D}_{X_\mathbb{R},x}[s]$ with $b(s)F^s = PF^{s+a}$. Assume that the specialization of $s_j+a_j$ to a complex number has real part strictly greater than the order of $P$ for all $j$. Then $b(s) \prod_{j=1}^r \abs{f_{j}}^{s_j} = P \prod_{j=1}^r \abs{f_{j}}^{s_j +a_j}$ and $\abs{f_{j}}^{s_j +a_j}$ has enough continuous partial derivatives to apply integration by parts. This yields that $$b(s)\int \prod_{j=1}^r \abs{f_{j}}^{s_j} \abs{\phi}\psi dx= \int \prod_{j=1}^r \abs{f_{j}}^{s_j +a_j} P^* \abs{\phi}\psi dx.$$ View this as an equality of meromorphic functions in $s$ to deduce that the equality holds for arbitrary $s\in \mathbb{R}^r$ provided both integrals are finite. Now take $s = -\lambda + \varepsilon$ and let $\varepsilon$ tend to zero from above. Then, by dominated convergence, the integral on the right hand side stays finite as $\varepsilon$ tends to zero. On the other hand, by monotone convergence, the integral on the left hand side tends to infinity since $\phi$ is not in $\mathcal{J}_\mathbb{R}(F^\lambda)_x$. This means that $b(s)$ vanishes at $(-\lambda_1,\ldots, -\lambda_r)$. Since the point $\lambda$ on the facet $\sigma$ and $b(s)\in B_{F,x}^a$ were arbitrary, we conclude that $\sum_j \ord_{E}(g_j)s_j + k_E + c=0$ determines an irreducible component of $Z(B_{F,x}^a)$. \end{proof} Precisely as with \Cref{thm: LCT} one obtains: \begin{corollary}\label{corRs} With the same assumptions as in Theorem \ref{thmRs}, suppose that $\sum_{j=1}^r \ord_E(g_j)s_j = k_E + 1$ defines the affine span of a facet of $\RLCT_x(F)$. If $a_j\neq 0$ and $\ord_{E}(g_j)\neq 0$ for some $j$, then $\sum_j \ord_{E}(g_j)s_j +k_E + 1=0$ defines an irreducible component of $Z(B_{F,x}^a)$. \end{corollary} \section{Example}\label{secEx} Let $f_1 = y^2 -x^2 + x^3$ and $f_2 = y$ define the coordinate functions of the morphism $F:\ensuremath{\mathbb{C}}^2 \to \ensuremath{\mathbb{C}}^2$. We compare the Bernstein-Sato zero locus $Z(B_F^a)$ for $a = (1,2)$ with the estimates we obtained in this article. Using the library \texttt{dmodideal.lib} \cite{LLM} in \texttt{SINGULAR} \cite{DGPS} yields the principal ideal $$B_F^a = \left((s_1+1)(s_2+1)(s_2+2) \prod_{l=2}^5(2s_1 + s_2 + l)\right).$$ A strong log resolution $\mu:Y\to X$ may be found by a single blowup. Let $E_j$ be the strict transform of $f_j=0$ for $j=1,2$, and let $E_0$ be the exceptional divisor.
Then \Cref{thm: MainTheorem} yields that $$Z(B_F^a) \subseteq \bigcup_{l=1}^\infty Z(s_1 + l) \cup Z(s_2 + l) \cup Z(2s_1 + s_2 + l+ 1).$$ The trivial estimate from \Cref{prop1.3} yields that $$Z(s_1 + 1)\cup Z(s_2 + 1) \cup Z(s_2 + 2) \subseteq Z(B_F^a).$$ \noindent We have $$ \KLT_a(F)= \{\lambda\in \mathbb{R}_{\geq 0}^r : \lambda_1 <2, \ \lambda_2 <3, \text{ and }\ 2\lambda_1 + \lambda_2 <6\}, $$ $$ \LCT(F) = \{\lambda\in \mathbb{R}_{\geq 0}^r :\lambda_1\le 1, \lambda_2\le 1, \text{ and }2\lambda_1+\lambda_2\le 2 \}, $$ see Figure \ref{fig1}. Further, a polynomial $h\in\mathbb{C}[x,y]$ belongs to the ideal $\mathcal{J}(F^\lambda)$ if and only if $$\ord_{E_{1}}(h)\geq \lambda_1,\quad \ord_{E_{2}}(h)\geq \lambda_2, \quad\text{ and } \ord_{E_{0}}(h)\geq 2\lambda_1 + \lambda_2 -1.$$ Then \begin{align*} \mathcal J(F^\lambda) &=\mathbb{C}[x,y]\quad\text{ for } \lambda\in \LCT^o(F):=\LCT(F)\setminus (\{\lambda_2=1\}\cup\{2\lambda_1+\lambda_2=2\}),\\\mathcal J(F^\lambda) &=(x,y)\quad\text{ for }\lambda\in [0,1)^2\setminus \LCT^o(F). \end{align*} By translating these two regions by integral vectors $(m_1,m_2)\in\mathbb{N}^2$ one obtains the other regions of constancy of the mixed multiplier ideals, on which they equal $(f_1^{m_1}f_2^{m_2})$ and $(x,y)f_1^{m_1}f_2^{m_2}$, respectively. The jumping walls are depicted in Figure \ref{fig1}. All irreducible components of $Z(B_F^a)$ arise from the facets of the jumping walls in this example. Hence the lower bound for $Z(B_F^a)$ following from \Cref{thm: JumpingWall} is tight. In this case the estimates coming from the real jumping walls at the origin are identical to the foregoing ones. \begin{figure}[H] \centering \begin{minipage}{.4\textwidth} \centering \includegraphics[width =0.5 \textwidth]{BsZeroLocus} \end{minipage} \begin{minipage}{.4\textwidth} \centering \includegraphics[width = 0.5\textwidth]{JumpingWalls3} \end{minipage} \captionsetup{justification=centering} \caption{Left: $Z(B_F^a)$. Right: The jumping walls of $F$ with $\KLT_a(F)$ lightly shaded, and $\LCT(F)$ in darker shade.} \label{fig1} \end{figure}
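These containments are easy to check mechanically. The following sketch (an illustration only; it uses SymPy rather than the \texttt{SINGULAR} computation above, and all variable names are ours) verifies that every irreducible factor of the computed generator of $B_F^a$ lies in one of the three families predicted by \Cref{thm: MainTheorem}, and that the factors $(s_2+1)(s_2+2)$ coming from \Cref{prop1.3} at a smooth point of $\{f_2=y=0\}$ satisfy the expected functional equation.
\begin{verbatim}
import sympy as sp

s1, s2, y = sp.symbols('s1 s2 y', positive=True)

# generator of B_F^a computed by SINGULAR for F = (y^2-x^2+x^3, y), a = (1,2)
B = (s1 + 1)*(s2 + 1)*(s2 + 2)*sp.Mul(*[2*s1 + s2 + l for l in range(2, 6)])

# upper bound: every factor must be of the form s1+l, s2+l, or 2s1+s2+l+1
allowed = [s1 + l for l in range(1, 7)] + [s2 + l for l in range(1, 7)] \
        + [2*s1 + s2 + l + 1 for l in range(1, 7)]
factors = [f for f, _ in sp.factor_list(B)[1]]
assert all(any(sp.expand(f - g) == 0 for g in allowed) for f in factors)

# lower bound at a smooth point of {f_2 = y = 0} with a_2 = 2:
# (d/dy)^2 y^(s2+2) = (s2+2)(s2+1) y^s2, as in Proposition 1.3
lhs = sp.diff(y**(s2 + 2), y, 2)
assert sp.expand(lhs - (s2 + 2)*(s2 + 1)*y**s2) == 0
print("computed B_F^a is consistent with both bounds")
\end{verbatim}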
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Aerial image categorization is an important component of many applications in artificial intelligence and remote sensing~\cite{add1,add2,add3}, such as visual surveillance, navigation, and robot path planning. However, it is still a challenging task to deal with aerial image categorization successfully, for two reasons. On one hand, the aerial image components (\textit{e.g.}, house roofs and grounds) as well as their spatial configurations are complex and highly variable, making it difficult to extract features sufficiently discriminative for aerial image representation. On the other hand, the efficiency of the existing aerial image categorization methods is far from practical, due to the huge number of components and their pairwise relationships. Therefore, a discriminative and concise aerial image representation has become increasingly imperative for a successful categorization system.\\ \indent In the literature on designing discriminative image representations for visual recognition, many features have been proposed. They can be categorized into two groups: global features and local features. Global features, such as histograms, eigenspace~\cite{eigenspace}, and skeletal shape~\cite{skeletalsharp}, summarize the entire image with a single vector and are standard inputs for statistical models like SVM. However, global features are sensitive to occlusion and clutter. Besides, these representations typically rely on a preliminary segmentation of objects in images. These two limitations result in unstable categorization performance. Different from global features, local features, such as the scale invariant feature transform (SIFT)~\cite{sift}, are developed to increase the discrimination. Each local feature describes a localized image region and is calculated around interest points. Thus, local features are robust to partial occlusion and clutter. To take advantage of this property, local features~\cite{handwritten,parsing,hierarchical} (\textit{e.g.}, junction~\cite{junction}, gradient~\cite{gradient}, contour, \textit{etc}.) have recently been widely used for aerial image parsing. However, when employing local features for image categorization, different images typically contain different numbers of local features. That is, it is difficult to integrate the local features within an image for standard classifiers. In many cases, they are integrated into an orderless bag-of-features as a global representation, so that the similarity between images is determined by the orderless bag-of-features. It is worth emphasizing that, as a non-structural representation, the bag-of-features ignores the geometric property of an image (\textit{i.e.}, the spatial distribution of the local image patches), which prevents it from being highly discriminative. Given a zebra skin and a chessboard pattern, their bag-of-features representations are similar. That is to say, the bag-of-features representation is not sufficiently descriptive to distinguish the zebra from the chessboard, although the geometric properties of the two images are significantly different.\\ \indent In order to encode image geometric properties into a categorization model, several image geometric features have been proposed. In~\cite{beyond}, the spatial pyramid matching kernel is obtained by clustering the local features into a few geometric types.
However, the spatial pyramid matching kernel is not flexible enough, since it depends heavily on human prior knowledge. The RGB-domain spin image~\cite{spin} describes the spatial context by exploring the chain structure of pixels in each RGB channel. However, the chain structure usually fails to describe spatial context with a complicated structure. The walk kernel~\cite{walk_kernel} is proposed to capture the walk structures among image local features. However, the unavoidable tottering phenomenon (\textit{i.e.}, one vertex may occur several times in a walk) brings noise and hence limits its discrimination. To obtain a better discrimination, parameters are provided to tune the length of the chain~\cite{spin} or walk~\cite{walk_kernel}. This operation leads to very redundant structures. Both the time consumption and the memory cost increase markedly as the number of structures goes up. Therefore, a concise image structure representation is desired for accurate aerial image categorization. Recently, many graph-based models have been applied in intelligent systems and multimedia. They can be used as geometric image descriptors~\cite{zhang1,zhang2,zhang3,zhang4} to enhance image categorization. Besides, these methods can be used as high-order potential descriptors of superpixels~\cite{zhang5,zhang6,zhang7,zhang8,zhang9}. Further, graph-based descriptors can be used as general image aesthetics descriptors to improve image aesthetics ranking, photo retargeting, and cropping~\cite{zhang10,zhang11,zhang12,zhang13}.\\ \indent In this paper, we propose a novel aerial image categorization system, which enables the exploration of the geometric property embedded in local features. An aerial image is represented by a graph, since a graph is a natural and descriptive tool to express complicated relationships among objects. By defining the region connected graph (RCG), we decompose an aerial image into a set of discriminative subgraphs. To capture discriminative relationships among RCGs, a structure refinement strategy is carried out to select highly discriminative and low redundant structures. Based on the refined structures, we extract sub-RCGs accordingly, and all the sub-RCGs from an aerial image form its discriminative spatial context. Finally, a quantization operation transforms the discriminative spatial context into a feature vector for categorization.\\ \indent The major contributions of this paper are as follows: 1) the region connected graph (RCG), a graph-based representation that describes the local patches and their topology for an aerial image; 2) a structure refinement algorithm that selects highly discriminative and low redundant structures among the training RCGs; and 3) an efficient isomorphic subgraph extraction component that acquires the corresponding sub-RCGs. \section{Region Connected Graph (RCG)} An aerial image usually contains millions of pixels. If we treat each pixel as a local feature, the high computational complexity will make aerial image recognition intractable. Fortunately, an aerial image can be represented by a collection of clusters, because pixels are usually highly correlated with their neighboring ones. Each cluster consists of neighboring pixels with consistent color intensities. Thus, given an aerial image, we can represent it by a set of regions instead of millions of pixels. The neighboring relationships between regions define the spatial context of an aerial image. Naturally, we can model this representation as a labeled graph.
The labels denote the local features of each region, and each edge connects pairwise neighboring regions. In our work, we call this representation the region connected graph (RCG).\\ \begin{figure}[h]\centering \includegraphics[scale=0.62]{fig2.ps} \caption{From pixel clusters (left) to singly connected regions (right).} \label{fig2} \end{figure} \indent To obtain the RCG from an aerial image, a segmentation algorithm (\textit{i.e.}, fuzzy clustering~\cite{fuzzy} in our implementation) groups pixels into different clusters according to their color intensity. Note that the pixels in the same cluster are not necessarily spatially neighboring. As shown in Fig.~\ref{fig2}, we use different grayscale values to identify different clusters. Pixels in the face and the lower half of Snoopy's body are grouped into the same cluster. However, it is more reasonable to categorize them into different groups, since the face and the lower half of the body are spatially isolated. To this end, a region growing algorithm~\cite{ip_matlab} is employed to divide an image into regions iteratively. In each iteration, the region growing algorithm initializes the current region with a random pixel. It continues adding spatially neighboring pixels into this region as long as they come from the same cluster as the existing pixels. The iteration terminates when all pixels have been considered. The clustering result is shown on the right of Fig.~\ref{fig2}.\\ \begin{figure}[h] \includegraphics[scale=0.5]{fig3.ps} \caption{The flowchart from an aerial image to its RCG.} \label{fig3} \end{figure} \indent On the basis of the singly connected regions, the RCG of an aerial image can be obtained as shown in Fig.~\ref{fig3}. Given an aerial image $I$ (Fig.~\ref{fig3}(a)), we segment it into $K$ singly connected regions $R=\{r^i \}_{i=1\cdots K}$ (Fig.~\ref{fig3}(b)). Then, each singly connected region is treated as a vertex $v_i$ (the red solid point), and each pair of spatially neighboring vertices is linked by an edge $e_j$ (the green line). Finally, denoting by $V=\cup v_i $ the collection of vertices $v_i$ and by $E=\cup e_j$ the set of edges $e_j$, we define $\mathcal{G}=(V,E)$ as an RCG, where $V$ is a set of singly connected regions and $E$ is a set of spatially neighboring relationships (Fig.~\ref{fig3}(c)). Let $|G|$ denote the number of vertices in an RCG $G$. The number of neighbors of a vertex is called the vertex degree. A useful attribute of the RCG is that its vertex degree is upper bounded; that is, each region has a limited number of neighbors. It is observed that the average vertex degree of each RCG is less than four and the maximum vertex degree is no more than 15.
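For concreteness, the construction just described admits a compact implementation. The following sketch is an illustration only: \texttt{numpy} and \texttt{networkx} stand in for our Matlab implementation, and the function and variable names are ours. It builds an RCG from a label image of singly connected regions together with their per-region features.
\begin{verbatim}
import numpy as np
import networkx as nx

def build_rcg(labels, features):
    """Build an RCG from a label image of singly connected regions.

    labels   -- (H, W) integer array; labels[p] is the region id of pixel p,
                as produced by the fuzzy clustering + region growing steps
    features -- dict mapping region id -> feature vector (e.g. RGB histogram)
    """
    G = nx.Graph()
    for rid, vec in features.items():
        G.add_node(rid, feature=vec)              # one vertex per region
    # link every pair of spatially neighboring regions by an edge
    for a, b in ((labels[:, :-1], labels[:, 1:]),   # horizontal neighbors
                 (labels[:-1, :], labels[1:, :])):  # vertical neighbors
        mask = a != b
        G.add_edges_from(zip(a[mask].tolist(), b[mask].tolist()))
    return G
\end{verbatim}
Because each region has a bounded number of neighbors, graphs built this way inherit the degree statistics quoted above, which is what keeps the sub-RCG matching in the later stages tractable.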
\section{Discriminative Structures Selection} It is natural to recognize an aerial image by matching its RCG to a labeled one. However, as proved in \cite{isomorphism}, given a pair of graphs, it is NP-hard to determine whether they have the same structure. That means it is intractable to compare RCGs pairwise directly. Alternatively, we represent an aerial image by a set of sub-RCGs $\{G_{sub}^k\}_{k=1\cdots N}$, where $\cup_{k=1}^N G_{sub}^k=G$. Thereby, aerial image categorization can be conducted by matching sub-RCGs to those of the labeled aerial images. Notably, the RCG of an aerial image may contain tens to hundreds of vertices. Given $n$ vertices in an RCG, there will be $N=2^{\frac{(n+1)*n}{2}}$ different sub-RCGs, which makes it impractical to represent an aerial image by enumerating all its sub-RCGs (Fig.~\ref{fig4}(a)). Toward a discriminative and concise representation for aerial image recognition, only sub-RCGs with highly discriminative and low redundant structures should be selected for aerial image categorization (Fig.~\ref{fig4}(b)). \begin{figure}[h] \includegraphics[scale=0.51]{fig4.ps} \caption{An example of sub-RCGs and their structures.} \label{fig4} \end{figure} \subsection{Frequent Structures Mining} Each sub-RCG reflects the structure of a subset of connected vertices in the RCG. In other words, a sub-RCG models the spatial context of an aerial image. Different types of aerial images have different spatial contexts, and so do the structures of their sub-RCGs. It is natural to use the structure of a sub-RCG to determine the aerial image type. For instance, as shown in Fig.~\ref{fig4}(c), all three sub-RCGs share the same structure but slightly different color intensity distributions. However, it is impractical to enumerate all the possible sub-RCGs. Moreover, only the frequently occurring sub-RCGs contribute to the recognition task, while the others are redundant. Motivated by this, we first select the frequent structures.\\ \indent In our implementation, an efficient frequent subgraph discovery algorithm called FSG~\cite{fsg} is employed. It is noticeable that the vertex values of sub-RCGs might differ even though they share the same structure. This prevents us from mining the frequent structures accurately. Therefore, we ignore the differences between vertex values. In particular, given a sub-RCG, its structure $S$ is obtained by setting the vertex labels of the sub-RCG to the same value,~\textit{e.g.}, one.\\ \indent FSG accumulates the number of occurrences of each structure. It outputs the probability of each structure over the training RCGs; note that a structure does not necessarily exist in all the training RCGs. The probability $p(S)$ represents the frequency of $S$. As the number of original candidate structures is exponential, only the structures whose probabilities are higher than a threshold are output as frequent ones. Therefore, the number of frequent structures is greatly reduced. \subsection{Measures for Structure Selection} The number of frequent structures is still too large (typically 100$\sim$300), though it is much smaller than that of the candidate structures. In addition, a structure with high frequency may not be highly discriminative. Thus, we carry out a further selection among the frequent structures to preserve only the highly discriminative and low redundant ones. We first define a distance to describe the similarity between sub-RCGs ($G_{sub}$ and $G_{sub}'$) of the same size: \begin{equation} d(G_{sub},G_{sub}')=\sum_{v_r\in G_{sub},\, v_r'\in G_{sub}'}{\parallel f(v_r)-f(v_r')\parallel}^2 \label{eq_1} \end{equation} where $v_r$ is the $r$-th vertex of $G_{sub}$, $f(\cdot)$ denotes the feature vector of the corresponding local region, and $||\cdot||$ is the Euclidean norm.
More specifically, for structures $S$ and $S'$ in $G$ and $G'$ respectively, if $|S|=|S'|$, we define the structure distance between $G$ and $G'$ as follows: \begin{equation} d_e\left(S_G,S_{(G')}'\right)=\varphi\cdot \hspace{-10pt}\sum_{G_{sub}(S)\subseteq G} \sum_{G_{sub}' (S')\subseteq G'} \hspace{-10pt}d(G_{sub} (S),G_{sub}'(S')) \label{eq_2} \end{equation} where $G_{sub}(S_{(\cdot)})$ is the sub-RCG corresponding to $S_{(\cdot)}$. $\varphi$ is a factor that normalizes $d_e$ to $[0, 1]$; it is not a tuning parameter. That is, $\varphi=\frac{1}{|G_{sub}|\cdot|G_{sub}'|}$, where $|G_{sub}|$ and $|G_{sub}'|$ denote the numbers of sub-RCGs in the RCGs $G$ and $G'$, respectively. By extending Eq.~(\ref{eq_2}) to the situations when $|S|\neq|S'|$, we define a more generic form of the structure distance between $G$ and $G'$. It is based on the probability $p(S)$ and takes the different situations into account. \begin{eqnarray} d\left(S_G,S_{G'}'\right)=\left\{ \begin{array}{ll} p(S)*p(S')*d_e\left(S_G,S_{(G')}'\right)\\ \hspace{10pt}\textrm{if $|S|=|S'|$ and $G_{sub}\neq \emptyset \wedge G_{sub}'\neq \emptyset $}\\ p(S)*p(S')*\sum_i d_e\left(S_G,C_i(S_{G'},S_G)\right) \\ \hspace{10pt}\textrm{if $|S|<|S'|$ and $G_{sub}\neq \emptyset \wedge G_{sub}'\neq \emptyset $}\\ p(S)*p(S')*\sum_i d_e\left(S_G',C_i(S_{G},S_G')\right) \\ \hspace{10pt}\textrm{if $|S|>|S'|$ and $G_{sub}\neq \emptyset \wedge G_{sub}'\neq \emptyset $}\\ (1-p(S))*(1-p(S'))\\ \hspace{10pt}\textrm{if $G_{sub}= \emptyset \wedge G_{sub}'= \emptyset $}\\ p(S)+p(S')-2p(S)*p(S')\\ \hspace{10pt}\textrm{otherwise}\\ \end{array}\right.\label{eq_3} \end{eqnarray} \indent The probability for structure $S$ existing in $G$ is denoted by $p(S)$. It is straightforward to obtain the first line of Eq.~(\ref{eq_3}) by multiplying the structure distance $d_e(S,S')$ with $p(S)*p(S')$, which denotes the probability that $S$ exists in $G$ and $S'$ exists in $G'$. The second and the third lines of Eq.~(\ref{eq_3}) are similar. As $S$ can be regarded as a substructure of $S'$ when $|S|<|S'|$, the function $C_i(S_{G'},S_G)$ in the second line enumerates, by FSG~\cite{fsg}, the substructures of $S'$ in $G'$ with the same size as $S$, and vice versa in the third line. $(1-p(S))*(1-p(S'))$ denotes the probability that neither $S$ exists in $G$ nor $S'$ exists in $G'$. The term $p(S)+p(S')-2p(S)*p(S')$ in the last line is the probability that exactly one of the two structures occurs, \textit{i.e.}, either $S$ exists in $G$ or $S'$ exists in $G'$, but not both. Note that $d\left(S_G,S_{G'}' \right)\in [0,1]$.\\ \indent Based on the structure distance $d\left(S_G,S_{G'}'\right)$ between $G$ and $G'$, a measure of structure discrimination (MSD) is defined to quantify a structure's discriminative ability. Inspired by the definition of discriminative ability in LDA~\cite{klda}, the MSD computes the ratio between the distances of RCGs with different labels and those with the same labels: \begin{equation} M_{sd}(S)=\frac{D_{S}^{b}}{D_{S}^{w}}= \frac{\sum_{G}\sum_{G'} d\left(S_G,S_{G'}\right)*\sigma} {\sum_{G}\sum_{G'} d\left(S_G,S_{G'}\right)*\sigma'} \label{eq_4} \end{equation} $\sigma$ and $\sigma'$ are indicator functions of whether $G$ and $G'$ belong to the same class: if $G$ and $G'$ belong to different classes, $\sigma=1,\sigma'=0$; otherwise $\sigma=0,\sigma'=1$\footnote{Pairwise RCGs $G$ and $G'$ belonging to the same class means that their corresponding aerial images belong to the same class. Similarly, two RCGs $G$ and $G'$ belonging to different classes means that their corresponding aerial images belong to different classes. }.
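Given a table of the pairwise structure distances of Eq.~(\ref{eq_3}) over the training RCGs, computing the MSD of Eq.~(\ref{eq_4}) is direct. A minimal sketch follows (illustrative Python; the container layouts are hypothetical):
\begin{verbatim}
def msd(dist, labels):
    """Eq. (4): between-class over within-class structure distance.

    dist   -- dict {(i, j): d(S_Gi, S_Gj)}, the structure distance of
              Eq. (3) for every pair of training RCGs G_i, G_j
    labels -- labels[i] is the class of the i-th training aerial image
    """
    between = sum(d for (i, j), d in dist.items() if labels[i] != labels[j])
    within  = sum(d for (i, j), d in dist.items() if labels[i] == labels[j])
    return between / within
\end{verbatim}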
A larger $M_{sd}(S)$ indicates a more discriminative structure $S$. However, a structure set with high discrimination is not necessarily a concise one. Aiming at a concise set of structures, it is necessary to perform a further structure selection. Motivated by the fact that high correlation leads to high redundancy~\cite{speech}, we believe that one of two highly correlated structures should be removed. In order to calculate the correlation between structures, an approach to quantifying the redundancy between structures, called the measure of structure correlation (MSC), is defined based on the distance between structures: \begin{equation} M_{sc}(S,S')=\frac{\sum_{G}\sum_{G'} d\left(S_G,S_{G'}'\right)} {\sum_{G}\sum_{G'} d\left(S_G,S_{G'}\right)+\sum_{G}\sum_{G'} d\left(S_G',S_{G'}'\right)} \label{eq_5} \end{equation} where the denominator functions as a normalization. A larger $M_{sc}(S,S')$ indicates a lower correlation between structures $S$ and $S'$, and vice versa. Eq.~(\ref{eq_5}) can also be explained by analogy with the three vertices of a triangle in Fig.~\ref{fig_5}. $\sum_{G}\sum_{G'} d\left(S_G,S_{G'}'\right)$, $\sum_{G}\sum_{G'} d\left(S_G,S_{G'}\right)$ and $\sum_{G}\sum_{G'} d\left(S_G',S_{G'}'\right)$ act as the distances between the three vertices. When $\sum_{G}\sum_{G'}d\left(S_G,S_{G'}'\right)$ becomes larger, the correlation between $S$ and $S'$ becomes lower (Fig.~\ref{fig_5}(c)), and vice versa (Fig.~\ref{fig_5}(b)). \begin{figure}[h] \includegraphics[scale=0.44]{fig5.ps} \caption{A visual explanation of the correlation between structures.} \label{fig_5} \end{figure} \subsection{MSD- and MSC-based Structure Refinement} Based on the two structure measures MSD and MSC, we construct a concise and discriminative structure refinement algorithm. The stepwise operations of the proposed structure selection are given in Algorithm~\ref{tab_1}. The algorithm can be divided into two steps. First, the MSD values of all the candidate structures are computed and sorted in descending order. Candidate structures whose MSD values are higher than a threshold are initially preserved in the list $R'$. Second, the MSC value between each pair of preserved structures is computed to evaluate their redundancy. The removal of redundant structures is carried out iteratively. In the first round of iteration, we take the preserved structure with the largest MSD value as the first finally selected one. Then, we examine the MSC values between the finally selected structure and the rest of the preserved structures. Structures whose MSC values are higher than a threshold are removed, and the preserved structure list is updated accordingly. After one round of iteration, we move to the preserved structure with the next lower MSD value. The iteration terminates when there is no structure next to $S_{final}$.
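The two measures are combined by the greedy procedure formalized in Algorithm~\ref{tab_1} below. An equivalent loop, in illustrative Python mirroring the thresholds $\delta_{sd}$ and $\delta_{sc}$ of the algorithm, reads:
\begin{verbatim}
def refine_structures(structures, msd, msc, delta_sd, delta_sc):
    """Greedy MSD/MSC structure refinement (cf. Algorithm 1).

    structures -- candidate frequent structures
    msd        -- dict S -> MSD value of Eq. (4)
    msc        -- dict (S, S') -> MSC value of Eq. (5)
    """
    # step 1: keep sufficiently discriminative candidates, best first
    kept = sorted((S for S in structures if msd[S] > delta_sd),
                  key=lambda S: msd[S], reverse=True)
    # step 2: scan the list; a structure is dropped as redundant whenever
    # its MSC with an already accepted structure exceeds delta_sc
    refined = []
    for S in kept:
        if all(msc[(T, S)] <= delta_sc for T in refined):
            refined.append(S)
    return refined
\end{verbatim}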
The finally preserved structures are deemed the refined ones.\\ \begin{algorithm}\centering \caption{MSD\&MSC-based Structure Refinement} \begin{tabular}{l} \textbf{Input}: $R=\{S_1,S_2,\cdots, S_{N}\},C$ \hspace{10pt} $//$ training structures and labels \\ ~~~~$\delta_{sd},\delta_{sc} $ \hspace{52pt}$//$ the thresholds for MSD and MSC \\ \textbf{Output}: $R_{final}$ \hspace{65pt}$//$ a set of refined structures\\\hline \hspace{0pt}for $i$~=~$1$ : $N$ \textbf{do begin} \hspace{20pt} $//$ step 1\\ \hspace{10pt}calculate $M_{sd}$ for $S_i$; \\ \hspace{20pt}if$\left(M_{sd}(S_i)>\delta_{sd}\right)$\\ \hspace{30pt}preserve $S_i$ into $R'$;\\ \hspace{30pt}order $R'$ in descending $M_{sd}$ value;\\ \hspace{10pt} \textbf{end};\\ \hspace{0pt} $S_{final}=getFirstStructure(R')$; \hspace{20pt} $//$ step 2 \\ \hspace{10pt} \textbf{do begin} \\ \hspace{20pt} $S_{tmp1}=getNextStructure(R',S_{final})$; \\ \hspace{20pt} \textbf{do begin} \hspace{30pt} $//$ remove redundant structures \\ \hspace{30pt} $S_{tmp2}=S_{tmp1}$;\\ \hspace{30pt} if$\left(M_{sc}(S_{final},S_{tmp2})>\delta_{sc}\right)$\\ \hspace{40pt} remove $S_{tmp1}$ from $R'$;\\ \hspace{40pt} $S_{tmp1}=getNextStructure(R',S_{tmp2})$;\\ \hspace{30pt} else\\ \hspace{40pt} $S_{tmp1}=getNextStructure(R',S_{tmp1})$;\\ \hspace{20pt} \textbf{end until}$\left(S_{tmp1}==NULL\right)$;\\ \hspace{20pt} add $S_{final}$ to $R_{final}$;\\ \hspace{20pt} $S_{final}=getNextStructure(R',S_{final})$;\\ \hspace{10pt} \textbf{end until}$\left(S_{final}==NULL\right)$;\\\end{tabular} \label{tab_1} \end{algorithm} \indent Denoting by $n$ the number of training RCGs and by $m$ the number of candidate structures, we assume that the structure distance between two RCGs can be computed in constant time. As the distances between RCGs are required for calculating MSD and MSC, the computational costs of calculating MSD and MSC are both $\mathcal{O}(n^2)$. As shown in Algorithm~\ref{tab_1}, the structure refinement step contains a double loop over the structures, and the time complexity of each loop is $\mathcal{O}(n^2*m)$. Therefore, the time complexity of the whole selection process is $\mathcal{O}(n^2*m^2)$. \section{Geometric Discriminative Feature} \subsection{Geometric Discriminative Feature Extraction} As the refined structures are both concise and discriminative, they are adopted to extract the geometric discriminative features. Guided by the refined structures, we extract sub-RCGs with the same structures and then use them as the geometric discriminative features. As RCGs are low-degree graphs (vertex degree less than 15), the computational complexity is nearly linear in the number of vertices~\cite{walk_kernel}.\\ \indent To achieve an efficient sub-RCG extraction process, we propose an algorithm to locate the sub-RCGs efficiently. Given a refined structure $S$ and an RCG $G$, the proposed algorithm outputs a collection of sub-RCGs with structure $S$. There are three steps in the proposed geometric discriminative feature extraction. First, we check whether $|S|\leq |G|$. If $|S| \leq |G|$, an iterative process is carried out; otherwise, the algorithm terminates. Next, for each vertex in $G$, we treat it as the reference point and compare $S$ to the structures of its correlated sub-RCGs. A depth-first-search strategy~\cite{dfs} is employed for graph matching.
Only the sub-RCGs with the same structure as $S$ are preserved. By traversing all the vertices in the RCG $G$, we perform the matching process and collect all the qualified sub-RCGs. Finally, a collection of qualified sub-RCGs is obtained. \subsection{Quantizing Sub-RCGs into Feature Vectors} Given an aerial image, it can be represented by a set of sub-RCGs as described above. It is worth emphasizing that the sub-RCGs are planar visual features in $\mathbb{R}^2$. Conventional classifiers such as the support vector machine (SVM)~\cite{ksvm} can only handle 1-D vectors. Further, the number of extracted sub-RCGs differs from one aerial image to another. Therefore, it is impractical for a conventional classifier like SVM to carry out classification directly. To tackle this problem, a quantization method is developed to convert each aerial image into a 1-D vector.\\ \begin{figure}[h] \includegraphics[scale=0.55]{fig_add1.ps} \caption{An illustration of generating the feature vector for a test aerial image. The blue circles in each aerial image denote the sub-RCGs.} \label{fig_add1} \end{figure}\indent The proposed quantization method is based on the distances between the test aerial image and the training ones. The distance is computed using the extracted geometric discriminative features. Given an aerial image, we first extract its geometric discriminative features, each corresponding to a refined structure. Then, as shown in Fig.~\ref{fig_add1}, the aerial image is encoded into a vector $\mathbf{A}=[\alpha_1, \alpha_2\cdots,\alpha_n]^T$, where $n$ is the number of training aerial images and each element is computed as:\\ \begin{equation} \alpha_i\propto \exp\left(-\lambda*\sum\nolimits_S d\left(S_G,S_{G'}\right)\right)\\ \label{eq_6} \end{equation} where the sum runs over the refined structures $S$, $G'$ denotes the RCG of the $i$-th training aerial image, and $\lambda$ is a free parameter to be tuned. In our implementation, we fix $\lambda$ to 0.5 by cross validation. \section{System Overview} Our aerial image categorization system can be divided into the training and the test stages. In the training phase, structure refinement for geometric discriminative feature extraction is conducted. First, each aerial image is segmented into connected regions for building the corresponding RCG. Then, a frequent structure mining algorithm is employed to discover the highly frequent structures in the training RCGs. Next, MSD and MSC are computed for each structure toward a concise set of structures, and structure refinement is carried out to acquire the highly discriminative and low redundant ones. After that, the geometric discriminative features are obtained by extracting the sub-RCGs corresponding to the refined structures. To convert the extracted 2-D geometric discriminative features into 1-D vectors, a quantization scheme computes the distances between the given aerial image and the training samples. Finally, we train an SVM classifier on the vectors from the encoded training samples.\\ \indent In the test phase, given a test aerial image, we first obtain its RCG. Then, the geometric discriminative features are extracted to represent the given aerial image. Similarly, a quantization operation is carried out to convert the aerial image into a vector using the geometric discriminative features. This vector is fed into the trained SVM for aerial image categorization. \section{Experiments and Results Analysis} Experiments are carried out on two data sets.
The first data set contains the aerial images from the Lotus Hill (LHI) data set~\cite{lotus}. It consists of five categories, where each category contains 20 aerial images. Each image is associated with a standard segmentation map. The second data set is our own compiled data set, and it includes aerial images from ten categories. The whole data set contains 2,096 aerial images crawled from Google Earth. The experimental system is equipped with an Intel E8500 CPU and 4GB RAM. All the algorithms are implemented on the Matlab platform. \subsection{Comparative Study} In our experiment, the validation of the proposed geometric discriminative feature is conducted on both the LHI and our own data sets. We compare our geometric discriminative feature with several representative discriminative visual features,~\textit{i.e.}, the global RGB histogram, the intensity-domain spin images~\cite{spin}, the walk/tree kernel~\cite{walk_kernel}, the sparse coding spatial pyramid matching (SC-SPM)~\cite{scSPM}, the locality-constrained spatial pyramid matching (LLC-SPM)~\cite{llcSPM}, and the object bank~\cite{ob}. As the spatial pyramid matching kernel~\cite{beyond} heavily relies on prior knowledge, we do not employ it for comparison. In our implementation, the geometric discriminative features are extracted to encode both the color intensity distribution and the spatial property. In each segmented region, a 4096-dimensional RGB histogram is extracted as its representation. A few example aerial images and their geometric discriminative features are presented.\\ \begin{table*}\tiny \centering \caption{{Recognition rate with standard deviation on our own data set} (the experiment was repeated 10 times; HC is the HOG+color moment with a 1024-sized codebook; the number in each bracket denotes the codebook size; and LR2 and LRG are different regularizers as described in~\cite{ob})} \begin{tabular}{|l|c|c|c|c|c|c|c|c|}\hline Category &Walk kernel &Tree kernel& SPM(200) &SC-SPM(256) & LLC-SPM(256) &OB-SPM(LR1) &SPM(400) &SC-SPM(512)\\\hline Airport &0.882$\pm$0.023 &0.901$\pm$0.032 &0.723$\pm$0.017 &0.721$\pm$0.026 &0.723$\pm$0.017 &0.799$\pm$0.021 &0.811$\pm$0.043 &0.843$\pm$0.021 \\\hline Commer. &0.545$\pm$0.034 &0.532$\pm$0.012 &0.441$\pm$0.023 &0.443$\pm$0.031 &0.334$\pm$0.027 &0.517$\pm$0.036 &0.521$\pm$0.022&0.456$\pm$0.012 \\\hline Indust.& 0.642$\pm$0.021 &0.611$\pm$0.032 &0.521$\pm$0.021 &0.499$\pm$0.041 &0.413$\pm$0.015 &0.512$\pm$0.056 &0.454$\pm$0.033&0.576$\pm$0.018 \\\hline Inter. & 0.645$\pm$0.067 &0.685$\pm$0.011 &0.611$\pm$0.018 &0.643$\pm$0.023 &0.322$\pm$0.031 &0.675$\pm$0.034 &0.674$\pm$0.026&0.634$\pm$0.011\\\hline Park. & 0.523$\pm$0.039 &0.487$\pm$0.017 &0.443$\pm$0.011 &0.512$\pm$0.037 &0.412$\pm$0.021 &0.536$\pm$0.012 &0.512$\pm$0.057&0.496$\pm$0.025\\\hline Railway &0.556$\pm$0.076 &0.578$\pm$0.056 &0.502$\pm$0.032 &0.511$\pm$0.022 &0.521$\pm$0.033 &0.514$\pm$0.013 &0.521$\pm$0.038 &0.596$\pm$0.052\\\hline Seaport& 0.859$\pm$0.051 &0.843$\pm$0.036 &0.774$\pm$0.021 &0.745$\pm$0.034 &0.721$\pm$0.034 &0.766$\pm$0.016 &0.632$\pm$0.043&0.814$\pm$0.009\\\hline Soccer &0.646$\pm$0.021 &0.655$\pm$0.006 &0.576$\pm$0.021 &0.589$\pm$0.023 &0.578$\pm$0.023 &0.568$\pm$0.032 &0.521$\pm$0.045&0.624$\pm$0.032\\\hline Temple &0.503$\pm$0.029 &0.454$\pm$0.031 &0.521$\pm$0.042 &0.567$\pm$0.038 &0.511$\pm$0.031 &0.603$\pm$0.021 &0.534$\pm$0.024&0.565$\pm$0.045\\\hline Univer.
&0.241$\pm$0.045 &0.265$\pm$0.009 &0.289$\pm$0.017 &0.301$\pm$0.021 &0.223$\pm$0.044 &0.304$\pm$0.041 &0.498$\pm$0.03&0.321$\pm$0.012\\\hline Average &0.524$\pm$0.041 &0.601$\pm$0.024 &0.540$\pm$0.022 &0.553$\pm$0.030 &0.4770$\pm$0.033 &0.579$\pm$0.028 &0.568$\pm$0.037&0.593$\pm$0.024\\\hline\hline Category &LLC-SPM (512) &OB-SPM (LRG) &SPM(800) &SC-SPM(1024)& LLC-SPM(1024) &OB-SPM(LRG1)& SPM(HC) & SC-SPM(HC) \\\hline Airport &0.801$\pm$0.021 &0.889$\pm$0.035 &0.799$\pm$0.033 &0.912$\pm$0.015 &0.899$\pm$0.019 &0.872$\pm$0.051 & 0.813$\pm$0.045 & 0.916$\pm$0.023 \\\hline Commer. &0.567$\pm$0.034 &0.565$\pm$0.032 &0.512$\pm$0.032 &0.601$\pm$0.034 &0.521$\pm$0.021 &0.617$\pm$0.034 & 0.519$\pm$0.043 & 0.584$\pm$0.042 \\\hline Indust.& 0.521$\pm$0.025 &0.613$\pm$0.013 &0.585$\pm$0.043 &0.557$\pm$0.032&0.593$\pm$0.019 &0.576$\pm$0.054 &0.598$\pm$0.058 & 0.564$\pm$0.039 \\\hline Inter. & 0.766$\pm$0.036 &0.705$\pm$0.015 &0.644$\pm$0.022 &0.788$\pm$0.014&0.622$\pm$0.035 &0.676$\pm$0.013 & 0.668$\pm$0.041 & 0.791$\pm$0.019 \\\hline Park. & 0.489$\pm$0.032 &0.486$\pm$0.016 &0.503$\pm$0.043 &0.489$\pm$0.043&0.489$\pm$0.055 &0.512$\pm$0.009 & 0.511$\pm$0.057 & 0.487$\pm$0.025 \\\hline Railway &0.553$\pm$0.042 &0.532$\pm$0.053 &0.602$\pm$0.017 &0.601$\pm$0.037&0.599$\pm$0.009 &0.589$\pm$0.010 & 0.614$\pm$0.026 & 0.609$\pm$0.044 \\\hline Seaport& 0.751$\pm$0.036 &0.779$\pm$0.045 &0.815$\pm$0.031 &0.745$\pm$0.034&0.798$\pm$0.032 &0.811$\pm$0.013 & 0.822$\pm$0.039 & 0.751$\pm$0.039 \\\hline Soccer &0.625$\pm$0.026 &0.646$\pm$0.014 &0.634$\pm$0.028 &0.689$\pm$0.036&0.655$\pm$0.014 &0.668$\pm$0.043 & 0.643$\pm$0.037 & 0.693$\pm$0.045 \\\hline Temple &0.567$\pm$0.024 &0.587$\pm$0.027 &0.577$\pm$0.041 &0.689$\pm$0.027&0.556$\pm$0.032 &0.612$\pm$0.025 & 0.587$\pm$0.046 & 0.649$\pm$0.034 \\\hline Univer. &0.409$\pm$0.042 &0.389$\pm$0.018 &0.311$\pm$0.013 &0.582$\pm$0.035&0.281$\pm$0.042 &0.304$\pm$0.011 & 0.324$\pm$0.031 & 0.537$\pm$0.033 \\\hline Average &0.605$\pm$0.032 &0.620$\pm$0.027 &0.606$\pm$0.029 &0.654$\pm$0.033&0.600$\pm$0.027 &0.636$\pm$0.025 & 0.610$\pm$0.042 & 0.658$\pm$0.032 \\\hline Category & LLC-SPM(HC) &Our proposed method &&&&&&\\\hline Airport & 0.904$\pm$0.031 &0.864$\pm$0.051&&&&&&\\\hline Commer. & 0.534$\pm$0.029&0.677$\pm$0.024&&&&&&\\\hline Indust. & 0.598$\pm$0.023&0.555$\pm$0.034&&&&&&\\\hline Inter. & 0.634$\pm$0.046&0.812$\pm$0.021&&&&&&\\\hline Park. & 0.493$\pm$0.064&0.501$\pm$0.061&&&&&&\\\hline Railway & 0.604$\pm$0.005&0.606$\pm$0.033&&&&&&\\\hline Seaport & 0.803$\pm$0.046&0.771$\pm$0.025&&&&&&\\\hline Soccer & 0.659$\pm$0.026&0.663$\pm$0.065&&&&&&\\\hline Temple & 0.574$\pm$0.041&0.665$\pm$0.019&&&&&&\\\hline Univer. & 0.287$\pm$0.049&0.551$\pm$0.034&&&&&&\\\hline Average & 0.609$\pm$0.036&0.667$\pm$0.037&&&&&&\\\hline \end{tabular}\label{tab1a} \end{table*} \indent First, we present a set of discovered discriminative subgraphs. At a glance, we can roughly discriminate the aerial images of the five categories, especially the intersection and marine categories. This demonstrates the necessity of exploiting the relationships among aerial image patches for categorization.\\ \indent Further, to compare the global histogram, the spin images, the walk kernel, and the proposed geometric discriminative feature, we select half of the images for training and leave the rest for testing. As shown in Table~\ref{tab1a}, the proposed feature achieves the best accuracy on average.
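For completeness, the quantization-plus-classification pipeline used in all of the above comparisons, Eq.~(\ref{eq_6}) followed by an SVM, can be sketched as follows. This is an illustration with scikit-learn standing in for our Matlab implementation; \texttt{struct\_dist} is assumed to return the summed structure distances $\sum_S d(S_G,S_{G'})$ between two RCGs.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def encode(rcg, train_rcgs, struct_dist, lam=0.5):
    """Eq. (6): one coordinate per training image; lambda is fixed to 0.5."""
    return np.array([np.exp(-lam * struct_dist(rcg, g)) for g in train_rcgs])

def categorize(train_rcgs, train_labels, test_rcgs, struct_dist):
    X = np.stack([encode(g, train_rcgs, struct_dist) for g in train_rcgs])
    clf = SVC(kernel='rbf').fit(X, train_labels)
    Xt = np.stack([encode(g, train_rcgs, struct_dist) for g in test_rcgs])
    return clf.predict(Xt)
\end{verbatim}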
\subsection{Discussion on different parameter settings}
We notice that the influence of the segmentation operation on the RCG construction is non-negligible. To evaluate the performance under different segmentation settings (\textit{i.e.}, the number of singly connected regions), we perform aerial image recognition on the LHI data set, since its off-the-shelf segmentation benchmark makes a fair comparison possible.\\
\indent Different segmentation settings are employed in our evaluation,~\textit{i.e.}, deficient segmentation and over-segmentation. The MSD values of each aerial image corresponding to the different segmentation settings are computed. We observed that the benchmark segmentation setting achieves the largest MSD value, 6.3, while deficient segmentation and over-segmentation reach 4.9 and 5.7, respectively. Comparatively, more regions are obtained under over-segmentation, which makes it rarer for one region to span several objects. Therefore, when building an RCG from overly segmented regions, fewer discriminative objects are neglected. Further, it is unavoidable that the unsupervised clustering is less accurate than the benchmark segmentation.\\
\begin{table} \centering
\caption{Recognition accuracy under different segmentation schemes}
\begin{tabular}{|c|c|c|c|c|}\hline
Category &Bench. &Defic. &Over. &Multi.\\\hline
Intersection& 0.8 &0.3 &0.8 &0.8\\\hline
Marine & 0.4&0.8 &0.8 &0.9\\\hline
Parking & 0.9& 0.5&0.6 &0.6 \\\hline
Residential & 0.5 &0.7&0.6 &0.7\\\hline
School &0.6 &0.3&0.3&0.6 \\\hline
Average rate &0.64 &0.54 &0.62 &0.72\\\hline
Total topology \# &73 &125&177 &143\\\hline
Selected structure \# &8 &8 &8 &8\\\hline
Average RAG edge \# &37 &26 &57 &41\\\hline
Average RAG vertex \# &19 &16 &31 &19\\\hline
\end{tabular} \label{tab2a}\end{table}
\indent We compare the categorization accuracy under the benchmark segmentation, over-segmentation, and deficient segmentation. As shown in Table~\ref{tab2a}, over-segmentation obtains 2$\%$ lower accuracy than the benchmark segmentation on average. Deficient segmentation performs worse than over-segmentation, yielding the lowest accuracy. The overall recognition result is consistent with what the MSD reflects.\\
\begin{figure}[h] \includegraphics[scale=0.61]{fig11.ps} \caption{The discrimination of the frequent structures under different segmentation schemes.} \label{fig11} \end{figure}
\indent In the structure selection stage, both the MSD and the MSC thresholds influence the obtained structures. Toward an easy parameter tuning process, we set the threshold of MSD to a small value, which allows a large number of candidate structures to qualify. Then, we tune the threshold of MSC to carefully remove the redundant structures. As shown in Fig.~\ref{fig_add2}, we set the threshold of MSD to 0.1 and tune the threshold of MSC. It is observed that the categorization accuracy increases and then levels off once the threshold of MSC reaches 0.65. Thus, we set the thresholds of MSD and MSC to 0.1 and 0.65 in our implementation.
\begin{figure}[h] \includegraphics[scale=0.53]{fig_add2.ps} \caption{The categorization performance under different MSC thresholds.} \label{fig_add2} \end{figure}
\subsection{The compilation of our aerial image data set}
We compiled our data set by searching for aerial images on Google Earth. The whole data set contains 2,096 aerial images from ten categories.
Since the aerial images from cities are usually clearer than those from remote areas, we collected most of our images from metropolises such as New York, Tokyo and Beijing. Due to the varying difficulty of crawling images from the different categories, the number of images in each category varies, as detailed in Table~\ref{tab_5}.
\begin{table} \centering
\caption{The number of images in each category (Air. means airport, Rail. railway, Comme. commercial, Inter. intersection, Temp. temple, Univ. university)}
\begin{tabular}{|l|c|c|c|c|c|} \hline
Category &Air. &Comme. &Industrial &Inter. &Park \\ \hline
Number &306 &262 &206 &302 &129 \\\hline
Category &Rail. &Seaport &Soccer &Temp. &Univ. \\ \hline
Number &115 &126 &128 &218 &305 \\ \hline
\end{tabular} \label{tab_5} \end{table}
\section{Conclusions}
Aerial image categorization is an important component in artificial intelligence and remote sensing~\cite{add4,add5}. In this paper, a new geometric discriminative feature is proposed for aerial image recognition. Both the local features and their geometric property are taken into account to describe an aerial image. A region connected graph (RCG) is defined to encode the geometric property and the color intensity of an aerial image. Then, frequent structures are mined statistically from the training RCGs. Refined structures are further selected from the frequent structures so as to be highly discriminative and lowly redundant. Given a new aerial image, its geometric discriminative features are extracted under the guidance of the refined structures, and they are further quantized into a vector for SVM~\cite{ksvm} classification. We evaluated the effectiveness of our approach on both the public data set and our own.
\section{Appendix}
Ideally, we want a perfect segmentation algorithm with two merits: First, each segmented region represents a semantic object/component. Second, the segmentation algorithm is parameter-free. Thus, we can apply it to segment thousands of training images once and for all, without human-interactive parameter tuning. Unfortunately, regarding the first merit, the high-level features in semantics-exploiting segmentation methods are usually designed manually and are data set dependent, which is inconsistent with the fully automated and data set independent framework of the proposed method. Besides, to learn semantics, such methods typically require well-annotated training images; however, the large number of training aerial images used in our experiment are crawled online, and human annotation is laborious. Regarding the second merit, semantics-exploiting segmentation methods are usually complicated and involve several important user-controlled parameters. Therefore, we can only use data-driven segmentation methods, which explore no semantics and typically contain a single tuning parameter. The well-known data-driven segmentation algorithms can be divided into two groups. Algorithms in the first group need the number of segmented regions as input, such as k-means and normalized cut; however, there is no uniform number of segmented regions across images, because different images usually contain different numbers of components. Algorithms in the second group require some tolerance bound as input, such as the similarity tolerance between spatially neighboring segmented regions. Compared with the number of segmented regions, we empirically found the tolerance bound more flexible to tune.
Therefore, in our approach, we chose data-driven segmentation methods from the second group. After some experimental comparison, we found that unsupervised fuzzy clustering~\footnote{Matlab codes: https://mywebspace.wisc.edu/pwang6/personal/} outperforms several tolerance-bound-based segmentation algorithms, such as graph-based segmentation~\footnote{C++ codes: http://www.cs.brown.edu/~pff/segment/}. Thus, we chose unsupervised fuzzy clustering for our approach.
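A tolerance-bound, data-driven segmenter of the kind discussed above can also be tried through off-the-shelf libraries. As a hypothetical illustration (our own, not the implementation used in our experiments), the graph-based method cited in the footnote is available in scikit-image, where the tolerance is controlled by the \texttt{scale} parameter; the file name and parameter values below are placeholders.
\begin{verbatim}
from skimage import io, segmentation

image = io.imread("aerial_tile.png")   # hypothetical input image
labels = segmentation.felzenszwalb(image, scale=100, sigma=0.8,
                                   min_size=50)
n_regions = labels.max() + 1           # number of singly connected regions
print(n_regions)
\end{verbatim}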
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Supplemental material} \subsection{Preferential attachment model and copying model} The idea behind the preferential attachment algorithm is that new vertices are more likely to attach to existing vertices of high degree. In our simulations we implemented the algorithm proposed in \cite{BoRiSp}, where some ambiguities of the original preferential attachment model \cite{BaAl} were resolved. This algorithm provides a scale-free network with a power-law degree distribution of fixed exponent equal to $3$: $N(d)\propto d^{-3}$, where $N(d)$ is the number of nodes of degree $d$. A drawback of the preferential attachment model is that global knowledge of the degrees of all nodes is required. Moreover, the exponent of the power-law degree distribution is not controllable. The copying model introduced in \cite{KlKuRa} overcomes these drawbacks. It exploits only local structure to generate a power-law degree distribution. One starts from a small fixed initial graph of constant out-degree, and at each time step a pre-existing vertex is chosen uniformly at random. This node is called the copying vertex. For each neighbor of the copying vertex, a link is added from the newly added vertex to that neighbor with probability $1-p$, while with probability $p$ a link is added from the newly added vertex to a vertex chosen uniformly at random. The parameter $p$ yields random graphs with power-law degree distributions with exponent given by ${(2-p)}/{(1-p)}$. \subsection{Equivalence of non-local and local single-excitation Hamiltonians} Here we show that the spectrum of the $N$-level Hamiltonian (acting on an $N$-dimensional Hilbert space) \begin{equation} h=\sum_{i=1}^{N} h_{ii} |i \rangle \langle i| + \sum_{i < j} h_{ij} (|i \rangle \langle j| + |j \rangle \langle i|), \label{eq:nonlocal} \end{equation} is the same as the spectrum of the following spin Hamiltonian (acting on the Hilbert space of $N$ qubits), when restricted to the single-excitation manifold, \begin{equation} H = \sum_{i=1}^{N} h_{ii} \sigma^+_i \sigma^-_i + \sum_{i < j}^N h_{ij} (\sigma^+_i \sigma^-_j + \sigma^+_j \sigma^-_i) \label{eq:local} \end{equation} where $ \sigma_k^{\pm} $ are Pauli ladder operators acting on the $k$th qubit. Since the Hilbert space of the $N$-qubit Hamiltonian is restricted to the single-excitation manifold, it is spanned by $N$ basis vectors which can be put into one-to-one correspondence with the basis vectors of the Hilbert space of the $N$-level Hamiltonian \begin{equation} |i \rangle \leftrightarrow |\uparrow \rangle_i \equiv| \downarrow_1 \cdots \downarrow_{i-1} \; \uparrow_i \; \downarrow_{i+1} \cdots \downarrow_N \rangle. \label{eq:map} \end{equation} Choosing the following representation for the Pauli matrices acting on qubit $j$, \begin{align} \sigma^x_j = |\uparrow \rangle \langle \downarrow|_j + |\downarrow \rangle \langle \uparrow|_j \notag \\ \sigma^y_j = i \left( |\downarrow \rangle \langle \uparrow|_j - |\uparrow \rangle \langle \downarrow|_j \right) \notag \\ \sigma^z_j = |\uparrow \rangle \langle \uparrow|_j - |\downarrow \rangle \langle \downarrow|_j \notag \\ \sigma^+_j = \frac{\sigma^x_j+i\sigma^y_j}{2}=|\uparrow \rangle \langle \downarrow|_j \notag \\ \sigma^-_j = \frac{\sigma^x_j-i\sigma^y_j}{2}=|\downarrow \rangle \langle \uparrow|_j , \end{align} one can derive Eq.~\eqref{eq:local} from Eq.~\eqref{eq:nonlocal} using Eq.~\eqref{eq:map}. The spectrum does not change in this construction since we are simply relabeling the bases of two isomorphic Hilbert spaces.
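This equivalence is easy to check numerically on small systems. The following sketch is our own illustration (assuming NumPy, $N=4$ qubits, and a random symmetric $h$); it builds $H$ in the full $2^N$-dimensional space and verifies that its single-excitation block coincides with $h$, so that the two spectra trivially agree.
\begin{verbatim}
import numpy as np

N = 4
rng = np.random.default_rng(0)
h = rng.normal(size=(N, N))
h = (h + h.T) / 2                       # random symmetric h_{ij}

dn, up = np.array([1., 0.]), np.array([0., 1.])  # |down>, |up>
sp = np.outer(up, dn)                   # sigma^+ = |up><down|
sm = sp.T                               # sigma^-
I2 = np.eye(2)

def embed(op, site):
    # Kronecker-embed a single-qubit operator at position `site`
    out = np.ones((1, 1))
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

H = sum(h[i, i] * embed(sp, i) @ embed(sm, i) for i in range(N))
H = H + sum(h[i, j] * (embed(sp, i) @ embed(sm, j)
                       + embed(sp, j) @ embed(sm, i))
            for i in range(N) for j in range(i + 1, N))

# |i> corresponds to the basis state with qubit i up, all others down
idx = [2 ** (N - 1 - i) for i in range(N)]
block = H[np.ix_(idx, idx)]
print(np.allclose(block, h))            # True: same matrix, same spectrum
\end{verbatim}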
Another way of seeing this is to note that when the Hamiltonian in Eq.~\eqref{eq:local} is not restricted to the single-excitation manifold and one has to diagonalize it, if the excitation number is a conserved quantity, then one can first reduce the Hamiltonian into blocks labeled by the number of excitations and subsequently diagonalize each block. The block labeled by a single excitation is equivalent to Eq.~\eqref{eq:nonlocal} via the mapping in Eq.~\eqref{eq:map}. \subsection{Role of the out-degrees} The WWW graph is characterized by a power-law distribution for both the in- and out-degrees of the nodes. Here we provide numerical evidence supporting the fundamental role played by the out-degrees in activating the polylogarithmic scaling of the average inverse gap as a function of the system size $n$ (the number of vertices in the graph). \begin{figure} \includegraphics[scale=0.3]{fig1SM.eps} \caption{ (color online) Top panel: The average inverse minimum gap scaling for random graphs with only an in-degree power-law distribution. Bottom panel: The average inverse minimum gap scaling for random graphs with only an out-degree power-law distribution. Averaged over 1000 realizations. } \label{Fig:fig1SM} \end{figure} \begin{figure} \includegraphics[scale=0.3]{fig2SM.eps} \caption{ (color online) The inverse of the average minimum gap scaling for preferential attachment random graphs. Top panel: log-log plot. Bottom panel: semi-log plot. Linear fits are poor in both cases. Averaged over 1000 realizations. } \label{Fig:fig2SM} \end{figure} In order to distinguish the effect of the in-degrees from that of the out-degrees, we consider preferential attachment graphs constructed in such a way that only one power-law is present. Starting with preferential attachment networks with only an in-degree power-law distribution, Fig.~\ref{Fig:fig1SM} (top panel) shows the typical behavior of the inverse minimum gap. In this case the scaling is sub-linear, though not logarithmic: $[\delta^{-1}]_{\textrm{ave}} \sim n^{0.65}$. Also shown in Fig.~\ref{Fig:fig1SM} (bottom panel) is the scaling for the reverse graphs, obtained by reversing the direction of each edge. This corresponds to networks in which only the out-degrees are power-law distributed. Remarkably, in this case we find the fit $ [\delta^{-1}]_{\textrm{ave}} \sim (\log_{10} n)^{2.7}$. In Fig.~\ref{Fig:fig2SM} we plot the same data considering the inverse of the average minimum gap, instead of the average of the inverse minimum gap. As expected, the scaling is qualitatively the same, with small quantitative discrepancies. Fig.~\ref{Fig:fig3SM} shows what happens when we consider preferential attachment graphs with identical in- and out-degrees. In this case the graph is equivalent to an undirected graph, and we find non-logarithmic, sub-linear scaling. We display both the double-logarithmic and the semi-logarithmic plots in order to make the distinction clear. \begin{figure} \includegraphics[scale=0.3]{fig3SM.eps} \caption{ (color online) The average inverse minimum gap scaling for undirected preferential attachment random graphs. Top panel: log-log plot. Bottom panel: semi-log plot. Linear fits are poor in both cases. Averaged over 1000 realizations.
} \label{Fig:fig3SM} \end{figure} We note that the quantum adiabatic algorithm can still be useful even in the case of networks with only in-degree power-law distribution, for the preparation not of the pagerank state, but of the so-called \textit{inverse pagerank} \cite{GyGaPe} (used for spam detection). The latter is the pagerank of the reverse graph. The results of the simulations in Figs.~\ref{Fig:fig1SM}-\ref{Fig:fig3SM} suggest that, typically, when the algorithm is unable to prepare the pagerank in polylogarithmic time, it can still prepare the inverse pagerank in polylogarithmic time. \ignore{ The simulations shown in the main text were obtained by mixing (i.e., adding the adjacency matrices of) graphs $\mathcal{G}_A$, with only in-degree power-law distributions, with graphs $\mathcal{G}_B$ with only out-degree power-law distributions. For the family of graphs considered in the simulations reported in the main text, the maximum out-degree for $\mathcal{G}_B$ is approximately $3$ times greater than the maximum in-degree for $\mathcal{G}_A$. } \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Spin-glasses are model systems of statistical mechanics in which simple degrees of freedom interact via random couplings drawn from a given probability distribution \cite{BiYo}. The ensuing interplay between disorder and frustration gives rise to peculiar static and dynamic properties which have made spin-glasses paradigms for complex systems with competing interactions. In this way the concepts and techniques developed for their theoretical understanding \cite{MPV} became invaluable tools in the quantitative analysis of problems originating from such diverse fields as algorithmic complexity \cite{RemiNat,MPZ,HaWe}, game theory \cite{DiOp,EnBe}, artificial neural networks \cite{EnvB}, and cryptography \cite{KaKi}. In the present note we study a spin-glass for which the coupling strengths are drawn from a Levy-distribution. The main characteristic of these distributions is their power-law tails, resulting in diverging moments. Compared to the extensively studied spin-glass models with Gaussian \cite{SK} or other finite-moment distributions \cite{ViBr,KaSo,MePa}, Levy-distributed couplings are interesting for several reasons. On the one hand, the comparatively large fraction of strong bonds gives rise to a mechanism for the glass transition which is different from the usual scenario. On the other hand, these systems pose new challenges to the theoretical analysis because the diverging second moments invalidate the central limit theorem which underlies many mean-field techniques. Related issues of interest include the spectral theory of random matrices with Levy-distributed entries \cite{ranmat1,ranmat2} and relaxation and transport on scale-free networks \cite{AlBa}. It is also interesting to note that the characteristic properties of the Cauchy-distribution have recently enabled progress in the mathematically rigorous analysis of matrix games with random pay-off matrices \cite{Roberts}. The model we consider below with the help of the replica method was previously analyzed by Cizeau and Bouchaud (CB) using the cavity approach \cite{CiBo}. In their paper CB remark that they resorted to the cavity method because they were not able to make progress within the replica framework. This might have been caused by the fact that at that time the central quantity in the replica method was the second moment of the local field distribution, the so-called Edwards-Anderson parameter \cite{EdAn}, which for Levy-distributed couplings is likely to diverge. Later a variant of the replica method was developed to deal with the non-Gaussian local field distributions characteristic of diluted spin glasses and complex optimization problems \cite{Remi}. Until now this approach was used only in situations where the local field distribution is inadequately characterized by its second moment alone and higher moments of the distribution are needed for a complete description. Here we show that the method may also be adapted to situations where the moments do not even exist. \section{The model} We consider a system of $N$ Ising spins $S_i=\pm 1,\, i=1,...,N$ with Hamiltonian \begin{equation} \label{eq:H} H(\{S_i\})=-\frac1 {2N^{1/\alpha}} \sum_{(i,j)} J_{ij} S_i S_j\; , \end{equation} where the sum is over all pairs of spins. The couplings $J_{ij}=J_{ji}$ are i.i.d.
random variables drawn from a Levy distribution $P_\alpha(J)$ defined by its characteristic function \cite{Levydist} \begin{equation} \label{eq:defP} \tilde{P}_\alpha(k):=\int dJ \; e^{-ikJ}\; P_\alpha(J)=e^{-|k|^\alpha} \; \end{equation} with the real parameter $\alpha\in (0,2]$. The thermodynamic properties of the system are described by the ensemble averaged free energy \begin{equation} \label{eq:deff} f(\beta):=-\lim_{N\to\infty}\frac{1}{\beta N}\overline{\ln Z(\beta)}\; , \end{equation} with the partition function \begin{equation} \label{eq:defZ} Z(\beta):=\sum_{\{S_i\}}\exp(-\beta H(\{S_i\}))\; . \end{equation} Here $\beta$ denotes the inverse temperature and the overbar stands for the average over the random couplings $J_{ij}$. \section{Replica theory} To calculate the average in (\ref{eq:deff}) we employ the replica trick \cite{EdAn} \begin{equation} \label{eq:replicatrick} \overline{\ln Z}=\lim_{n\to 0}\frac{\overline{Z^n}-1}{n}\; . \end{equation} As usual we aim at calculating $\overline{Z^n}$ for integer $n$ by replicating the system $n$ times, $\{S_i\}\mapsto \{S^a_i\},\, a=1,...,n$, and then try to continue the results to real $n$ in order to perform the limit $n\to 0$. Due to the algebraic decay $P_\alpha(J)\sim |J|^{-\alpha-1}$ of the distribution $P_\alpha(J)$ for large $|J|$ the average $\overline{Z^n(\beta)}$ does not exist for real $\beta$. On the other hand, for a purely imaginary temperature, $\beta=-ik,\,k\in\mathds{R}$, we find from the very definition of $P_\alpha(J)$, cf. (\ref{eq:defP}) \begin{equation} \label{eq:Zn} \overline{Z^n(-ik)}=\sum_{\{S_i^a\}} \exp \Big(-\frac{|k|^\alpha}{2N}\sum_{i,j} \Big |\sum_a S_i^a S_j^a\Big |^\alpha + {\cal{O}}(1)\Big) \; . \end{equation} Note that the replica Hamiltonian is extensive which justifies a-posteriori the scaling of the interaction strengths with $N$ used in (\ref{eq:H}). The determination of $\overline{Z^n}$ can be reduced to an effective single site problem by introducing the distributions \begin{equation} \label{eq:c} c(\vec{S})=\frac1 N \sum_i\delta(\vec{S}_i,\vec{S})\; , \end{equation} where $\vec{S}=\{S^a\}$ stands for a spin vector with $n$ components. We find after standard manipulations \cite{Remi} \begin{equation} \overline{Z^n(-ik)}= \int\prod_{\vec{S}}dc(\vec{S})\delta(\sum_{\vec{S}}c(\vec{S})-1) \exp\Big(-N\Big[\sum_{\vec{S}}c(\vec{S})\ln c(\vec{S})+\frac{|k|^\alpha}{2} \sum_{\vec{S},\vec{S}'}c(\vec{S})c(\vec{S}') |\vec{S}\cdot\vec{S}'|^\alpha\Big]\Big)\; .\label{eq:Zn2} \end{equation} In the thermodynamic limit, $N\to\infty$, the integral in (\ref{eq:Zn2}) can be calculated by the saddle-point method. The corresponding self-consistent equation for $c(\vec{\sigma})$ has the form \begin{equation} \label{eq:saddle} c(\vec{\sigma})=\Lambda(n)\exp\Big(-|k|^\alpha \sum_{\vec{S}}c(\vec{S})|\vec{S}\cdot\vec{\sigma}|^\alpha\Big) \; , \end{equation} where the Lagrange parameter $\Lambda(n)$ enforces the constraint $\sum_{\vec{S}} c(\vec{S})=1$ resulting from (\ref{eq:c}). \section{Replica symmetry} Within the replica symmetric approximation one assumes that the solution of (\ref{eq:saddle}) is symmetric under permutations of the replica indices implying that the saddle-point value of $c(\vec{S})$ depends only on the sum, $s:=\sum_a S^a$, of the components of the vector $\vec{S}$. 
It is then convenient to determine the distribution of local magnetic fields $P(h)$ from its relation to $c(s)$ as given by \cite{Remi} \begin{equation} \label{eq:Ph} c(s)=\int dh\; P(h)\; e^{-ikhs} \qquad\qquad P(h)=\int \frac{ds}{2\pi}\; e^{ish}\; c(\frac s k) \; . \end{equation} Note that the $P(h)$ defined in this way is normalized only after the limit $n\to 0$ is taken. The distribution of local magnetic fields is equivalent to the free energy $f(\beta)$ since all thermodynamic properties may be derived from suitable averages with $P(h)$ \cite{MePa2}. To get an equation for $P(h)$ from (\ref{eq:saddle}) we need to calculate \begin{align} \sum_{\vec{S}} e^{-ikhs}|\vec{S}\cdot\vec{\sigma}|^\alpha&= \int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha e^{ir\hat{r}} \sum_{\vec{S}}\exp\Big(-ikhs-i\hat{r}\vec{S}\cdot\vec{\sigma}\Big)\\ &=\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha e^{ir\hat{r}} \sum_{\vec{S}}\prod_a \exp\Big(-iS^a(kh+\hat{r}\sigma^a)\Big)\\ &=\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha e^{ir\hat{r}} [2\cos(kh+\hat{r})]^{\frac{n+\sigma}{2}}\;[2\cos(kh-\hat{r})]^{\frac{n-\sigma}{2}}\\ &\rightarrow \int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha e^{ir\hat{r}} \left[\frac{\cos(kh+\hat{r})}{\cos(kh-\hat{r})}\right]^{\frac\sigma 2}\; , \end{align} where the limit $n\to 0$ was performed in the last line and $\sigma:=\sum_a\sigma^a$. Using $\Lambda(n)\to 1$ for $n\to 0$ \cite{Remi} we therefore find from (\ref{eq:saddle}) in the replica symmetric approximation \begin{equation}\label{eq:h1} c(\sigma)= \exp\left(-|k|^\alpha\int dh P(h)\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha \exp\Big(ir\hat{r} +\frac\sigma 2\ln\frac{\cos(kh+\hat{r})}{\cos(kh-\hat{r})}\Big)\right)\; . \end{equation} Using this result in (\ref{eq:Ph}) and performing the transformations $r\mapsto r/k, \hat{r}\mapsto \hat{r} k$ we get \begin{equation} \label{eq:Phrs} P(h)=\int\frac{ds}{2\pi}\exp \left(ish-\int dh' P(h')\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha \exp\Big(ir\hat{r}+\frac s{2k} \ln\frac{\cos(kh'+k\hat{r})}{\cos(kh'-k\hat{r})}\Big)\right)\; . \end{equation} We are now in the position to continue this result back to real values of the temperature by simply setting $k=i\beta$. In this way we find the following self-consistent equation for the replica symmetric field distribution $P(h)$ of a Levy spin-glass at inverse temperature $\beta$ \begin{equation} \label{eq:resrs} P(h)=\int\frac{ds}{2\pi}\exp \left(ish-\int dh' P(h')\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha \exp\Big(ir\hat{r}-i\frac s{2\beta} \ln\frac{\cosh\beta(h'+\hat{r})}{\cosh\beta(h'-\hat{r})}\Big)\right)\; . \end{equation} \section{Spin glass transition} From (\ref{eq:resrs}) we infer that the paramagnetic field distribution, $P(h)=\delta(h)$, is always a solution. To test its stability we plug into the r.h.s. of (\ref{eq:resrs}) a distribution $P_0(h)$ with a small second moment, $\epsilon_0:=\int dh P_0(h)\, h^2 \ll 1$, calculate the l.h.s. (to be denoted by $P_1(h)$) by linearizing in $\epsilon_0$ and compare the new second moment, $\epsilon_1:=\int dh P_1(h)\, h^2$, with $\epsilon_0$. We find $\epsilon_1>\epsilon_0$, {\it i.e.} instability of the paramagnetic state, if the temperature $T$ is smaller than a critical value $T_{f,\alpha}$ determined by \begin{equation} (T_{f,\alpha})^\alpha=-\int\frac{dr\,d\hat{r}}{2\pi} |r|^\alpha e^{ir\hat{r}} \tanh^2 \hat{r} =-\frac{\Gamma(\alpha+1)}{\pi}\,\cos(\frac{\alpha+1}{2} \pi) \int\frac{d\hat{r}}{|\hat{r}|^{\alpha+1}} \tanh^2 \hat{r} . 
\end{equation} This result for the freezing temperature is essentially the same as the one obtained by CB using the cavity method \cite{CiBo}. Our somewhat more detailed prefactor ensures that the limit $\alpha\to 2$ correctly reproduces the value $T_f^{SK}=\sqrt{2}$ of the SK-model \cite{SK}. The dependence of $T_{f,\alpha}$ on $\alpha$ is shown in fig.~\ref{f.1}. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{tclevy1.eps} \end{center} \caption{Freezing temperature $T_{f,\alpha}$ of an infinite-range spin-glass with Levy-distributed couplings as a function of the parameter $\alpha$ of the Levy-distribution defined in (\ref{eq:defP}). For the scaling of the coupling strength with $N$ as chosen in (\ref{eq:H}) there is a finite transition temperature for all values of $\alpha$. In the limit $\alpha\to 2$ the result for the SK-model is recovered.} \label{f.1} \end{figure} The peculiarities of the spin-glass transition in the present system are apparent from the similarity between (\ref{eq:resrs}) and analogous results for strongly diluted spin glasses and disordered spin systems on random graphs \cite{ViBr,Remi,MePa2}. To make this analogy more explicit we rewrite (\ref{eq:resrs}) in a form that allows us to perform the $s$-integration to obtain \begin{align}\nonumber P(h) &=\int\frac{ds}{2\pi} e^{ish} \sum_{d=0}^\infty \frac{(-1)^d}{d!} \int \prod_{i=1}^d \Big(dh_i P(h_i)\frac{dr_i\,d\hat{r}_i}{2\pi} |r_i|^\alpha e^{ir_i\hat{r}_i}\Big) \exp\Big(-i\frac s{2\beta}\sum_{i=1}^d \ln\frac{\cosh\beta(h_i+\hat{r}_i)}{\cosh\beta(h_i-\hat{r}_i)}\Big)\\ &=\sum_{d=0}^\infty \frac{(-1)^d}{d!} \int \prod_{i=1}^d \Big(dh_i P(h_i)\frac{dr_i\,d\hat{r}_i}{2\pi} |r_i|^\alpha e^{ir_i\hat{r}_i}\Big) \;\delta\Big(h-\frac{1}{\beta}\sum_{i=1}^d \tanh^{-1}(\tanh\beta h_i\tanh\beta\hat{r}_i)\Big)\; . \end{align} This form of the self-consistent equation is similar to those derived within the cavity approach for systems with locally tree-like topology \cite{ViBr,Remi,HaWe} and may also form a suitable starting point for a numerical determination of $P(h)$ using a population-dynamical algorithm \cite{MePa2}. \section{Discussion} Infinite-range spin-glasses with Levy-distributed couplings are interesting examples of classical disordered systems. The broad variations in coupling strengths brought about by the power-law tails in the Levy-distribution violate the Lindeberg condition for the application of the central limit theorem and give rise to non-Gaussian cavity field distributions with diverging moments. We have shown that it is nevertheless possible to derive the replica symmetric properties of the system in a compact way by using the replica method as developed for the treatment of strongly diluted spin glasses and optimization problems \cite{Remi}, which focuses from the start on the complete distribution of fields rather than on its moments. Due to the long tails in the distribution of coupling strengths, Levy spin-glasses interpolate between systems with many, i.e.\ $\mathcal{O}(N)$, weak couplings per spin, such as the Sherrington-Kirkpatrick model, and systems with few, i.e.\ $\mathcal{O}(1)$, strong couplings per spin, such as the Viana-Bray model. The majority of the $N-1$ random interactions coupled to each spin are very weak (of order $N^{-1/\alpha}$). These weak couplings will influence only the very low temperature behaviour, which may be expected to be similar to that of the SK-model.
On the other hand, the largest of $N$ random numbers drawn independently from the distribution (\ref{eq:defP}) is of order $N^{1/\alpha}$ \cite{BoGe}, and hence every spin also shares a fraction of strong bonds, $J_{ij}=\mathcal{O}(1)$, which are practically frozen for $|J_{ij}|>1/\beta$. With decreasing temperature a growing backbone of frozen bonds builds up, which percolates at the transition temperature $T_{f,\alpha}$ \cite{CiBo}. The mechanism for the freezing transition is hence rather different from that operating in the Sherrington-Kirkpatrick model and resembles the one taking place in disordered spin systems on random graphs with local tree structure. \acknowledgments I would like to thank Daniel Grieser, R\'emi Monasson and Martin Weigt for clarifying discussions.
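As a practical aside, the freezing temperature defined above is straightforward to evaluate numerically. The sketch below is our own illustration, not part of the original analysis; it assumes SciPy, and the quadrature is well behaved for $\alpha$ well inside the interval $(0,2)$ but becomes delicate near the endpoints. Each value can be checked against the curve in fig.~\ref{f.1}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def t_freeze(alpha):
    # integrand |r|^-(alpha+1) tanh^2(r) is even: integrate r>0, double it
    integral, _ = quad(lambda r: r ** (-(alpha + 1)) * np.tanh(r) ** 2,
                       0.0, np.inf)
    prefactor = -(gamma(alpha + 1) / np.pi) * np.cos((alpha + 1) * np.pi / 2)
    return (2.0 * prefactor * integral) ** (1.0 / alpha)

print(t_freeze(1.5))  # one point on the curve of fig. 1
\end{verbatim}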
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} {\em Event logs} are among the most ubiquitous types of data nowadays. They can be machine generated (server logs, database transactions, sensor data) or human generated (ranging from hospital records to life tracking, a.k.a.\ quantified self), and are bound to become ever more voluminous and diverse with the increasing digitisation of our lives and the advent of the Internet of Things (IoT). Such logs are often the most readily available sources of information on a system or process of interest. It is thus critical to have effective and efficient means to analyse them and extract the information they contain. Many such logs monitor repetitive processes, and some of this repetitiveness is recorded in the logs. A careful analysis of the logs can thus help understand the characteristics of the underlying recurrent phenomena. However, this is not an easy task: a log usually captures many different types of events. Events related to occurrences of different repetitive phenomena are often mixed together as well as with noise, and the different signals need to be disentangled to allow analysis. This can be done by a human expert having a good understanding of the domain and of the logging system, but is tedious and time consuming. {\em Periodic pattern mining} algorithms~\cite{Ozden98} have been proposed to tackle this problem. These algorithms can discover periodic repetitions of sets or sequences of events amidst unrelated events. They exhibit some resistance to noise, when it takes the form of slight variations in the inter-occurrence delay~\cite{Berberidis02} or of the recurrence being limited to only a portion of the data~\cite{Ma01}. However, such algorithms suffer from the traditional plague of pattern mining algorithms: they output too many patterns (up to several millions), even when relying on condensed representations~\cite{LopezCueva12}. Recent approaches have therefore focused on optimising the quality of the extracted {\em pattern set} as a whole~\cite{DRZ07a}, rather than finding individual high-quality patterns. In this context, the adaptation of the Minimal Description Length (MDL) principle~\cite{rissanen1978modeling,grunwald07} to pattern set mining has given rise to a fruitful line of work~\cite{krimp2011,bonchi11krimpro,tatti_long_2012,bhattacharyya_efficiently_2017}. The MDL principle is a concept from information theory based on the insight that any structure in the data can be exploited to compress the data, and aiming to strike a balance between the complexity of the model and its ability to describe the data. The most important structure of the data on which we focus here, i.e.\ of event logs, is the periodic recurrence of some events. For a given event sequence, we therefore want to identify a set of patterns that captures the periodic structure present in the data, and we devise a MDL criterion to evaluate candidate pattern sets for this purpose. First, we consider a simple type of model, representing event sequences with cycles over single events. Then, we extend this model so that cycles over distinct events can be combined together. By simply letting our patterns combine not only events but also patterns recursively, we obtain an expressive language of periodic patterns. 
For instance, it allows us to express the following daily routine: \smallskip {\hspace{-2ex} \parbox{.95\textwidth}{ Starting Monday at $7$:$30$ AM, wake up, then, $10$ minutes later, prepare coffee, repeat every $24$ hours for $5$ days, repeat this every $7$ days for $3$ months }} \smallskip \noindent as a pattern consisting of two nested cycles, with periods of $24$ hours and $7$ days, respectively, over the events ``waking up'' and ``preparing coffee''. In short, we propose a novel approach for mining periodic patterns using a MDL criterion. The main component of this approach---and our main contribution---is the definition of an expressive pattern language and the associated encoding scheme, which allows us to compute a MDL-based score for a given pattern collection and sequence. We design an algorithm for putting this approach into practice and perform an empirical evaluation on several event log datasets. We show that we are able to extract sets of patterns that compress the input sequences and to identify meaningful patterns. We start by reviewing the main related work in Section~\ref{sec:related}. In Section~\ref{sec:problem}, we introduce our problem setting and a simple model consisting of cycles over single events, which we extend in Section~\ref{sec:problem_complex}. \VLongOnly{In Section~\ref{sec:comb}, we look at how patterns can be combined and compare costs. }We present an algorithm for mining periodic patterns that compress in Section~\ref{sec:algo} and evaluate our proposed approach over several event log datasets in Section~\ref{sec:xps}. We reach conclusions in Section~\ref{sec:conclusion}. \VShortOnly{We focus here on the high-level ideas, and refer the interested reader to our report~\cite{extended} that includes technical details, additional examples and experiments.} \VLongOnly{This report extends our conference publication~\cite{conf} with technical details, numerous examples, and additional experiments.} \section{Related Work} \label{sec:related} The first approaches for mining periodic patterns\VLongOnly{~\cite{Ozden98,Han98,Han99} were designed to augment traditional itemset and sequence mining techniques with the capacity to identify events whose occurrences are regularly spaced in time. They} used extremely constrained definitions of the periodicity. In~\cite{Ozden98}, \emph{all} occurrences must be regularly spaced; in~\cite{Han98,Han99}, some missing occurrences are permitted, but all occurrences must follow the same regular spacing. As a result, these approaches are extremely sensitive to even small amounts of noise in the data. Ma \textit{et al.}~\cite{Ma01} later proposed a more robust approach, which can extract periodic patterns in the presence of gaps of arbitrary size in the data\VLongOnly{: the recurrence can be interrupted and restarted, possibly with a different spacing. Such perturbations are frequent in real data}. \VLongOnly{ The}\VShortOnly{While the} above approaches require time to be discretized as a preprocessing step (time steps of hour or day length, for example)\VLongOnly{, smoothing out small changes in inter-occurrence delays and limiting the search for the correct period to a predetermined range. These approaches might be too coarse grained, however, and are dependent on the discretization.
Several}\VShortOnly{, several} solutions have been proposed to directly discover candidate periods from raw timestamp data, using the Fast Fourier Transform~\cite{Berberidis02} or statistical models~\cite{li2012mining,yuan2017pred}.\VLongOnly{ } All of the above approaches are susceptible to producing a huge number of patterns, making the exploitation of their results difficult. The use of a {\em condensed representation} for periodic patterns~\cite{LopezCueva12} allows one to significantly reduce the number of patterns output, without loss of information, but falls short of satisfactorily addressing the problem. \VLongOnly{\medskip} Considering pattern mining more generally, to tackle this pervasive issue of the overwhelming number of patterns extracted, research has focused on extracting {\em pattern sets}~\cite{DRZ07a}: finding a (small) set of patterns that together optimise some interest criterion. One such criterion is based on the Minimum Description Length (MDL) principle~\cite{grunwald_model_2000}. Simply put, it states that \emph{the best model is the one that compresses the data best}. Following this principle, the \algname{Krimp} algorithm~\cite{krimp2011} was proposed, to select a subset of frequent itemsets that yields the best lossless compression of a transactional database. This algorithm was later improved~\cite{SLIM} and the approach extended to analyse event sequences~\cite{tatti_long_2012,goKRIMP,bhattacharyya_efficiently_2017}. Taking a somewhat different approach, Kiernan and Terzi proposed to use MDL to summarize event sequences~\cite{kiernan09}. To the best of our knowledge, the only existing method that combines periodic pattern mining and a MDL criterion was proposed by Heierman \textit{et al.}~\cite{DBLP:conf/icdm/HeiermanC03}. This approach considers a single regular episode at a time and aims to select the best occurrences for this pattern, independently of other patterns. Instead, we use a MDL criterion in order to select a good collection of periodic patterns. \section{Preliminary Notation and Problem Definition} \label{sec:problem} Next, we formally define the necessary concepts and formulate our problem, focusing on simple cycles. \VLongOnly{ But first, let us clarify some of the notation we use throughout. Lists are represented by enumerating their elements in order of occurrence, enclosed between $\langle$ and $\rangle$, as in $\langle i_1, i_2, \dots \rangle$ for instance, with $\lls\lle$ denoting the empty list. We use $\oplus$ to represent the concatenation of lists, as in \[\LL{a, b, c} = \LL{a} \oplus \LL{b, c}\text{ and } \LL{i_1, i_2, \dots, i_9} = \bigoplus_{k \in [1..9]} \LL{i_k}\;.\] Given a list $L$, $L[k]$ returns the element at the $k^{th}$ position (indexing starts at $1$). We also use a simplified notation for lists, especially when using them as indices. Lists and single elements are then denoted respectively as upper-case and lower-case letters or numbers, and concatenation is simply represented by concatenating the corresponding letters. In this notation, we use $0$ to represent the empty list. For instance, the indices in $\ensuremath{B}_0$, $\ensuremath{B}_X$ and $\ensuremath{B}_{Xy}$ represent an empty list, a list $X$, and element $y$ concatenated to the list $X$, respectively. All logarithms are to base $2$. Symbols used are listed on the last page of this report.
} \VShortOnly{\mpara{Event sequences and cycles.}} \VLongOnly{\mpara{A timestamped event sequence as input data.}} Our input data is a collection of timestamped occurrences of some events, which we call an \emph{event sequence}. The events come from an alphabet $\Omega$\label{sym:ABC} and will be represented with lower case letters. We assume that an event can occur only once per time step, so the data can be represented as a list of timestamp--event pairs, such as \VShortOnly{\[ \seqex{1} = \langle (2,c),(3,c),(6,a),(7,a),(7,b),(19,a),(30,a),(31,c),(32,a),(37,b) \rangle \;.\]} \VLongOnly{ \begin{align*} \seqex{1} &= \langle (2,c),(3,c),(6,a),(7,a),(7,b),(19,a),\\ &(30,a),(31,c),(32,a),(37,b),(42,a),(48,c),(54,a) \rangle \;. \end{align*} } Whether timestamps represent days, hours, seconds, or something else depends on the application; the only requirement is that they be expressed as positive integers. We denote as $\seq[\alpha]$\label{sym:seqalpha} the event sequence $\seq$\label{sym:seq} restricted to event $\alpha$\label{sym:alpha}, that is, the subset obtained by keeping only the occurrences of event $\alpha$. \VLongOnly{ For instance, we can represent $\seqex[a]{1}$, the event sequence above restricted to event $a$, simply as a list of timestamps: \begin{align*} \seqex[a]{1} = \langle6,7,19,30,32,42,54\rangle\;. \end{align*} } We denote as $\len{\seq}$\label{sym:lenS} the number of timestamp--event pairs contained in event sequence $\seq$, i.e.\ its \emph{length}, and as $\tspan{\seq}$\label{sym:durationS} the time spanned by it, i.e.\ its \emph{duration}. That is, $\tspan{\seq} = t_{\text{end}}(\seq) - t_{\text{start}}(\seq)$, where $t_{\text{end}}(\seq)$\label{sym:tSend} and $t_{\text{start}}(\seq)$\label{sym:tSstart} represent the largest and smallest timestamps in $\seq$, respectively. \VLongOnly{ Observe that $\len{\seq[\alpha]}$ equals the number of occurrences of $\alpha$ in the original sequence, and that $\tspan{\seq[\alpha]} \leq \tspan{\seq}.$ In the example above we have $\len{\seqex{1}} = 13$, $\len{\seqex[a]{1}} = 7$, $\tspan{\seqex{1}} = 52$ and $\tspan{\seqex[a]{1}} = 48$.} \VShortOnly{\medskip} \VLongOnly{\mpara{Cycles as periodic patterns.}} Given such an event sequence, our goal is to extract a representative collection of cycles. A \emph{cycle} is a periodic pattern that takes the form of an ordered list of occurrences of an event, where successive occurrences appear at the same distance from one another. We will not only consider perfect cycles, where the inter-occurrence distance is constant, but will allow some variation. A cycle is specified by indicating: \VShortOnly{\begin{itemize} \item the repeating event, called \emph{cycle event} and denoted as $\ensuremath{\alpha}$, \item the number of repetitions of the event, called \emph{cycle length}, $\ensuremath{r}$, \item the inter-occurrence distance, called \emph{cycle period}, $\ensuremath{p}$, and \item the timestamp of the first occurrence, called \emph{cycle starting point}, $\ensuremath{\tau}$.
\end{itemize}} \VLongOnly{\begin{itemize} \item the repeating event, called the \emph{cycle event} and denoted as $\ensuremath{\alpha}$\label{sym:Cev}, \item the number of repetitions of the event, called the \emph{cycle length} and denoted as $\ensuremath{r}$\label{sym:Clen} , \item the inter-occurrence distance, called the \emph{cycle period} and denoted as $\ensuremath{p}$\label{sym:Cprd}, and \item the timestamp of the first occurrence, called the \emph{cycle starting point} and denoted as $\ensuremath{\tau}$\label{sym:Cto}. \end{itemize}} Cycle lengths, cycle periods and cycle starting points take positive integer values (we choose to restrict periods to be integers for simplicity and interpretability). More specifically, we require $\ensuremath{r} > 1$, $\ensuremath{p} > 0$ and $\ensuremath{\tau} \geq 0$. In addition, since we allow some variation in the actual inter-occurrence distances, we need to indicate an offset for each occurrence in order to be able to reconstruct the original subset of occurrences, that is, to recover the original timestamps. For a cycle of length $\ensuremath{r}$, this is represented as an ordered list of $\ensuremath{r}-1$ signed integer offsets, called the \emph{cycle shift corrections} and denoted as $\ensuremath{E}$\label{sym:Csc}. Hence, a cycle is a 5-tuple $\ensuremath{C}\label{sym:cycle} = (\ensuremath{\alpha}, \ensuremath{r}, \ensuremath{p}, \ensuremath{\tau}, \ensuremath{E})$. \VLongOnly{Note that since the cycles we consider here involve one event each, we can process the occurrences of each event separately. In other words, we can split the original sequence $\seq$ into subsequences $\seq[\alpha]$, one for each event $\alpha$, and handle them separately.} \VLongOnly{\mpara{A cycle's cover.}} For a given cycle $\ensuremath{C} = (\ensuremath{\alpha}, \ensuremath{r}, \ensuremath{p}, \ensuremath{\tau}, \ensuremath{E})$, with $\ensuremath{E} = \LL{ \ensuremath{e}_1, \dots, \ensuremath{e}_{\ensuremath{r}-1} }$ we can recover the corresponding occurrences timestamps by reconstructing them recursively, starting from $\ensuremath{\tau}$: $t_1 = \ensuremath{\tau}$, $t_k = t_{k-1}+\ensuremath{p}+\ensuremath{e}_{k-1}.$ Note that this is different from first reconstructing the occurrences while assuming perfect periodicity as $\ensuremath{\tau}, \ensuremath{\tau}+\ensuremath{p}, \ensuremath{\tau}+2\ensuremath{p}, \dots, \ensuremath{\tau}+(\ensuremath{r}-1) \ensuremath{p}$, then applying the corrections, because in the former case the corrections actually accumulate. Then, we overload the notation and denote the time spanned by the cycle as $\tspan{C}$\label{sym:Cspan}\VShortOnly{. }\VLongOnly{, that is \begin{align*} \tspan{C} & =t_\ensuremath{r} - t_1 \\ \VLongOnly{&= (t_{\ensuremath{r}-1}+\ensuremath{p}+\ensuremath{e}_{\ensuremath{r}-1}) - \ensuremath{\tau} \\ &= \big((t_{\ensuremath{r}-2}+\ensuremath{p}+\ensuremath{e}_{\ensuremath{r}-2})+\ensuremath{p}+\ensuremath{e}_{\ensuremath{r}-1}\big) - \ensuremath{\tau} \\} &= (\ensuremath{r}-1)\ensuremath{p}+\ensuremath{e}_{1}+\dots+\ensuremath{e}_{\ensuremath{r}-1}\;. \end{align*}} Denoting as $\sumel{\ensuremath{E}}$\label{sym:sumel} the sum of the shift corrections in $\ensuremath{E}$, $\sumel{\ensuremath{E}} = \sum_{\ensuremath{e} \in \ensuremath{E}} \ensuremath{e}$, we have \custmath{\tspan{C} = (\ensuremath{r}-1)\ensuremath{p}+\sumel{\ensuremath{E}}}{.} Note that this assumes that the correction maintains the order of the occurrences. 
This assumption is reasonable since an alternative cycle that maintains the order can be constructed for any cycle that does not. We denote as $\cov{\ensuremath{C}}$\label{sym:cov} the corresponding set of reconstructed timestamp--event pairs \custmath{\cov{\ensuremath{C}} = \{(t_1, \ensuremath{\alpha}), (t_2, \ensuremath{\alpha}), \dots, (t_r, \ensuremath{\alpha})\}}{.} We say that a cycle covers an occurrence if the corresponding timestamp--event pair belongs to the reconstructed subset $\cov{\ensuremath{C}}$. Since we represent time in an absolute rather than relative manner and assume that an event can only occur once at any given timestamp, we do not need to worry about overlapping cycles nor about an order between cycles. Given a collection of cycles representing the data, the original list of occurrences can be reconstructed by reconstructing the subset of occurrences associated with each cycle, regardless of order, and taking the union. We overload the notation and denote as $\cov{\ensuremath{\mathcal{C}}}$ the set of reconstructed timestamp--event pairs for a collection $\ensuremath{\mathcal{C}}$\label{sym:ccycle} of cycles $\ensuremath{\mathcal{C}} =\{\ensuremath{C}_1, \dots, \ensuremath{C}_m\}$, that is \custmath{\cov{\ensuremath{\mathcal{C}}} = \bigcup_{\ensuremath{C} \in \ensuremath{\mathcal{C}}} \cov{\ensuremath{C}}}{.} For a sequence $\seq$ and cycle collection $\ensuremath{\mathcal{C}}$ we call \emph{residual} the timestamp--event pairs not covered by any cycle in the collection: \custmath{\residual{\ensuremath{\mathcal{C}}, \seq}\label{sym:residual} = \seq \setminus \cov{\ensuremath{\mathcal{C}}}}{.} We associate a cost to each individual timestamp--event pair $o = (t, \alpha)$ and each cycle $\ensuremath{C}$, respectively denoted as $\mathit{L}(o)$\label{sym:costCL} and $\mathit{L}(\ensuremath{C})$, which we will define shortly. Then, we can reformulate our problem of extracting a representative collection of cycles as follows: \begin{problem} \label{prob:cycles} Given an event sequence $\seq$, find the collection of cycles $\ensuremath{\mathcal{C}}$ minimising the cost \[\mathit{L}(\ensuremath{\mathcal{C}}, \seq) = \sum_{\ensuremath{C} \in \ensuremath{\mathcal{C}}} \mathit{L}(\ensuremath{C}) + \sum_{o \in \residual{\ensuremath{\mathcal{C}}, \seq}} \mathit{L}(o)\;.\] \end{problem} \mpara{Code lengths as costs.} This problem definition can be instantiated with different choices of costs. Here, we propose a choice of costs motivated by the MDL principle. Following this principle, we devise a scheme for encoding the input event sequence using cycles and individual timestamp--event pairs. The cost of an element is then the length of the code word assigned to it under this scheme, and the overall objective of our problem becomes finding the collection of cycles that results in the shortest encoding of the input sequence, i.e.\ finding the cycles that compress the data most. In the rest of this section, we present our custom encoding scheme. \VShortOnly{Note that all logarithms are to base $2$.} \VLongOnly{For each type of information, we need to determine the most appropriate way to encode it, given the type of patterns we are interested in finding. The following should always be kept in mind \begin{quote} In MDL we are NEVER concerned with actual encodings; we are only concerned with code length functions. 
(Peter D.\ Gr\"{u}nwald 2004) \end{quote} \mpara{Outline of code systems.} Given a collection of symbols $Z$ that we might need to transmit, such as, in our case, the alphabet of events over which our data sequence is expressed or the range of values that the periods might take, and a particular symbol $z$, all we are interested in is the length of the code assigned to $z$, which we denote as $\mathit{L}(z)$, not the actual code. Different code systems can be used, but we focus on those that possess the \emph{prefix property}, meaning that there will not be any two code words in the system such that one is a prefix of the other, making such a code uniquely decodable. For a collection of symbols $Z$, where each symbol $z$ is associated with an occurrence frequency $\mathit{fr}(z)$, the optimal prefix code is such that $\mathit{L}(z) = -\log(\mathit{fr}(z))$. However, this requires that the receiver knows the occurrence frequencies. \emph{Prequential coding} allows one to obtain a code that is almost optimal, without knowing the frequencies. Such a code will assign shorter codes to, and hence favour, frequently occurring values. \emph{Fixed-length codes}, as the name indicates, assign codes of equal length to all values, and hence do not favour any value. Each value is encoded with a code of length $\log(\abs{Z})$. \emph{Universal codes} allow one to encode non-negative integers, assigning shorter codes to smaller numerical values. In particular, the code length assigned to $z$ is $l_{\mathbb{N}}(z) = \log^*(z) + \log(c_0)$, where $c_0$ is a constant which must be adjusted to ensure that the Kraft inequality is satisfied, i.e.\ such that \[\sum_{z \in \mathbb{N}} 2^{-l_{\mathbb{N}}(z)} \leq 1.\] How much small values are favoured compared to larger ones can be adjusted. To avoid wasting bits on unused large values, $c_0$ can be adjusted to ensure that the Kraft inequality is not only satisfied but holds with strict equality. That is, given some upper bound $v$ on the values to encode, we denote as $\univ{v}$ the code length obtained with an adjusted $c_0$ so that \[\sum_{z \in [1..v]} 2^{-\univ{v}(z)} = 1\;.\] \mpara{Choosing the most appropriate encoding for cycles.}} For each cycle we need to specify its event, length, period, starting point and shift corrections, that is \custmath{\mathit{L}(\ensuremath{C}) = \mathit{L}(\ensuremath{\alpha}) + \mathit{L}(\ensuremath{r}) + \mathit{L}(\ensuremath{p}) + \mathit{L}(\ensuremath{\tau}) +\mathit{L}(\ensuremath{E})}{.} It is important to look more closely at the range in which each of these pieces of information takes value, at what values---if any---should be favoured, and at how the values of the different pieces depend on one another. \VLongOnly{Clearly, a cycle over event $\ensuremath{\alpha}$ cannot have a length greater than $\len{\seq^{(\ensuremath{\alpha})}}$. On the other hand, if it has length $\ensuremath{r}$, it cannot have a period greater than $\tspan{\seq^{(\ensuremath{\alpha})}}/(\ensuremath{r}-1)$. Furthermore, once $\ensuremath{\tau}$ is known, the period is further restricted to $(t_{\text{end}}(\seq^{(\ensuremath{\alpha})}) - \ensuremath{\tau})/(\ensuremath{r}-1)$. And vice-versa, if we first fix the period, it creates limitations on the values the length can take, which in turn affects the values the starting point can take. So, we see a clear dependency between these values.
Also note that the maximum values for the period and the starting point depend on the time span of the sequence, while the maximum value for the length depends on the number of occurrences of the event. To avoid wasting bits, it might be useful to normalise the time scale to the smallest encountered time step. \mpara{Encoding with fixed-length codes.} A somewhat naive approach to encode a cycle is to use fixed-length codes for the event, length, period and starting point, and an adjusted universal code for the shift corrections. The magnitude of an individual shift correction can be anywhere between $0$ and $\tspan{\seq}$. So if we let $m=\tspan{\seq}+1$, we can use a code word of length $\univ{m}(\abs{e}+1)$ to indicate the absolute value of shift correction $e$ and add one bit to indicate its direction. Since we can easily determine that the length of a cycle can be no larger than $\len{\seq}$ and that, neglecting the shift corrections, its period and starting point can take values no larger than $\tspan{\seq}/2$ and $\tspan{\seq}$, respectively, we get \begin{align*} \mathit{L}(\ensuremath{C}) =& \mathit{L}(\ensuremath{\alpha}) + \mathit{L}(\ensuremath{r}) + \mathit{L}(\ensuremath{p}) + \mathit{L}(\ensuremath{\tau}) +\mathit{L}(\ensuremath{E}) \\ =& \log(\abs{\Omega}) + \log(\len{\seq}) + \log(\tspan{\seq}/2) + \log(\tspan{\seq}) \\ &+ \sum_{e \in \ensuremath{E}} (\univ{m}(\abs{e}+1) + 1) \;. \end{align*} \mpara{Optimising the encoding.} But we can do better, by exploiting the dependencies between the pieces of information. To encode the cycles' events, we can use either fixed-length coding, as above, or codes based on the events' frequency in the original sequence. In the first case the length of the code word representing the event is constant across all cycles, regardless of the event and only depends on the size of the alphabet. In the second case,}\VShortOnly{To encode the cycles' events, we can use codes based on the events' frequency in the original sequence, so that} events that occur more frequently in the event sequence will receive shorter code words: \custmath{\mathit{L}(\ensuremath{\alpha}) = -\log(\mathit{fr}(\ensuremath{\alpha})) = -\log(\custfrac{\len{\seq[\ensuremath{\alpha}]}}{\len{\seq}})}{.} This requires that we transmit the number of occurrences of each event in the original event sequence. To optimise the overall code length, the length of the code word associated to each event should actually depend on the frequency of the event in the selected collection of cycles. However, this would require keeping track of these frequencies and updating the code lengths dynamically. Instead, we use the frequencies of the events in the input sequence as a simple proxy. \VShortOnly{Clearly, a cycle with event $\ensuremath{\alpha}$ cannot have a length greater than $\len{\seq^{(\ensuremath{\alpha})}}$.} Once the cycle event $\ensuremath{\alpha}$ and its number of occurrences are known, we can encode the cycle length with a code word of length \custmath{\mathit{L}(\ensuremath{r}) = \log(\len{\seq[\ensuremath{\alpha}]})}{,} resulting in the same code length for large numbers of repetitions as for small ones. \VLongOnly{Recall that \[\tspan{\ensuremath{C}} = (\ensuremath{r}-1)\ensuremath{p}+\sumel{\ensuremath{E}}\;.\]} Clearly, a cycle spans at most the time of the whole sequence, i.e.\ ${\tspan{C} \leq \tspan{\seq}}$\VShortOnly{,}\VLongOnly{. 
Hence \[\ensuremath{p} \leq \Big\lfloor\frac{\tspan{\seq}-\sumel{\ensuremath{E}}}{\ensuremath{r}-1}\Big\rfloor\;,\]} so that knowing the cycle length, the shift corrections, and the sequence time span, we can encode the cycle period with a code word of length \[\mathit{L}(\ensuremath{p}) = \log\Big(\Big\lfloor\frac{\tspan{\seq}-\sumel{\ensuremath{E}}}{\ensuremath{r}-1}\Big\rfloor\Big)\;.\] \VLongOnly{ Note that the code word for the period of a cycle will be shorter if the cycle has greater length (since there are more repetitions, the period cannot be as long).} Next, knowing the cycle length and period as well as the sequence time span, \VLongOnly{the starting point $\ensuremath{\tau}$ can take any value between $t_{\text{start}}(\seq)$ and $t_{\text{end}}(\seq) - \tspan{\ensuremath{C}} = t_{\text{end}}(\seq) -\sumel{\ensuremath{E}} - (\ensuremath{r}-1)\ensuremath{p}$. Hence, }we can specify the value of the starting point with a code word of length \[\mathit{L}(\ensuremath{\tau}) = \log(\tspan{\seq} - \sumel{\ensuremath{E}} - (\ensuremath{r}-1)\ensuremath{p} + 1)\;.\] \VLongOnly{Note that if the cycle spans a larger part of the sequence, the range of the starting point is more restricted, and so it can be represented with a shorter code word.} Finally, we encode the shift corrections as follows: each correction $e$ is represented by $\abs{e}$ ones, prefixed by a single bit to indicate the direction of the shift, with each correction separated from the previous one by a zero. For instance, $\ensuremath{E} =\LL{3, -2, 0, 4}$ would be encoded as $\signDG{0}\valDG{111} \sepDG{0}\signDG{1}\valDG{11} \sepDG{0}\signDG{0} \sepDG{0}\signDG{0}\valDG{1111}\sepDG{0}$ with value digits, separating digits and sign digits, in italics, bold and normal font, respectively (the sign bit for zero is arbitrarily set to $0$ in this case). As a result, the code length for a sequence of shift corrections $\ensuremath{E}$ is \custmath{\mathit{L}(\ensuremath{E}) = 2\abs{\ensuremath{E}} + \sum_{e \in \ensuremath{E}} \abs{e}}{.} \VLongOnly{\medskip} Putting everything together, we can write the cost of a cycle $\ensuremath{C}$ as \begin{align*} \mathit{L}(\ensuremath{C}) \VLongOnly{=& \mathit{L}(\ensuremath{\alpha}) + \mathit{L}(\ensuremath{r}) + \mathit{L}(\ensuremath{p}) + \mathit{L}(\ensuremath{\tau}) +\mathit{L}(\ensuremath{E}) \\} =& \log(\len{\seq}) + \log\big(\big\lfloor\frac{\tspan{\seq}-\sumel{\ensuremath{E}}}{\ensuremath{r}-1}\big\rfloor\big) \\ &+ \log(\tspan{\seq} - \sumel{\ensuremath{E}} - (\ensuremath{r}-1)\ensuremath{p} + 1) \VLongOnly{\\ &}+ 2\abs{\ensuremath{E}} + \sum_{e \in \ensuremath{E}} \abs{e} \;. \end{align*} On the other hand, the cost of an individual occurrence $o = (t, \alpha)$ is simply the sum of the cost of the corresponding timestamp and event: \[\mathit{L}(o) = \mathit{L}(t)+\mathit{L}(\alpha) = \log(\tspan{\seq}+1) -\log(\custfrac{\len{\seq[\alpha]}}{\len{\seq}})\;.\] Note that if our goal was to actually encode the input sequence, we would need to transmit the smallest and largest timestamps ($t_{\text{start}}(\seq)$ and $t_{\text{end}}(\seq)$), the size of the event alphabet ($\abs{\Omega}$), as well as the number of occurrences of each event ($\len{\seq[\alpha]}$ for each event $\alpha$) of the event sequence. We should also transmit the number of cycles in the collection ($\abs{\ensuremath{\mathcal{C}}}$), which can be done, for instance with a code word of length $\log(\len{\seq})$. 
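As a concrete illustration of the costs just defined, the following sketch is our own, in plain Python; it assumes a sorted list of at least two integer timestamps, uses base-$2$ logarithms, and writes $\mathit{L}(\ensuremath{\alpha})+\mathit{L}(\ensuremath{r})=\log(\len{\seq})$ as in the combined formula above. It chooses the period as the median inter-occurrence distance, anticipating the optimality argument given at the end of this section.
\begin{verbatim}
import math
from statistics import median

def cycle_cost(ts, seq_len, seq_span):
    # ts: sorted occurrence times of one event (r = len(ts) > 1);
    # seq_len, seq_span: length and time span of the whole sequence
    diffs = [b - a for a, b in zip(ts, ts[1:])]
    p = int(median(diffs))               # median minimises sum |d_i - p|
    E = [d - p for d in diffs]           # shift corrections
    r, sE = len(ts), sum(E)
    cost = math.log2(seq_len)            # L(alpha) + L(r) combine into this
    cost += math.log2((seq_span - sE) // (r - 1))        # L(p)
    cost += math.log2(seq_span - sE - (r - 1) * p + 1)   # L(tau)
    cost += 2 * len(E) + sum(abs(e) for e in E)          # L(E)
    return p, E, cost
\end{verbatim}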
However, since our goal is to compare collections of cycles, we can simply ignore these terms, as they represent a fixed cost that remains constant for any chosen collection of cycles. Finally, consider that we are given an ordered list of occurrences $\LL{t_1, t_2, \dots , t_l}$ of event $\alpha$, and we want to determine the best cycle with which to cover all these occurrences at once. Some of the parameters of the cycle are determined, namely the repeating event $\ensuremath{\alpha}$, the length $\ensuremath{r}$, and the timestamp of the first occurrence~$\ensuremath{\tau}$. All we need to determine is the period $p$ that yields the shortest code length for the cycle. In particular, we want to find $p$ that minimises $\mathit{L}(\ensuremath{E})$. The shift corrections are such that $\ensuremath{E}_{k} = (t_{k+1} - t_{k}) - p$\VLongOnly{ (cf.\ the definition of a cycle's cover)}. If we consider the list of inter-occurrence distances $d_1=t_{2} - t_{1}, d_2=t_{3} - t_{2}, \dots, d_{l-1} = t_{l} - t_{l-1}$, the problem of finding $p$ that minimises $\mathit{L}(\ensuremath{E})$ boils down to minimising $\sum_{d_i} \abs{d_i - p}.$ This is achieved by letting $p$ equal the geometric median of the inter-occurrence distances, which, in the one-dimensional case, is simply the median. Hence, for this choice of encoding for the shift corrections, the optimal cycle covering a list of occurrences can be determined by simply computing the inter-occurrence distances and taking their median as the cycle period. \section{Defining Tree Patterns} \label{sec:problem_complex} So far, our pattern language is restricted to cycles over single events. In practice, however, several events might recur regularly together, and repetitions might be nested with several levels of periodicity. To handle such cases, we now introduce a more expressive pattern language, which consists of a hierarchy of cyclic blocks, organised as a tree. Instead of considering simple cycles specified as 5-tuples $\ensuremath{C} = (\ensuremath{\alpha}, \ensuremath{r}, \ensuremath{p}, \ensuremath{\tau}, \ensuremath{E})$, we consider more general patterns specified as triples $\ensuremath{P}\label{sym:patt} = (\ensuremath{T}, \ensuremath{\tau}, \ensuremath{E})$, where $\ensuremath{T}$\label{sym:ptree} denotes the tree representing the hierarchy of cyclic blocks, while $\ensuremath{\tau}$ and $\ensuremath{E}$ respectively denote the starting point and shift corrections of the pattern, as with cycles. \mpara{Pattern trees.} Each \emph{leaf node} in a pattern tree represents a simple block containing one event. Each \emph{intermediate node} represents a cycle in which the child nodes repeat at a fixed time interval. In other words, each intermediate node represents cyclic repetitions of a sequence of blocks. The root of a pattern tree is denoted as $\ensuremath{B}_0$\label{sym:Pblock}. Using list indices, we denote the children of a node $\ensuremath{B}_{X}$ as $\ensuremath{B}_{X{}1}$, $\ensuremath{B}_{X{}2}$, etc. \VLongOnly{We denote the ordered list of the children of node $\ensuremath{B}_{X}$ as $\child{\ensuremath{B}_{X}}$\label{sym:child}, that is, \custmath{\child{\ensuremath{B}_{X}} = \LL{\ensuremath{B}_{X{}1}, \ensuremath{B}_{X{}2}, \dots}}{.}} All children of an intermediate node except the \VLongOnly{left-most child}\VShortOnly{first one} are associated to their distance to the preceding child, called the \emph{inter-block distance}\VShortOnly{, denoted as $d_{X{}}$ for node $\ensuremath{B}_{X{}}$.}\VLongOnly{.
This distance for node $\ensuremath{B}_{X{}i}$ is denoted as $d_{X{}i}$\label{sym:interBd}, i.e.\ $d_{X{}i}$ represents the time that separates occurrences of node $\ensuremath{B}_{X{}(i-1)}$ and node $\ensuremath{B}_{X{}i}$.} Inter-block distances take non-negative integer values. Each intermediate node $\ensuremath{B}_{X}$ is associated with the period $\ensuremath{p}_{X}$ and length $\ensuremath{r}_{X}$ of the corresponding cycle. Each leaf node $\ensuremath{B}_{Y}$ is associated with the corresponding occurring event $\alpha_{Y}$. An example of an abstract pattern tree is shown in Fig.~\ref{fig:ex_tree_abs}. \VLongOnly{Some concrete pattern trees that we will use as examples are shown in Fig.~\ref{fig:ex_tree1a}--\ref{fig:ex_tree2}.} We call \emph{height} and \emph{width} of the pattern tree---and by extension of the associated pattern---respectively the number of edges along the longest branch from the root to a leaf node and the number of leaf nodes in the tree. \VLongOnly{ \begin{figure} \centering \begin{tikzpicture}[-,auto,node distance=1.2cm, thick] \node[main node] (b0) {}; \node[main node] (b2) [below of=b0] {}; \node[leaf node] (b1) [left of=b2, node distance=2.4cm] {}; \node[main node] (b3) [right of=b2, node distance=2.4cm] {}; \node[ghost node] (b2x) [below of=b2] {}; \node[main node] (b21) [left of=b2x, node distance=.8cm] {}; \node[main node] (b22) [right of=b2x, node distance=.8cm] {}; \node[ghost node] (b3x) [below of=b3, xshift=0.6cm] {}; \node[leaf node] (b31) [left of=b3x, node distance=.8cm] {}; \node[main node] (b32) [right of=b3x, node distance=.8cm] {}; \node[leaf node] (b212) [below of=b21, xshift=.2cm] {}; \node[leaf node] (b211) [left of=b212, node distance=1.2cm] {}; \node[leaf node] (b222) [below of=b22, xshift=.6cm] {}; \node[leaf node] (b221) [left of=b222, node distance=1.2cm] {}; \node[leaf node] (b223) [right of=b222, node distance=1.2cm] {}; \node[leaf node] (b321) [below of=b32] {}; \node[label node] (l0) [above of=b0, xshift=-0.2cm] {\BlockMark{0}}; \node[label node] (l2) [above of=b2] {\BlockMark{2}}; \node[label node] (l3) [above of=b3] {\BlockMark{3}}; \node[label node] (l21) [above of=b21, xshift=-.4cm] {\BlockMark{21}}; \node[label node] (l22) [above of=b22] {\BlockMark{22}}; \node[label node] (l32) [above of=b32] {\BlockMark{32}}; \node[lterm node] (l1) [below of=b1] {\BlockMark{1}}; \node[lterm node] (l31) [below of=b31] {\BlockMark{31}}; \node[lterm node] (l212) [below of=b212] {\BlockMark{212}}; \node[lterm node] (l211) [below of=b211] {\BlockMark{211}}; \node[lterm node] (l222) [below of=b222] {\BlockMark{222}}; \node[lterm node] (l221) [below of=b221] {\BlockMark{221}}; \node[lterm node] (l223) [below of=b223] {\BlockMark{223}}; \node[lterm node] (l321) [below of=b321] {\BlockMark{321}}; \node[prop node] (p0) [below of=l0] {$r_0, p_0$}; \node[prop node] (p2) [below of=l2] {$r_2, p_2$}; \node[prop node] (p3) [below of=l3] {$r_3, p_3$}; \node[prop node] (p21) [below of=l21, anchor=east, xshift=-0.6cm] {$r_{21}, p_{21}$}; \node[prop node] (p22) [below of=l22, anchor=east, xshift=0.2cm] {$r_{22}, p_{22}$}; \node[prop node] (p32) [below of=l32] {$r_{32}, p_{32}$}; \node[pterm node] (p1) [below of=l1] {$\alpha_1$}; \node[pterm node] (p31) [below of=l31] {$\alpha_{31}$}; \node[pterm node] (p212) [below of=l212] {$\alpha_{212}$}; \node[pterm node] (p211) [below of=l211] {$\alpha_{211}$}; \node[pterm node] (p222) [below of=l222] {$\alpha_{222}$}; \node[pterm node] (p221) [below of=l221] {$\alpha_{221}$}; \node[pterm node] (p223) [below of=l223] 
{$\alpha_{223}$}; \node[pterm node] (p321) [below of=l321] {$\alpha_{321}$}; \path (b0) edge (b1) (b0) edge (b2) (b0) edge (b3) (b2) edge (b21) (b2) edge (b22) (b3) edge (b31) (b3) edge (b32) (b21) edge (b211) (b21) edge (b212) (b22) edge (b221) (b22) edge (b222) (b22) edge (b223) (b32) edge (b321); \path[dotted, thin, ->, bend right=20, color=darkgray] (b1) edge node[above] {$d_{2}$} (b2) (b2) edge node[above] {$d_{3}$} (b3) (b21) edge node[above] {$d_{22}$} (b22) (b31) edge node[above] {$d_{32}$} (b32) (b211) edge node[above] {$d_{212}$} (b212) (b221) edge node[above] {$d_{222}$} (b222) (b222) edge node[above] {$d_{223}$} (b223); \end{tikzpicture} \caption{Abstract pattern tree.} \label{fig:ex_tree_abs} \end{figure} } For a given pattern, we can construct a tree of event occurrences by expanding the pattern tree recursively, that is, by appending to each intermediate node the corresponding number of copies of the associated subtree. We call this expanded tree the \emph{expansion tree} of the pattern, as opposed to the contracted \emph{pattern tree} that more concisely represents the pattern. \VLongOnly{When a pattern tree is expanded, several copies of a node can be generated as a result of repetitions in possibly nested cycles. Each node in an expansion is identified with a pair $(n,L)$, where $n$ is the node of the pattern tree that generated the expansion node, and $L$ is a list indicating the specific combination of repetitions of ancestors that produced it.
\begin{figure*}
\centering
% Diagram source omitted (corrupted in extraction); caption and label preserved.
\caption{Expansion of the pattern tree from Fig.~\ref{fig:ex_tree_abs}.}
\label{fig:ex_exp_abs}
\end{figure*}
The expansion tree of the pattern tree of Fig.~\ref{fig:ex_tree_abs} is shown in Fig.~\ref{fig:ex_exp_abs}. Node $(\ensuremath{B}_0, \lls\lle)$ is the root of the expansion tree, $(\ensuremath{B}_{0}, \LL{1})$ is the node generated as the first repetition of pattern node $\ensuremath{B}_{0}$, and $(\ensuremath{B}_{21}, \LL{2, 3})$ is the node generated from node $\ensuremath{B}_{21}$ in the third repetition of pattern node $\ensuremath{B}_{2}$ nested within the second repetition of pattern node $\ensuremath{B}_0$. The notation used to identify nodes in pattern trees and expansion trees makes it easy to navigate the trees. In particular, the left-most leaf among the descendants of a given node $\ensuremath{B}_{X}$ can be obtained by going down the left-most branch, looking at nodes $\ensuremath{B}_{\Bid1}$, $\ensuremath{B}_{\Bid11}$, etc.\ until reaching a leaf. We denote that node, the left-most leaf descendant of $\ensuremath{B}_{X}$, as $\Lchild{\ensuremath{B}_{X}}$\label{sym:Lchild}. Similarly, we denote as $\Lchild{(n, L)}$ the left-most leaf descendant of node $(n, L)$ in the expansion tree, which is such that $\Lchild{(n, L)} = (\Lchild{n}, L')$, where $L' = L \oplus \LL{1,1\dots}$, that is, $L'$ is the list $L$ padded with trailing ones. That is, in addition to always selecting the left-most child, we always select the first repetition of a node when traversing the expansion tree until reaching a leaf. Note that $\Lchild{\ensuremath{B}_{X}} = \ensuremath{B}_{X}$ and $\Lchild{(\ensuremath{B}_{X}, L)} = (\ensuremath{B}_{X}, L)$ if $\ensuremath{B}_{X}$ itself is a leaf node. }\VLongOnly{We use the recursive notation $\BinfoRP{r_{X}}{p_{X}} \textcolor{darkgray}{\big(}{}\activity{\ensuremath{B}_{\Bid1}}\,\BinfoD{d_{X{}2}}\, \activity{\ensuremath{B}_{\Bid2}} \dots\textcolor{darkgray}{\big)}{}$ to represent a block $\ensuremath{B}_{X}$. With this notation, $\ensuremath{T}_1$ from Fig.~\ref{fig:ex_tree1a} is represented as \[\BinfoRP{4}{2} \textcolor{darkgray}{\big(}{}\activity{a}\textcolor{darkgray}{\big)}{}\] and $\ensuremath{T}_7$ from Fig.~\ref{fig:ex_tree2} as \[\BinfoRP{3}{10} \textcolor{darkgray}{\big(}{}\activity{b} \BinfoD{3} \BinfoRP{4}{1} \textcolor{darkgray}{\big(}{}\activity{a}\textcolor{darkgray}{\big)}{} \BinfoD{1} \activity{c}\textcolor{darkgray}{\big)}{}\;.\] \mpara{Reconstructing a pattern's cover.}} We can enumerate the event occurrences of a pattern by traversing its expansion tree and recording the encountered leaf nodes.
\VLongOnly{The expansion tree is traversed in a depth-first left-to-right manner, first travelling through all children in a repetition of a block before moving on to the next repetition. }\VShortOnly{We denote as $\occsStar{\ensuremath{P}}$ this list of timestamp--event pairs reconstructed from the tree, prior to correction.} \VLongOnly{For instance, the traversal of the expansion tree shown in Fig.~\ref{fig:ex_exp_abs} starts from the root node $(\ensuremath{B}_0 , \lls\lle{})$ and first reaches $(\ensuremath{B}_0 , \LL{1})$. Then, child nodes $(\ensuremath{B}_1, \LL{1})$, $(\ensuremath{B}_2, \LL{1})$ and $(\ensuremath{B}_3, \LL{1})$, and their descendants, should be traversed before travelling to the next repetition of $\ensuremath{B}_0$, $(\ensuremath{B}_0 , \LL{2})$. Simply put, pattern edges (represented as thin lines in Fig.~\ref{fig:ex_exp_abs}) take priority over repetition edges (represented as thick lines). We define the following recursive function: \[ \mapOids{\ensuremath{B}_{X}, l}\label{sym:mapOids} = \left\{ \begin{array}{l@{}l} \LL{\ensuremath{B}_{X}, l} & \quad \mbox{if $\ensuremath{B}_{X}$ is a leaf},\\[.5em] \bigoplus_{k \in [1..\ensuremath{r}_{X}]} \bigoplus_{\ensuremath{B}_{X{}i} \in \child{\ensuremath{B}_{X}}} & \mapOids{\ensuremath{B}_{X{}i}, l \oplus \LL{k-1}} \\ & \hfill \mbox{otherwise}.\end{array} \right. \] The list of leaf nodes encountered in the expansion tree during the traversal can be obtained as $\mapOids{\ensuremath{T}} = \mapOids{\ensuremath{B}_0, \lls\lle}$. Using a similar recursive function, following the same traversal of the expansion tree, we can construct the perfect event occurrences. That is, we can recursively construct the list of uncorrected timestamp--event pairs produced by a pattern tree $\ensuremath{T}$, which we denote as $\occsStar{\ensuremath{T}} = \occsStar{\ensuremath{B}_0}$. For this purpose, we first define a function $\shift{S}{t_s}$\label{sym:shift} that shifts a set of event occurrences $S$ by a specified value $t_s$, that is, \[ \shift{S}{t_s} = \{(t_i+t_s, \alpha_i), \quad \forall (t_i, \alpha_i) \in S\}.\] For instance, \begin{align*} \mathit{shift}(&\LL{(2,c),(3,c),(6,a),(7,a)},-1) \\ =& \LL{(1,c),(2,c),(5,a),(6,a)}. \end{align*} Overloading the notation, we let $\occsStar{\ensuremath{B}_{X}}$\label{sym:occStar} denote the list of occurrences associated with $\ensuremath{B}_{X}$. If $\ensuremath{B}_{X}$ is a leaf, $\occsStar{\ensuremath{B}_{X}}$ is a one-element list \[ \occsStar{\ensuremath{B}_{X}} = \LL{(0, \alpha_{X})}.\] If $\ensuremath{B}_{X}$ is an intermediate node, we let $O(\ensuremath{B}_{X})$ denote the concatenation of the lists of occurrences of its children, each one shifted by the accumulated inter-block distances: \[ O(\ensuremath{B}_{X}) = \bigoplus_{\ensuremath{B}_{X{}i} \in \child{\ensuremath{B}_{X}}} \shift{\occsStar{\ensuremath{B}_{X{}i}}}{ \sum_{1 < j \leq i} d_{X{}j} }\;. \] Then the list of occurrences is obtained by concatenating $\ensuremath{r}_{X}$ copies of $O(\ensuremath{B}_{X})$, shifted according to the period $\ensuremath{p}_{X}$: \[ \occsStar{\ensuremath{B}_{X}} = \bigoplus_{k \in [1..\ensuremath{r}_{X}]} \shift{ O(\ensuremath{B}_{X}) }{ (k-1) \cdot \ensuremath{p}_{X} }\;. \] Finally, if the starting point of pattern $\ensuremath{P}$ is $\tau$, we have $\occsStar{\ensuremath{P}} = \shift{\occsStar{\ensuremath{T}}}{\tau}$. The occurrences appear in the list in the order in which they are generated during the expansion, which does not necessarily match the order of the timestamps.
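}
The reconstruction is easy to make concrete; the following minimal Python sketch mirrors the recursive construction of the perfect occurrences (the tuple-based tree representation and all names are our own):
\begin{verbatim}
def shift(occs, t_s):
    # shift every (timestamp, event) pair by t_s
    return [(t + t_s, a) for (t, a) in occs]

def occs_star(node):
    # Perfect, uncorrected occurrences of a pattern-tree node, listed
    # in expansion order.  A leaf is ('leaf', alpha); an intermediate
    # node is ('cycle', r, p, children), where children is a list of
    # (inter-block distance, subtree) pairs, with distance 0 for the
    # first child.
    if node[0] == 'leaf':
        return [(0, node[1])]
    _, r, p, children = node
    block, offset = [], 0
    for d, sub in children:            # one repetition of the block
        offset += d
        block += shift(occs_star(sub), offset)
    occs = []
    for k in range(r):                 # r repetitions, period p apart
        occs += shift(block, k * p)
    return occs

# a tree shaped like T_7: 3 repetitions, period 10, of
# [ b, distance 3, a cycle of 4 a's with period 1, distance 1, c ]
t7 = ('cycle', 3, 10, [(0, ('leaf', 'b')),
                       (3, ('cycle', 4, 1, [(0, ('leaf', 'a'))])),
                       (1, ('leaf', 'c'))])
print(shift(occs_star(t7), 0))         # starting point tau = 0
\end{verbatim}
Note that the pairs are produced in expansion order: for this tree, the occurrence $(4,c)$ of the first repetition is listed after $(6,a)$, so the sequence of timestamps is not monotone.
\VLongOnly{%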
More specifically, if the sequence of timestamps in $\occsStar{\ensuremath{T}}$ is not monotone, we say that the pattern tree $\ensuremath{T}$ (and the associated pattern $\ensuremath{P}$) is \emph{interleaved}. If a pattern tree is not interleaved, all events constituting a repetition of a block must occur at the latest when an event of the following repetition occurs. If several events occur at the same time, we say that the pattern tree has \emph{overlaps}. For example, pattern trees $\ensuremath{T}_3$ and $\ensuremath{T}_4$ cover the same occurrences, but $\ensuremath{T}_4$ is interleaved while $\ensuremath{T}_3$ is not. Both patterns $\ensuremath{T}_6$ and $\ensuremath{T}_7$ have overlaps, but $\ensuremath{T}_7$ is interleaved while $\ensuremath{T}_6$ is not. We denote as $o_i$ the $i^\text{th}$ event occurrence generated by $\ensuremath{T}$, and let $\occsStar{o_i}$ be the corresponding timestamp--event pair and $\mapOids{o_i}$ be the corresponding expansion leaf node, i.e.\ mapping $o_i$ to the elements at position $i$ in $\occsStar{\ensuremath{T}}$ and $\mapOids{\ensuremath{T}}$, respectively.} As for the simple cycles, we will not only consider perfect patterns but will allow some variations. For this purpose, a list of shift corrections $\ensuremath{E}$ is provided with the pattern, which contains a correction for each occurrence except the first one, i.e.\ $\abs{\ensuremath{E}} = \abs{\occsStar{\ensuremath{P}}}-1$. \VLongOnly{ By applying the shift corrections in $\ensuremath{E}$ to the perfect occurrences in $\occsStar{\ensuremath{P}}$, we can generate the list of corrected occurrences for pattern $\ensuremath{P}$, denoted as $\occs{\ensuremath{P}}$\label{sym:occs}. The corrections are listed in $\ensuremath{E}$ in the same order as the leaf nodes are encountered in the expansion tree. Therefore, the correction associated to occurrence $o_i$ is the element at position $i-1$ in $\ensuremath{E}$, i.e.\ $\ensuremath{E}[i-1]$, which we also denote as $\ensuremath{E}(o_i)$ or $\ensuremath{E}((n,L))$, where $(n,L)$ is the corresponding expansion node. For ease of notation, we let $\ensuremath{E}(o_1)=0$, since the left-most occurrence $o_1$ has no correction. }However, as for simple cycles, corrections accumulate over successive occurrences, and we cannot recover the list of corrected occurrences $\occs{\ensuremath{P}}$ by simply adding the individual corrections to the elements of $\occsStar{\ensuremath{P}}$. Instead, we first have to compute the accumulated corrections for each occurrence. \VLongOnly{In addition to its own correction, the corrections that should be applied to an occurrence come from the offsets of its left siblings in multi-event blocks and the offsets of previous repetitions in cycles the occurrence belongs to. Algorithm~\ref{alg:coco} shows the procedure---named \algname{CoCo}{}---that can be used to collect the occurrences whose individual corrections impact occurrence $o$ (recall that $\Lchild{}$ returns the left-most leaf descendant of a node). Then, the correction to be applied to the timestamp of $o$ is \[\cume{o}\label{sym:cume} = \ensuremath{E}(o) + \sum_{o_k \in \algname{CoCo}(o)} \ensuremath{E}(o_k)\;.\] The corrected occurrence timestamps can thus be reconstructed by shifting the perfect timestamp by the corresponding correction, i.e.\ $\occs{o_i} = \occsStar{o_i} +\cume{o_i}$.
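}
One way to collect these accumulated corrections is sketched below: for an occurrence, we gather the corrections of the left-most leaf under each of its left siblings and under each previous repetition of the cycles it belongs to, plus, recursively, those applying to its parent. This is only a sketch: we assume an explicit expansion tree whose nodes record these links, and all names are our own.
\begin{verbatim}
class XNode:
    def __init__(self, parent=None, left_sibs=(), prev_reps=()):
        self.parent = parent               # enclosing block instance
        self.left_sibs = list(left_sibs)   # earlier children, same repetition
        self.prev_reps = list(prev_reps)   # earlier repetitions, parent cycle
        self.children = []                 # repetition heads, then children

def lmost(n):
    # left-most leaf descendant: first repetition, first child, repeatedly
    while n.children:
        n = n.children[0]
    return n

def coco(o):
    # occurrences whose individual corrections shift occurrence o
    if o.parent is None:
        return []
    out = [lmost(s) for s in o.left_sibs]   # left siblings
    out += [lmost(r) for r in o.prev_reps]  # previous repetitions
    return out + coco(o.parent)

def cume(o, E):
    # accumulated correction: own correction plus all collected ones,
    # with E a mapping from expansion leaves to individual corrections
    return E.get(o, 0) + sum(E.get(x, 0) for x in coco(o))
\end{verbatim}
\VLongOnly{%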
\begin{algorithm}[tb] \caption{\algname{CoCo}: Collect occurrence corrections.} \label{alg:coco} \begin{algorithmic}[1] \Require An occurrence $o$ \Ensure A set of occurrences whose corrections apply to $o$ \If{$o = (\ensuremath{B}_0, \LL{})$}\Comment{Root of pattern} \State $\omega \gets \emptyset$ \EndIf \If{$o = (\ensuremath{B}_{Xy}, Uv)$} \State $\omega \gets \{ \Lchild{(\ensuremath{B}_{Xy'}, Uv)},\, y' < y \}$ \Comment{Left-siblings} \State $\omega \gets \omega \,\cup\,\, \{ \Lchild{(\ensuremath{B}_{X}, Uv')}, v' < v \}$ \Comment{Previous repetitions} \State $\omega \gets \omega \,\cup\, $\algname{CoCo}$((\ensuremath{B}_{X}, U))$ \Comment{Recurse for parent} \EndIf \State \textbf{return} $\omega$ \end{algorithmic} \end{algorithm} } \VShortOnly{ \begin{figure} \centering \setlength{\ndhlen}{1.cm} \setlength{\ndwlen}{.8cm} \newcommand{\treeSmallH}[3]{ \node[main node] (b1#1) at (#2) {}; \node[leaf node] (b2#1) [below of=b1#1] {}; \node[pterm node, xshift=-.2cm] (p2#1) [below of=b2#1] {$#3$}; \path (b1#1) edge (b2#1); } \newcommand{\treeSmallCH}[2]{ \node[main node] (b2#1) at (#2) {}; \node[ghost node] (b2x#1) [below of=b2#1] {}; \node[leaf node] (b21#1) [left of=b2x#1, node distance=\ndwlen] {}; \node[leaf node] (b22#1) [right of=b2x#1, node distance=\ndwlen] {}; \node[pterm node, xshift=-.2cm] (p21#1) [below of=b21#1] {$a$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$b$}; \path (b2#1) edge (b21#1) (b2#1) edge (b22#1); } \newcommand{\treeSmallV}[2]{ \node[ghost node] (b0#1) at (#2) {}; \node[main node] (b2#1) [below of=b0#1] {}; \node[ghost node] (b2x#1) [below of=b2#1] {}; \node[leaf node] (b21#1) [left of=b2x#1, node distance=\ndwlen] {}; \node[leaf node] (b22#1) [right of=b2x#1, node distance=\ndwlen] {}; \node[pterm node, xshift=-.2cm] (p21#1) [below of=b21#1] {$c$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$e$}; \path (b2#1) edge (b21#1) (b2#1) edge (b22#1); } \newcommand{\treeSmallCV}[2]{ \node[main node] (b0#1) at (#2) {}; \node[main node] (b2#1) [below of=b0#1] {}; \node[ghost node] (b2x#1) [below of=b2#1] {}; \node[leaf node] (b21#1) [left of=b2x#1, node distance=\ndwlen] {}; \node[leaf node] (b22#1) [right of=b2x#1, node distance=\ndwlen] {}; \node[pterm node, xshift=-.2cm] (p21#1) [below of=b21#1] {$c$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$e$}; \path (b0#1) edge (b2#1) (b2#1) edge (b21#1) (b2#1) edge (b22#1); } \begin{tikzpicture}[-,auto,node distance=\ndhlen, thick] \node[main node] (b0) at (-3.2,-.2) {}; \node[ghost node] (bx) [below of=b0] {}; \node[main node] (b2) [left of=bx, node distance=\ndwlen] {}; \node[leaf node] (b1) [right of=bx, node distance=\ndwlen] {}; \node[ghost node] (b2x) [below of=b2] {}; \node[leaf node] (b21) [left of=b2x, node distance=\ndwlen] {}; \node[leaf node] (b22) [right of=b2x, node distance=\ndwlen] {}; \node[label node] (l0) [above left of=b0, xshift=-.4cm] {\BlockMark{0}}; \node[label node] (l2) [above left of=b2, xshift=-.4cm] {\BlockMark{1}}; \node[lterm node] (l21) [below of=b21] {\BlockMark{11}}; \node[lterm node] (l22) [below of=b22] {\BlockMark{12}}; \node[lterm node] (l1) [below of=b1] {\BlockMark{2}}; \node[prop node] (p0) [below of=l0, xshift=-.5cm] {$r_0, p_0$}; \node[prop node] (p2) [below of=l2, xshift=-.5cm] {$r_1, p_1$}; \node[pterm node] (p21) [below of=l21] {$\alpha_{11}$}; \node[pterm node] (p22) [below of=l22] {$\alpha_{12}$}; \node[pterm node] (p1) [below of=l1] {$\alpha_2$}; \path (b0) edge (b1) (b0) edge (b2) (b2) edge (b21) (b2) edge (b22); \path[dotted, thin, ->, 
bend right=20, color=darkgray] (b2) edge node[above] {$d_{2}$} (b1) (b21) edge node[above] {$d_{12}$} (b22); \setlength{\ndhlen}{.5cm} \setlength{\ndwlen}{.4cm} \node[anchor=east] at (2.2,-.1) {a) $\algname{GrowHorizontally}$:}; \node[anchor=east] at (2.2,-1.5) {b) $\algname{GrowVertically}$:}; \treeSmallV{a}{0,-1.5} \treeSmallV{b}{1.2,-1.5} \node at (2.5,-2.3) {$\dots$}; \treeSmallV{d}{3.8,-1.5} \node at (5.1,-2.3) {$\longrightarrow$}; \treeSmallCV{x}{6.5,-1.5} \treeSmallH{f}{2.6,.2}{a} \treeSmallH{g}{3.8,.2}{b} \node at (5.1,-.1) {$\longrightarrow$}; \treeSmallCH{y}{6.5,.2} \end{tikzpicture} \caption{Abstract pattern tree and examples of growing patterns through combinations.} \label{fig:ex_tree_abs} \label{fig:ex_tree_grow} \end{figure} } \mpara{Encoding the patterns.} To transmit a pattern, we need to encode its pattern tree, as well as its starting point and shift corrections. Furthermore, to encode the pattern tree, we consider separately its event sequence, its cycle lengths, its top-level period, and the other values, as explained below. First, we encode the events in the leaves of the pattern tree, traversing the tree from left to right, depth-first\VLongOnly{, enclosing blocks between parentheses}. \VLongOnly{The string representing the events in the pattern tree is defined recursively as follows: \[ \evtseqfun{\ensuremath{B}_{X}} = \left\{ \begin{array}{ll} \text{`}\ensuremath{\alpha}_{X}\text{'} & \mbox{if $\ensuremath{B}_{X}$ is a leaf},\\ \text{`('} \oplus \big( \bigoplus_{\ensuremath{B}_Y \in \child{\ensuremath{B}_{X}}} \evtseqfun{\ensuremath{B}_{Y}} \big) \oplus \text{`)'} & \mbox{otherwise}.\end{array} \right. \]} We denote as $A$\label{sym:evtseq} the string\VLongOnly{ $\evtseqfun{\ensuremath{B}_0}$ for the top-level block of the tree of a pattern,} representing its event sequence. We encode each symbol $s$ in the string $A$ using a code of length $\mathit{L}(s)$, where $\mathit{L}(s)$ depends on the frequency of $s$, adjusted to take into account the additional symbols `(' and `)', used to delimit blocks. \VLongOnly{In particular, we set the code length for the extended alphabet as \[\mathit{L}(\text{`('})=\mathit{L}(\text{`)'}) = -\log(\frac{1}{3})\] for the block delimiters, and \[\mathit{L}(\ensuremath{\alpha}) = -\log\big(\frac{\len{\seq[\ensuremath{\alpha}]}}{3\len{\seq}}\big)\] for the original events.} \VShortOnly{ In particular, we set the code lengths for the extended alphabet such that $\mathit{L}(\text{`('})=\mathit{L}(\text{`)'}) = -\log(1/3)$ for the block delimiters, and $\mathit{L}(\alpha) = -\log(\len{\seq[\ensuremath{\alpha}]}/(3\len{\seq}))$ for the original events.} Next, we encode the cycle lengths, i.e.\ the values $\ensuremath{r}_{X}$ associated to each intermediate node $\ensuremath{B}_{X}$ encountered while traversing the tree depth-first and from left to right, as a sequence of values, and denote this sequence $R$. For a block $\ensuremath{B}_{X}$, the number of repetitions of the block cannot be larger than the number of occurrences of the least frequent event participating in the block\VShortOnly{, denoted as $\lensfun{\ensuremath{B}_{X}}$}.
\VLongOnly{Formally, the cycle length $\ensuremath{r}_{X}$ of a block $\ensuremath{B}_{X}$ can take at most a value $\lensfun{\ensuremath{B}_{X}}$ defined recursively as follows: \[ \lensfun{\ensuremath{B}_{X}} = \left\{ \begin{array}{ll} \abs{\seq[\ensuremath{\alpha}_{X}]} & \mbox{if $\ensuremath{B}_{X}$ is a leaf},\\ \min_{\ensuremath{B}_Y \in \child{\ensuremath{B}_{X}}} \lensfun{\ensuremath{B}_Y} & \mbox{otherwise}.\end{array} \right. \] } We can thus encode the sequence of cycle lengths $R$ with a code of length \[\mathit{L}(R) = \sum_{\ensuremath{r}_X \in R} \mathit{L}(\ensuremath{r}_X) = \sum_{\ensuremath{r}_X \in R} \log\big(\lensfun{\ensuremath{B}_{X}}\big)\;.\] Knowing the cycle lengths $R$ and the structure of the pattern tree from its event sequence $A$, we can deduce the total number of events covered by the pattern\VShortOnly{.}\VLongOnly{, $N(\ensuremath{B}_0)$, using the following formula \[ N(\ensuremath{B}_{X}) = \left\{ \begin{array}{ll} 1 & \mbox{if $\ensuremath{B}_{X}$ is a leaf},\\ \ensuremath{r}_{X} \cdot \sum_{\ensuremath{B}_Y \in \child{\ensuremath{B}_{X}}} N(\ensuremath{B}_Y) & \mbox{otherwise}.\end{array} \right. \] } The shift corrections for the pattern consist of the correction to each event occurrence except the first one\VLongOnly{ (assumed not to require correction)}. This ordered list of\VLongOnly{ $N(\ensuremath{B}_{0})-1$} values can be transmitted using the same encoding as for the simple cycles. In simple cycles, we had a unique period characterising the distances between occurrences. Instead, with these more complex patterns, we have a period $\ensuremath{p}_{X}$ for each intermediate node $\ensuremath{B}_{X}$, as well as an inter-block distance $d_{X}$ for each node $\ensuremath{B}_{X}$ that is not the left-most child of its parent. First, we transmit the period of the root node of the pattern tree, $\ensuremath{B}_0$. In a similar way as with simple cycles, we can deduce the largest possible value for $\ensuremath{p}_0$ from $\ensuremath{r}_0$ and $\ensuremath{E}$. Since we do not know when the events within the main cycle occur, we make the assumption that leads to the largest possible value for $\ensuremath{p}_0$, that is, we assume that all the events within each repetition of the cycle happen at once, so that each repetition spans no time at all. \VLongOnly{The corrections that must be taken into account are those applying to the left-most leaf of each repetition of the main cycle. These are exactly the corrections accumulated in $\cume{o_{za}}$ where $o_{za}$ is the first occurrence of the last repetition of the main cycle, i.e.\ $o_{za} = \Lchild{(\ensuremath{B}_0, \LL{\ensuremath{r}_0})}.$ Thus we have \[\mathit{L}(\ensuremath{p}_0) = \log\Big(\Big\lfloor\frac{\tspan{\seq}-\cume{o_{za}}}{\ensuremath{r}_0-1}\Big\rfloor\Big)\;.\] Once the main period is known, we can use the same principle as for simple cycles to transmit the starting point, and we have \[\mathit{L}(\ensuremath{\tau}) = \log(\tspan{\seq} - \cume{o_{za}} - (\ensuremath{r}_0-1)\ensuremath{p}_0 + 1)\;.\] We denote as $\tspanStar{\ensuremath{B}_X{}}$\label{sym:tspanStar} the time spanned by the entire cycle of block $\ensuremath{B}_X{}$, that is, the time spanned by the $\ensuremath{r}_X{}$ repetitions of the block. We denote as $\tspanRepStar{\ensuremath{B}_X{}}$\label{sym:tspanRepStar} the time spanned by a single repetition of the block. Note that here we consider the perfect occurrences of the block, before applying the corrections.
In this case all repetitions span the same time, which might no longer be true after correction. In Fig.~\ref{fig:ex_timelinesP8} we provide a timeline schema of the first occurrences of pattern $(\ensuremath{T}_8, 0, \textbf{0})$, i.e.\ the pattern consisting of the pattern tree $\ensuremath{T}_8$ from Fig.~\ref{fig:ex_tree2}, with starting point $0$ and no shift corrections. We indicate the time spanned by different blocks and their maximum value assuming interleaving is not allowed. \begin{figure}[tbp] \centering \timelinePEight{$(\ensuremath{T}_8, 0, \textbf{0})$} \caption{Pattern $(\ensuremath{T}_8, 0, \textbf{0})$ partially shown on timeline (maximum time spans assume interleaving is not allowed).} \label{fig:ex_timelinesP8} \end{figure} Suppose we know $\tspanStar{\ensuremath{B}_X{}}$. Then, in order for $\ensuremath{r}_X{}$ repetitions (equally long, but potentially spanning no time at all) to happen within time $\tspanStar{\ensuremath{B}_X{}}$, $\ensuremath{p}_X{}$ must satisfy $\ensuremath{p}_X{} \leq \lfloor\tspanStar{\ensuremath{B}_X{}}/(\ensuremath{r}_X{}-1)\rfloor$ and can therefore be represented with a code word of length \[\mathit{L}(\ensuremath{p}_X{}) = \log\Big(\Big\lfloor\frac{\tspanStar{\ensuremath{B}_X{}}}{\ensuremath{r}_X{}-1}\Big\rfloor\Big)\;.\] If we do not allow interleaving, each repetition can span at most $\lfloor\tspanStar{\ensuremath{B}_X{}}/\ensuremath{r}_X{}\rfloor$, and also no longer than $\ensuremath{p}_X{}$. On the other hand, if we do allow interleaving, each repetition can have a time span of at most $\tspanStar{\ensuremath{B}_X{}} - \ensuremath{r}_X{} +1$. Thus, the maximum time span of a repetition is \[ \maxtspanRepStar{\ensuremath{B}_{X}}\label{sym:maxtspanRepStar} = \left\{ \begin{array}{l} \tspanStar{\ensuremath{B}_X{}} - \ensuremath{r}_X{} +1 \\ \hspace{1.5cm} \mbox{if interleaving is allowed},\\ \min(\ensuremath{p}_X{}, \lfloor\tspanStar{\ensuremath{B}_X{}}/\ensuremath{r}_X{}\rfloor) \quad \mbox{otherwise}.\end{array} \right. \] Obviously, the sum of the distances between the children of the block cannot be larger than the time span of a repetition. Therefore, we can represent the distances between the children of $\ensuremath{B}_X{}$ with code words such that \[\sum_{\ensuremath{B}_{X{}i} \in \child{\ensuremath{B}_{X}}, i > 1} \mathit{L}(d_{X{}i}) = (\abs{\child{\ensuremath{B}_X{}}}-1) \cdot \log\big(\maxtspanRepStar{\ensuremath{B}_{X}} + 1\big)\;.\] We can then determine the maximum span of each child of a block. If interleaving is allowed, the child can span as much time as is left in the time span of its parent after accounting for the distances of the left siblings: \[ \maxtspanStar{\ensuremath{B}_{X{}i}}\label{sym:maxtspanStar} = \maxtspanRepStar{\ensuremath{B}_X{}} - \sum_{1 \leq j \leq i} \interd{X{}j}. \] Alternatively, if interleaving is not allowed, all events of the child must occur before the first event of the next sibling: \[ \maxtspanStar{\ensuremath{B}_{X{}i}} = \left\{ \begin{array}{l@{}l} \maxtspanRepStar{\ensuremath{B}_X{}} & - \sum_{j \neq i} \interd{X{}j} \\ & \quad \mbox{if $\ensuremath{B}_{X{}i}$ is the right-most child},\\ \interd{X{}(i+1)} & \mbox{otherwise}.\end{array} \right. \] Note that $\interd{X{}(i+1)}$ is not defined if $\ensuremath{B}_{X{}i}$ is the right-most child of the block.
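}
The available time spans thus bound each period and inter-block distance from above. The following Python fragment sketches how the corresponding code lengths can be accumulated over a block's subtree, given the time span available to its whole cycle (the tuple-based tree representation and names are our own; we assume the spans are large enough for all logarithm arguments to be positive):
\begin{verbatim}
from math import log2

def spans_cost(node, span, interleaving=False):
    # Total code length of the periods and inter-block distances below
    # `node`, given the span available to its whole cycle.  A leaf is
    # ('leaf', alpha); an intermediate node is ('cycle', r, p, children)
    # with children a list of (inter-block distance, subtree) pairs,
    # distance 0 for the first child.
    if node[0] == 'leaf':
        return 0.0
    _, r, p, children = node
    bits = log2(span // (r - 1))                 # period of this block
    rep = span - r + 1 if interleaving else min(p, span // r)
    bits += (len(children) - 1) * log2(rep + 1)  # inter-block distances
    dists = [d for d, _ in children]
    for i, (d, sub) in enumerate(children):
        if interleaving:                 # span left after the distances
            child_span = rep - sum(dists[:i + 1])
        elif i + 1 < len(children):      # must end before the next sibling
            child_span = dists[i + 1]
        else:                            # right-most child
            child_span = rep - (sum(dists) - d)
        bits += spans_cost(sub, child_span, interleaving)
    return bits
\end{verbatim}
The root block is handled separately, since the largest possible value of its period $\ensuremath{p}_0$ is deduced from $\ensuremath{r}_0$ and $\ensuremath{E}$ as described above.
\VLongOnly{%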
Applying the formulas above recursively makes it possible to compute the length of the code words needed to represent all the periods and inter-block distances in the tree, for a known value $\tspanRepStar{\ensuremath{B}_0}$. Looking at the last occurrence of the main cycle $(\ensuremath{B}_0, \LL{\ensuremath{r}_0})$, we have \[\ensuremath{\tau} + (\ensuremath{r}_0-1)\ensuremath{p}_0 + \tspanRepStar{\ensuremath{B}_0} + \cume{o_{zz}} \leq t_{\text{end}}(\seq)\;,\] and hence \[\maxtspanRepStar{\ensuremath{B}_0} = t_{\text{end}}(\seq) - \cume{o_{zz}} - (\ensuremath{r}_0-1)\ensuremath{p}_0 - \ensuremath{\tau} \;,\] where $\cume{o_{zz}}$ denotes the accumulated corrections that apply to the event having the largest uncorrected timestamp. If interleaving is not allowed, that event is the right-most leaf node of the expansion tree, i.e.\ the last element in the occurrence list. Besides, if interleaving is not allowed, we also have $\tspanRepStar{\ensuremath{B}_0} \leq \ensuremath{p}_0$. On the other hand, if interleaving is allowed the event having the largest uncorrected timestamp is not necessarily the last one in the list of occurrences (see $\occsStar{\ensuremath{T}_6}$ in Fig.~\ref{fig:ex_tree1b} for instance). Since it depends on periods and inter-block distances within the block, which have not been specified at that point, we cannot determine which event has the largest timestamp. Hence, we compute $\cume{o_{i}}$ for all occurrences $o_i$ that correspond to the right-most child of a block and take the minimum (possibly a negative value) as $\cume{o_{zz}}$. To compute the periods and inter-block distances, we can use the actual value $\tspanRepStar{\ensuremath{B}_0}$, which we first need to transmit explicitly after the value of $\ensuremath{\tau}$, with a code word of length $\log\big( \maxtspanRepStar{\ensuremath{B}_0} + 1\big)$. Instead, we could use the upper bound on $\maxtspanRepStar{\ensuremath{B}_0}$, which we do not need to transmit. It is probably more economical to transmit the value explicitly.} We denote as $\ensuremath{D}$\label{sym:inDP} the collection of all the periods (except $\ensuremath{p}_0$) and inter-block distances in the tree\VLongOnly{ (as well as $\tspanRepStar{\ensuremath{B}_0}$, if necessary)}, which need to be transmitted to fully describe the pattern. \VLongOnly{The corresponding code length is \[\mathit{L}(\ensuremath{D}) = \sum_{v \in \ensuremath{D}} \mathit{L}(v)\;,\] where the code length of each element can be computed using the formulas presented above. }To put everything together, the code used to represent a pattern $\ensuremath{P} = (\ensuremath{T}, \ensuremath{\tau}, \ensuremath{E})$ has length \VLongOnly{\begin{align*} \mathit{L}(\ensuremath{P}) &= \mathit{L}((\ensuremath{T}, \ensuremath{\tau}, \ensuremath{E})) \\ &= \mathit{L}(A) + \mathit{L}(R) + \mathit{L}(\ensuremath{p}_{0}) + \mathit{L}(\ensuremath{D}) + \mathit{L}(\ensuremath{\tau}) + \mathit{L}(\ensuremath{E})\;. \end{align*}} \VShortOnly{\[\mathit{L}(\ensuremath{P}) = \mathit{L}(A) + \mathit{L}(R) + \mathit{L}(\ensuremath{p}_{0}) + \mathit{L}(\ensuremath{D}) + \mathit{L}(\ensuremath{\tau}) + \mathit{L}(\ensuremath{E})\;.\]} \VLongOnly{ \mpara{From simpler patterns to more complex ones.} Let us have a look at what happens to the encoding of a simple cycle when using this more complex encoding scheme to represent it. Consider a simple cycle $\ensuremath{C} = (\ensuremath{\alpha}, \ensuremath{r}, \ensuremath{p}, \ensuremath{\tau}, \ensuremath{E})$.
Using the more complex encoding, it can be represented as $\ensuremath{P} = (\ensuremath{T}, \ensuremath{\tau}, \ensuremath{E})$, where the cycle is represented using a more general pattern formalism $\ensuremath{T} = \BinfoRP{\ensuremath{r}}{\ensuremath{p}}\textcolor{darkgray}{\big(}{}\activity{\ensuremath{\alpha}}\textcolor{darkgray}{\big)}{}$. Both encodings are very similar, with $R = \LL{\ensuremath{r}}$, $\ensuremath{p}_{0} = \ensuremath{p}$ and $\ensuremath{D} = \lls\lle$, $A = \text{`( \ensuremath{\alpha} )'} $. The code word representing the cycle length, $\mathit{L}(\ensuremath{r})$, depends only on the frequency of occurrence of the event, which is fixed. The corrections accumulated for the first occurrence of the last repetition of the main cycle are equal to the sum of the corrections in $\ensuremath{E}$,\VLongOnly{ hence $\cume{o_{za}} = \sumel{\ensuremath{E}}$,} so that the lengths of the code words representing the cycle period and starting point also remain the same. The corrections are the same and encoded the same way under both encodings. The only difference comes from the different way to encode the event, whose code word is longer under the more complex encoding, to accommodate the additional symbols that make it possible to represent (nested) event sequences.\VLongOnly{ That is, for any event $\alpha$, its code length under the more complex pattern encoding $\mathit{L}_P(\ensuremath{\alpha})$ is larger than its code length under the simpler cycle encoding, $\mathit{L}_C(\ensuremath{\alpha})$, due to the overhead of having block delimiters.} Note that the actual value of $\ensuremath{\tau}$ does not impact the code length of a pattern. \VLongOnly{If we consider two cycles \[\ensuremath{C}_1 = (\ensuremath{\alpha}_1, \ensuremath{r}_1, \ensuremath{p}_1, \ensuremath{\tau}_1, \ensuremath{E}_1)\text{ and }\ensuremath{C}_2 = (\ensuremath{\alpha}_2, \ensuremath{r}_2, \ensuremath{p}_2, \ensuremath{\tau}_2, \ensuremath{E}_2)\] such that $\ensuremath{\tau}_1 \neq \ensuremath{\tau}_2$ but all other values are equal, then $\mathit{L}(\ensuremath{C}_1) = \mathit{L}(\ensuremath{C}_2)$. Simply put, translation does not affect the cost of a cycle or pattern.
}On the other hand, the values of the corrections\VLongOnly{, through $\cume{o_{za}}$,} impact the length of the code words representing the starting point and the main period.\VLongOnly{ For this reason, given two cycles with the same length and period but with different corrections (i.e.\ such that $\ensuremath{r}_1 = \ensuremath{r}_2$ and $\ensuremath{p}_1 = \ensuremath{p}_2$, but $\ensuremath{E}_1 \neq \ensuremath{E}_2$), the code words representing their respective periods and starting points will differ (i.e.\ we will have $\mathit{L}(\ensuremath{r}_1) = \mathit{L}(\ensuremath{r}_2)$ but $\mathit{L}(\ensuremath{p}_1) \neq \mathit{L}(\ensuremath{p}_2)$ and $\mathit{L}(\ensuremath{\tau}_1) \neq \mathit{L}(\ensuremath{\tau}_2)$).} } \VLongOnly{\section{Combining patterns and comparing costs}} \VShortOnly{\section{Algorithm for Mining Periodic Patterns that Compress} \label{sec:algo}} \label{sec:comb} Recall that for a given input sequence $\seq$, our goal is to find a collection of patterns $\ensuremath{\mathcal{C}}$ that minimises the cost \[\mathit{L}(\ensuremath{\mathcal{C}}, \seq) = \sum_{\ensuremath{P} \in \ensuremath{\mathcal{C}}} \mathit{L}(\ensuremath{P}) + \sum_{o \in \residual{\ensuremath{\mathcal{C}}, \seq}} \mathit{L}(o)\;.\] It is useful to compare the cost of different patterns, or sets of patterns, on a subset of the data, i.e.\ compare $\mathit{L}(\ensuremath{\mathcal{C}}', \seq')$ for different sets of patterns $\ensuremath{\mathcal{C}}'$ and some subsequence $\seq' \subseteq \seq$. In particular, we might compare the cost of a pattern $\ensuremath{P}$ to the cost of representing the same occurrences separately. This means comparing \[\mathit{L}(\{\ensuremath{P}\}, \cov{\ensuremath{P}}) = \mathit{L}(\ensuremath{P}) \quad\text{and}\quad \mathit{L}(\emptyset, \cov{\ensuremath{P}}) = \sum_{o \in \cov{\ensuremath{P}}} \mathit{L}(o)\;.\] If $\mathit{L}(\{\ensuremath{P}\}, \cov{\ensuremath{P}}) < \mathit{L}(\emptyset, \cov{\ensuremath{P}})$, we say that pattern $\ensuremath{P}$ is \emph{cost-effective}. In addition, we compare patterns in terms of their cost-per-occurrence ratio defined, for a pattern $\ensuremath{P}$, as \custmath{\custfrac{\mathit{L}(\ensuremath{P})}{\abs{\cov{\ensuremath{P}}}}}{,} and say that a pattern is more \emph{efficient} when this ratio is smaller.\VLongOnly{ Furthermore, in order to reduce the number of candidate patterns considered and to retain only the most promising ones, we use a procedure called $\algname{FilterCandidates}$ that takes as input a collection of patterns $\mathcal{K}$ together with some integer $k$ and returns only those patterns from $\mathcal{K}$ that are among the top-$k$ most efficient ones for some occurrence they cover.} \medskip A natural way to build patterns is to start with the simplest patterns, i.e.\ cycles over single events, and combine them together into more complex, possibly multi-level multi-event patterns. \VLongOnly{Therefore, we now look at how the cost of patterns relates to the cost of the building blocks they are constructed from. We start by looking at the cost of covering $k$ occurrences ($k \geq 3$) with a simple cycle as compared to representing them separately. In other words, we look in more detail at what it takes for a cycle to be cost-effective.
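}
These comparisons are straightforward to express in code; for instance (a minimal sketch, with names of our own choosing):
\begin{verbatim}
def total_cost(pattern_costs, residual_costs):
    # L(C, S): cost of the patterns plus cost of the residual occurrences
    return sum(pattern_costs) + sum(residual_costs)

def is_cost_effective(L_P, covered_occurrence_costs):
    # is encoding the cover with the pattern cheaper than residually?
    return L_P < sum(covered_occurrence_costs)

def efficiency(L_P, n_covered):
    # cost-per-occurrence ratio; smaller means more efficient
    return L_P / n_covered
\end{verbatim}
\VLongOnly{%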
\mpara{Simple cycles vs.\ residuals.} Assume we have a candidate cycle $C$ of length $k \geq 3$, covering $k$ occurrences of event $\alpha$, and we want to check whether this cycle is cost-effective, i.e.\ compare the cost of representing this $k$-subsequence with $C$ to the cost of representing it with individual occurrences \[\mathit{L}(\{C\}, \cov{C}) = \mathit{L}(C) \quad\text{and}\quad \mathit{L}(\emptyset, \cov{C}) = \sum_{o \in \cov{C}} \mathit{L}(o)\;.\] The cost of representing the individual occurrences separately is \[\mathit{L}(\emptyset, \cov{C}) = k \cdot (\mathit{L}(t) + \mathit{L}(\alpha)) = k \big( \log(\tspan{\seq}+1) -\log(\frac{\len{\seq[\alpha]}}{\len{\seq}})\big) \] and the cost for representing the same occurrences with cycle $C$ is \[\mathit{L}(C) = \mathit{L}(\alpha) + \beta + \mathit{L}(\ensuremath{r}) + \mathit{L}(\ensuremath{p}) + \mathit{L}(\ensuremath{\tau}) +\mathit{L}(\ensuremath{E})\;,\] where $\beta$ denotes the length of the code for one pair of block delimiters. The cost of corrections in the cycle is \[\mathit{L}(\ensuremath{E}) = 2(k-1) + \sum_{e \in \ensuremath{E}} \abs{e}\] and the code length of the period and starting point of a cycle satisfy, respectively, \[\mathit{L}(\ensuremath{p}) < \log(\frac{\tspan{\seq}+1}{k-1}) \quad \text{and} \quad \mathit{L}(\ensuremath{\tau}) < \mathit{L}(t),\] so that \[\mathit{L}(C) < \mathit{L}(\alpha) + \beta + \mathit{L}(\ensuremath{r}) + \log\big(\frac{\tspan{\seq}+1}{k-1}\big) + \mathit{L}(t) + 2k-2 + \sum_{e \in \ensuremath{E}} \abs{e}\;.\] If we let \begin{align*} W(k) =\,& (k-1) (\mathit{L}(t) + \mathit{L}(\alpha))- \beta - \mathit{L}(\ensuremath{r}) - \log\big(\frac{\tspan{\seq}+1}{k-1}\big) - 2k+2\\ =\,& (k-2) \log(\tspan{\seq}+1) + (k-1) \mathit{L}(\alpha) - \beta - \log(\len{\seq[\ensuremath{\alpha}]}) + \log(k-1) - 2k+2\;, \end{align*} we have \[\sum_{e \in \ensuremath{E}} \abs{e} < W(k) \implies \mathit{L}(\{C\}, \cov{C}) < \mathit{L}(\emptyset, \cov{C})\;.\] In other words, if the sum of the absolute shift corrections in a cycle $C$ of length $k$ is less than $W(k)$, then the cost of representing the occurrences with $C$ is smaller than the cost of representing them separately. Furthermore, we can state the following: \begin{lemma} \label{lemma:cycles} Given a sequence $\seq$, if $C$ is a cycle of length $k$ over event $\alpha$ with corrections $\ensuremath{E}$ satisfying $\sum_{e \in \ensuremath{E}} \abs{e} < W(k)$, and if extending $C$ to cover one further occurrence of event $\alpha$ does not increase the sum of the absolute corrections by more than $\log(\tspan{\seq}+1) - 2$, then the cost of representing the $k+1$ occurrences with the extended cycle is smaller than the cost of representing them separately, i.e.\ the extended cycle remains cost-effective. \end{lemma} \begin{proof} Assume we have a cycle $C$ with corrections $\ensuremath{E}$, satisfying $\sum_{e \in \ensuremath{E}} \abs{e} < W(k)$. Let $C'$ be the cycle obtained by extending $C$ to cover one further occurrence, i.e.\ $C'$ is a cycle of length $k+1$, and let $\ensuremath{E}'$ be the associated corrections. 
Since \[W(k+1) - W(k) = \log(\tspan{\seq}+1) + \mathit{L}(\alpha) + \log(k/(k-1)) - 2 > \log(\tspan{\seq}+1) - 2\;,\] we have \begin{align*} & \sum_{e \in \ensuremath{E}'} \abs{e} - \sum_{e \in \ensuremath{E}} \abs{e} \leq \log(\tspan{\seq}+1) - 2 \\ \implies & \sum_{e \in \ensuremath{E}'} \abs{e} - \sum_{e \in \ensuremath{E}} \abs{e} < W(k+1) - W(k) \\ \implies & \sum_{e \in \ensuremath{E}'} \abs{e} < W(k+1) - W(k) + \sum_{e \in \ensuremath{E}} \abs{e} \\ \implies & \sum_{e \in \ensuremath{E}'} \abs{e} < W(k+1) \\ \implies & \mathit{L}(\{C'\}, \cov{C'}) < \mathit{L}(\emptyset, \cov{C'}) \;. \end{align*} \end{proof} For a simple criterion to decide whether to extend a cycle we compare the magnitude of the new correction to $\log(\tspan{\seq}+1) - 2$.} \VLongOnly{\mpara{Vertical combination: Nesting cycles.} First, let us consider a practical example. Imagine that the following sequence is part of the input: \begin{align*} \seqex{2} = \langle & (2,a), (5,a), (7,a), (8,a), (13,a), (15,a), \\ & (20,a), (21,a), (26,a), (29,a), (32,a), (33,a) \rangle \;. \end{align*} We can represent this sequence with simple cycles, using three patterns over pattern tree $\ensuremath{T}_1$ from Fig.~\ref{fig:ex_tree1a} with starting points $2$, $13$, and $26$, respectively. Using this notation, the first option is to represent the sequence with the collection \begin{align*} \ensuremath{\mathcal{C}}_1 &= \{\ensuremath{P}_{1,1}, \ensuremath{P}_{1,2}, \ensuremath{P}_{1,3} \}\\ &= \{ (\ensuremath{T}_1, 2, \LL{1,0,-1}), (\ensuremath{T}_1, 13, \LL{0,3,-1}), (\ensuremath{T}_1, 26, \LL{1,1,-1})\}\;. \end{align*} Alternatively, we can represent the sequence using four patterns over pattern tree $\ensuremath{T}_2$ from Fig.~\ref{fig:ex_tree1a} with starting points $2$, $5$, $7$ and $8$, respectively: \begin{align*} \ensuremath{\mathcal{C}}_2 &= \{\ensuremath{P}_{2,1}, \ensuremath{P}_{2,2}, \ensuremath{P}_{2,3}, \ensuremath{P}_{2,4} \} \\ &= \{(\ensuremath{T}_2, 2, \LL{-2,0}), (\ensuremath{T}_2, 5, \LL{-3,1}), \\ &\phantom{= \{} (\ensuremath{T}_2, 7, \LL{0,-1}), (\ensuremath{T}_2, 8, \LL{0,-1})\}\;. \end{align*} But it can also be represented as a single pattern containing two nested cycles, namely as patterns over pattern trees $\ensuremath{T}_3$ or $\ensuremath{T}_4$ from Fig.~\ref{fig:ex_tree1a}, respectively, depending whether the inner cycle is $\ensuremath{T}_1$ or $\ensuremath{T}_2$. So, we can represent the sequence with a single pattern, with either \begin{align*} \ensuremath{\mathcal{C}}_3 &= \{\ensuremath{P}_{3,1}\} = \{(\ensuremath{T}_3, 2, \LL{1, 0, -1, -2, 0, 3, -1, 0, 1, 1, -1})\},\text{ or }\\ \ensuremath{\mathcal{C}}_4 &= \{\ensuremath{P}_{4,1}\} = \{(\ensuremath{T}_4, 2, \LL{-2, 0, 1, -3, 1, 0, 0, -1, -1, 0, -1})\}\;. \end{align*} Note that with this type of pattern combining two nested cycles over the same event, the list of corrections for the combined pattern is a simple combination of corrections for the basic cycles: \[\ensuremath{E}_{3,1} = \ensuremath{E}_{1,1} \oplus \LL{\ensuremath{E}_{2,1}[1]} \oplus \ensuremath{E}_{1,2} \oplus \LL{\ensuremath{E}_{2,1}[2]} \oplus \ensuremath{E}_{1,3}\] where $\ensuremath{E}_{x,y}$ is the list of shift corrections for pattern $\ensuremath{P}_{x,y}$ and $\ensuremath{E}_{x,y}[i]$ is the correction at position $i$ in that list. Let us look at the code lengths for these different patterns. 
For this example, we have \[ \begin{array}{r@{}lr@{}lr@{}l} t_{\text{start}}(\seqex{2})&=0,&t_{\text{end}}(\seqex{2})&=34,&\tspan{\seqex{2}}&=34,\\ \multicolumn{2}{c}{\text{ and }}&\len{\seqex[a]{2}}&= 12\;.\\ \end{array} \] We list the code lengths for the different elements in Tables~\ref{tab:ex_clC1}--\ref{tab:ex_clC3-4}. In Fig.~\ref{fig:ex_timelinesP3-P4} we provide a timeline schema of the occurrences of $\ensuremath{P}_{3,1}$ as well as of the occurrences of $(\ensuremath{T}_3, 0, \textbf{0})$ and $(\ensuremath{T}_4, 0, \textbf{0})$, i.e.\ the occurrences of pattern trees $\ensuremath{T}_3$ and $\ensuremath{T}_4$ with starting point $0$ and no corrections. \begin{figure}[tbp] \centering \setlength{\ndhlen}{1.cm} \setlength{\ndwlen}{.8cm} \newcommand{\treeMedH}[5]{ \node[main node] (b1#1) at (#2) {}; \node[info node] (l1rp#1) [above of=b1#1, xshift=-.5cm, node distance=.3\ndhlen] {$\ensuremath{r}=#5, \ensuremath{p}=7$}; \node[info node] (l1t#1) [above of=l1rp#1, node distance=1.2em] {$\ensuremath{\tau}=#4$}; \node[leaf node] (b2#1) [below of=b1#1] {}; \node[pterm node, xshift=-.2cm] (p2#1) [below of=b2#1] {$#3$}; \path (b1#1) edge (b2#1); } \newcommand{\treeMedHP}[5]{ \node[main node] (b1#1) at (#2) {}; \node[info node] (l1rp#1) [above of=b1#1, xshift=-.5cm, node distance=.3\ndhlen] {$\ensuremath{r}=#5, \ensuremath{p}=7$}; \node[info node] (l1t#1) [above of=l1rp#1, node distance=1.2em] {$\ensuremath{\tau}=#4$}; \node[main node] (b2#1) [below of=b1#1] {}; \node[info node] (l2r#1) [right of=b2#1, node distance=.8\ndwlen] {$\ensuremath{r}=3$}; \node[info node] (l2p#1) [below of=l2r#1, node distance=1.2em] {$\ensuremath{p}=2$}; \node[leaf node] (b3#1) [below of=b2#1] {}; \node[pterm node, xshift=-.2cm] (p2#1) [below of=b3#1] {$#3$}; \path (b1#1) edge (b2#1) (b2#1) edge (b3#1); } \newcommand{\treeMedCH}[3]{ \node[main node] (b2#1) at (#2) {}; \node[info node] (l1rp#1) [above of=b2#1, xshift=-.5cm, node distance=.3\ndhlen] {$\ensuremath{r}=5, \ensuremath{p}=7$}; \node[info node] (l1t#1) [above of=l1rp#1, node distance=1.2em] {$\ensuremath{\tau}=#3$}; \node[leaf node] (b22#1) [below of=b2#1] {}; \node[leaf node] (b21#1) [left of=b22#1, node distance=1.5\ndwlen] {}; \node[main node] (b23#1) [right of=b22#1, node distance=1.5\ndwlen] {}; \node[leaf node] (b231#1) [below of=b23#1] {}; \node[info node] (l2r#1) [right of=b23#1, node distance=.8\ndwlen] {$\ensuremath{r}=3$}; \node[info node] (l2p#1) [below of=l2r#1, node distance=1.2em] {$\ensuremath{p}=2$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$a$}; \node[pterm node, xshift=-.2cm] (p23#1) [below of=b21#1] {$b$}; \node[pterm node, xshift=-.2cm] (p211#1) [below of=b231#1] {$b$}; \path (b2#1) edge (b21#1) (b2#1) edge (b22#1) (b2#1) edge (b23#1) (b23#1) edge (b231#1); \path[dotted, thin, ->, bend right=20, color=darkgray] (b21#1) edge node[below] {$d=2$} (b22#1) (b22#1) edge node[below] {$d=1$} (b23#1); } \newcommand{\treeMedV}[3]{ \node[ghost node] (b0#1) at (#2) {}; \node[main node] (b2#1) [below of=b0#1] {}; \node[ghost node] (b2x#1) [below of=b2#1] {}; \node[info node] (l2rp#1) [above of=b2#1, xshift=-.5cm, node distance=.3\ndhlen] {$\ensuremath{r}=3, \ensuremath{p}=2$}; \node[info node] (l2t#1) [above of=l2rp#1, node distance=1.2em] {$\ensuremath{\tau}=#3$}; \node[leaf node] (b21#1) [left of=b2x#1, node distance=\ndwlen] {}; \node[leaf node] (b22#1) [right of=b2x#1, node distance=\ndwlen] {}; \node[pterm node, xshift=-.2cm] (p21#1) [below of=b21#1] {$c$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$e$}; \path 
(b2#1) edge (b21#1) (b2#1) edge (b22#1); \path[dotted, thin, ->, bend right=20, color=darkgray] (b21#1) edge node[below] {$d=1$} (b22#1); } \newcommand{\treeMedCV}[2]{ \node[main node] (b0#1) at (#2) {}; \node[info node] (l0rp#1) [above of=b0#1, xshift=-.5cm, node distance=.3\ndhlen] {$\ensuremath{r}=12, \ensuremath{p}=7$}; \node[info node] (l0t#1) [above of=l0rp#1, node distance=1.2em] {$\ensuremath{\tau}=3$}; \node[main node] (b2#1) [below of=b0#1] {}; \node[ghost node] (b2x#1) [below of=b2#1] {}; \node[info node] (l2p#1) [right of=b2#1, node distance=\ndwlen] {$\ensuremath{p}=2$}; \node[info node] (l2r#1) [above of=l2p#1, node distance=1.2em] {$\ensuremath{r}=3$}; \node[leaf node] (b21#1) [left of=b2x#1, node distance=\ndwlen] {}; \node[leaf node] (b22#1) [right of=b2x#1, node distance=\ndwlen] {}; \node[pterm node, xshift=-.2cm] (p21#1) [below of=b21#1] {$c$}; \node[pterm node, xshift=-.2cm] (p22#1) [below of=b22#1] {$e$}; \path (b0#1) edge (b2#1) (b2#1) edge (b21#1) (b2#1) edge (b22#1); \path[dotted, thin, ->, bend right=20, color=darkgray] (b21#1) edge node[below] {$d=1$} (b22#1); } \setlength{\ndhlen}{1.cm} \setlength{\ndwlen}{.8cm} \begin{tikzpicture}[-,auto,node distance=\ndhlen, thick] \node[anchor=west] at (-\ndwlen,1.5\ndhlen) {a) $\algname{GrowHorizontally}$:}; \treeMedH{h}{0,0}{b}{2}{6} \treeMedH{f}{2.6\ndwlen,0}{a}{4}{5} \treeMedHP{g}{5.2\ndwlen,0}{b}{5}{5} \node at (7.\ndwlen,-.5\ndhlen) {$\longrightarrow$}; \treeMedCH{y}{9.5\ndwlen,0}{2} \node[anchor=west] at (-\ndwlen,-3\ndhlen) {b) $\algname{GrowVertically}$:}; \treeMedV{a}{0,-3.5\ndhlen}{3} \treeMedV{b}{2.6\ndwlen,-3.5\ndhlen}{12} \node at (4.3\ndwlen,-5\ndhlen) {$\dots$}; \treeMedV{d}{6\ndwlen,-3.5\ndhlen}{102} \node at (7.7\ndwlen,-5\ndhlen) {$\longrightarrow$}; \treeMedCV{x}{9.5\ndwlen,-3.5\ndhlen} \end{tikzpicture} \caption{Examples of growing patterns through combinations.} \label{fig:ex_tree_grow} \end{figure} \medskip Now, let us turn to the general case.} Assume that we have a pattern tree $\ensuremath{T}_I$ which occurs multiple times in the event sequence. In particular, assume that it occurs at starting points $\ensuremath{\tau}_{1}$, $\ensuremath{\tau}_{2}$, $\dots$, $\ensuremath{\tau}_{r_J}$ \VLongOnly{(where the starting points are ordered) }and that this sequence of starting points itself can be represented as a cycle of length $r_J$ and period $p_J$. \VLongOnly{In other words, if we denote as $\alpha$ the left-most event of $\ensuremath{T}_I$, i.e.\ the event associated to the starting point of $\ensuremath{T}_I$, the sequence consisting of the starting points of the different occurrences of $\ensuremath{T}_I$ can be represented by a pattern $(\ensuremath{T}_J, \ensuremath{\tau}_{1}, \ensuremath{E}_J)$ where $\ensuremath{T}_J = \BinfoRP{r_J}{p_J}\textcolor{darkgray}{\big(}{}\activity{\alpha}\textcolor{darkgray}{\big)}{}$ is a cycle of length $r_J$ and period $p_J$ over event $\alpha$, with shift corrections \[\ensuremath{E}_J = \LL{(\ensuremath{\tau}_{i}-\ensuremath{\tau}_{i-1})-p_J \text{ for } i \in [2,r_J]}\;.\] }In such a case, the occurrences of $\ensuremath{T}_I$ might be combined together and represented as a nested pattern tree\VLongOnly{ $\ensuremath{T}_N = \BinfoRP{r_J}{p_J}\textcolor{darkgray}{\big(}{}\activity{\ensuremath{T}_I}\textcolor{darkgray}{\big)}{}$}. \VLongOnly{We refer to such a combination as \emph{vertical combination}, since it produces patterns of greater depth than the original ones. 
}$\algname{GrowVertically}$ is the procedure which takes as input a collection $\ensuremath{\mathcal{C}}_I$ of patterns over a tree $\ensuremath{T}_I$\VLongOnly{, i.e.\ $\ensuremath{\mathcal{C}}_I = \{(\ensuremath{T}_I, \ensuremath{\tau}_{1}, \ensuremath{E}_{I,1}), \dots (\ensuremath{T}_I, \ensuremath{\tau}_{r_J}, \ensuremath{E}_{I,r_J})\}$} and returns the nested pattern\VLongOnly{, covering the same timestamp--event pairs,} obtained by combining them together as depicted in Fig.~\ref{fig:ex_tree_grow}(b).\VLongOnly{ This situation is illustrated in Fig.~\ref{fig:ex_combineV}. \begin{lemma} \label{lemma:vertical} Let $\ensuremath{\mathcal{C}}_I = \{(\ensuremath{T}_I, \ensuremath{\tau}_{1}, \ensuremath{E}_{I,1}), \dots (\ensuremath{T}_I, \ensuremath{\tau}_{r_J}, \ensuremath{E}_{I,r_J})\}$ be a collection of patterns consisting of $r_J$ occurrences of the same pattern tree $\ensuremath{T}_I$ and $\ensuremath{P}_N = \algname{GrowVertically}(\ensuremath{\mathcal{C}}_I)$ be the nested pattern obtained by combining the patterns in $\ensuremath{\mathcal{C}}_I$. If the cycle $\ensuremath{P}_J$ over the starting points of the patterns in $\ensuremath{\mathcal{C}}_I$ satisfies \[\mathit{L}(\ensuremath{P}_J) < (r_J-1) \cdot \mathit{L}((\ensuremath{T}_I, \ensuremath{\tau}_{1}, \lls\lle)) \;,\] then \[\mathit{L}(\{\ensuremath{P}_N\}, \cov{\ensuremath{\mathcal{C}}_I}) < \mathit{L}(\ensuremath{\mathcal{C}}_I, \cov{\ensuremath{\mathcal{C}}_I})\;.\] \end{lemma} \begin{proof} The code length of the event sequence in $\ensuremath{T}_N$, i.e.\ $A_N = \text{`(}\ensuremath{T}_I\text{)'}$ equals the code length to encode the event sequence in $\ensuremath{T}_I$ plus the code length for one pair of block delimiters and satisfies \[\mathit{L}(A_N) < \mathit{L}(A_{I,r_J}) + \mathit{L}(A_J).\] Once nested, the time spans in $T$ can only become more constrained, so that $\mathit{L}(\ensuremath{D}_N) \leq \mathit{L}(\ensuremath{D}_{I,r_J})$. 
The shift corrections for the nested pattern can be written as \[\ensuremath{E}_{N} = \ensuremath{E}_{I,1} \oplus \LL{\ensuremath{E}_{J}[1]} \oplus \ensuremath{E}_{I,2} \oplus \LL{\ensuremath{E}_{J}[2]} \dots \LL{\ensuremath{E}_{J}[r_J-1]} \oplus \ensuremath{E}_{I,r_J},\] so that \[\mathit{L}(\ensuremath{E}_N) = \mathit{L}(\ensuremath{E}_J) + \sum_{i \in [1,r_J]} \mathit{L}(\ensuremath{E}_{I,i})\;.\] For the remaining elements, we have \begin{align*} \mathit{L}(R_N) &= \mathit{L}(R_{I,r_J}) + \mathit{L}(R_J) \\ \mathit{L}(\ensuremath{p}_{0N}) &= \mathit{L}(\ensuremath{p}_{0J}) \\ \mathit{L}(\ensuremath{\tau}_N) &= \mathit{L}(\ensuremath{\tau}_J) \end{align*} Hence, the following holds for the code length of the nested pattern $\ensuremath{P}_N$ when compared to the code length for the inner patterns $\ensuremath{P}_{I,i}$ and the outer pattern $\ensuremath{P}_J$: \[\mathit{L}(\ensuremath{P}_N) < \mathit{L}(\ensuremath{P}_J) + \mathit{L}(A_{I,r_J}) + \mathit{L}(R_{I,r_J}) + \mathit{L}(\ensuremath{D}_{I,r_J}) + \sum_{i \in [1,r_J]} \mathit{L}(\ensuremath{E}_{I,i})\;.\] We can then compare the code length of the outer pattern $\ensuremath{P}_J$ to the code length of the structure of all but one of the inner patterns, that is \begin{align*} & \mathit{L}(\ensuremath{P}_J) < (r_J-1) \cdot \mathit{L}((\ensuremath{T}_I, \ensuremath{\tau}_1, \lls\lle)) \\ \implies & \mathit{L}(\ensuremath{P}_J) + \mathit{L}(A_{I,r_J}) + \mathit{L}(R_{I,r_J}) + \mathit{L}(\ensuremath{D}_{I,r_J}) + \sum_{i \in [1,r_J]} \mathit{L}(\ensuremath{E}_{I,i}) \\ & < (r_J-1) \cdot \mathit{L}((\ensuremath{T}_I, \ensuremath{\tau}_1, \lls\lle)) + \mathit{L}(A_{I,r_J}) + \mathit{L}(R_{I,r_J}) + \mathit{L}(\ensuremath{D}_{I,r_J}) + \sum_{i \in [1,r_J]} \mathit{L}(\ensuremath{E}_{I,i}) \\ \implies & \mathit{L}(\ensuremath{P}_N) = \mathit{L}(\{\ensuremath{P}_N\}, \cov{\ensuremath{\mathcal{C}}_I}) < \sum_{i \in [1,r_J]} \mathit{L}((\ensuremath{T}_{I}, \ensuremath{\tau}_{i}, \ensuremath{E}_{I,i})) = \mathit{L}(\ensuremath{\mathcal{C}}_I, \cov{\ensuremath{\mathcal{C}}_I})\;. \end{align*} \end{proof} \mpara{Horizontal combination: Concatenating cycles.} Again, let us first consider a practical example. Imagine that the following sequence is part of the input: \begin{align*} \seqex{3} = \langle & (2,b), (5,a), (7,c), (13,b), (18,a), (21,c), \\ & (26,b), (30,a), (31,c) \rangle \;. \end{align*} We can represent this sequence with single cycles of length $3$ and period $13$, over events $b$, $a$, and $c$, with starting points $2$, $5$, and $7$, respectively. The cycle over $a$ corresponds to pattern tree $\ensuremath{T}_2$ from Fig.~\ref{fig:ex_tree1a}; the other two cycles correspond to similar pattern trees but over events $b$ and $c$, so we denote them respectively as $\ensuremath{T}_{2b}$ and $\ensuremath{T}_{2c}$. This corresponds to the following collection: \begin{align*} \ensuremath{\mathcal{C}}_5 &= \{\ensuremath{P}_{5,1}, \ensuremath{P}_{5,2}, \ensuremath{P}_{5,3}\} \\ &= \{ (\ensuremath{T}_{2b}, 2, \LL{-2,0}), (\ensuremath{T}_2, 5, \LL{0,-1}), (\ensuremath{T}_{2c}, 7, \LL{1,-3})\}\;. \end{align*} We can also use a more complex pattern tree, concatenating the three events. This corresponds to using pattern tree $\ensuremath{T}_5$ from Fig.~\ref{fig:ex_tree1b}: \begin{align*} \ensuremath{\mathcal{C}}_6 &= \{\ensuremath{P}_{6,1}\} \\ &= \{(\ensuremath{T}_5, 2, \LL{0, 1, -2, 2, 2, 0, 1, 0})\}\;. \end{align*} Let us look at the code lengths for these different patterns.
For this example, we have \[ \begin{array}{r@{}lr@{}lr@{}l} t_{\text{start}}(\seqex{3})&=0,&t_{\text{end}}(\seqex{3})&=34,&\tspan{\seqex{3}}&=34,\\ \text{ and } & \multicolumn{4}{r}{\len{\seqex[a]{3}} =\len{\seqex[b]{3}}=\len{\seqex[c]{3}}}&= 3\;.\\ \end{array} \] We list the code lengths for the different elements in Tables~\ref{tab:ex_clC5}--\ref{tab:ex_clC6}. In Fig.~\ref{fig:ex_timelinesP5} we provide a timeline schema of the occurrences of $\ensuremath{P}_{6,1}$ as well as of the occurrences of $(\ensuremath{T}_5, 0, \textbf{0})$. \medskip Given}\VShortOnly{ On the other hand, given} a collection of patterns that occur close to one another and share similar periods, we might want to combine them into a concatenated pattern by merging the roots of their respective trees. \VLongOnly{We refer to such a combination as \emph{horizontal combination}, since it produces patterns of greater width than the original ones. To understand what this means in terms of cost, we focus on the basic case where we have two patterns $\ensuremath{P}_I$ and $\ensuremath{P}_J$ such that $\ensuremath{T}_I = \BinfoRP{r}{p_I}\textcolor{darkgray}{\big(}{}\activity{T}\textcolor{darkgray}{\big)}{}$ and $\ensuremath{T}_J = \BinfoRP{r}{p_J}\textcolor{darkgray}{\big(}{}\activity{T'}\textcolor{darkgray}{\big)}{}$, i.e.\ both patterns have top-level blocks of the same length $r$, and with starting points $\ensuremath{\tau}_I \leq \ensuremath{\tau}_J$. We compare the cost of these two patterns to the code length for the pattern that concatenates them, that is, pattern $\ensuremath{P}_N$ with $\ensuremath{T}_N = \BinfoRP{r}{p_N}\textcolor{darkgray}{\big(}{}\activity{T} \BinfoD{d_N} \activity{T'}\textcolor{darkgray}{\big)}{}$ covering the same event occurrences in the original sequence. $\ell$ and $\ell'$ denote the number of occurrences in one repetition of the top-level block of patterns $\ensuremath{P}_I$ and $\ensuremath{P}_J$ respectively, that is $\abs{\occsStar{T}} = \ell$ and $\abs{\occsStar{T'}} = \ell'$. This situation is illustrated in Fig.~\ref{fig:ex_combineH}. Since the shift corrections are applied relatively within a block, concatenating $T$ and $T'$ only impacts the first event occurrence of each repetition of the top-level block in either pattern, i.e.\ the left-most leaf in $T$ and in $T'$. We must look at the timestamps of occurrences of the first event in $T$ and in $T'$; let us denote the timestamp of the $i^{th}$ occurrence of these events as $t(o_{i,1})$ and $t(o'_{i,1})$ respectively. Looking at the positions at which these occurrences are produced by the different patterns, we have \begin{align*} \ensuremath{E}_I(o_{i,1}) &= \ensuremath{E}_I[(i-1)\ell] & \ensuremath{E}_J(o'_{i,1}) &= \ensuremath{E}_J[(i-1)\ell'] \\ \ensuremath{E}_N(o_{i,1}) &= \ensuremath{E}_N[(i-1)(\ell+\ell')] & \ensuremath{E}_N(o'_{i,1}) &= \ensuremath{E}_N[i\ell+(i-1)\ell']\;.
\end{align*} Per $(\ensuremath{T}_I, \ensuremath{\tau}_I, \ensuremath{E}_I)$ we have \begin{align} t(o_{1,1}) &= \ensuremath{\tau}_I\;, \label{a1x} \\ t(o_{2,1}) &= \ensuremath{\tau}_I + p_I + \ensuremath{E}_I(o_{2,1})\;, \label{a2x}\\ t(o_{3,1}) &= \ensuremath{\tau}_I + 2 p_I + \ensuremath{E}_I(o_{3,1}) + \ensuremath{E}_I(o_{2,1})\;, \label{a3x} \end{align} and per $(\ensuremath{T}_N, \ensuremath{\tau}_N, \ensuremath{E}_N)$ \begin{align} t(o_{1,1}) &= \ensuremath{\tau}_N\;, \label{a1z}\\ t(o_{2,1}) &= \ensuremath{\tau}_N + p_N + \ensuremath{E}_N(o_{2,1})\;, \label{a2z}\\ t(o_{3,1}) &= \ensuremath{\tau}_N + 2 p_N + \ensuremath{E}_N(o_{3,1}) + \ensuremath{E}_N(o_{2,1})\;. \label{a3z} \end{align} Hence, from eq.~\ref{a1x} and eq.~\ref{a1z} we get \[ \ensuremath{\tau}_N = \ensuremath{\tau}_I. \] Generalising from eq.~\ref{a2x} and eq.~\ref{a2z}, we get \[ \ensuremath{E}_N(o_{i,1}) = (p_I - p_N) + \ensuremath{E}_I(o_{i,1}).\] Therefore, we let $p_N = p_I$ so that $\ensuremath{E}_N(o_{i,j}) = \ensuremath{E}_I(o_{i,j})$ for all event occurrences of $\ensuremath{P}_I$. Furthermore, we have per $(\ensuremath{T}_J, \ensuremath{\tau}_J, \ensuremath{E}_J)$ \begin{align} t(o'_{1,1}) &= \ensuremath{\tau}_J\;, \label{b1y}\\ t(o'_{2,1}) &= \ensuremath{\tau}_J + p_J + \ensuremath{E}_J(o'_{2,1})\;, \label{b2y}\\ t(o'_{3,1}) &= \ensuremath{\tau}_J + 2 p_J + \ensuremath{E}_J(o'_{3,1}) + \ensuremath{E}_J(o'_{2,1})\;, \label{b3y} \end{align} and per $(\ensuremath{T}_N, \ensuremath{\tau}_N, \ensuremath{E}_N)$ \begin{align} t(o'_{1,1}) &= \ensuremath{\tau}_N + d_N + \ensuremath{E}_N(o'_{1,1})\;, \label{b1z} \\ t(o'_{2,1}) &= t(o_{2,1}) + d_N + \ensuremath{E}_N(o'_{2,1})\;, \label{b2z}\\ t(o'_{3,1}) &= t(o_{3,1}) + d_N + \ensuremath{E}_N(o'_{3,1})\;. \label{b3z} \end{align} Hence, from eq.~\ref{b1y} and eq.~\ref{b1z} we get \[ d_N = (\ensuremath{\tau}_J - \ensuremath{\tau}_I) - \ensuremath{E}_N(o'_{1,1})\;,\] and therefore we let $d_N = (\ensuremath{\tau}_J - \ensuremath{\tau}_I)$, i.e.\ we set $\ensuremath{E}_N(o'_{1,1}) = 0$.
From eq.~\ref{b2y} and eq.~\ref{b2z} we get \begin{align*} \ensuremath{\tau}_J +& p_J + \ensuremath{E}_J(o'_{2,1}) \\ &= \ensuremath{\tau}_N + p_N + \ensuremath{E}_N(o_{2,1}) + d_N + \ensuremath{E}_N(o'_{2,1})\;,\\ &= \ensuremath{\tau}_N + p_N + \ensuremath{E}_N(o_{2,1}) + (\ensuremath{\tau}_J - \ensuremath{\tau}_I) - \ensuremath{E}_N(o'_{1,1}) + \ensuremath{E}_N(o'_{2,1})\;, \end{align*} and hence \[(p_J - p_N) + \ensuremath{E}_J(o'_{2,1}) = \ensuremath{E}_N(o'_{2,1}) - \ensuremath{E}_N(o'_{1,1}) + \ensuremath{E}_N(o_{2,1}) \;.\] More generally, we have \[(p_J - p_N) + \ensuremath{E}_J(o'_{i,1}) = \ensuremath{E}_N(o'_{i,1}) - \ensuremath{E}_N(o'_{(i-1),1}) + \ensuremath{E}_N(o_{i,1}) \;,\] and using $p_N = p_I$ and $\ensuremath{E}_N(o_{i,1}) = \ensuremath{E}_I(o_{i,1})$: \[ \ensuremath{E}_N(o'_{i,1}) = (p_J - p_I) + \ensuremath{E}_J(o'_{i,1}) - \ensuremath{E}_I(o_{i,1}) + \ensuremath{E}_N(o'_{(i-1),1})\;.\] In the best case, the patterns are well aligned, in the sense that $\ensuremath{E}_I(o_{i,1}) = \ensuremath{E}_J(o'_{i,1})$; then, summing up the shift corrections above, which are the only ones that differ between the old patterns and the new one, we get \[ \sum_{i \in [2,r]} \abs{\ensuremath{E}_N(o'_{i,1})} = \frac{r(r-1)}{2}\abs{p_J - p_I}\;.\] We use this as a filter for patterns to concatenate, requiring that \[\sum_{i \in [2,r]} \abs{\ensuremath{E}_N(o'_{i,1})} \leq \sum_{i \in [2,r]} \abs{\ensuremath{E}_J(o'_{i,1})}\;,\] i.e.\ \[\abs{p_J - p_I} \leq \frac{2}{r(r-1)} \sum_{i \in [2,r]} \abs{\ensuremath{E}_J(o'_{i,1})} \;.\] This can be interpreted as requiring that the difference in period between the two concatenated patterns does not produce shift corrections larger than in the original patterns. } $\algname{GrowHorizontally}$ is the procedure which takes as input a collection of patterns and returns the pattern obtained by concatenating them together in order of increasing starting points as depicted in Fig.~\ref{fig:ex_tree_grow}(a).\VLongOnly{ More specifically, let the input collection be $\{ \ensuremath{P}_i \}$, where each pattern is a cycle of length $\ensuremath{r}_i$ and period $\ensuremath{p}_i$ over a pattern tree $\ensuremath{T}_i$ (possibly a single event) with starting point $\ensuremath{\tau}_i$, and assume that the patterns in the collection are indexed in order of increasing starting points, i.e.\ in the order in which they occur in the data. The resulting pattern tree $\ensuremath{T}_N$ is a cycle of length $\ensuremath{r}_N = \min(\ensuremath{r}_i)$ and period $\ensuremath{p}_N=\ensuremath{p}_1$ over the concatenation of $\ensuremath{T}_1, \ensuremath{T}_2, \dots$, where the distance between $\ensuremath{T}_{i-1}$ and $\ensuremath{T}_{i}$ is set to $d_{i}= \ensuremath{\tau}_i - \ensuremath{\tau}_{i-1}$, and with $\ensuremath{\tau}_N = \ensuremath{\tau}_1$. } \VShortOnly{ \medskip} \VLongOnly{\section{Algorithm for Mining Periodic Patterns that Compress} \label{sec:algo}} \VLongOnly{We are now ready to present our main algorithm for mining a collection of periodic patterns that compresses the input sequence.} As outlined in Algorithm~\ref{alg:mine_patterns}, our proposed algorithm consists of three stages: \textit{(i)} extracting cycles (line~\ref{alg:main-cycles}), \textit{(ii)} building tree patterns from cycles (lines~\ref{alg:main-comb-start}--\ref{alg:main-comb-end}) and \textit{(iii)} selecting the final pattern collection (line~\ref{alg:main-select}). We now present each stage in turn\VShortOnly{ at a high-level}.
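To make the control flow concrete, here is a minimal Python sketch of this three-stage loop, mirroring Algorithm~\ref{alg:mine_patterns}. The stage procedures are passed in as parameters: they stand for the routines presented below and are assumptions of this sketch, not the reference implementation.
\begin{verbatim}
def mine_patterns(seq, k, extract_cycles, combine_vertically,
                  combine_horizontally, greedy_cover):
    # Stage (i): initial cycle candidates, I.
    new_v = extract_cycles(seq, k)
    new_h = list(new_v)
    cands = []                              # accumulated candidates, C
    # Stage (ii): combination rounds, while new candidates appear.
    while new_h or new_v:
        grown_v = combine_vertically(new_h, cands, seq, k)
        grown_h = combine_horizontally(new_v, cands, seq, k)
        for p in new_h + new_v:             # C <- C u H u V (no duplicates)
            if p not in cands:
                cands.append(p)
        new_v, new_h = grown_v, grown_h
    # Stage (iii): greedy selection of the final collection, P.
    return greedy_cover(cands, seq)
\end{verbatim}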
\begin{algorithm}[tb] \caption{Mining periodic patterns that compress.} \label{alg:mine_patterns} \begin{algorithmic}[1] \Require A multi-event sequence $\seq$, a number $k$ of top candidates to keep \Ensure A collection of patterns $\mathcal{P}$ \State $\mathcal{I} \gets \algname{ExtractCycles}(\seq, k)$ \label{alg:main-cycles} \State $\mathcal{C} \gets \emptyset; \mathcal{V} \gets \mathcal{I}; \mathcal{H} \gets \mathcal{I}$ \label{alg:main-comb-start} \While{$\mathcal{H} \neq \emptyset$ \textbf{or} $\mathcal{V} \neq \emptyset$} \label{alg:main-while} \State $\mathcal{V}' \gets \algname{CombineVertically}(\mathcal{H}, \mathcal{C}, \seq, k)$ \label{alg:main-combV} \State $\mathcal{H}' \gets \algname{CombineHorizontally}(\mathcal{V}, \mathcal{C}, \seq, k)$ \label{alg:main-combH} \State $\mathcal{C} \gets \mathcal{C} \cup \mathcal{H} \cup \mathcal{V}; \mathcal{V} \gets \mathcal{V}'; \mathcal{H} \gets \mathcal{H}'$ \label{alg:main-comb-end} \EndWhile \State $\mathcal{P} \gets \algname{GreedyCover}(\mathcal{C}, \seq)$ \label{alg:main-select} \State \textbf{return} $\mathcal{P}$ \end{algorithmic} \end{algorithm} \mpara{Extracting cycles.} \VLongOnly{The first stage of the algorithm consists in extracting cycles (line~\ref{alg:main-cycles}). The algorithm used for the initial mining of cycles is given as Algorithm~\ref{alg:cycles}.} Considering each event in turn, we use two different routines to mine cycles from the sequence of timestamps obtained by restricting the input sequence to the event of interest, then combine and filter their outputs to generate the set \VLongOnly{$\mathcal{I}$ }of initial candidate patterns.\VLongOnly{ } The first routine, $\algname{ExtractCyclesDP}{}$\VLongOnly{ (line~\ref{alg:cycles-dp})}, uses dynamic programming. Indeed, if we allow neither gaps in the cycles nor overlaps between them, finding the best set of cycles for a given sequence corresponds to finding an optimal segmentation of the sequence, and since our cost is additive over individual cycles, we can use dynamic programming to solve it optimally~\cite{bellman:61:on}.\VLongOnly{ } The second routine, $\algname{ExtractCyclesTri}{}$\VLongOnly{ (line~\ref{alg:cycles-fold})}, extracts cycles using a heuristic which allows for gaps and overlaps. It collects triples $(t_{-1}, t_0, t_{+1})$ such that $\abs{\abs{t_{+1} - t_{0}} - \abs{t_{0} - t_{-1}}} \leq \ell$, where $\ell$ is set so that the triple can be beneficial when used to construct longer cycles. Triples are then chained into longer cycles.\VLongOnly{ A triple $(t_{-1}, t_0, t_{+1})$ can be seen as an elementary cycle with a single shift correction $e =\abs{(t_0-t_{-1}) - (t_{+1}-t_{0})}$. Since we are looking for triples that could produce cost-effective cycles, we only keep triples for which $e < \log(\tspan{\seq}+1) - 2$, following Lemma~\ref{lemma:cycles}. Triples $(t_{-1}, t_0, t_{+1})$ and $(t'_{-1}, t'_0, t'_{+1})$ are chained together if $t_{0} = t'_{-1}$ and $t_{+1} = t'_{0}$, producing $(t_{-1}, t_0, t_{+1}, t'_{+1})$, and so on. } Finally, the set $\mathcal{C}$ of cost-effective cycles obtained by merging the output of the two routines is filtered \VLongOnly{with $\algname{FilterCandidates}$, }to keep only the $k$ most efficient patterns for each occurrence\VLongOnly{ (line~\ref{alg:cycles-filter})} for a user-specified $k$, and returned.
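As an illustration of the first routine, the following Python sketch computes an optimal segmentation of one event's sorted timestamp sequence by dynamic programming. The cost oracle \texttt{seg\_cost(ts, i, j)}, giving the code length of covering \texttt{ts[i..j]} with a single cycle (or as residuals), is an assumption of this sketch, standing in for the code lengths defined earlier.
\begin{verbatim}
def best_segmentation(ts, seg_cost):
    # ts: sorted timestamps of one event.
    # best[j]: optimal cost of covering ts[:j]; the total cost is
    # additive over segments, which makes the recurrence valid.
    n = len(ts)
    best, cut = [0.0] * (n + 1), [0] * (n + 1)
    for j in range(1, n + 1):
        best[j], cut[j] = min(
            (best[i] + seg_cost(ts, i, j - 1), i) for i in range(j))
    # Backtrack the cut points into (first, last) index segments.
    segments, j = [], n
    while j > 0:
        segments.append((cut[j], j - 1))
        j = cut[j]
    return best[n], segments[::-1]
\end{verbatim}
A segment is then turned into a cycle candidate only when doing so is cost-effective, i.e.\ cheaper than encoding its occurrences as residuals, which can be enforced directly inside the cost oracle.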
\VLongOnly{\begin{algorithm}[tb] \caption{$\algname{ExtractCycles}$: Mines simple cycles from the data sequence.} \label{alg:cycles} \begin{algorithmic}[1] \Require A sequence $\seq$, a number $k$ of top candidates to keep \Ensure A collection of cycles $\mathcal{C}$ \State $\mathcal{C} \gets \emptyset$ \State $l_{\max} \gets \log(\tspan{\seq}+1) - 2$ \label{alg:cycles-crit} \For{\textbf{each} event $\alpha \in \Omega$} \label{alg:cycles-event} \State $\mathcal{C} \gets \mathcal{C} \cup \algname{ExtractCyclesDP}(\seq[\alpha])$ \label{alg:cycles-dp} \State $\mathcal{C} \gets \mathcal{C} \cup \algname{ExtractCyclesTri}(\seq[\alpha], l_{\max})$ \label{alg:cycles-fold} \EndFor \State $\mathcal{C} \gets \algname{FilterCandidates}(\mathcal{C}, \seq, k)$ \label{alg:cycles-filter} \State \textbf{return} $\mathcal{C}$ \end{algorithmic} \end{algorithm}} \VLongOnly{\medskip} \mpara{Building tree patterns from cycles.} The second stage of the algorithm builds tree patterns, starting from the cycles produced in the previous stage. That is, while there are new candidate patterns, the algorithm performs combination rounds, trying to generate more complex patterns through vertical and horizontal combinations. If desired, this stage can be skipped, thereby restricting the pattern language to simple cycles. In a round of vertical combinations performed by $\algname{CombineVertically}$ (line~\ref{alg:main-combV}), each distinct pattern tree represented among the new candidates in $\mathcal{H}$ is considered in turn. Patterns over that tree are collected and $\algname{ExtractCyclesTri}$ is used to mine cycles from the corresponding sequence of starting points.\VLongOnly{ This time, the threshold used to mine the cycles is derived from the cost of the considered pattern tree, in accordance with Lemma~\ref{lemma:vertical}.} For each obtained cycle, a nested pattern is produced by combining the corresponding candidates using $\algname{GrowVertically}$ (see Fig.~\ref{fig:ex_tree_grow}(b)). \VLongOnly{The set of candidates produced through these vertical combinations is filtered, and returned as $\mathcal{V}'$. The procedure $\algname{CombineVertically}$ for generating candidate patterns by means of vertical combinations is shown in Algorithm~\ref{alg:combineV}.
\begin{algorithm}[tb] \caption{$\algname{CombineVertically}$: Combine patterns vertically.} \label{alg:combineV} \begin{algorithmic}[1] \Require A collection of new candidate patterns $\mathcal{H}$, and other candidate patterns $\mathcal{C}$, a sequence $\seq$, a number $k$ of top candidates to keep \Ensure A collection of patterns resulting from vertical combinations $\mathcal{V}'$ \State $\mathcal{V}' \gets \emptyset$ \For{\textbf{each} distinct $\ensuremath{T}_c \in \mathcal{H}$} \State $\mathcal{C}_c \gets \{(\ensuremath{T}_x, \ensuremath{\tau}_x, \ensuremath{E}_x) \in \mathcal{H} \cup \mathcal{C}, \text{ such that } \ensuremath{T}_x=\ensuremath{T}_c\}$ \State $l_{\max} \gets \mathit{L}((\ensuremath{T}_c, \ensuremath{\tau}_1, \lls\lle))$ \For{\textbf{each} cycle $(r, p, O) \in \algname{ExtractCyclesTri}(\{\ensuremath{\tau}_x \in \mathcal{C}_c\}, l_{\max})$} \State $\mathcal{K} \gets \{(\ensuremath{T}_y, \ensuremath{\tau}_y, \ensuremath{E}_y) \in \mathcal{C}_c, \text{ such that } \ensuremath{\tau}_y \in O\}$ \State $K \gets \algname{GrowVertically}(\mathcal{K})$ \If{$\mathit{L}(\{K\}, \cov{\mathcal{K}}) < \mathit{L}(\mathcal{K}, \cov{\mathcal{K}})$} \State $\mathcal{V}' \gets \mathcal{V}' \cup \{ K \}$ \EndIf \EndFor \EndFor \State $\mathcal{V}' \gets \algname{FilterCandidates}(\mathcal{V}', \seq, k)$ \State \textbf{return} $\mathcal{V}'$ \end{algorithmic} \end{algorithm} } In a round of horizontal combinations performed by $\algname{CombineHorizontally}$ (line~\ref{alg:main-combH}), \VLongOnly{pairs of candidates such that \textit{(i)} at least one of the two patterns was produced in the previous round, and \textit{(ii)} their starting points are closer than the period of the earliest occurring of the two patterns are considered for concatenation. A}\VShortOnly{a} graph $G$ is constructed, with vertices representing candidate patterns and with edges connecting pairs of candidates $\mathcal{K} = \{\ensuremath{P}_I, \ensuremath{P}_J\}$ for which the concatenated pattern $\ensuremath{P}_N = \algname{GrowHorizontally}(\mathcal{K})$ satisfies $\mathit{L}(\{\ensuremath{P}_N\}, \cov{\mathcal{K}}) < \mathit{L}(\mathcal{K}, \cov{\mathcal{K}})$. A new pattern is then produced for each clique of $G$, by applying $\algname{GrowHorizontally}$ to the corresponding set of candidate patterns. \VLongOnly{ The set $\mathcal{H}'$ of new patterns is then filtered and returned. The procedure $\algname{CombineHorizontally}$ for generating candidate patterns by means of horizontal combinations is shown in Algorithm~\ref{alg:combineH}. To limit the number of concatenations generated and evaluated when testing pairs of patterns, we require that the periods of two patterns be similar enough not to produce shift corrections larger than in the patterns of the pair, as discussed in Section~\ref{sec:comb}. Note that if we obtain, as a result of a horizontal combination, a pattern of the following shape \[\BinfoRP{r_0}{p_0} \textcolor{darkgray}{\big(}{}\BinfoRP{r_1}{p_1} \textcolor{darkgray}{\big(}{}\activity{T_a}\textcolor{darkgray}{\big)}{} \BinfoD{d} \BinfoRP{r_1}{p_1} \textcolor{darkgray}{\big(}{}T_b\textcolor{darkgray}{\big)}{}\textcolor{darkgray}{\big)}{}\] we will factorise it into \[\BinfoRP{r_0}{p_0} \textcolor{darkgray}{\big(}{}\BinfoRP{r_1}{p_1} \textcolor{darkgray}{\big(}{} T_a \BinfoD{d} T_b\textcolor{darkgray}{\big)}{}\textcolor{darkgray}{\big)}{}\;,\] if it results in shorter code length, as is often the case.
\begin{algorithm}[tb] \caption{$\algname{CombineHorizontally}$: Combine patterns horizontally.} \label{alg:combineH} \begin{algorithmic}[1] \Require A collection of new candidate patterns $\mathcal{V}$, and other candidate patterns $\mathcal{C}$, a sequence $\seq$, a number $k$ of top candidates to keep \Ensure A collection of patterns resulting from horizontal combinations $\mathcal{H}'$ \State $\mathcal{H}' \gets \emptyset; G \gets \emptyset$ \State $\mathcal{B} \gets$ pattern pairs $(\ensuremath{P}_a, \ensuremath{P}_b) \in (\mathcal{V} \cup \mathcal{C})^2$, such that $(\ensuremath{P}_a \in \mathcal{V}$ or $\ensuremath{P}_b \in \mathcal{V})$ and $\ensuremath{\tau}_b \leq \ensuremath{\tau}_a + \ensuremath{p}_{0a}$ \For{\textbf{each} pair of patterns $\mathcal{K} = (\ensuremath{P}_a, \ensuremath{P}_b) \in \mathcal{B}$} \State $K \gets \algname{GrowHorizontally}(\mathcal{K})$ \If{$\mathit{L}(\{K\}, \cov{\mathcal{K}}) < \mathit{L}(\mathcal{K}, \cov{\mathcal{K}})$} \State $\mathcal{H}' \gets \mathcal{H}' \cup \{ K \}$ \State $G \gets G \cup \{(a,b)\}$ \EndIf \EndFor \State $\mathcal{H}' \gets \mathcal{H}' \cup \{ \algname{GrowHorizontally}(\mathcal{K})$ for each clique $\mathcal{K}$ in the graph $G \}$ \State $\mathcal{H}' \gets \algname{FilterCandidates}(\mathcal{H}', \seq, k)$ \State \textbf{return} $\mathcal{H}'$ \end{algorithmic} \end{algorithm}} \mpara{Selecting the final pattern collection.} Selecting the final set of patterns to output among the candidates in $\mathcal{C}$ is very similar to solving a weighted set cover problem. \VLongOnly{Each candidate pattern can be seen as a set containing the occurrences it covers, associated with a weight representing its code length. In addition, a singleton set is associated to each occurrence, whose weight is the cost of encoding that occurrence as a residual. } Therefore, the selection is done using a simple variant of the greedy algorithm for this problem, denoted as $\algname{GreedyCover}$ (line~\ref{alg:main-select})\VShortOnly{.}\VLongOnly{, which works as follows. Initially, the set $\mathcal{P}$ of selected patterns is empty. Let $\mathcal{O}$ be the set of event occurrences covered so far, also initially empty. In each round, the pattern $P$ with smallest value of $\mathit{L}(P)/\abs{\occs{P} \setminus \mathcal{O}}$ among remaining candidates, i.e.\ the most efficient when considering only uncovered occurrences, is selected. If $P$ is cost-effective for the remaining uncovered occurrences, it is added to $\mathcal{P}$, $\mathcal{O}$ is updated and the selection proceeds to the next round. Otherwise the selection stops and $\mathcal{P}$ is returned. } \section{Experiments} \label{sec:xps} In this section, we evaluate the ability of our algorithm to find patterns that compress the input event sequences. We make the code and the prepared datasets publicly available.\footnote{\url{https://github.com/nurblageij/periodic-patterns-mdl}} To the best of our knowledge, no existing algorithm carries out an equivalent task, and we are therefore unable to perform a comparative evaluation against competitors. To better understand the behaviour of our algorithm, we first performed experiments on synthetic sequences. We then applied our algorithm to real-world sequences including process execution traces, smartphone application activity, and life-tracking. We evaluate our algorithm's ability to compress the input sequences and present some examples of extracted patterns.
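Before turning to the evaluation, the selection stage just described can be summarised by a minimal Python sketch of the greedy weighted-set-cover variant. Representing each candidate as a pair (code length, set of covered occurrences) and passing a residual cost oracle are assumptions of this sketch, not the reference implementation.
\begin{verbatim}
def greedy_cover(candidates, residual_cost):
    # candidates: list of (code_length, frozenset_of_occurrences).
    # residual_cost(o): cost of encoding occurrence o as a residual.
    selected, covered = [], set()
    remaining = list(candidates)
    while True:
        # Keep only candidates that still cover something new.
        remaining = [p for p in remaining if p[1] - covered]
        if not remaining:
            break
        # Most efficient pattern w.r.t. still-uncovered occurrences.
        best = min(remaining, key=lambda p: p[0] / len(p[1] - covered))
        new = best[1] - covered
        # Stop when even the best pattern is not cost-effective.
        if best[0] >= sum(residual_cost(o) for o in new):
            break
        selected.append(best)
        covered |= best[1]
        remaining.remove(best)
    return selected
\end{verbatim}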
\VShortOnly{ \begin{table}[tbp] \caption{Statistics of the event log sequences used in the experiments.} \label{tab:data-stats-traces} \centering \begin{tabular}{@{\hspace{1ex}}l@{\hspace{4ex}}r@{\hspace{2ex}}r@{\hspace{2ex}}r@{\hspace{2.5ex}}r@{\hspace{1.5ex}}r@{\hspace{2ex}}r@{\hspace{2ex}}r@{\hspace{2.5ex}}r@{\hspace{1.5ex}}r@{\hspace{1ex}}} \toprule & $\len{\seq}$ & $\tspan{\seq}$ & $\abs{\Omega}$ & \multicolumn{2}{c}{$\len{\seq[\alpha]}$} & & $\cl(\emptyset, \seq)$ & \multicolumn{2}{c}{RT (s)} \\ & & & & $\omed$ & $\max$ & & & cycles & overall \\ \midrule \dstTZap{0} & $181644$ & $181643$ & $443$ & $22$ & $36697$ & & $4154277$ & $2094$ & $35048$ \\ \dstBugz{0} & $16775$ & $16774$ & $91$ & $6$ & $3332$ & & $303352$ & $112$ & $522$ \\ \dstname{samba}{} & $28751$ & $7461$ & $119$ & $44$ & $2905$ & & $520443$ & $214$ & $2787$ \\ \dstSachaG{15} & $65977$ & $221445$ & $141$ & $231$ & $4389$ & & $1573140$ & $2963$ & $14377$ \\ \cmidrule{2-10} \\ [-1.2em] \multicolumn{10}{l}{\dstUbiAR{abs} ($31$ sequences)} \\ \midrule min & $413$ & $11391$ & $10$ & $23$ & $194$ & & $6599$ & $1$ & $1$ \\ median & $23859$ & $87591$ & $87$ & $52$ & $2131$ & & $486633$ & $232$ & $1020$ \\ max & $167863$ & $17900307$ & $241$ & $129$ & $6101$ & & $3733349$ & $2297$ & $28973$ \\ \bottomrule \end{tabular} \end{table} } For a given event sequence, the main objective of our algorithm is to mine and select a good collection of periodic patterns, in the sense that the collection should allow the input sequence to be compressed as much as possible. Therefore, the main measure that we consider in our experiments is the \emph{compression ratio}, defined as the ratio between the length of the code representing the input sequence with the considered collection of patterns and the length of the code representing the input sequence with an empty collection of patterns, i.e.\ using only individual event occurrences, given as a percentage. For a given sequence $S$ and collection of patterns $\ensuremath{\mathcal{C}}$ the compression ratio is defined as \[ \%\cl{}\label{sym:prcCl} = 100 \cdot \cl(\ccycle, \seq) / \cl(\emptyset, \seq)\;,\] with smaller values associated with better pattern collections. \VLongOnly{ \subsection{Mining synthetic sequences} \label{ssec:xps_synthe} We begin by probing the behaviour of our algorithm on synthetic sequences containing planted periodic patterns. First we generate sequences that contain a single pattern. Each pattern consists of a basis of one to three events, repeated in a cycle, in two nested cycles or in three nested cycles, that is, building pattern trees of depth $1$, $2$ and $3$, respectively. The simplest basis consists of event $a$, with the period of the inner cycle being either at least five (specifically, in $[5, 9]$) or at least $10$ (specifically, in $[10, 24]$). To build more complex patterns, we use event $a$ followed by event $b$ at distance $4$, i.e.\ \textcolor{darkgray}{\big(}{}\activity{a} \BinfoDT{4} \activity{b}\textcolor{darkgray}{\big)}{}, as well as event $a$ followed by event $c$ at distance $1$, followed by event $d$ at distance $2$, i.e.\ \textcolor{darkgray}{\big(}{}\activity{a} \BinfoDT{1} \activity{c} \BinfoDT{2} \activity{d}\textcolor{darkgray}{\big)}{}. Each resulting perfect synthetic sequence can then be perturbed with \emph{shift noise}, i.e.\ by displacing the occurrences by a few time steps either forward or backward, or with \emph{additive noise}, i.e.\ by adding sporadic occurrences.
Shift noise is parameterised, on one hand, by the maximum absolute shift by which the occurrences might be displaced and, on the other hand, by the fraction of occurrences that are displaced. We refer to these two parameters as the \emph{level} and the \emph{density} of the noise, respectively. For additive noise, we insert occurrences of event $a$ at random timestamps. This type of noise has a single parameter, \emph{density}, fixing the number of sporadic occurrences as compared to the number of occurrences of the event in the unperturbed sequence. The generated sequences contain from about fifty up to over two thousand occurrences. In each round, we mine each generated sequence in turn for periodic patterns, check whether the planted pattern was recovered exactly and compare the length of the code for encoding the perturbed sequence using either the planted pattern, denoted as $\mathit{L}_H$, or the patterns selected by the algorithm, denoted as $\mathit{L}_F$. The first round of experiments is run on sequences with only shift noise. The second and third rounds of experiments are run on sequences with additive noise of density $0.1$ and density $0.5$ respectively. The fourth round is run on sequences with only shift noise, but letting the occurrences of the planted pattern interleave, unlike in the three previous rounds. In Fig.~\ref{fig:synthe_1_cr}--\ref{fig:synthe_4_cr}, we plot the compression ratio achieved by the planted pattern versus the compression ratio achieved by the pattern collection selected by the algorithm for each of the twenty sequences generated with each considered combination of parameters, for the four rounds respectively. A different take on the same results is presented in Fig.~\ref{fig:synthe_1}--\ref{fig:synthe_4}, where we show the distribution of $\%\cl_F - \%\cl_H$ among the twenty sequences generated with each combination of parameters as boxplots, for the four rounds respectively. A value of $\%\cl_F - \%\cl_H=0$ means that the patterns selected by our algorithm achieve the same compression as the planted patterns, while positive (resp.\ negative) values of $\%\cl_F - \%\cl_H$ correspond to selected patterns achieving longer (resp.\ shorter) code length than with planted patterns. On the left next to each boxplot, we indicate the number of sequences for which the planted pattern was recovered exactly. Next, we consider sequences containing multiple planted patterns. For this purpose, we consider the pool of sequences generated in each of the four rounds with single patterns above and generate new sequences by selecting between two and five sequences from the pool and combining them. The patterns can be combined either with or without overlap, that is, letting a sequence start either before or only after the preceding sequence ends. The results for the runs over these synthetic sequences containing multiple planted patterns are presented in Fig.~\ref{fig:synthe_comb}. We see from Fig.~\ref{fig:synthe_1} that when no spurious occurrences are inserted, the planted pattern is recovered exactly in most cases for simple patterns of depth one, while the performance deteriorates and fewer planted patterns are recovered for more complex patterns and greater depths, as also visible from Fig.~\ref{fig:synthe_1_cr}. This is expected since recovering multi-event patterns requires that the corresponding cycles are properly recovered in the first stage of the algorithm for each of the events that make up the pattern.
Even in the absence of noise, the algorithm might miss the planted pattern, e.g.\ because it merges successive nested repetitions of a cycle that appear close to each other. When the sequences involve interleaving (Fig.~\ref{fig:synthe_4_cr} and~\ref{fig:synthe_4}), the algorithm behaves in a similar way, except for the more complex basis with depths two and three, which are expectedly impacted more strongly by interleaving, resulting in more degraded performance. Spurious occurrences break the planted patterns, which are then no longer recovered by the algorithm. With a low density of additive noise, the algorithm often selects patterns very similar to the planted one but also covering the spurious occurrences, using shift corrections to accommodate them (Fig.~\ref{fig:synthe_2}). This is typical of the dynamic programming cycle mining, which is able to find cycles with many repetitions but does not allow any occurrence to be skipped; spurious occurrences are thus incorporated at the cost of increased corrections. When the density of noise becomes fairly large, the inserted occurrences might actually generate new patterns that can result in shorter code length than the planted pattern, as can be observed in Fig.~\ref{fig:synthe_3}. Indeed, except for the patterns over single event $a$ with long periods, the difference in compression ratios is negative in the majority of cases. When several planted patterns are combined without overlap, the algorithm is able to recover them all exactly in roughly half of the cases for patterns taken from pools with no additive noise, with or without interleaving ($44$ and $51\%$, respectively, see Fig.~\ref{fig:synthe_comb}). In most cases the patterns selected by the algorithm yield a longer code length than the planted patterns, except in the presence of dense additive noise. Note that the requirement that the planted pattern(s) should be recovered exactly is very strict, as it means that the pattern(s) selected by the algorithm should cover the exact same occurrences as the planted ones, with the exact same pattern tree. Closer inspection of the results reveals that the algorithm is able to recover large fragments of the planted patterns in most cases. More specifically, in cases where it fails to recover planted patterns with height greater than one, the algorithm is in general able to identify cycles that constitute large fragments of different repetitions of the inner cycle of the pattern, but merely omitting a few occurrences in these fragments prevents the algorithm from combining them into vertical patterns of greater height. Designing a procedure that is able to build on the extracted fragments from different repetitions to recover the omitted occurrences could make the retrieval of this type of patterns more robust, but is clearly not trivial. \subsection{Mining real-world sequences} \label{ssec:xps_quantitative} Next, we apply our algorithm to real-world datasets.} \mpara{Datasets.} Our first two datasets come from a collaboration with STMicroelectronics and are execution traces of a set-top box based on the STiH418 SoC\footnote{STiH418 description: \url{http://www.st.com/resource/en/data_brief/stih314.pdf}} running STLinux. Each trace is a log of system actions (interruptions, context switches and system calls) taken by the KPTrace instrumentation system developed at STMicroelectronics.
The \dstname{3zap}{} \VShortOnly{sequence}\VLongOnly{dataset} corresponds to 3 successive changes of channel (``zap''), while the \dstname{bugzilla}{} \VShortOnly{sequence}\VLongOnly{dataset} corresponds to logging a display blackout bug into the bug tracking system of ST. \VLongOnly{Each dataset contains two traces, one for either of the two cores of the box, named respectively \dstTZap{0} and \dstTZap{1}, on one hand, \dstBugz{0} and \dstBugz{1}, on the other hand.} For our analysis of these traces, we do not consider timestamps, only the succession of events. The \dstname{ubiqLog}{} dataset was obtained from the UCI Machine learning repository.\!\footnote{\url{https://archive.ics.uci.edu/ml/datasets/UbiqLog+(smartphone+lifelogging)}} It contains traces collected from the smartphones of users over the course of two months. For each of $31$ users \VLongOnly{(we excluded those whose data was not encoded using Hindu-Arabic numerals), }we obtain a sequence recording what applications are run on that user's smartphone. \VShortOnly{We consider absolute timestamps with a granularity of one minute.} \VLongOnly{We either consider absolute timestamps with a granularity of one minute or only the succession of events, and denote the corresponding collections of sequences respectively as \dstUbiAR{abs} and \dstUbiAR{rel}. } The \dstname{samba}{} dataset consists of a single sequence recording the emails identifying the authors of commits on the git repository of the samba network file system\footnote{\url{https://git.samba.org/}} from $1996$ to $2016$. We consider timestamps with a granularity of one day. \VLongOnly{User commits are instantaneous.} We aggregated together users that appeared fewer than $10$ times\VLongOnly{ as ``other''}. The \dstname{sacha}{} dataset \VShortOnly{consists of a single sequence containing }\VLongOnly{contains }records from the \textit{quantified awesome} life log\footnote{\url{http://quantifiedawesome.com/records}} recording the daily activities of its author between November 2011 and January 2017. The daily activities are associated to start and end timestamps, and are divided between categories organised into a hierarchy. Categories with fewer than $200$ occurrences were aggregated to their parent category. Each resulting category is represented by an event. Adjacent occurrences of the same event were merged together. \VShortOnly{We consider absolute timestamps with a granularity of $15$ minutes.} \VLongOnly{We either consider absolute timestamps with a granularity of one minute or only the succession of events, and denote the corresponding sequences respectively as \dstSachaAR{abs} and \dstSachaAR{rel}. Further, we investigate what happens when we coarsen the time granularity, from the original one minute to $15$ minutes, $30$ minutes, $1$ hour, half a day and a full day. The corresponding sequences are denoted \dstSachaG{15}, \dstSachaG{30}, \dstSachaG{60}, \dstSachaG{720} and \dstSachaG{1440}, respectively. } \VLongOnly{When considering absolute timestamps for occurrences involving non-instant processes (e.g.\ daily activities, running applications), each process might be associated with three different events representing its start, its end, and the process happening for a duration smaller than the time granularity respectively. 
When considering only the succession of events or, in other words, focusing on the order in which things happen rather than the specific times, we keep only the starting time of the process, and each process is hence associated with a single event.} \VShortOnly{Table~\ref{tab:data-stats-traces} presents the statistics of the sequences used in our experiments.} \VLongOnly{Tables~\ref{tab:data-stats-traces}--\ref{tab:data-stats-UbiqLog-IS-rel} present the statistics of the sequences used in our experiments.} We indicate the length ($\len{\seq}$) and duration ($\tspan{\seq}$) of each sequence, the size of its alphabet ($\abs{\Omega}$), as well as the median and maximum length of the event subsequences ($\len{\seq[\alpha]}$). We also indicate the code length of the sequence when encoded with an empty collection of patterns ($\cl(\emptyset, \seq)$), and the running time of the algorithm (RT, in seconds), both for the full procedure of mining and selecting the patterns (overall) and for the first stage of mining cycles for each separate event (cycles). \VShortOnly{ \begin{table}[tbp] \caption{Summary of results for the separate event sequences.} \label{tab:res-short} \centering \begin{minipage}{.47\textwidth} \begin{tabular}{@{\hspace{.2ex}}l@{\hspace{.6ex}}r@{\hspace{.7ex}}r@{\hspace{.7ex}}c@{/}c@{/}c@{/}c@{\hspace{.8ex}}r@{\hspace{.2ex}}} \toprule & $\%\cl{}~~$ & $\cl\!:\!\resSet{}$ & \parbox[b][1em][b]{.4cm}{\centering $s$} & \parbox[b][1em][b]{.4cm}{\centering $v$} & \parbox[b][1em][b]{.4cm}{\centering $h$} & \parbox[b][1em][b]{.4cm}{\centering $m$} & $c^{+}$ \\ \cmidrule{2-8} \\ [-1.2em] \multicolumn{8}{c}{\dstTZap{0}} \\ \midrule $\ccycle_{S}$ & $56.32$ & 0.41 & $11852$ & -- & -- & -- & $2325$ \\ $\ccycle_{V}$ & $55.14$ & 0.40 & $10581$ & $581$ & -- & -- & $2325$ \\ $\ccycle_{H}$ & $47.84$ & 0.35 & $3459$ & -- & $4912$ & -- & $2325$ \\ $\ccycle_{V\!+H}$ & $47.40$ & 0.34 & $3499$ & $419$ & $4302$ & -- & $2325$ \\ $\ccycle_{F}$ & $46.99$ & 0.34 & $3499$ & $91$ & $4154$ & $268$ & $2325$ \\ [.2em] \multicolumn{8}{c}{\dstname{samba}{}} \\ \midrule $\ccycle_{S}$ & $28.42$ & 0.14 & $429$ & -- & -- & -- & $2657$ \\ $\ccycle_{F}$ & $28.37$ & 0.13 & $409$ & $0$ & $17$ & $0$ & $2657$ \\ \bottomrule \end{tabular} \end{minipage} \hfill \begin{minipage}{.47\textwidth} \begin{tabular}{@{\hspace{.2ex}}l@{\hspace{.6ex}}r@{\hspace{.7ex}}r@{\hspace{.7ex}}c@{/}c@{/}c@{/}c@{\hspace{.8ex}}r@{\hspace{.2ex}}} \toprule & $\%\cl{}~~$ & $\cl\!:\!\resSet{}$ & \parbox[b][1em][b]{.4cm}{\centering $s$} & \parbox[b][1em][b]{.4cm}{\centering $v$} & \parbox[b][1em][b]{.4cm}{\centering $h$} & \parbox[b][1em][b]{.4cm}{\centering $m$} & $c^{+}$ \\ \cmidrule{2-8} \\ [-1.2em] \multicolumn{8}{c}{\dstBugz{0}} \\ \midrule $\ccycle_{S}$ & $48.58$ & 0.12 & $262$ & -- & -- & -- & $1652$ \\ $\ccycle_{V}$ & $48.56$ & 0.12 & $259$ & $1$ & -- & -- & $1652$ \\ $\ccycle_{H}$ & $42.43$ & 0.12 & $133$ & -- & $70$ & -- & $1652$ \\ $\ccycle_{V\!+H}$ & $42.39$ & 0.12 & $130$ & $1$ & $72$ & -- & $1652$ \\ $\ccycle_{F}$ & $42.41$ & 0.13 & $124$ & $1$ & $70$ & $2$ & $1652$ \\ [.2em] \multicolumn{8}{c}{\dstSachaG{15}} \\ \midrule $\ccycle_{S}$ & $74.34$ & 0.37 & $9602$ & -- & -- & -- & $304$ \\ $\ccycle_{F}$ & $68.64$ & 0.35 & $3957$ & $0$ & $2996$ & $0$ & $582$ \\ \bottomrule \end{tabular} \end{minipage} \end{table} } \VShortOnly{ \begin{figure}[tbp] \centering \caption{Compression ratios for the sequences from the \dstUbiAR{abs} dataset.} \label{fig:xps-prcCl-ubi-abs} \includegraphics[trim=20 0 40 0,clip,width=.49\textwidth]{fig_prcCL_UbiqLogISEAbs2-2}
\includegraphics[trim=20 0 40 0,clip,width=.49\textwidth]{fig_prcCL_UbiqLogISEAbs1-2} \end{figure}} \mpara{Measures.} Besides\VLongOnly{ the code length and} the compression ratio\VShortOnly{ ($\%\cl$)} achieved with the selected pattern collections, we consider several other characteristics\VShortOnly{ (see Table~\ref{tab:res-short})}. For a given pattern collection $\ensuremath{\mathcal{C}}$, we denote the set of residuals $\residual{\ensuremath{\mathcal{C}}, \seq}$ simply as $\mathcal{R}$\label{sym:resSet} and look at what fraction of the code length is spent on them, denoted as $\cl\!:\!\resSet\label{sym:ratioClR} = \sum_{o \in \mathcal{R}} \mathit{L}(o)/\cl(\ccycle, \seq)$. \VLongOnly{Note that when the pattern collection is empty, $\cl\!:\!\resSet=1$, since the code length then results entirely from residuals. $\abs{\mathcal{R}}$ and $\abs{\ensuremath{\mathcal{C}}}$ are the number of residuals (individual event occurrences) and the number of patterns in the collection, respectively.} We also look at the number of patterns of different types in $\ensuremath{\mathcal{C}}$\VShortOnly{:}\VLongOnly{, specifically, \textit{(i)}}\VShortOnly{ ($s$)} simple cycles, i.e.\ patterns with \VLongOnly{both width and height equal to $1$}\VShortOnly{$\text{width}=1$ and $\text{height}=1$}, \VLongOnly{\textit{(ii)}}\VShortOnly{($v$)} vertical patterns, \VLongOnly{having a width of $1$ and a height strictly greater than $1$}\VShortOnly{with $\text{width}=1$ and $\text{height}>1$}, \VLongOnly{\textit{(iii)}}\VShortOnly{($h$)} horizontal patterns, \VLongOnly{having a height of $1$ and a width strictly greater than $1$}\VShortOnly{with $\text{width}>1$ and $\text{height}=1$}, and \VLongOnly{\textit{(iv)}}\VShortOnly{($m$)} proper two-dimensional patterns, \VLongOnly{with both height and width greater than $1$}\VShortOnly{with $\text{width}>1$ and $\text{height}>1$}. Finally, we look at the \VLongOnly{fraction of patterns in $\ensuremath{\mathcal{C}}$ that cover strictly more than three occurrences, i.e.\ \[c_{>3} = \abs{\{ P \in \ensuremath{\mathcal{C}}, \abs{\cov{P}} > 3\}}/\abs{\ensuremath{\mathcal{C}}}\;,\] where $\cov{P}$ denotes the set of timestamp--event pairs covered by a pattern $P$, and the median and }maximum cover size of patterns in $\ensuremath{\mathcal{C}}$\VShortOnly{, denoted as $c^{+}$}.
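The type counts above depend only on the width and height of each pattern tree; the classification used in the tables can be sketched in Python as follows (a trivial helper, not taken from the reference implementation):
\begin{verbatim}
def pattern_type(width, height):
    # s: simple cycle, v: vertical pattern, h: horizontal pattern,
    # m: proper two-dimensional pattern.
    if width == 1:
        return "s" if height == 1 else "v"
    return "h" if height == 1 else "m"
\end{verbatim}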
\mpara{Results.} \VShortOnly{In addition to the final collection of patterns returned by the algorithm after potentially a few rounds of combinations (denoted as $\ccycle_{F}$), we also consider intermediate collections of patterns, namely a collection selected among cycles mined during the first stage of the algorithm (denoted as $\ccycle_{S}$), and collections additionally including patterns from the first round of horizontal combinations ($\ccycle_{H}$), of vertical combinations ($\ccycle_{V}$), or both, i.e.\ at the end of the first round of combinations ($\ccycle_{V\!+H}$).} \VLongOnly{To better understand the role of the pattern combinations, in addition to looking at the final collection of patterns returned by the algorithm (denoted as $\ccycle_{F}$), we also consider intermediate collections of patterns, namely a collection selected among simple cycles mined during the initial phase of the algorithm (denoted as $\ccycle_{S}$), a collection selected among simple cycles and patterns resulting from the first round of horizontal combinations (denoted as $\ccycle_{H}$), from the first round of vertical combinations (denoted as $\ccycle_{V}$) and from both, or in other words among the candidate patterns obtained at the end of the first round of combinations (denoted as $\ccycle_{V\!+H}$). Table~\ref{tab:res-long-traces} shows the results for application trace log sequences \dstTZap{0}, \dstTZap{1}, \dstBugz{0}, \dstBugz{1} and \dstname{samba}{}. Table~\ref{tab:res-long-sacha_1} shows the results for \dstname{sacha}{} sequences when considering timestamps with different time granularities, as well as when considering only the event succession. Tables~\ref{tab:res-long-UbiqLog-ISE-abs_1}--\ref{tab:res-long-UbiqLog-ISE-abs_5} show the results for the sequences from the \dstUbiAR{abs} dataset, while Tables~\ref{tab:res-long-UbiqLog-IS-rel_1}--\ref{tab:res-long-UbiqLog-IS-rel_5} show the results for the sequences from the \dstUbiAR{rel} dataset. For each sequence and pattern collection we indicate the compression ratio ($\%\cl{}$), the code length ($\mathit{L}_{\ensuremath{\mathcal{C}}}$), the fraction of code used for residuals ($\cl\!:\!\resSet{}$), the number of residuals ($\abs{\mathcal{R}}$) and of patterns ($\abs{\ensuremath{\mathcal{C}}}$), the number of simple, vertical, horizontal and two-dimensional patterns ($s$, $v$, $h$, and $m$, respectively), the fraction of patterns covering more than three occurrences ($c_{>3}$), as well as the median ($c^{\text{M}}$) and the maximum ($c^{+}$) cover size of patterns in the collection.
} \VShortOnly{A summary of the results for the separate event sequences \dstTZap{0}, \dstBugz{0}, \dstname{samba}{} and \dstname{sacha}{}, is presented in Table~\ref{tab:res-short}.} \VLongOnly{Table~\ref{tab:res-agg} shows aggregated results for the \dstUbiAR{abs} and \dstUbiAR{rel} datasets, where we indicate the range of values taken for the different sequences in each subset.} \VShortOnly{Fig.~\ref{fig:xps-prcCl-ubi-abs} shows the compression ratios achieved on event sequences from the \dstUbiAR{abs} dataset.} \VLongOnly{Fig.~\ref{fig:xps-prcCl-other}--\ref{fig:xps-prcCl-ubi-rel} show the compression ratios achieved for sequences from the different datasets.} \VLongOnly{ \begin{table} \caption{Aggregated results for \dstname{ubiqLog}{} sequences.} \label{tab:res-agg} \centering \begin{tabular}{@{}c@{\hspace{1ex}}r@{\hspace{.7ex}}r@{\hspace{.7ex}}c@{/}c@{/}c@{/}c@{\hspace{.4ex}}r@{}} \toprule & $\%\cl{}$ & $\cl\!:\!\resSet{}$ & \parbox[b][1em][b]{.2cm}{\centering $s$} & \parbox[b][1em][b]{.2cm}{\centering $v$} & \parbox[b][1em][b]{.2cm}{\centering $h$} & \parbox[b][1em][b]{.2cm}{\centering $m$} & $c^{+}$ \\ \cmidrule{2-8} \\ [-.5em] \multicolumn{8}{c}{\dstUbiAR{abs} (31)} \\ \midrule $\ccycle_{S}$ & [$40.18$, $85.52$] & [$0.22$, $0.60$] & [$41$, $9468$] & [$0$, $0$] & [$0$, $0$] & [$0$, $0$] & [$17$, $388$] \\ $\ccycle_{V}$ & [$40.17$, $85.52$] & [$0.23$, $0.60$] & [$41$, $9445$] & [$0$, $57$] & [$0$, $0$] & [$0$, $0$] & [$17$, $388$] \\ $\ccycle_{H}$ & [$30.08$, $84.33$] & [$0.24$, $0.60$] & [$31$, $3113$] & [$0$, $0$] & [$5$, $2256$] & [$0$, $0$] & [$17$, $2328$] \\ $\ccycle_{V\!+H}$ & [$30.08$, $84.33$] & [$0.24$, $0.60$] & [$31$, $3107$] & [$0$, $4$] & [$5$, $2252$] & [$0$, $0$] & [$17$, $2328$] \\ $\ccycle_{F}$ & [$30.06$, $84.33$] & [$0.24$, $0.60$] & [$31$, $3102$] & [$0$, $2$] & [$5$, $2233$] & [$0$, $11$] & [$17$, $2328$] \\ [.5em] \multicolumn{8}{c}{\dstUbiAR{rel} (31)} \\ \midrule $\ccycle_{S}$ & [$26.05$, $64.94$] & [$0.12$, $0.45$] & [$9$, $2567$] & [$0$, $0$] & [$0$, $0$] & [$0$, $0$] & [$158$, $8500$] \\ $\ccycle_{V}$ & [$26.05$, $64.94$] & [$0.12$, $0.45$] & [$9$, $2567$] & [$0$, $2$] & [$0$, $0$] & [$0$, $0$] & [$158$, $8500$] \\ $\ccycle_{H}$ & [$25.91$, $63.48$] & [$0.12$, $0.41$] & [$9$, $2083$] & [$0$, $0$] & [$0$, $339$] & [$0$, $0$] & [$158$, $35300$] \\ $\ccycle_{V\!+H}$ & [$25.91$, $63.48$] & [$0.12$, $0.41$] & [$9$, $2083$] & [$0$, $2$] & [$0$, $334$] & [$0$, $0$] & [$158$, $35300$] \\ $\ccycle_{F}$ & [$25.91$, $63.48$] & [$0.12$, $0.41$] & [$9$, $2083$] & [$0$, $2$] & [$0$, $334$] & [$0$, $1$] & [$158$, $35300$] \\ \bottomrule \end{tabular} \end{table} } We see that the algorithm is able to find sets of patterns that compress the input event sequences. The compression ratio varies widely depending on the considered sequence, from a modest $84\%$ for some sequences from \dstUbiAR{abs} to a reduction of more than two thirds, for instance for \dstname{samba}{}. To an extent, the achieved compression can be interpreted as an indicator of how much periodic structure is present in the sequence (at least of the type that can be exploited by our proposed encoding and detected by our algorithm). In some cases, as with \dstname{samba}{}, the compression is achieved almost exclusively with simple cycles, but in many cases the final selection contains a large fraction of horizontal patterns (sometimes even about two thirds), which bring a noticeable improvement in the compression ratio (as can be seen in Fig.~\ref{fig:xps-prcCl-ubi-abs}, for instance). 
Vertical patterns, on the other hand, are much rarer, and proper two-dimensional patterns are almost completely absent. The \dstname{bugzilla}{} sequence\VLongOnly{s} feature\VShortOnly{s} such patterns, and even more so the \dstname{3zap}{} sequence\VLongOnly{s}. This agrees with the intuition that recursive periodic structure is more likely to be found in execution logs tracing multiple recurrent automated processes. \VLongOnly{ \begin{figure}[tbp] \centering \begin{tabular}{@{}ccc@{}} \includegraphics[trim=5 0 20 0,width=.32\textwidth,clip]{fig_times_h_details} & \includegraphics[trim=5 0 20 0,width=.32\textwidth,clip]{fig_times_m_details} & \includegraphics[trim=5 0 20 0,width=.32\textwidth,clip]{fig_times_s_details} \\ \multicolumn{3}{c}{\hfill \includegraphics[trim=100 133 10 222,width=.45\textwidth,clip]{fig_times_l_details} \hfill \includegraphics[trim=100 45 10 310,width=.45\textwidth,clip]{fig_times_l_details} \hfill} \\ \end{tabular} \caption{Running times for sequences from the different datasets, in hours (left) and zoomed-in in minutes (middle) and seconds (right).} \label{fig:xps-times} \end{figure} } \VShortOnly{ \begin{figure}[tbp] \centering \begin{tabular}{@{}cc@{}} \includegraphics[trim=5 0 20 0,height=3.5cm,clip]{fig_times_h_all} & \includegraphics[trim=5 0 20 0,height=3.5cm,clip]{fig_times_m_all} \\ \end{tabular} \caption{Running times for mining the different sequences (in hours, left) and zooming in on shorter sequences (in minutes, right).} \label{fig:xps-times} \end{figure} } \VLongOnly{ In most cases, a large proportion of the selected patterns cover more than the minimum three timestamp--event pairs. Some of the largest patterns cover several hundreds or a few thousand occurrences, depending on the length of the input sequence as well as on the strength of its periodic structure. Naturally, the more occurrences a pattern covers, the more efficient it is, assuming it can be represented concisely. From Table~\ref{tab:res-long-sacha_1} we can see that the chosen time granularity has a strong impact on the extracted patterns. With the finest time granularity, i.e.\ a $1$ minute time step (\dstSachaG{1}), few patterns are found because the activities need to reoccur with minute regularity and any deviation must be accounted for in the shift corrections. Therefore periodic patterns are not very efficient and only little compression is achieved. Increasing the time granularity to $15$ minutes, $30$ minutes and $1$ hour (respectively \dstSachaG{15}, \dstSachaG{30} and \dstSachaG{60}) makes the encoding more forgiving of small deviations in the exact times at which activities happen, resulting in more efficient patterns. This is evidenced by a sharp decrease in the fraction of simple cycles ($s/\abs{\ensuremath{\mathcal{C}}}$) and an increase in the fraction of patterns covering more than three occurrences ($c_{>3}$) and the maximum cover size ($c^{+}$). When the time granularity is further coarsened, to half a day and a full day (\dstSachaG{720} and \dstSachaG{1440}), the fraction of simple cycles among the selected patterns increases again, but this time each one covers a large number of occurrences. At such a level of granularity, the time and order in which the activities are carried out during the day no longer matter, only which activities are performed on any given day.
Finally, considering the succession of activities rather than absolute timestamps (\dstSachaAR{rel}) might allow the identification of fairly different patterns, since activities in a pattern are no longer separated by a time span but by the number of other activities performed in between. However, this can result in patterns that are difficult to understand, since they cannot be easily mapped back to time points, and hence calendar dates and hours of the day cannot be used when interpreting the patterns. Hence, the choice of using succession or absolute timestamps, and, in the latter case, of choosing the granularity of the time step, has to be made by the analyst in consideration of the context and the time scale that is of interest.
In some cases (e.g.\ \dstBugz{0} in Table~\ref{tab:res-long-traces}, \dstSachaG{60} in Table~\ref{tab:res-long-sacha_1} and several \dstname{ubiqLog}{} sequences), the collection of patterns selected from the final set of candidates, $\ccycle_{F}$, achieves worse compression than collections selected from intermediate sets of candidates, despite the fact that the intermediate candidate sets are subsets of the final one. This is due to the fact that the pattern selection, which is in essence a weighted set cover problem, is solved greedily (see Section~\ref{sec:algo}, and the generic sketch at the end of this section): the local decision of choosing a more efficient pattern produced in later combination rounds might eventually result in degraded compression. However, the degradation is fairly limited, and one might simply decide to replace the final solution by an intermediate one when the candidates produced later on do not appear to contribute to shortening the code length.}

Fig.~\ref{fig:xps-times} shows the running times for sequences from the different datasets. \VLongOnly{Circles and squares, coloured according to the achieved compression ratio, indicate the running time of the algorithm for sequences from the \dstname{ubiqLog}{} dataset and from other datasets, respectively. Each such marker is connected to a triangle indicating the running time for the combination rounds. Larger triangles correspond to sequences for which more simple cycles are extracted during the initialisation phase. Darker triangles correspond to sequences for which the maximum cover size among these simple cycles is larger.} The running times vary greatly, from only a few seconds to several hours. Naturally, mining longer sequences tends to require longer running times. \VLongOnly{However, directly observable characteristics of the sequence, such as its size, the size of its alphabet, or the relative frequencies of the events, are not the only factors impacting the running time. The number and length of the cycles extracted in the first stage have a major effect on the time required by the combination rounds, i.e.\ the second stage, which take the bulk of the overall running time.
Indeed, if the initial candidates contain many long cycles, many more tests will be needed when trying to combine them into more complex patterns.}
\setlength{\ndhlen}{2.2em} \setlength{\ndwlen}{.8cm}
\begin{figure}[tbp]
\centering
\begin{tikzpicture}[-,auto,node distance=\ndhlen, thick]
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.7em] (0, 0) rectangle +(4.1, -2.4) {};
\threeEvEx{0,0}{7}{\SI{10}{\day}}{2017-01-09 18:15}{dinner]}{\SI{0}{\minute}}{[clean ktch.}{\SI{30}{\minute}}{clean ktch.]}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (0,-2.3) {d)};
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.7em] (4.2, 0) rectangle +(4., -2.4) {};
\threeEvEx{4.2,0}{14}{\SI{7}{\day}}{2015-01-08 08:45}{[subway}{\SI{45}{\minute}}{subway]}{\SI{0}{\minute}}{[consulting}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (4.2,-2.3) {e)};
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.7em] (8.3, 0) rectangle +(4., -2.4) {};
\otherEvEx{8.3,0}{4}{221}{151772}{6:C}{2}{2395:X}{4}{2}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (8.3,-2.3) {f)};
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.9em] (0, 2.) rectangle +(4.1, -2.) {};
\twoEvEx{0,2.1}{291}{\SI{2}{\hour}\,30}{2016-03-16 11:45}{[childcare}{\SI{1}{\hour} \SI{30}{\minute}}{childcare]}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (0,.18) {a)};
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.9em] (4.2, 2.) rectangle +(4., -2.) {};
\twoEvEx{4.2,2.1}{76}{\SI{1}{\day}}{2014-12-18 00:15}{[sleep}{\SI{8}{\hour} \SI{30}{\minute}}{sleep]}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (4.2,.18) {b)};
\draw[info rec, xshift=-1.4\ndwlen, yshift=1.4\ndhlen+.9em] (8.3, 2.) rectangle +(4., -2.) {};
\oneEvEx{8.3,2.1}{48}{\SI{1}{\day}\,\SI{15}{\minute}}{2015-12-16 00:00}{[sleep}
\node[xshift=-1.\ndwlen, yshift=1.4\ndhlen+1.2em] at (8.3,.18) {c)};
\end{tikzpicture}
\caption{Example patterns from \dstSachaG{15} (a--e) and \dstTZap{0} (f).}
\label{fig:res-ex}
\end{figure}
\mpara{Example patterns.} Finally, we present some examples of patterns obtained from the \dstSachaG{15} and \dstTZap{0} sequences, in Fig.~\ref{fig:res-ex}. The start and end of an activity A are denoted as ``[A'' and ``A]'' respectively. The patterns from the \dstSachaG{15} sequence are simple and rather obvious, but they make sense when considering everyday activities. The fact that we are able to find them is a clear sign that the method is working. The \dstTZap{0} pattern is a typical system case: the repetition of a context switch (6:C) followed by several activations of a process (2395:X). \VLongOnly{Further examples can be found in Tables~\ref{tab:res-ex-sacha} and~\ref{tab:res-ex-zap}. In \dstTZap{0} patterns, event names consist of a numerical part, indicating the process id, and one or two letters indicating the action. Upper and lower case letters represent the start and end of an action, respectively. The most common actions are interruption (I), context switch (C), system call (X), and user function call (U).}
\medskip
Most of the discovered patterns are fairly simple. We suspect that this is due to the nature of the data: there are no significantly complex patterns in these event log sequences. In any case, the expressivity of our proposed pattern language comes at no detriment to the simpler, more common patterns, but brings the potential benefit of identifying sequences containing exceptionally regular structure.
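The greedy selection invoked above can be illustrated with a generic sketch. The following Python snippet implements the classic greedy heuristic for weighted set cover with toy occurrence sets and costs; the actual selection of Section~\ref{sec:algo} uses MDL code lengths as costs, but the greedy principle, and its lack of a global optimality guarantee, are the same (all names and values here are illustrative):
\begin{verbatim}
from collections import namedtuple

Candidate = namedtuple('Candidate', ['name', 'elements', 'cost'])

def greedy_weighted_set_cover(universe, candidates):
    # Repeatedly pick the candidate with the lowest cost per
    # newly covered element, until everything is covered.
    covered, solution = set(), []
    while covered != universe:
        usable = [c for c in candidates if c.elements - covered]
        if not usable:
            break  # the universe cannot be fully covered
        best = min(usable,
                   key=lambda c: c.cost / len(c.elements - covered))
        solution.append(best)
        covered |= best.elements
    return solution

# Toy usage: occurrences 0..7, hypothetical code-length costs.
U = set(range(8))
cands = [Candidate('cycle A', {0, 1, 2, 3}, 4.0),
         Candidate('cycle B', {3, 4, 5}, 2.5),
         Candidate('pattern C', {5, 6, 7}, 3.0)]
print([c.name for c in greedy_weighted_set_cover(U, cands)])
\end{verbatim}
Because each choice is only locally optimal, adding candidates can change earlier trade-offs, which is precisely why the final candidate set $\ccycle_{F}$ can occasionally yield worse compression than an intermediate one.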
\section{Conclusion}
\label{sec:conclusion}
In this paper, we propose a novel approach for mining periodic patterns with an MDL criterion, and an algorithm to put it into practice. Through our experimental evaluation, we show that we are able to extract sets of patterns that compress the input event sequences and to identify meaningful patterns. \VLongOnly{An analyst parsing a log might have some intuition about what periods are more meaningful, as well as about relations and dependencies between events, depending on the generating process. For instance, we expect days and weeks to strongly structure life tracking logs, while patterns with periods of, say, 21 hours or 17 days would be considered less intuitive. } How to take\VLongOnly{ such} prior knowledge into account is an interesting question to explore.\VLongOnly{ } Making the algorithm more robust to noise and more scalable, using for instance parallelisation, are pragmatic directions for future work, as is adding a visualisation tool to support the analysis and interpretation of the extracted patterns in the context of the event log sequence.
\mpara{Acknowledgements.} The authors thank Hiroki Arimura and Jilles Vreeken for valuable discussions. This work has been supported by Grenoble Alpes Metropole through the Nano2017 Itrami project, by the QCM-BioChem project (CNRS Mastodons) and by the Academy of Finland projects ``Nestor'' (286211) and ``Agra'' (313927).
\bibliographystyle{abbrv}
\section{Introduction}
Phrase grounding~\cite{plummer2015flickr30k,mao2016generation} is the task of localizing within an image a given natural language input phrase, as illustrated in Figure~\ref{fig:grounding_illustration}. This ability to link text and image content is a key component of many visual semantic tasks such as image captioning~\cite{fang2015captions,karpathy2015deep,johnson2016densecap}, visual question answering~\cite{VQA,lu2016hierarchical,xiong2016dynamic,yang2016stacked,fukui2016multimodal}, text-based image retrieval~\cite{gordo2016deep,radenovic2016cnn}, and robotic navigation~\cite{thomason2017guiding}. It is especially challenging as it requires a good representation of both the visual and textual domains and an effective way of linking them.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/grounding_overview2.pdf}
\caption{The phrase grounding task in the pointing game setting. Given the sentence on top and the image on the left, the goal is to point (illustrated by the stars here) to the correct location of each natural language query (colored text). An actual example of our method's results on Flickr30k.}
\label{fig:grounding_illustration}
\vspace{-0.2cm}
\end{figure}
On the visual side, most works exploit deep convolutional neural networks, but often rely on bounding box proposals~\cite{plummer2015flickr30k,rohrbach2016grounding,hu2016natural} or use a global feature of the image~\cite{fang2015captions}, limiting the localization ability and freedom of the method. On the textual side, methods rely on a closed vocabulary or try to train their own language model on small image-caption datasets~\cite{javed2018learning,zhang2018top,yeh2017interpretable,engilberge2018finding}. Finally, the mapping between the two modalities is often performed with a weak linear strategy~\cite{plummer2015flickr30k,xu2018attngan}. We argue that approaches in the literature have not fully leveraged the potential of the more powerful visual and textual models developed recently, and that there is room for developing more sophisticated representations and mapping approaches.

In this work, we propose to explicitly learn a non-linear mapping of the visual and textual modalities into a common space, and do so at different granularities for each domain. Indeed, different layers of a deep network encode each region of the image with gradually increasing levels of discriminativeness and context awareness; similarly, single words and whole sentences carry increasing levels of semantic meaning and context. This common space mapping is trained with weak supervision and exploited at test time with a multi-level multimodal attention mechanism, where a natural formalism for computing attention heatmaps at each level, attended features, and pertinence scores enables us to solve the phrase grounding task elegantly and effectively. We evaluate our model on three commonly used datasets in the literature of textual grounding and show that it sets a new state-of-the-art performance by a large margin.
Our contributions in this paper are as follows:
\begin{itemize}
\item We learn, with weak supervision, a non-linear mapping of visual and textual features to a common region-word-sentence semantic space, where the comparison between any two semantic representations can be performed with a simple cosine similarity;
\item We propose a multi-level multimodal attention mechanism, producing either word-level or sentence-level attention maps at different semantic levels, enabling us to choose the most representative attended visual feature among different semantic levels;
\item We set a new state-of-the-art performance on three commonly used datasets, and give detailed ablation results showing how each part of our method contributes to the final performance.
\end{itemize}
\section{Related works}
In this section, we give an overview of related works in the literature and discuss how our method differs from them.
\subsection{Grounding natural language in images}
The earliest works on textual grounding~\cite{plummer2015flickr30k,rohrbach2016grounding,hu2016natural} tried to tackle the problem by finding the right bounding box out of a set of proposals, usually obtained from pre-specified models~\cite{zitnick2014edge,uijlings2013selective}. The ranking of these proposals, for each text query, can be performed using scores estimated from a reconstruction~\cite{rohrbach2016grounding} or sentence generation~\cite{hu2016natural} procedure, or using distances in a common space~\cite{plummer2015flickr30k}. However, relying on a fixed set of pre-defined concepts and proposals may not be optimal, and the quality of the bounding boxes defines an upper bound~\cite{hu2016natural,wang2016structured} on the performance that can be achieved. Therefore, several methods~\cite{chen2017query,zhaoweakly} have integrated the proposal step in their framework to improve the bounding box quality. These works often operate in a fully supervised setting~\cite{chen2018msrc,yeh2017interpretable,yu2018mattnet,fukui2016multimodal,chen2017query}, where the mapping between sentences and bounding boxes has to be provided at training time, which is not always available and is costly to gather. Furthermore, methods based on bounding boxes often extract features separately for each bounding box~\cite{hu2016natural,chen2018knowledge,wang2016structured}, inducing a high computational cost.

Therefore, some works~\cite{ramanishka2017top,javed2018learning,zhang2018top,xiao2017weakly,yeh2018unsupervised} choose not to rely on bounding boxes and propose to formalize the localization problem as finding a spatial heatmap for the referring expression. This setting is mostly weakly supervised, where at training time only the image and the text (describing either the whole image or some parts of it) are provided, but not the corresponding bounding box or segmentation mask for each description. This is the more general setting we address in this paper. The top-down approaches~\cite{ramanishka2017top,zhang2018top} and the attention-based approach~\cite{javed2018learning} learn to produce a heatmap for each word of a vocabulary. At test time, all these methods produce the final heatmap by averaging the heatmaps of all the words in the query that exist in the vocabulary.
Several grounding works have also explored the use of additional knowledge, such as image~\cite{wang2016structured} and linguistic~\cite{xiao2017weakly,plummer2017phrase} structures, phrase context~\cite{chen2018msrc} and the predictions of pre-trained visual models~\cite{chen2018knowledge,yeh2018unsupervised}.

In contrast to many works in the literature, we do not use a pre-defined set of image concepts or words in our method. We instead rely on visual feature maps and a character-based language model with contextualized embeddings, which can handle unseen words by considering their context in the sentence.
\subsection{Mapping to common space}
It is a common approach to extract visual and language features independently and fuse them before the prediction~\cite{engilberge2018finding, chen2018knowledge,chen2017query}. Current works usually apply a multi-layer perceptron (MLP)~\cite{chen2017query,chen2018knowledge}, element-wise multiplication~\cite{hendricks2018grounding}, or cosine similarity~\cite{engilberge2018finding} to combine representations from different modalities. Other methods have used Canonical Correlation Analysis (CCA)~\cite{plummer2017phrase, plummer2015flickr30k}, which finds linear projections that maximize the correlation between projected vectors from the two views of heterogeneous data. Fukui et al.~\cite{fukui2016multimodal} introduced the Multimodal Compact Bilinear (MCB) pooling method, which fuses the visual and language feature vectors using a compressed feature obtained from their outer product. Attention methods~\cite{xu2018attngan,nguyen2018improved} can also measure the matching of an image-sentence feature pair. We use non-linear mappings of both visual features (at multiple semantic levels) and textual embeddings (both contextualized word and sentence embeddings) separately, and use multi-level attention with a multimodal loss to learn the weights of those mappings.
\subsection{Attention mechanisms}
Attention has proved its effectiveness in many visual and language tasks~\cite{khademi2018image,anderson2018bottom,chen2017sca,yang2016stacked,xu2015show}; it is designed to capture a better representation of image-sentence pairs based on their interactions. The Accumulated Attention method~\cite{deng2018visual} proposes to estimate attention on sentences, objects and visual feature maps in an iterative fashion, where at each iteration the attention of the other two modalities is exploited as guidance. A dense co-attention mechanism is explored in~\cite{nguyen2018improved} to solve the visual question answering task by using a fully symmetric architecture between visual and language representations. In their attention mechanism, a dummy location is added to the attention map, along with a softmax, for the cases where there is no region or word the model should attend to. In AttnGAN~\cite{xu2018attngan}, a deep attention multimodal similarity model is proposed to compute a fine-grained image-text matching loss. In contrast to these works, we remove the softmax on top of the attention maps to let the model decide which words and regions are related to each other, guided by the multimodal loss. Since we map the visual features to a multi-level visual representation, we give the model the freedom to choose, for either the sentence or each word, any location at any level. In other words, each word or sentence can choose which level of representation (and which region in that representation) to attend to. We directly calculate the attention map by cosine similarity in our common semantic space.
We show that this approach significantly outperforms all state-of-the-art approaches on three commonly used datasets, setting a new state-of-the-art performance.
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{figs/highleveloverview_onecol2.pdf}
\caption{Overview of our method: the textual input is processed with a pre-trained text model followed by a non-linear mapping to the common semantic space. Similarly for the image input, we use a pre-trained visual model to extract visual feature maps at multiple levels and learn a non-linear mapping for each of them to the common semantic space. A multi-level attention mechanism followed by a feature level selection produces the pertinence score between the image and the sentence. We train our model using only the weak supervision of image-sentence pairs. }
\label{fig:overview}
\end{figure}
\section{Method}
In this section, we describe our method (illustrated in Figure~\ref{fig:overview}) for addressing the textual grounding task and elaborate on each part in detail. In Section~\ref{sec:feature_ext_map}, we explain how we extract multi-level visual features from an image and word/sentence embeddings from the text, and then describe how we map them to a common space. In Section~\ref{sec:mlmm_attention} we describe how we calculate the multi-level multimodal attention map and the attended visual feature for each word/sentence. Then, in Section~\ref{sec:feature_level} we describe how we choose the most representative visual feature level for the given text. Finally, in Section~\ref{sec:mm_loss} we define a multimodal loss to train the whole model with weak supervision.
\subsection{Feature Extraction and Common Space}
\label{sec:feature_ext_map}
\paragraph{Visual Feature Extraction}\hspace{-0.9em}: In contrast to many vision tasks where the last layer of a pre-trained CNN is used as the visual representation of an image, we use feature maps from different layers and map them separately to a common space to obtain a multi-level set of feature maps to be compared with text. Intuitively, using different levels of visual representations is necessary for covering a wide range of visual concepts and patterns \cite{lin2017feature,yosinski2015understanding,zeiler2014visualizing}. Thus, we extract $L=4$ sets of feature maps from $L$ different levels of a visual network, upsample them by bi-linear interpolation\footnote{as transposed convolution produces checkerboard artifacts~\cite{odena2016deconvolution}} to a fixed resolution $M \times M$ for all the $L$ levels, and then apply 3 layers of $1 \times 1$ convolutions (with LeakyReLU~\cite{maas2013rectifier}) with $D$ filters to map them into equal-sized feature maps. Finally, we stack these feature maps and space-flatten them to obtain an overall image representation tensor $V \in \mathbb{R}^{N \times L \times D}$, with $N = M \times M$. This tensor is finally normalized by the $l_2$-norm of its last dimension. An overview of the feature extraction and common space mapping for the image can be seen in the left part of Figure~\ref{fig:mapping}. In this work, we use VGG~\cite{Simonyan14c} as a baseline for fair comparison with other works~\cite{fang2015captions,xiao2017weakly,javed2018learning}, and the state-of-the-art CNN PNASNet-5~\cite{liu2017progressive} to study the ability of our model to exploit this more powerful visual model.
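For concreteness, the visual mapping just described can be sketched as follows in PyTorch (a minimal illustrative sketch, not our exact implementation: the backbone is taken as given, and the class name, the placement of the non-linearities and the default values are assumptions based on the description above):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualCommonSpace(nn.Module):
    # Maps L backbone feature maps to the common D-dimensional space.
    def __init__(self, in_channels, D=1024, M=18, alpha=0.25):
        super().__init__()
        self.M = M
        self.mappings = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, D, kernel_size=1), nn.LeakyReLU(alpha),
                nn.Conv2d(D, D, kernel_size=1), nn.LeakyReLU(alpha),
                nn.Conv2d(D, D, kernel_size=1))
            for c in in_channels)

    def forward(self, feature_maps):
        # feature_maps: list of L tensors of shape (B, C_l, H_l, W_l)
        levels = []
        for fmap, mapping in zip(feature_maps, self.mappings):
            x = F.interpolate(fmap, size=(self.M, self.M),
                              mode='bilinear', align_corners=False)
            x = mapping(x)                                # (B, D, M, M)
            levels.append(x.flatten(2).permute(0, 2, 1))  # (B, N, D)
        V = torch.stack(levels, dim=2)                    # (B, N, L, D)
        return F.normalize(V, p=2, dim=-1)  # l2-norm on last dimension
\end{verbatim}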
\begin{figure*}[tb]
\centering
\includegraphics[width=1.02\columnwidth]{figs/visual_mapping3.pdf}
\hfill
\includegraphics[width=0.94\columnwidth]{figs/textual_mapping4.pdf}
\caption{Left: we choose feature maps of different convolutional blocks of a CNN model, resize them to the same spatial dimensions using bi-linear interpolation, and map them to feature maps of the same size. Right: word and sentence embeddings mapped to the common space from the pre-trained ELMo~\cite{peters2018deep} model. The green pathway is for word embedding, the red pathway for sentence embedding. All the orange boxes ($1 \times 1$ convolutional layers of the visual mapping, linear combination and the two sets of fully connected layers of the textual mapping) are the trainable parameters of our projection to the common space.}
\label{fig:mapping}
\end{figure*}
\paragraph{Textual Feature Extraction}\hspace{-0.9em}: State-of-the-art works in grounding use a variety of approaches for textual feature extraction. Some use LSTMs or BiLSTMs pre-trained on big datasets (e.g. Google 1 Billion~\cite{chelba2014one}) based on either word2vec~\cite{mikolov2013linguistic} or GloVe~\cite{pennington2014glove} representations. Some train a BiLSTM solely on image-caption datasets (mostly MSCOCO) and argue that it is necessary to train it from scratch to distinguish between visual concepts which may not be distinguishable in language (e.g. red and green are different in vision but similar in language, as they are both colors) \cite{nguyen2018improved,xu2018attngan,javed2018learning,xiao2017weakly,engilberge2018finding,hendricks2018grounding,zhaoweakly,plummer2015flickr30k,yu2018mattnet,deng2018visual}. These works either use the recurrent network outputs at each step as word-level representations, or their last output (in each direction for a BiLSTM) as the sentence-level representation, or a combination of both. In this paper, however, we use ELMo~\cite{peters2018deep}, a 3-layer network pre-trained on 5.5B tokens, which calculates word representations on the fly (based on a CNN over characters, similar to \cite{jozefowicz2016exploring,zhang2015character}) and then feeds them to 2 layers of BiLSTMs which produce contextualized representations. Thus, for a given sentence, the model outputs three representations for each token (split by whitespace). We take a linear combination of the three representations and feed it to 2 fully connected layers (with shared weights among words), each with $D$ nodes, with LeakyReLU as the non-linearity between the layers, to obtain each word representation $\mathbf{s}_t$ (green pathway in the right part of Figure~\ref{fig:mapping}). The resulting word-based text representation for an entire sentence is a tensor $\mathbf{S} \in \mathbb{R}^{T \times D}$ built by stacking the word representations $\mathbf{s}_t$. The sentence-level text representation is calculated by concatenating the last outputs of the BiLSTMs in each direction. Similarly, we apply a linear combination to the two sentence-level representations and map the result to the common space by feeding it to 2 fully connected layers of $D$ nodes, producing the sentence representation $\overline{\mathbf{s}}$ (red pathway in the right part of Figure~\ref{fig:mapping}). The word tensor and the sentence vector are normalized by the $l_2$-norm of their last dimension before being fed to the multimodal attention block.
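The textual side admits a similarly compact sketch (again illustrative: the ELMo outputs are taken as given inputs, and the softmax-normalized combination weights, names and dimensions are assumptions):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextualCommonSpace(nn.Module):
    # Maps ELMo word and sentence representations to the common space.
    def __init__(self, elmo_dim=1024, D=1024, alpha=0.25):
        super().__init__()
        # learnable weights for combining the 3 ELMo layers
        self.layer_weights = nn.Parameter(torch.zeros(3))
        self.word_fc = nn.Sequential(
            nn.Linear(elmo_dim, D), nn.LeakyReLU(alpha), nn.Linear(D, D))
        self.sent_fc = nn.Sequential(
            nn.Linear(elmo_dim, D), nn.LeakyReLU(alpha), nn.Linear(D, D))

    def forward(self, elmo_layers, sent_repr):
        # elmo_layers: (B, 3, T, elmo_dim); sent_repr: (B, elmo_dim)
        w = torch.softmax(self.layer_weights, dim=0).view(1, 3, 1, 1)
        words = (w * elmo_layers).sum(dim=1)               # (B, T, elmo_dim)
        S = F.normalize(self.word_fc(words), p=2, dim=-1)  # (B, T, D)
        s_bar = F.normalize(self.sent_fc(sent_repr), p=2, dim=-1)
        return S, s_bar                                    # words, sentence
\end{verbatim}
Both outputs are $l_2$-normalized, so the attention maps of the next section reduce to plain dot products with the visual tensor $V$.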
\subsection{Multi-Level Multimodal Attention Mechanism}
\label{sec:mlmm_attention}
Given the image and sentence, our task is to estimate the correspondences between spatial regions ($n$) in the image at different levels ($l$), and words in the sentence at different positions ($t$). We seek to estimate a correspondence measure, $H_{n,t,l}$, between each word and each region at each level. We define this correspondence by the cosine similarity between word and image region representations at different levels in the common space:
\begin{equation}
\label{eq:heatmap_multi}
H_{n,t,l} = \max(0, \langle \mathbf{s}_{t}, \mathbf{v}_{n,l}\rangle).
\end{equation}
Thus, $\mathbf{H} \in \mathbb{R}^{N \times T \times L}$ represents a multi-level multimodal attention map which can simply be used for calculating either visual or textual attended representations. We apply ReLU to the attention map to zero-out dissimilar word-visual region pairs, and simply avoid applying softmax on any dimension of the heatmap tensor. Note that this choice is very different in spirit from the commonly used approach of applying softmax to attention maps~\cite{xu2015show,xu2016ask,deng2018visual,nguyen2018improved,javed2018learning,xu2018attngan,ramanishka2017top}. Indeed, for irrelevant image-sentence pairs, the attention maps would be almost all zeros, while the softmax process would always force attention to be a distribution over the image/words summing to $1$. Furthermore, a group of words shaping a phrase could have the same attention area, which is again hard to achieve considering the competition among regions/words induced by applying softmax on the heatmap. We will analyze the influence of this choice experimentally in our ablation study.

Given the heatmap tensor, we calculate the attended visual feature for the $l$-th level and $t$-th word as
\begin{equation}
\mathbf{a}_{t,l} = \frac{\sum_{n=1}^{N}H_{n,t,l}\mathbf{v}_{n,l}}{\norm[\Big]{\sum_{n=1}^{N}H_{n,t,l}\mathbf{v}_{n,l}}_{2}},
\end{equation}
\noindent which is basically a weighted average over the visual representations of the $l$-th level, with the attention heatmap values as weights. In other words, $\mathbf{a}_{t,l}$ is a vector in the hyperplane spanned by a subset of visual representations in the common space, this subset being selected based on the heatmap tensor. An overview of our multi-level multimodal attention mechanism for calculating the attended visual feature can be seen in Figure~\ref{fig:attention}. In the sequel, we describe how we use this attended feature to choose the most representative hyperplane, and calculate a multimodal loss to be minimized using the weak supervision of image-sentence relevance labels.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/attention4.pdf}
\caption{For each word feature $\mathbf{s}_t$, we compute an attention map and an attended visual feature $\mathbf{a}_{t,l}$ at each level $l$. We choose the level that maximizes similarity between the attended visual feature and the textual feature in the common space to produce the pertinence score $R_t$.
This is equivalent to finding the hyperplane (spanned by each level's visual feature vectors in the common space) that best matches the textual feature.}
\label{fig:attention}
\end{figure}
\subsection{Feature Level Selection}
\label{sec:feature_level}
Once we have the attended visual feature, we calculate the word-image pertinence score at level $l$ using the cosine similarity between each word and the attended visual feature:
\begin{equation}
R_{t,l} = \langle \mathbf{a}_{t,l}, \mathbf{s}_{t}\rangle.
\end{equation}
Intuitively, each visual feature map level can carry different semantic information; thus, for each word, we propose to apply a hard level attention and keep the score from the level contributing the most:
\begin{equation}
\label{word_pertinence}
R_{t} = \max_{l}{R_{t,l}}.
\end{equation}
This procedure can be seen as finding the projection of the textual embedding on the hyperplanes spanned by visual features from different levels, and choosing the one that maximizes this projection. Intuitively, the chosen hyperplane can be a better representation for the visual feature space attended by word $t$. This can be seen in the top central part of Figure~\ref{fig:overview}: selecting the maximum pertinence score over levels is equivalent to selecting the hyperplane with the smallest angle with the $t$-th word representation (or the highest similarity between attended visual feature and textual feature), thus selecting the most representative hyperplane (or visual feature level).

Once we have the best word-image pertinence score, similarly to~\cite{xu2018attngan} and inspired by the minimum classification error~\cite{juang1997minimum}, we compute the overall (word-based) sentence-image pertinence score as follows:
\begin{equation}
\label{w_matching}
R_w(S,I) = \log\bigg( {\Big( \sum_{t=0}^{T-1}\exp{(\gamma_1R_t)} \Big)}^{\frac{1}{\gamma_1}} \bigg).
\end{equation}
Similarly, for the sentence we can repeat the same procedure (except that we no longer need Eq.~(\ref{w_matching})) to find the attention map, the attended visual feature and the sentence-image pertinence score, respectively:
\begin{subequations}
\setlength{\jot}{7pt}
\begin{align}
&H_{n,l}^{s} = \max(0,\langle \bar{\mathbf{s}}, \mathbf{v}_{n,l}\rangle)\label{rs} \\
&\mathbf{a}_{l}^{s} = \sum_{n=1}^{N}H_{n,l}^{s}\mathbf{v}_{n,l} \\
&R_{s,l} = \langle \mathbf{a}_{l}^{s}, \bar{\mathbf{s}}\rangle \\
&R_s(S,I) = \max_{l}{ R_{s,l}}
\end{align}
\end{subequations}
\subsection{Multimodal Loss}
\label{sec:mm_loss}
In this paper, we only use weak supervision in the form of binary image-caption relevance. Thus, similarly to~\cite{fang2015captions,huang2013learning,xu2018attngan}, we train the network on a batch of image-caption pairs, $\{(S_b,I_b)\}_{b=1}^{B}$, and force it to have a high sentence-image pertinence score for related pairs and a low score for unrelated pairs.
Thus, considering a pertinence score $R_x$ (either $R_w$ or $R_s$), we calculate the posterior probability of the sentence $S_b$ being matched with image $I_b$ by applying competition among all sentences in the batch:
\begin{equation}\label{prob1}
P_x(S_b|I_b) = \frac{\exp(\gamma_2R_x(S_b,I_b))}{\sum_{b'}^{B}{\exp(\gamma_2R_x(S_{b'},I_b))}}
\end{equation}
Similarly, the posterior probability of $I_b$ being matched with $S_b$ can be calculated using:
\begin{equation}\label{prob2}
P_x(I_b|S_b) = \frac{\exp(\gamma_2R_x(S_b,I_b))}{\sum_{b'}^{B}{\exp(\gamma_2R_x(S_b,I_{b'}))}}
\end{equation}
Then, similarly to~\cite{fang2015captions, xu2018attngan}, we can define the loss using the negative log posterior probability over relevant image-sentence pairs as follows:
\begin{equation}
L^{x} = -\sum_{b=1}^{B}\Big(\log{P_x(S_b|I_b)}+\log{P_x(I_b|S_b)}\Big)
\end{equation}
As we want to train a common semantic space for both words and sentences, we combine the loss $L^w$ (that can be computed based on the word-based relevance $R_w$) and the sentence loss $L^s$ (obtained using $R_s$) to define our final loss $L$ as
\begin{equation}
L = L^{w} + L^{s}.
\end{equation}
This loss is minimized over a batch of $B$ images along with their related sentences. We found in preliminary experiments on held-out validation data that the values $\gamma_1=5$, $\gamma_2=10$ work well, and we keep them fixed for our experiments. In the next section, we evaluate our proposed model on different datasets and present an ablation study to justify the choices made in our model.
\section{Experiments}
In this section, we first present the datasets used in our experimental setup. We then evaluate our approach, comparing with the state of the art, and further present ablation studies showing the influence of each step of our method.
\subsection{Datasets}
\label{datasets}
\paragraph{MSCOCO 2014}\hspace{-0.6em}\cite{lin2014microsoft} consists of 82,783 training images and 40,504 validation images. Each image is associated with five captions describing the image. We use the train split of this dataset for training our model.
\vspace{-0.5em}
\paragraph{Flickr30k Entities}\hspace{-0.6em}\cite{plummer2015flickr30k} contains 224k phrases describing localized bounding boxes in $\sim$31k images, each described by 5 captions. Images and captions come from Flickr30k~\cite{young2014image}. We use 1k images from the test split of this dataset for evaluation.
\vspace{-0.5em}
\paragraph{VisualGenome} \hspace{-0.6em}\cite{krishna2017visual} contains 77,398 images in the training set, and a validation and test set of 5000 images each. Each image consists of multiple bounding box annotations and a region description associated with each bounding box. We use the train split of this dataset to train our models and use its test split for evaluation.
\vspace{-0.5em}
\paragraph{ReferIt}\hspace{-0.6em} consists of 20,000 images from the IAPR TC-12 dataset~\cite{grubinger2006iapr} along with 99,535 segmented image regions from the SAIAPR-12 dataset~\cite{chen2017query}. Images are associated with descriptions for the entire image as well as localized image regions, collected in a two-player game~\cite{kazemzadeh2014referitgame}, providing approximately 130k isolated entity descriptions. In our work, we only use the unique descriptions associated with each region. We use a split similar to~\cite{hu2016natural}, which contains 9k training, 1k validation, and 10k test images. We use the test split of this dataset to evaluate our models.
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|}
\cline{4-6}
\multicolumn{3}{c}{} & \multicolumn{3}{|c|}{Test Accuracy} \\ \hline
Method & Settings & Training & VG & Flickr30k & ReferIt \\ \hline
Baseline & Random & - & 11.15 & 27.24 & 24.30 \\
Baseline & Center & - & 20.55 & 49.20 & 30.40 \\ \hline
TD~\cite{zhang2018top} & Inception-2 & VG & 19.31 & 42.40 & 31.97 \\
SSS~\cite{javed2018learning} & VGG & VG & 30.03 & 49.10 & 39.98 \\
Ours & BiLSTM+VGG & VG & 50.18 & 57.91 & \textbf{62.76} \\
Ours & ELMo+VGG & VG & 48.76 & 60.08 & 60.01 \\
Ours & ELMo+PNASNet & VG & \textbf{55.16} & \textbf{67.60} & 61.89 \\ \hline
CGVS~\cite{ramanishka2017top} & Inception-3 & MSR-VTT & - & 50.10 & - \\ \hline
FCVC~\cite{fang2015captions} & VGG & MSCOCO & 14.03 & 29.03 & 33.52 \\
VGLS~\cite{xiao2017weakly} & VGG & MSCOCO & 24.40 & - & - \\
Ours & BiLSTM+VGG & MSCOCO & 46.99 & 53.29 & 47.89 \\
Ours & ELMo+VGG & MSCOCO & 47.94 & 61.66 & 47.52 \\
Ours & ELMo+PNASNet & MSCOCO & \textbf{52.33} & \textbf{69.19} & \textbf{48.42} \\ \hline
\end{tabular}
}
\caption{Phrase localization accuracy (pointing game) on Flickr30k, ReferIt and VisualGenome (VG) compared to state-of-the-art methods.}
\label{tab:res-pointing-3datasets}
\vspace{-0.25cm}
\end{table}
\subsection{Experimental Setup}
\label{sec:experimental_setup}
We use a batch size of $B=32$, where, for a batch of image-caption pairs, each image (caption) is only related to one caption (image). Image-caption pairs are sampled randomly with a uniform distribution. We train the network for 20 epochs with the Adam optimizer~\cite{kingma2014adam} with a learning rate of $0.001$, which is divided by 2 at the 10th epoch and again at the 15th epoch. We use $D=1024$ for the common space mapping dimension and $\alpha=0.25$ for the LeakyReLU in the non-linear mappings. We regularize the weights of the mappings with $l_2$ regularization with a coefficient of $0.0005$. For VGG, we take the outputs from \{conv4\_1, conv4\_3, conv5\_1, conv5\_3\} and map them to semantic feature maps of dimension $18\times18\times1024$; for PNASNet, we take the outputs from \{Cell 5, Cell 7, Cell 9, Cell 11\} and map them to features of dimension $19\times19\times1024$. Both the visual and textual network weights are fixed during training; only the common space mapping weights are trainable. In the ablation study, we use 10 epochs without dividing the learning rate, while the rest of the settings remain the same. We follow the same procedure as in~\cite{javed2018learning,johnson2016densecap,plummer2015flickr30k,xiao2017weakly} for cleaning and pre-processing the datasets and use the same train/test splits for fair comparison in our evaluations.
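In PyTorch terms, this optimization schedule corresponds to the following sketch (the \texttt{model} and \texttt{loader} objects are hypothetical, and using Adam's \texttt{weight\_decay} is only one possible way to implement the $l_2$ regularization of the mapping weights):
\begin{verbatim}
import torch

# `model` holds only the trainable common-space mappings and returns
# the loss L = L^w + L^s for a batch of B = 32 image-caption pairs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10, 15], gamma=0.5)  # halve the lr twice

for epoch in range(20):
    for sentences, images in loader:
        loss = model(sentences, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{verbatim}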
\subsection{Phrase Localization Evaluation}
\label{sec:eval}
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{2-7}
\multicolumn{1}{c}{} & \multicolumn{3}{|c|}{pointing game accuracy} & \multicolumn{3}{|c|}{attention correctness} \\ \hline
& \cite{ramanishka2017top} & Ours & Ours & \cite{ramanishka2017top} & Ours & Ours \\
Class & Inc.3 & VGG & PNAS & Inc.3 & VGG & PNAS \\ \hline
bodyparts & 0.194 & 0.408 & \bf{0.449} & 0.155 & 0.299 & \bf{0.373} \\ \hline
animals & 0.690 & 0.867 & \bf{0.876} & 0.657 & 0.701 & \bf{0.826} \\ \hline
people & 0.601 & 0.673 & \bf{0.756} & 0.570 & 0.562 & \bf{0.724} \\ \hline
instrument & 0.458 & 0.286 & \bf{0.575} & 0.502 & 0.297 & \bf{0.555} \\ \hline
vehicles & 0.645 & 0.781 & \bf{0.838} & 0.615 & 0.554 & \bf{0.738} \\ \hline
scene & 0.667 & \bf{0.685} & 0.682 & 0.582 & 0.596 & \bf{0.639} \\ \hline
other & 0.427 & 0.502 & \bf{0.598} & 0.348 & 0.424 & \bf{0.535} \\ \hline
clothing & 0.360 & 0.472 & \bf{0.583} & 0.345 & 0.330 & \bf{0.473} \\ \hline \hline
average & 0.501 & 0.617 & \bf{0.692} & 0.473 & 0.508 & \bf{0.639} \\ \hline
\end{tabular}
}
\caption{Category-wise pointing game accuracy and attention correctness on Flickr30k Entities.}
\label{tab:results_category}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figs/150_complete2.pdf}\\
\caption{Image-sentence pair from Flickr30k with four queries (colored text) and corresponding heatmaps and selected max value (stars).}
\label{fig:qualitative_flickr30k_big}
\vspace{-0.15cm}
\end{figure*}
As stated in Section~\ref{datasets}, we train our model on the train splits of MSCOCO and Visual Genome (VG), and evaluate it on the test splits of Flickr30k, ReferIt, and VG. At test time, for Flickr30k, we feed the complete sentence to the model and, for each query, take the weighted average of the attention heatmaps of its words, with the word-image pertinence scores from Eq.~(\ref{word_pertinence}) as weights. For ReferIt and Visual Genome, we treat each query as a single sentence and take its sentence-level attention heatmap as the final query pointing heatmap. Once the pointing heatmaps are calculated, we find the max location (as the pointing location for the given query) and evaluate the model by the pointing game accuracy: $\frac{\#hit}{\#hit+\#miss}$.

Pointing game accuracy results can be found in Table~\ref{tab:res-pointing-3datasets} for the Flickr30k, ReferIt and Visual Genome datasets. The results show that our method significantly outperforms all state-of-the-art methods in all conditions and on all datasets. For fair comparison with~\cite{javed2018learning,fang2015captions,xiao2017weakly}, we used a VGG16 visual model and replaced the pre-trained BiLSTM layers of ELMo with a single trainable BiLSTM. This model (BiLSTM+VGG) still gives an absolute pointing game accuracy improvement of $20.15\%$ for VisualGenome, $7.81\%$ for Flickr30k, and $23.28\%$ for ReferIt, corresponding to relative improvements of $67.09\%$, $15.59\%$, and $56.98\%$, respectively. Results with the more recent PNASNet model are even better, especially for Flickr30k and VisualGenome. To get a deeper understanding of our model, we first report in Table~\ref{tab:results_category} category-wise pointing game accuracy and attention correctness~\cite{liu2017attention} (the percentage of the heatmap falling into the ground-truth bounding box), and compare with the state-of-the-art method~\cite{ramanishka2017top} on Flickr30k.
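Concretely, the pointing game check described above amounts to the following minimal sketch (NumPy; assuming heatmaps already upsampled to image resolution and ground-truth boxes given as corner coordinates, both hypothetical inputs):
\begin{verbatim}
import numpy as np

def pointing_game_hit(heatmap, gt_boxes):
    # A query counts as a hit if the argmax of its heatmap falls
    # inside any ground-truth box (x0, y0, x1, y1) for that query.
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in gt_boxes)

# pointing game accuracy = #hit / (#hit + #miss) over all queries
\end{verbatim}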
We observe that our method obtains a higher performance on almost all categories, even when VGG16 is used as the visual backbone. The model based on PNASNet consistently outperforms the state of the art on all categories on both metrics. We further examine the level selection rates for different types of queries and report them in Table \ref{tab:level_selection}. It shows that the 3rd level dominates the selection, while the 4th level is also important for several categories such as scene and animals. The 1st level is exploited mostly for the animals and people categories. The full-sentence selection relies mostly on the 3rd level as well, while for some sentences the 4th level is selected. This demonstrates the ability of the proposed method to select the right level of representation.
\begin{table}[tb]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c||c||c|}
\cline{2-11}
\multicolumn{1}{c}{} & \multicolumn{10}{|c|}{Selection Rate ($\%$)} \\ \hline
\shortstack{\\Level / \\PNASNet \\ Layers} & \rotatebox[origin=c]{270}{bodyparts} & \rotatebox[origin=c]{270}{animals} & \rotatebox[origin=c]{270}{people} & \rotatebox[origin=c]{270}{instrument} & \rotatebox[origin=c]{270}{vehicles} & \rotatebox[origin=c]{270}{scene} & \rotatebox[origin=c]{270}{other} & \rotatebox[origin=c]{270}{clothing} & \rotatebox[origin=c]{270}{average} & \rotatebox[origin=c]{270}{sentence} \\ \hline
1 / Cell 5 & 2.6 & 10.4 & 7.5 & 0.9 & 2.0 & 5.4 & 5.4 & 5.3 & 6.3 & 0.7\\ \hline
2 / Cell 7 & 0.1 & 2.0 & 4.2 & 0.0 & 1.7 & 2.5 & 0.9 & 0.3 & 2.5 & 0.05\\ \hline
3 / Cell 9 & 85.9 & 48.4 & 64.6 & 88.6 & 68.3 & 49.5 & 70.9 & 86.1 & 66.5 & 86.51\\ \hline
4 / Cell 11 & 11.4 & 39.2 & 23.7 & 10.5 & 27.9 & 42.6 & 22.8 & 8.3 & 24.7 & 12.7\\ \hline
\end{tabular}
}
\caption{Level selection rate for different layers of PNASNet on different categories in Flickr30k}
\label{tab:level_selection}
\vspace{-0.15cm}
\end{table}
\subsection{Ablation Study}
In this section, we trained multiple configurations of our approach on MSCOCO, with a PNASNet visual model, to better understand which aspects of our method affect the performance positively or negatively. We report evaluation results on Flickr30k in Table~\ref{tab:ablation}. Results are sorted by performance to show the most successful combinations.
\begin{table}[tb]
\centering
\resizebox{0.85\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\cline{2-8}
\multicolumn{1}{c|}{} & SA & ELMo & NLT & NLV & WL & SL & Acc.\\ \hline
1 & \xmark & \cmark & \cmark & \cmark & ML & ML & 67.73 \\ \hline
2 & \xmark & \cmark & \cmark & \cmark & M & L & 62.67 \\ \hline
3 & \xmark & \xmark & \cmark & \cmark & ML & ML & 61.13 \\ \hline
4 & \xmark & \cmark & \xmark & \cmark & M & L & 58.40 \\ \hline
5 & \xmark & \cmark & \cmark & \xmark & M & L & 56.92 \\ \hline
6 & \xmark & \xmark & \cmark & \cmark & M & L & 56.42 \\ \hline
7 & \xmark & \cmark & \xmark & \xmark & M & L & 54.75 \\ \hline
8 & \cmark & \cmark & \cmark & \cmark & M & L & 47.20 \\ \hline
9 & \cmark & \xmark & \xmark & \xmark & M & L & 44.83 \\ \hline
\end{tabular}
}
\caption{Ablation study results on Flickr30k using PNASNet.
SA: Softmax Attention; NLT: Non-Linear Text mapping; NLV: Non-Linear Visual mapping; WL: Word-Layer; SL: Sentence-Layer; Acc.: pointing game accuracy.}
\label{tab:ablation}
\vspace{-0.1cm}
\end{table}
We first evaluate the efficacy of using multi-level feature maps (ML) with level selection, compared to a fixed choice of visual layer (M: middle layer, L: last layer), for the comparison with word and sentence embeddings (WL and SL). Specifically, we used \textit{Cell~7} as the middle layer and \textit{Cell~11} as the last layer, to be compared with the word and sentence embeddings in Eq.~(\ref{eq:heatmap_multi}) and Eq.~(\ref{rs}), respectively. The results in rows $1,2$ show that using the level-attention mechanism based on multi-level feature maps significantly improves the performance over a single visual-textual feature comparison.

We then study the effect of the non-linear mapping into the common space for the text and visual features (NLT and NLV). Comparing rows $2,4,5,7$, we see that the non-linear mappings in our model are crucial, and replacing any of them with a linear one significantly degrades the performance. We can also see that the non-linear mapping seems more important on the visual side, but the best results are obtained with both text and visual non-linear mappings.

We further compare the use of ELMo for text embedding with the commonly used approach of training a BiLSTM. Specifically, we simply replaced the pre-trained BiLSTMs of the ELMo model with a trainable BiLSTM (on top of the word embeddings of ELMo), and directly fed the BiLSTM outputs to the attention model. The results in rows $1,3$ and $2,6$ show the importance of using a strong contextualized text embedding, as the performance drops significantly without it. We also study the use of softmax on the heatmaps: comparing rows $2,8$, we see that applying softmax has a very negative effect on the performance. This makes sense since, as elaborated in Section~\ref{sec:mlmm_attention}, this commonly used approach unnecessarily forces the heatmap to be a distribution over either words or regions. Row $9$ corresponds to a simple baseline on par with the state of the art, showing how much improvement can be gained by not using softmax, by our multi-level non-linear common space representation and attention mechanism, and by a powerful contextualized textual embedding.
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/104_complete.pdf}\\
\includegraphics[width=\columnwidth]{figs/522_complete.pdf}\\
\includegraphics[width=\columnwidth]{figs/780_complete.pdf}\\
\includegraphics[width=\columnwidth]{figs/9831_complete.pdf}\\
\includegraphics[width=\columnwidth]{figs/9828_complete.pdf}\\
\caption{Some image-sentence pairs from Flickr30k, with two queries (colored text) and corresponding heatmaps and selected max value (stars).}
\label{fig:qualitative_flickr30k}
\vspace{-0.15cm}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\columnwidth]{figs/bad_1.pdf}\\
\includegraphics[width=\columnwidth]{figs/bad_2.pdf}\\
\caption{Some failure cases of our model. The model makes some semantically reasonable mistakes in pointing to regions. }
\label{fig:qual_bad}
\vspace{-0.25cm}
\end{figure}
\subsection{Qualitative results}
We give in Figures~\ref{fig:qualitative_flickr30k_big}, \ref{fig:qualitative_flickr30k}, and~\ref{fig:qual_bad} some examples of heatmaps generated for queries from the Flickr30k dataset.
Specifically, we upsample the heatmaps from their original size of $18 \times 18$ (as we use the VGG backbone for these visualizations) to the original image size by bilinear interpolation. We can observe that the max (pointing) locations in the heatmaps point to the correct location in the image, and that the heatmaps often capture the relevant parts of the image for each query. The model can deal with persons, context and objects even if they are described with very specific words (e.g.\ ``bronco''), which shows the power of using a character-based contextualized text embedding. Finally, Figure~\ref{fig:qual_bad} shows some localization failures, involving concepts that are semantically close and challenging capture conditions. For example, for the query ``window'', which is overexposed in the image, the frames are mistakenly pointed to.
\section{Conclusion}
In this paper, we present a weakly supervised method for phrase localization which relies on a multi-level attention mechanism on top of multi-level visual semantic features and contextualized text embeddings. We non-linearly map both the contextualized text embeddings and the multi-level visual semantic features to a common space and calculate a multi-level attention map for choosing the best representative visual semantic level for the text and for each word in it. We show that this combination sets a new state-of-the-art performance and provide quantitative results showing the importance of (1) an appropriate common space mapping, (2) strong contextualized text embeddings, and (3) the freedom of each word to choose the most relevant visual semantic level. Future work lies in studying other applications such as Visual Question Answering and Image Captioning.
\section*{Acknowledgment}
This work was supported by the U.S. DARPA AIDA Program No. FA8750-18-2-0014. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Riordan matrices are infinite lower triangular matrices defined by the generating functions of their columns. They form a group, called {\em the Riordan group} (see Shapiro, Getu, W. J. Woan and L. Woodson \cite{SGWW}). More formally, let us consider the set of formal power series $\mbox{$\mathcal{F}$} ={\bK}[\![t]\!]$, where ${\bK}$ is the field ${\bR}$ or ${\bC}$. The \emph{order} of $f(t) \in \mbox{$\mathcal{F}$}$, $f(t) =\sum_{k=0}^\infty f_kt^k$ ($f_k\in {\bK}$), is the minimum number $r\in\bN$ such that $f_r \neq 0$; $\mbox{$\mathcal{F}$}_r$ is the set of formal power series of order $r$. Let $g(t) \in \mbox{$\mathcal{F}$}_0$ and $f(t) \in \mbox{$\mathcal{F}$}_1$; the pair $(g,\,f )$ defines the {\em (proper) Riordan matrix} $D=(d_{n,k})_{n,k\in \mbox{\scriptsize${\mathbb{N}}$}}=(g, f)$ having
\begin{equation}\label{Radef}
d_{n,k} = [t^n]g(t) f(t)^k
\end{equation}
or, in other words, having $g f^k$ as the generating function of the $k$th column of $(g,f)$.

The {\it first fundamental theorem of Riordan matrices} refers to the action of a proper Riordan matrix on a formal power series, given by
\[
(g(t), f(t)) h(t)=g(t) (h\circ f)(t),
\]
which can be written compactly as $(g,f)h=gh(f)$. Thus we immediately see that the usual row-by-column product of two Riordan matrices is again a Riordan matrix:
\begin{equation}\label{Proddef}
(g_1,\,f_1 ) (g_2,\,f_2 ) = (g_1 g_2(f_1),\,f_2(f_1)).
\end{equation}
The Riordan matrix $I = (1,\,t)$ is the identity matrix, because its entries are $d_{n,k} = [t^n]t^k=\delta_{n,k}$. Let $(g\left( t\right),\,f(t))$ be a Riordan matrix. Then its inverse is
\begin{equation}
(g\left( t\right),\,f(t))^{-1}=\left( \frac{1}{g(\overline{f}(t))},\, \overline{f}(t)\right) \label{Invdef}
\end{equation}
where $\overline {f}(t)$ is the compositional inverse of $f(t)$, i.e., $(f\circ \overline{f})(t)=(\overline{f}\circ f)(t)=t$. In this way, the set ${\cR}$ of all proper Riordan matrices forms a group (see \cite{SGWW}), called the Riordan group. Here is a list of six important subgroups of the Riordan group (see \cite{Sha}).
\begin{itemize}
\item The {\it Appell subgroup} $\{ (g(t),\,t):g(t)\in {\cF}_0\}$.
\item The {\it Lagrange (associated) subgroup} $\{(1,\,f(t)):f(t)\in{\cF}_1\}$.
\item The {\it Bell subgroup} $\{(g(t),\, tg(t)):g(t)\in{\cF}_0\}$.
\item The {\it hitting-time subgroup} $\{(tf'(t)/f(t),\, f(t)):f(t)\in{\cF}_1\}$.
\item The {\it derivative subgroup} $\{ (f'(t), \, f(t)):f(t)\in{\cF}_1\}.$
\item The {\it checkerboard subgroup} $\{ (g(t),\, f(t)):g(t)\in{\cF}_0\,\mbox{is even and}\, f(t)\in{\cF}_1\,\mbox{is odd}\}$.
\end{itemize}
An infinite lower triangular matrix $[d_{n,k}]_{n,k\in{\bN}}$ is a Riordan matrix if and only if a unique sequence $A=(a_0\not= 0, a_1, a_2,\ldots)$ exists such that for every $n,k\in{\bN}$
\be\label{eq:1.1}
d_{n+1,k+1} =a_0 d_{n,k}+a_1d_{n,k+1}+\cdots +a_{n-k}d_{n,n}.
\ee
This is equivalent to
\be\label{eq:1.2}
f(t)=tA(f(t))\quad \text{or}\quad t=\bar f(t) A(t).
\ee
Here, $A(t)$ is the generating function of the $A$-sequence.
The first formula above is also called the {\it second fundamental theorem of Riordan matrices}. Moreover, there exists a unique sequence $Z=(z_0, z_1,z_2,\ldots)$ such that every element in column $0$ can be expressed as the linear combination
\be\label{eq:1.3}
d_{n+1,0}=z_0 d_{n,0}+z_1d_{n,1}+\cdots +z_n d_{n,n},
\ee
or equivalently,
\be\label{eq:1.4}
g(t)=\frac{1}{1-tZ(f(t))},
\ee
in which, and throughout, we always assume $g(0)=g_0=1$, a usual hypothesis for proper Riordan matrices. From \eqref{eq:1.4}, we may obtain
\[
Z(t)=\frac{g(\bar f(t))-1}{\bar f(t)g(\bar f(t))}.
\]
$A$- and $Z$-sequence characterizations of Riordan matrices were introduced, developed, and/or studied in Merlini, Rogers, Sprugnoli, and Verri \cite{MRSV}, Rogers \cite{Rog}, Sprugnoli and the author \cite{HS}, \cite{He15}, etc. In \cite{HS}, expressions for the $A$- and $Z$-sequences of a product in terms of the analogous sequences of the two factors are given. More precisely, consider two proper Riordan matrices $D_1=(g_1,f_1)$ and $D_2=(g_2,f_2)$ and their product,
\[
D_3=D_1 D_2=(g_1g_2(f_1), f_2(f_1)).
\]
Denote by $A_i(t)$ and $Z_i(t)$, $i=1,2,3$, the generating functions of the $A$-sequences and $Z$-sequences of $D_i$, $i=1,2,3$, respectively. Then
\be\label{1.7}
A_3(t)=A_2(t)A_1\left( \frac{t}{A_2(t)}\right)
\ee
and
\be\label{1.7-2}
Z_3(t)=\left( 1-\frac{t}{A_2(t)}Z_2(t)\right) Z_1\left( \frac{t}{A_2(t)}\right)+A_1\left( \frac{t}{A_2(t)}\right) Z_2(t).
\ee
Let $A(t)$ and $Z(t)$ be the generating functions of the $A$- and $Z$-sequences of a Riordan matrix $D=(g,f)$, and let us denote by $g^\ast(t)$, $f^\ast(t)$, $A^\ast(t)$ and $Z^\ast(t)$ the corresponding power series for the inverse $D^{-1}=(g^\ast, f^\ast)$ and its $A$-sequence and $Z$-sequence. We immediately observe that $f^\ast(t) = \overline{f}(t)$. Now we have (see \cite{HS}) that the $A$-sequence and $Z$-sequence of the inverse Riordan matrix $D^{-1}$ satisfy, respectively,
\be\label{1.8}
A^\ast\left( \frac{t}{A(t)} \right)= \frac{1}{A(t)}
\ee
and
\be\label{1.8-2}
Z^\ast \left( \frac{t}{A(t)}\right)=\frac{Z(t)}{tZ(t)-A(t)}.
\ee
Since a Riordan matrix arising in a combinatorial context has non-negative entries, it cannot be an involution. Hence, we consider the set of pseudo-involutions of the Riordan group ${\cR}$, that is, the set of all $D \in {\cR}$ such that $MD$ (and $DM$) is an involution, where $M = (1,-t)$. Cheon, Jin, Kim, and Shapiro \cite{CJKS} (see also Burlachenko \cite{Bur} and Phulara and Shapiro \cite{PS}) show that a Riordan matrix $(g,f)$ is a Bell type pseudo-involution, i.e., $f=tg$ and $(g,-f)^2=(1,t)$, if and only if there exists a $B$-sequence, $\tilde B=(\tilde b_0, \tilde b_1, \tilde b_2,\ldots)$, characterizing all entries of the Riordan matrix, which is defined by
\be\label{6.5}
d_{n+1,k}= d_{n,k-1}+\sum_{j\geq 0}{\tilde b}_jd_{n-j,k+j}
\ee
for $k\geq 0$, where $d_{n,-1}=0$, $n\geq 0$. However, for non-Bell type Riordan matrices there might exist two types of $B$-sequences, $B=(b_0,b_1,b_2,\ldots)$ and $\hat B=(\hat b_0, \hat b_1,\hat b_2,\ldots)$, defined by
\be\label{6.5-2}
d_{n+1,k}= d_{n,k-1}+\sum_{j\geq 0}{b}_jd_{n-j,k+j}
\ee
for $k\geq 1$, and
\be\label{6.5-3}
d_{n+1,0}= \sum_{j\geq 0}{\hat b}_jd_{n-j,j},
\ee
respectively. The $B$-sequence defined for all entries $d_{n,k}$, $k\geq 1$, of a Riordan matrix not in the first column is called the {\it type-I $B$-sequence}. The $B$-sequence defined for the entries $d_{n,0}$ of the first column is called the {\it type-II $B$-sequence}.
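As a standard worked illustration of these sequences (the computations follow directly from the formulas above), consider the Pascal matrix $P=\left(\frac{1}{1-t},\,\frac{t}{1-t}\right)$, whose entries are $d_{n,k}=\binom{n}{k}$. Here $\bar f(t)=t/(1+t)$, so
\[
A(t)=\frac{t}{\bar f(t)}=1+t \quad\mbox{and}\quad Z(t)=\frac{g(\bar f(t))-1}{\bar f(t)g(\bar f(t))}=\frac{(1+t)-1}{\frac{t}{1+t}\,(1+t)}=1,
\]
that is, the $A$-sequence is $(1,1,0,0,\ldots)$ and the $Z$-sequence is $(1,0,0,\ldots)$, recovering the familiar recurrences $d_{n+1,k+1}=d_{n,k}+d_{n,k+1}$ and $d_{n+1,0}=d_{n,0}$. Moreover, $P$ lies in the Bell subgroup ($f=tg$), and \eqref{6.5} holds with $\tilde B=(1,0,0,\ldots)$, since it then reads $d_{n+1,k}=d_{n,k-1}+d_{n,k}$, again the Pascal recurrence.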
We will show that a Bell type Riordan matrix either has no $B$-sequence or has both types of $B$-sequences, and they coincide. However, there exist non-Bell type Riordan matrices which may have only one type of $B$-sequence, whose existence and construction are determined by their $A$-sequences and $Z$-sequences, respectively. More precisely, the existence and construction of the type-I $B$-sequence are characterized by the $A$-sequence of the Riordan matrix, while the existence and construction of the type-II $B$-sequence are characterized by the $Z$-sequence of the Riordan matrix.

This paper is devoted to the $A$-sequence and $Z$-sequence characterization of a Riordan matrix possessing a $B$-sequence and to the $A$-sequence and $Z$-sequence characterization of some subgroups of ${\cR}$. In the next section, we discuss the $A$-sequence characterization of the Riordan matrices possessing type-I $B$-sequences. Section $3$ presents the $Z$-sequence characterization of the Riordan matrices possessing type-II $B$-sequences. In Section $4$, we show some subgroups characterized by $A$-sequences, $Z$-sequences, and/or $B$-sequences. In the last section, Section $5$, we investigate the $A$-, $Z$-, and $B$-sequences of the Pascal like Riordan matrices.
\section{$A$-sequences and type-I $B$-sequences of Riordan matrices}
We now consider the $A$-sequence characterization of the existence of the type-I $B$-sequence for a Riordan matrix, which need not be of Bell type. Here, the type-I $B$-sequence is defined by \eqref{6.5-2}.
\begin{proposition}\label{pro:2.1}
Let $(g,f)=(d_{n,k})_{n,k\geq 0}$ be a Riordan matrix with a type-I $B$-sequence satisfying \eqref{6.5-2}, and let $A(t)$ and $B(t)$ be the generating functions of the $A$-sequence and the $B$-sequence of the Riordan matrix, respectively. Then we have the following equivalent formulas:
\bn
&&f=t+tfB(tf),\label{1.2}\\
&&t=\bar f +t\bar f B(t\bar f),\label{1.4}\\
&&A(t)=1 +tB(t^2/A(t)).\label{1.5}
\en
\end{proposition}
\begin{proof}
Equation \eqref{6.5-2} can be written as
\[
[t^{n+1}]gf^k= [t^n]gf^{k-1}+\sum_{j\geq 0}b_j [t^{n-j}]gf^{k+j},
\]
which implies \eqref{1.2}. Substituting $t=\bar f$, the compositional inverse of $f$, into \eqref{1.2}, we obtain \eqref{1.4}. The second fundamental theorem of Riordan matrices gives $\bar f =t/A$, which can be used to re-write equation \eqref{1.4} as \eqref{1.5}.
\end{proof}
\hfill\rule{2.0mm}{2.0mm}
From Proposition \ref{pro:2.1}, we have the following result.
\begin{theorem}\label{thm:1.1}
Let $A=\sum_{j\geq 0}a_jt^j$ be the generating function of the $A$-sequence, $(a_j)_{j\geq 0}$, of a Riordan matrix $(g,f)$ that possesses a type-I $B$-sequence $( b_0, b_1, b_2,\ldots)$ defined by \eqref{6.5-2}, and let $f(t)=\sum_{j\geq 0} f_j t^j$. Then $a_0=1$ and $a_2=0$, or equivalently, $f_1=1$ and $f_3=f^2_2$.
\end{theorem}
\begin{proof}
From \eqref{1.5}, and noting $a_0\not= 0$, it is easy to get
\[
a_0=1.
\]
Denote $1/A(t)$ by $\hat C(t)$ and write
\[
\hat C(t)=\frac{1}{A(t)}=c_0+\sum_{j\geq 1}c_jt^j,
\]
where $c_0=1$ and, for $j\geq 1$,
\be\label{1.6-1}
c_j=-\sum_{k\geq 1}a_kc_{j-k}.
\ee
Thus, we may solve for the $c_j$ from the above equations and substitute them into $\hat C(t)$ to obtain
\be\label{1.6-0}
\hat C(t)=\frac{1}{A(t)}=1-a_1t+(a_1^2-a_2)t^2+(2a_1a_2-a_1^3-a_3)t^3+\cdots.
\ee Comparing the coefficients of the powers of $t$ on both sides of \eqref{1.5}, \be\label{1.6} 1+\sum_{j\geq 1}a_j t^j=1+b_0 t+b_1t^3\hat C(t)+b_2t^5\hat C(t)^2+b_3t^7\hat C(t)^3+\cdots, \ee we obtain the system \bns &&a_1=b_0,\\ &&a_2=0,\\ &&a_3=b_1,\\ &&a_4=b_1c_1=-a_1b_1,\\ &&\cdots . \ens On the other hand, from the second fundamental theorem of Riordan matrices, $f=tA(f)$, we have \bns &&f_1t+f_2t^2+f_3t^3+\cdots\\ &=&a_0t+a_1t(f_1t+f_2t^2+f_3t^3+\cdots)+a_2t(f_1t+f_2t^2+f_3t^3+\cdots)^2+\cdots. \ens Thus, \[ f_1=a_0=1,\quad f_2=a_1f_1,\quad f_3=a_1f_2+a_2f_1^2=a_1f_2,\ldots, \] which imply $f_1=1$ and $f_3=f_2^2$. \end{proof} \hfill\rule{2.0mm}{2.0mm} Theorem \ref{thm:1.1} gives a necessary condition for the existence of a type-I $B$-sequence of a Riordan matrix. We now establish a necessary and sufficient condition for the existence of a type-I $B$-sequence of a Riordan matrix and for its computation. For $1\leq k\leq m\leq n$, denote \be\label{eq:2.1+2} {\mathcal D}_{n,m,k}=\{{\mathbf {i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ i_{1}+i_{2}+\cdots +i_{k}=n,\ i_1,i_2,\ldots,i_k\not=0\}, \ee where $k$ is the length of ${\mathbf{i}}$. Then the set \bn\label{eq:2.1+0} {\mathcal D}_{n,m}&&=\cup^m_{k=1} {\mathcal D}_{n,m,k} \en is the set of compositions of $n$ with the number of parts $k=1,2,\ldots, m$. If $m=n$, we write ${\mathcal D}_{n,n}$ as ${\mathcal D}_n$, namely, \be\label{eq:2.1+1} {\mathcal {D}}_{n}=\{{\mathbf{i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq n,\ i_{1}+i_{2}+\cdots +i_{k}=n,\ i_1, i_2,\ldots, i_k\not=0 \}. \ee \begin{theorem}\label{thm:1.1-2} Let $(g,f)$ be a Riordan matrix, and let $A(t)=\sum_{j\geq 0} a_jt^j$ be the generating function of the $A$-sequence of $(g,f)$. Denote $\tilde c_j=c_{j-2}$ for $j\geq 2$, where the $c_j$, $j\geq 0$, are shown in \eqref{1.6-1}, and $\tilde c_{0}=\tilde c_1=0$. Then $(g,f)$ has a $B$-sequence $(b_j)_{j\geq 0}$ defined by \eqref{6.5-2} if and only if the $A$-sequence of $(g,f)$ satisfies $a_0=1$, $a_2=0$, and, for $\ell\geq 2$, \be\label{eq:2.1} a_{2\ell}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell -1,\ell-1}}b_{k}\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}, \ee where the index set, following the notation \eqref{eq:2.1+2}, is \bns {\mathcal {D}}_{2\ell -1,\ell-1}&&=\cup^{\ell-1}_{k=1}{\mathcal {D}}_{2\ell-1,\ell-1,k}, \ens where the ${\mathcal D}_{2\ell -1, \ell-1, k}$ are defined by \eqref{eq:2.1+2}. The right-hand side of equation \eqref{eq:2.1} is a function of $b_{j}$ for $1\leq j\leq \ell -1$ and $\tilde c_j$ for $0\leq j\leq 2\ell -1$ (or equivalently, $a_{j}$ for $0\leq j\leq 2\ell -3$). Here, for $\ell \geq 0$, \be\label{eq:2.2} b_\ell=a_{2\ell+1}-\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell,\ell -1}}b_{k}\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}, \ee where ${\mathcal {D}}_{0,-1}={\mathcal {D}}_{2,0}=\emptyset$ and, for $\ell\geq 2$, \bns {\mathcal {D}}_{2\ell,\ell-1 }&&=\cup^{\ell-1}_{k=1}{\mathcal D}_{2\ell, \ell-1,k}. \ens The summation on the right-hand side of \eqref{eq:2.2} is a function of $b_j$ for $1\leq j\leq \ell-1$ and $\tilde c_j$ for $0\leq j\leq 2\ell$ (or equivalently, $a_{j}$ for $0\leq j\leq 2\ell -2$). Furthermore, the $B$-sequence $(b_0,b_1,b_2,\ldots)$ can be evaluated by using \eqref{eq:2.2}, where the $a_{2\ell +1}$, $\ell\geq 1$, may be arbitrary. Thus, we have \bns &&a_0=1,\quad b_0=a_1, \quad a_2=0, \quad b_1=a_3,\\ &&a_4=b_1c_1=-b_1a_1=-a_1a_3,\\ &&b_2=a_5-b_1c_2=a_5-b_1(a_1^2-a_2)=a_5-a_1^2a_3, \ens etc. \end{theorem} \begin{proof} Consider the second term of the right-hand side of \eqref{1.5}.
Let $B(t)=\sum _{n=0}^{\infty }{b_{n}}t^{n}$, and let $t^2\hat C(t)=\sum _{n=0}^{\infty }{\tilde c_{n}}t^{n}$ be the formal power series with $\tilde c_{0}=\tilde c_1=0$ and $\tilde c_j=c_{j-2}$ for $j\geq 2$. Then the composition $B\circ (t^2\hat C)$ is again a formal power series, which, by {\it Fa\'a di Bruno's formula}, can be written as \be\label{0.0} B(t^2\hat C(t))=\sum _{n=0}^{\infty }{d_{n}}t^{n}, \ee where $d_0 = b_0$ and the other coefficients $d_n$, $n \geq 1$, can be expressed as sums over compositions of $n$ or as equivalent sums over partitions of $n$. More precisely, \be\label{0.00-1} d_{n}=\sum _{\mathbf {i} \in {\mathcal {D}}_{n}}b_{k}\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}, \ee where ${\mathcal{D}}_n$ is defined by \eqref{eq:2.1+1}.\footnote{Note that the Fa\'a di Bruno formula can be considered as an application of the first fundamental theorem of Riordan matrices; see Comtet \cite{comtet}, Roman \cite{Roman82_1} and \cite{Roman84book}, and Roman and Rota \cite{RomanRota78}.} We now apply \eqref{0.0} to \eqref{1.5} and compare the coefficients of the same powers of $t$ on both sides of \eqref{1.5} to obtain \be\label{0.-2} a_0=1 \quad \mbox{and}\quad a_n=d_{n-1} \ee for $n\geq 1$, where the $d_n$ are given by \eqref{0.00-1}. Clearly, $a_1=d_0=b_0$, $a_2=d_1=b_1\tilde c_1=0$, $a_3=d_2=b_1\tilde c_2=b_1$, \[ a_4=d_3=b_1\tilde c_3=b_1c_1=-a_1b_1,\quad a_5=d_4=b_1\tilde c_4+b_2\tilde c_2^2=b_1(a_1^2-a_2)+b_2. \] In general, if $n=2\ell+1$, $\ell\geq 2$, then \[ a_{2\ell +1}=d_{2\ell}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell }}b_{k}\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}, \] where ${\mathcal {D}}_{2\ell}$ is defined by \eqref{eq:2.1+1} with $n=2\ell$. By the pigeonhole principle, for every $\ell+1 \leq k\leq 2\ell$, $(i_1,i_2,\ldots, i_k)$ contains at least one component equal to $1$, which implies that $\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}=0$. Thus, the summation over the index set ${\mathcal {D}}_{2\ell}$ can be reduced to the summation over the index set \[ {\mathcal D}_{2\ell, \ell}=\{{\mathbf {i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell ,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell,\ i_1,i_2,\ldots,i_k\not=0 \}, \] and consequently, \[ a_{2\ell+1}=d_{2\ell}=b_{\ell}\tilde c_2^\ell+\sum_{\mathbf{i}\in {\mathcal D}_{2\ell , \ell-1}}b_{k}\tilde c_{i_{1}}\tilde c_{i_{2}}\cdots \tilde c_{i_{k}}, \] which implies \eqref{eq:2.2} because $\tilde c_2=c_0=1/a_0=1$. If $n=2\ell$, then from \eqref{1.5} and \eqref{0.00-1} we obtain \[ a_{2\ell}=d_{2\ell -1}=\sum_{{\mathbf {i}}\in {\mathcal D}_{2\ell -1}}b_k\tilde c_{i_1}\tilde c_{i_2}\cdots \tilde c_{i_k}. \] By the pigeonhole principle, the above summation over the index set ${\mathcal D}_{2\ell -1}$ reduces to the summation over the index set \[ {\mathcal {D}}_{2\ell -1,\ell-1}=\{{\mathbf {i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell -1,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell -1,\ i_1,i_2,\ldots,i_k\not=0\}, \] which proves \eqref{eq:2.1}. Conversely, if \eqref{eq:2.1} and \eqref{eq:2.2} hold, one may immediately derive \eqref{1.5}; i.e., the Riordan matrix $(g,f)$ possessing the $A$-sequence has the $B$-sequence that can be constructed by using \eqref{eq:2.2}.
\end{proof} \hfill\rule{2.0mm}{2.0mm} \medbreak \noindent{\bf Example 2.1} Consider the matrix $R=((1-t)g(t)/(1-tg(t)),tg(t))$ (see Cameron and Nkwanta \cite{CN}), where \be\label{RNA} g(t)= \frac{1-t+t^{2}-\sqrt{1-2t-t^2-2t^3+t^4}}{2t^{2}}. \ee We may write its first few entries as \[ R=\left[\begin{array}{cccccccc} 1 & & & & & & &\\ 1 & 1 & & & & & &\\ 2 & 2 & 1 & & & & &\\ 5 & 4 & 3& 1 & & & &\cdots \\ 12 &10 & 7 & 4& 1 & & & \\ 29 & 25 & 18 & 11& 5 & 1 && \\ 71 & 62 & 47 & 30& 16& 6 & 1&\\ & & & \cdots & & & & \end{array} \right]. \] We call $R$ an RNA type matrix because it is related to the RNA matrix $R^\ast$ shown in Example 2.2. It is easy to find that the $A$-sequence and the type-I $B$-sequence of $R$ are \[ A=(1,1,0,1,-1,\ldots)\quad \mbox{and} \quad B=(1,1,1,1,1,\ldots), \] respectively, which satisfy \[ a_0=1,\quad b_0=a_1=1,\quad a_2=0,\quad b_1=a_3=1,\quad a_4=-a_1a_3=-1,\ldots. \] From the second fundamental theorem of Riordan matrices, we have \[ A(t)=\frac{1+t+t^2+\sqrt{1+2t-t^2+2t^3+t^4}}{2}. \] Thus, from \eqref{1.5} we obtain that the generating function of the $B$-sequence of $R$ is \[ B(t)=\frac{1}{1-t}. \] On the other hand, we have $A(t)=1/g(-t)$, or $\bar f(t)=-f(-t)=tg(-t)$, because $f(t)=tg(t)$ and $A(t)=t/\bar f(t)$. Moreover, noticing \eqref{1.5} and \[ \frac{A(t)-1}{t}=\frac{1}{1-t^2/A(t)}, \] we can also explain why $B(t)=1/(1-t)$. In addition, from \eqref{1.2} and $B(t)=1/(1-t)$ we obtain an identity for $g(t)$, \[ g(t)=1+\frac{tg(t)}{1-t^2g(t)}, \] or equivalently, \be\label{RNA-2} (1-t+t^2)g(t)=1+t^2g(t)^2. \ee \medbreak \noindent{\bf Remark 2.1} An alternative way to present the $d_n$ shown in the proof of Theorem \ref{thm:1.1-2} is \be\label{0.00-2} d_{n}=\sum _{k=1}^{n}b_{k}\sum _{\mathbf {\pi } \in {\mathcal {P}}_{n,k}}{\binom {k}{\pi _{1},\pi _{2},\ldots,\pi _{n}}}\tilde c_{1}^{\pi _{1}}\tilde c_{2}^{\pi _{2}}\cdots \tilde c_{n}^{\pi _{n}}, \ee where \[ {\mathcal {P}}_{n,k}=\{(\pi _{1},\pi _{2},\dots ,\pi _{n})\,:\ \pi _{1}+\pi _{2}+\cdots +\pi _{n}=k,\ \pi _{1}\cdot 1+\pi _{2}\cdot 2+\cdots +\pi _{n}\cdot n=n\} \] is the set of partitions of $n$ into $k$ parts in frequency-of-parts form. The first form, shown in \eqref{0.00-1}, is obtained by picking out the coefficient of $t^n$ in $(\tilde c_{1}t+\tilde c_{2}t^{2}+\cdots )^{k}$ by inspection, and the second form, \eqref{0.00-2}, is then obtained by collecting like terms, or alternatively, by applying the multinomial theorem. \medbreak Theorem \ref{thm:1.1-2} has an analogue based on the expression \eqref{1.2}. \begin{theorem}\label{thm:1.1-3} Let $(g,f)$ be a Riordan matrix, and let $f(t)=\sum_{j\geq 1} f_jt^j$. Denote $\tilde f_j=f_{j-1}$ for $j\geq 1$, where $f_0=0$, and $\tilde b_j=b_{j-1}$ for $j\geq 0$, where $b_{-1}=0$. Then $(g,f)$ has a type-I $B$-sequence defined by \eqref{6.5-2} if and only if $f_1=1$, $f_3=f^2_2$, and for $\ell\geq 2$ \bn\label{eq:2.1-2} f_{2\ell +1}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell +1}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}} =f_2f_{2\ell}+ \sum _{\mathbf {i} \in {\mathcal {D}}'_{2\ell +1,\ell}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}, \en where the index sets are \bns {\mathcal {D}}_{2\ell +1}&&=\cup^{2\ell+1}_{k=1}{\mathcal D}_{2\ell+1,2\ell+1,k} \ens and \bns {\mathcal {D}'}_{2\ell +1,\ell}&&=\cup^\ell_{k=2}{\mathcal D}_{2\ell+1,\ell,k}. \ens The last summation in equation \eqref{eq:2.1-2} is a function of $b_{j}$ for $1\leq j\leq \ell -1$ and $f_j$ for $1\leq j\leq 2\ell-1$.
Here, for $\ell\geq 1$, \be\label{eq:2.2-2} b_{\ell-1}=f_{2\ell }-\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell,\ell-1}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}, \ee where ${\mathcal {D}}_{2,0}=\emptyset$ and, for $\ell\geq 2$, \bn\label{2.2-3} {\mathcal {D}}_{2\ell,\ell-1 }&&=\cup^{\ell-1}_{k=1}{\mathcal D}_{2\ell, \ell-1,k}. \en The summation on the right-hand side of \eqref{eq:2.2-2} is a function of $b_j$ for $0\leq j\leq \ell-2$ and $f_j$ for $1\leq j\leq 2\ell-1$. Furthermore, the $B$-sequence $(b_0,b_1,b_2,\ldots)$ can be evaluated by using \eqref{eq:2.2-2}, where the $f_{2\ell}$, $\ell\geq 1$, may be arbitrary. Thus, we have \bns &&f_0=0,\quad f_1=1,\quad b_0=f_2, \quad f_3=b_0f_2=b_0^2, \quad b_1=f_4-b_0^3,\\ &&f_5=b_0f_4+b_1(2f_1f_2)=b_0^4+3b_0b_1,\\ &&b_2=f_6-b_0f_5-b_1(2f_1f_3+f_2^2)=f_6-b_0^5-6b_0^2b_1, \ens etc. \end{theorem} \begin{proof} From \eqref{1.2} we have \be\label{1.3-2} f=t+tfB(tf)=t+\sum_{j\geq 0}b_j(tf)^{j+1}=t+\sum_{j\geq 0}\tilde b_j(tf)^j, \ee where $\tilde b_j=b_{j-1}$ and $\tilde b_0=b_{-1}=0$, and we may write \[ tf=\sum_{j\geq 1} \tilde f_jt^j. \] By using Fa\'a di Bruno's formula, we have \be\label{0.0-2} \sum_{j\geq 0}\tilde b_j(tf)^j=\sum _{n=0}^{\infty }{c_{n}}t^{n}, \ee where $c_0 = \tilde b_0=0$ and the other coefficients $c_n$, $n \geq 1$, can be expressed as sums over compositions of $n$ or as equivalent sums over partitions of $n$. More precisely, \be\label{0.00-3} c_{n}=\sum _{\mathbf {i} \in {\mathcal {D}}_{n}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}, \ee where ${\mathcal {D}}_n$ is defined by \eqref{eq:2.1+1}. In particular, when $n=2\ell$, in the index set ${\mathcal {D}}_{2\ell}$, if $\ell+1 \leq k\leq 2\ell$, then at least one component of $(i_1,i_2,\ldots, i_k)$ equals $1$, so the corresponding $\tilde f_{i_1}\tilde f_{i_2}\cdots \tilde f_{i_k}=0$. Consequently, the summation in \eqref{0.00-3} over the index set ${\mathcal {D}}_{2\ell}$ reduces to the summation over the index set \[ {\mathcal D}_{2\ell,\ell}= \{{\mathbf{i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell,\ i_1, i_2,\ldots, i_k\not=0 \}. \] Then combining \eqref{1.3-2} and \eqref{0.0-2} yields \be\label{f2l} f_{2\ell}=c_{2\ell}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell }}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}} =\tilde b_\ell (\tilde f_{2})^\ell+\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell , \ell-1}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}, \ee where ${\mathcal {D}}_{2\ell, \ell-1}$ is shown in \eqref{2.2-3}, which implies \eqref{eq:2.2-2}. If $n=2\ell +1$, then \eqref{eq:2.1+1} becomes \[ {\mathcal {D}}_{2\ell +1}= \{{\mathbf{i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq 2\ell+1,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell+1,\ i_1, i_2,\ldots, i_k\not=0 \}. \] In the above index set, if $k=1$, then $(i_1)=(2\ell+1)$, while for $\ell +1\leq k \leq 2\ell+1$, $(i_1,i_2,\ldots, i_k)$ contains at least one component equal to $1$, which forces $\tilde f_{i_1}\tilde f_{i_2}\cdots \tilde f_{i_k}=0$. Thus, the summation in \be\label{f2l+1} f_{2\ell+1}=c_{2\ell+1}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell +1}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}} \ee over the index set ${\mathcal {D}}_{2\ell +1}$ can be reduced to the summation over the index set \[ {\mathcal D}_{2\ell+1,\ell}=\{{\mathbf{i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell+1,\ i_1, i_2,\ldots, i_k\not=0 \}.
\] Consequently, from \eqref{f2l+1} we obtain \bns f_{2\ell+1} &&=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell +1,\ell}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}\\ && =\tilde b_1\tilde f_{2\ell +1}+\sum _{\mathbf {i} \in {\mathcal {D}'}_{2\ell +1,\ell }}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}\\ && =b_0 f_{2\ell}+\sum _{\mathbf {i} \in {\mathcal {D}'}_{2\ell +1,\ell}}\tilde b_{k}\tilde f_{i_{1}}\tilde f_{i_{2}}\cdots \tilde f_{i_{k}}, \ens where $b_0=f_2$ by \eqref{1.3-2}. Thus, if $(g,f)$ has a $B$-sequence, then we must have \eqref{eq:2.1-2}. Conversely, if \eqref{eq:2.1-2} and \eqref{eq:2.2-2} hold, then the Riordan matrix $(g,f)$ possesses a $B$-sequence, which can be evaluated by using \eqref{eq:2.2-2}. \end{proof} \hfill\rule{2.0mm}{2.0mm} \noindent{\bf Example 2.2} Consider the {\it RNA matrix} (see Nkwanta \cite{Nkw} and Cheon, Jin, Kim, and Shapiro \cite{CJKS}) \[ R^\ast=(g,f)=( g(t), tg(t)), \] where $g(t)$ is given by \eqref{RNA}. Cameron and Nkwanta \cite{CN} show that \[ R^\ast =C_0^{-1}PC_0, \] where \[ C_0=\left( C(t^2), tC(t^2)\right)=\left( \frac{1-\sqrt{1-4t^2}}{2t^2}, \frac{1-\sqrt{1-4t^2}}{2t}\right) \] and \[ P=\left( \frac{1}{1-t}, \frac{t}{1-t}\right). \] Hence, \[ C_0^{-1}=\left( \frac{1}{1+t^2}, \frac{t}{1+t^2}\right). \] In addition, the RNA type matrix $R$ shown in Example 2.1 is related to $R^\ast$ in the sense that \[ R=A_0^{-1}R^\ast A_0=A_0^{-1}C_0^{-1}PC_0A_0, \] where \[ A_0=\left( \frac{1}{1-t}, t\right) \quad \mbox{and}\quad A_0^{-1}=(1-t,t). \] The RNA matrix $R^\ast$ begins \[ R^\ast =\left[\begin{array}{cccccccc} 1 & & & & & & &\\ 1 & 1 & & & & & &\\ 1 & 2 & 1 & & & & &\\ 2 & 3 & 3& 1 & & & &\cdots \\ 4 &6 & 6 & 4& 1 & & & \\ 8 & 13 & 13 & 10& 5 & 1 && \\ 17 & 28 & 30 & 24& 15 & 6 & 1&\\ & & & \cdots & & & & \end{array} \right]. \] The elements in the leftmost column are the numbers of possible RNA secondary structures on a chain of length $n$, while the other elements of the matrix count such chains with $k$ vertices designated as the start of a yet to be completed link. It is easy to check that $a_{0}=1$ and $a_{2}=0$, and, as in Example 2.1, we have the $B$-sequence $B=( 1,1,1,\ldots)$. Consider another RNA type matrix \[ R^{\ast\ast}=(d(t),h(t))=\left( d(t), tg(t)\right), \] where $g(t)$ is defined as before, $h=tg$, and \[ d(t)=\frac{g(t)-1}{t}= \frac{1-t-t^{2}-\sqrt{1-2t-t^2-2t^3+t^4}}{2t^{3}}. \] We call $R^{\ast\ast}$ an RNA type matrix because the elements in its leftmost column are the numbers of possible RNA secondary structures on a chain of length $n$, except for $n=0$. The matrix $R^{\ast\ast}$ begins (see Nkwanta \cite{Nkw}) \[ R^{\ast\ast}=\left[\begin{array}{cccccccc} 1 & & & & & & &\\ 1 & 1 & & & & & &\\ 2 & 2 & 1 & & & & &\\ 4& 4 & 3& 1 & & & &\cdots \\ 8 &9 & 7 & 4& 1 & & & \\ 17 & 20 & 17& 11& 5 & 1 & &\\ 37 & 45& 41 & 29& 16 & 6 & 1&\\ & & & \cdots & & & & \end{array} \right]. \] It is easy to check that $a_{0}=1$ and $a_{2}=0$, and $R^{\ast\ast}$ has a $B$-sequence $B=(1,1,1,\ldots)$ for all elements except those in the first column. The $Z$-sequence of $R^{\ast\ast}$ is $Z=(1,1,0,\ldots)$, which gives \[ d_{n+1,0}=d_{n,0}+d_{n,1}, \] where $d_{n,0}$ is the number of secondary structures for $n$ points, and \[ d_{n,1}=d_{n-1,0}+\sum^{n-2}_{k=0}d_{k,0}d_{n-k-2,0}. \] The above equation is analogous to the convolution recurrence for the Catalan matrix (see, for example, Stanley \cite{Sta} and \cite{He12}). Since $z_1=1\not=0$, $R^{\ast\ast}$ has a type-I $B$-sequence, but no type-II $B$-sequence.
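The above assertions are easy to verify by computer. The following Python sketch (our own construction; it uses exact rational power-series arithmetic) builds $R^\ast$ from the functional equation \eqref{RNA-2} and checks the recurrence \eqref{6.5} with $\tilde B=(1,1,1,\ldots)$ for all entries up to a chosen truncation order.
\begin{verbatim}
from fractions import Fraction

N = 12  # truncation order of all power series

def mul(a, b):
    c = [Fraction(0)] * N
    for i in range(N):
        if a[i]:
            for j in range(N - i):
                c[i + j] += a[i] * b[j]
    return c

def inv(a):  # multiplicative inverse of a series with a[0] != 0
    b = [Fraction(0)] * N
    b[0] = 1 / Fraction(a[0])
    for n in range(1, N):
        b[n] = -b[0] * sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

one = [Fraction(0)] * N; one[0] = Fraction(1)
den = [Fraction(0)] * N; den[0], den[1], den[2] = 1, -1, 1   # 1 - t + t^2
den_inv = inv(den)

g = one[:]                    # fixed point of (1 - t + t^2) g = 1 + t^2 g^2
for _ in range(N):
    gg = mul(g, g)
    rhs = one[:]
    for n in range(2, N):
        rhs[n] += gg[n - 2]
    g = mul(rhs, den_inv)

f = [Fraction(0)] * N
for n in range(1, N):
    f[n] = g[n - 1]           # f = t g

cols = []                     # column k of R^* has generating function g f^k
col = g[:]
for k in range(N):
    cols.append(col[:])
    col = mul(col, f)
d = [[cols[k][n] for k in range(N)] for n in range(N)]

# check d_{n+1,k} = d_{n,k-1} + sum_{j>=0} d_{n-j,k+j} with B = (1,1,1,...)
for n in range(N - 1):
    for k in range(N):
        val = (d[n][k - 1] if k >= 1 else 0)
        val += sum(d[n - j][k + j] for j in range(n + 1) if k + j < N)
        assert d[n + 1][k] == val
print([int(x) for x in d[5][:6]])   # row 5 of R^*: [8, 13, 13, 10, 5, 1]
\end{verbatim}
The same sketch, with the first column excluded from the check, verifies the type-I $B$-sequence of $R$ and $R^{\ast\ast}$.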
\medbreak \section{$Z$-sequences and type-II $B$-sequences of Riordan matrices } We say that the sequence $\hat B=(\hat b_0,\hat b_1, \hat b_2,\ldots)$ is a type-II $B$-sequence of a Riordan matrix $(g,f)=(d_{n,k})_{n,k\geq 0}$ if it satisfies \be\label{6.4} d_{n+1,0}=\sum_{j\geq 0}\hat b_jd_{n-j,j} \ee for $n\geq 0$, i.e., for all entries except the first one in the first column of $(g,f)$. Obviously, a Riordan matrix may have no $B$-sequence of either type, only one type of $B$-sequence, or both types of $B$-sequences, which may be different. For instance, the RNA type matrix $R$ shown in Example 2.1 has type-I $B$-sequence $(1,1,1,\ldots)$, but no type-II $B$-sequence. From the definition of the type-II $B$-sequence, we immediately see that its existence can be characterized by the $Z$-sequence of the Riordan matrix. Here are some equivalent forms of \eqref{6.4} related to the $Z$-sequence of a Riordan matrix possessing a type-II $B$-sequence. \begin{proposition}\label{pro:6.2} If a Riordan matrix $(g,f)$ possesses a type-II $B$-sequence defined by \eqref{6.4}, then we have \bn &&g=1+tg\hat B(tf),\label{6.4-2}\\ &&Z(f)=\hat B(tf),\label{6.4-3}\\ &&Z(t)=\hat B(t\bar f(t)),\label{6.4-4} \en where $\hat B(t)$ is the generating function of the type-II $B$-sequence $\hat B=(\hat b_0, \hat b_1,\hat b_2,\ldots)$. \end{proposition} \begin{proof} From \eqref{6.4}, we have \[ [t^{n+1}]g=\sum_{j\geq 0} \hat b_j [t^{n-j}]gf^j=[t^{n+1}]tg\sum_{j\geq 0}\hat b_j(tf)^j \] for $n\geq 0$. Hence, \eqref{6.4-2} holds. From \eqref{6.4-2} and \eqref{eq:1.4}, and noticing $g_0=1$, we obtain \[ \hat B(tf)=\frac{g-1}{tg}=\frac{1-1/g}{t}=Z(f), \] which is expression \eqref{6.4-3}. Equation \eqref{6.4-4} follows by applying the substitution $t=\bar f$ in \eqref{6.4-3}. \end{proof} \hfill\rule{2.0mm}{2.0mm} From the definition of Bell type Riordan matrices $(g,f)=(g, tg)$, one expects that the two types of $B$-sequences, if they exist, should be closely related. This is indeed the case. More precisely, we have the following result. \begin{proposition}\label{pro:6.1} Let $(g,f)$ be a Riordan matrix. Then it is a Bell type Riordan matrix, i.e., $f=tg$, if and only if it either has the same type-I and type-II $B$-sequence or has no $B$-sequence. \end{proposition} \begin{proof} The proposition can be written in the following equivalent form, which provides a possible way to prove it: Let $(g,f)$ be a Riordan matrix with a type-I (or type-II) $B$-sequence, $B=(b_0,b_1,\cdots )$ (or $\hat B=(\hat b_0,\hat b_1,\cdots ))$, defined by \eqref{6.5-2} (or \eqref{6.4}). Then $(g,f)$ is a Bell type Riordan matrix, i.e., $f=tg$, if and only if $(g,f)$ possesses a type-II (or type-I) $B$-sequence $\hat B$ (or $B$) defined by \eqref{6.4} (or \eqref{6.5-2}) with $B=\hat B$. It is sufficient to prove the last statement for the type-I $B$-sequence; the case of the type-II $B$-sequence can be proved with a similar argument. Let $(g,f)=(g,tg)$ have a type-I $B$-sequence. Then from \eqref{1.2}, $f=t+tfB(tf)$, and $f=tg$, we obtain \be\label{6.2} g=1+tgB(t^2g). \ee From \eqref{6.2}, we obtain $g(0)=1$, and, comparing \eqref{6.2} with \eqref{6.4-2}, we see that a type-II $B$-sequence $\hat B$ exists and $\hat B=B$. Since the two types of $B$-sequences are the same, the $B$-sequence characterizes all the entries of the Riordan matrix $(g,tg)$. Conversely, suppose a Riordan matrix $(g,f)$ has a $B$-sequence satisfying \eqref{6.4} for its first column entries and \eqref{6.5-2} for its other entries.
Then from \eqref{1.2} and \eqref{6.4-2}, and noticing $B=\hat B$, we must have \[ B(tf)=\frac{f-t}{tf}=\frac{g-1}{tg}, \] which implies $f=tg$; i.e., $(g,f)$ is a Bell type Riordan matrix. \end{proof} \hfill\rule{2.0mm}{2.0mm} \medbreak \noindent{\bf Example 3.1} The RNA type matrices and the RNA matrix presented in Examples 2.1 and 2.2 have the same type-I $B$-sequence $(1,1,1,\ldots)$, while $R$ and $R^{\ast\ast}$ have no type-II $B$-sequence and $R^\ast$ has type-II $B$-sequence $(1,1,1,\ldots)$. It is easy to see that the Riordan matrix $(1/(1-2t), t/(1-t))$ has two different types of $B$-sequences, $B=(1,0,0,\ldots)$ and $\hat B=(2,0,0,\ldots)$. The Riordan matrix $(1/(1-2f-t^2f), f)$, where \[ f=\frac{1-t-\sqrt{1-2t+t^2-4t^3}}{2t^2}, \] has the type-I $B$-sequence $B=(1,1,0,\ldots)$ and the type-II $B$-sequence $\hat B=(2,1,0,\ldots)$. \medbreak As recalled in the Introduction, a Riordan matrix is determined by its $A$-sequence and $Z$-sequence. If a Riordan matrix has a type-I $B$-sequence, then it is determined by the type-I $B$-sequence and the $Z$-sequence. We now find the $Z$-sequence characterization of type-II $B$-sequences of Riordan matrices, including the existence and computation of type-II $B$-sequences. \begin{proposition}\label{pro:6.3} Let $(g,f)$ with $g(0)=1$ be a Riordan matrix possessing a type-II $B$-sequence defined by \eqref{6.4}, and let $Z(t)=\sum_{n\geq 0} z_nt^n$ be the generating function of the $Z$-sequence of $(g,f)$. Then $Z'(0)=0$, i.e., $z_1=0$, or equivalently, $g_1^2=g_0g_2$. \end{proposition} \begin{proof} By \eqref{6.4-3}, the existence of a type-II $B$-sequence defined by \eqref{6.4} implies that \[ Z'(0)=z_1=0, \] because comparing the coefficients of $t$ on both sides of \eqref{6.4-3} yields $z_1f_1=0$ with $f_1\not=0$. From \eqref{eq:1.4}, we have \[ Z(f)=\frac{g-1}{tg}=\frac{1}{t}\left( 1-\frac{1}{g}\right). \] Hence, \[ z_0+z_1f+z_2 f^2+\cdots= t^{-1}\left( 1-\left( \frac{1}{g_0}-\frac{g_1}{g_0^2} t+\left( \frac{g_1^2}{g_0^3}-\frac{g_2}{g_0^2}\right)t^2+\cdots\right)\right). \] Then $z_1=0$ is equivalent to \[ \frac{g_1^2}{g_0^3}-\frac{g_2}{g_0^2}=0, \] which is $g_1^2=g_0g_2$. \end{proof} \begin{theorem}\label{thm:6.2} Let $Z=\sum_{j\geq 0}z_jt^j$ be the generating function of the $Z$-sequence, $(z_j)_{j\geq 0}$, of a Riordan matrix $(g,f)$, and let $\bar f=\sum_{j\geq 1}\bar f_j t^j$ be the compositional inverse of $f$. Then $(g,f)$ possesses a type-II $B$-sequence $\hat B=( \hat b_0, \hat b_1, \ldots)$ defined by \eqref{6.4} if and only if $\hat b_0=z_0$, $z_1=0$, and, for $\ell \geq 1$, \bn\label{eq:2.1-3} &&z_{2\ell+1} =\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell +1,\ell}} \hat b_{k}\bar f_{i_{1}-1}\bar f_{i_{2}-1}\cdots \bar f_{i_{k}-1}, \en where the index set is \[ {\mathcal {D}}_{2\ell +1,\ell}=\{{\mathbf {i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell ,\ i_{1}+i_{2}+\cdots +i_{k}=2\ell +1,\ i_1,i_2,\ldots, i_k\not= 0\}, \] and the summation on the right-hand side of equation \eqref{eq:2.1-3} is a function of $\hat b_{j}$ for $1\leq j\leq \ell -1$ and $\bar f_j$ for $0\leq j\leq 2\ell$.
Here, for $\ell\geq 1$, $\hat b_\ell$ satisfies \be\label{eq:2.2-3} \hat b_{\ell}=f_1^\ell\left(z_{2\ell }-\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell, \ell-1}} {\hat b}_{k}\bar f_{i_{1}-1}\bar f_{i_{2}-1}\cdots \bar f_{i_{k}-1}\right), \ee where ${\mathcal {D}}_{2,0}=\emptyset$ and, for $\ell\geq 2$, \[ {\mathcal {D}}_{2\ell,\ell-1 }=\{{\mathbf {i}}=(i_{1},i_{2},\dots ,i_{k})\,:\ 1\leq k\leq \ell -1, \ i_{1}+i_{2}+\cdots +i_{k}=2\ell,\ i_1,i_2,\ldots, i_k\not= 0\}. \] The summation on the right-hand side of \eqref{eq:2.2-3} is a function of $\hat b_j$ for $0\leq j\leq \ell-1$ and $\bar f_j$ for $0\leq j\leq 2\ell-1$. Furthermore, the type-II $B$-sequence $\hat B=(\hat b_0,\hat b_1,\hat b_2,\ldots)$ can be evaluated by using \eqref{eq:2.2-3}, where the $z_{2\ell}$, $\ell \geq 1$, may be arbitrary. Thus, we have \bns &&\hat b_0=z_0,\quad z_1=0,\quad \hat b_1=f_1z_2,\\ &&z_3=\hat b_1\bar f_2, \quad \hat b_2=f_1^2\left( z_4-\hat b_1\bar f_3\right),\\ &&z_5=\hat b_1 \bar f_4+2\hat b_2\bar f_1\bar f_2, \ens etc. \end{theorem} \begin{proof} From \eqref{6.4-4}, $Z(t)=\hat B(t\bar f)$, we apply the Fa\'a di Bruno formula to the right-hand side and compare the coefficients on both sides. Thus \be\label{6.6} z_n=\sum _{\mathbf {i} \in {\mathcal {D}}_{n}}\hat b_{k}\bar f_{i_{1}-1}\bar f_{i_{2}-1}\cdots \bar f_{i_{k}-1}=\sum _{\mathbf {i} \in {\mathcal {D}}_{n}}\hat b_{k}\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}, \ee where $\hat {\bar f}_j=\bar f_{j-1}$, $\hat {\bar f}_0=\bar f_{-1}=0$, $\hat {\bar f}_1=\bar f_0=0$, $\bar f_n=[t^n] \overline{f}$, and ${\mathcal {D}}_n$ is defined by \eqref{eq:2.1+1}. It is clear that $z_0=\hat b_0$, \[ z_1=\hat b_1\hat {\bar f}_1=0, \] and \[ z_2=\hat b_1\hat {\bar f}_2+\hat b_2\hat {\bar f}_1^2=\hat b_1\bar f_1, \] which yields \[ \hat b_1=\frac{z_2}{ \bar f_1}=f_1z_2, \] because $1/\bar f_1=f_1$. In general, for $n=2\ell+1$ and $\ell\geq 1$, we have \bns z_{2\ell+1}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell+1}}\hat b_{k}\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell+1,\ell}}\hat b_{k}\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}, \ens where the last equality follows from the fact that $\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}$ contains at least one factor $\hat {\bar f}_1=0$ whenever $\ell+1\leq k\leq 2\ell+1$. Hence, we obtain \eqref{eq:2.1-3}. To determine the $\hat b_j$, we substitute $n=2\ell$, $\ell\geq 1$, into \eqref{6.6} to obtain \bns &&z_{2\ell }=\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell}}\hat b_{k}\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}=\hat b_\ell \hat{\bar f}_2^\ell +\sum _{\mathbf {i} \in {\mathcal {D}}_{2\ell,\ell-1}}\hat b_{k}\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}, \ens where the second equality follows from the fact that $\hat {\bar f}_{i_{1}}\hat {\bar f}_{i_{2}}\cdots \hat {\bar f}_{i_{k}}$ contains at least one factor $\hat {\bar f}_1=0$ whenever $\ell+1\leq k\leq 2\ell$. From the last expression for $z_{2\ell}$, and noticing $\hat{\bar f}_2={\bar f}_1=1/f_1$, we obtain \eqref{eq:2.2-3}. Conversely, if \eqref{eq:2.1-3} and \eqref{eq:2.2-3} hold, one may derive \eqref{6.4-4}; i.e., the Riordan matrix $(g,f)$ possessing the $Z$-sequence has a type-II $B$-sequence, which can be constructed by using \eqref{eq:2.2-3}.
\end{proof} \hfill\rule{2.0mm}{2.0mm} \section{Subgroups of the Riordan group characterized by $A$- and $Z$-sequences} We now discuss subgroups of the Riordan group defined by the $A$-sequences and $Z$-sequences of Riordan matrices. \begin{theorem}\label{thm:1.3} The set of the Riordan matrices with $A$-sequences of the form $(1,a_1,0,a_3,\ldots)$, denoted by $R_{0,2}$, is a subgroup of the Riordan group. \end{theorem} \begin{proof} If $D_{1}$ and $D_{2}$ are in $R_{0,2}$, then the generating functions of their $A$-sequences are \[ A_1(t)=a_{1,0}+a_{1,1}t+a_{1,3}t^3+\cdots \] and \[ A_2(t)=a_{2,0}+a_{2,1}t+a_{2,3}t^3+\cdots, \] with $a_{1,0}=a_{2,0}=1$. From \eqref{1.6-0}, and noting $a_{2,2}=0$, we have \[ \frac{t}{A_2(t)}=t\hat C(t)=t\left(1-a_{2,1}t+a_{2,1}^2t^2-(a_{2,1}^3+a_{2,3})t^3+\cdots\right). \] Thus the generating function $A_3(t)$ of the $A$-sequence of $D_3=D_1D_2$ is \bns A_3(t)&=&A_2(t)A_1\left( \frac{t}{A_2(t)}\right)\\ &=&A_2(t)\left( 1+a_{1,1}\frac{t}{A_2(t)}+a_{1,3}\frac{t^3}{A_2(t)^3}+\cdots\right)\\ &=&A_2(t)+a_{1,1}t+a_{1,3}\frac{t^3}{A_2(t)^2}+\cdots\\ &=&1+(a_{1,1}+a_{2,1})t+ct^3+\cdots \ens for some constant $c$; in particular, the coefficient of $t^2$ vanishes, which implies that $D_3\in R_{0,2}$. If $D\in R_{0,2}$, then the generating function $A^\ast(t)$ of the $A$-sequence of the inverse $D^{-1}$ of $D$ is \[ A^\ast(t) =\frac{t}{f(t)}=\frac{t}{t+f_2t^2+f_2^2t^3+\cdots}=1-f_2t+c't^3+\cdots, \] which shows that $D^{-1}\in R_{0,2}$. The proof is complete. \end{proof} \medbreak \noindent{\bf Remark 4.1} Luz\'on, Mor\'on, and Prieto-Martinez \cite{LMPM} show that all Riordan matrices with $A$-sequences of the form $(a_0,a_1,0, a_3,\ldots)$ form a subgroup of the Riordan group. Hence, $R_{0,2}$ is a subgroup of their subgroup. \medbreak \noindent{\bf Remark 4.2} Here is an alternative proof of Theorem \ref{thm:1.3}, based on the concept of truncation classes of formal power series. More precisely, let $f=\sum_{j\geq 0} f_j t^j$ and $h=\sum_{j\geq 0}h_jt^j$ be two power series. If there exists an integer $r\geq 0$ such that the $r$-th truncations $f|_r=\sum^r_{j=0} f_jt^j$ and $h|_r=\sum^r_{j=0}h_jt^j$ satisfy $f|_r\equiv ch|_r$ for some non-zero constant $c$ while $f_{r+1}\not= c h_{r+1}$, then we say $f$ and $h$ have the same truncation of order $r$. For a fixed power series $f$ and an integer $r\geq 0$, the collection of all power series that possess the same truncation of order $r$ is called a truncation class of order $r$ with respect to $f$. This class is denoted by $T_r(f)$. Firstly, let $(g,f)$ be a Riordan matrix, where $f=\sum_{j\geq 1}f_jt^j$, $f_1\not=0$, and let $(a_0,a_1,a_2,\ldots)$ be the $A$-sequence of $(g,f)$. Then $a_2=0$ and $a_{3}\not=0$ if and only if $f$ is in $T_3(t/(1-a_1t))$; in particular, $a_2=0$ if and only if \be\label{3.1} f_2^2=f_1f_{3}, \ee or equivalently, the truncation of order $3$ of $f$ can be written as \be\label{3.2} f|_3=a_0t\frac{1-(a_1t)^3}{1-a_1t}. \ee In fact, from the second fundamental theorem of Riordan matrices, $f=tA(f)$, where $f=\sum_{j\geq 1}f_jt^j$ with $f_1\not=0$ and $A(t)=\sum_{i\geq 0} a_it^i$, we have \bn\label{3.3} &&\sum_{j\geq 1}f_jt^j=a_0t+a_1t\sum_{j\geq 1}f_jt^j +a_2t\left(\sum_{j\geq 1}f_jt^j\right)^2+a_3t\left(\sum_{j\geq 1}f_jt^j\right)^3+\cdots . \en Thus, \[ a_0=f_1\not=0\quad \mbox{and} \quad a_1=\frac{f_2}{f_1}. \] If $a_2=0$, then \[ f_{3}=a_1f_{2}.
\] Consequently, \[ \frac{f_{3}}{f_{2}}=\frac{f_2}{f_1}=a_1, \] which implies $f_1f_{3}=f_2^2$, $f_{3}=f_2^2/f_1$, and \[ f_{3}=a_1^2f_1. \] Hence, the first three terms of $f$ are \[ \sum_{j=1}^3f_jt^j=f_1t\sum^{2}_{j=0}a_1^jt^j=f_1t\frac{1-(a_1t)^3}{1-a_1t}. \] In other words, $f$ is in $T_3(t/(1-a_1t))$ because \[ f|_3=\left. a_0\frac{t}{1-a_1t}\right|_3 \quad\mbox{and}\quad f_4=a_0a_1^3+a_3a_0^3\not=a_0a_1^3 \] when $a_3\not=0$. Conversely, if \eqref{3.2}, or equivalently \eqref{3.1}, holds, then from \eqref{3.3} we have \[ a_1f_2=f_{3}=a_1 f_2+a_2f_1^2, \] which implies $a_2=0$ due to $f_1\not= 0$. Secondly, let $(g,f)$ be a Riordan matrix with $A$-sequence $(a_0, a_1,\ldots)$, and let $f|_2=a_0t/(1-a_1t)|_2$. Then the compositional inverse $\bar f$ of $f$ has the truncation of order $2$ of the form \be\label{3.4} \bar f|_2=\left. \frac{t}{a_0+a_1t}\right|_2. \ee In fact, $f\circ \bar f=t$ implies $(f\circ \bar f)|_2=t$, and, given $f|_2=a_0t/(1-a_1t)|_2$, a straightforward computation solves the equation $(f\circ \bar f)|_2=t$ for the truncation $\bar f|_2$ shown in \eqref{3.4}. Conversely, if $\bar f$ satisfies \eqref{3.4}, i.e., $\bar f|_2=t/(a_0+a_1t)|_2$, then $f|_2=a_0t/(1-a_1t)|_2$. Finally, we show that the set $R_{0,2}$ of Riordan matrices with $A$-sequences $(a_{0}, a_{1}, a_{2},\ldots)$ satisfying $a_{0}=1$ and $a_{2}=0$ forms a subgroup of the Riordan group. Let $(g_1,f_1)$ and $(g_2,f_2)$ be two Riordan matrices with $A$-sequences $A_1$ and $A_2$, and let $(g_3,f_3)=(g_1,f_1)(g_2,f_2)$ with $A$-sequence $A_3$. From the second fundamental theorem of Riordan matrices, $f=tA(f)$, we have $t=\bar f(t) A(t)$. Thus we may rewrite \eqref{1.7} as \[ A_3(t)=A_2(t)A_1\left( \bar f_2(t)\right). \] If $(g_1,f_1)$ and $(g_2,f_2)\in R_{0,2}$, then \[ A_i(t)=a_{i,0}+a_{i,1}t+a_{i,3}t^{3}+\cdots=a_{i,0}+a_{i,1}t+O(t^{3}) \] for $i=1$ and $2$. Hence, \[ f_2|_2=\left. \frac{a_{2,0}t}{1-a_{2,1}t}\right|_2, \] which implies \[ \bar f_2|_2=\left.\frac{t}{a_{2,0}+a_{2,1}t}\right|_2. \] Combining the above equations yields \bns A_3(t)&=&(a_{2,0}+a_{2,1}t+O(t^{3}))\left(a_{1,0}+\left.a_{1,1}\frac{t}{a_{2,0}+a_{2,1}t}\right|_2+O(t^{3})\right)\\ &=&a_{1,0}a_{2,0}+a_{1,0}a_{2,1}t+\left.a_{1,1}(a_{2,0}+a_{2,1}t)\frac{t}{a_{2,0}+a_{2,1}t}\right|_2 +O(t^{3})\\ &=&a_{1,0}a_{2,0}+(a_{1,0}a_{2,1}+a_{1,1})t+O(t^{3})=1+(a_{1,1}+a_{2,1})t+O(t^{3}), \ens which implies that $(g_3,f_3)=(g_1,f_1)(g_2,f_2)$ is also in $R_{0,2}$, where we use the obvious result \[ a_{3,0}=A_3(0)=A_1(0)A_2(0)=1. \] \medbreak \noindent{\bf Remark 4.3} We now give a more direct way to prove Theorem \ref{thm:1.3}. If $D_{1}$ and $D_{2}$ are in $R_{0,2}$, then the generating functions of their $A$-sequences satisfy \[ A_{1}(0)=A_{2}(0)=1\,\, \mbox{and}\,\, A''_{1}(0)=A''_{2}(0)=0.
\] From \eqref{1.7}, we have \[ A_{3}(0)=A_{2}(0)A_{1}(0)=1 \] and \bns A_{3}''(t)&&=A''_{2}(t)A_{1}\left( \frac{t}{A_{2}(t)}\right) +2A'_{2}(t)A'_{1}\left( \frac{t}{A_{2}(t)}\right)\left( \frac{t}{A_{2}(t)}\right)'\\ && +A_{2}(t)\left( A'_{1}\left( \frac{t}{A_{2}(t)}\right)\left( \frac{t}{A_{2}(t)}\right)'\right)', \ens which implies \bns A_{3}''(0)&&=2A'_{2}(0)A'_{1}(0)\frac{A_{2}(0)}{A^{2}_{2}(0)}+A_{2}(0)A''_{1}(0)\left( \frac{A_{2}(0)}{A^{2}_{2}(0)}\right)^{2}\\ &&+A_{2}(0)A'_{1}(0)( -2)A^{-2}_{2}(0)A'_{2}(0)=0. \ens Hence, $R_{0,2}$ is closed under the Riordan multiplication. Similarly, we may use \eqref{1.8} to prove that the inverse, $D^{-1}$, of any element $D\in R_{0,2}$ with the $A$-sequence $(1,a_{1}, 0, a_{3},\ldots)$ has the $A^{\ast}$-sequence $(1, a^{\ast}_{1}, 0, a^{\ast}_{3},\ldots)$. Thus $D^{-1}\in R_{0,2}$. More precisely, from \eqref{1.8}, \[ A^\ast (0)=\frac{1}{A(0)}=1, \] which implies $a^\ast_0=a_0=1$. Taking derivatives on both sides of \eqref{1.8} yields \[ (A^\ast)'\left(\frac{t}{A(t)}\right)\frac{A(t)-tA'(t)}{A(t)^2}=-\frac{A'(t)}{A(t)^2}. \] Substituting $t=0$ and noting $A(0)=1$, we obtain \[ (A^\ast)'(0)=-A'(0). \] Taking second derivatives on both sides of \eqref{1.8}, we have \bns &&(A^\ast)''\left(\frac{t}{A(t)}\right)\left(\frac{A(t)-tA'(t)}{A(t)^2}\right)^2\\ &&+(A^\ast)'\left(\frac{t}{A(t)}\right)\frac{-tA''(t)A(t)^2-2A(t)A'(t)(A(t)-tA'(t))}{A(t)^4} =\frac{2(A'(t))^2}{A(t)^3}-\frac{A''(t)}{A(t)^2}. \ens When $t=0$, one derives \[ (A^\ast)''(0)-2A'(0)(A^\ast)'(0)=2A'(0)^2-A''(0). \] Since $(A^\ast)'(0)=-A'(0)$, the above equation reduces to \[ (A^\ast)''(0)=-A''(0)=0, \] which yields $a^\ast_2=0$. \medbreak \begin{proposition}\label{pro:1.4} If two proper Riordan matrices $D_1=(g_1,f_1)$ and $D_2=(g_2,f_2)$ have type-I $B$-sequences $B_1$ and $B_2$, respectively, then the product of $D_1$ and $D_2$, \[ D_3=D_1 D_2=(g_1g_2(f_1), f_2(f_1)), \] has a type-I $B$-sequence $B_3$ whose generating function satisfies \bns &&B_3\left( \frac{t^2}{A_3(t)}\right) =B_2\left( \frac{t^2}{A_2(t)}\right)+\frac{1}{A_2(t)}B_1\left( \frac{t^2}{A_2(t)A_3(t)}\right)\\ &&\quad +\frac{t}{A_2(t)}B_2\left( \frac{t^2}{A_2(t)}\right) B_1\left( \frac{t^2}{A_2(t)A_3(t)}\right), \ens which, by \eqref{1.5}, simplifies to \[ B_3\left( \frac{t^2}{A_3(t)}\right)=B_2\left( \frac{t^2}{A_2(t)}\right)+B_1\left( \frac{t^2}{A_2(t)A_3(t)}\right). \] \end{proposition} \begin{proof} Substituting \eqref{1.5} into \eqref{1.7}, one may obtain the result. \end{proof} Combining Theorems \ref{thm:1.1} and \ref{thm:1.3}, we immediately have the following result. \begin{theorem}\label{thm:1.4} If a Riordan matrix $(g,f)$ has a type-I $B$-sequence, then it is in the subgroup $R_{0,2}$. \end{theorem} \begin{theorem}\label{thm:6.4} The set of Riordan matrices, denoted by $R_{1,1,1}$, with $A$-sequences of the form $(1, a_1, a_2,\ldots)$ and $Z$-sequences of the form $(z_0=a_1,0,z_2,\ldots)$ forms a subgroup of the Riordan group. \end{theorem} \begin{proof} Let $(g_1,f_1)$ and $(g_2,f_2)\in R_{1,1,1}$, and let $(g_3,f_3)=(g_1,f_1)(g_2,f_2)$, where the $A$-sequences $(a_{i,j})_{j=0,1,\ldots}$ and the $Z$-sequences $(z_{i,j})_{j=0,1,\ldots}$ of $(g_i,f_i)$, $i=1$ and $2$, satisfy the conditions \[ a_{i,0}=1,\quad a_{i,1}=z_{i,0}, \quad \mbox{and} \quad z_{i,1}=0 \] for $i=1$ and $2$. Then from \eqref{1.7} we have the generating function of the $A$-sequence of $(g_3,f_3)$, \be\label{1.7-1} A_3(t)=A_2(t)A_1\left( \frac{t}{A_2(t)}\right). \ee Hence from $A_i(0)=a_{i,0}=1$, the constant term of $A_3(t)$ is \[ A_3(0)=A_2(0)A_1(0)=1, \] i.e., $a_{3,0}=1$.
Furthermore, \eqref{1.7-1} also implies \[ a_{3,1}=A'_3(0)=A'_2(0)A_1(0)+A_2(0)A'_1(0)=a_{1,1}+a_{2,1}. \] From \eqref{1.7-2}, we obtain the generating function of the $Z$-sequence of $(g_3,f_3)$, \be\label{1.7-2-1} Z_3(t)=\left( 1-\frac{t}{A_2(t)}Z_2(t)\right) Z_1\left( \frac{t}{A_2(t)}\right)+A_1\left( \frac{t}{A_2(t)}\right) Z_2(t). \ee Therefore, the constant term of $Z_3(t)$ is \[ Z_3(0)=Z_1(0)+A_1(0)Z_2(0)=a_{1,1}+a_{2,1}=A'_3(0), \] or equivalently, \[ z_{3,0}=a_{3,1}. \] In addition, \eqref{1.7-2-1} gives, using $Z'_1(0)=z_{1,1}=0$ and $Z'_2(0)=z_{2,1}=0$, \bns Z'_3(0)&&=(-Z_2(0))Z_1(0)+Z'_1(0)+A'_1(0)Z_2(0)+A_1(0)Z'_2(0)\\ &&=-z_{1,0}z_{2,0}+a_{1,1}z_{2,0}=-z_{1,0}z_{2,0}+z_{1,0}z_{2,0}=0. \ens Let $(g,f)\in R_{1,1,1}$, and let $(g^\ast,f^\ast)=(g,f)^{-1}$, i.e., $g^\ast=1/(g\circ \bar f)$ and $f^\ast=\bar f$, where the $A$-sequence $(a_j)_{j=0,1,\ldots}$ and the $Z$-sequence $(z_{j})_{j=0,1,\ldots}$ of $(g,f)$ satisfy the conditions \[ a_{0}=1,\quad a_{1}=z_{0}, \quad \mbox{and} \quad z_1=0. \] Then from \eqref{1.8} and \eqref{1.8-2}, we have \be\label{1.8-1} A^\ast\left( \frac{t}{A(t)} \right)= \frac{1}{A(t)} \ee and \be\label{1.8-2-1} Z^\ast \left( \frac{t}{A(t)}\right)=\frac{Z(t)}{tZ(t)-A(t)}, \ee respectively. Denote the $A$-sequence and the $Z$-sequence of $(g,f)^{-1}$ by $(a^\ast_j)_{j=0,1,\ldots}$ and $(z^\ast_j)_{j=0,1,\ldots}$. Then \eqref{1.8-1} yields \[ a^\ast_0=A^\ast(0)=\frac{1}{A(0)}=1 \] and \[ a^\ast_1=(A^\ast(t))'|_{t=0}=-\frac{1}{A^2(0)}A'(0)=-a_1. \] Meanwhile, using \eqref{1.8-2-1} we obtain \[ z^\ast_0=Z^\ast(0)=-Z(0)=-z_0=-a_1=a^\ast_1. \] Finally, we have \bns z^\ast_1&&=(Z^\ast(t))'|_{t=0}=-\frac{Z'(0)A(0)-Z(0)(A'(0)-Z(0))}{A(0)^2}\\ &&=-\frac{Z(0)^2-A'(0)Z(0)}{A(0)^2}=a_1z_0-z_0^2=0, \ens because $a_1=z_0$ and $Z'(0)=z_1=0$. This completes the proof of the theorem. \end{proof} \hfill\rule{2.0mm}{2.0mm} \medbreak \noindent{\bf Example 4.1} The Riordan matrix $(1/(1-kt), t/(1-kt))$ begins \[ \left[\begin{array}{cccccccc} 1 & & & & & & &\\ k & 1 & & & & & &\\ k^2 & 2k & 1 & & & & &\\ k^3 & 3k^2 & 3k& 1 & & & &\cdots \\ k^4 &4k^3 & 6k^2 & 4k& 1 & & & \\ k^5 & 5 k^4& 10k^3 & 10k^2& 5k & 1 && \\ & & & \cdots & & & & \end{array} \right]. \] Its $A$-, $Z$-, and $B$-sequences are $(1,k,0,\ldots)$, $(k,0,\ldots)$, and $(k,0,\ldots)$, respectively, where the $B$-sequence is defined for all entries of the matrix. \medbreak Let $(g,f)$ be a Riordan matrix, where $g=\sum_{j\geq 0} g_j t^j$ and $f=\sum_{j\geq 1}f_jt^j$ with $g_0=1$ and $f_1\not=0$. Equation \eqref{eq:1.2} shows that the generating function $A(t)$ of the $A$-sequence, $(a_j)_{j=0,1,\ldots}$, of $(g,f)$ satisfies \[ f(t)=t(a_0+a_1 f+a_2f^2+\cdots), \] which implies \be\label{6.8} f_1=a_0\quad \mbox{and} \quad f_2=a_1f_1=a_0a_1. \ee Equation \eqref{eq:1.4} shows that the generating function $Z(t)$ of the $Z$-sequence, $(z_j)_{j=0,1,\ldots}$, of $(g,f)$ satisfies \[ g(\bar f)=\frac{1}{1-\bar f Z}. \] The above equation can be written as \[ Z=\frac{g(\bar f)-1}{\bar f g(\bar f)}=\frac{ g_1+g_2\bar f+g_3\bar f^2+\cdots}{g_0+g_1\bar f+g_2\bar f^2+\cdots}. \] Thus, \be\label{6.9} z_0=Z(0)=g_1/g_0=g_1, \ee and, by noting $\bar f'(0)=1/f'(0)=1/f_1$, \be\label{6.10} z_1=Z'(0)=\frac{g_0g_2\bar f'(0)-g_1^2\bar f'(0)}{g_0^2}=\frac{g_0g_2-g_1^2}{f_1g_0^2}. \ee Hence we have the following result. \begin{corollary}\label{cor:6.5} Let $(g,f)$ be a Riordan matrix, where $g=\sum_{j\geq 0} g_j t^j$ and $f=\sum_{j\geq 1}f_jt^j$ with $g_0=1$ and $f_1\not=0$. Then $(g,f)\in R_{1,1,1}$ if and only if $f_1=1$, $f_2=g_1$, and $g_2=g_1^2$, or equivalently, $f_1=1$, $f_2=g_1$, and $g_2=f_2^2$.
\end{corollary} \begin{proof} One may use \eqref{6.8}--\eqref{6.10} to translate the necessary and sufficient conditions $a_0=1$, $z_0=a_1$, and $z_1=0$ for $(g,f)\in R_{1,1,1}$ into \[ f_1=1,\quad g_1=f_2,\quad \mbox{and}\quad g_0g_2=g_1^2, \] which proves the corollary. \end{proof} \section{$A$- and $B$-sequences of Pascal-like Riordan matrices} We shall call a lower-triangular matrix $(a_{n,k})$ {\it Pascal-like} if 1. $a_{n,k}=a_{n,n-k}$ and 2. $a_{n,0}=a_{n,n}=1$. It is clear that not all Pascal-like matrices are Riordan matrices. If a Pascal-like matrix is also a Riordan matrix, for example, the Pascal matrix, then it is called a Pascal-like Riordan matrix. A Pascal-like matrix is the coefficient matrix of a family of monic reciprocal polynomials. Here, a polynomial $P_{n}(x)=\sum^{n}_{k=0}a_{n,k}x^{k}$ of degree $n$ is said to be reciprocal if \[ P_{n}(x)=x^{n}P_{n}(1/x). \] Hence, we have \[ [x^k]P_n(x)=[x^k]x^nP_n(1/x)=[x^k]\sum^n_{j=0}a_{n,j}x^{n-j}, \] which implies \[ a_{n,k}=a_{n,n-k}. \] \begin{theorem}\label{thm:4.1} Let $(a_0,a_1,a_2,\ldots)$ be the $A$-sequence of a Pascal-like Riordan matrix $P=(p_{n,k})_{n\geq k\geq 0}$. Then \be\label{4.1} a_1(1-a_1) \,|\, a_j \ee for $j\geq 2$, or equivalently, $a_2\,|\,a_j$ for all $j\geq 2$, due to $a_2=a_1(1-a_1)$. Furthermore, we have the recursive formula \be\label{4.2} a_{j}=(j-1)a_1(1-a_1)-a_2p_{j,2}-\cdots-a_{j-1}p_{j,j-1} \ee for $j\geq 2$. \end{theorem} \begin{proof} We prove \eqref{4.1} by induction. Let $P$ have $A$-sequence $(a_0,a_1,a_2,\ldots)$. Then it is easy to see that $a_0=1$ and \be\label{4.3} p_{n,n-1}=1+(n-1)a_1 \ee for $n\geq 1$. More precisely, we have $p_{1,0}=1$ and $p_{2,1}=1+a_1p_{1,1}=1+a_1$. If $p_{n-1,n-2}=1+(n-2)a_1$, then \[ p_{n,n-1}=p_{n-1,n-2}+a_1p_{n-1,n-1}=1+(n-1)a_1. \] Since \[ p_{3,1}=1+a_1p_{2,1}+a_2 \] and $p_{3,1}=p_{3,2}$, from \eqref{4.3} we have \[ 1+a_1p_{2,1}+a_2=1+2a_1, \] or equivalently, \[ a_1(1+a_1)+a_2=2a_1, \] which shows that $a_2=a_1(1-a_1).$ Assume that $a_1(1-a_1)\,|\, a_j$ for all $2\leq j\leq k-1$. From the definition of a Pascal-like Riordan matrix, $p_{k+1,1}=p_{k+1,k}$, and we have \[ p_{k+1,1}=1+a_1p_{k,1}+a_2p_{k,2}+\cdots +a_{k-1}p_{k,k-1}+a_{k}=1+ka_1=p_{k+1,k}. \] Thus, \bns a_{k}&=&ka_1-a_1p_{k,1}-a_2p_{k,2}-\cdots-a_{k-1}p_{k,k-1}\\ &=&ka_1-a_1(1+(k-1)a_1)-a_2p_{k,2}-\cdots-a_{k-1}p_{k,k-1}\\ &=&(k-1)a_1(1-a_1)-a_2p_{k,2}-\cdots-a_{k-1}p_{k,k-1}, \ens which implies $a_1(1-a_1)\,|\,a_{k}$ by the induction assumption. The rightmost expression in the above equations implies \eqref{4.2}. Since $a_2=a_1(1-a_1)$, we have $a_2\,|\,a_j$ for all $j\geq 2$. \end{proof} \begin{corollary}\label{cor:4.2} No Pascal-like Riordan matrix has a $B$-sequence except the matrix $(1/(1-t),t)$ and the Pascal matrix $(1/(1-t), t/(1-t))$. Here, the type-I $B$-sequence of $(1/(1-t),t)$ is $(0,0,0,\ldots)$ while its type-II $B$-sequence is $(1,0,0,\ldots)$. Both the type-I and type-II $B$-sequences of $(1/(1-t), t/(1-t))$ are $(1,0,0,\ldots)$. \end{corollary} \begin{proof} By Theorem \ref{thm:1.1}, a Pascal-like Riordan matrix can have a $B$-sequence only if its $A$-sequence satisfies $a_2=a_1(1-a_1)=0$, i.e., $a_1=0$ or $a_1=1$, or equivalently, $P=(1/(1-t),t)$ or $P=(1/(1-t), t/(1-t))$. \end{proof} \noindent{\bf Acknowledgements} We thank the referees and the editor for their careful reading and helpful suggestions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} One of the most remarkable insights gained through the recent developments in the studies of duality symmetries of string theory is the possibility of formulating nonperturbative string theory in terms of supersymmetric Yang-Mills theories. From the ordinary viewpoint of perturbative string theory, Yang-Mills theories are regarded as the low-energy effective theories describing the interactions of gauge-field excitations of strings. The discovery of the crucial roles \cite{pol} played by Dirichlet branes (D-branes) in realizing string dualities, however, paved the way toward possible reformulations of string theory using new degrees of freedom other than the fundamental strings, on the basis of an entirely different interpretation of Yang-Mills fields. Since string field theories assuming the fundamental strings themselves to be the basic degrees of freedom do not seem to be appropriate for nonperturbative studies of the theory, such an alternative possibility has long been sought, but had never materialized in concrete form. In the present report, I would like to review the status of Yang-Mills matrix models from the viewpoint of asking the question, ``{\it Why could the models be the theory of quantum gravity?}'' In this written version of the talk, some of the subjects which I presented at the YITP workshop, held in succession to the Nishinomiya Yukawa symposium, will also be included and expanded. We will start from the so-called Matrix theory, which was proposed first and has been the focus of the most intensive studies. Next we will turn to the so-called AdS/CFT(SYM) correspondence. The purpose of this first part is to review the known results critically and provide a few new observations on some unsolved issues. We will then proceed to the issue of why Yang-Mills matrix theories could be models for quantum gravity. A special emphasis will be put on a possible hidden symmetry structure which would ensure the emergence of general covariance in the long-distance regime. The last part, discussing the problem of background independence, is a preliminary report from a still unfinished project and will be of very speculative character. \section{Yang-Mills matrix models and supergravity} The D-branes are objects carrying Ramond-Ramond (RR) charges. They are necessary for realizing duality symmetry among various perturbative vacua of string theory, since the transformations associated with duality interchange Neveu-Schwarz-Neveu-Schwarz (NS-NS) and RR charges. In the case of perturbative closed-string theories, it is known that the type IIA or IIB theory allows even or odd (spatial) dimensional D-branes, respectively. In particular, the lowest dimensional objects are the D0-brane (D-particle) in IIA and the D(-1)-brane (D-instanton) in IIB. In low-energy effective field theory, namely, IIA or IIB supergravity, D-branes are represented as solitonic classical solutions. From the viewpoint of the ordinary world-sheet formulation of the theories, they are described as collective modes of fundamental strings, in which the collective coordinates can be identified with the space-time coordinates at the boundaries of open strings with Dirichlet conditions. In old perturbative string theory, it had been thought that open strings cannot be coupled to closed strings consistently, since they necessarily break the N=2 supersymmetry of the closed-string sector.
In our new understanding, the partial breakdown of supersymmetry just indicates the existence of D-branes as physical objects, and the remaining supersymmetry is reinterpreted as the manifestation of the BPS property of D-branes. To describe the dynamics of D-branes, we therefore have to study coupled systems of closed strings and open strings with dynamical Dirichlet boundary conditions. Here it is worthwhile to recall an old but well-known formulation of open strings, namely, Witten's string field theory \cite{wittensft}. The latter only uses open-string fields as the dynamical degrees of freedom. Nevertheless, it includes the whole dynamics of interacting closed and open strings. Namely, it is possible to describe the dynamics of closed strings in terms of open-string degrees of freedom without explicitly introducing fields corresponding to closed strings. This remarkable property is actually a consequence of the old $s$-$t$ duality which is the basis for conformal invariance of the world-sheet string dynamics. Furthermore, if there were circumstances where we could neglect the excitation modes of open strings higher than the lowest Yang-Mills degrees of freedom, we could even imagine situations where the whole dynamics, including quantum gravity effects, can be described by Yang-Mills theories. Let us briefly review some representative proposals along this line. \subsection{D-particle model or Matrix theory} The first such model is called `Matrix theory'. The model is based on the 1+0 dimensional Yang-Mills theory with maximal (N=16) supersymmetry, which is obtained by dimensional reduction from the 10 (=1+9) dimensional N=1 supersymmetric Yang-Mills theory. \begin{eqnarray} S \hspace{-0.1cm}&=&\hspace{-0.2cm} \int dt \, {\rm Tr} \Bigl( {1\over 2g_s\ell_s} D_t X_i D_t X_i + i \theta D_t \theta +{1 \over 4g_s\ell_s^5} [X_i, X_j]^2 - {1\over \ell_s^2}\theta \gamma_i [\theta, X_i]\Bigr) \\ \hspace{-0.2cm} &=&\hspace{-0.1cm} \int dt \, {\rm Tr} \Bigl( {1\over 2R} D_t X_i D_t X_i + i \theta D_t \theta +{R \over 4\ell_P^6 } [X_i, X_j]^2 - {R\over \ell_P^3}\theta \gamma_i [\theta, X_i]\bigr) \, , \label{d0matrixmodel} \end{eqnarray} where $\ell_s$ and $ g_s$ are the string length and coupling constant, respectively. In the second line, we have introduced the 11 dimensional parameters of M-theory, the compactification radius $R=g_s\ell_s$ along the 11th direction (10th spatial direction) and the 11 dimensional Planck length $\ell_P= g_s^{1/3}\ell_s$. The Higgs fields $X_i \, (i=1, 2, \ldots, 9)$ are dimensionally reduced U($N$) gauge-field matrices whose diagonal components are identified with the collective coordinates of $N$ D-particles, while the off-diagonal components are the fields of the lowest open string modes connecting the D-particles. The 16 component Grassmann (hermitian) matrices $\theta$, transforming as an SO(9) spinor, are the superpartners of the Higgs fields. In the Matrix theory conjecture proposed in ref. \cite{bfss}, this action is interpreted as the effective action of the theory in the Infinite-Momentum Frame (IMF) where the 11th total momentum $P_{11}$ is taken to be infinitely large. Following the M-theory identification of the 11th momentum with the RR 1-form charge of D-particles, it is assumed that \begin{equation} P_{11}= N/R\, . \end{equation} Thus, for any finite fixed $R$, the IMF limit corresponds to taking the large $N$ limit, $N\rightarrow \infty$. In other words, this requires that the IMF Hamiltonian $P^-=P^0-P^{11}$ must have nontrivial dynamics in the part which scales as $1/N$.
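As an aside, the role of the commutator potential in \eqref{d0matrixmodel} can be illustrated numerically. The following Python sketch (our own illustration, in units $R=\ell_P=1$) evaluates the bosonic potential energy $-\frac{1}{4}{\rm Tr}[X_i,X_j]^2$ summed over all pairs $(i,j)$: it vanishes for mutually commuting (diagonal) matrices, whose eigenvalues are interpreted as the positions of the $N$ D-particles, and it is positive for generic hermitian matrices, reflecting the energy cost of exciting the off-diagonal open-string modes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 9                       # N D-particles, 9 transverse matrices X_i

def potential(X):
    """Bosonic potential -Tr[X_i,X_j]^2 / 4, summed over i, j (units R = l_P = 1)."""
    V = 0.0
    for i in range(D):
        for j in range(D):
            C = X[i] @ X[j] - X[j] @ X[i]          # commutator [X_i, X_j]
            V -= 0.25 * np.trace(C @ C).real       # Tr C^2 <= 0 for anti-hermitian C
    return V

# diagonal (commuting) matrices: flat directions = positions of N D-particles
X_diag = [np.diag(rng.normal(size=N)) for _ in range(D)]
print(potential(X_diag))          # 0.0: separated D-particles cost no potential energy

def herm(n):                      # a generic hermitian matrix
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

X_gen = [herm(N) for _ in range(D)]
print(potential(X_gen))           # > 0: off-diagonal open-string modes are excited
\end{verbatim}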
One of the reasons for this conjecture is that in the IMF (part of) the (super) Poincar\'e symmetry is reduced to (super) Galilean symmetry in the 9 dimensional transverse space, and the above action precisely exhibits that symmetry. In particular, to be a theory in 11 dimensional space-time, it should exhibit the $N=1$ supersymmetry in 11 dimensions, which amounts to $N=2$ supersymmetry in 1+9 dimensions. Indeed the model has, in addition to the $N=16$ supersymmetry in 1+0 dimensions inherited from the 10 dimensional $N=1$ supersymmetry \begin{equation} \delta^{(1)} \theta = {1\over 2} ({1\over R}D_tX_i\gamma^i + {i\over 2\ell_P^3}[X_i, X_j]\gamma^{ij})\epsilon^{(1)}, \end{equation} a trivial supersymmetry under \begin{equation} \delta^{(2)} \theta = \epsilon^{(2)} , \end{equation} where $\epsilon^{(1)}$ and $\epsilon^{(2)}$ are two independent constant Majorana spinors. The algebra of these two supersymmetry transformations closes with central charges up to a field-dependent gauge transformation. For a single D-particle state at rest, as the simplest example, the first symmetry is unbroken, corresponding to the BPS property of the state, while the second is broken. Furthermore, the dimension of the multiplet of single-particle states fits the desired multiplet corresponding to the first Kaluza-Klein mode of the 11 dimensional multiplet containing the massless graviton and gravitino: the 16 component Grassmann coordinate $\theta$ leads to a $2^{16/2}=256=128+128$ dimensional representation of the transverse SO(9) ($\sim$ Spin(9)) group, which is precisely the physical dimension of the 11 dimensional supergravity multiplet. Of course, the Galilean symmetry is not sufficient to justify the decoupling of the higher string modes. That is the crucial dynamical assumption of the model. A piece of evidence for this conjecture comes from an old observation made long ago in connection with the theory of membranes. Namely, the same model can be interpreted as a special regularized version of the membrane action \cite{dhn} in the light-cone gauge in 11 dimensions. The large $N$ limit in this interpretation is nothing but the continuum limit. The fundamental string of the 10 dimensional type IIA theory is identified with the membrane wrapped along the compactified 11th direction. If this really works, it is quite natural to expect that, after taking the appropriate large $N$ limit, the model would reproduce the whole dynamics including the effects corresponding to higher string modes. The crucial new observation here is that the model should be interpreted as describing arbitrary multi-body systems of membranes or D-particles. This solves a long-standing problem in the formulations of supersymmetric membranes, namely, the difficulty of the continuous energy spectrum. The energy spectrum of the system {\sl must} be continuous from zero if it is to be a theory of multi-body systems including massless particles. For the validity of this interpretation, it is necessary that there exist one and only one threshold bound state, identified with the single-particle graviton supermultiplet, for each fixed $N$ of U($N$). At least for $N=2$, this is consistent with the Witten index of the model \cite{index}. Another impetus for this model is the proposal that the model might be meaningful even for finite $N$.
That is, Susskind \cite{suss} suggested that the model for finite $N$ should be interpreted within the framework of the so-called discrete light-cone quantization (DLCQ), in which the compactification is made along the light-like direction $x^-= x^{11}-x^0$ instead of a space-like direction. Such a formalism has often been discussed as a way to regularize gauge field theories. In fact, this proposal can be related to the IMF interpretation by considering the limit of small $R$ keeping $N$ fixed, which is another way of making $P_{11}$ large. If we boost the system simultaneously with taking this limit, we can keep the longitudinal momentum $P_-$ finite, and the condition of compactification is imposed on the $x^-$ direction with finite compactification radius in the small $R$ limit. This is essentially the argument given in ref. \cite{seisen}. Equivalently, using the original frame with sufficiently small $R$, the compactification condition $x^{\pm}\sim x^{\pm}+2\pi R$ in the space-like 11th dimension can be approximated by the light-like condition $(x^{-}, x^{+})\sim (x^{-}+2\pi R, x^{+})$, since in the limit we are only interested in the small longitudinal energy $P_+\sim P_i^2/2P_-$, proportional to $R$, while $P_-\sim P_{11} \sim O(1/R)$ becomes large. Now, since the limit forces the 11 dimensional Planck length to be small compared to the string length, $\ell_P \ll \ell_s$, we expect that the interaction of D-particles can be described by the lowest open string modes, at least at distance scales shorter than the string length, according to the result of ref. \cite{dkps}. Also, the 11 dimensional Newton constant $G_{11} \sim g_s^3 \ell_s^9$ becomes small in this limit. Thus, from the viewpoint of closed strings or membranes, the interaction of D-particles should be approximated well by classical supergravity, at least for distance scales much larger than the string length $\ell_s$. If one naively {\bf assumes} that Matrix theory is correct and that the justification of Matrix theory comes {\bf solely} from the infinite-momentum limit, one might expect that Matrix theory for sufficiently small $R$ with finite $N$ must reproduce classical supergravity at all distance scales larger than the 11 dimensional Planck length. This would in particular require that the lowest open string modes alone describe correctly the gravitational interaction of D-particles over this wide range of distance scales, namely, from infinitely large distances all the way down to near the 11D Planck length, which is far below the string scale. This would be quite a surprising conclusion, since we usually think that the duality between open and closed strings is due to the existence of the full tower of higher string modes on both the closed- and open-string sides. In particular, the effective dynamics near the string length after eliminating the higher string modes would necessarily be non-local, either in terms of the lowest graviton fields alone or of the lowest gauge modes alone. Before discussing further the meaning of this, and where this naive expectation may be invalidated, let us briefly review known results related to this issue. \subsection{Matrix theory {\it vs.} supergravity} In fact, as discussed in ref. \cite{dkps}, supersymmetry ensures that the above conclusion is indeed true at least in the one-loop approximation of the open string computation.
The two-body interaction of D-particles, $v^4/r^7$ in the leading approximation of the velocity expansion, is correctly reproduced by the lowest open string modes alone. Namely, the same expression is valid in the large $r$ region where only the lowest modes of the closed string couple, as described by supergravity. Furthermore, at least for two-body interactions, a non-renormalization theorem \cite{pss} has been established, demanding that the one-loop result for the leading term is not renormalized by higher order effects. This theorem can be generalized to the next order, $v^6$, for the two-body interaction, and is consistent with the result of an explicit two-loop computation \cite{bbpt} of the two-body interactions. Whether a similar non-renormalization theorem is valid for more general multi-body interactions is not known. Extension of the argument given in \cite{pss} to general $N$-body interactions \cite{lowe} is difficult. In general, however, we hope that some symmetry together with certain additional inputs would fix the theory of gravity completely. For example, we believe that general covariance and locality uniquely lead to General Relativity at sufficiently large distances. So the question is whether the supersymmetry of the matrix model (\ref{d0matrixmodel}) is sufficient to ensure general coordinate invariance at large distances, as interpreted in 11 dimensional space-time. If we assume the existence of the massless graviton supermultiplet and Lorentz symmetry in the flat background in 11 dimensions, the only consistent low-energy effective theory is believed to be supergravity. Establishing Lorentz invariance of the model in the limit $R, N \rightarrow \infty$ would thus be most desirable. At least in the membrane approximation, this is very plausible. However, the membrane approximation is not sufficient to establish the Lorentz symmetry, since the interpretation of the matrix model is really very different from that of the membrane, as emphasized already. Unfortunately no concrete proposal for the general case has been given. It is thus desirable to perform explicit computations of multi-body interactions. Let us here briefly review the result of explicit computations of the 3-body interaction of D-particles at finite $N$. A scaling argument shows that the effective lagrangian of the 3-body interaction must take the following form
\begin{equation}
L_3 \sim {G_{11}^2\over R^5}{v^6\over r^{14}}\, ,
\end{equation}
where the factor $v^6/r^{14}$ only indicates the power behavior with respect to relative velocities ($v$) and relative distances ($r$). The power $R^{-5}$ with respect to the compactification radius is required by boost invariance along the 11th direction. Note that in terms of the Yang-Mills coupling $g_{{\rm YM}}^2 \propto g_s$ the factor $G_{11}^2/R^5\propto g^2_{{\rm YM}}$ corresponds to the two-loop contribution. For small $g_s$, the compactification radius is small, but the Newton constant is also vanishing, such that the expansion parameter $G_{11}^2/R^5$ is arbitrarily small. This indicates that the regions of validity of classical supergravity and of perturbative computation in the matrix model might overlap, as far as only the parameters are concerned, neglecting the real roles of the dynamical variables. Therefore it is not unreasonable that the matrix model with finite $N$ is able to reproduce supergravity results to some finite orders in the Newton constant.
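To make this power counting explicit, one may use the standard type IIA relations $R=g_s\ell_s$, $\ell_P=g_s^{1/3}\ell_s$ and $G_{11}\sim g_s^3\ell_s^9$ (the last was already quoted above); the following one-line check is added here and is not part of the original argument:
\begin{equation}
{G_{11}^2\over R^5} \sim {(g_s^3\ell_s^9)^2\over (g_s\ell_s)^5} = g_s\,\ell_s^{13} = g_{{\rm YM}}^2\, \ell_s^{16} \qquad \bigl(g_{{\rm YM}}^2\sim g_s\ell_s^{-3} \ \hbox{for D-particles}\bigr) ,
\end{equation}
so the prefactor indeed carries a single power of $g_{{\rm YM}}^2$, the two-loop order, and $L_3$ has the dimension of an energy, $\ell_s^{18}/\ell_s^{19}$, as it should.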
In classical supergravity, we can derive the following explicit form for the interaction lagrangian
\begin{equation}
L_3 = L_V +L_Y ,
\end{equation}
\begin{equation}
L_V= -\sum_{a,b,c}{(15)^2 N_aN_bN_c\over 64 R^5M^{18}} v_{ab}^2 v_{ca}^2 (v_{ca}\cdot v_{ab}) {1\over r_{ab}^7}{1\over r_{ca}^7} ,
\label{eq256}
\end{equation}
\begin{eqnarray}
L_Y &=&-\sum_{a, b, c} {(15)^3 N_aN_bN_c \over 96(2\pi)^4R^5M^{18}} \Bigl[ -v_{bc}^2v_{ca}^2(v_{cb}\cdot \nabla_c) (v_{ca}\cdot \nabla_c)\nonumber \\
&&+{1\over 2} v_{ca}^4(v_{cb}\cdot \nabla_c)^2 +{1\over 2}v_{bc}^4 (v_{ca}\cdot \nabla_c)^2 \nonumber \\
&&-{1\over 2}v_{ba}^2 v_{ac}^2(v_{cb}\cdot \nabla_c)(v_{bc}\cdot \nabla_b) \nonumber\\
&&+{1\over 4}v_{bc}^4 ( v_{ba}\cdot \nabla_b)(v_{ca}\cdot \nabla_c) \Bigr]\Delta(a,b,c)
\label{eq258}
\end{eqnarray}
where
\[
\Delta(a,b,c) \equiv \int d^9 y {1\over |x_a-y|^7 |x_b -y|^7|x_c-y|^7}
\]
\begin{equation}
= {64(2\pi)^3 \over (15)^3} \int_0^{\infty} d^3\sigma (\sigma_1\sigma_2 + \sigma_2\sigma_3 + \sigma_3\sigma_1)^{3/2} \exp \bigl( -\sigma_1|x_a-x_b|^2 -\sigma_2|x_b -x_c|^2 -\sigma_3|x_c-x_a|^2 \bigr)
\label{eq247}
\end{equation}
and the indices $a, b, c, \ldots$ label the D-particles, whose masses are $N_a/R, N_b/R, N_c/R, \ldots$. The Planck mass $M=1/\ell_P$ is defined by $G_{11}=2\pi^5/M^9$. The above separation into the V-part and the Y-part roughly corresponds to the contributions from the seagull-type diagrams and from the diagrams with one 3-point self-interaction of the graviton, respectively. Because of the BPS property, the contribution from the Y-part vanishes whenever any two D-particles have parallel velocities. On the side of Matrix theory, we compute the scattering phase shift in the eikonal approximation. Each of the D-particles with masses $N_a/R, N_b/R, N_c/R, \ldots$ is approximated as a cluster of the corresponding number ($N_a, N_b, N_c, \ldots$) of D-particles, moving parallel within each cluster, with the smallest unit of mass $1/R$. We can again separate the two-loop contributions into V- and Y-types. The Y-type contribution only comes from the diagrams with two 3-point vertices. The V-type contribution comes from the diagrams with one 4-point vertex and also from the diagrams with two 3-point vertices in which one of the propagators is canceled by the derivatives acting on the 3-point vertices. For more details, the reader should consult our original papers \cite{oy1}\cite{oy2}. It turns out that the V-type contribution to the eikonal phase shift can be written as the time integral of the above lagrangian $L_V$. For the Y-type contribution, which is vastly more complicated, we have confirmed that the result of explicit time integration of the lagrangian $L_Y$ precisely agrees with the phase shift obtained from the matrix model. We can also show the precise correspondence at the level of the equations of motion on both sides, including the effect of recoil \cite{oy2}, to the present order of approximation. In the absence of general arguments which may guarantee the agreement between matrix theory with finite $N$ and supergravity in the long-distance limit, the above 3-body computation is the strongest evidence so far for the validity of the Matrix theory conjecture in its DLCQ interpretation. A related computation involving the nonlinear graviton interaction has also been done for graviton scattering in an orientifold background \cite{daniel}, and exact agreement is verified.
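The integral representation (\ref{eq247}) follows from the standard Schwinger-parameter trick; since this step is not spelled out in the text, we record it here. Using
\begin{equation}
{1\over |x|^7} = {1\over \Gamma(7/2)}\int_0^{\infty} d\sigma\, \sigma^{5/2}\, e^{-\sigma |x|^2}\, , \qquad \Gamma(7/2)={15\over 8}\sqrt{\pi}\, ,
\end{equation}
for each of the three factors in $\Delta(a,b,c)$, the $d^9y$ integral becomes Gaussian and produces a factor $\bigl(\pi/(\sigma_1+\sigma_2+\sigma_3)\bigr)^{9/2}$; a change of Schwinger variables then yields (\ref{eq247}), whose prefactor $64(2\pi)^3/(15)^3$ is precisely $\bigl(8/(15\sqrt{\pi})\bigr)^3\pi^{9/2}$.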
Extension of these computations to higher-loop/higher-body interactions is conceptually not difficult, but is technically formidable, and no complete computations for higher cases have been reported yet, except for a partial computation \cite{dine} which indicates some signal of a possible discrepancy with supergravity at 3-loop order. It should be emphasized here that, given only the connection between the matrix model as the low-energy effective theory for D-particles in type IIA theory, on one hand, and the connection of supergravity and closed strings, on the other, the agreement of D-particle scatterings between supergravity and the matrix model at arbitrarily {\bf large} distances is in no sense a logical consequence. Remember that the argument of ref. \cite{seisen} is not applicable at distances near and larger than the string scale. Suppose that the original BFSS conjecture, namely that the agreement is achieved in the large $N$ limit (and for fixed $R$), is true. Then the disagreement between supergravity and matrix theory at finite $N$ with small $R$, if it indeed occurs, must be due to the neglect of bound-state effects in forming the states of D-particles with large longitudinal momentum. Namely, no matter how small $R$ is, only for sufficiently large $N$ would we expect that the effect of higher string modes, which would ensure the validity of the $s$-$t$ duality between open and closed strings, is correctly taken into account. To check whether matrix theory can give sensible results in this way is, however, a very difficult problem, since for large $N$ the size of the graviton is known to grow indefinitely \cite{bfss} in the limit, and hence we have to deal with the complicated many-body dynamics of D-particles (or partons). Given this situation, it is desirable to have definite criteria on the basis of which agreements or disagreements between supergravity and matrix theory can be predicted by general arguments. For example, if we assume the correspondence between supergravity and the D0-matrix model following Maldacena's general conjecture \cite{maldacena}, which will be the subject of the next subsection, the validity of the classical 10 dimensional supergravity description is expected for the distance scales \cite{itzak} $\ell_PN^{1/7} \ll r \ll \ell_P N^{1/3}$, where the first and the second inequalities come from the weak coupling condition and the small curvature condition, respectively. At the lower end, the effective radius along the 11th direction becomes of the same order as $\ell_P$. Thus from the viewpoint of 11 dimensions, this should not be regarded as the limit of the supergravity description as long as the curvature radius is much larger than the Planck length, although the 10 dimensional description is no longer valid. However, the upper limit indicates that the agreement with classical supergravity at arbitrarily large distances can only be achieved in an appropriate large $N$ limit. Namely, the parameter $N$ effectively plays the role of an infrared cutoff for the theory, not only with respect to the 11th direction but also with respect to the transverse space in the bulk, as we have argued before from the correspondence between the matrix model and a regularized theory of membranes. In other words, the low-energy long-distance physics of supergravity is governed by the high-energy physics of open strings, where in general we cannot neglect higher string (or membrane) modes.
However, we should also keep in mind that the near-horizon limit, approximating the factor $1+ q/r^7$ by $q/r^7$, is only valid if $r\ll (g_sN)^{1/7}$, where $q\propto g_sN$, and hence the argument cannot be extended to the upper limit $\ell_P N^{1/3}$ in the strong coupling region. Thus, strictly speaking, we cannot be sure whether Matrix theory reproduces the large distance behavior of classical supergravity in the large $N$ limit for {\bf fixed} $g_s$ when $g_sN\gg 1$, even if we assume the validity of the Maldacena correspondence in its original form. Of course, the Maldacena conjecture only proposes sufficient conditions, and hence does not necessarily exclude the possibility that the region of validity extends beyond these conditions because of some (hidden) symmetry constraints, depending on the type of physical quantities in question. We should also expect, from a more general viewpoint, that some (but perhaps already `built-in') symmetry must be responsible for the matrix model reproducing supergravity in spite of the rapid growth of the size of the graviton in the large $N$ limit. Perhaps the precise agreement of the 3-body interaction in the above finite $N$ calculation should be interpreted as a partial indication of the existence of such a higher symmetry. Concerning the question of why the matrix model can be a theory of gravity, one of the other crucial unsolved problems is how to extend the model to general curved backgrounds. It has been argued that no simple modification of the quantum mechanical lagrangian (\ref{d0matrixmodel}) for curved space can reproduce the supergravity result, even at the order $v^4$, for the D-particle interaction in curved space at finite $N$. This seems to indicate that curved backgrounds cannot, in general, be described by finite $N$ models. Indeed, this is not unreasonable, since to really modify the background in a self-consistent fashion within the framework of M-theory, we must consider the condensation of gravitons. It is difficult to treat a finite condensation of gravitons in the present framework of Matrix theory, which assumes fixed $N$, however large it is. In the last part of this talk, I will give a preliminary consideration of graviton condensation in the simpler case of the type IIB matrix model \cite{ikkt}. As for the possibility of modifying the action for curved backgrounds, an axiomatic approach called D-geometry \cite{douglas} has been suggested. We have to wait and see whether this approach can resolve the above issues. We also mention a recent important work \cite{taylor} discussing the change of the background in Matrix theory, on the basis of one-loop computations of the interactions between an arbitrary pair of extended objects in the theory. For an earlier approach from the viewpoint of membrane dynamics in curved backgrounds, see \cite{dewit} and references therein.

\subsection{AdS/SYM correspondence}

As already mentioned, another recent development closely related to Matrix theory is the conjectured correspondence \cite{maldacena} between supergravity in an anti de Sitter background on one hand and the super Yang-Mills theory of D-branes on the other. This is essentially based on the following two observations. Firstly, the low-energy (low-velocity) dynamics of many D-branes which are situated almost on top of each other is well described by the effective super Yang-Mills theory for any finite $N$, since we can assume the decoupling of higher modes of open strings for the same reason as we have argued in the case of Matrix theory.
Secondly, the field-theory description in terms of supergravity is expected to be simultaneously effective when the curvature near the horizon becomes sufficiently small compared with the string scale. In the case of the D3-brane, in particular, the curvature radius at the horizon is of order
\begin{equation}
R_c \propto (g_{{\rm YM}}^2 N)^{1/4}\ell_s .
\end{equation}
The D3-brane is special in that the background dilaton is constant; the string coupling can thus be made arbitrarily small, keeping $R_c$ large ($g_{{\rm YM}}^2 N\gg 1$), if $N$ is sufficiently large. Then, by the duality between open and closed strings, we naturally expect that the descriptions of the dynamics of D3-branes in terms of supersymmetric Yang-Mills theory or of type IIB supergravity are both valid. In other words, super Yang-Mills theory in the large $N$ limit is expected to be `dual' to supergravity in the D3-background in the near-horizon limit. The D3-brane metric in the near-horizon limit is the direct product, AdS$_5\times$S$^5$, of the five dimensional anti de Sitter space-time AdS$_5$ and the five dimensional sphere S$^5$. Thus the metric has the isometry group $SO(4,2)\times SO(6)$. Correspondingly, the Yang-Mills theory for D3-branes is the $N=4$ superconformal Yang-Mills theory, which has the same conformal symmetry $SO(4,2)$ and global R-symmetry $SO(6)$. Using this correspondence, we can for example predict the spectrum of the superconformal Yang-Mills theory in the large $N$ limit by analyzing the Kaluza-Klein spectrum around AdS$_5 \times $S$^5$. A more concrete prescription, which allows us to connect the correlators of the two sides, has been proposed in \cite{gkp}. It essentially says that the effective action of supergravity, for supergravity fields which satisfy appropriate boundary conditions at the boundary of the AdS space (opposite to the horizon), is the generating functional for the correlators of super Yang-Mills theory. The external fields of the latter generating functional, coupling to operators of the Yang-Mills theory, are nothing but the boundary values of the bulk fields of supergravity. Many computations of correlators have been performed based on this conjecture. However, it seems that this prescription has never been derived logically from the duality between open and closed strings. For example, it is not clear why the boundary condition at the boundary of the AdS space-time can dictate the choice of operators of the large $N$ Yang-Mills theory, since naively the D-branes, as the heavy source producing the AdS background, seem to be situated at the {\sl opposite} `boundary' of the AdS space. In the following, we will first discuss some interesting aspects related to the correspondence of the conformal symmetries on the two sides, and then come back to the issue of correlators later. The metric of AdS$_5\times$S$^5$ is given by
\begin{equation}
ds^2 = \alpha'\Bigl( {R_c^2 \over U^2}(dU^2 + U^2d\Omega_5^2) + {U^2\over R_c^2}dx^2_{4} \Bigr) ,
\label{ads5metric}
\end{equation}
where $U=r/\alpha' \, \, (\alpha'\propto \ell_s^2)$ is the energy of an open string stretched from the source D3-branes at the origin to a probe D3-brane. The four dimensional flat metric $dx_4^2$ is interpreted as describing the world-volume of the source consisting of $N$ D3-branes which are almost coincident with each other.
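As a quick check that (\ref{ads5metric}) is indeed of the AdS$_5\times$S$^5$ form, the substitution $z=R_c^2/U$ (a standard change of coordinates, added here for the reader's convenience) brings the metric to the familiar Poincar\'e form
\begin{equation}
ds^2 = \alpha' R_c^2\, {dz^2 + dx_4^2 \over z^2} + \alpha' R_c^2\, d\Omega_5^2 ,
\end{equation}
exhibiting an AdS$_5$ and an S$^5$ of common radius $\sqrt{\alpha'}R_c$.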
The special conformal transformation in the SO(4,2) isometry is
\begin{eqnarray}
\delta_K x^a &=& -2\epsilon\cdot x\, x^a + \epsilon^a x^2 +\epsilon^a {R_c^4\over U^2} ,
\label{specialadsx}\\
\delta_K\, U &=& 2\epsilon\cdot x \, U .
\label{specialadsu}
\end{eqnarray}
As noted originally in ref. \cite{maldacena}, the existence of the last term $R_c^4/ U^2$ leads to a nonlinear and field-dependent transformation for the dynamical coordinates of the probe D3-brane. The latter property constrains the action to be the Dirac-Born-Infeld action for the probe D3-brane in the background of the source D3-branes, with the help of a supersymmetric nonrenormalization theorem. If the conjectured relation between Yang-Mills theory and supergravity is valid, it must be possible to derive the same property on the side of the D3-brane Yang-Mills theory. However, the special conformal transformation of the world-volume Yang-Mills theory is the standard one,
\begin{eqnarray}
\delta_K x^a &=& -2\epsilon\cdot x\, x^a + \epsilon^a x^2
\end{eqnarray}
without the last term of eq. (\ref{specialadsx}). On the Yang-Mills side, the coordinate $U$ is the radial component of the diagonal part of the 6 Higgs fields $X_i \, \, (i=1,\ldots ,6)$ which, on the side of supergravity, correspond to the space described by $\{U, S^5\}$. The solution of this puzzle is the following. To study the dynamics of a probe D3-brane in the background of the source D3-branes, we have to derive the effective theory for the diagonal Higgs fields by integrating over the off-diagonal components corresponding to the elements of the quotient U($N$)/U($N-1$)$\times$U(1). In performing this integration, we have to impose a gauge condition, most conveniently the familiar background gauge condition. However, it turns out that the background gauge condition (or any other reasonable gauge condition) is not invariant under the special conformal transformation, and therefore we have to make a field-dependent gauge transformation which compensates the violation of the conformal invariance. Thus the transformation law of the diagonal Higgs fields receives a correction in a field-dependent manner. In the large $U$ approximation, we can easily evaluate the correction by performing a one-loop calculation. The final result precisely takes the form (\ref{specialadsx}), including the numerical coefficient. For details, we refer the reader to ref. \cite{jky}. That the correct transformation law is obtained in the one-loop approximation, including the precise coefficient, suggests that some sort of non-renormalization theorem is at work here, demanding that the lowest order result for the metamorphosed transformation law on the Yang-Mills side gets no higher order corrections, at least in the large $N$ limit. Since the derivation of the isometry almost amounts to the derivation of the background metric of AdS$_5$, the above result provides strong support for the conjecture on the general relation between supergravity and supersymmetric Yang-Mills theory. In particular, the corrected transformation law explains the appearance of the natural scale $(g_sN)^{1/4}$ from the side of Yang-Mills theory. If this result is not corrected by higher order effects, it would be the first derivation of the scale which goes as $N^{1/4}$ in the large $N$ limit from a purely Yang-Mills point of view. This result also provides further evidence for our view about the relation between the source and the probe.
Namely, the probe D3-brane is somewhere far away from the horizon, while the AdS space itself is produced by the large number ($=N-1$) of source D-branes somewhere near (or inside?) the horizon. From this viewpoint, the operators corresponding to the boundary value of the bulk field for $U\rightarrow \infty$ are not, at least directly, the operators of the world-volume theory which corresponds to the large $N$ Yang-Mills theory. This raises the puzzle mentioned at the beginning of this subsection: why and how does the boundary condition for large $U$ dictate the operators of the large $N$ Yang-Mills theory which corresponds to the source of the AdS space-time? In the following, we suggest a simple argument which justifies the prescription of \cite{gkp}\cite{witten} under the assumption that Maldacena's conjecture is true. The breaking of the gauge group U($N$) into U($N-M$)$\times$U(M) \, ($N\gg M$), by assigning a large vacuum expectation value to the Higgs field corresponding to the radial direction, amounts to introducing a heavy source and a light probe at a distance scale $U$ in the energy unit. We assume that the position of the probe is somewhere outside the near-horizon region, $U>R_c/\alpha'$. On the supergravity side, in the limit of large $N$ with fixed $M$, we can treat the effect of the probe as a small perturbation around the background of the heavy source. We thus decompose the metric as
\begin{equation}
g_{\mu\nu} = \overline{g}_{\mu\nu} + h_{\mu\nu}
\end{equation}
where the first term $\overline{g}_{\mu\nu}$ is the classical metric produced by the source D3-branes and $h_{\mu\nu}$ is the metric produced by the probe in the background $\overline{g}_{\mu\nu}(N)$. The perturbative metric $h_{\mu\nu}$ satisfies the linearized equation in the lowest order approximation,
\begin{equation}
{\cal D}_N h_{\mu\nu}(u) = 2\kappa_{10}^2 T^p_{\mu\nu}(u)
\end{equation}
where $T^p_{\mu\nu}$ is the energy-momentum tensor of the probe
\begin{equation}
T^p_{\mu\nu}(u, x) \propto {1\over \sqrt{-\overline{g}}}\delta^{(6)} (u-U)T^p_{\mu\nu}(x)
\end{equation}
and $ {\cal D}_N$ is the kinetic operator of the linearized theory in the background of the source D3-branes. We denote by $u$ the variable corresponding to the radial transverse coordinate in the bulk, while the coordinate along the D3-branes is denoted by $x$. For notational simplicity, we suppress the angle variable corresponding to $S^5$. For the perturbative metric polarized along the directions parallel to the world volume, the kinetic operator essentially takes the following form
\begin{equation}
-{\cal D}_N = (1+2 g_{{\rm YM}}^2N\alpha'^2/r^4)^{1/2}\triangle^{\parallel}+ (1+2g_{{\rm YM}}^2N\alpha'^2/r^4)^{-1/2}\triangle_{R^6} \, ,
\end{equation}
where $\triangle^{\parallel}$ is the laplacian for the flat four dimensions along the world volume and $\triangle_{R^6}$ is the flat six dimensional laplacian corresponding to the six dimensional transverse space. Note that the laplacian for the transverse part is proportional to the flat space laplacian, as noted in \cite{douglas-taylor}, even before taking the near-horizon limit. The boundary condition for the linearized field (in the Euclidean formulation \footnote{For a Lorentzian formulation, see \cite{bla}. }) is that it vanishes \cite{gkp} as $u\rightarrow 0$, since otherwise the solution diverges at the origin.
By assuming that the states of the probe D3-brane can be chosen arbitrarily, the perturbative metric can also be assumed to induce an arbitrary boundary value $f_{\mu\nu}(x)$ at some large value of $u$. It is natural to set the boundary at $u=R_c/\alpha'$, where the near-horizon limit loses its validity:
\begin{equation}
h_{\mu\nu}(u, x)\rightarrow 0 \, \, \, (u\rightarrow 0) \, \quad h_{\mu\nu}(u, x)\rightarrow f_{\mu\nu}(x)\, \, \, (u\rightarrow R_c/\alpha').
\end{equation}
In the low-energy limit along the directions of the world-volume, we neglect the laplacian along the D3-brane (`quasi-static' approximation) and we can approximate the boundary value as
\begin{equation}
f_{\mu\nu}(x) \propto \kappa_{10}^2{T^p_{\mu\nu}(x)\over (u-U)^4}\Bigr|_{u=R_c/\alpha'} \, ,
\end{equation}
since the laplacian for the transverse part of the six dimensions is proportional to the flat space laplacian even outside the near-horizon limit. The effective action for the boundary value is obtained by substituting the perturbed metric into the supergravity action,
\begin{equation}
S_{{\rm eff}}^{{\rm sg}}= S[\overline{g}+h] \, ,
\label{sugraaction}
\end{equation}
using the AdS metric for the background $\overline{g}_{\mu\nu}$. What we have done is essentially to replace the effect of the probe, in arbitrary given states somewhere outside the near-horizon region, by the boundary condition $f_{\mu\nu}$ for the perturbation $h_{\mu\nu}$ around the background of the source at the boundary of the AdS space-time. Now, on the Yang-Mills side, we construct the effective theory for the unbroken part U($N-M$)$\times$U(M) by integrating over the heavy Higgs and W-bosons corresponding to the off-diagonal matrix elements of the broken part of the gauge group, whose `mass' is of order $U$ for large $U$. This leads, again in the low-energy limit, to the effective action
\begin{equation}
S^{{\rm YM}}_{{\rm eff}} \sim \kappa_{10}^2\int d^4x\, {T^s_{\mu\nu}(x) T^p_{\mu\nu}(x)\over (u'-U)^4}
\end{equation}
where $T^s_{\mu\nu} \sim {\rm Tr}_{N-M} (F_{\mu\sigma}F_{\nu\sigma}) $ is the energy-momentum tensor of the source D3-branes on the Yang-Mills side. This one-loop result is exact because of the non-renormalization theorem \cite{dinesei}, and hence is valid even for a probe somewhere outside the near-horizon region. Note that this is consistent with the fact that the laplacian is proportional to that of flat space in supergravity. Here we have assumed that the distance between the source and the probe is sufficiently large and is of order $|u'-U|$. It seems natural to assume that $u'$ is of the same order as $R_c/\alpha'$. The source D3-branes cannot be considered to be at rest at the origin. They are expected to extend over the whole range of the near-horizon region. Then the average position of the source D3-branes would be determined by the scale $R_c$, which is the only scale in this region. Apart from a proportionality factor, we can replace the above expression by
\begin{equation}
S^{{\rm YM}}_{{\rm eff}} \sim \int d^4x \, f_{\mu\nu}(x)T^s_{\mu\nu}(x) \, .
\label{ymaction}
\end{equation}
The equivalence between (\ref{sugraaction}) and (\ref{ymaction}), $ e^{-S_{{\rm eff}}^{sg}[f]} \propto \langle e^{-S^{{\rm YM}}_{{\rm eff}}[f] }\rangle \, , $ is essentially the statement of the usual prescription \cite{gkp} \cite{witten}. We have only discussed the metric perturbation, but the general idea can easily be extended to the other massless fields. Details remain to be worked out.
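The $(u-U)^{-4}$ factors above are just the static Green's function of the flat six dimensional laplacian; as an elementary check (added here), in $d$ transverse dimensions the static Green's function behaves as $1/r^{d-2}$, and for $d=6$
\begin{equation}
\triangle_{R^6}\, G(y) = \delta^{(6)}(y) \quad\Longrightarrow\quad G(y) = -{1\over 4\Omega_5 |y|^4}\, , \qquad \Omega_5=\pi^3 ,
\end{equation}
which is the origin of the $1/(u-U)^4$ behavior of both the boundary value $f_{\mu\nu}$ and the one-loop effective action.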
Our discussion clearly shows that the correlators we compute using the prescription of \cite{gkp} \cite{witten} are those of the unbroken part U($N-M$) corresponding to the source, in spite of the fact that we use boundary conditions at the boundary of the AdS space-time. It is not correct to think that the large $N$ Yang-Mills system literally lives on the `boundary'. In order to derive the correlators from the boundary, we need in general definite rules which allow us to connect the operator insertions at the probe and at the source. This is somewhat analogous to the LSZ relation between S-matrix elements and the corresponding Green functions.\footnote{ While writing the present manuscript, several works related to this issue appeared \cite{polchinski}\cite{susskind-holo}. Note, however, that the context of these recent works is slightly different from ours. } In our argument above, the heavy `Higgs and W bosons' play the role of a `mediator' connecting the probe and the source. A similar reasoning also justifies the method for computing the Wilson loop expectation value proposed in \cite{maldacena2}, which naturally treats heavy W-bosons as `quarks' by utilizing the breaking U($N$)$\rightarrow$ U($N-1$)$\times$U(1) of the gauge group, as in our argument. In this case, the fundamental open string corresponding to the infinitely heavy W-boson, treated as a heavy point-like test particle, plays the role of the mediator. Our argument is based on the quasi-static approximation. This is justified for sufficiently large distances between the source and the probe, since the mass scale in terms of the world-volume theory is then very large, and hence the characteristic distance scale with respect to the world volume is small. This is a manifestation of the space-time uncertainty relation explained in the next subsection. The correction to the quasi-static approximation can also be interpreted on the basis of the uncertainty relation, as discussed in \cite{douglas-taylor}: an uncertainty in the momenta along the world-volume is proportional to the uncertainty with respect to the transverse positions of the probe. Including this effect, a more precise understanding of the correlators, and also the extension to general D-branes, are very important, since they may provide otherwise scarce information on the physics of the matrix models in the large $N$ limit. For example, it would be extremely interesting if we could obtain some useful information on the large $N$ behavior of Matrix theory in this way \cite{sek-yo}. In our argument, it is crucial that the lowest order interaction between the source and the probe is equivalently described by both supergravity and the matrix model even outside the near-horizon region. For the validity of this property, the supersymmetric nonrenormalization theorem is important on the matrix side, while the laplacian must be essentially proportional to the flat space laplacian on the supergravity side. It is not difficult to see that, from the viewpoint of 10 dimensions, the latter is satisfied for D-particles after taking into account the nontrivial behavior of the dilaton. From the viewpoint of 11 dimensions, we have already seen this in the previous subsection.

\subsection{Generalized conformal symmetry and space-time uncertainty principle}

Next let us consider the question of whether the conformal symmetry, which is so important in the AdS/SYM relation, has any generality beyond the special case of D3-branes.
One of the characteristics of Yang-Mills theories interpreted as the dynamical theories of D-branes is of course that the fields on the world-volume now represent the collective motion of D-branes in the bulk space-time. This in particular implies that the dimensionalities of the fields on the world-volume and of the base-space coordinates are opposite, as is seen from the transformation laws (\ref{specialadsx}) and (\ref{specialadsu}), or more simply from the scale transformation
\begin{equation}
X_i(x_a) \rightarrow X_i'(x_a') =\lambda X_i(x_a) ,
\label{eq21}
\end{equation}
\begin{equation}
x_a \rightarrow x_a' = \lambda^{-1} x_a .
\label{eq22}
\end{equation}
As is emphasized in ref. \cite{jy}, this indicates a general qualitative property: long-distance phenomena in the (transverse) target space are dual to short-distance phenomena on the world volume, and {\it vice versa}. This property has also been emphasized independently (under the name `UV-IR correspondence') in the context of establishing the holographic bound for the entropy using the AdS/SYM correspondence in \cite{suss-witten}. For a recent discussion on holography \cite{thooft-suss}, we refer the reader to \cite{susskind-holo}. Qualitatively, such a dual correspondence between the two different distance scales is precisely the prediction of the `space-time uncertainty principle' \cite{yostu}\cite{liyo}, which was proposed long ago as a possible space-time interpretation of the world-sheet conformal symmetry of perturbative string theory. As already reviewed in some previous publications \cite{yo-1}\cite{liyo2}, to which I would like to refer the reader for the explanation of the original motivation and examples, the statement can be summarized as follows:
\vspace{0.3cm}
\noindent Let
\begin{enumerate}
\item $\Delta T$ : uncertainty in probing the distance scales in the \underline{longitudinal} directions along the world volumes of D-branes, including time. If the world-volume coordinates of D-branes in the static gauge are denoted by $x_a \, (a=0, \ldots, p)$,
\[
\Delta T = |\Delta x|
\]
where $| \, \cdot \, |$ is the length in the Euclidean metric. If we use the Minkowski metric, the original derivation of the relation requires that $\Delta T$ be measured along a time-like direction along the world volume.
\item $\Delta X$ : uncertainty in probing the distance scales in the bulk along the \underline{transverse} directions orthogonal to the D-branes.
\end{enumerate}
Then the following uncertainty relation is universally valid,
\begin{equation}
\Delta T \Delta X > \alpha' .
\label{spacetimeuncertaintyrelation}
\end{equation}
Note that this relation survives in the Maldacena limit ($\Delta T \Delta U>1$), since
\[
\Delta X \sim |\Delta r| = \alpha' |\Delta U| .
\]
This explains the dual relation between the two different length scales determined by the mass of the open strings stretched between D-branes, on one hand, and by the transverse distance between the branes, on the other. As discussed in \cite{douglas-taylor} and \cite{liyo2}, this elementary property is responsible for explaining some important qualitative aspects of D-brane dynamics in connection with the AdS/CFT(SYM) correspondence and holography. Furthermore, if this relation is applied to D-particles, we can immediately derive the characteristic Planck scale $\ell_P=g_s^{1/3}\ell_s$ of 11 dimensions, given only that the mass of a D-particle is of order $1/g_s\ell_s$, by combining it with the ordinary quantum mechanical uncertainty relations; see the sketch below.
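A minimal version of this derivation, spelled out here for convenience (the text only quotes the result): for a free particle of mass $m \sim 1/g_s\ell_s$, the quantum spreading over a time interval $\Delta T$ is $\Delta X \sim \Delta v\, \Delta T \sim \Delta T/(m\Delta X)$, i.e. $\Delta X^2 \sim \Delta T/m$; combining this with the saturated space-time uncertainty relation $\Delta T \sim \ell_s^2/\Delta X$ gives
\begin{equation}
\Delta X^3 \sim {\ell_s^2\over m} \sim g_s\ell_s^3 \quad\Longrightarrow\quad \Delta X \sim g_s^{1/3}\ell_s = \ell_P\, , \qquad \Delta T \sim g_s^{-1/3}\ell_s\, ,
\end{equation}
which are precisely the characteristic spatial and temporal scales of D-particle dynamics mentioned again later in this subsection.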
This also leads to the holographic property that the minimum bit of information of the quantum state of a D-particle is stored in a cell of the order of the Planck volume in the transverse space in 11 dimensions. In particular, as is suggested in \cite{liyo2}, the space-time uncertainty relation can be regarded as an underlying principle behind the ultraviolet-infrared relation \cite{suss-witten} which lies at the basis of the AdS/CFT(SYM) correspondence. Since the previous account given in \cite{liyo2} was ambiguous at some points, I would like to repeat the discussion here very briefly, taking into account the important observation made in \cite{peet-pol}. Our discussion on the correspondence between AdS$_5\times$S$^5$ and type IIB string theory suggests that the uncertainty of the positions of the D3-branes in the radial direction $U$ is of the order of the AdS radius, $R_c\sim (g_{{\rm YM}}^2N)^{1/4}\sim \Delta X$, in the string unit $\ell_s =1$. The space-time uncertainty relation then demands that the uncertainty with respect to the time-like length scale along the world volume is of order $\Delta T \sim 1/R_c$. Thus as the AdS radius increases, the dynamics of the D3-branes probes the high-energy region, of order $R_c$, of the AdS space-time. This is due to the fact that the typical mass scale of the open strings mediating the interactions of the source D3-branes grows as $R_c$. Does this imply that the length scale along the space-like directions on the world volume also decreases? Naively it might look so if we assume Lorentz invariance on the world volume. However, the AdS/CFT relation leads to a contrary conclusion. From the viewpoint of D-brane dynamics, the energy of the open strings mediating the interaction among the D3-branes can also be regarded as the self-energy of the heavy charged (U(1)) fields. Let the uncertainty of the spatial position of such a charged field be $\Delta X_s$. The self-energy of the field in the large $N$ strong coupling region can be estimated from the behavior of the Wilson loop \cite{maldacena2}, which tells us that it is of order $R_c^2/\Delta X_s$. Note that this is different from the weak coupling behavior, which would be proportional to $R_c^4$: the Coulomb force is still there, corresponding to conformal symmetry, but with a different effective charge. Equating this result with the energy scale determined by the space-time uncertainty relation, we have
\begin{equation}
\Delta X_s\sim R_c.
\label{spacelikeuncertainty}
\end{equation}
Thus the world volume of the source D3-branes can be regarded as a collection of cells of volume $R_c^3$ in the space-like directions, with a continuous flow of time, and hence the number of degrees of freedom of the theory is given, in terms of the 10 dimensional Newton constant $G_{10}\sim g_s^2\sim g_{{\rm YM}}^4$, as
$$
N_{dof}\sim N^2{L^3\over R_c^3}={L^3R_c^5\over G_{10}}
$$
where we have assumed that the source D3-branes wrap around a 3-torus of length $L$, and also that the number of degrees of freedom is proportional to $N^2$ even in the strong coupling regime, in view of the result \cite{gubklebtsyet} for the entropy at finite temperature. The final result is consistent with the Bekenstein-Hawking formula and is equivalent to the original result derived in \cite{suss-witten}. The relation (\ref{spacelikeuncertainty}) is at first sight quite surprising, but is an essential property ensuring holography.
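The second equality in the above estimate of $N_{dof}$ is the following one-line identity (in string units, using only the definitions already given; added here as a check):
\begin{equation}
{R_c^8\over G_{10}} \sim {(g_{{\rm YM}}^2N)^2\over g_{{\rm YM}}^4} = N^2 \quad\Longrightarrow\quad N^2\,{L^3\over R_c^3} = {L^3 R_c^5\over G_{10}}\, ,
\end{equation}
which exhibits the area scaling, in units of the 10 dimensional Newton constant, required by the Bekenstein-Hawking formula.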
The above argument is consistent with the recent analysis \cite{polchinski}\cite{susskind-holo} of holography in the flat space limit. It should however be kept in mind that the space-time uncertainty relation, or the ultraviolet-infrared relation, alone is not sufficient to derive holography. We have to combine it with some dynamical information, as exemplified by the assumptions needed in the above argument. Although the space-time uncertainty relation might look at first sight too simple to characterize the short-distance space-time structure, it indeed captures the most important characteristics of quantum string theory including D-branes. We hope that it plays some role as one of the guiding principles toward a nonperturbative formulation of string/M theory. The conformal symmetry can be regarded as a mathematical structure which characterizes the space-time uncertainty relation (\ref{spacetimeuncertaintyrelation}): clearly, the relation is invariant under the scale transformations $ \Delta T \rightarrow \lambda \Delta T , \quad \Delta X \rightarrow \lambda^{-1} \Delta X . $ The invariance can be extended to the full conformal symmetry for general D$p$-branes, if we identify the uncertainties with the infinitesimal variations of the coordinates and fields, as
\begin{equation}
\delta_K \Delta T = -2\epsilon\cdot x \Delta T , \quad \delta_K \Delta X = 2\epsilon \cdot x \Delta X
\label{variationconf}
\end{equation}
using the relation
\[
\Delta (x_a +\delta_K x_a) = \Delta x_a -2\epsilon\cdot x\, \Delta x_a +2(\epsilon_a\, x\cdot \Delta x - x_a\, \epsilon\cdot \Delta x)\, ,
\]
where the last term is orthogonal to $\Delta x_a$ and therefore we have the first equality of eq. (\ref{variationconf}). Thus it seems that the conformal symmetry plays a role in the target space-time analogous to that of the canonical structure in the phase space of classical mechanics. This strongly suggests that the noncommutative nature of the space-time coordinates which characterizes the matrix models should be understood as a realization of the quantization of space-time, and that the conformal symmetry is a signature of a certain unknown symmetry structure behind it. These considerations motivate us to generalize the conformal symmetry of the D3-brane to general D-branes. Let us first consider the D-particle model. The action (\ref{d0matrixmodel}) is invariant under the scale transformation
\begin{equation}
X_i(t) \rightarrow X_i'(t') =\lambda X_i(t) , \quad t\rightarrow t'=\lambda^{-1}t
\label{eq24}
\end{equation}
\begin{equation}
g_s \rightarrow g_s'=\lambda^3 g_s .
\label{eq25}
\end{equation}
One might wonder whether this can be regarded as a symmetry, since we transformed the coupling constant simultaneously. But this is not unreasonable if we remember that the string coupling constant, being given by the vacuum expectation value of the dilaton at infinity, is not really a constant supplied by hand. Ultimately the string coupling should be eliminated from the theory. From the viewpoint of 11 dimensions, the string coupling is replaced by the compactification radius, and the scale transformation can be reinterpreted as the boost transformation along the 11th direction, as follows. In the above scale transformation, we have assumed that the string length is invariant. However, from the point of view of M-theory, we should fix the 11 dimensional Planck length instead of the string length.
This is achieved by redefining the unit of length as
\[
\ell_s \rightarrow \lambda^{-1}\ell_s,\quad t\rightarrow \lambda^{-1}t, \quad X_i\rightarrow \lambda^{-1} X_i, \quad A\rightarrow \lambda^{-1}A \,
\]
where $A$ is the U($N$) gauge field. Combining this change of unit, which does not change the action, with the above scaling transformation, the net transformation becomes
\begin{equation}
t\rightarrow \lambda^{-2} t, \quad R\rightarrow \lambda^2 R,
\end{equation}
while the transverse coordinates and the gauge field remain unchanged. This is precisely the boost transformation, provided we identify the time as the light-cone time $x^+$ and the compactification radius as that along the light-like direction $x^-$. From this 11 dimensional viewpoint, it is more appropriate to express the space-time uncertainty relation in the form $ R\Delta T \Delta X > \ell_P^3 $, which suggests some `tripod'-like interpretation of the relation, possibly connected with the membrane structure, as already emphasized in \cite{liyo2}. Once we allow the variation of the string coupling, we can easily extend the symmetry to the SO(2,1) group by adjoining the trivial time translation and the `special conformal' transformation, whose infinitesimal form is
\begin{equation}
\delta_K X_i = 2 t X_i , \, \, \delta_K A= 2 t A , \, \, \delta_K t =- t^2 , \, \, \delta_K g_s =6 t g_s \, .
\label{eq29}
\end{equation}
In all these transformations, the fermionic variables are assumed to be scalars. The above transformation property of the string coupling is essentially equivalent to the fact that the characteristic spatial and temporal scales of D-particle dynamics are proportional to $g_s^{1/3}\ell_s$ and $g_s^{-1/3}\ell_s$, respectively. Of course, the opposite powers of $g_s$ in these two length scales just reflect the space-time uncertainty relation. In contrast with this, there is no fixed characteristic scale in the case of the D3-brane, because the dynamics is conformally invariant and all scales are equally important with respect to both $\Delta T$ and $\Delta X$. Of course, if we assume some particular background, the dynamics around the background can have characteristic scales. The scales which appeared in the case of D3-branes should be interpreted as such. We emphasize that this dual nature of the two different scales in time and space explains the simultaneous appearance of the short distance scale $\Delta X$ and the small energy scale $\Delta E\sim 1/\Delta T$ in the weak coupling dynamics of D-particles, ensuring the decoupling of the higher string modes in the short distance regime, contrary to naive intuition. The argument discussed in the previous subsection connecting the D-brane Yang-Mills theory and supergravity should be equally valid for D-particles. Then we expect that the conformal symmetry of the D-particle Yang-Mills theory must be reflected in the metric produced by a heavy source of D-particles. The 10 dimensional metric around the D-particle is given, in the Maldacena limit $\alpha'\rightarrow 0$ with $U=r/\alpha'$ fixed, by
\begin{equation}
ds_{10}^2 =\alpha'\Bigl(- {U^{7/2}\over \sqrt{Q}}dt^2 +{\sqrt{Q}\over U^{7/2}} \bigl(dU^2 + U^2 d\Omega_8^2\bigr)\Bigr) ,
\label{eq38}
\end{equation}
where
\begin{equation}
Q=60\pi^3 (\alpha')^{-3/2} g_sN=240\pi^5 g_{YM}^2 N .
\end{equation}
In the case of the D-particle, the dilaton is not constant and is given as
\begin{equation}
e^{\phi} = g_s \Bigl({q \over \alpha'^7 U^7}\Bigr)^{3/4} =g_{YM}^2 \Bigl({Q\over U^7}\Bigr)^{3/4} .
\label{eq39}
\end{equation}
Both the metric and the dilaton are invariant under the dilatation
\begin{equation}
U\rightarrow \lambda U ,\quad t \rightarrow \lambda^{-1}t ,\quad g_s \rightarrow \lambda^3 g_s .
\label{eq314}
\end{equation}
Furthermore, they are also invariant under the infinitesimal special conformal transformation
\begin{equation}
\delta_K t = - \epsilon (t^2 +{g_{YM}^2N\over 96\pi^5 U^5}) ,
\label{eq315}
\end{equation}
\begin{equation}
\delta_K U =2 \epsilon tU ,\quad \delta_K g_s=6 \epsilon t g_s \, .
\label{eq317}
\end{equation}
Just as in the Yang-Mills case, these transformations together with the time translation form an SO(2,1) algebra. The additional term $g_{{\rm YM}}^2N/ 96\pi^5 U^5$ in the special conformal transformation plays a role similar to the corresponding term in the case of the D3-brane: the nonlinear field dependence is equally powerful in determining the effective action of a probe D-particle in the background of source D-particles. We can derive this modification of the transformation law in the bulk by extending the mechanism we have discussed for the D3-brane in the previous subsection. For details, we refer the reader to refs. \cite{jy}\cite{jky2}. It is straightforward to extend the conformal transformations of the above type to general D$p$-branes ($0\le p\le 4$), as discussed in the second of the latter references. The case of the D-instanton matrix model, the so-called type IIB model \cite{ikkt}, is very special in this respect, since there all of the space-time coordinates are treated as matrices. For an interpretation of the model from the point of view of the space-time uncertainty relation and conformal symmetry, we refer the reader to \cite{yo-schild}. Finally, one might wonder what the relation of the space-time uncertainty relation and the associated conformal symmetries to supersymmetry is. We can perhaps say that supersymmetry is necessary to ensure some of the prerequisites for applying the principle. For example, to discuss the scattering of D-branes meaningfully, it is necessary that clusters far apart from each other should be free except for the weak gravitational forces among them. If supersymmetry were absent, the quantum zero-point energy would induce forces which do not decay at large distances.

\section{Graviton condensation in type IIB matrix model}

As the final topic of this report, I would like to present some preliminary considerations on the treatment of graviton condensation in matrix models. We have already seen some evidence that supersymmetric Yang-Mills models indeed describe the gravitational interactions of D-branes to a certain extent. However, it is clear that we do not yet have definite general principles which might explain the emergence of gravity from Yang-Mills theory. From the viewpoint of symmetry, the existence of $N=2$ supersymmetry in space-time in 10 dimensions is the strongest argument for the existence of a supermultiplet containing the graviton, since the only massless representation of the maximal $N=2$ supersymmetry in 10 dimensions is the supergravity multiplet. However, it is difficult to establish the presence of massless particles from the logical structure of the Yang-Mills theory alone. In other words, without making concrete computations of D-brane scattering, we cannot decide whether the $N=2$ global symmetry is really elevated to a consistent local symmetry ensuring the emergence of gravity in the long-distance regime.
Since in general we expect that the matrix models are only sensible after taking an appropriate large $N$ limit, so that various questions can only be answered by solving complicated dynamics, it is very important to establish the symmetries of the models as far as possible. Now, after seeing some evidence for the emergence of gravity in the Yang-Mills matrix models, we should be able to identify the local space-time supersymmetry directly within the models. The purpose of the following preliminary consideration is to start an initial discussion toward such a possibility, taking the simplest toy example of the type IIB matrix model. We hope that our discussion will be a useful starting point for exploring a possible higher symmetry structure in matrix-model approaches to non-perturbative string/M theory from a more general viewpoint. In the case of the usual perturbative string theory, the fact that the theory is indeed a dynamical theory of space-time geometry is reflected in the fact that we can deform an allowed space-time background by inserting the vertex operator corresponding to the physical graviton modes of strings. Or, if we use the language of string field theory, the change of background is compensated by an appropriate redefinition of the string field corresponding to a shift of its graviton component. In particular, the general coordinate transformation is compensated by such a field redefinition.\footnote{ For an initial discussion of this phenomenon, see \cite{y-stringfield}.} That is how string theory can be generally covariant and in principle be a background independent formulation, even though the theory is formulated without introducing the space-time metric explicitly as an independent degree of freedom. In the case of general Yang-Mills matrix models, on the other hand, we cannot identify graviton modes directly in the classical action of the model. They only appear as a part of the loop effects in the `t-channel'. For this reason, they cannot, in general, be treated as ordinary bound states either. In the special case of Matrix theory, only the Kaluza-Klein modes with non-zero 11th momentum can be treated directly, and the graviton with zero 11th momentum can only appear as a loop effect. Let us now concentrate on the case of the matrix model of D-instantons. The model is already Lorentz invariant, and thus we can immediately ask the question, ``How is the symmetry extended to general coordinate invariance?''. The action of the model is
\begin{equation}
S_N={\rm Tr}_{N} \Bigl(\, {1\over 4g^2}[X^{\mu}, X^{\nu}]^2 + {1\over 2}\overline{\Psi}\Gamma_{\mu} [X^{\mu}, \Psi]\, \Bigr) .
\end{equation}
Of course, it is not obvious what we mean by a general coordinate transformation for the matrix variables $X^{\mu}, \Psi$. In the usual interpretation, only the diagonal components of $X^{\mu}$ have the meaning of space-time coordinates, and the off-diagonal components are really fields corresponding to the lowest open string modes. In general, the space-time coordinates and the fields of open strings can have different transformation properties. However, at least for the general linear transformations GL(10, R), which are globally defined, it is natural to suppose that the transformation law is the standard one
\begin{equation}
X^{\mu} \rightarrow a^{\mu}_{\ \nu} X^{\nu}\, ,
\end{equation}
where $a^{\mu}_{\ \nu}$ is an arbitrary constant 10$\times$10 matrix. The action is manifestly invariant under the subgroup SO(9,1), if the spinor matrix $\Psi$ is transformed as usual.
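The reason why only the SO(9,1) subgroup is a manifest invariance can be made explicit by an elementary observation (added here). Writing the bosonic term with the flat metric displayed,
\begin{equation}
{\rm Tr}_{N} [X^{\mu}, X^{\nu}]^2 = \eta_{\mu\alpha}\eta_{\nu\beta}\, {\rm Tr}_{N}\bigl([X^{\mu}, X^{\nu}][X^{\alpha}, X^{\beta}]\bigr) ,
\end{equation}
invariance under $X^{\mu} \rightarrow a^{\mu}_{\ \nu}X^{\nu}$ requires $a^{\rho}_{\ \mu}\eta_{\rho\sigma}a^{\sigma}_{\ \nu} = \eta_{\mu\nu}$, the defining condition of the Lorentz group. For the symmetric part, $a_{\mu\nu} = \eta_{\mu\nu} + S_{\mu\nu}$ with $S_{\mu\nu}$ infinitesimal and symmetric, a short computation gives the first order change
\begin{equation}
\delta_S\, {\rm Tr}_{N} [X^{\mu}, X^{\nu}]^2 = 4\, S_{\mu\beta}\, {\rm Tr}_{N}\bigl([X^{\mu}, X^{\nu}][X^{\beta}, X^{\nu}]\bigr) + {\cal O}(S^2)\, ,
\end{equation}
which is the deformation that the embedding construction below is designed to compensate.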
Our question is then whether the model can be made invariant under the transformations belonging to the remaining broken quotient GL(10, R)/SO(9,1). Of course, the standard procedure would be to introduce the metric (or vielbein) degrees of freedom, which absorb the noninvariant piece of the action. But as the metric degrees of freedom are supposed to be contained in the loop effects, there must exist a different way of compensating the transformation without introducing the metric explicitly. We now present briefly an argument showing \cite{yo-prep} that this can be achieved by embedding a model with fixed $N$ into models with larger $N$. The idea is to add more instantons to the model, with appropriate information on the `state' of the added instantons, such that they effectively produce the metric insertion for the original action with lower $N$. If we perform the embedding for all $N$ recursively, we naturally expect that the set of all such models as a whole, which we denote as $\{\ldots, U(N), U(N+1), \cdots\}$, can in principle describe all possible backgrounds of the model. In this way, it should ultimately be possible to reconstruct the model in a background independent fashion. Let us study the simplest embedding, from $N$ to $N+1$. We will use the following notation for the embedded matrices:
\begin{equation}
X^{\mu}_{a,N+1}=\phi^{\mu}_a , \quad X^{\mu}_{N+1, a}=\overline{\phi}^{\mu}_a , \quad X^{\mu}_{N+1,N+1}=x^{\mu} ,
\end{equation}
\begin{equation}
\Psi_{a, N+1}=\theta_a , \quad \overline{\Psi}_{N+1, a}=\overline{\theta}_a , \quad \Psi_{N+1,N+1}=\psi .
\end{equation}
Thus the $N\times N$ matrices $X^{\mu}_{N\times N}, \Psi_{N\times N}$ are embedded into the corresponding $(N+1)\times (N+1)$ matrices as
\[
X^{\mu}_{(N+1)\times (N+1)} = \pmatrix{ X^{\mu}_{N\times N} & |\phi^{\mu}\rangle \cr \langle\overline{\phi}^{\mu}| & x^{\mu}\cr} ,
\qquad
\Psi_{(N+1)\times (N+1)} = \pmatrix{ \Psi_{N\times N} & |\theta\rangle \cr \langle\overline{\theta}| & \psi\cr} .
\]
Here we use Dirac's bra-ket notation for the vector parts. Since the information on the state of the added instanton is specified by using the $(N+1)$-th diagonal elements $x, \psi$ (the collective coordinates of the added instanton), we can first integrate over the vector parts to derive the effective action for the insertion of the graviton. Then we can further integrate over the collective coordinates by inserting an appropriate function $\Phi(x, \psi)$, whose form is determined later. Let us call the result of this procedure $\Delta \Gamma_N(X, \Psi;\Phi)$. Then, by combining it with the original U($N$) model, $\exp (S_N) \rightarrow \exp (S_N) +\Delta \Gamma_N(X, \Psi;\Phi) $, we can define the new partition function of the ($N, N+1$) system,
\begin{equation}
Z_N [\Phi] ={\cal N}^{-1}\int d^{10N^2}X d^{16N^2}\Psi \, \exp \Bigl( S_N[X, \Psi] +\sum_i c_i \Delta \Gamma_N[X,\Psi;\Phi_i]\Bigr)
\end{equation}
to first order in the strengths $\{c_i\}$ of the insertions, where the sum is over all independent `wave functions' of the added instanton. It would be more appropriate to regard the wave functions as the scalar products of two wave functions.
In the one-loop approximation, we can show that the following special choice of $\Phi$, the simplest candidate corresponding to the degrees of freedom of a symmetric tensor, gives the infinitesimal (first order) deformation of the action $S_N$ which compensates the change of the action under the infinitesimal GL(10,R)/SO(9,1) coordinate transformation $a_{\mu\nu}=\eta_{\mu\nu} + S_{\mu\nu}$, where $S_{\mu\nu}$ is an arbitrary infinitesimal symmetric tensor:
\begin{equation}
\Phi(x, \psi) \propto (\Gamma^{\mu\beta\gamma} \Gamma^0)_{ab} (\Gamma^{\nu\beta\gamma} \Gamma^0)_{cd} \, S_{\mu\nu} {\partial \over \partial \psi_a}{\partial \over \partial \psi_b} {\partial \over \partial \psi_c}{\partial \over \partial \psi_d} \Phi_0
\end{equation}
with
\begin{equation}
\Phi_0 = \prod_{a=1}^{16} \psi_a \, \, , \quad (\int d^{16}\psi\, \Phi_0= \langle 0| 0\rangle =1) .
\end{equation}
Namely, apart from proportionality constants, the first order deformations of the bosonic and fermionic parts of the action are, respectively,
\[
S_{\mu\beta}{\rm Tr}_N\Bigl([X^{\mu}, X^{\nu}] [X^{\beta}, X^{\nu}]\Bigr) , \quad S_{\mu\alpha} {\rm Tr}_N\Bigl( \overline{\Psi}\Gamma^{\mu}[X^{\alpha}, \Psi]\Bigr) \,
\]
which are nothing but the changes of the bosonic and fermionic actions under the quotient GL(10,R)/SO(9,1). The one-loop approximation is justified for wave functions which are constant with respect to the space-time coordinates, since then the infrared region $x\rightarrow \infty$ dominates the integral over the collective coordinates of the added instanton, implying the infinite mass limit in the propagators of the fluctuating fields: the coefficients of the above deformation are proportional to an infrared-divergent integral $\int d^{10}x/x^{10}$, which is cancelled by the choice of normalization of the wave function. Our result suggests how to describe the change of background using only the degrees of freedom of the model itself, provided we treat all possible embeddings simultaneously. In particular, we have seen that the model indeed has the full GL(10, R) symmetry. Thus the metric degrees of freedom appearing as a loop effect can be regarded as the Goldstone bosons associated with the spontaneously broken part GL(10,R)/SO(9,1) of the symmetry GL(10,R) of the recursively embedded model $\{\cdots, U(N), U(N+1),\cdots \}$. Together with the space-time supersymmetry, this explains how the model can indeed be a theory of gravity.

\section{Concluding remarks}

In principle, the formalism suggested in the last section should be extended to arbitrary changes of background, and hence to the definition of the model for general curved space-times. For example, the `wave function'
$$
(\Gamma^{\mu\beta\gamma} \Gamma^0)_{ab}H_{\mu\beta\gamma} {\partial \over \partial \psi_a}{\partial \over \partial \psi_b} \Phi_0
$$
describes the infinitesimal condensation of the antisymmetric tensor field $B_{\mu\nu}$ with constant field strength $H_{\mu\beta\gamma}$. Also, it is not difficult to derive the susy transformation law corresponding to the shift of the background. But, technically, the computations required for such a generalization become increasingly difficult. I feel that we need some entirely new framework for developing the idea in a tractable way. What we are pursuing amounts to investigating the condensation of Goldstone bosons using the configuration space formalism.
A desirable language would be something which can play the role of a field-theory-like formalism, by which we could treat matrix models with different sizes of matrices in a much more unified and dynamical manner. Only with such a formalism would we be able to discuss the major questions related to the present approach, such as the proof of S-duality symmetry, a background-independent formulation, and so on. If we symbolically represent the whole recursive series $\{\cdots, U(N), U(N+1),\cdots \}$ by ${\cal H}[\Phi]$ as a functional of all possible background fields $\Phi$, background independence of the theory amounts to something like \begin{equation} `` \, \, {\delta {\cal H}[\Phi] \over \delta \Phi} =0 \, \, " \end{equation} which should simultaneously play the role of the field equation in a perturbative approximation. We also note that our idea is intimately related to that\footnote{ For example, such a possibility has been suggested in \cite{douglas-ren}.} of the large-$N$ renormalization group, in which one tries to derive the equation of motion for the background by imposing a fixed-point condition, in the sense of the renormalization group, with respect to $N$. In the latter approach, it is not clear how to define the model in curved space-time, nor how to treat the zero modes in formulating the renormalization group; unless we insert `wave functions' as we have done, the embedding would only lead to a null result. In the approach suggested here, it may, at least in principle, be possible to develop the procedure in a more constructive fashion. Finally, it may be worthwhile to mention a possible connection\footnote{ I would like to thank J. Polchinski for calling my attention to Witten's work \cite{witten-k} at the Nishinomiya-Yukawa symposium. After submitting the present report to the hep-th archive, I learnt of the work \cite{horava} of Ho\v{r}ava, which also suggests a possible connection of the K-theory formulation to Matrix theory. I would like to thank P. Ho\v{r}ava for pointing out his recent works to me. } of the present idea with the K-theory formulation \cite{witten-k} of bound states of brane-anti-brane systems, which has been used to describe stable non-BPS states \cite{sen}. We should generalize the above construction such that the formalism includes a variable number not only of D-instantons but also of anti-D-instantons simultaneously. Obviously, a system with a fixed number of both D-branes and anti-D-branes cannot be supersymmetric. It would be extremely interesting if we could recover supersymmetry by a mechanism similar to the one we have suggested for recovering the full GL(10,R) symmetry beyond manifest Lorentz symmetry. Such a mechanism must also be crucial for developing a covariant or background-independent formulation of Matrix theory. I hope to report some progress along these lines in the near future. \section*{Acknowledgements} During the preparation of this report, I have enjoyed stimulating conversations with M. Ikehara, A. Jevicki, Y. Kazama, Y. Okawa, Y. Sekino and W. Taylor. The present work is supported in part by a Grant-in-Aid for Scientific Research (No. 09640337) and a Grant-in-Aid for International Scientific Research (Joint Research, No. 10044061) from the Ministry of Education, Science and Culture.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Proof of Proposition~\ref{prop:polynomialExpressibility}} \label{sec:app-proof-polynomial-express} \begin{proof} (i) can be proved by inspection: all the $r^2$ and $\psi$ in Examples \ref{ex:singlerotation}-\ref{ex:category} are squared norms of \emph{affine} (degree-one) polynomials in $\vxx$, and are hence quadratic. To show (ii), we note that the $q$-dimensional ball $\calB^q_T$ can be described by a single quadratic inequality $\calB^q_T = \{\boldsymbol{t} \in \Real{q} \mid T^2 - \inprod{\boldsymbol{t}}{\boldsymbol{t}} \geq 0 \}$, and the 3D FOV cone ${\cal C}_\alpha$ can be described by two inequalities ${\cal C}_{\alpha} = \{ \boldsymbol{t} \in \Real{3} \mid \tan^2(\frac{\alpha}{2}) t_3^2 - t_1^2 - t_2^2 \geq 0, t_3 \geq 0 \}$, where the first inequality is quadratic and the second is affine. It remains to show that 2D and 3D rotations can be described by polynomial equalities. First, any 2D rotation $\M{R} \in \ensuremath{\mathrm{SO}(2)}\xspace$ can be equivalently parametrized by \begin{eqnarray}\label{eq:2Drotationparam} \M{R} = \cbrace{ \left[\begin{array}{cc} r_1 & -r_2 \\ r_2 & r_1 \end{array}\right] \ \middle\vert\ \boldsymbol{r} \in \Real{2}, \inprod{\boldsymbol{r}}{\boldsymbol{r}} = 1 }, \end{eqnarray} and hence described by a single quadratic equality. For a 3D rotation $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$, we write $\boldsymbol{r}_i$ for its $i$-th column, and $\boldsymbol{r} = [\boldsymbol{r}_1 \,;\, \boldsymbol{r}_2 \,;\, \boldsymbol{r}_3] \in \Real{9}$ for its vectorization. Using the results from \cite{Briales17cvpr-registration,Yang20cvpr-perfectshape,Tron15RSSW-rotationdeterminant}, we know that $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$ can be equivalently described by the following set of $15$ quadratic equality constraints \begin{subequations} \begin{eqnarray} \text{\grayout{Unit norm}} : & h_{i} = 1 - \inprod{\boldsymbol{r}_i}{\boldsymbol{r}_i}, \quad i=1,2,3, \label{eq:3Drotationunit} \\ \text{\grayout{Orthogonal}} : & h_{i,j} = \inprod{\boldsymbol{r}_i}{\boldsymbol{r}_j}, \quad (i,j) \in \{ (1,2),(2,3),(3,1) \}, \label{eq:3Drotationorthogonal} \\ \text{\grayout{Right-hand}}: & h_{i,j,k} = \boldsymbol{r}_i \times \boldsymbol{r}_j - \boldsymbol{r}_k, \quad (i,j,k) \in \{ (1,2,3),(2,3,1),(3,1,2) \}, \label{eq:3Drotationrighthand} \end{eqnarray} \end{subequations} where ``$\times$'' denotes the vector cross product, and each $h_{i,j,k}$ defines a vector of $3$ equality constraints. Though the set of $15$ equalities is redundant (\emph{e.g.},\xspace \eqref{eq:3Drotationunit} and \eqref{eq:3Drotationrighthand} are sufficient for $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$), we use all of them to enhance robustness and tightness of the relaxation in Section \ref{sec:sdprelax}. \end{proof} \section{Solving the Projection Subproblem} \label{sec:supp-projsubproblem} In this section, we describe how to solve the projection subproblem in {\scenario{STRIDE}} (\emph{cf.}\xspace \eqref{eq:pgd} and \eqref{eq:strideprojection} in Algorithm \ref{alg-iPGMnlp}).
In particular, we show that the dual problem of \eqref{eq:pgd} admits an unconstrained formulation, which allows developing a scalable algorithm based on limited-memory BFGS. Recall that the projection step \eqref{eq:strideprojection} of {\scenario{STRIDE}} seeks to compute the projection of a given point onto the spectrahedron $\calF_{\mathrm{P}} = \{ \M{X} \in \mathbb{X} \mid {\cal A}(\M{X})=\boldsymbol{b}, \M{X} \in {\cal K} \}$. Formally, given a point $\M{Z} \in \mathbb{X}$, the projection problem seeks the closest point in $\calF_{\mathrm{P}}$ w.r.t.\xspace $\M{Z}$ \begin{equation} \label{prob:projection} \min_{\M{X} \in \mathbb{X}}\left\{ \frac{1}{2} \left\lVert \M{X} - \M{Z} \right\rVert^2\ \middle\vert\ \M{X}\in \calF_{\mathrm{P}} \right\}. \end{equation} Since $\calF_{\mathrm{P}}$ is the intersection of two convex sets, namely the hyperplane defined by ${\cal A}(\M{X})=\boldsymbol{b}$ and the (product of) positive semidefinite cone ${\cal K}$, a natural idea is to apply Dykstra's projection algorithm (see \emph{e.g.},\xspace~\cite{Combettes11book-proximalSplitting}) to generate an approximate solution by alternating the projection onto the hyperplane and the projection onto the semidefinite cone, both of which are easy to compute. However, Dykstra's projection is known to have slow convergence and it may take too many iterations until a satisfactory projection is found. As a result, instead of solving \eqref{prob:projection} directly, we consider its dual problem \begin{equation} \label{prob:projection-dual} \min_{\boldsymbol{y} \in \Real{m}, \M{S} \in \mathbb{X}}\left\{ \frac{1}{2}\left\lVert \M{S} + \calA^{*} (\boldsymbol{y}) + \M{Z} \right\rVert^2 - \inprod{\boldsymbol{b}}{\boldsymbol{y}}\ \middle\vert\ \M{S}\in {\cal K} \right\}, \end{equation} where we have ignored the constant term $ -\frac{1}{2}\left\lVert \M{Z} \right\rVert^2 $ and converted ``$\max$'' to ``$\min$'' by changing the sign of the objective. The KKT conditions for the pair \eqref{prob:projection} and \eqref{prob:projection-dual} are: \begin{equation} \label{eq:KKT-proj} \hspace{-3mm} {\cal A}(\M{X}) = \boldsymbol{b},\ \calA^{*} (\boldsymbol{y}) + \M{S} = \M{X} - \M{Z},\ \M{X},\M{S}\in {\cal K}, \ \inprod{\M{X}}{\M{S}} = 0.\!\! \end{equation} {\bf An unconstrained formulation}. Now we introduce a key observation that allows us to further simplify the dual \eqref{prob:projection-dual}. Fixing the unconstrained $\boldsymbol{y}$, problem \eqref{prob:projection-dual} can be seen as finding the closest $\M{S} \in {\cal K}$ w.r.t.\xspace the matrix $-\calA^{*} (\boldsymbol{y}) - \M{Z}$, and hence admits a closed-form solution \begin{eqnarray}\label{eq:Sofy} \M{S} = \Pi_{{\cal K}} \parentheses{-\calA^{*} (\boldsymbol{y}) - \M{Z}}. \end{eqnarray} As a result, after inserting~\eqref{eq:Sofy}, problem~\eqref{prob:projection-dual} is equivalent to \begin{equation} \label{prob:projetion-dual-y} \min_{\boldsymbol{y} \in \Real{m}} \ \ \phi(\boldsymbol{y}):= \frac{1}{2}\left\lVert \Pi_{{\cal K}}(\calA^{*} (\boldsymbol{y}) + \M{Z}) \right\rVert^2 - \left\langle \boldsymbol{b}, \boldsymbol{y} \right \rangle, \end{equation} with the gradient of $ \phi(\boldsymbol{y}) $ given as \begin{eqnarray} \nabla \phi(\boldsymbol{y}) = {\cal A} \Pi_{{\cal K}}(\calA^{*} (\boldsymbol{y}) + \M{Z}) - \boldsymbol{b}. 
\end{eqnarray} Thus, if $ \vy^\star $ is an optimal solution for problem~\eqref{prob:projetion-dual-y}, we can recover $\MS^\star$ from~\eqref{eq:Sofy}, and $\MX^\star$ from the KKT conditions \eqref{eq:KKT-proj}: \begin{eqnarray} \MX^\star = \calA^{*} (\vy^\star) + \MS^\star + \M{Z}. \end{eqnarray} This unconstrained reformulation of the dual problem has appeared multiple times in~\cite{Zhao10siopt-sdpnal,Malick09siopt-regularizationSDP}. Now that~\eqref{prob:projetion-dual-y} is a \emph{smooth unconstrained convex} problem in $ \boldsymbol{y}\in \Real{m} $, plenty of efficient algorithms are available, such as (accelerated) gradient descent~\cite{Nesterov18book-convexOptimization}, nonlinear conjugate gradient~\cite{Dai99siopt-ncg}, quasi-Newton methods~\cite{Nocedal06book-numericaloptimization} and the semismooth Newton method~\cite{Zhao10siopt-sdpnal}. In this paper, we apply the celebrated limited-memory BFGS (L-BFGS) method, see for example \cite[Algorithm 7.5]{Nocedal06book-numericaloptimization}. {L-BFGS} is easy to implement, can handle very large unconstrained optimization problems due to its low memory consumption, and is typically ``the algorithm of choice'' for large-scale problems~\cite[Chapter 7]{Nocedal06book-numericaloptimization}. Empirically, we observed that {L-BFGS} is efficient and robust for various applications. To the best of our knowledge, this is the first work that demonstrates the effectiveness of L-BFGS, or in general quasi-Newton methods, in solving large-scale and degenerate SDPs. \section{Relative Suboptimality} \label{sec:app-compute-subopt} This section is concerned with the computation of a formally correct suboptimality gap $\eta_s$ for a given estimate (which we use as a performance metric in our experiments), whose validity is not hindered by potential numerical inaccuracies in the solution of the SDP relaxation~\eqref{eq:sparserelax}. In Theorem \ref{thm:sparserelaxtls} and \eqref{eq:subopt}, we stated that, by solving the sparse SDP relaxation \eqref{eq:sparserelax} to global optimality with optimizer $\MX^\star$ and associated optimum $f^\star$, one can round from $\MX^\star$ a feasible solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ to the original \eqref{eq:binaryTLS} problem with associated cost $\widehat{p} = p(\widehat{\vxx}_1,\widehat{\vtheta}_1)$. Then, a measure of suboptimality for the rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ can be computed as follows (also in \eqref{eq:subopt}): \begin{eqnarray} \label{eq:app-subopt} \eta_s \triangleq \frac{\abs{f^\star - \widehat{p}}}{1 + \abs{f^\star} + \abs{\widehat{p}}}. \end{eqnarray} It is apparent that $\eta_s = 0$ implies the relaxation \eqref{eq:sparserelax} is exact and the rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ is indeed globally optimal for \eqref{eq:binaryTLS} (recall $f^\star \leq p^\star \leq \widehat{p}$ by construction, where $p^\star$ is the unknown optimum of the nonconvex \eqref{eq:binaryTLS} problem).
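As a concrete illustration (with hypothetical numbers): $f^\star = -100.0$ and $\widehat{p} = -99.5$ give $\eta_s = 0.5/200.5 \approx 2.5\times 10^{-3}$, certifying that the cost of the rounded solution is within $0.5$ (about $0.5\%$) of the unknown global optimum $p^\star$.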
However, the caveat in computing the relative suboptimality as in \eqref{eq:app-subopt} is that, although it is almost always possible to compute a rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ with cost $\widehat{p}$ (provided that the feasible set ${\cal X}$ of \eqref{eq:binaryTLS} is simple to project onto, as in our examples), it can be quite challenging to obtain $f^\star$ (which acts as a valid lower bound for $p^\star$) \emph{to machine precision}, since $f^\star$ is computed by numerically solving the SDP~\eqref{eq:sparserelax}, which may still lead to small inaccuracies. Moreover, as shown in the experimental section of the main text, first-order solvers such as {\scenario{SDPNAL+}} typically cannot solve the SDP to even moderate accuracy (with a reasonable number of iterations), in which case $f^\star$ is not attained. Here we describe a procedure to compute a valid lower bound for $p^\star$ from any approximate solution $(\widehat{\MX},\widehat{\vy},\widehat{\MS}) \in \mathbb{X} \times \Real{m} \times \mathbb{X}$ of the SDP \eqref{eq:sparserelax}, without requiring it to be an optimal solution satisfying the KKT conditions \eqref{eq:sdpKKT}. In fact, as we will show soon, only $\widehat{\vy} \in \Real{m}$ is needed to compute a valid lower bound. {\bf Bounded trace}. Let us first show that each block of the primal variable $\M{X}$ in \eqref{eq:sparserelax} has a bounded trace, when \eqref{eq:sparserelax} is applied to all six Examples \ref{ex:singlerotation}-\ref{ex:category}. Towards this goal, let us first observe that the variable $\vxx \in {\cal X}$ has a bounded norm, \emph{i.e.},\xspace there exists $M_0>0$ such that $\norm{\vxx} \leq M_0, \forall \vxx \in {\cal X}$. For example, in Example \ref{ex:singlerotation}, $\vxx = \M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$ has $\norm{\vxx} = \sqrt{3} = M_0$; in Example \ref{ex:category}, $\vxx = (\M{R},\boldsymbol{t},\boldsymbol{c})$ with $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$, $\boldsymbol{t} \in \calB^3_{T_t}$, and $\boldsymbol{c} \in \calB^K_{T_c}$ has $\norm{\vxx} \leq \sqrt{3+T_t^2 + T_c^2} = M_0$, where $T_t$ and $T_c$ are the upper bounds for the norm of the translation and the norm of the shape parameters, respectively. Now recall that the primal variable $\M{X}$ has $1+l_g$ blocks, with the first block being the moment matrix and the other $l_g$ blocks being localizing matrices. With the observation that $\norm{\vxx} \leq M_0, \forall \vxx \in {\cal X}$, we can bound the trace of the moment matrix $\M{X}_v$ (\emph{cf.}\xspace \eqref{eq:sparsemomentmat}) as \begin{eqnarray} \label{eq:traceboundmomentmat} \trace{\M{X}_v} = & \trace{\boldsymbol{v}(\widetilde{\vxx}) \boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}}} = \boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}} \boldsymbol{v}(\widetilde{\vxx}) \nonumber \\ = & 1 + \norm{\vxx}^2 + \sum_{i=1}^N \theta_i^2 + \sum_{i=1}^N \theta_i^2 \norm{\vxx}^2 \nonumber \\ = & (1+N)(1+\norm{\vxx}^2) \nonumber \\ \leq & (1+N)(1+M_0^2) =: M_1 , \nonumber \\ & \forall \vxx \in {\cal X}, \boldsymbol{\theta} \in \{\pm 1\}^N.
\end{eqnarray} Regarding the localizing matrices $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j}, j=1,\dots,l_g$ (where $\M{X}_1$ is the order-one moment matrix), we have that (recall $g_j \geq 0$ by definition) \begin{eqnarray} \label{eq:traceboundlocalizemat} \trace{\M{X}_{g_j}} = & g_j \cdot \trace{[\M{X}_1]_{{\cal I}_j}} \nonumber \\ \leq & g_j \cdot \trace{\M{X}_1} \nonumber \\ = & g_j \cdot (1+ \norm{\vxx}^2 + \sum_{i=1}^N \theta_i^2 ) \nonumber \\ \leq & g_j \cdot (1+N+M_0^2), \nonumber \\ & \forall \vxx \in {\cal X}, \boldsymbol{\theta} \in \{ \pm 1 \}^N. \end{eqnarray} Therefore, it suffices to show that $g_j$ is upper bounded for any $\vxx \in {\cal X}$. This is obvious for all the examples in this paper. In particular, there are only two types of inequality constraints among Examples \ref{ex:singlerotation}-\ref{ex:category}. (i) The ball constraint $\boldsymbol{t} \in \calB^K_T$ (bounded translation and bounded shape parameters), which reads $g = T^2 - \norm{\boldsymbol{t}}^2 \geq 0$ and certainly satisfies $g \leq T^2$, hence is upper bounded. (ii) The camera FOV cone constraint $\boldsymbol{t} \in {\cal C}_\alpha$, which induces two inequality constraints $g_1 = t_3 \geq 0$ and $g_2 = \tan^2(\alpha/2) t_3^2 - t_1^2 - t_2^2 \geq 0$. However, since the translation also lies in the bounded ball $\calB^3_T$, we have $g_1 = t_3 \leq \norm{\boldsymbol{t}} \leq T$, and $g_2 = \tan^2(\alpha/2) t_3^2 - t_1^2 - t_2^2 \leq \tan^2(\alpha/2) t_3^2 \leq \tan^2(\alpha/2) \norm{\boldsymbol{t}}^2 \leq \tan^2(\alpha/2) T^2$ are both upper bounded. Therefore, we have shown that each localizing matrix also has bounded trace. {\bf A valid lower bound}. Now suppose we are given $\widehat{\vy} \in \Real{m}$; then, for any $\vxx \in {\cal X}, \boldsymbol{\theta} \in \{ \pm 1\}^N$, we have \begin{eqnarray} & p(\vxx,\boldsymbol{\theta}) \nonumber\\ = & \inprod{\M{C}}{\M{X}} \nonumber \\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{\calA^{*} (\widehat{\vy})}{\M{X}} \nonumber \\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{{\cal A} (\M{X})}{\widehat{\vy}} \nonumber\\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{\boldsymbol{b}}{\widehat{\vy}} \label{eq:app-lower-bound-Xfeasible} \\ \geq &\!\!\!\!\! \underbrace{ \inprod{\boldsymbol{b}}{\widehat{\vy}} + \displaystyle \sum_{i=1}^{1+l_g} M_i \cdot \min\{ \lambda_{\min}\parentheses{ [\M{C} - \calA^{*} (\widehat{\vy})]_i },0\} }_{\underline{p}(\widehat{\vy})}, \label{eq:app-lower-bound-tracemineig} \end{eqnarray} where $M_i,i=1,\dots,1+l_g$ are the upper bounds for the traces of the moment matrix and the localizing matrices (shown in previous paragraphs and \eqref{eq:traceboundmomentmat}-\eqref{eq:traceboundlocalizemat}), $[\M{C} - \calA^{*} (\widehat{\vy})]_i$ denotes the $i$-th block of $[\M{C} - \calA^{*} (\widehat{\vy})]$ (recall that both $\M{C}$ and $\calA^{*} (\widehat{\vy})$ are multi-block symmetric matrices, \emph{cf.}\xspace \eqref{eq:adjointAmultiblk}), and $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a symmetric matrix. In \eqref{eq:app-lower-bound-Xfeasible}, we used the fact that any $\M{X}$ constructed from a moment matrix and localizing matrices must be primal feasible and hence satisfies ${\cal A}(\M{X}) = \boldsymbol{b}$. In \eqref{eq:app-lower-bound-tracemineig}, we used that $\inprod{\M{A}}{\M{B}} \geq \lambda_{\min}(\M{A}) \trace{\M{B}}$ for any $\M{A}\in \sym{n}$ and $\M{B} \in \psd{n}$.
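Evaluating $\underline{p}(\widehat{\vy})$ is cheap, since it only requires one minimum-eigenvalue computation per block. The following is a minimal sketch (in Python; the dense per-block interface is hypothetical) of the bound in \eqref{eq:app-lower-bound-tracemineig}:
\begin{verbatim}
import numpy as np

def valid_lower_bound(y_hat, b, C_blocks, Aadj_blocks, trace_bounds):
    """p_lb(y) = <b,y> + sum_i M_i * min(lambda_min([C - A*(y)]_i), 0).

    C_blocks[i]       i-th block of the cost matrix C (ndarray)
    Aadj_blocks[i](y) i-th block of A*(y) (callable; hypothetical interface)
    trace_bounds[i]   trace bound M_i of the i-th moment/localizing matrix
    """
    val = float(b @ y_hat)
    for C_i, Aadj_i, M_i in zip(C_blocks, Aadj_blocks, trace_bounds):
        lam_min = np.linalg.eigvalsh(C_i - Aadj_i(y_hat)).min()
        val += M_i * min(lam_min, 0.0)
    return val
\end{verbatim}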
With this lower bound $\underline{p}(\widehat{\vy})$, we can compute the relative suboptimality from any $\widehat{\vy} \in \Real{m}$: \begin{eqnarray} \eta_s \triangleq \frac{\abs{\underline{p}(\widehat{\vy}) - \widehat{p}}}{1 + \abs{\underline{p}(\widehat{\vy})} + \abs{\widehat{p}}}. \end{eqnarray} \section{Conclusions} \label{sec:conclusion} We presented the first general {and scalable\xspace} framework to design certifiable algorithms for outlier-robust geometric perception. We first showed that estimation with several common robust cost functions can be reformulated as polynomial optimization problems. We then designed a semidefinite relaxation scheme that exploits the sparsity of outlier-robust estimation to generate SDPs of much smaller sizes {while maintaining empirical exactness}. Finally, we proposed a robust and scalable SDP solver, {\scenario{STRIDE}}, that can solve the sparse relaxations to unprecedented scale and accuracy. We tested our framework on six geometric perception applications using both synthetic and real data, demonstrating its robustness, scalability, and capability to safeguard existing fast heuristics for robust estimation. \section{Proof of Proposition~\ref{prop:robustaspop}} \label{sec:app-proof-robust-pop} \begin{proof} We first prove (i)-(iv) using Black-Rangarajan duality \cite{Black96ijcv-unification}, and then (v)-(vii) by manipulating the cost functions. {\bf Proof of (i)-(iv)}. The TLS proof has been given in \eqref{eq:binaryTLS} of the main text. We start with (ii) MC. With a similar strategy of introducing a binary variable as in \eqref{eq:binaryTLS}, we can write the MC cost function as \begin{eqnarray}\label{eq:mcidentity} \rho_{\mathrm{MC}} \equiv \min_{\theta \in \{+1,-1\}} \cbrace{ \frac{1-\theta}{2} \ \middle\vert\ -\theta(r^2 - \beta_i^2) \geq 0 }, \end{eqnarray} where the constraint $-\theta(r^2 - \beta_i^2) \geq 0$ enforces $\theta = -1$ if $r^2 > \beta_i^2$ (hence $\rho_{\mathrm{MC}} = 1$), and $\theta = +1$ if $r^2 < \beta_i^2$ (hence $\rho_{\mathrm{MC}} = 0$). If $r^2 = \beta_i^2$, then the minimization selects $\theta = +1$, since that makes the objective zero. Using the identity in \eqref{eq:mcidentity}, problem \eqref{eq:robust} with $\rho = \rho_{\mathrm{MC}}$ is equivalent to \begin{equation}\label{eq:dualMC} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in \{\pm 1 \}^N}} \cbrace{ \sum_{i=1}^N \frac{1-\theta_i}{2} + \psi(\vxx) \ \middle\vert\ \substack{ \displaystyle -\theta_i (r^2(\vxx,\boldsymbol{z}_i) - \beta_i^2) \geq 0, \\ \displaystyle \forall i=1,\dots,N} }, \tag{MC} \end{equation} which is an instance of \eqref{eq:pop} in $(\vxx,\boldsymbol{\theta})\in \Real{d+N}$. To prove (iii), we leverage Black-Rangarajan duality \cite[Fig. 28]{Black96ijcv-unification} and write $\rho_{\mathrm{GM}}$ as a minimization problem by introducing a confidence variable $w \in [0,1]$ \begin{eqnarray}\label{eq:GMidentity} \rho_{\mathrm{GM}} \equiv \min_{w \in [0,1]} w \frac{r^2}{\beta_i^2} + (\sqrt{w}-1)^2. \end{eqnarray} One can check the correctness of \eqref{eq:GMidentity} by setting the derivative of the objective in \eqref{eq:GMidentity} w.r.t.\xspace $w$ to zero and obtaining $w$ as a function of $r$ in closed form. Eq.~\eqref{eq:GMidentity}, however, does not directly lead to a POP due to the existence of $\sqrt{w}$.
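The identity \eqref{eq:GMidentity} is also easy to confirm numerically. The following minimal sketch (in Python; the test values of $r$ are arbitrary) compares the inner minimization over $w$ against the closed-form Geman-McClure value:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

beta = 1.0
for r in [0.1, 0.5, 1.0, 3.0, 10.0]:
    a = (r / beta) ** 2
    # inner minimization over the confidence variable w in [0, 1]
    inner = minimize_scalar(lambda w: w * a + (np.sqrt(w) - 1.0) ** 2,
                            bounds=(0.0, 1.0), method="bounded",
                            options={"xatol": 1e-10})
    rho_gm = a / (1.0 + a)  # closed form: (r^2/beta^2) / (1 + r^2/beta^2)
    assert abs(inner.fun - rho_gm) < 1e-8
\end{verbatim}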
Nevertheless, with a change of variable $\theta := \sqrt{w} \in [0,1]$, we can write problem \eqref{eq:robust} with $\rho = \rho_{\mathrm{GM}}$ as the following POP \begin{equation}\label{eq:dualGM} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in [0,1]^N}} \sum_{i=1}^N \frac{\theta_i^2 r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + (\theta_i - 1)^2 + \psi(\vxx). \tag{GM} \end{equation} Similarly, we can use Black-Rangarajan duality \cite[Fig. 25]{Black96ijcv-unification} to prove (iv) by introducing a confidence variable $w$ and writing $\rho_{\mathrm{TB}}$ as the solution of the following minimization \begin{eqnarray} \rho_{\mathrm{TB}} \equiv \min_{w \in [0,1]} w \frac{r^2}{\beta_i^2} + \frac{1}{3} - w + \frac{2}{3} w^{\frac{3}{2}}. \end{eqnarray} Then, with a change of variable $\theta := \sqrt{w}$, we conclude that \eqref{eq:robust} with $\rho = \rho_{\mathrm{TB}}$ can be written as the following POP \begin{equation}\label{eq:dualTB} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in [0,1]^N}} \sum_{i=1}^N \frac{\theta_i^2 r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1}{3} - \theta_i^2 + \frac{2}{3} \theta_i^3 + \psi(\vxx). \tag{TB} \end{equation} In \eqref{eq:binaryTLS} and \eqref{eq:dualMC}, $\theta_i$ is binary and discrete, with $\theta_i= +1$ (resp. $\theta_i = -1$) indicating the $i$-th measurement $\boldsymbol{z}_i$ is an inlier (resp. outlier). In \eqref{eq:dualGM} and \eqref{eq:dualTB}, by contrast, $\theta_i$ is continuous, with $\theta_i \uparrow 1$ (resp. $\theta_i \downarrow 0$) indicating the $i$-th measurement $\boldsymbol{z}_i$ is an inlier (resp. outlier). {\bf Proof of (v)-(vii)}. The L1 cost function can be simply rewritten as \begin{eqnarray} \rho_{\mathrm{L1}} \equiv \cbrace{ \frac{\gamma}{\beta_i}\ \middle\vert\ \gamma \geq 0, \gamma^2 = r^2 }, \end{eqnarray} where $\gamma \geq 0$ and $\gamma^2 = r^2$ imply that $\gamma = \abs{r}$. Therefore, \eqref{eq:robust} with $\rho = \rho_{\mathrm{L1}}$ is equivalent to the following POP: \begin{equation} \hspace{-5mm}\min_{\substack{\vxx \in {\cal X} \subseteq \Real{d},\\ \vgamma \in \Real{N}} } \cbrace{ \sum_{i=1}^N \frac{\gamma_i}{\beta_i} \ \middle\vert\ \gamma_i \geq 0, \gamma_i^2 = r^2(\vxx,\boldsymbol{z}_i),i=1,\dots,N}\!\!.\!\!\!\tag{L1} \end{equation} Now we prove that \eqref{eq:robust} with the Huber loss \cite{Huber81} can also be written as a POP. We first perform a change of variable and let $\gamma = \abs{r}$ (which is equivalent to $\gamma \geq 0, \gamma^2 = r^2$): \begin{eqnarray} \label{eq:huberafterabs} \rho_{\mathrm{HB}} = \begin{cases} \frac{\gamma^2}{2\beta_i^2} & \gamma \leq \beta_i \\ \frac{\gamma}{\beta_i} - \frac{1}{2} & \text{otherwise} \end{cases}. \end{eqnarray} Then we introduce a binary variable $\theta \in \{ +1, -1\}$, and equivalently write \eqref{eq:huberafterabs} as \begin{eqnarray} \rho_{\mathrm{HB}} = \cbrace{ \frac{1+\theta}{2} \frac{\gamma^2}{2\beta_i^2} + \frac{1-\theta}{2} \parentheses{\frac{\gamma}{\beta_i} - \frac{1}{2}} \middle\vert \theta (\gamma - \beta_i) \leq 0 }, \end{eqnarray} where the constraint $\theta (\gamma - \beta_i) \leq 0$ enforces $\theta = -1$ when $\gamma > \beta_i$ (hence $\rho_{\mathrm{HB}} = \frac{\gamma}{\beta_i} - \frac{1}{2}$), and $\theta = +1$ when $\gamma < \beta_i$ (hence $\rho_{\mathrm{HB}} = \frac{\gamma^2}{2\beta_i^2}$).
Therefore, we can write \eqref{eq:robust} with $\rho = \rho_{\mathrm{HB}}$ as the following POP: \begin{equation} \hspace{-6mm}\min_{\substack{\vxx \in {\cal X}, \vgamma \in \Real{N}, \\ \boldsymbol{\theta} \in \{ \pm 1\}^N} } \cbrace{ \sum_{i=1}^N \frac{1+\theta_i}{2} \frac{\gamma_i^2}{2\beta_i^2} + \frac{1-\theta_i}{2} \parentheses{\frac{\gamma_i}{\beta_i} - \frac{1}{2}} \middle\vert \substack{ \gamma_i \geq 0, \\ \gamma_i^2 = r^2(\vxx,\boldsymbol{z}_i), \\ \theta_i (\gamma_i - \beta_i) \leq 0,\\ i=1,\dots,N }}\!\!. \tag{HB} \end{equation} Finally, we prove that \eqref{eq:robust} with the adaptive cost function $\rho_{\mathrm{ADT},s}$, proposed by Barron \cite{Barron19cvpr-adaptRobustLoss}, can also be written as a POP, when we restrict the scale parameter $s$ to be a rational number (we avoid $s = 0$ and $s=2$ because the cost function is not defined at those two values, and \cite{Barron19cvpr-adaptRobustLoss} augments the cost by taking its limits at $s = 0$ and $s=2$). Note that restricting $s$ to rational numbers preserves the expressiveness of the original adaptive cost in \cite{Barron19cvpr-adaptRobustLoss}, because the set of rational numbers is \emph{dense} in the set of real numbers. Because $s$ is a rational number, we can let $s = \frac{p}{q}$ with integers $p$ and $q$, and write the adaptive cost as \begin{eqnarray} \rho_{\mathrm{ADT},s} = \frac{\abs{s-2}}{s} \parentheses{ \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{p}{2q}} - 1 }. \end{eqnarray} Now we perform a change of variable and let $\gamma = \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{p}{2q}}$. This change of variable is equivalent to the following polynomial equality constraint: \begin{eqnarray} 0 = h(\gamma,r^2) := \begin{cases} \gamma^{2q} - \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^p & p > 0\\ \gamma^{2q} \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\abs{p}} - 1 & p < 0 \end{cases}. \end{eqnarray} Therefore, we conclude that \eqref{eq:robust} with $\rho = \rho_{\mathrm{ADT},s}$ can be written as the following POP: \begin{equation} \hspace{-3mm} \min_{\substack{\vxx \in {\cal X}, \\ \vgamma \in \Real{N}}} \cbrace{ \sum_{i=1}^N \frac{\abs{s-2}}{s} (\gamma_i - 1)\ \middle\vert\ h(\gamma_i,r^2(\vxx,\boldsymbol{z}_i)) = 0,i=1,\dots,N}. \tag{ADT} \end{equation} This concludes the proof for all seven cost functions. \end{proof} \section{Outlier-Robust Estimation as POP} \label{sec:robustandpop} \input{sections/fig-robustcosts} In this section, we consider a general formulation of estimation with robust cost functions. We show that, for {seven}\xspace popular robust costs, this formulation can be recast as a~\eqref{eq:pop}. We conclude the section by showcasing the resulting formulation on six perception problems. {\bf Outlier-robust estimation}. Given a set of $N$ measurements ${\cal Z} = \{\boldsymbol{z}_i\}_{i=1}^N$ (\emph{e.g.},\xspace 2D image keypoints, 3D point clouds, relative poses), we consider the problem of using ${\cal Z}$ to estimate an unknown geometric model $\vxx \in {\cal X} \subseteq \Real{d}$ (\emph{e.g.},\xspace camera poses, rigid transformations, 3D shapes, robot trajectory), despite the fact that the measurement set ${\cal Z}$ may contain a large number of \emph{outliers}.
Building on standard M-estimation \cite{Maronna19book-robustStats,MacTavish15crv-robustEstimation}, we perform outlier-robust estimation by solving the following optimization problem \begin{equation}\label{eq:robust} \min_{\vxx \in {\cal X} \subseteq \Real{d}}\ \ \sum_{i=1}^N \rho(r(\vxx,\boldsymbol{z}_i), \beta_i) + \psi(\vxx), \tag{Robust} \end{equation} where $r(\vxx,\boldsymbol{z}_i)$ is a (scalar) \emph{residual} function that measures the mismatch between $\vxx$ and $\boldsymbol{z}_i$ (\emph{e.g.},\xspace Euclidean distances, pose errors), $\beta_i > 0$ (set by the user) is the \emph{maximum admissible residual} for a measurement to be considered as an \emph{inlier} (or the minimum residual to be an outlier), and $\rho(r,\beta_i)$ is a \emph{robust} cost function that penalizes outliers much \emph{less} than inliers to prevent outliers from contaminating the estimate. We include a \emph{regularization} term $\psi(\vxx)$ in~\eqref{eq:robust}, to keep full generality: as we will see in the examples below, a regularizer is often added to high-dimensional estimation problems to ensure the solution is unique and well-behaved. We make the following assumption on problem \eqref{eq:robust}. \begin{assumption}[Polynomial Residual, Constraint, and Regularization] \label{assumption:polynomialsrobust} In \eqref{eq:robust}, assume (i) $r^2, \psi$ are polynomials; (ii) the constraint $\vxx \in {\cal X}$ can be described by finitely many polynomial equalities and inequalities, \emph{i.e.},\xspace ${\cal X} = \{\vxx \in \Real{d}\mid h_i(\vxx)=0, i=1,\dots,l_h, g_j(\vxx) \geq 0, j=1,\dots,l_g \}$. \end{assumption} Assumption \ref{assumption:polynomialsrobust} is the prerequisite for applying the machinery of semidefinite relaxation for POP in Section \ref{sec:pre-pop}. These assumptions are often mild in geometric perception problems, a point that will become clearer when we introduce the six examples later in this section \mbox{(\emph{cf.}\xspace Proposition \ref{prop:polynomialExpressibility})}. Now the only component of \eqref{eq:robust} that may prevent it from being a POP is the robust cost $\rho(r,\beta_i)$. In \emph{outlier-free} estimation, $\rho = r^2/\beta_i^2$ is chosen as the \emph{least squares} cost and \eqref{eq:robust} is immediately in the form of \eqref{eq:pop}. However, in outlier-robust estimation, $\rho$ is typically not a polynomial. For instance, let us consider the \emph{truncated least squares} (TLS) cost, which will be extensively used in this paper: \begin{equation} \label{eq:tlsdef} \rho_{\mathrm{TLS}}(r,\beta_i) \triangleq \min \cbrace{ \frac{r^2}{\beta_i^2}, 1 } = \begin{cases} \frac{r^2}{\beta_i^2} & \abs{r} \leq \beta_i \\ 1 & \text{otherwise} \end{cases}. \end{equation} The TLS cost~\eqref{eq:tlsdef} is apparently not a polynomial, and it is not even a \emph{smooth} function (\emph{cf.}\xspace Fig. \ref{fig:robust-costs}(a)). {\bf Reformulation as POP}. To build intuition, we now show that \eqref{eq:robust} with the TLS cost \eqref{eq:tlsdef} can be reformulated as a POP; then we generalize this conclusion to other cost functions in Proposition~\ref{prop:robustaspop}.
The key observation is that, for any $a,b \in \Real{}$, $\min \{a, b\} \equiv \min_{\theta \in \{ +1,-1\}} \frac{1+\theta}{2} a + \frac{1-\theta}{2} b$, which allows recasting \eqref{eq:robust} with $\rho = \rho_{\mathrm{TLS}}$ as \begin{equation}\label{eq:binaryTLS} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in \{\pm 1\}^N} }\ \sum_{i=1}^N \frac{1+\theta_i}{2} \frac{r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1-\theta_i}{2} + \psi(\vxx), \tag{TLS} \end{equation} where each binary variable $\theta_i \in \{+1,-1\}$ decides whether the $i$-th measurement $\boldsymbol{z}_i$ is an inlier ($\theta_i = +1$) or an outlier ($\theta_i=-1$). By recalling that $\theta_i \in \{+1,-1\} \Leftrightarrow \theta_i^2 - 1 = 0$, we see that problem \eqref{eq:binaryTLS} is an instance of \eqref{eq:pop}, with the decision variables now being $(\vxx, \boldsymbol{\theta}) \in \Real{d+N}$. The next proposition states that the reformulation above can be generalized to a broader set of robust cost functions. \begin{proposition}[Robust Estimation as POP] \label{prop:robustaspop} Under Assumption \ref{assumption:polynomialsrobust}, if the cost function $\rho$ in \eqref{eq:robust} is one of the following: \begin{enumerate}[label=(\roman*)] \item truncated least squares (TLS): $\rho_{\mathrm{TLS}} \triangleq \min \cbrace{ \displaystyle \frac{r^2}{\beta_i^2}, 1 } $; \item maximum consensus: $\rho_{\mathrm{MC}} \triangleq \begin{cases} 0 & \abs{r} \leq \beta_i \\ 1 & \text{otherwise} \end{cases}$; \item Geman-McClure: $\rho_{\mathrm{GM}} \triangleq \frac{\displaystyle r^2/\beta_i^2}{\displaystyle 1+r^2/\beta_i^2}$; \item Tukey's Biweight: $\rho_{\mathrm{TB}} \triangleq \begin{cases} \frac{r^2}{\beta_i^2} - \frac{r^4}{\beta_i^4} + \frac{r^6}{3\beta_i^6} & \abs{r}\leq \beta_i \\ \frac{1}{3} & \text{otherwise} \end{cases}$, \end{enumerate} then \eqref{eq:robust} can be recast as a \eqref{eq:pop} with $d+N$ variables, where each of the additional $N$ variables indicates the confidence of the corresponding measurement being an inlier. Moreover, \eqref{eq:robust} with the following costs can also be written as a \eqref{eq:pop} \begin{enumerate}[label=(\roman*)] \setcounter{enumi}{4} \item L1: $\rho_{\mathrm{L1}} \triangleq \abs{r}/\beta_i $; \item Huber: $\rho_{\mathrm{HB}} \triangleq \begin{cases} \frac{r^2}{2\beta_i^2} & \abs{r} \leq \beta_i \\ \frac{\abs{r}}{\beta_i} - \frac{1}{2} & \text{otherwise} \end{cases}$; \item Adaptive \cite{Barron19cvpr-adaptRobustLoss}: $\rho_{\mathrm{ADT},s} \triangleq \frac{\abs{s-2}}{s} \parentheses{ \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{s}{2}} - 1} $, \\for a given scale parameter $s \in \mathbb{Q} \backslash \{0,2\}$, \end{enumerate} by adding slack variable(s) for each measurement. \end{proposition} Fig. \ref{fig:robust-costs} plots the seven robust costs (Fig. \ref{fig:robust-costs}(g) shows $\rho_{\mathrm{ADT},s}$ for six different values of $s$). While we postpone the proof to Supplementary Material\xspace, the key insight is that for common robust cost functions we can either (a) use Black-Rangarajan duality~\cite{Black96ijcv-unification} to convert them into polynomials by introducing additional slack variables -- one for each measurement (we use this approach for (i)-(iv)), or (b) directly manipulate them into polynomials by a change of variables (for (v)-(vii)). \input{sections/fig-applications} {\bf Perception examples}.
We now shed some light on the generality of the formulation~\eqref{eq:robust} and Assumption~\ref{assumption:polynomialsrobust} by considering six outlier-robust geometric perception problems. We first present the examples and then conclude that they all satisfy Assumption~\ref{assumption:polynomialsrobust} in Proposition~\ref{prop:polynomialExpressibility}. We assume $\psi(\vxx) = 0$ unless otherwise mentioned. \setcounter{theorem}{0} \begin{example}[Single Rotation Averaging \cite{Hartley13ijcv}] \label{ex:singlerotation} Given $N$ measurements of an unknown $q$-dimensional rotation $\{ \boldsymbol{z}_i = \widetilde{\MR}_i \in \mathrm{SO}(\dimrot) \}_{i=1}^N$, single rotation averaging seeks to find the best average rotation $\vxx = \M{R} \in \mathrm{SO}(\dimrot)$. The residual function is chosen as the chordal distance between $\M{R}$ and $\widetilde{\MR}_i$: $r(\vxx, \boldsymbol{z}_i) = \Vert \M{R} - \widetilde{\MR}_i \Vert$. Fig.~\ref{fig:applications}(a) plots an instance of 3D single rotation averaging with $20$ measurements (rotations are plotted as 3D coordinate frames), among which there is a single outlier (shown as transparent). \end{example} \begin{example}[Multiple Rotation Averaging \cite{Eriksson18cvpr-strongDuality,Carlone16TRO-planarPGO,Lajoie19ral-DCGM}] \label{ex:multirotation} Let ${\cal G} = ({\cal V},{\cal E})$ be an undirected graph with vertex set ${\cal V} = [n]$ and edge set ${\cal E}$. Each vertex $i \in {\cal V}$ is associated with an unknown rotation $\M{R}_i \in \mathrm{SO}(\dimrot)$ (typically $q=2$ or $q=3$), while each edge $(i,j) \in {\cal E}$ gives a relative rotation measurement $\widetilde{\MR}_{ij} \in \mathrm{SO}(\dimrot)$ between the unknown rotations at vertices $i$ and $j$. Multiple rotation averaging estimates the set of absolute rotations on the vertices $\vxx = \{\M{R}_i\}_{i \in {\cal V}} \in \mathrm{SO}(\dimrot)^{n}$ from relative measurements over ${\cal E}$. The residual function is chosen as the chordal distance between $\M{R}_i \widetilde{\MR}_{ij}$ and $\M{R}_j$ for $(i,j) \in {\cal E}$: $r(\vxx,\boldsymbol{z}_{ij}) = \Vert \M{R}_i \widetilde{\MR}_{ij} - \M{R}_j \Vert$. \maybeOmit{Optionally, if a set ${\cal R}$ of relative measurements is known to be free of outliers (\emph{e.g.},\xspace odometry measurements in robot navigation), then a regularization $\psi(\vxx) = \sum_{(i,j) \in {\cal R}} \Vert \M{R}_i \widetilde{\MR}_{ij} - \M{R}_j \Vert^2$ is added to \eqref{eq:robust}.} Fig. \ref{fig:applications}(b) plots an instance of 2D multiple rotation averaging with $9$ (unknown) absolute rotations and $11$ (measured) relative measurements, two of which are outliers (shown in red). \end{example} \begin{example}[Point Cloud Registration \cite{Yang20tro-teaser}] \label{ex:pointcloud} Given two sets of 3D points with putative correspondences $\{ \boldsymbol{z}_i = (\boldsymbol{p}_i, \boldsymbol{q}_i) \}_{i=1}^{N}$ (\emph{e.g.},\xspace matched by deep-learned features \cite{Yang21cvpr-sgp}), point cloud registration seeks the best rigid transformation $\vxx = (\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ to align them. The residual function is chosen as the Euclidean distance between pairs of points after applying the rigid transformation: $r(\vxx,\boldsymbol{z}_i) = \norm{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}$.
For mathematical convenience (\emph{i.e.},\xspace to satisfy the Archimedeanness condition in Theorem \ref{thm:lasserre}), we assume the translation to be bounded: $\boldsymbol{t} \in \calB^3_T$, where $ \calB^q_T \triangleq \{\boldsymbol{t} \in \Real{q}\mid \norm{\boldsymbol{t}} \leq T \}$ defines a $q$-dimensional ball centered at the origin with radius $T$. Fig. \ref{fig:applications}(c) plots an instance of point cloud registration using the {\scenario{Bunny}} dataset \cite{Curless96siggraph} (outlier correspondences are shown in red). \end{example} \begin{example}[Mesh Registration \cite{Briales17cvpr-registration,Shi21icra-robin}] \label{ex:mesh} Given a set of $N$ putative correspondences from a 3D point cloud to a 3D mesh, where the point cloud $\{(\boldsymbol{p}_i,\boldsymbol{u}_i) \}_{i=1}^N$ is represented as a collection of points ($\boldsymbol{p}_i \in \Real{3}$) with estimated normals ($\boldsymbol{u}_i \in \usphere{2}$), and the mesh $\{(\boldsymbol{q}_i,\boldsymbol{v}_i )\}_{i=1}^N$ is represented as a collection of faces with unit normals $(\boldsymbol{v}_i \in \usphere{2})$ and arbitrary points that belong to them $(\boldsymbol{q}_i \in \Real{3})$, mesh registration seeks the best rigid transformation $\vxx = (\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ to align the point cloud with the mesh. The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i) =\sqrt{ \norm{\inprod{\boldsymbol{v}_i}{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}}^2 + \norm{\boldsymbol{v}_i - \M{R} \boldsymbol{u}_i}^2 }$, where $\norm{\inprod{\boldsymbol{v}_i}{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}}$ is the point-to-plane distance, and $\norm{\boldsymbol{v}_i - \M{R} \boldsymbol{u}_i}$ is the normal-to-normal distance. Similar to Example \ref{ex:pointcloud}, we enforce $\boldsymbol{t} \in \calB^3_T$. Fig. \ref{fig:applications}(d) visualizes an instance of mesh registration using the {\scenario{TeddyBear}} model from the {\scenario{HomebrewedDB}} dataset \cite{Kaskman19-homebrewedDB} (outlier correspondences shown in red). \end{example} \begin{example}[Absolute Pose Estimation \cite{Kneip2014ECCV-UPnP,Schweighofer2008bmvc-SOSforPnP,Yang21arxiv-damp}] \label{ex:absolutepose} Consider a camera with field of view (FOV) $\alpha \in (0,\pi)$ picturing a 3D object {(conventionally centered at zero)}. Given a set of $N$ putative correspondences between 3D keypoints $\{\boldsymbol{p}_i \in \Real{3} \}_{i=1}^N$ on the object and 2D image keypoint detections $\{\boldsymbol{u}_i \in \usphere{2}\}_{i=1}^N$, where $\boldsymbol{u}_i$ denotes the unit bearing vector corresponding to the $i$-th 2D keypoint, absolute pose estimation (also known as \emph{Perspective-$n$-Points}) seeks to estimate the absolute camera pose $\vxx = (\M{R},\boldsymbol{t})\in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ from the 2D-3D correspondences. 
The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i)\!=\!\sqrt{\inprod{\M{R} \boldsymbol{p}_i + \boldsymbol{t}}{({\mathbf I}_3 - \boldsymbol{u}_i\boldsymbol{u}_i^{\mathsf{T}})(\M{R} \boldsymbol{p}_i + \boldsymbol{t})} }$, \emph{i.e.},\xspace the point-to-line distance from the transformed 3D keypoint $\M{R}\boldsymbol{p}_i + \boldsymbol{t}$ (in camera frame) to the bearing vector $\boldsymbol{u}_i$.\footnote{Instead of using the geometric reprojection error as the residual (a rational function), we follow \cite{Yang21arxiv-damp,Schweighofer2008bmvc-SOSforPnP} and choose the point-to-line distance as the residual so that $r^2$ is a polynomial per Assumption \ref{assumption:polynomialsrobust}.} In this paper, we enforce $\boldsymbol{t} \in \calB^3_T \cap {\cal C}_{\alpha}$, where ${\cal C}_{\alpha}\triangleq \{ \boldsymbol{t} \in \Real{3} \mid \tan(\frac{\alpha}{2}) t_3 \geq \sqrt{t_1^2 + t_2^2} \}$ defines the 3D cone corresponding to the camera FOV; the constraint $\boldsymbol{t} \in {\cal C}_\alpha$ enforces the center of the 3D object (\emph{i.e.},\xspace~$\M{R}\cdot{\mathbf 0} + \boldsymbol{t} = \boldsymbol{t}$ in camera frame) to lie inside the FOV. Fig.~\ref{fig:applications}(e) shows an instance of absolute pose estimation using a satellite \mbox{image from the {\scenario{SPEED}} dataset~\cite{Sharma19arxiv-SPEED} (outliers in red).} \end{example} \begin{example}[Category-Level Object Pose and Shape Estimation \cite{Shi21rss-pace}] \label{ex:category} Given $N$ 3D semantic keypoint observations $\{ \boldsymbol{p}_i \}_{i=1}^N$ of an object of a certain category (\emph{e.g.},\xspace car, chair), category-level perception estimates the object pose and shape. We consider the standard \emph{active shape model}, where the unknown shape of the object is described as a nonnegative combination of $K$ shapes in a library $\{ \{\boldsymbol{q}_{k,i}\}_{i=1}^N \}_{k=1}^K$ (the \emph{bases}, which intuitively correspond to examples of objects in that category). Hence, category-level perception estimates the pose $(\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ and shape coefficients $\boldsymbol{c} \in \mathbb{R}^{K}_{+}$ describing the object. The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i) = \Vert \M{R} \boldsymbol{p}_i + \boldsymbol{t} - \sum_{k=1}^K c_k \boldsymbol{q}_{k,i} \Vert$, \emph{i.e.},\xspace the Euclidean distance between the transformed 3D keypoint detections and the nonnegative combination of the shape bases. We include $\psi(\vxx,\lambda) = \lambda \norm{\boldsymbol{c}}^2$ as a regularization for the shape parameters $\boldsymbol{c}$, as in~\cite{Shi21rss-pace}. Again, we enforce $\boldsymbol{t} \in \calB^3_T$ and $\boldsymbol{c} \in \calB^K_T$ so that both are bounded. Fig. \ref{fig:applications}(f) pictures an example of category-level perception from the {\scenario{ApolloScape}} dataset~\cite{Wang19pami-apolloscape}, where one estimates the pose and shape of a vehicle given 2D semantic keypoint detections with associated depth values (outliers shown in red). \end{example} \begin{proposition}[Polynomial Expressibility] \label{prop:polynomialExpressibility} Examples \ref{ex:singlerotation}-\ref{ex:category} satisfy Assumption~\ref{assumption:polynomialsrobust}.
Precisely, (i) $r^2$ and $\psi$ (if $\psi \neq 0$) are quadratic polynomials (\emph{i.e.},\xspace $\deg{r^2}=\deg{\psi}=2$); (ii) the constraint set ${\cal X}$ can be described by polynomial equalities $h_i$'s and inequalities $g_j$'s with degree up to 2 (\emph{i.e.},\xspace $\deg{h_i},\deg{g_j} \leq 2$). \end{proposition} While we postpone the proof to Supplementary Material\xspace, we observe that the key insights behind the proof are simple but powerful: (i) rigid body transformations can be expressed as linear functions (\emph{e.g.},\xspace $\M{R} \boldsymbol{p}_i + \boldsymbol{t}$ for a given point $\boldsymbol{p}_i$), (ii) squared residuals $r^2$ (and our regularizer $\psi$) are commonly squared L2 norms, which can be written as quadratic functions, and (iii) the set of poses and rotations can be described by quadratic (in-)equality constraints, a fact already used in, \emph{e.g.},\xspace~\cite{Tron15RSSW-rotationdeterminant,Carlone15icra-verification,Briales18cvpr-global2view,Yang20cvpr-perfectshape}. Propositions \ref{prop:robustaspop} and \ref{prop:polynomialExpressibility} together establish that outlier-robust geometric perception \eqref{eq:robust} with the TLS, MC, GM, TB, L1, Huber and Adaptive costs (Fig. \ref{fig:robust-costs}), when applied to Examples \ref{ex:singlerotation}-\ref{ex:category} (Fig. \ref{fig:applications}), is an instance of~\eqref{eq:pop}. The expert reader will also recognize other geometric perception problems that satisfy Assumption~\ref{assumption:polynomialsrobust}, including 2D-2D relative pose estimation \cite{Briales18cvpr-global2view}, triangulation \cite{Aholt12eccv-qcqptriangulation}, rotation search (Wahba problem) \cite{Yang19iccv-quasar}, pose graph optimization \cite{Rosen19IJRR-sesync}, among others. \revise{Although the bundle adjustment problem \cite{Agarwal10eccv} cannot be written as a POP using the geometric reprojection error, adopting the point-to-line error can put bundle adjustment in the form of a POP \cite{Schweighofer06bmvc}. Nevertheless, bundle adjustment typically involves too many variables (\emph{e.g.},\xspace hundreds of camera poses and hundreds of thousands of 3D points) to be practically solvable using existing semidefinite relaxations.} For the rest of the paper, we will focus on designing certifiable algorithms and semidefinite relaxations for~\eqref{eq:robust} with the~\eqref{eq:binaryTLS} cost function. However, the semidefinite relaxations proposed in Section \ref{sec:sdprelax} can be extended to the other costs in Proposition~\ref{prop:robustaspop}, and we leave that exercise to the interested reader. We end this section with a remark about why we prefer the TLS cost over the others in Proposition~\ref{prop:robustaspop}. \begin{remark}[Preference for TLS] \label{remark:TLSvsothers} (i) Compared to GM, TB, L1 and Huber, which still penalize outliers, TLS completely discards outliers. Consequently, TLS can often achieve better robustness to outliers \cite{Yang20ral-gnc,MacTavish15crv-robustEstimation}. (ii) MC also completely discards outliers, but it does not select a model to minimize the inlier residuals. Therefore, there can be an infinite number of solutions to problem \eqref{eq:robust} with equal cost (number of outliers). (iii) The adaptive cost typically leads to POPs with high-degree polynomials, which requires a large $\kappa$ from the relaxation hierarchy and results in SDPs that are intractable.
(iv) TLS can be shown to be a maximum likelihood estimator when the inliers have a Gaussian distribution and the outliers are uniformly distributed, see \cite[Proposition 5]{Antonante20TRO-outlier}. \end{remark} \section*{Acknowledgments} The authors would like to thank Jie Wang, Victor Magron, and Jean B. Lasserre for the discussion about Lasserre's hierarchy and {\scenario{TSSOS}}; Ling Liang and Kim-Chuan Toh for the discussion about SDP solvers; Bo Chen and Tat-Jun Chin for the {\scenario{SPEED}} data; and Jingnan Shi for the {\scenario{ApolloScape}} data. This work was funded by ARL DCIST CRA W911NF-17-2-0181, ONR RAIDER N00014-18-1-2828, MathWorks, NSF CAREER award ``Certifiable Perception for Autonomous Cyber-Physical Systems'', and Lincoln Laboratory's Resilient Perception in Degraded Environments program. \section{Implementation Details for {\scenario{STRIDE}}} In Section \ref{sec:scalableopt}, we presented the {\scenario{STRIDE}} algorithm and proved its global convergence. We noted that the initial point $(\M{X}^0,\boldsymbol{y}^0,\M{S}^0)$ could have a significant impact on the convergence speed of {\scenario{STRIDE}}. Therefore, in {\scenario{STRIDE}} we use existing fast heuristics ({\scenario{GNC}}, {\scenario{RANSAC}}) to generate a \emph{primal} initial guess (\emph{cf.}\xspace Remark \ref{remark:fastheuristics}). In this section, we describe how to generate a \emph{dual} initial guess (Section \ref{sec:app-dual-warmstart}), and how to use Riemannian optimization for local search (Section \ref{sec:app-local-search}). \subsection{Dual Warmstart} \label{sec:app-dual-warmstart} We propose to use a combination of two techniques to generate a good dual initial point $(\boldsymbol{y}^0,\M{S}^0)$. Section \ref{sec:app-cssr} describes a method to relax the \eqref{eq:binaryTLS} problem by exploiting correlative sparsity. Although such a relaxation is not tight, we show that its solution can be used to warmstart {\scenario{STRIDE}}. In Section \ref{sec:app-admmplus}, we present a fast first-order algorithm to refine both the primal and the dual initializations. \subsubsection{Bootstrapping via Correlative Sparsity} \label{sec:app-cssr} The \eqref{eq:binaryTLS} problem has another special property called \emph{correlative sparsity} \cite{Wang21siopt-chordaltssos,Waki06jopt-SOSSparsity,Lasserre06siopt-correlativesparsity}, which, loosely speaking, refers to the property that there exists a partition of the variables $(\vxx,\boldsymbol{\theta})$ into a union of smaller groups, such that (i) each constraint of \eqref{eq:binaryTLS} involves only one group of the variables, and (ii) the objective of \eqref{eq:binaryTLS} can be decomposed into terms, each of which involves only one group of the variables (\emph{cf.}\xspace \cite[Assumption 2]{Lasserre06siopt-correlativesparsity}). In particular, we observe that the objective polynomial, denoted by $p(\vxx,\boldsymbol{\theta})$, can be expressed as a sum of $N$ polynomials: \begin{eqnarray} p(\vxx,\boldsymbol{\theta}) = \sum_{i=1}^N \underbrace{ \parentheses{ \frac{1+\theta_i}{2} \frac{r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1-\theta_i}{2} + \frac{1}{N} \psi(\vxx)} }_{p_i(\vxx,\theta_i)} , \end{eqnarray} where each $p_i$ is a polynomial that only involves $\widetilde{\vxx}_i \triangleq [\vxx \,;\, \theta_i] \in \Real{d+1}$. The constraint polynomials can also be partitioned into $N$ groups where the $i$-th group of constraints only involves $\widetilde{\vxx}_i$.
To see this, note that there are two types of constraints in \eqref{eq:binaryTLS}: the ones that constrain $\vxx$ (to be proper rotations and translations), denoted by ${\cal H}[\vxx]$, and the ones that constrain each $\theta_i$ to be a binary variable, denoted by ${\cal H}[\theta_i] = \{ \theta_i^2 -1 = 0\},i=1,\dots,N$. Therefore, defining ${\cal H}_i \triangleq \{ {\cal H}[\vxx],{\cal H}[\theta_i] \}$, each ${\cal H}_i$ only contains polynomials in $\widetilde{\vxx}_i$, and the union of ${\cal H}_i$ for $i=1,\dots,N$ is the full constraint set of \eqref{eq:binaryTLS}. This correlative sparsity allows us to design an SDP relaxation for \eqref{eq:binaryTLS} using $N$ moment matrices $\M{X}_{v_i}, i=1,\dots,N$, where each $\M{X}_{v_i}$ is defined as \begin{eqnarray} & \hspace{-3mm} \boldsymbol{v}_i(\widetilde{\vxx}_i) \triangleq [1 \,;\, \vxx \,;\, \theta_i \,;\, \theta_i \vxx] \in \Real{2d + 2}, \label{eq:csliftingmonomial}\\ & \hspace{-10mm}\M{X}_{v_i}\! \triangleq\! \boldsymbol{v}_i(\widetilde{\vxx}_i) \boldsymbol{v}_i (\widetilde{\vxx}_i)^{\mathsf{T}}\! =\! \left[\begin{array}{cccc} 1 & \vxx^{\mathsf{T}} & \theta_i & \theta_i \vxx^{\mathsf{T}} \\ \vxx & \vxx \vxx^{\mathsf{T}} & \theta_i \vxx & \theta_i \vxx\vxx^{\mathsf{T}} \\ \theta_i & \theta_i \vxx^{\mathsf{T}} & \theta_i^2 & \theta_i^2 \vxx^{\mathsf{T}} \\ \theta_i \vxx & \theta_i \vxx\vxx^{\mathsf{T}} & \theta_i^2 \vxx & \theta_i^2 \vxx\vxx^{\mathsf{T}} \end{array}\right] \label{eq:csmomentmat} \end{eqnarray} and has a \emph{constant} size $2d+2$. It is easy to verify that $\M{X}_{v_i}$ contains all the monomials in $p_i(\widetilde{\vxx}_i)$ and ${\cal H}_i$. Therefore, by following similar steps as in the main text, we can derive an SDP relaxation that exploits correlative sparsity. \emph{(i) Rewriting \eqref{eq:binaryTLS} using the moment matrices $\{ \M{X}_{v_i} \}_{i=1}^N$}. Because the sparse moment matrix $\M{X}_{v_i}$ contains all monomials in $p_i$, and the \eqref{eq:binaryTLS} cost is a sum of $p_i$'s, we can write the objective of \eqref{eq:binaryTLS} as a linear function of $\{ \M{X}_{v_i} \}_{i=1}^N$: \beq \begin{array}{ll}\label{eq:objectivesparsecs} \!\!\!\!\!\!\text{\grayout{objective}}: & \sum_{i=1}^N \inprod{\M{C}_i}{\M{X}_{v_i}}. \end{array} \eeq \emph{(ii) Relaxing the rank-$1$ constraint on $\{\M{X}_{v_i}\}_{i=1}^N$}. By construction, $\M{X}_{v_i}$ belongs to the set of rank-one positive semidefinite matrices. Since the rank constraint is non-convex, we drop it and only enforce each $\M{X}_{v_i}$ to be positive semidefinite: \beq \begin{array}{ll} \label{eq:eqMomentIsPSDsparsecs} \!\!\!\text{\grayout{moment matrices}}: & \M{X}_{v_i} \succeq 0, i=1,\dots,N. \\ \end{array} \eeq \emph{(iii) Adding redundant constraints}. Now we add moment constraints to each moment matrix $\M{X}_{v_i}$ and use the set of constraints ${\cal H}_i$ to add redundant equality and localizing constraints for $\M{X}_{v_i}$. Because this procedure is the same for each moment matrix $\M{X}_{v_i}$, we will only describe it once for a fixed $i$. First, some monomials can repeat themselves at multiple entries of $\M{X}_{v_i}$. For example, in \eqref{eq:csmomentmat} the ``$\theta_i \vxx$'' block is the same as the ``$\theta_i \vxx^{\mathsf{T}}$'' block up to rearrangement of entries. In fact, the number of \emph{unique} monomials in $\M{X}_{v_i}$ is $m_{2v_i} = 3\mathfrak{t}(d+1)$, while the dimension of $\M{X}_{v_i}$ (in terms of a symmetric matrix) is $\mathfrak{t}(2d+2)$.
Therefore, we can add a total of $m_{\mathrm{mom}_i} = \mathfrak{t}(2d+2) - m_{2v_i} + 1$ \emph{moment constraints}: \beq \begin{array}{ll}\label{eq:momentConstraintssparsecs} \text{\grayout{moment constraints}}: & \inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_{v_i}} = 1, \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_{v_i}} = 0, \\ & j = 1, \ldots, m_{\mathrm{mom}_i}-1, \end{array} \eeq to enforce the repeating monomials in $\M{X}_{v_i}$ to be equal to each other, as well as the leading entry $[\M{X}_{v_i}]_{11} = 1$. Second, we add redundant equality constraints. For each equality constraint $h_k$ in ${\cal H}_i$, we denote by $[\widetilde{\vxx}_i]_{h_k}$ the maximum set of unique monomials such that $h_k \cdot [\widetilde{\vxx}_i]_{h_k}$ only contains monomials in $\M{X}_{v_i}$. Formally, \begin{eqnarray} [\widetilde{\vxx}_i]_{h_k} \triangleq \{\widetilde{\vxx}_i^{\boldsymbol{\alpha}} \mid \mono{ h_k \cdot \widetilde{\vxx}_i^{\boldsymbol{\alpha}} } \subseteq \mono{\M{X}_{v_i}} \}. \label{eq:csliftequalities} \end{eqnarray} Consequently, we can write $h_k \cdot [\widetilde{\vxx}_i]_{h_k} = {\mathbf 0}$ as linear equalities in $\M{X}_{v_i}$: \beq \begin{array}{ll}\label{eq:redundantEqualityConstraintssparsecs} \hspace{-3mm}\!\!\!\text{\grayout{(redundant) equality constraints}}: & \!\!\! \inprod{\M{A}_{\mathrm{req},kj}}{\M{X}_{v_i}} = 0, \\ \!\!\!&\!\!\! k = 1, \ldots, l_{h_i}\\ \!\!\!&\!\!\! j = 1, \ldots, \abs{[\widetilde{\vxx}_i]_{h_k}},\!\!\!\!\!\!\!\!\! \end{array} \eeq where $l_{h_i}$ is the number of equality constraints in ${\cal H}_i$. Finally, for each inequality constraint $g_j$ in ${\cal H}_i$ ($\deg{g_j} \leq 2$ by Proposition \ref{prop:polynomialExpressibility}), we denote by $[\M{X}_1]_{{\cal I}_j}$ the maximum principal submatrix of $\M{X}_1$ (\emph{i.e.},\xspace the order-one full moment matrix) such that $g_j \cdot [\M{X}_1]_{{\cal I}_j}$ only contains monomials in $\M{X}_{v_i}$. Formally, \begin{eqnarray} & [\M{X}_1]_{{\cal I}_j} \triangleq [\M{X}_1]_{{\cal I}_j,{\cal I}_j}, \text{ with } \nonumber \\ & \hspace{-9mm} {\cal I}_j \! =\! \displaystyle \argmax_{{\cal J}} \{ \abs{{\cal J}} \mid \mono{ g_j\! \cdot\! [\M{X}_1]_{{\cal J},{\cal J}} } \subseteq \mono{\M{X}_{v_i}} \}. \end{eqnarray} As a result, calling $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j}$, which is positive semidefinite by construction, we can write down the following localizing matrices and constraints: \beq \begin{array}{ll}\label{eq:cslocalizemat} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_{g_i} \end{array} \eeq \beq \begin{array}{ll}\label{eq:cslocalizecons} \!\!\!\!\!\!\text{\grayout{{localizing} constraints}}: \!\!\!& \!\!\!\inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_{v_i}} = [\M{X}_{g_j}]_{hk} \\ \!\!\!\!\!\!&\!\!\! j = 1, \ldots, l_{g_i}, \\ \!\!\!\!\!\!&\!\!\! 1 \leq h\leq k \leq \abs{{\cal I}_j}, \end{array} \eeq where the linear constraints simply enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_{v_i}$, and $l_{g_i}$ is the number of inequality constraints in ${\cal H}_i$. \emph{(iv) Adding overlapping constraints}. The extra step that needs to be performed when there are multiple moment matrices is to add constraints that enforce \emph{overlapping entries} to be the same. Clearly, from \eqref{eq:csmomentmat}, one can see that the top-left $2 \times 2$ blocks, \emph{i.e.},\xspace $[1 \,;\, \vxx] [1,\vxx^{\mathsf{T}}]$, are shared among $\M{X}_{v_i}$ for all $i=1,\dots,N$.
Therefore, we add the following overlapping constraints: \beq \begin{array}{ll}\label{eq:csoverlapcons} \hspace{-3mm}\!\!\!\text{\grayout{overlapping constraints}}: & \!\!\! [\M{X}_{v_i}]_{\mathrm{ovlp}} = [\M{X}_{v_1}]_{\mathrm{ovlp}}, \\ \!\!\!&\!\!\! i = 2, \dots, N, \end{array} \eeq where $[\M{X}_{v_i}]_{\mathrm{ovlp}}$ refers to the top-left $2\times 2$ group of blocks of $\M{X}_{v_i}$. Steps (i)-(iv) above lead to the following SDP: \begin{equation}\label{eq:correlativerelax} \begin{split} \hspace{-3mm} \min_{\M{X}} \cbrace{ \sum_{i=1}^N \inprod{\M{C}_i}{\M{X}_{v_i}}\ \middle\vert\ {\cal A}(\M{X})\! =\! \boldsymbol{b}, \M{X} \succeq 0} \\ \text{with }\M{X} = \parentheses{ \begin{array}{c} \M{X}_{v_1},\M{X}_{1,1},\dots,\M{X}_{1, l_{g_1}} \\ \M{X}_{v_2},\M{X}_{2,1},\dots,\M{X}_{2, l_{g_2}} \\ \vdots \\ \M{X}_{v_N},\M{X}_{N,1},\dots,\M{X}_{N, l_{g_N}} \end{array}}, \end{split} \tag{CSSR} \end{equation} where, for notational convenience, we write $\M{X}_{i,j}$ for the $j$-th localizing matrix of the $i$-th moment matrix (\emph{cf.}\xspace \eqref{eq:cslocalizemat}), and ${\cal A}(\M{X})=\boldsymbol{b}$ collects all the linear equality constraints from \eqref{eq:momentConstraintssparsecs}, \eqref{eq:redundantEqualityConstraintssparsecs}, \eqref{eq:cslocalizecons}, and \eqref{eq:csoverlapcons}. Comparing \eqref{eq:correlativerelax} with \eqref{eq:sparserelax}, we see that, although \eqref{eq:correlativerelax} has more positive semidefinite blocks than \eqref{eq:sparserelax}, the size of the blocks becomes much smaller, especially when $N$ is large (\eqref{eq:correlativerelax} has $n_1 = 2d+2$, while \eqref{eq:sparserelax} has $n_1 = (1+d)(1+N)$). Therefore, \eqref{eq:correlativerelax} can be solved much more efficiently using off-the-shelf interior point methods such as \scenario{MOSEK} \cite{mosek}. The caveat, however, is that \eqref{eq:correlativerelax} is not tight and cannot provide a certifiably optimal solution to the original \eqref{eq:binaryTLS} problem. {\bf Assembling a dual initialization for {\scenario{STRIDE}}}. Although the \eqref{eq:correlativerelax} relaxation is inexact, it is still useful to solve it because we can use its solution to warmstart {\scenario{STRIDE}}. To do so, let us recall the block structure of \eqref{eq:sparserelax} for the primal variable: \begin{eqnarray} \M{X} = (\M{X}_v, \M{X}_1,\dots,\M{X}_{l_g}). \end{eqnarray} The dual variable $\M{S}$ has the same block structure: \begin{eqnarray} \M{S} = (\M{S}_v, \M{S}_1,\dots,\M{S}_{l_g}), \end{eqnarray} where each block of $\M{S}$ has the same size as the corresponding block of $\M{X}$. With a slight change of notation, let us rewrite the block structure of \eqref{eq:correlativerelax} as: \begin{eqnarray} \M{X}_c = \parentheses{ \begin{array}{c} \M{X}_{v_1},\M{X}_{1,1},\dots,\M{X}_{1, l_g} \\ \M{X}_{v_2},\M{X}_{2,1},\dots,\M{X}_{2, l_g} \\ \vdots \\ \M{X}_{v_N},\M{X}_{N,1},\dots,\M{X}_{N, l_g} \end{array}}, \\ \M{S}_c = \parentheses{ \begin{array}{c} \M{S}_{v_1},\M{S}_{1,1},\dots,\M{S}_{1, l_g} \\ \M{S}_{v_2},\M{S}_{2,1},\dots,\M{S}_{2, l_g} \\ \vdots \\ \M{S}_{v_N},\M{S}_{N,1},\dots,\M{S}_{N, l_g} \end{array}}, \end{eqnarray} where the subscript ``$c$'' stands for correlative, and we have used the fact that $l_{g_i} = l_g$ for all $i=1,\dots,N$, because the only inequality constraints in \eqref{eq:binaryTLS} come from $\vxx \in {\cal X}$, so each ${\cal H}_i$ contains the same number $l_g$ of inequality constraints. Our goal is to generate $\M{S}$, given $\M{S}_c$, for {\scenario{STRIDE}}.
Note that the matrices $\M{S}_v$ ($\M{X}_v$) and $\M{S}_{v_i}$ ($\M{X}_{v_i}$) have different dimensions, so it would be incorrect to simply sum up all the $\{\M{S}_{v_i} \}_{i=1}^N$ to get $\M{S}_v$. The correct way to ``assemble'' $\{\M{S}_{v_i} \}_{i=1}^N$ is as follows. For each $\M{S}_{v_i}$, we define $\widebar{\MS}_{v_i}$ so that it satisfies the polynomial equality \begin{eqnarray} \label{eq:definebarS} \inprod{\widebar{\MS}_{v_i}}{\M{X}_v} \equiv \inprod{\M{S}_{v_i}}{\M{X}_{v_i}} \end{eqnarray} for any $\M{X}_v$ and $\M{X}_{v_i}$ that are \emph{proper} moment matrices (note that both sides of \eqref{eq:definebarS} are polynomials, so the equality requires the coefficients of both polynomials to be equal). This essentially creates $\widebar{\MS}_{v_i}$ as an all-zero matrix, except that the principal submatrix of $\widebar{\MS}_{v_i}$ indexed by the monomials $\boldsymbol{v}_i(\widetilde{\vxx}_i)$ is equal to $\M{S}_{v_i}$. Now that $\widebar{\MS}_{v_i}$ has the same size as $\M{X}_v$ and $\M{S}_v$, we can assemble $\M{S}_v$ as \begin{eqnarray} \label{eq:assembleSv} \M{S}_v = \sum_{i=1}^N \widebar{\MS}_{v_i}, \end{eqnarray} where the rationale for the sum can be partially understood from the complementarity condition of \eqref{eq:sdpKKT}. By the same token, for each $\M{S}_{i,j}$, we create $\widebar{\MS}_{i,j}$ such that \begin{eqnarray}\label{eq:definebarS2} \inprod{\widebar{\MS}_{i,j}}{\M{X}_{j}} \equiv \inprod{\M{S}_{i,j}}{\M{X}_{i,j}}, \quad i = 1,\dots,N,\ j = 1,\dots,l_g, \end{eqnarray} for any $\M{X}_j$ and $\M{X}_{i,j}$ that are proper localizing matrices. Then we assemble $\M{S}_j$ as \begin{eqnarray} \label{eq:assembleSj} \M{S}_j = \sum_{i=1}^N \widebar{\MS}_{i,j}, \quad j=1,\dots,l_g. \end{eqnarray} The rationale for \eqref{eq:definebarS} and \eqref{eq:definebarS2} can be understood from the complementarity condition of the KKT system \eqref{eq:sdpKKT}, and more deeply from the dual perspective of sums-of-squares (SOS) polynomials \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry}: precisely, we are assembling an SOS polynomial in $(\vxx,\boldsymbol{\theta})$ from $N$ SOS polynomials, each involving only the variables $(\vxx,\theta_i)$. Since this is less relevant for the purposes of this paper (it is only used for the warmstart), we only state the assembling procedure in \eqref{eq:assembleSv} and \eqref{eq:assembleSj} without delving into the theory of sums of squares; the interested reader is encouraged to refer to the dual SOS perspective in \cite{lasserre10book-momentsOpt}.
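Concretely, with the monomial ordering of \eqref{eq:sparsebasis}, the assembly \eqref{eq:assembleSv} is a scatter-add of each block into the larger matrix; the following Matlab sketch illustrates the index bookkeeping (the blocks \texttt{Svi} are random stand-ins for the dual solution of \eqref{eq:correlativerelax}):
\begin{verbatim}
% Illustrative sketch of the assembly (eq:assembleSv): scatter-add each
% dual block S_{v_i} into the larger S_v, using the positions of the
% monomials of v_i = [1; x; theta_i; theta_i*x] inside the full sparse
% basis v = [1; x; theta; kron(theta,x)]. Svi are stand-in dual blocks.
d = 3; N = 5;
Svi = arrayfun(@(i) randn(2*d+2), 1:N, 'UniformOutput', false);
Sv  = zeros((1+d)*(1+N));
for i = 1:N
    idx = [1, 1+(1:d), 1+d+i, 1+d+N+(i-1)*d+(1:d)];
    Sv(idx, idx) = Sv(idx, idx) + Svi{i};
end
\end{verbatim}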
\subsubsection{Semi-proximal ADMM} \label{sec:app-admmplus} After obtaining $\M{X}^0$ from primal heuristics such as {\scenario{GNC}} \cite{Yang20ral-gnc} or {\scenario{RANSAC}} \cite{Fischler81}, and $\M{S}^0$ from solving \eqref{eq:correlativerelax} and performing the assembly procedure in Section \ref{sec:app-cssr}, we use the {semi-proximal alternating direction method of multipliers} ({\scenario{ADMM+}}) proposed in \cite{Sun15siopt-admmplus} to refine both the primal and the dual initializations $(\M{X}^0,\M{S}^0)$. The full {\scenario{ADMM+}} algorithm, for solving a standard SDP \eqref{eq:primalSDP}-\eqref{eq:dualSDP}, is presented in Algorithm~\ref{alg:admmplus}. At each iteration of {\scenario{ADMM+}}, the major computation involves solving a linear system (\emph{cf.}\xspace \eqref{eq:admmpluslinsolve1} and \eqref{eq:admmpluslinsolve2}) and performing a projection onto the product of positive semidefinite cones ${\cal K}$ (\emph{cf.}\xspace \eqref{eq:admmplusprojpsd}). Since ${\cal A}$ is typically sparse in our examples, the Cholesky factorization of ${\cal A}\calA^{*}$ can be done efficiently and needs to be performed only once. {\scenario{ADMM+}} is a globally convergent algorithm for solving the SDP \eqref{eq:primalSDP}-\eqref{eq:dualSDP}, and the interested reader can refer to \cite{Sun15siopt-admmplus} for a detailed study. Notably, \cite{Sun15siopt-admmplus} shows that {\scenario{ADMM+}} is typically $2$ to $3$ times faster than a conventional ADMM. In our implementation, we use the function \texttt{admmplus} in {\scenario{SDPNAL+}} \cite{Yang2015mpc-sdpnalplus} to refine $(\M{X}^0,\M{S}^0)$ and warmstart {\scenario{STRIDE}}. Although one can directly pass $(\M{X}^0,\M{S}^0)$ to {\scenario{STRIDE}}, we found empirically that it is beneficial to refine $(\M{X}^0,\M{S}^0)$ using {\scenario{ADMM+}}, because the refined initial points have higher quality, which promotes the convergence of {\scenario{STRIDE}}. In our experiments, we run {\scenario{ADMM+}} for a maximum of $20,000$ iterations, or until $\max\{ \eta_p,\eta_d \}$ falls below a threshold (\emph{e.g.},\xspace $1\ee{-6}$). \input{sections/app-alg-admmplus} \subsection{Local Search and Nonlinear Programming} \label{sec:app-local-search} Recall that the local search step \eqref{eq:nlpinlocalsearch} applies a nonlinear programming (NLP) algorithm to solve the \eqref{eq:binaryTLS} problem from a given initial point. Since \eqref{eq:binaryTLS} is a polynomial optimization, it is straightforward to implement the NLP using {\texttt{fmincon}} in Matlab. However, here we show that it is possible to exploit the smooth manifold structure of \eqref{eq:binaryTLS} and solve it more efficiently with Riemannian optimization \cite{Absil07book} (\emph{e.g.},\xspace using {\scenario{Manopt}} \cite{manopt}). First, we can model the vector of binary variables $\boldsymbol{\theta}$ as an \emph{oblique manifold} of size $N \times 1$ (an oblique manifold contains matrices with unit-norm rows). Second, from Examples \ref{ex:singlerotation}-\ref{ex:category}, we know the geometric model $\vxx$ contains 2D and 3D rotations, which are both smooth manifolds. However, $\vxx$ can also contain a translation $\boldsymbol{t}$ and shape parameters $\boldsymbol{c}$ that do not live on smooth manifolds. Fortunately, we can drop some constraints so that both live on smooth manifolds. For example, in Examples \ref{ex:pointcloud}, \ref{ex:mesh}, and \ref{ex:category}, we can relax $\boldsymbol{t} \in \calB^3_T$ to $\boldsymbol{t} \in \Real{3}$, with the rationale that when the SDP iterate $\M{X}^k$ is close to optimal, $\norm{\boldsymbol{t}} \leq T$ should be naturally satisfied (from the rounding \eqref{eq:roundingrestate}) even without the explicit constraint. Similarly, we relax $\boldsymbol{t} \in \calB^3_T \cap {\cal C}_\alpha$ in Example \ref{ex:absolutepose} to $\boldsymbol{t} \in \Real{3}$, and relax $\boldsymbol{c} \in \mathbb{R}^{K}_{+} \cap \calB^K_T$ in Example \ref{ex:category} to $\boldsymbol{c} \in \mathbb{R}^{K}_{++}$ (vectors with strictly positive entries form a smooth manifold).
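As a concrete illustration, the following Manopt sketch sets up the local search for point cloud registration; for simplicity, it keeps $\boldsymbol{\theta}$ fixed at its rounded values and refines only $(\M{R},\boldsymbol{t})$, and the data are synthetic stand-ins rather than our released implementation:
\begin{verbatim}
% Minimal Manopt sketch of the local search (assumes Manopt is on the
% Matlab path), specialized to point cloud registration with the TLS
% cost of (eq:binaryTLS); theta is kept fixed at its rounded values.
N = 40; barc2 = 1;                        % barc2: max. inlier residual^2
P = randn(3,N);                           % model points
[Q,~] = qr(randn(3)); Rgt = Q*diag([1 1 det(Q)]);
Z = Rgt*P + 0.01*randn(3,N);              % scene points (inliers here)
theta = ones(1,N);                        % rounded binary variables
w = (1+theta)/(2*barc2);                  % TLS weights; outliers get w=0
elems.R = rotationsfactory(3);            % R in SO(3)
elems.t = euclideanfactory(3,1);          % t in R^3 (ball constr. dropped)
problem.M     = productmanifold(elems);
problem.cost  = @(z) sum(w.*sum((Z - z.R*P - z.t).^2,1)) ...
                + sum((1-theta)/2)*barc2;
problem.egrad = @(z) struct('R', -2*((Z - z.R*P - z.t).*w)*P', ...
                            't', -2*sum((Z - z.R*P - z.t).*w, 2));
zopt = trustregions(problem);             % Riemannian trust-region solver
\end{verbatim}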
Note that these modifications do not affect the global convergence of {\scenario{STRIDE}}, because \eqref{eq:accept-reject} will reject any NLP solution that violates the constraints that have been dropped. \section{Introduction} \label{sec:introduction} \IEEEPARstart{G}{EOMETRIC} perception, the task of estimating unknown geometric models ({\emph{e.g.},\xspace} object poses, rotations, 3D structure, robot trajectory) from sensor measurements ({\emph{e.g.},\xspace} images, point clouds, relative poses), is a fundamental problem in computer vision and robotics. It finds extensive applications in object detection and localization \cite{Yang19rss-teaser}, motion estimation and 3D reconstruction \cite{Choi15cvpr-robustReconstruction}, simultaneous localization and mapping (SLAM) \cite{Rosen19IJRR-sesync} and structure from motion (SfM)~\cite{Schonberger16cvpr-SfMRevisited}, virtual and augmented reality \cite{Klein07ismar-PTAM}, and medical imaging \cite{Audette00mia-surveyMedical}, to name a few. A modern machine perception pipeline includes a \emph{perception front-end} that extracts, describes, and matches relevant features from raw sensor data, and a \emph{perception back-end} that estimates the geometric models of interest given the putative feature matches. In practice, due to various sources of imperfection and uncertainty ({\emph{e.g.},\xspace} sensor failures, incorrect detections and matchings by hand-crafted or deep-learned features), a large number of \emph{outliers} ---measurements that carry little to no information about the underlying geometric models--- are generated by the front-end. Therefore, designing an \emph{outlier-robust} back-end that can tolerate large amounts of outliers, a problem also known as \emph{robust fitting} \cite{Chin17slcv-maximumConsensusAdvances} in computer vision and \emph{robust state estimation} \cite{Barfoot17book} in robotics, has been a longstanding quest in both communities. Unfortunately, from a theoretical standpoint, performing robust estimation by discerning \emph{inliers} ({\emph{i.e.},\xspace} the correct and useful measurements) from outliers is known to be NP-hard and \emph{inapproximable} due to its combinatorial nature \cite{Chin17slcv-maximumConsensusAdvances,Antonante20TRO-outlier,Enqvist15IJCV-tractableRobustEstimation,Chin18eccv-robustFitting}. Consequently, existing algorithms for outlier-robust estimation are mostly divided into \emph{fast heuristics}, {\emph{e.g.},\xspace} \scenario{RANSAC} \cite{Fischler81,Chum03-LORANSAC,Barath18-gcRANSAC} and graduated non-convexity (\scenario{GNC}) \cite{Yang20ral-gnc,Black96ijcv-unification,Blake1987book-visualReconstruction}, which are efficient but offer no optimality guarantees, and \emph{global solvers}, {\emph{e.g.},\xspace} Branch-and-Bound \cite{Yang16pami-goicp} and mixed-integer programming \cite{Izatt17isrr-MIPregistration,Li09cvpr-robustFitting}, which guarantee optimality but run in worst-case exponential time. Although in some cases it is acceptable to trade off \emph{optimality} (hence robustness) for \emph{efficiency}, real-time safety-critical applications ---such as autonomous driving and space robotics--- pose high demands for \emph{efficient global optimality}.
The conflict between the {fundamental intractability} of robust estimation and the demand for computational efficiency calls for a paradigm shift: since it is impossible to solve all robust estimation problems in polynomial time, we argue that a useful goal is to design algorithms that perform well on typical instances and are able to \emph{certify} the optimality of the resulting estimates, but at the same time can declare ``failure'' on worst-case instances rather than blindly returning an incorrect estimate. Inspired by related works \cite{Bandeira16crm,Yang20tro-teaser}, we formalize the notion of a \emph{certifiable algorithm} below. \begin{definition}[Certifiable Algorithm]\label{def:certifiablealg} Given an optimization problem $\mathbb{P}(\mathbb{D})$ with input data $\mathbb{D}$, an algorithm $\mathbb{A}$ is said to be {certifiable} if (i) $\mathbb{A}$ runs in polynomial time; and, after solving $\mathbb{P}(\mathbb{D})$, $\mathbb{A}$ \revise{(ii)} either returns the global optimizer of $\mathbb{P}$ together with a certificate of optimality \revise{for common instances of $\mathbb{D}$ (empirically or provably)}, or \revise{(iii)} fails to do so \revise{for the worst instances of $\mathbb{D}$} but provides a measure of suboptimality ({\emph{e.g.},\xspace} a bound on the objective value, or on the distance to the global optimizer). \end{definition} \revise{A certifiable algorithm respects the theoretical intractability of robust estimation \cite{Chin17slcv-maximumConsensusAdvances,Antonante20TRO-outlier} in that it does \emph{not} globally optimize $\mathbb{P}(\mathbb{D})$ for \emph{all} instances of $\mathbb{D}$ and it is allowed to fail in the worst cases (\emph{cf.}\xspace (iii) of Definition \ref{def:certifiablealg}).} \revise{However}, our notion of a certifiable algorithm is stricter than that of \cite{Yang20tro-teaser}, as it requires $\mathbb{A}$ to solve $\mathbb{P}(\mathbb{D})$ to global optimality for common $\mathbb{D}$ \revise{(at least empirically, \emph{cf.}\xspace (ii) of Definition \ref{def:certifiablealg})}. This requirement rules out algorithms that seldom attain global optimality but provide suboptimality guarantees (\emph{e.g.},\xspace approximation algorithms \cite{Vazirani13book-approximation}). \emph{Semidefinite relaxations} are a natural choice for designing certifiable algorithms. If the problem $\mathbb{P}$ is a \emph{polynomial optimization problem} (POP, {\emph{i.e.},\xspace} both its objective and constraints are polynomials), then there exists a standard semidefinite relaxation \emph{hierarchy}, known as Lasserre's hierarchy \cite{Lasserre01siopt-lasserrehierarchy}, that relaxes $\mathbb{P}$ into a hierarchy of {convex} semidefinite programs (SDPs) of increasing size. Each relaxation in this hierarchy can be solved in polynomial time~\cite{todd1998nesterov} and provides a measure of {suboptimality} for the resulting estimate.
Moreover, under mild technical conditions, the suboptimality of these relaxations becomes zero when their size is large enough, in which case we say the relaxation is \emph{exact}, or \emph{tight}.\footnote{\revise{Lasserre's hierarchy respects the worst-case NP-hardness of POPs because one may need an SDP relaxation whose size grows exponentially with the dimension of the POP to attain certifiable optimality.}} We provide an accessible introduction to POPs and their relaxations in Section~\ref{sec:preliminaries}. Semidefinite relaxations have been successfully used to design certifiable algorithms for many geometric perception problems. The pioneering work by Kahl and Henrion \cite{Kahl07IJCV-GlobalOptGeometricReconstruction} applies Lasserre's hierarchy to solve several early perception problems including camera resectioning, homography estimation, and fundamental matrix estimation. Since then, certifiable algorithms have been designed for modern applications such as pose graph optimization \cite{Carlone16TRO-planarPGO,Rosen19IJRR-sesync}, rotation averaging \cite{Eriksson18cvpr-strongDuality,Fredriksson12accv}, triangulation \cite{Cifuentes21SIMAA-rankdeficient,Aholt12eccv-qcqptriangulation}, 3D registration \cite{Briales17cvpr-registration,Maron16tog-PMSDP,Chaudhury15Jopt-multiplePointCloudRegistration,Iglesias20cvpr-PSRGlobalOptimality}, absolute pose estimation \cite{Agostinho2019arXiv-cvxpnpl}, relative pose estimation \cite{Briales18cvpr-global2view,Zhao20pami-relativepose,Garcia21IVC-certifiablerelativepose}, hand-eye calibration \cite{Heller14icra-handeyePOP,Giamou19ral-SDPExtrinsicCalibration,Wise20MFI-certifiablyhandeye}, and category-level object perception \cite{Yang20cvpr-perfectshape,Shi21rss-pace}. Although the original formulations of the problems mentioned above are {nonconvex}, semidefinite relaxations at the \emph{lowest} relaxation order in the hierarchy have been shown to be exact in practical applications. Since the SDP resulting from the lowest relaxation order can usually be solved efficiently ({\emph{e.g.},\xspace} below one second) by off-the-shelf SDP solvers ({\emph{e.g.},\xspace} \scenario{SDPT3} \cite{tutuncu03MP-SDPT3}, \scenario{MOSEK}~\cite{mosek}) or by the Burer-Monteiro (B-M) low-rank factorization method \cite{Burer03mp,Boumal16nips,Rosen20wafr-scalableLowRankSDP}, both efficiency and (certifiable) optimality can be obtained. However, these successful examples of certifiable algorithms are underpinned by the restrictive assumption that \emph{the measurements are free of outliers}, which seldom holds in practice. Heuristics like \scenario{RANSAC} and \scenario{GNC} are typically used to filter out outliers, but it is precisely the use of such heuristics that breaks the optimality guarantee and makes the system prone to undetected failures. Although several works have attempted to design certifiable algorithms for \emph{outlier-robust} geometric perception~\cite{Wang13ima,Lajoie19ral-DCGM,Carlone18ral-convexHuber,Yang19iccv-quasar,Speciale17cvpr-MaxconLMI}, most approaches (i) are problem-specific, (ii) cannot tolerate high outlier rates ({\emph{e.g.},\xspace} above $70\%$) \cite{Wang13ima,Lajoie19ral-DCGM,Carlone18ral-convexHuber}, or (iii) lead to relaxations that are too large to be solved by existing SDP solvers \cite{Yang19iccv-quasar}. {\bf Contributions}.
In this paper, we propose a {general} and {scalable} framework for designing certifiable outlier-robust estimation algorithms that are empirically \emph{exact} with up to {$90\%$ outliers}, and present a fast SDP solver that can solve the tight relaxations at an unprecedented scale. We now describe our four contributions in detail. {\bf (I) {Robust estimation as polynomial optimization} (Section~\ref{sec:robustandpop})}. We investigate outlier-robust estimation with common {robust cost functions}, including truncated least squares (TLS), maximum consensus, Geman-McClure, Tukey's Biweight, L1, Huber, and Barron's adaptive kernel \cite{Barron19cvpr-adaptRobustLoss}. Our first contribution is to show that robust estimation using these costs can be equivalently reformulated as POPs, even though the robust costs themselves are not polynomials. This result is established by introducing additional variables and manipulating the original costs into polynomials. {\bf (II) {A sparse, but exact, semidefinite relaxation} (Section~\ref{sec:sdprelax})}. With the POP reformulation, it is tempting to apply the standard Lasserre's hierarchy to develop certifiable algorithms for robust estimation. Nevertheless, due to the additional variables (one or two variables per measurement), even for small estimation problems with fewer than 20 measurements, the \emph{lowest-order relaxation} can already lead to SDPs that are too large for existing SDP solvers. Therefore, our second contribution is to focus on the TLS cost and show that it allows us to exploit \emph{term sparsity} of the polynomials in the POP and design a much smaller semidefinite relaxation using \emph{basis reduction}. \revise{Although exploiting sparsity of POPs is a known idea in applied mathematics \cite{Wang21SIOPT-tssos,Lasserre06siopt-correlativesparsity}, our method is more effective than existing general-purpose techniques because it leverages the special structure of our perception problems.} Compared to the standard Lasserre's hierarchy, our sparse semidefinite relaxation leads to a $100$-fold reduction in the size of the SDP. Unfortunately, even with our sparse relaxation, solving the SDP using off-the-shelf SDP solvers (\emph{e.g.},\xspace~\scenario{MOSEK}) is still too slow, and we can only demonstrate empirical exactness of our relaxation on small estimation problems ({\emph{e.g.},\xspace} $30$ measurements). {\bf (III) {A scalable and robust SDP solver} (Section~\ref{sec:scalableopt})}. The limitations of existing SDP solvers lead to our third contribution, a scalable SDP solver that can \emph{certifiably optimally} solve robust estimation problems of moderate but realistic sizes ({\emph{e.g.},\xspace} $100$ measurements). Our solver, called \emph{SpecTrahedral pRojected gradIent Descent along vErtices} (\scenario{STRIDE}), blends fast \emph{local search} on the nonconvex POP with \emph{global descent} on the convex SDP. Specifically, {\scenario{STRIDE}} follows a globally convergent trajectory driven by a \emph{projected gradient descent method} for solving the SDP, while simultaneously probing long, but \emph{safeguarded}, \emph{rank-one} ``strides'', generated by fast nonlinear programming algorithms on the POP, to seek rapid descent. Notably, fast heuristics such as {\scenario{RANSAC}} and {\scenario{GNC}} can be readily used to bootstrap {\scenario{STRIDE}}.
In particular, when {\scenario{RANSAC}} and {\scenario{GNC}} succeed in finding the globally optimal solution (which happens frequently in the low-outlier regime), {\scenario{STRIDE}} serves to certify global optimality. Otherwise, when the fast heuristics converge to local minima, {\scenario{STRIDE}} detects the suboptimality and escapes such minima. {\bf (IV) {Evaluation on six geometric perception problems} (Section~\ref{sec:experiments})}. Our last contribution is to apply our framework and solver to six perception problems: single and multiple rotation averaging, point cloud and mesh registration, absolute pose estimation, and category-level object perception. With extensive experiments on synthetic and real datasets, we demonstrate that (i)~our sparse SDP relaxation is exact in the presence of up to $60\%$--$90\%$ outliers, (ii)~while still being far from real-time, {\scenario{STRIDE}} is up to 100 times faster than existing SDP solvers on medium-scale problems, and is the only solver that can solve large-scale SDPs with hundreds of thousands of constraints to high accuracy, and (iii)~{\scenario{STRIDE}} safeguards existing fast heuristics, \emph{i.e.},\xspace it certifies global optimality if the heuristic estimates are already optimal, or detects and escapes local minima otherwise. We showcase real examples of {\scenario{STRIDE}} \emph{certifiably} performing scan matching on \scenario{3DMatch} \cite{Zeng17cvpr-3dmatch}, mesh registration on \scenario{HomebrewedDB} \cite{Kaskman19-homebrewedDB}, satellite pose estimation on \scenario{SPEED} \cite{Sharma19arxiv-SPEED}, and vehicle pose and shape estimation on \scenario{ApolloScape} \cite{Wang19pami-apolloscape}. {\bf Novelty with respect to~\cite{Yang20neurips-onering,Yang21arxiv-stride}}. This paper extends and unifies the contributions presented in our previous conference papers~\cite{Yang20neurips-onering,Yang21arxiv-stride}. More in detail, we expand on~\cite{Yang20neurips-onering} by (i)~showing that other robust costs (beyond TLS) can be rephrased as POPs, (ii)~providing a more extensive comparison between (and discussion about) Lasserre's hierarchy and the proposed sparse relaxations, (iii)~going beyond certification (in this paper we propose a \emph{solver}, rather than a certification approach), and (iv)~considering a broader set of applications. We also extend~\cite{Yang21arxiv-stride}, which introduced \scenario{STRIDE}, by (i)~generalizing \scenario{STRIDE} to work on multi-block SDPs arising from the proposed relaxations, (ii)~tailoring \scenario{STRIDE} to use fast heuristics (\emph{e.g.},\xspace~\scenario{RANSAC} or \scenario{GNC}) as a warmstart, and (iii)~testing \scenario{STRIDE} on a broader range of problems. We remark that the main goal of this paper is \emph{not} to produce a method that outperforms problem-specific state-of-the-art algorithms in terms of robustness or efficiency. Our key contribution is instead to show that a broad class of robust estimation problems in geometric perception can be solved to certifiable optimality in polynomial time (despite their hardness), and to lay out a scalable framework to build SDP relaxations that we believe ---with further advancement of SDP solvers--- will eventually run in real time. \section{{\sf STRIDE}: Scalable SDP Solver} \label{sec:scalableopt} The sparse relaxation \eqref{eq:sparserelax} leads to an SDP in which $m$ can still be as large as hundreds of thousands when $N$ is large (\emph{cf.}\xspace Fig.~\ref{fig:LASvsSSR}).
Therefore, with IPMs such as \scenario{MOSEK}, the scale at which \eqref{eq:sparserelax} can be solved is still quite limited (recall that IPMs can typically handle $m$ up to $50,000$). This section presents \scenario{STRIDE} (\emph{SpecTrahedral pRojected gradIent Descent along vErtices}), an SDP solver that goes far beyond IPMs and enables solving~\eqref{eq:sparserelax} for problems of moderate but realistic size. {\bf Intuition}. The key insight behind {\scenario{STRIDE}} comes from Theorem~\ref{thm:sparserelaxtls}(ii): if the relaxation \eqref{eq:sparserelax} is exact, then the SDP \eqref{eq:primalSDP} admits \emph{rank-one} optimal solutions $\MX^\star_v = \boldsymbol{v}(\tldvxx^\star)\boldsymbol{v}(\tldvxx^\star)^{\mathsf{T}}$, where $\tldvxx^\star = (\vxx^{\star},{\vtheta}^{\star})$ corresponds to the global minimizer of \eqref{eq:binaryTLS}. Therefore, {\scenario{STRIDE}} tries to move between rank-one matrices in the feasible set of the SDP (these are the \emph{vertices} of the spectrahedron \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry}), searching for a globally optimal solution. More in detail, {\scenario{STRIDE}} employs a globally convergent \revise{\emph{projected gradient descent} (PGD)} method as the backbone for solving the convex SDP \eqref{eq:sparserelax}, but blends \emph{short} \revise{PGD} steps with \emph{long} rank-one steps generated by fast NLP algorithms on the POP~\eqref{eq:binaryTLS}. Intuitively, the long rank-one steps circumvent the slow convergence of \revise{PGD}, while the \revise{PGD} backbone allows escaping the local minima in which the NLP algorithm can get stuck. With this insight, we now develop the details of \scenario{STRIDE}. {\bf Short {PGD} step}. The backbone of {\scenario{STRIDE}} implements a {PGD} method for solving the primal SDP \eqref{eq:primalSDP}. Given an initial point $\M{X}^0 \in \mathbb{X}$, the $k$-th ($k \geq 0$) iteration of {PGD} performs \begin{equation}\label{eq:pgd} \M{X}^{k+1} = \Pi_{\calF_{\mathrm{P}}} \parentheses{\M{X}^k - \sigma_k \M{C}}, \tag{PGD} \end{equation} for a given constant $\sigma_k > 0$, where $\Pi_{\calF_{\mathrm{P}}}$ denotes the metric projection onto the spectrahedron $\calF_{\mathrm{P}} \triangleq \{\M{X} \in \mathbb{X} \mid {\cal A}(\M{X})=\boldsymbol{b}, \M{X} \succeq 0 \}$ (\emph{i.e.},\xspace the feasible set of \eqref{eq:primalSDP}). In words, the \eqref{eq:pgd} step first moves along the direction of the negative gradient with step size $\sigma_k$ (recall that the objective of \eqref{eq:primalSDP} is $\inprod{\M{C}}{\M{X}}$, whose gradient is the constant $\M{C}$), and then projects the new point $\M{X}^k - \sigma_k \M{C}$ onto the feasible set $\calF_{\mathrm{P}}$. It is well known that \eqref{eq:pgd} is guaranteed to converge to an optimal solution of \eqref{eq:primalSDP}, provided that $\sigma_{k+1} \geq \sigma_{k}, \forall k \geq 0$ (see \cite{Jiang12siopt-PGMSDP,Beck09SIIS-FISTA,Bertsekas99book-nlp}). In Supplementary Material\xspace, we show that the Lagrangian dual of the projection subproblem in \eqref{eq:pgd} can be reformulated as a \emph{smooth unconstrained optimization}, which allows solving~\eqref{eq:pgd} for large-scale problems using a limited-memory BFGS (L-BFGS) algorithm. \revise{For this reason, in \eqref{eq:strideprojection} we also output the dual optimal solution.}
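To make \eqref{eq:pgd} concrete, the following self-contained toy runs the iteration on the spectraplex $\{\M{X} \succeq 0,\ \mathrm{tr}(\M{X}) = 1\}$, for which the metric projection has a closed form (eigendecomposition followed by projection of the eigenvalues onto the unit simplex); {\scenario{STRIDE}} instead computes the projection onto the general spectrahedron $\calF_{\mathrm{P}}$ via the dual L-BFGS method mentioned above:
\begin{verbatim}
% Self-contained toy of the (PGD) iteration on the spectraplex
% F = {X psd, trace(X) = 1}: eigendecompose the gradient step, then
% project the eigenvalues onto the unit simplex. Illustration only.
n = 20; C = randn(n); C = (C + C')/2;   % constant gradient of <C,X>
X = eye(n)/n; sigma = 1;                % feasible start and step size
for k = 1:200
    [V, lam] = eig(X - sigma*C, 'vector');     % gradient step, then eig
    mu  = sort(lam, 'descend');                % simplex projection:
    rho = find(mu - (cumsum(mu) - 1)./(1:n)' > 0, 1, 'last');
    tau = (sum(mu(1:rho)) - 1)/rho;
    X   = V*diag(max(lam - tau, 0))*V';        % projected eigenvalues
end
% X is now close to the rank-one minimizer vv' of <C,X> over F, where
% v is the unit eigenvector of C for its smallest eigenvalue.
\end{verbatim}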
{\bf Long rank-one step}. The issue with \eqref{eq:pgd} is that its convergence can be slow, particularly when the optimal $\MX^\star$ is rank-one and degenerate (as in \eqref{eq:sparserelax}). Here we propose to exploit the low-rankness of $\MX^\star$ and accelerate the convergence by generating long rank-one steps. Towards this goal, writing $\overline{\MX}^{k+1} := \Pi_{\calF_{\mathrm{P}}}(\M{X}^k - \sigma_k \M{C})$ and denoting by $\overline{\MX}_v^{k+1} \in \psd{n_1}$ the first block of $\overline{\MX}^{k+1}$ (\emph{i.e.},\xspace the moment matrix), we compute a potentially better rank-one iterate via three steps: \begin{enumerate}[label=(\roman*)] \item\label{item:rounding} {\bf (Rounding)}. Let $\overline{\MX}^{k+1}_v = \sum_{i=1}^{n_1} \lambda_i \boldsymbol{v}_i \boldsymbol{v}_i^{\mathsf{T}}$ be the spectral decomposition of $\overline{\MX}^{k+1}_v$, with eigenvalues $\lambda_1 \geq \dots \geq \lambda_{n_1}$ in nonincreasing order. Compute $r$ hypotheses from the leading $r$ eigenvectors \begin{eqnarray} \label{eq:roundingrestate} (\widebar{\vxx}^{k+1}_{i},\widebar{\vtheta}^{k+1}_i) = \texttt{rounding}(\boldsymbol{v}_i), \quad i = 1,\dots,r, \end{eqnarray} where the function~\texttt{rounding}~is defined as in \eqref{eq:rounding}. \item {\bf (Local search)}. Apply a local search method for \eqref{eq:binaryTLS} using NLP with the initial point chosen as $ (\widebar{\vxx}^{k+1}_{i},\widebar{\vtheta}^{k+1}_i) $ for each $i=1,\dots,r$. Denoting the solution of each local search by $ (\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}_i^{k+1}) $, with associated objective value $p(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}_i^{k+1})$, choose the best local solution, \emph{i.e.},\xspace the one with \emph{minimum} objective value. Formally, \begin{subequations} \begin{eqnarray} \hspace{-3mm} (\widehat{\vxx}_i^{k+1},\widehat{\vtheta}_i^{k+1}) =&\!\!\!\! \texttt{nlp}(\widebar{\vxx}^{k+1}_i,\widebar{\vtheta}^{k+1}_i),\ \ i=1,\dots,r, \label{eq:nlpinlocalsearch}\\ \hspace{-3mm} (\widehat{\vxx}^{k+1},\widehat{\vtheta}^{k+1}) =&\!\!\!\! \displaystyle \argmin_{(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}^{k+1}_i), i=1,\dots,r} p(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}^{k+1}_i). \end{eqnarray} \end{subequations} \item\label{item:lifting} {\bf (Lifting)}. Perform a rank-one lifting of the best local solution $\widetilde{\vxx}^{k+1} \triangleq (\widehat{\vxx}^{k+1},\widehat{\vtheta}^{k+1})$: \begin{subequations}\label{eq:lifting} \begin{eqnarray} \hspace{-3mm} \widehat{\MX}^{k+1}_v =&\!\!\! \boldsymbol{v}(\widetilde{\vxx}^{k+1}) \boldsymbol{v}(\widetilde{\vxx}^{k+1})^{\mathsf{T}}, \ \ (\emph{cf.}\xspace\ \eqref{eq:sparsebasis}) \\ \hspace{-3mm} \widehat{\MX}^{k+1}_{g_j} =&\!\!\! \M{X}^{k+1}_{g_j} (\widetilde{\vxx}^{k+1}), \ \ j = 1,\dots,l_g, \ \ (\emph{cf.}\xspace\ \eqref{eq:liftPSDsubblks}) \\ \hspace{-3mm} \widehat{\MX}^{k+1} = &\!\!\! (\widehat{\MX}^{k+1}_v,\dots,\widehat{\MX}^{k+1}_{g_j},\dots)_{j=1}^{l_g}, \end{eqnarray} \end{subequations} where $\widehat{\MX}^{k+1}$ and $\widehat{\MX}^{k+1}_{g_j},j=1,\dots,l_g$, are computed by \emph{evaluating} the moment and localizing matrices at $\widetilde{\vxx}^{k+1}$. \end{enumerate} {\bf Taking the right step}. We are now given two candidates for the next iterate, namely the short {PGD} step $\overline{\MX}^{k+1}$ (generated by projecting $\M{X}^k - \sigma_k \M{C}$ onto $\calF_{\mathrm{P}}$) and the long rank-one step $\widehat{\MX}^{k+1}$ (obtained by rounding, local search, and lifting). Which one should we choose as the next iterate $\M{X}^{k+1}$ so that the entire sequence $\{\M{X}^k\}$ is globally convergent? The answer is quite natural: we accept $\widehat{\MX}^{k+1}$ if and only if it attains a strictly lower cost than $\overline{\MX}^{k+1}$ (\emph{cf.}\xspace eq.~\eqref{eq:accept-reject}).
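For concreteness, a minimal sketch of the rounding step \ref{item:rounding} on a synthetic, near-rank-one moment block reads as follows (the projection of the extracted $\vxx$ onto ${\cal X}$ is omitted for brevity):
\begin{verbatim}
% Minimal sketch of the rounding step (eq:roundingrestate) on a synthetic
% near-rank-one moment block, using the monomial order of (eq:sparsebasis);
% the projection of xhat onto the feasible set X is omitted for brevity.
d = 3; N = 5;
x = randn(d,1); th = sign(randn(N,1));
v = [1; x; th; kron(th, x)];                % groundtruth lifted vector
Xbar = v*v' + 1e-3*randn((1+d)*(1+N));      % noisy moment-matrix block
Xbar = (Xbar + Xbar')/2;
[V, lam] = eig(Xbar, 'vector');
[~, ord] = sort(lam, 'descend');            % leading eigenvector first
v1 = V(:, ord(1)); v1 = v1/v1(1);           % normalize leading entry to 1
xhat     = v1(1+(1:d));                     % estimate of x (project on X)
thetahat = sign(v1(1+d+(1:N)));             % binaries rounded to {-1,+1}
\end{verbatim}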
\input{sections/alg-stride} The full {\scenario{STRIDE}} algorithm is presented in Algorithm~\ref{alg-iPGMnlp}. \begin{theorem}[Global Convergence]\label{thm:strideconverge} Suppose the Slater condition for \eqref{eq:primalSDP} holds and let $\{ (\M{X}^k,\boldsymbol{y}^k,\M{S}^k) \}$ be generated by {\scenario{STRIDE}}; then $\{f(\M{X}^k) \}$ converges to $f^\star$, the optimum of \eqref{eq:primalSDP}. \end{theorem} While we provide the proof in Supplementary Material\xspace, the intuition is that eq.~\eqref{eq:accept-reject} ensures the rank-one ``strides'' are accepted only if they strictly decrease the objective value. Therefore, either the last rank-one point is already optimal, or ---if it is suboptimal--- it still provides an improved reinitialization from which \eqref{eq:pgd} converges globally to the optimal $\MX^\star$. Note that the \revise{PGD} backbone allows {\scenario{STRIDE}} to converge even when the optimal solution has rank higher than one. \revise{In \cite{Yang21arxiv-stride}, we show that it is also possible to \emph{accelerate} and \emph{generalize} the \eqref{eq:pgd} backbone using \emph{proximal gradient methods}.} Although {\scenario{STRIDE}} is a globally convergent algorithm for solving the primal SDP \eqref{eq:primalSDP}, the initial guess $(\M{X}^0,\M{S}^0,\boldsymbol{y}^0)$ can have a significant impact on its convergence speed. The next remark states that existing fast heuristics for robust perception can be readily incorporated into {\scenario{STRIDE}}. \begin{remark}[Fast Heuristics and Certification] \label{remark:fastheuristics} Existing fast heuristics for robust estimation, such as graduated non-convexity (\scenario{GNC}) \cite{Yang20ral-gnc,Black96ijcv-unification} and {\scenario{RANSAC}} \cite{Fischler81}, can typically return the \emph{globally optimal} solution to \eqref{eq:binaryTLS} when the measurement set ${\cal Z}$ contains a low or medium portion of outliers (\emph{e.g.},\xspace below $70\%$). Therefore, we use {\scenario{GNC}} or {\scenario{RANSAC}} to generate an initial guess for the SDP relaxation \eqref{eq:sparserelax}. Formally, calling $(\widehat{\vxx},\widehat{\vtheta})$ the candidate solution obtained by solving \eqref{eq:binaryTLS} with {\scenario{GNC}} or {\scenario{RANSAC}}, we generate $\M{X}^0$ (for {\scenario{STRIDE}}) by applying the lifting procedure in \eqref{eq:lifting} to $(\widehat{\vxx},\widehat{\vtheta})$. Notably, when $(\widehat{\vxx},\widehat{\vtheta})$ is already globally optimal for \eqref{eq:binaryTLS} (hence $\M{X}^0$ is an optimizer of \eqref{eq:sparserelax} as long as the relaxation is exact), {\scenario{STRIDE}} merely needs to find a \emph{certificate of optimality} for $(\widehat{\vxx},\widehat{\vtheta})$ by performing one step of \eqref{eq:pgd} (\emph{cf.}\xspace \eqref{eq:strideprojection} in Algorithm \ref{alg-iPGMnlp}). \end{remark} Fast heuristics provide a good \emph{primal} initialization for {\scenario{STRIDE}}. However, it is much less obvious how to obtain a good \emph{dual} initialization. In {Supplementary Material\xspace}, we describe a dual initialization procedure that exploits \emph{correlative sparsity} \cite{Wang21siopt-chordaltssos} and leverages a fast first-order algorithm called \emph{semi-proximal ADMM} (also known as {\scenario{ADMM+}}) \cite{Sun15siopt-admmplus}.
We also give more implementation details about how to use Riemannian optimization to perform the local search. \section{Proof of Theorem~\ref{thm:strideconverge}} \begin{proof} Let ${\cal V} = \{ \widehat{\MX}^{k(i)} \}$ be the sequence of all the $\widehat{\MX}$ that have been accepted due to \eqref{eq:accept-reject}, where $k(i)$ returns the iteration index of the $i$-th element of ${\cal V}$. If ${\cal V} = \emptyset$, then {\scenario{STRIDE}} reduces to \eqref{eq:pgd} and is globally convergent. If ${\cal V} \neq \emptyset$, then we claim that ${\cal V}$ must be finite. Note that, for any two consecutive elements $\widehat{\MX}^{k(i)}$ and $\widehat{\MX}^{k(i+1)}$ of ${\cal V}$, we have \begin{eqnarray} \label{eq:strictdescent} f(\widehat{\MX}^{k(i+1)}) \leq& f(\overline{\MX}^{k(i+1)}) - \epsilon \nonumber \\ <& f(\M{X}^{k(i+1) - 1}) - \epsilon \leq f(\widehat{\MX}^{k(i)}) - \epsilon, \end{eqnarray} where the first inequality is due to \eqref{eq:accept-reject}, the second inequality is due to \eqref{eq:strideprojection} and the fact that projected gradient descent must strictly decrease the objective value when optimality has not been achieved \cite[Proposition 3.4.1]{Bertsekas99book-nlp}, and the last inequality holds because $k(i+1) - 1 \geq k(i)$. Eq.~\eqref{eq:strictdescent} states that the objective value must decrease by at least $\epsilon$ from one element of ${\cal V}$ to the next. Therefore, we have $f_{\min}({\cal V}) \leq f_{\max}({\cal V}) - (|{\cal V}|-1) \epsilon$, where $f_{\min}$ and $f_{\max}$ are the minimum and maximum objective values along ${\cal V}$. Hence $|{\cal V}|$ must be finite; otherwise $f^\star$ would be unbounded below, contradicting Slater's condition and strong duality. Let $\widehat{\MX}^{k(|{\cal V}|)}$ be the last element of ${\cal V}$; then {\scenario{STRIDE}} reduces to \eqref{eq:pgd} with a new initial point $\widehat{\MX}^{k(|{\cal V}|)}$ and is globally convergent. \end{proof} \section{Further Reduction on Multiple Rotation Averaging (Example \ref{ex:multirotation})} \label{sec:app-reduce-mra} Recall that in multiple rotation averaging we are given a graph ${\cal G} = ({\cal V},{\cal E})$ with vertex set ${\cal V} = [n]$ and edge set ${\cal E}$. Each vertex $i \in {\cal V}$ is associated with an unknown rotation $\M{R}_i \in \mathrm{SO}(\dimrot)$, and each edge $(i,j) \in {\cal E}$ provides a relative measurement $\widetilde{\MR}_{ij}$ between the unknown rotations $\M{R}_i$ and $\M{R}_j$ at vertices $i$ and $j$. Let ${\cal R}$ be the set of edges whose relative measurements are known to be free of outliers (\emph{e.g.},\xspace odometry measurements in SLAM), and let ${\cal Z} = {\cal E} \setminus {\cal R}$ be the set of edges whose measurements may be corrupted by outliers (\emph{e.g.},\xspace loop closures in SLAM). If no edge set is known to be free of outliers, then we set ${\cal R} = \emptyset$. We now present a further reduction for multiple rotation averaging. Let us denote by ${\cal V}_{{\cal Z}} \triangleq \{ i \in {\cal V} \mid \exists j \in {\cal V}, (i,j) \in {\cal Z} \} \subseteq {\cal V}$ the subset of nodes that are attached to at least one edge in ${\cal Z}$. Note that typically $\abs{{\cal V}_{\cal Z}} \ll n$ in SLAM applications (\emph{i.e.},\xspace these are the nodes at which loop closures occur).
For each edge $(i,j) \in {\cal Z}$, we define its \emph{depth-$\zeta$ neighbor set}, for $\zeta \in \mathbb{Z}_{+}$, in the following recursive manner: \begin{eqnarray} {\cal V}_{(i,j)}^0 \triangleq \{ i,j \}, \quad {\cal V}_{(i,j)}^{\zeta} \triangleq \{k \in {\cal V} \mid \exists\, l \in {\cal V}_{(i,j)}^{\zeta-1}, (k,l) \in {\cal E} \}, \end{eqnarray} where one can see that ${\cal V}_{(i,j)}^{\zeta}$ (for $\zeta \geq 1$) is essentially the union of the $\zeta$-hop neighbor set of node $i$ with the $\zeta$-hop neighbor set of node $j$. It is easy to see that ${\cal V}_{(i,j)}^\zeta = {\cal V}, \forall (i,j) \in {\cal Z}$, when $\zeta$ is sufficiently large, as long as the graph ${\cal G}$ is connected. With ${\cal V}_{\cal Z}$ and ${\cal V}_{(i,j)}^\zeta$, for each edge $(i,j) \in {\cal Z}$ we define \begin{eqnarray} \vxx_{(i,j)}^\zeta \triangleq \{\M{R}_k \mid k \in {\cal V}_{(i,j)}^\zeta \cap {\cal V}_{\cal Z} \} \supseteq \{\M{R}_i, \M{R}_j \} \end{eqnarray} as the set of node-wise rotations in ${\cal V}_{\cal Z}$ that are attached to $(i,j)$ within depth $\zeta$. By definition, ${\cal V}_{(i,j)}^\zeta \cap {\cal V}_{\cal Z}$ must contain nodes $i$ and $j$, and hence $\vxx_{(i,j)}^\zeta$ contains at least the two rotations attached to the edge $(i,j)$. We now replace the sparse basis in \eqref{eq:sparsebasis} with \begin{eqnarray} & \boldsymbol{v}(\widetilde{\vxx}) = [1 \,;\, \vxx \,;\, \boldsymbol{\theta} \,;\, \dots \,;\, \theta_{ij} \vxx_{(i,j)}^\zeta \,;\, \dots ]_{(i,j) \in {\cal Z}} \in \Real{n_1}, \nonumber \\ & 1+2n+5N \leq n_1 \leq 1+2n+ N(1+ 2\abs{{\cal V}_{\cal Z}}), \end{eqnarray} and use it to generate the semidefinite relaxation \eqref{eq:sparserelax}. It is worth noting that our relaxation recovers the hand-crafted SDP relaxation in \cite{Lajoie19ral-DCGM} with the choice $\zeta = 0$, which is shown to be \emph{inexact} when the outlier rate is around $50\%$. In Section \ref{sec:experiments}, we show that, with a larger $\zeta$, we can achieve an exact relaxation in the presence of over $70\%$ outliers. \section{Experiments} \label{sec:experiments} In this section, we test the sparse relaxation \eqref{eq:sparserelax} and the SDP solver {\scenario{STRIDE}} on Examples \ref{ex:singlerotation}-\ref{ex:category} using both synthetic and real data \revise{(we defer the results for Example \ref{ex:mesh}, mesh registration, to {Supplementary Material\xspace} due to space constraints)}. The goal of our experiments is not to claim state-of-the-art efficiency or robustness (\emph{e.g.},\xspace against problem-specific implementations), but rather to show that \eqref{eq:sparserelax} and {\scenario{STRIDE}}, for the first time, provide a general framework to solve large-scale nonconvex outlier-robust perception problems to certifiable global optimality within reasonable computation time. We believe that with the advancement of SDP solvers, our framework will eventually run in real time. {\bf Baselines}. We use two state-of-the-art SDP solvers, {\scenario{MOSEK}} \cite{mosek} and {\scenario{SDPNAL+}} \cite{Yang2015mpc-sdpnalplus}, as baseline solvers to compare against {\scenario{STRIDE}}. We omit {\scenario{MOSEK}} whenever the SDP becomes too large for it to solve (\emph{i.e.},\xspace when $m > 50,000$). We use default settings for both {\scenario{MOSEK}} and {\scenario{SDPNAL+}}. {\bf {\scenario{STRIDE}}'s settings}.
In Algorithm \ref{alg-iPGMnlp}, we choose $\texttt{tol}\!=\!1\ee{-6}$, $r\!=\!3$, $\epsilon\!=\!1\ee{-12}$, $\sigma_k\!=\!10,\forall k$, and run it for a maximum of $5$ iterations. As described in Remark \ref{remark:fastheuristics}, we use {\scenario{GNC}} or {\scenario{RANSAC}} to initialize the primal variable, and {\scenario{ADMM+}} to initialize the dual variable. The local search is performed using {\scenario{Manopt}} with a trust-region solver. Details about the local search and {\scenario{ADMM+}} can be found in Supplementary Material\xspace. {\bf Evaluation metrics}. Let $(\widehat{\MX},\widehat{\vy},\widehat{\MS})$ be the solution of \eqref{eq:sparserelax} returned by an SDP solver and $(\widehat{\vxx},\widehat{\vtheta})$ be the corresponding rounded solution of \eqref{eq:binaryTLS}. We evaluate the performance of the solver using four metrics: (i)~the estimation errors of $\widehat{\vxx}$ with respect to the groundtruth, whenever the groundtruth is available; (ii)~the SDP solution quality, using the maximum KKT residual $\eta_{\max}$ from \eqref{eq:KKTresiduals}; (iii)~the certified suboptimality, using the {\texttt{rounding}} procedure in \eqref{eq:rounding} and the relative suboptimality measure $\eta_s$ in \eqref{eq:subopt} (we deem a rounded solution globally optimal if $\eta_s < 1\ee{-3}$); and (iv)~the solver CPU time in seconds. {For simulation experiments, statistics are computed over $20$ Monte Carlo runs per setup.} {\bf Hardware}. Experiments are performed on a Linux PC with a 12-core Intel i9-7920X CPU at 2.90GHz and 128GB RAM. \subsection{Single Rotation Averaging} \input{sections/fig-sra} {\bf Setup}. In each Monte Carlo run, we first randomly generate a groundtruth 3D rotation $\MR^{\circ}$; then inliers are generated by $\MR_{\mathrm{in}} = \MR^{\circ} \MR_{\varepsilon}$, where the inlier noise $\MR_{\varepsilon}$ is generated by randomly sampling a rotation axis and a rotation angle $\varepsilon \sim {\cal N}(0,\sigma^2)$ with $\sigma = 5^{\circ}$; outliers are arbitrary random rotations. We test two setups with $N=30$ and $N=100$. At $N=30$, we sweep the outlier ratio from $0\%$ to $90\%$, while at $N=100$, we sweep the outlier ratio up to $95\%$. {\bf Results}. Fig. \ref{fig:exp-sra-results}(a)-(b) plot the evaluation metrics for $N=30$ and $N=100$, respectively. We make the following observations. (i) Our sparse relaxation \eqref{eq:sparserelax} is exact with up to $90\%$ outliers when $N=30$ and up to $95\%$ outliers when $N=100$ (the suboptimality $\eta_s$ is below $1\ee{-3}$ in all test runs). (ii) For $N=30$, {\scenario{STRIDE}} solves the SDP to an accuracy comparable to {\scenario{MOSEK}} (\emph{cf.}\xspace the $\eta_{\max}$ plot), but is about $100$ (and up to $270$) times faster (\emph{cf.}\xspace the time plot). (iii) For $N=100$, {\scenario{MOSEK}} can no longer run. While {\scenario{SDPNAL+}} still runs, its accuracy is at least five orders of magnitude worse than {\scenario{STRIDE}}'s (\emph{cf.}\xspace the $\eta_{\max}$ plot, where {\scenario{STRIDE}} attains $1\ee{-8}$ accuracy, but {\scenario{SDPNAL+}} only attains $1\ee{-3}$ accuracy), and its runtime is about $40$ times longer than {\scenario{STRIDE}}'s. (iv) {\scenario{STRIDE}} safeguards {\scenario{GNC}}.
While {\scenario{GNC}} is used to initialize {\scenario{STRIDE}}, {\scenario{STRIDE}} can \emph{certify} the global optimality of {\scenario{GNC}} and escape its local minima (\emph{e.g.},\xspace at the $80\%$ outlier rate in the rotation error plot, while {\scenario{GNC}} fails many times, the solution of {\scenario{STRIDE}} is always correct and optimal). (v) When the outlier rate is too high, global optimality does not necessarily imply a correct estimate (in the sense of being close to the groundtruth). For example, at the $90\%$ outlier rate with $N=30$, {\scenario{STRIDE}} and {\scenario{MOSEK}} both obtain certifiable optimality ($\eta_s = 0$), but the rotation error can be quite large (about $100^{\circ}$). Similarly, at the $95\%$ outlier rate with $N=100$, the optimal estimate obtained by {\scenario{STRIDE}} also has large rotation errors. For further discussion of this point, we refer the reader to the notion of \emph{estimation contract} in~\cite{Yang20tro-teaser}, which ties the number of inliers to the accuracy of the optimal solution of \eqref{eq:binaryTLS} w.r.t.\xspace the groundtruth, and reports estimation contracts for a 3D registration problem. \subsection{Multiple Rotation Averaging} \input{sections/fig-mra} {\bf Setup}. We test 2D multiple rotation averaging in a SLAM setting, where a robot traverses a trajectory following a 2D grid pattern (\emph{e.g.},\xspace Fig. \ref{fig:applications}(b) shows a $3\times 3$ grid) with both odometry measurements (between consecutive nodes) and loop closures. We assume the odometry measurements are outlier-free (\emph{i.e.},\xspace we include them in the function $\psi(\vxx)$) and only the loop closures may be corrupted by outliers, as in~\cite{Yang20ral-gnc}. Inlier relative rotations are generated by $\MR_{\mathrm{in}} = \MR^{\circ} \MR_{\varepsilon}$, where $\MR^{\circ} = \M{R}_i^{\mathsf{T}} \M{R}_j$ is the groundtruth relative rotation between nodes $(i,j)$ and $\MR_{\varepsilon}$ is a random 2D rotation with angle $\varepsilon \sim {\cal N}(0,\sigma^2)$ ($\sigma=0.6^{\circ}$). Outlier relative rotations are arbitrary 2D rotations. We test two cases with increasing outlier rates: a $10\times 10$ grid with $N=10$ loop closures, and a $20 \times 20$ grid with $N=20$ loop closures. {\bf Results}. Fig. \ref{fig:exp-mra-results}(a)-(b) plot the evaluation metrics for both cases. We make the following observations. (i) For the $10 \times 10$ grid with $N=10$, our relaxation is always exact, with up to $80\%$ outliers. In this case, {\scenario{STRIDE}} can solve the SDP to an accuracy comparable to {\scenario{MOSEK}}, while being about $20$ times faster (and up to $40$ times faster). {\scenario{SDPNAL+}}, unfortunately, completely fails on this problem; therefore, we did not run {\scenario{SDPNAL+}} on the more challenging $20 \times 20$ grid. (ii) For the $20\times 20$ grid with $N=20$, our relaxation is also almost always exact, with up to $80\%$ outliers. However, there exist 1-2 runs per outlier rate where {\scenario{STRIDE}} fails to obtain $\eta_s < 1\ee{-3}$; in such cases, we suspect the relaxation is inexact. \subsection{Point Cloud Registration} \input{sections/fig-pcr} {\bf Setup}. We first sample a random set of 3D points $\{ \boldsymbol{p}_i\}_{i=1}^N$, where each $\boldsymbol{p}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$. Then we generate a random rotation and translation $(\MR^{\circ},\vt^{\circ})$ such that $\norm{\vt^{\circ}} \leq T = 10$. Using $(\MR^{\circ},\vt^{\circ})$, we generate $\{\boldsymbol{q}_i\}_{i=1}^N$ by $\boldsymbol{q}_i = \MR^{\circ} \boldsymbol{p}_i + \vt^{\circ} + \boldsymbol{\varepsilon}_i$ ($\boldsymbol{\varepsilon}_i \sim {\cal N}({\mathbf 0},0.01^2 {\mathbf I}_3)$) if $\boldsymbol{q}_i$ is an inlier, or by $\boldsymbol{q}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ if $\boldsymbol{q}_i$ is an outlier. We test $N = 20$ and $N=100$.
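In Matlab, this data-generation protocol reads as follows (a sketch; the $70\%$ outlier rate shown is one of the tested values):
\begin{verbatim}
% Sketch of the synthetic registration data described above.
N = 100; T = 10; outrate = 0.7;
P = randn(3,N);                                   % points p_i ~ N(0,I_3)
[Q,~] = qr(randn(3)); Rgt = Q*diag([1 1 det(Q)]); % random rotation R
tdir = randn(3,1);
tgt  = (T*rand)*tdir/norm(tdir);                  % translation, |t| <= T
Qc = Rgt*P + tgt + 0.01*randn(3,N);               % inlier q_i
out = rand(1,N) < outrate;                        % outlier mask
Qc(:,out) = randn(3,sum(out));                    % outlier q_i ~ N(0,I_3)
\end{verbatim}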
{\bf Results}. Fig. \ref{fig:exp-pcr-results}(a)-(b) plot the evaluation metrics for $N=20$ and $N=100$, respectively. We make the following observations. (i) When $N=20$, our relaxation is tight with up to $80\%$ outlier correspondences. Both {\scenario{MOSEK}} and {\scenario{STRIDE}} can obtain a certifiably optimal solution, except that {\scenario{STRIDE}} failed once to attain sufficient accuracy (within $5$ iterations) at the $80\%$ outlier rate.\footnote{Consistent with \cite{Yang20neurips-onering}, we empirically noticed that the relaxation breaks earlier when fewer measurements are available. We remark that the formulation considered in this paper is more challenging than the rotation-only version in~\cite{Yang19iccv-quasar}, which remains tight at $90\%$ outliers.} However, {\scenario{STRIDE}} is about $5$ times faster than {\scenario{MOSEK}}. {\scenario{SDPNAL+}} completely fails on this problem. (ii) When $N=100$, our relaxation is exact with up to $90\%$ outliers, and {\scenario{STRIDE}} is the only solver that can certify exactness. At the $90\%$ outlier rate, {\scenario{STRIDE}} certified global optimality in $17$ runs, while failing to do so in $3$ runs. (iii) {\scenario{STRIDE}} can certify the success of {\scenario{GNC}} and escape local minima when {\scenario{GNC}} fails (\emph{e.g.},\xspace at $60$-$80\%$ when $N=20$ and at $90\%$ when $N=100$). {\bf Scan matching on {\scenario{3DMatch}}}. To showcase the practical value of {\scenario{STRIDE}}, we perform scan matching using the {\scenario{3DMatch}} test data \cite{Zeng17cvpr-3dmatch}. We use {\scenario{FPFH}} \cite{Rusu09icra-fast3Dkeypoints} to generate putative feature matches, followed by {\scenario{ROBIN}} \cite{Shi21icra-robin} to filter out gross outliers. The result of {\scenario{FPFH}} and {\scenario{ROBIN}} is typically a set of sparse keypoint matches with only a few outliers. We then use {\scenario{STRIDE}} to \emph{certifiably} estimate the rigid transformation. Fig. \ref{fig:exp-pcr-results}(c)-(d) visualize two examples where {\scenario{STRIDE}} returns certified globally optimal estimates ($\eta_s < 1\ee{-6}$). More examples are provided in {Supplementary Material\xspace}. \subsection{Absolute Pose Estimation} \input{sections/fig-ape} {\bf Setup}. We first generate a set of random 3D points $\{ \boldsymbol{p}_i \}_{i=1}^N$ centered at zero. We then generate a random pose $(\MR^{\circ},\vt^{\circ})$ such that $\norm{\vt^{\circ}} \leq T=10$ and $\vt^{\circ}$ lies inside the camera FOV cone ${\cal C}_\alpha$ with $\alpha = \frac{\pi}{2}$. Using $(\MR^{\circ},\vt^{\circ})$ and $\{ \boldsymbol{p}_i \}_{i=1}^N$, we generate 2D keypoints by projecting the transformed 3D points onto the imaging plane, \emph{i.e.},\xspace $\boldsymbol{v}_i = {\cal P}(\MR^{\circ} \boldsymbol{p}_i + \vt^{\circ})$, where ${\cal P}: \Real{3} \rightarrow \Real{2}$ is defined as ${\cal P}(\va) = [a_1/a_3 \,;\, a_2/a_3]$. We then generate the inlier bearing vectors from the 2D keypoints by $\boldsymbol{u}_i = \texttt{normalize}([\boldsymbol{v}_i + \boldsymbol{\varepsilon}_i \,;\, 1])$, where $\boldsymbol{\varepsilon}_i \sim {\cal N}({\mathbf 0},0.001^2 {\mathbf I}_2)$ is random 2D Gaussian noise. For outliers, we generate $\boldsymbol{u}_i$ as random unit vectors inside the FOV cone. We test $N=20$ and $N=100$ with increasing outlier rates. We use the {\scenario{RANSAC}} implementation in the Matlab function \texttt{estimateWorldCameraPose} to initialize {\scenario{STRIDE}}.
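A sketch of this generation protocol is given below; the placement of $\vt^{\circ}$ along the optical axis and the inlier-only sampling are illustrative simplifications:
\begin{verbatim}
% Sketch of the bearing-vector generation described above; placing t on
% the optical axis (inside the FOV cone) is an illustrative choice.
N = 100; sig = 0.001;
Ppts = randn(3,N); Ppts = Ppts - mean(Ppts,2);    % zero-centered p_i
[Q,~] = qr(randn(3)); Rgt = Q*diag([1 1 det(Q)]);
tgt = [0; 0; 5];                                  % inside the FOV cone
A = Rgt*Ppts + tgt;                               % transformed points
V2 = A(1:2,:)./A(3,:);                            % 2D keypoints P(a)
U = [V2 + sig*randn(2,N); ones(1,N)];
U = U./vecnorm(U);                                % inlier bearing vectors
\end{verbatim}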
{\bf Results}. Fig. \ref{fig:exp-ape-results}(a)-(b) plot the evaluation metrics. We make the following observations. (i) When $N=20$, our relaxation is exact with up to $60\%$ outliers. At the $70\%$ outlier rate, even if {\scenario{MOSEK}} solves the SDP to high accuracy, since the solution is not rank one, the rounding procedure obtains a pose estimate that is far from the groundtruth. (ii) When $N=100$, our relaxation becomes mostly tight at the $70\%$ outlier rate, which suggests that increasing the total number of matches could lead to a tighter relaxation. {\bf Satellite pose estimation on {\scenario{SPEED}}}. We showcase {\scenario{STRIDE}} on satellite pose estimation using the {\scenario{SPEED}} dataset \cite{Sharma19arxiv-SPEED}. We use the 3D satellite model provided in \cite{Chen19ICCVW-satellitePoseEstimation} (with $N=11$ keypoints) and spoil the groundtruth 2D keypoints with outliers. Fig. \ref{fig:exp-ape-results}(c) shows four examples with 2-5 outliers, where {\scenario{STRIDE}} obtains accurate pose estimates with certified global optimality in less than one minute. More examples are provided in {Supplementary Material\xspace}. \subsection{Category-Level Object Perception} \input{sections/fig-catreg} {\bf Setup}. We use the ``\emph{car}'' category from the {\scenario{PASCAL3D+}} dataset \cite{Xiang2014WACV-PASCAL+} for simulation experiments, which contains $N=12$ keypoints with $K=9$ basis shapes. We generate an unknown instance of the category by sampling a random vector of shape coefficients $\vc^{\circ} \in \mathbb{R}^K_{+}$ such that $\sum_{k=1}^K c_k^{\circ} = 1$ and using $\vc^{\circ}$ to linearly combine the $K$ basis shapes. We then add random Gaussian noise (with standard deviation $0.01$) to the new instance and transform it with a random rigid transformation $(\MR^{\circ},\vt^{\circ})$ with $\norm{\vt^{\circ}} \leq T = 10$. We test increasing outlier rates up to $60\%$, with 20 runs per outlier rate. We use a regularization parameter $\lambda = 1$. {\bf Results}. Fig. \ref{fig:exp-catreg-results}(a) plots the evaluation metrics: (i) our relaxation is exact with up to $60\%$ outliers; (ii) {\scenario{STRIDE}} can certify the global optimality of {\scenario{GNC}} and escape its local minima; (iii) {\scenario{STRIDE}} is about $10$ times faster than {\scenario{MOSEK}}. {\bf Vehicle pose and shape estimation on {\scenario{ApolloScape}}}. We use {\scenario{STRIDE}} to jointly estimate the pose and shape of an unknown vehicle from the {\scenario{ApolloScape}} self-driving dataset \cite{Wang19pami-apolloscape}. We use a set of $K=5$ basis shapes, each with $N=66$ annotated 3D keypoints. Given a 2D image depicting an unknown vehicle, we use the pretrained {\scenario{GSNet}} \cite{Ke20-gsnet} to detect 2D keypoints of the unknown vehicle with groundtruth depth (the same setup as one of the tests in \cite{Shi21rss-pace}).
\ref{fig:exp-catreg-results}(b-1) shows four examples where {\scenario{STRIDE}} certified the global optimality of solutions returned by {\scenario{GNC}} ($\eta_s = 1.5\ee{-7},1.3\ee{-9},1.4\ee{-10},1.6\ee{-9}$), and Fig. \ref{fig:exp-catreg-results}(b-2) shows two examples where {\scenario{STRIDE}} escapes the suboptimal solutions returned by {\scenario{GNC}} and finds the certified globally optimal solutions ($\eta_s = 3.2\ee{-4},4.6\ee{-4}$). More examples are provided in {Supplementary Material\xspace}.

\subsection{Summary and Discussion}

Table \ref{table:overalltiming} summarizes the timing results of {\scenario{STRIDE}}, compared with {\scenario{MOSEK}}, for all six problems. We make a few comments. (i) {\scenario{STRIDE}} is able to solve problems far beyond the reach of {\scenario{MOSEK}} (in fact, the SDPs solved in this paper are among the largest in the semidefinite programming literature). (ii) When fast heuristics converge to the globally optimal solution, {\scenario{STRIDE}} only needs to perform optimality certification and can be $2$--$5$ times faster (\emph{cf.}\xspace~{\scenario{STRIDE}} (Certify) vs. {\scenario{STRIDE}} (Escape)). (iii) For problems of similar sizes (in terms of $n_1$ and $m$), the speed of {\scenario{STRIDE}} can be \emph{application-dependent} (\emph{e.g.},\xspace~{\scenario{STRIDE}} is much faster in single rotation averaging than in the other applications). This suggests that relaxations of different applications lead to SDP problems of \emph{drastically different geometry}. Understanding this geometry and leveraging new tools to further speed up the computation is an exciting research avenue. For example, it could be promising to use data-driven methods to ``learn'' the geometry of different problems and generate high-quality initializations.

\input{sections/table-timing}

\section{Sparse Semidefinite Relaxation} \label{sec:sdprelax}

In the previous section, we showed how to rephrase the TLS cost as a nonconvex polynomial optimization in $\widetilde{\vxx} \triangleq [\vxx \,;\, \boldsymbol{\theta}] \in \Real{d+N}$. The goal of this section is to design algorithms that can solve~\eqref{eq:binaryTLS} to certifiable global optimality.

{\bf Can we just use Lasserre's hierarchy?} Before introducing our sparse semidefinite relaxation, let us attempt to apply the dense Lasserre's hierarchy \eqref{eq:lasserre} to~\eqref{eq:binaryTLS}. We know that the objective in \eqref{eq:binaryTLS} has degree $3$,\footnote{The residuals $r^2(\vxx,\boldsymbol{z}_i)$ are quadratic by Proposition \ref{prop:polynomialExpressibility}, hence the terms $\theta_i r^2(\vxx,\boldsymbol{z}_i)$ in the objective of \eqref{eq:binaryTLS} are cubic.} thus $\kappa\geq2$ is needed for \eqref{eq:lasserre}. In fact, as we have shown in \cite{Yang20neurips-onering}, \eqref{eq:lasserre} at $\kappa=2$ is empirically exact (on small problem instances). However, as we can see from Examples \ref{ex:singlerotation}-\ref{ex:category}, the problems we care about have minimum $d=9$ (a 3D rotation in Example \ref{ex:singlerotation}) and maximum $d=9n$ ($n$ 3D rotations in Example \ref{ex:multirotation}) with $n$ being as large as a few hundred, and meanwhile it is desirable to handle $N=100$ measurements. Choosing $d=10,N=100,\kappa=2$, the SDP resulting from the dense relaxation \eqref{eq:lasserre} has $n_1 = 6216$ and $m_{\mathrm{mom}} = 12{,}649{,}561$; when $d=100,N=100,\kappa=2$, such an SDP would have $n_1 \approx 2 \times 10^4$ and $m_{\mathrm{mom}} \approx 1.4\times 10^8$.
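These counts follow from $n_1 = \nchoosek{d+N+\kappa}{\kappa}$ and $m_{\mathrm{mom}} = \mathfrak{t}(n_1) - \nchoosek{d+N+2\kappa}{2\kappa} + 1$ (the formulas of Section \ref{sec:pre-pop}, applied to the $d+N$ variables in $\widetilde{\vxx}$), and can be reproduced with a few lines of Python (our sketch):
\begin{verbatim}
from math import comb

def t(n):
    # t(n) = n(n+1)/2: number of distinct entries of an n x n symmetric matrix
    return n * (n + 1) // 2

def dense_sizes(d, N, kappa=2):
    n1 = comb(d + N + kappa, kappa)  # size of the dense moment matrix
    m_mom = t(n1) - comb(d + N + 2 * kappa, 2 * kappa) + 1
    return n1, m_mom

print(dense_sizes(10, 100))   # (6216, 12649561)
print(dense_sizes(100, 100))  # (20301, 136016701), i.e., about 2e4 and 1.4e8
\end{verbatim}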
In both cases, it is hopeless to solve the resulting SDPs using existing solvers.

{\bf Sparse semidefinite relaxation (SSR)}. Now we present a semidefinite relaxation that is much more scalable than \eqref{eq:lasserre}. Note that the fundamental reason why \eqref{eq:lasserre} leads to an intractable SDP is the use of the \emph{dense} monomial basis $[\widetilde{\vxx}]_\kappa$ for building the moment matrix $\M{X}_\kappa$. Although the full set of monomials $[\widetilde{\vxx}]_\kappa$ is necessary when the polynomials $p,h_i,g_j$ contain all monomials up to degree $2\kappa$, in practice $p,h_i,g_j$ are almost always \emph{sparse} (\emph{i.e.},\xspace they include only a small set of monomials). Therefore, the crux of our semidefinite relaxation is to construct a {sparse} set of monomials that results in a much smaller moment matrix. Towards this, we analyze the sparsity of the objective and constraint polynomials in~\eqref{eq:binaryTLS} and observe that they contain only three types of monomials:
\begin{enumerate}[label=(\roman*)]
\item \label{item:tlsmono1} $[\vxx]_2$, coming from $r^2$ and $\psi$ in the objective, and from the polynomials defining the feasible set ${\cal X}$ (\emph{cf.}\xspace Proposition \ref{prop:polynomialExpressibility});
\item \label{item:tlsmono2} $\theta_i \cdot [\vxx]_2, i=1,\dots,N$, coming from $\theta_i r^2$ and $\theta_i$ in the objective for $i=1,\dots,N$; and
\item \label{item:tlsmono3} $\theta_i^2, i=1,\dots,N$, coming from the equality constraints $\theta_i^2-1=0$ for $i=1,\dots,N$.
\end{enumerate}
Therefore, it is easy to see that, with the Kronecker product denoted by ``$\otimes$'', choosing the sparse basis
\begin{eqnarray}\label{eq:sparsebasis}
\boldsymbol{v}(\widetilde{\vxx}) \triangleq [1 \,;\, \vxx \,;\, \boldsymbol{\theta} \,;\, \boldsymbol{\theta} \otimes \vxx] \in \Real{n_1},\ n_1 \triangleq (1+d)(1+N)
\end{eqnarray}
leads to the following moment matrix
\begin{eqnarray} \label{eq:sparsemomentmat}
\M{X}_v \triangleq \boldsymbol{v} \boldsymbol{v}^{\mathsf{T}}\!\! =\!\! \left[\begin{array}{cccc} 1 & \vxx^{\mathsf{T}} & \boldsymbol{\theta}^{\mathsf{T}} & \boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}} \\ \vxx & \vxx \vxx^{\mathsf{T}} & \vxx\boldsymbol{\theta}^{\mathsf{T}} &\!\!\!\! \vxx (\boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}})\!\!\!\! \\ \boldsymbol{\theta} & \boldsymbol{\theta} \vxx^{\mathsf{T}} & \boldsymbol{\theta} \boldsymbol{\theta}^{\mathsf{T}} &\!\!\!\! \boldsymbol{\theta} (\boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}})\!\!\!\! \\ \!\!\!\boldsymbol{\theta} \otimes \vxx\! &\!\!\! (\boldsymbol{\theta} \otimes \vxx)\vxx^{\mathsf{T}}\!\!\! &\!\!\! (\boldsymbol{\theta} \otimes \vxx) \boldsymbol{\theta}^{\mathsf{T}}\!\! &\!\!\!\! \boldsymbol{\theta}\vtheta^{\mathsf{T}} \otimes \vxx\vxx^{\mathsf{T}}\!\!\!\! \end{array}\right]
\end{eqnarray}
that contains all three types of monomials ($[\vxx]_2$, $\theta_i \cdot [\vxx]_2$, and $\theta_i^2$) in \ref{item:tlsmono1}-\ref{item:tlsmono3}. Therefore, \emph{we can write the objective and constraint polynomials in \eqref{eq:binaryTLS} as linear functions of the smaller moment matrix~\eqref{eq:sparsemomentmat}.} Clearly, the advantage is that the size of the moment matrix is now $(1+d)(1+N)$, which is much smaller than the $\nchoosek{d+N+\kappa}{\kappa}$ (for $\kappa=2$) of Lasserre's hierarchy. Now we can formulate our sparse relaxation using $\M{X}_v$ in~\eqref{eq:sparsemomentmat}, following the same procedure as in Section \ref{sec:pre-pop}.
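Before walking through steps (i)-(iii) below, the following small Python sketch (ours) forms the sparse basis and moment matrix numerically and confirms the size $n_1=(1+d)(1+N)$, \emph{e.g.},\xspace $n_1 = 1010$ for Example \ref{ex:singlerotation} with $d=9$ and $N=100$ (versus $\nchoosek{111}{2} = 6105$ for the dense basis at $\kappa=2$):
\begin{verbatim}
import numpy as np

def sparse_basis(x, theta):
    # v(x~) = [1 ; x ; theta ; kron(theta, x)]
    return np.concatenate(([1.0], x, theta, np.kron(theta, x)))

d, N = 9, 100
rng = np.random.default_rng(0)
x, theta = rng.normal(size=d), np.sign(rng.normal(size=N))
v = sparse_basis(x, theta)
assert v.size == (1 + d) * (1 + N)   # n1 = 1010
X_v = np.outer(v, v)                 # the sparse moment matrix X_v
\end{verbatim}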
\emph{(i) Rewriting \eqref{eq:binaryTLS} using the sparse moment matrix $\M{X}_v$}. Because the sparse moment matrix $\M{X}_v$ contains all monomials appearing in the objective and constraint polynomials of \eqref{eq:binaryTLS}, we can write them as linear functions of $\M{X}_v$. For example, the objective can be written as $\inprod{\M{C}_1}{\M{X}_v}$.

\emph{(ii) Relaxing the rank-$1$ constraint on $\M{X}_v$}. By construction, $\M{X}_v$ belongs to the set of rank-one positive semidefinite matrices. Since the rank constraint is non-convex, we drop it and only enforce $\M{X}_v$ to be positive semidefinite: $\M{X}_v \succeq 0$.

\emph{(iii) Adding redundant constraints}. First, similar to the dense relaxation case, some monomials repeat themselves at multiple entries of $\M{X}_v$. For example, in \eqref{eq:sparsemomentmat}, the ``$\boldsymbol{\theta} \otimes \vxx$'' block is the same as the ``$\boldsymbol{\theta} \vxx^{\mathsf{T}}$'' block up to a rearrangement of entries. In fact, the number of \emph{unique} monomials in $\M{X}_v$ is $m_{2v} = \mathfrak{t}(d+1)\mathfrak{t}(N+1)$, while the number of distinct entries of $\M{X}_v$ (as a symmetric matrix of size $(1+d)(1+N)$) is $\mathfrak{t}((1+d)(1+N))$. Therefore, we can add a total number of $m_{\mathrm{mom}} = \mathfrak{t}((1+d)(1+N)) - m_{2v} + 1$ \emph{moment constraints}:
\beq \begin{array}{ll}\label{eq:momentConstraintssparse} \text{\grayout{moment constraints}}:& \revise{\inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_v} = 1, } \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_v} = 0, \\ & j = 1, \ldots, m_{\mathrm{mom}}-1, \end{array} \eeq
which enforce the repeated monomials in $\M{X}_v$ to be equal to each other, as well as the leading entry $[\M{X}_v]_{11} = 1$ \revise{(similar to \eqref{eq:momentConstraints}, $\M{A}_{\mathrm{mom},0}$ is all zero except $[\M{A}_{\mathrm{mom},0}]_{11} =1$)}.
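The count $m_{2v}$ can be verified symbolically for small $d$ and $N$; the following \texttt{sympy} snippet (our illustration) enumerates the unique monomials of $\M{X}_v$:
\begin{verbatim}
import sympy as sp

d, N = 2, 3
x  = sp.symbols(f'x1:{d+1}')
th = sp.symbols(f't1:{N+1}')
v = [sp.Integer(1), *x, *th, *[tj * xi for tj in th for xi in x]]  # v(x~)
monomials = {a * b for a in v for b in v}      # unique monomials in X_v = v v^T
t = lambda n: n * (n + 1) // 2
assert len(monomials) == t(d + 1) * t(N + 1)   # m_2v = t(d+1) t(N+1) = 60
\end{verbatim}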
Second, we add redundant equality constraints. For each equality constraint $h_i$ in \eqref{eq:binaryTLS}, we denote by $[\widetilde{\vxx}]_{h_i}$ the largest set of unique monomials such that $h_i \cdot [\widetilde{\vxx}]_{h_i}$ only contains monomials in $\M{X}_v$. Formally,
\begin{eqnarray} [\widetilde{\vxx}]_{h_i} \triangleq \{\widetilde{\vxx}^{\boldsymbol{\alpha}} \mid \mono{ h_i \cdot \widetilde{\vxx}^{\boldsymbol{\alpha}} } \subseteq \mono{\M{X}_v} \}, \label{eq:liftequalities} \end{eqnarray}
where $\mono{\cdot}$ returns the set of unique monomials of a polynomial (or of a matrix of polynomials). Consequently, we can write $h_i \cdot [\widetilde{\vxx}]_{h_i} = {\mathbf 0}$ as linear equalities in $\M{X}_v$:
\beq \begin{array}{ll}\label{eq:redundantEqualityConstraintssparse} \hspace{-3mm} \text{\grayout{(redundant) equality constraints}}: \inprod{\M{A}_{\mathrm{req},ij}}{\M{X}_v} = 0, \\ \quad\quad \quad i = 1, \ldots, l_h,\ \ j = 1, \ldots, \abs{[\widetilde{\vxx}]_{h_i}}. \end{array} \eeq
Note that since each $[\widetilde{\vxx}]_{h_i}$ must include the monomial ``1'', eq.~\eqref{eq:redundantEqualityConstraintssparse} includes the original equality constraints $h_i$ of \eqref{eq:binaryTLS}. Finally, for each inequality constraint $g_j$ (recall $\deg{g_j} \leq 2$ by Proposition \ref{prop:polynomialExpressibility}), we denote by $[\M{X}_1]_{{\cal I}_j,{\cal I}_j}$ the largest principal submatrix of $\M{X}_1$ (\emph{i.e.},\xspace of the order-one full moment matrix) such that $g_j \cdot [\M{X}_1]_{{\cal I}_j,{\cal I}_j}$ only contains monomials in $\M{X}_v$. Formally, the indices ${\cal I}_j$ are selected as:
\begin{eqnarray} & \hspace{-6mm} {\cal I}_j = \displaystyle \argmax_{{\cal J}} \{ \abs{{\cal J}} \mid \mono{ g_j\! \cdot\! [\M{X}_1]_{{\cal J},{\cal J}} } \subseteq \mono{\M{X}_v} \}. \label{eq:liftPSDsubblks} \end{eqnarray}
As a result, setting $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j,{\cal I}_j}$, which is positive semidefinite by construction, we can write down the following localizing matrices and constraints:
\beq \begin{array}{ll}\label{eq:locMatricessparse} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_g \end{array} \eeq
\beq \begin{array}{ll} \label{eq:localizingConstraintssparse} \text{\grayout{{localizing} constraints}}: \inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_v} = [\M{X}_{g_j}]_{hk} \\ \quad\quad\quad j = 1, \ldots, l_g, \ \ 1 \leq h\leq k \leq \abs{{\cal I}_j}, \end{array} \eeq
where the linear constraints in \eqref{eq:localizingConstraintssparse} simply enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_v$.

Steps (i)-(iii) above lead to the following SDP:
\begin{equation}\label{eq:sparserelax} \hspace{-3mm} f^\star =\!\! \min_{\M{X} = (\M{X}_v, \M{X}_1,\dots,\M{X}_{l_g})} \cbrace{\inprod{\M{C}_1}{\M{X}_v} \mid {\cal A}(\M{X})\! =\! \boldsymbol{b}, \M{X} \succeq 0}\!,\!\!\! \tag{SSR} \end{equation}
where we have shorthanded $\M{X}_j = \M{X}_{g_j}$ for notational convenience, and ${\cal A}(\M{X})=\boldsymbol{b}$ collects all the linear equality constraints in \eqref{eq:momentConstraintssparse}, \eqref{eq:redundantEqualityConstraintssparse}, and \eqref{eq:localizingConstraintssparse}. Similar to Theorem \ref{thm:lasserre} for \eqref{eq:lasserre}, we have the following result for \eqref{eq:sparserelax} on certifiable global optimality.

\begin{theorem}[Sparse Semidefinite Relaxation for \eqref{eq:binaryTLS}] \label{thm:sparserelaxtls} Denote by $p(\vxx,\boldsymbol{\theta})$ the objective function of \eqref{eq:binaryTLS}, by $p^\star$ the optimum of \eqref{eq:binaryTLS}, and by $f^\star$ \revise{(resp. $\MX^\star_v$)} the optimum \revise{(resp. one optimizer)} of \eqref{eq:sparserelax}. Then:
\begin{enumerate}[label=(\roman*)]
\item (lower bound) $f^\star \leq p^\star$;
\item (rank-one solutions) if $f^\star = p^\star$, then for each global minimizer $\tldvxx^\star = (\vxx^{\star},{\vtheta}^{\star})$ of \eqref{eq:binaryTLS}, its rank-one lifting $\M{X}_v = \boldsymbol{v} (\tldvxx^\star) \boldsymbol{v} (\tldvxx^\star)^{\mathsf{T}}$ is optimal for \eqref{eq:sparserelax}, \revise{and every rank-one optimal solution $\MX^\star_v$ of \eqref{eq:sparserelax} can be written as $\boldsymbol{v} (\tldvxx^\star) \boldsymbol{v} (\tldvxx^\star)^{\mathsf{T}}$ for some $\tldvxx^\star$ that is optimal for \eqref{eq:binaryTLS}};
\item \revise{(optimality certificate) if $\rank{\MX^\star_v} = 1$, then $f^\star = p^\star$.}
\end{enumerate}
\end{theorem}

Theorem \ref{thm:sparserelaxtls} states that \eqref{eq:sparserelax} is a relaxation of \eqref{eq:binaryTLS}, \revise{and that solving the convex SDP \eqref{eq:sparserelax} provides a certificate for the exactness of the relaxation whenever the rank of the optimal solution $\MX^\star_v$ equals one. In practice, rank computation can be subject to numerical inaccuracies (\emph{e.g.},\xspace it can be difficult to decide if the relaxation is exact when the second largest eigenvalue is, say, $10^{-3}$).
Therefore, we now introduce a continuous metric for evaluating the exactness of the relaxation (which also applies to the dense relaxation \eqref{eq:lasserre}). }

{\bf Relative suboptimality}. \revise{Assume $\MX^\star_v$ is an optimal solution of \eqref{eq:sparserelax} and let $\boldsymbol{v}$ be an eigenvector corresponding to the maximum eigenvalue of $\MX^\star_v$ (if the maximum eigenvalue has multiplicity larger than one, any of the corresponding eigenvectors can be chosen). Define the rounding function $(\widehat{\vxx},\widehat{\vtheta}) = {\texttt{rounding}}(\boldsymbol{v})$, which returns from $\boldsymbol{v}$ a \emph{feasible} solution to \eqref{eq:binaryTLS}, as}
\begin{equation}\label{eq:rounding} \boldsymbol{v} \leftarrow \boldsymbol{v} / \boldsymbol{v}_1,\ \widehat{\vxx} = \Pi_{{\cal X}} (\boldsymbol{v}_{x}),\ \widehat{\vtheta} = \mathrm{sgn}\parentheses{\boldsymbol{v}_{\theta}}, \end{equation}
where $\boldsymbol{v}_{x}$ (resp. $\boldsymbol{v}_{1},\boldsymbol{v}_{\theta}$) takes the entries of $\boldsymbol{v}$ corresponding to the monomials $\vxx$ (resp. $1,\boldsymbol{\theta}$) in \eqref{eq:sparsebasis}, $\mathrm{sgn}(a)$ returns the sign of a scalar ``$a$'', and $\Pi_{{\cal X}}$ denotes the projection onto the set ${\cal X}$.\footnote{For our Examples \ref{ex:singlerotation}-\ref{ex:category}, the feasible set ${\cal X}$ includes $\mathrm{SO}(\dimrot)$, whose projection can be performed in closed form, and $\calB^3_T$, ${\cal C}_\alpha$, $\calB^3_T \cap {\cal C}_\alpha$, $\calB^K_T \cap \mathbb{R}^K_{+}$, all of which are \emph{low-dimensional convex} sets whose projections can be computed to arbitrary accuracy using standard convex solvers. Therefore, the {\texttt{rounding}} procedure~\eqref{eq:rounding} can be done efficiently.} Denoting by $\widehat{p} \triangleq p(\widehat{\vxx},\widehat{\vtheta})$ the cost attained by the rounded solution, we have $f^\star \leq p^\star \leq \widehat{p}$. Moreover, we can compute the \emph{relative suboptimality} of the rounded solution $(\widehat{\vxx},\widehat{\vtheta})$
\begin{eqnarray} \label{eq:subopt} \eta_s \triangleq \abs{f^\star - \widehat{p}}/ \parentheses{ 1 + \abs{f^\star} + \abs{\widehat{p}} } \end{eqnarray}
as a measure of suboptimality. \revise{Intuitively, a small relative suboptimality certifies that a solution $(\widehat{\vxx},\widehat{\vtheta})$ whose objective value is \emph{at most} $\eta_s$ (\emph{e.g.},\xspace $0.1\%$) away from the unknown global optimum has been found.} Evidently, $\eta_s = 0$ implies that $(\widehat{\vxx},\widehat{\vtheta})$ is optimal and \eqref{eq:sparserelax} is exact. \revise{In fact, for \emph{any feasible solution} $(\widehat{\vxx},\widehat{\vtheta})$, not necessarily obtained from the SDP solution $\MX^\star_v$, we can evaluate $\widehat{p} = p(\widehat{\vxx},\widehat{\vtheta})$ and compute the relative suboptimality of the given feasible solution using \eqref{eq:subopt}. Similarly, if $\eta_s = 0$ is attained at any feasible solution, we can certify the exactness of the relaxation and the global optimality of that feasible solution.} As an advanced reading, in Supplementary Material\xspace, we discuss how to compute a relative suboptimality measure that is not sensitive to potential numerical inaccuracies in the computation of $f^\star$ \revise{(as mentioned in Section \ref{sec:pre-sdp}, it can be challenging to compute $f^\star$ to high accuracy for large-scale SDPs)}.
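In code, the rounding~\eqref{eq:rounding} and the relative suboptimality~\eqref{eq:subopt} take only a few lines; the sketch below (ours) assumes a projection oracle \texttt{project\_X} onto ${\cal X}$ and a callable \texttt{p} evaluating the objective of \eqref{eq:binaryTLS}:
\begin{verbatim}
import numpy as np

def round_and_certify(X_v, f_star, d, N, project_X, p):
    w, V = np.linalg.eigh(X_v)
    v = V[:, -1]                             # eigenvector of the maximum eigenvalue
    v = v / v[0]                             # normalize the leading (monomial "1") entry
    x_hat = project_X(v[1:1 + d])            # entries corresponding to x
    theta_hat = np.sign(v[1 + d:1 + d + N])  # entries corresponding to theta
    p_hat = p(x_hat, theta_hat)              # cost attained by the rounded solution
    eta_s = abs(f_star - p_hat) / (1 + abs(f_star) + abs(p_hat))
    return x_hat, theta_hat, eta_s
\end{verbatim}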
{\bf Scalability improvement}. Table \ref{table:LASvsSSR} compares the size of the SDP from our sparse relaxation \eqref{eq:sparserelax} with that from the standard Lasserre's hierarchy \eqref{eq:lasserre}, in terms of the size of the largest positive semidefinite block $n_1$ and the number of moment constraints $m_{\mathrm{mom}}$ (in our problems, over $60\%$ of the equality constraints are moment constraints, hence $m_{\mathrm{mom}}$ is representative of the size of the SDP). For illustration purposes, Fig. \ref{fig:LASvsSSR} plots $n_1$ and $m_{\mathrm{mom}}$ as $N$ increases from $20$ to $200$, when applying \eqref{eq:lasserre} and \eqref{eq:sparserelax} to Example \ref{ex:singlerotation} ($d=9$). We observe a drastic reduction in both $n_1$ and $m_{\mathrm{mom}}$ when using \eqref{eq:sparserelax}. Notably, when $N=200$, $n_1 > 20,000$ and $m_{\mathrm{mom}} > 100,000,000$ if using \eqref{eq:lasserre}, while $n_1 \approx 2,000$ and $m_{\mathrm{mom}} \approx 1,000,000$ if using \eqref{eq:sparserelax}: a roughly $10\times$ reduction in $n_1$ and a $100\times$ reduction in $m_{\mathrm{mom}}$. Certainly, such a scalability improvement would be meaningless if \eqref{eq:sparserelax} were \emph{inexact} and failed to solve the original \eqref{eq:binaryTLS} problem to global optimality. However, as we will show in Section \ref{sec:experiments}, \eqref{eq:sparserelax} is {empirically exact across all Examples \ref{ex:singlerotation}-\ref{ex:category}, even in the presence of many outliers}.

\input{sections/table-LASvsSSR}

{\bf Further reduction on Example \ref{ex:multirotation}}. For multiple rotation averaging, the dimension of the geometric model is $d=2n$ (2D) or $d=9n$ (3D), where $n$ is the number of nodes of the graph. Practical rotation averaging problems in structure from motion and SLAM can have $n$ and $N$ ranging from a few hundred to a few thousand \cite{Rosen19IJRR-sesync,Eriksson18cvpr-strongDuality}. Taking $d=400, N=20$ leads to $m_{\mathrm{mom}}=16,842,001$, which is still too large. In Supplementary Material\xspace, we present a method to further reduce the size of the sparse monomial basis in \eqref{eq:sparsebasis}. We end this section with a remark about how to exploit sparsity while preserving exactness of the relaxation.

\begin{remark}[Exploiting Sparsity] \label{remark:sparsity} (i) A sparse relaxation can be exact only when the dense relaxation \eqref{eq:lasserre} is \revise{exact}. Therefore, we believe it is good practice to first obtain an \revise{empirically exact} relaxation using \revise{the dense hierarchy} \eqref{eq:lasserre} at a certain $\kappa$ (as we have done in \cite{Yang20neurips-onering} \revise{with extensive experimental validation}), and then try to find a sparse monomial basis at that $\kappa$. (ii) Even when the dense relaxation is exact, it is nontrivial to decide whether a sparse relaxation will be \revise{exact} without empirical evaluation. For example, replacing \eqref{eq:sparsebasis} with $\boldsymbol{v}(\widetilde{\vxx}) = [[\vxx]_2 \,;\, \boldsymbol{\theta}]$ also yields a sparse relaxation ---the corresponding moment matrix includes all monomials in \ref{item:tlsmono1}-\ref{item:tlsmono3}--- but it is far from being exact. (iii) Parallel to our work \cite{Yang20neurips-onering}, \cite{Wang21SIOPT-tssos} has presented a methodology, {\scenario{TSSOS}}, to systematically exploit term sparsity for general POPs. However, {\scenario{TSSOS}} tends to find a larger monomial basis than problem-specific techniques such as \eqref{eq:sparserelax} in this paper.
For Example \ref{ex:absolutepose} with $N=10$ measurements, the dense monomial basis has dimension $276$ and our sparse basis \eqref{eq:sparsebasis} has dimension $143$, while {\scenario{TSSOS}} with ``maximal chordal extension'' finds a sparse basis that has dimension $246$ and is a strict superset of \eqref{eq:sparsebasis}. \end{remark}

\section{Proof of Theorem~\ref{thm:sparserelaxtls}} \label{sec:app-proof-theorem-sparserelax}

\begin{proof} (i): Every feasible point $\widetilde{\vxx}=(\vxx,\boldsymbol{\theta}) \in {\cal X} \times \{\pm 1\}^N$ of \eqref{eq:binaryTLS} leads to a rank-one lifting $\boldsymbol{v}(\widetilde{\vxx})\boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}}$ that is feasible for \eqref{eq:sparserelax}. Therefore, the lifting of the feasible set of \eqref{eq:binaryTLS} is a subset of the feasible set of \eqref{eq:sparserelax}, and hence $f^\star \leq p^\star$.

\revise{ (ii) \& (iii): Assume $f^\star = p^\star$ and let $\tldvxx^\star$ be any global minimizer of \eqref{eq:binaryTLS}, so that $p(\tldvxx^\star) = p^\star$. Because $\M{X}_v = \boldsymbol{v}(\tldvxx^\star)\boldsymbol{v}(\tldvxx^\star)^{\mathsf{T}}$ is a rank-one lifting of $\tldvxx^\star$, we have that $\M{X}_v$ is feasible for the SDP \eqref{eq:sparserelax} and attains $f(\M{X}_v) = p^\star = f^\star$. Therefore $\M{X}_v$ (together with its corresponding localizing matrices $\M{X}_1,\dots,\M{X}_{l_g}$) is optimal for \eqref{eq:sparserelax}.

Now we prove that if an optimal SDP solution $\MX^\star_v$ has rank one, then the relaxation is exact and a global optimizer can be extracted from $\MX^\star_v$. Towards this goal, first observe that since $\rank{\MX^\star_v} = 1$, $\MX^\star_v \succeq 0$, and $[\MX^\star_v]_{11} = 1$ (because $\MX^\star_v$ is a feasible point of \eqref{eq:sparserelax}, which requires the leading entry to be one; \emph{cf.}\xspace \eqref{eq:momentConstraintssparse}), we can perform a rank-one factorization of $\MX^\star_v$ as
\begin{eqnarray} \MX^\star_v = \left[\begin{array}{c} 1 \\ \widehat{\vxx} \\ \widehat{\vtheta} \\ \widehat{\vxx\vtheta} \end{array}\right] \left[\begin{array}{cccc} 1 & \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta}^{\mathsf{T}}\end{array}\right] \\ = \left[\begin{array}{cccc} 1 & \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vxx} & \widehat{\vxx}\lowvxx^{\mathsf{T}} & \widehat{\vxx} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx} \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vtheta} & \widehat{\vtheta} \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vtheta} \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vxx\vtheta} & \widehat{\vxx\vtheta} \widehat{\vxx}^{\mathsf{T}} & \widehat{\vxx\vtheta} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta} \widehat{\vxx\vtheta}^{\mathsf{T}} \end{array}\right], \label{eq:expandrankone} \end{eqnarray}
where $\widehat{\vxx} \in \Real{d}$, $\widehat{\vtheta} \in \Real{N}$, $\widehat{\vxx\vtheta} \in \Real{dN}$, and they correspond to the partition of the ``$\boldsymbol{v}$'' monomial basis in \eqref{eq:sparsebasis} (note that here we overload the ``$\widehat{\cdot}$'' symbol only in the context of this proof). Now we first show that $\widehat{\vxx\vtheta} = \widehat{\vtheta} \otimes \widehat{\vxx}$, \emph{i.e.},\xspace that $\widehat{\vxx\vtheta}$ collects the second-order monomials in $\widehat{\vxx}$ and $\widehat{\vtheta}$ of the form $[\widehat{\vxx}]_i [\widehat{\vtheta}]_j$ for $1\leq i \leq d$ and $1\leq j \leq N$.
This is evident once we observe that the moment constraints in \eqref{eq:momentConstraintssparse} (which ask $\MX^\star_v$ to be a valid moment matrix) force the block $\widehat{\vxx\vtheta}$ in \eqref{eq:expandrankone} to be just a rearrangement of the entries of the block $\widehat{\vxx}\widehat{\vtheta}^{\mathsf{T}}$ in \eqref{eq:expandrankone}, where the latter block contains all the second-order monomials of the form $[\widehat{\vxx}]_i [\widehat{\vtheta}]_j$. Then we show that $\widehat{\vxx} \in {\cal X}$ and $\widehat{\vtheta} \in \{+1,-1\}^N$, \emph{i.e.},\xspace that $\widehat{\vxx}$ and $\widehat{\vtheta}$ are indeed \emph{feasible} points of the original \eqref{eq:binaryTLS} problem. This is equivalent to showing that all equality constraints hold: $h_i(\widehat{\vxx},\widehat{\vtheta}) = 0,\forall i = 1,\dots,l_h$, and all inequality constraints hold: $g_j(\widehat{\vxx},\widehat{\vtheta}) \geq 0, j=1,\dots,l_g$. This follows from the fact that (a) each $h_i(\widehat{\vxx},\widehat{\vtheta}) = 0$ is enforced by one of the redundant constraints in \eqref{eq:redundantEqualityConstraintssparse}, and (b) each $g_j(\widehat{\vxx},\widehat{\vtheta}) \geq 0$ is enforced by one of the localizing constraints in \eqref{eq:localizingConstraintssparse}. At this point, we have shown that $(\widehat{\vxx},\widehat{\vtheta})$ is a feasible point of \eqref{eq:binaryTLS} that attains $p(\widehat{\vxx},\widehat{\vtheta}) = f(\MX^\star_v) = f^\star$. However, we know that $p(\widehat{\vxx},\widehat{\vtheta}) \geq p^\star$ by the nature of the minimization problem \eqref{eq:binaryTLS}. Therefore, we have
\begin{eqnarray} p^\star \leq p(\widehat{\vxx},\widehat{\vtheta}) = f(\MX^\star_v) = f^\star, \nonumber \end{eqnarray}
but $f^\star \leq p^\star$ by construction of the semidefinite relaxation. Hence $p^\star = f^\star$ and $(\widehat{\vxx},\widehat{\vtheta})$ is globally optimal for \eqref{eq:binaryTLS}. } \end{proof}

\section{Related Work} \label{sec:relatedwork}

We review related works on outlier-free and outlier-robust geometric perception, while we refer the interested reader to \cite{Yang2015mpc-sdpnalplus,Wang21SIOPT-tssos} for recent progress in semidefinite programming and semidefinite relaxations.

{\bf Outlier-free geometric perception} algorithms can be divided into \emph{minimal solvers} and \emph{non-minimal solvers}. Minimal solvers assume \emph{noiseless} measurements (\emph{i.e.},\xspace~$r(\vxx,\boldsymbol{z}_i)=0,\forall \; i$ in~\eqref{eq:robust}) and use the minimum number of measurements necessary to estimate $\vxx$, which typically leads to solving a system of polynomial equations~\cite{Kukelova2008ECCV-automaticGeneratorofMinimalProblemSolvers}. Non-minimal solvers account for measurement noise and estimate $\vxx$ via nonlinear least squares (NLS), \emph{i.e.},\xspace~$\rho(r) = r^2/\beta_i^2$ in~\eqref{eq:robust}. While in rare cases NLS problems can be solved in closed form~\cite{Horn87josa} or by solving the polynomial equations arising from the first-order optimality conditions~\cite{Kneip2014ECCV-UPnP}, in general they lead to nonconvex problems and are tackled using local solvers~\cite{Schonberger16cvpr-SfMRevisited} or exponential-time methods (\emph{e.g.},\xspace \emph{Branch and Bound}~\cite{Olsson09pami-bnbRegistration}).

\emph{Certifiable algorithms} for outlier-free perception have recently emerged as an approach to compute globally optimal NLS solutions in polynomial time.
These algorithms relax the NLS minimization into a convex optimization, using Lasserre's hierarchy of semidefinite relaxations for \emph{polynomial optimization}~\cite{lasserre10book-momentsOpt,Kahl07IJCV-GlobalOptGeometricReconstruction}. By solving the SDPs resulting from the convex relaxations, certifiable algorithms compute global solutions to NLS problems and provide a certificate of optimality, which usually depends on the rank of the SDP solution or on the duality gap. Empirically tight convex relaxations have been discovered in pose graph optimization~\cite{Carlone16TRO-planarPGO,Rosen19IJRR-sesync}, rotation averaging~\cite{Eriksson18cvpr-strongDuality,Fredriksson12accv}, triangulation~\cite{Aholt12eccv-qcqptriangulation}, 3D registration~\cite{Briales17cvpr-registration,Maron16tog-PMSDP,Chaudhury15Jopt-multiplePointCloudRegistration}, absolute pose estimation~\cite{Agostinho2019arXiv-cvxpnpl}, relative pose estimation~\cite{Briales18cvpr-global2view,Zhao20pami-relativepose}, hand-eye calibration~\cite{Heller14icra-handeyePOP}, and shape and pose estimation from 2D or 3D landmarks~\cite{Yang20cvpr-perfectshape,Shi21rss-pace}. More recently, theoretical analysis of when and why the relaxations are tight is also emerging~\cite{Carlone15icra-verification,Aholt12eccv-qcqptriangulation,Eriksson18cvpr-strongDuality,Rosen19IJRR-sesync,Cifuentes17arxiv,Zhao20pami-relativepose,Chaudhury15Jopt-multiplePointCloudRegistration,Dym17Jopt-exactPMSDP,Iglesias20cvpr-PSRGlobalOptimality,Eriksson19pami-rotavgstrongduality}. Tight relaxations also enable optimality certification (\emph{i.e.},\xspace checking if a given solution is optimal), which ---in outlier-free perception--- can sometimes be performed in closed form~\cite{Carlone16TRO-planarPGO,Eriksson18cvpr-strongDuality,Garcia21IVC-certifiablerelativepose,Boumal16nips,Burer03mp,Rosen20wafr-scalableLowRankSDP,Iglesias20cvpr-PSRGlobalOptimality}. Despite being certifiably optimal, these solvers assume that all measurements are inliers (\emph{i.e.},\xspace~have small noise), which rarely occurs in practice, and hence they give poor estimates even in the presence of a single outlier.

{\bf Outlier-robust geometric perception} algorithms can be divided into \emph{fast heuristics} and \emph{globally optimal solvers}. Two general frameworks for designing fast heuristics are \scenario{RANSAC}~\cite{Fischler81} and \emph{graduated non-convexity} (\scenario{GNC})~\cite{Black96ijcv-unification,Yang20ral-gnc,Antonante20TRO-outlier}. {\scenario{RANSAC}} robustifies minimal solvers and acts as a fast heuristic for solving \emph{consensus maximization}~\cite{Chin17slcv-maximumConsensusAdvances}, while {\scenario{GNC}} robustifies non-minimal solvers and acts as a fast heuristic for solving \emph{M-estimation} (\emph{i.e.},\xspace~using a robust cost function $\rho$ in~\eqref{eq:robust}). Local optimization is also a popular fast heuristic~\cite{Schonberger16cvpr-SfMRevisited,Agarwal13icra} for the case where an initial guess is available. Approximate but deterministic algorithms have also been designed to solve consensus maximization \cite{Le19pami-deterministicApproximateMC}. On the other hand, globally optimal solvers are typically designed using Branch and Bound~\cite{Bazin12accv-globalRotSearch,Bustos18pami-GORE,Izatt17isrr-MIPregistration,Yang2014ECCV-optimalEssentialEstimationBnBConsensusMax,Paudel15iccv-robustSOS,Li09cvpr-robustFitting,Chin15cvpr-CMTreeAstar,Li07iccv-3DRegistration}.
\emph{Certifiable outlier-robust algorithms} relax problem~\eqref{eq:robust} with a robust cost into a tight convex optimization. While certain robust costs, such as L1~\cite{Wang13ima} and Huber~\cite{Carlone18ral-robustPGO2D}, are already convex, they have low breakdown points (\emph{i.e.},\xspace they can be compromised by a single outlier~\cite{Maronna19book-robustStats}). Problem-specific certifiably robust algorithms have been proposed to deal with high-breakdown-point formulations, such as the TLS cost~\cite{Yang19rss-teaser,Yang19iccv-quasar,Lajoie19ral-DCGM}. \maybeOmit{Even optimality certification becomes harder and problem-specific in the presence of outliers, due to the lack of a closed-form characterization of the dual variables~\cite{Yang20tro-teaser}.}

\section{Extra Experimental Results} \label{sec:supp-experiments}

In this section, we report extra experimental results.

\subsection{Mesh Registration}
\input{sections/fig-mr}

\revise{ {\bf Setup}. We first simulate a random mesh by sampling a set of $N$ 3D planes $\{\boldsymbol{q}_i, \boldsymbol{v}_i\}_{i=1}^N$, where $\boldsymbol{v}_i$ is the unit normal of the plane (obtained by sampling a random 3D direction) and $\boldsymbol{q}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ is an arbitrary point on the plane. We then generate a random point on each plane via $\boldsymbol{q}_i' = \boldsymbol{q}_i + \boldsymbol{w}_i \times \boldsymbol{v}_i$, where $\boldsymbol{w}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ is a random vector and ``$\times$'' denotes the vector cross product (note that $\boldsymbol{w}_i \times \boldsymbol{v}_i$ is orthogonal to $\boldsymbol{v}_i$, so $\boldsymbol{q}_i'$ stays on the plane). After this, we generate a random groundtruth transformation $(\MR^{\circ},\vt^{\circ})$, and transform $(\boldsymbol{q}_i',\boldsymbol{v}_i)$ to obtain $\boldsymbol{p}_i = \MR^{\circ} \boldsymbol{q}_i' + \vt^{\circ} + \boldsymbol{\varepsilon}_{pi}$ and $\boldsymbol{u}_i = \texttt{normalize}(\MR^{\circ} \boldsymbol{v}_i + \boldsymbol{\varepsilon}_{ni})$ if $(\boldsymbol{p}_i,\boldsymbol{u}_i)$ is an inlier, where $\boldsymbol{\varepsilon}_{pi},\boldsymbol{\varepsilon}_{ni} \sim {\cal N}({\mathbf 0},0.01^2 {\mathbf I}_3)$ are random Gaussian noise and $\texttt{normalize}(\boldsymbol{v}) \triangleq \boldsymbol{v} / \norm{\boldsymbol{v}}$ normalizes a vector to have unit norm. If $(\boldsymbol{p}_i,\boldsymbol{u}_i)$ is an outlier, then $\boldsymbol{p}_i$ is a random 3D point and $\boldsymbol{u}_i$ is a random 3D direction. Given the mesh $\{\boldsymbol{q}_i, \boldsymbol{v}_i\}_{i=1}^N$ and the noisy point cloud with normals $\{\boldsymbol{p}_i,\boldsymbol{u}_i\}_{i=1}^N$, we seek the best transformation $(\MR^{\star},\vt^{\star})$ to \emph{align the point cloud to the mesh} using the residual defined in Example \ref{ex:mesh}. After $(\MR^{\star},\vt^{\star})$ is found, its \emph{inverse} transformation is used to compute the estimation errors w.r.t.\xspace $(\MR^{\circ},\vt^{\circ})$ (recall that $(\MR^{\circ},\vt^{\circ})$ is generated to \emph{align the mesh to the point cloud}). We test $N=20$ and $N=100$ with increasing outlier rates.

{\bf Results}. Fig.~\ref{fig:exp-mr-results}(a)-(b) plot the evaluation metrics for $N=20$ and $N=100$, respectively. The results are mostly the same as for point cloud registration in Fig. \ref{fig:exp-pcr-results}(a)-(b), except that when $N=20$, the relaxation is not always tight at $70\%$ and $80\%$ outlier rates (from the $\eta_s$ plot of {\scenario{MOSEK}} we see one inexact run at $70\%$ and three inexact runs at $80\%$).
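The plane and inlier sampling described above amount to a few lines of Python; the sketch below (ours) generates the inlier correspondences, while outliers are simply random points and directions:
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
N = 20
v = rng.normal(size=(N, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit plane normals v_i
q = rng.normal(size=(N, 3))                     # points q_i on the planes
w = rng.normal(size=(N, 3))
q_prime = q + np.cross(w, v)                    # q_i' = q_i + w_i x v_i stays on the plane
R_gt = Rotation.random(random_state=0).as_matrix()
t_gt = rng.normal(size=3)
p = q_prime @ R_gt.T + t_gt + 0.01 * rng.normal(size=(N, 3))  # inlier points p_i
u = v @ R_gt.T + 0.01 * rng.normal(size=(N, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # inlier unit normals u_i
\end{verbatim}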
{\bf Mesh registration with {\scenario{TeddyBear}}}. We perform mesh registration using the {\scenario{TeddyBear}} mesh model from the {\scenario{HomebrewedDB}} dataset \cite{Kaskman19-homebrewedDB}. From the {\scenario{TeddyBear}} mesh, we generate a noisy point cloud by densely sampling points on each face of the mesh with additive Gaussian noise, and transform the point cloud using a random rigid transformation. We use the \texttt{pcnormals} function in Matlab to estimate surface normals for each point in the point cloud. We then randomly sample $N=50$ point-to-face correspondences with outliers, and use {\scenario{STRIDE}} to estimate the rigid transformation. Fig.~\ref{fig:exp-mr-results}(c-1) shows an instance with $50\%$ outliers, where {\scenario{GNC}} successfully returns the globally optimal solution and {\scenario{STRIDE}} computes a certificate of optimality ($\eta_s = 2.5\ee{-8}$). Fig.~\ref{fig:exp-mr-results}(c-2) shows an instance with $70\%$ outliers, where {\scenario{GNC}} converges to a suboptimal solution but {\scenario{STRIDE}} escapes the local minimum and finds the globally optimal solution with a certificate of optimality ($\eta_s = 1.1\ee{-7}$). }

\subsection{Robustness of the TLS Estimator}

\revise{ Here we show that the accuracy of the TLS estimator increases with the number of inliers and is comparable with that of a least-squares solution computed from the inliers only. We perform an experiment on single rotation averaging with the outlier rate fixed at $80\%$ and the number of measurements increased from $N=30$ to $N=100$. At each $N$, we perform 20 random simulations and compute the rotation estimation error w.r.t. the groundtruth. Fig. \ref{fig:estimationcontract}(c) shows that all TLS solutions are certified as globally optimal. Fig. \ref{fig:estimationcontract}(a) shows that, as $N$ and hence the number of inliers increases, the estimation error in general decreases (both in terms of the average estimation error and of the quantiles shown by the boxplot). This demonstrates the empirical robustness of the TLS estimator against outliers and its ability to exploit information from the inliers. Using the same experimental setup, we compare the rotation error between the TLS estimator and the least squares (LS) estimator (after running TLS, we discard the measurements deemed outliers by TLS and run LS on the remaining inliers). Fig. \ref{fig:estimationcontract}(b) shows that the TLS estimator is exactly the same as the LS estimator after discarding outliers (up to numerical inaccuracies; rotation errors are shown in degrees). This demonstrates that the outliers do not affect the TLS solution, and that the TLS estimator is truly robust against outliers. }

\begin{figure*}[h] \begin{center} \begin{minipage}{\textwidth} \begin{tabular}{ccc}% \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{errR_tlsvsgt.pdf} \\ {\small (a) TLS vs. Groundtruth} \end{minipage} & \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{errR_tlsvsls.pdf} \\ {\small (b) TLS vs. LS } \end{minipage} & \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{subopt_increase_inlier.pdf} \\ {\small (c) Certified Suboptimality} \end{minipage} \end{tabular} \end{minipage} \caption{Rotation estimation error under an increasing number of measurements for single rotation averaging with $80\%$ fixed outlier rate (thus an increasing number of inliers). (a) Rotation error between the TLS estimate and the groundtruth rotation. (b) Rotation error between the TLS estimate and the least squares (LS) estimate.
(c) All TLS solutions are certified as optimal. Rotation errors are shown in degrees. \label{fig:estimationcontract}} \end{center} \end{figure*}

\subsection{Scaling the Noise Bound}

\revise{ We perform an experiment to investigate how the optimal solutions change when the noise bound $\beta$ is varied, using the single rotation averaging problem with $N=50$ measurements and $50\%$ outlier rate. Fig. \ref{fig:scaleboundsra} plots the rotation estimation error and the certified suboptimality w.r.t. different scalings of the original noise bound $\beta$ (within each random simulation the measurements are fixed; only the noise bound $\beta$ is varied). We can see that (1) there is a wide range of $\beta$ that leads to certifiably optimal and accurate estimation, and (2) when the noise bound $\beta$ is slightly perturbed (\emph{e.g.},\xspace decreased to $90\%$ or increased to $110\%$), the optimal solution remains the same for most problem instances, as shown by the similar boxplots in Fig. \ref{fig:scaleboundsra}(a) at horizontal locations $0.9$, $1$, and $1.1$ (in fact, $15$ out of the $20$ runs have exactly the same solutions). }

\begin{figure*}[h] \begin{center} \begin{minipage}{\textwidth} \centering \begin{tabular}{cc}% \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{sra_errR_scalebound.pdf} \\ {\small (a) Rotation estimation error} \end{minipage} & \begin{minipage}{6cm}% \centering% \includegraphics[width=\columnwidth]{sra_subopt_scalebound.pdf} \\ {\small (b) Certified suboptimality } \end{minipage} \end{tabular} \end{minipage} \caption{(a) Rotation estimation error and (b) certified suboptimality w.r.t. scaling of the noise bound $\beta$ in single rotation averaging with $N=50$ and $50\%$ outlier rate. \label{fig:scaleboundsra}} \end{center} \end{figure*}

\subsection{Point Cloud Registration on {\scenario{3DMatch}}}

We provide $10$ extra scan matching results by {\scenario{STRIDE}} on the {\scenario{3DMatch}} dataset \cite{Zeng17cvpr-3dmatch} in Fig. \ref{fig:supp-3dmatch}. {\scenario{STRIDE}} returned the globally optimal transformation estimates in all cases.
\input{sections/supp-fig-3dmatch}

\subsection{Absolute Pose Estimation on {\scenario{SPEED}}}

We provide extra satellite pose estimation results by {\scenario{STRIDE}} on the {\scenario{SPEED}} dataset \cite{Sharma19arxiv-SPEED} in Fig. \ref{fig:supp-speed-results}. In all six image instances with $2$-$5$ outliers, {\scenario{STRIDE}} returned accurate pose estimates with global optimality certificates.
\input{sections/supp-fig-speed}

\subsection{Vehicle Pose and Shape Estimation on {\scenario{ApolloScape}}}

We provide vehicle pose and shape estimation results by {\scenario{STRIDE}} on the {\scenario{ApolloScape}} dataset \cite{Wang19pami-apolloscape} in Fig. \ref{fig:supp-apollo}, whose first row also includes the four examples presented in Fig. \ref{fig:exp-catreg-results}(b-1). For each problem instance we provide details such as $N$, $n_1$, and $m$, as well as evaluation metrics such as the $(\M{R},\boldsymbol{t})$ errors, the relative suboptimality $\eta_s$, and {\scenario{STRIDE}}'s computation time. In all cases, {\scenario{STRIDE}} returned accurate pose and shape estimates with global optimality certificates.
\input{sections/supp-fig-apollo}

\subsection*{Notation}

{\bf Scalars, vectors, matrices}.
We use lowercase characters (\emph{e.g.},\xspace $a$) to denote real scalars, bold lowercase characters (\emph{e.g.},\xspace $\va$) for real (column) vectors, and bold uppercase characters (\emph{e.g.},\xspace $\M{A}$) for real matrices. ${\mathbf I}_d$ denotes the identity matrix of size $d \times d$, and ${\mathbf 0}$ denotes the all-zero vector or matrix. Given $\M{A} \in \Real{m \times n}$, $a_{ij}$ denotes the $(i,j)$-th entry of $\M{A}$, and $[\M{A}]_{{\cal I},{\cal J}}$ denotes the submatrix of $\M{A}$ formed by indexing rows ${\cal I} \subseteq [m]$ and columns ${\cal J} \subseteq [n]$, where $[n] \triangleq \{1,\dots,n \}$ is the set of positive integers up to $n$. For a vector $\boldsymbol{v} \in \Real{n}$, we shorthand $v_i$ for its $i$-th entry and $\boldsymbol{v}_{\cal I}$ for its entries indexed by ${\cal I} \subseteq [n]$. For $\M{A},\M{B} \in \Real{m \times n}$, $\inprod{\M{A}}{\M{B}} \triangleq \sum_{i=1}^m \sum_{j=1}^n a_{ij} b_{ij}$ denotes the usual inner product between real matrices. $\trace{\M{A}} \triangleq \sum_{i=1}^n a_{ii}$ denotes the trace of a square matrix $\M{A} \in \Real{n \times n}$. We use $\norm{\cdot}$ to denote the $\ell_2$ norm of a vector and the Frobenius norm of a matrix, \emph{i.e.},\xspace $\norm{\va} \triangleq \sqrt{\inprod{\va}{\va}}$ for any $\va \in \Real{n}$ and $\norm{\M{A}} \triangleq \sqrt{\inprod{\M{A}}{\M{A}}}$ for any $\M{A} \in \Real{m \times n}$. $\norm{\va}_1 \triangleq \sum_{i=1}^n \abs{a_i}$ denotes the $\ell_1$ norm of a vector. $[\M{A},\M{B}]$ denotes the \emph{horizontal} concatenation, while $[\M{A} \,;\, \M{B}]$ denotes the \emph{vertical} concatenation, for $\M{A},\M{B}$ of proper dimensions. For $a \in \Real{}$, the symbol $\ceil{a}$ returns the smallest integer greater than or equal to $a$.

{\bf Sets}. We use $\sym{n}$ to denote the space of $n \times n$ real symmetric matrices, and $\psd{n}$ (resp. $\pd{n}$) to denote the set of matrices in $\sym{n}$ that are \emph{positive semidefinite} (resp. positive definite). We also write $\M{X} \succeq 0$ (resp. $\M{X} \succ 0$) to indicate that $\M{X}$ is positive semidefinite (resp. positive definite). $\usphere{d-1} \triangleq \{ \boldsymbol{v} \in \Real{d} \mid \norm{\boldsymbol{v}} = 1 \}$ denotes the unit sphere in $\Real{d}$. We denote by $\ensuremath{\mathrm{SO}(d)}\xspace \triangleq \{ \M{R} \in \revise{\Real{d\times d}} \mid \M{R}^{\mathsf{T}} \M{R} = {\mathbf I}_d, \det\parentheses{\M{R}} = +1 \}$ the $d$-dimensional \emph{special orthogonal group} (rotation matrices). $\abs{{\cal A}}$ denotes the cardinality of a finite set ${\cal A}$. $\int_{+}$ (resp. $\int_{++}$) denotes the set of nonnegative (resp. positive) integers, and $\mathbb{Q}$ denotes the set of rational numbers.

\section{Preliminaries} \label{sec:preliminaries}

This section reviews key facts about multi-block semidefinite programming \cite{tutuncu03MP-SDPT3} (Section \ref{sec:pre-sdp}) and provides an introduction to polynomial optimization and Lasserre's semidefinite relaxation hierarchy~\cite{Lasserre01siopt-lasserrehierarchy} (Section \ref{sec:pre-pop}). While somewhat mathematically dense, these preliminaries are designed as a pragmatic introduction for the non-expert reader.

\subsection{Semidefinite Programming} \label{sec:pre-sdp}

A \emph{multi-block} semidefinite programming (SDP) problem is an optimization problem in the following \emph{primal} form \cite{tutuncu03MP-SDPT3}:
\begin{equation}\label{eq:primalSDP} \min_{\M{X} \in \mathbb{X}} \cbrace{\inprod{\M{C}}{\M{X}} \mid {\cal A} (\M{X}) = \boldsymbol{b},\ \M{X} \in {\cal K}} \tag{P}
\end{equation} where the variable $\M{X} = (\M{X}_1,\dots,\M{X}_l)$ is a collection of $l$ square matrices (the ``blocks'') with $\M{X}_i \in \Real{n_i \times n_i}$ for $i=1,\ldots,l$ (conveniently ordered such that $n_1\geq \dots \geq n_l$); the domain $\mathbb{X} \triangleq \sym{n_1} \times \dots \times \sym{n_l}$ restricts the matrices to be symmetric. The objective is a linear combination of the matrices in $\M{X}$, \emph{i.e.},\xspace $\inprod{\M{C}}{\M{X}} \triangleq \sum_{i=1}^l \inprod{\M{C}_i}{\M{X}_i}$ (for given matrices $\M{C}_i\in \sym{n_i}, i=1,\ldots,l$). The problem includes independent linear constraints ${\cal A} (\M{X}) = \boldsymbol{b}$ on $\M{X}$, where: \begin{equation} {\cal A} (\M{X}) \triangleq \sbracket{ \sum_{i=1}^l \inprod{\M{A}_{i1}}{\M{X}_i} \,;\,} %\ \Vert\ \dots \,;\,} %\ \Vert\ \sum_{i=1}^l \inprod{\M{A}_{im}}{\M{X}_i} } \in \Real{m} \end{equation} for given matrices $\M{A}_{ij}\!\in\!\sym{n_i}, i\!=\!1,\ldots,l$ and $j\!=\!1,\dots,m$, and $\boldsymbol{b}\!\in\!\Real{m}$ is a given vector. Finally, the constraint $\M{X}\!\in\!{\cal K}$ enforces that each matrix in $\M{X}$ is positive semidefinite (\emph{i.e.},\xspace ${\cal K}\!\triangleq\!\psd{n_1}\!\times\!\dots\!\times\!\psd{n_l}$ is a product of $l$ positive semidefinite cones). \revise{We also write $\M{X} \succeq 0$ to indicate each matrix in $\M{X}$ is positive semidefinite when $\M{X}$ is a collection of matrices (note that we need the notation $\M{X} \in {\cal K}$ for describing details of our SDP solver).} The feasible set of \eqref{eq:primalSDP}, denoted by $\calF_{\mathrm{P}}\!\triangleq\!\{\!\M{X}\!\in\!\mathbb{X}\!\mid\!{\cal A}(\M{X})\!=\!\boldsymbol{b},\M{X}\!\in\!{\cal K}\}$, \mbox{is called a \emph{spectrahedron} \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry}.} The Lagrangian \emph{dual} of \eqref{eq:primalSDP} is another multi-block SDP: \begin{equation}\label{eq:dualSDP} \max_{\boldsymbol{y} \in \Real{m}, \M{S} \in \mathbb{X}} \cbrace{\inprod{\boldsymbol{b}}{\boldsymbol{y}} \mid \calA^{*}(\boldsymbol{y}) + \M{S} = \M{C},\ \M{S} \in {\cal K}} \tag{D} \end{equation} where $\calA^{*}: \Real{m} \rightarrow \mathbb{X}$ is the adjoint of ${\cal A}$ and is defined as: \begin{equation} \label{eq:adjointAmultiblk} \calA^{*}(\boldsymbol{y}) \triangleq \left( \sum_{j=1}^m y_j \M{A}_{1j},\dots,\sum_{j=1}^m y_j \M{A}_{lj} \right) \in \mathbb{X} \end{equation} and the equality $\calA^{*}(\boldsymbol{y}) + \M{S} = \M{C}$ is enforced block-wise. Under mild assumptions (\emph{e.g.},\xspace Slater's condition \cite{Boyd04book}), \emph{strong duality} holds between \eqref{eq:primalSDP} and \eqref{eq:dualSDP} (\emph{i.e.},\xspace the minimum of \eqref{eq:primalSDP} equals the maximum of \eqref{eq:dualSDP}). In this case, $(\MX^\star,\vy^\star,\MS^\star) \in \mathbb{X} \times \Real{m} \times \mathbb{X}$ is simultaneously \emph{optimal} for \eqref{eq:primalSDP}-\eqref{eq:dualSDP} if and only if the following KKT conditions hold \beq \begin{array}{ll}\label{eq:sdpKKT} \text{\grayout{primal feasibility}}: & {\cal A}(\MX^\star) = \boldsymbol{b}, \MX^\star \in {\cal K}, \\ \text{\grayout{dual feasibility}}: & \calA^{*}(\vy^\star) + \MS^\star = \M{C}, \MS^\star \in {\cal K}, \\ \text{\grayout{complementarity}}: & \inprod{\MX^\star}{\MS^\star} = 0. 
\end{array} \eeq The KKT conditions \eqref{eq:sdpKKT} imply strong duality because \begin{equation} \begin{split} 0 & = \inprod{\MX^\star}{\MS^\star} = \inprod{\MX^\star}{\M{C} - \calA^{*}(\vy^\star)} \\ & = \inprod{\M{C}}{\MX^\star} - \inprod{{\cal A}(\MX^\star)}{\vy^\star} = \inprod{\M{C}}{\MX^\star} - \inprod{\boldsymbol{b}}{\vy^\star} . \end{split} \end{equation} Given $(\M{X},\boldsymbol{y},\M{S}) \in {\cal K} \times \Real{m} \times {\cal K}$, we measure its feasibility and optimality using the standard relative KKT residuals \beq \begin{array}{ll}\label{eq:KKTresiduals} \eta_p \triangleq & \Vert {\cal A}(\M{X}) - \boldsymbol{b} \Vert / ( 1+\norm{\boldsymbol{b}}), \\ \eta_d \triangleq & \Vert {\calA^{*}(\boldsymbol{y}) + \M{S} - \M{C}} \Vert / ( 1+\norm{\M{C}} ),\\ \eta_g \triangleq & \abs{\inprod{\M{C}}{\M{X}} - \inprod{\boldsymbol{b}}{\boldsymbol{y}} } / ( 1 + \abs{ \inprod{\M{C}}{\M{X}} } + \abs{ \inprod{\boldsymbol{b}}{\boldsymbol{y}} } ), \end{array} \eeq where $\norm{\M{X}} = \sum_{i=1}^l \norm{\M{X}_i}$ for any $\M{X} \in \mathbb{X}$. We define $\eta_{\max} \triangleq \max\{\eta_p,\eta_d,\eta_g \}$ as the \emph{maximum KKT residual}. {\bf SDP solvers}. The most robust approach for solving SDP \eqref{eq:primalSDP} (and~\eqref{eq:dualSDP}) is based on \emph{primal-dual interior point methods} (IPM) \cite{Alizadeh98siam-ipmSDP,todd1998nesterov}, \emph{e.g.},\xspace~{\scenario{SDPT3} \cite{tutuncu03MP-SDPT3} and \scenario{MOSEK} \cite{mosek}}. For problems of small to medium size (\emph{e.g.},\xspace $n_1 \leq 5000, m \leq 50,000$), IPMs can solve the SDP to arbitrary accuracy, \emph{i.e.},\xspace $\eta_{\max} < \varepsilon$ for $\varepsilon$ arbitrarily small, with a typical per-iteration complexity ${\cal O}(n_1^3 + m^2 n_1^2 + m^3)$.\footnote{${\cal O}(n_1^3)$ for spectral decomposition of dense primal and dual iterates $(\M{X},\M{S})$, ${\cal O}(m^2 n_1^2)$ for forming the Schur complement system, and ${\cal O}(m^3)$ for factorizing and solving the Schur complement system.} If each linear constraint only involves a small number of blocks (\emph{i.e.},\xspace for each $j=1,\dots,m$, $\M{A}_{ij} = {\mathbf 0}$ for many blocks $i=1,\dots,l$), then IPMs can be made much more efficient using \emph{dualization} \cite{Zhang20MP-sparseSDP}.\footnote{\revise{``Dualization'' switches the primal-dual data structure in numerical solvers (\emph{e.g.},\xspace writing the dual \eqref{eq:dualSDP} with the structure of the primal \eqref{eq:primalSDP} such that $\boldsymbol{y}$ is represented as an unconstrained cone, or difference of two nonnegative cones, with dimension $m$) \cite{lofberg09OMS-dualize}. When sparsity exists, dualization can lead to better numerical performance.}} Nevertheless, such sparsity is not always present and generally IPMs cannot solve large-scale problems on an ordinary workstation. First-order methods based on ADMM and Augmented Lagrangian, \emph{e.g.},\xspace~{\scenario{CDCS}} \cite{Zheng20MP-CDCS}, and \scenario{SDPNAL+} \cite{Yang2015mpc-sdpnalplus}, can handle large-scale problems but exhibit slow convergence, and hence can only obtain solutions of moderate accuracy. For single-block problems ($l=1$) with low-rank solutions (\emph{i.e.},\xspace $\rank{\MX^\star} \ll n_1 $) and $m = {\cal O}(n_1)$, the Burer-Monteiro (B-M) low-rank factorization method \cite{Burer03mp,Boumal16nips} is preferable. Section \ref{sec:introduction} mentioned the success of SDP relaxations in solving \emph{outlier-free} perception problems. 
This success is attributed to the following facts: (a) most of the SDPs arising in outlier-free estimation have $n_1 < 100$ and $m < 1000$, and can be solved by IPMs in less than one second; (b) although some SDPs (\emph{e.g.},\xspace~\cite{Rosen19IJRR-sesync}) can have $n_1 > 10,000$, they can be efficiently solved by B-M because the optimal solution is low-rank and $m \approx n_1$ \cite{Rosen20wafr-scalableLowRankSDP}.

{\bf Challenges}. Unfortunately, \emph{none} of the existing solvers can solve the SDPs presented in this paper to a desired accuracy. In particular, our SDPs have $n_1 < 5000$ but $m = {\cal O}(n_1^2)$ as large as a few millions, rendering IPMs and B-M factorization inapplicable. Moreover, our SDPs admit rank-one optimal solutions and are necessarily degenerate~\cite{Alizadeh97mp-nondegenerateSDP} (loosely speaking, degeneracy is a property that often leads to slower convergence in SDP solvers and prevents the application of B-M). Our previous work \cite{Yang21arxiv-stride} shows that first-order methods perform poorly on degenerate problems.

\maybeOmit{ {\bf New Frontiers}. Large-scale degenerate SDPs are an unsolved puzzle in the mathematical optimization community \cite{Yang2015mpc-sdpnalplus}. {\scenario{STRIDE}}, originally proposed in~\cite{Yang21arxiv-stride}, not only achieves strong performance on solving degenerate SDPs in certifiable outlier-robust perception in this paper, but also enables solving degenerate SDP relaxations from mathematics and machine learning that were previously deemed too difficult to be solved~\cite{Yang21arxiv-stride,Yang21report-STRIDE}.}

\subsection{Polynomial Optimization and Lasserre's Hierarchy} \label{sec:pre-pop}

{\bf Polynomial optimization}. Given $\vxx = [x_1 \,;\, x_2 \,;\, \ldots \,;\, x_d] \in \Real{d}$, a \emph{monomial} in $\vxx$ is a product of $x_i$'s with \emph{nonnegative} integer exponents, \emph{i.e.},\xspace $\vxx^{\boldsymbol{\alpha}} \triangleq x_1^{\alpha_1}\cdots x_d^{\alpha_d}$ for $\boldsymbol{\alpha} \in \int_{+}^d$ (for instance $x_1^2 x_5 x_6^3$ is a monomial). The sum of the exponents, $\norm{\boldsymbol{\alpha}}_1$, \revise{or $\inprod{ {\mathbf{1}} }{\boldsymbol{\alpha}}$,} is called the \emph{degree} of the monomial (\emph{e.g.},\xspace the monomial $x_1^2 x_5 x_6^3$ has degree $6$). A real \emph{polynomial} $p(\vxx)$ is a finite sum of monomials with real coefficients. \revise{We shorthand $p$ in place of $p(\vxx)$ when the variable $\vxx$ is clear.} The degree of a polynomial $p$, denoted by $\deg{p}$, is the \emph{maximum} degree of its monomials. The ring of real polynomials is denoted by $\polyring{\vxx}$. A standard polynomial optimization problem (POP) reads
\begin{equation}\label{eq:pop} p^\star \triangleq \min_{\vxx \in \Real{d}} \cbrace{p(\vxx) \ \middle\vert\ \substack{ \displaystyle h_i(\vxx) = 0, i=1,\dots,l_h \\ \displaystyle g_j(\vxx) \geq 0, j = 1,\dots,l_g } }, \tag{POP} \end{equation}
where $p, h_i, g_j \in \polyring{\vxx}$. Problem \eqref{eq:pop} is easily seen to be NP-hard \cite{lasserre10book-momentsOpt}, \emph{e.g.},\xspace it can model combinatorial binary constraints $x_i \in \{+1,-1\}$ via $x_i^2 - 1 = 0, i=1,\dots,d$.

{\bf Lasserre's hierarchy}. We now give a simplified (and somewhat less conventional) introduction to Lasserre's hierarchy that is sufficient for understanding our paper. For a comprehensive treatment, we refer the reader to~\cite{lasserre10book-momentsOpt}.
We define $[\vxx]_{\kappa} \triangleq \{ \vxx^{\boldsymbol{\alpha}} \mid \norm{\boldsymbol{\alpha}}_1 \!\leq \!\kappa, \boldsymbol{\alpha} \!\in\! \int_{+}^d \}$ to be the \revise{vector} of monomials of degree up to $\kappa$. For example, if $\vxx = [x_1 \,;\, x_2]$ and $\kappa=2$, then $[\vxx]_2 = [1 \,;\, x_1 \,;\, x_2 \,;\, x_1^2 \,;\, x_1 x_2 \,;\, x_2^2]$. The dimension of $[\vxx]_\kappa$ is $\binomial{d}{\kappa} \triangleq \nchoosek{d+\kappa}{\kappa}$. With $[\vxx]_\kappa$, we form the so-called \emph{moment matrix} $\M{X}_{\kappa} \triangleq [\vxx]_\kappa [\vxx]_{\kappa}^{\mathsf{T}}$. For instance, for $\vxx = [x_1 \,;\, x_2]$ and $\kappa=2$ (\emph{cf.}\xspace $[\vxx]_2$ above):
\begin{equation}\label{eq:momentMatrix} \M{X}_{\kappa} \triangleq [\vxx]_2 [\vxx]_2^{\mathsf{T}} \!=\! \small{ \left[ \begin{array}{cccccc} 1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 \\ x_1 & x_1^2 & x_1 x_2 & x_1^3 & x_1^2 x_2 & x_1 x_2^2 \\ x_2 & x_1 x_2 & x_2^2 & x_1^2 x_2 & x_1 x_2^2 & x_2^3 \\ x_1^2 & x_1^3 & x_1^2 x_2 & x_1^4 & x_1^3 x_2 & x_1^2 x_2^2 \\ x_1 x_2 & x_1^2 x_2 & x_1 x_2^2 & x_1^3 x_2 & x_1^2 x_2^2 & x_1 x_2^3 \\ x_2^2 & x_1 x_2^2 & x_2^3 & x_1^2 x_2^2 & x_1 x_2^3 & x_2^4 \end{array} \right] }. \end{equation}
By construction, $\M{X}_{\kappa} \in \psd{\binomial{d}{\kappa}}$ is positive semidefinite and has $\rank{\M{X}_{\kappa}} = 1$. Moreover, the set of \emph{unique} entries in $\M{X}_{\kappa}$ is simply $[\vxx]_{2\kappa}$, \emph{i.e.},\xspace the set of monomials of degree up to $2\kappa$ (these monomials typically appear multiple times in $\M{X}_{\kappa}$, \emph{e.g.},\xspace see $x_1 x_2$ in eq.~\eqref{eq:momentMatrix}). Therefore, a key fact is that \emph{---for a suitable matrix $\M{A}$--- the linear function $\inprod{\M{A}}{\M{X}_{\kappa}}$ can express any polynomial in $\vxx$ of degree up to $2\kappa$.}
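The repetition pattern of the entries of $\M{X}_\kappa$ is easy to inspect symbolically; the following \texttt{sympy} snippet (our illustration) confirms that the $6 \times 6$ moment matrix in \eqref{eq:momentMatrix} has exactly $\binomial{d}{2\kappa} = 15$ unique entries:
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
basis = sp.Matrix([1, x1, x2, x1**2, x1*x2, x2**2])  # [x]_2 for d = 2
X2 = basis * basis.T                                 # moment matrix, kappa = 2
unique = set(X2)                                     # unique entries = [x]_4
assert len(unique) == sp.binomial(2 + 4, 4)          # binom(d+2k, 2k) = 15
\end{verbatim}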
\emph{(ii) Relaxing the (non-convex) rank-$1$ constraint on $\M{X}_{\kappa}$}. In step (i) we observed that the objective and constraints in~\eqref{eq:pop} can be rewritten as linear (hence convex) functions of $\M{X}_\kappa$. However, $\M{X}_\kappa$ still belongs to the set of positive-semidefinite rank-1 matrices, which is a non-convex set due to the rank constraint. Therefore, we simply relax the rank constraint and only enforce: \beq \begin{array}{ll} \label{eq:eqMomentIsPSD} \!\!\!\text{\grayout{moment matrix}}: & \M{X}_\kappa \succeq 0. \\ \end{array} \eeq \emph{(iii) Adding redundant constraints}. Since we have relaxed~\eqref{eq:pop} by re-parametrizing it in $\M{X}_\kappa$ and dropping the rank constraint, the final step to obtain Lasserre's relaxation consists in adding extra constraints to make the relaxation tighter. First of all, we observe that there are multiple repeated entries in the moment matrix (\emph{e.g.},\xspace in~\eqref{eq:momentMatrix}, the entry $x_1 x_2$ appears 4 times in the matrix). Therefore, we can enforce these repeated entries to be equal. In general, this leads to $m_{\mathrm{mom}} = \mathfrak{t}(\binomial{d}{\kappa}) - \binomial{d}{2\kappa} + 1$ linear constraints, where $\mathfrak{t}(n) \triangleq \frac{n(n+1)}{2}$ is the dimension of $\sym{n}$. These constraints are typically called \emph{moment constraints}: \beq \begin{array}{ll}\label{eq:momentConstraints} \text{\grayout{moment constraints}}: & \revise{\inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_\kappa} = 1}, \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_\kappa} = 0, \\ & j = 1, \ldots, \mathfrak{t}(\binomial{d}{\kappa}) - \binomial{d}{2\kappa}, \end{array} \eeq \revise{where $\M{A}_{\mathrm{mom},0}$ is all-zero except $[\M{A}_{\mathrm{mom},0}]_{11} =1$, and it defines the constraint $[\M{X}_\kappa]_{11} = 1$, following from the definition of the moment matrix (see eq.~\eqref{eq:momentMatrix}).} Second, we can also add \emph{redundant} equality constraints. Simply put, if $h_i = 0$, then also $h_i \cdot x_1 = 0$, $h_i \cdot x_2 = 0$, and so on, for any monomial we multiply by $h_i$. Since via $\M{X}_\kappa$ we can represent any polynomial of degree up to $2\kappa$, we can write as linear constraints any polynomial equality of the form $h_i \cdot [\vxx]_{2\kappa - \deg{h_i}} = {\mathbf 0}$ (the degree of the multiplying monomials is chosen such that the products do not exceed degree $2\kappa$). These new equalities can again be written linearly as: \beq \begin{array}{ll}\label{eq:redundantEqualityConstraints} \hspace{-3mm}\text{\grayout{(redundant) equality constraints}}: \inprod{\M{A}_{\mathrm{req},ij}}{\M{X}_\kappa} = 0, \\ \quad\quad i = 1, \ldots, l_h, \ \ j = 1, \ldots, \binomial{d}{2\kappa - \deg{h_i}}\!\!\!\!\!\!\!\!\! \end{array} \eeq for suitable $\M{A}_{\mathrm{req},ij}$. Since the first entry of $[\vxx]_{2\kappa - \deg{h_i}}$ is always 1 (\emph{i.e.},\xspace the monomial of degree zero),~eq.~\eqref{eq:redundantEqualityConstraints} already includes the original equality constraints in~\eqref{eq:eqConstraints1}. Finally, we observe that if $g_j \geq 0$, then for any positive semidefinite matrix $\M{M}$, it holds that $g_j \cdot \M{M} \succeq 0$. Since we can represent any polynomial of degree up to $2\kappa$ as a linear function of $\M{X}_\kappa$, we can add redundant constraints of the form $g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}} \succeq 0$ (by construction $g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}}$ only contains polynomials of degree up to $2\kappa$).
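Before assembling the full relaxation, it is useful to tally the sizes these steps produce. The short sketch below (ours; the constraint polynomials $h = x_1^2+x_2^2-1$ and $g = 1-x_1^2-x_2^2$ are hypothetical examples) evaluates the counting formulas for $d=2$, $\kappa=2$:
\begin{verbatim}
from math import comb

t = lambda n: n * (n + 1) // 2           # dimension of Sym(n)
binomial = lambda d, k: comb(d + k, k)   # the paper's binom(d, k)

d, kappa = 2, 2
n1 = binomial(d, kappa)                          # moment-matrix size: 6
m_mom = t(n1) - binomial(d, 2 * kappa) + 1       # 21 - 15 + 1 = 7
deg_h, deg_g = 2, 2                              # hypothetical h and g
m_req = binomial(d, 2 * kappa - deg_h)           # redundant equalities: 6
n_loc = binomial(d, kappa - (deg_g + 1) // 2)    # size of g * X_{kappa-1}: 3
print(n1, m_mom, m_req, n_loc)                   # 6 7 6 3
\end{verbatim}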
To phrase the resulting relaxation in the standard form~\eqref{eq:primalSDP}, it is common to add extra matrix variables $\M{X}_{g_j} = g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}}$ for $j=1,\ldots,l_g$ (the so-called \emph{localizing matrices} \cite[\S 3.2.1]{lasserre10book-momentsOpt}) and then force these matrices to be a linear function of $\M{X}_\kappa$: \beq \begin{array}{ll}\label{eq:locMatrices} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_g \end{array} \eeq \beq \begin{array}{ll} \label{eq:localizingConstraints} \hspace{-2mm} \text{\grayout{{localizing} constraints}}: \inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_\kappa} = [\M{X}_{g_j}]_{hk} \\ \quad \quad j = 1, \ldots, l_g,\ \ 1 \leq h\leq k \leq \binomial{d}{\kappa - \ceil{\deg{g_j}/2}} \end{array} \eeq where the linear constraints (for some $\M{A}_{\mathrm{loc},jkh}$) enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_\kappa$. Following steps (i)-(iii) above, it is straightforward to obtain the following (convex) semidefinite program: \begin{equation}\label{eq:lasserre} \hspace{-4mm} f^\star_{\kappa} =\!\! \displaystyle\min_{\M{X} = (\M{X}_\kappa, \M{X}_1, \ldots, \M{X}_{l_g})} \cbrace{\inprod{\M{C}_1}{\M{X}_\kappa} \mid {\cal A}(\M{X})\!=\!\boldsymbol{b},\M{X}\! \succeq\! 0}\!,\!\!\! \tag{LAS} \end{equation} where the variable $\M{X} = (\M{X}_\kappa, \M{X}_1,\dots,\M{X}_{l_g})$ is a collection of positive-semidefinite matrices (\emph{cf.}\xspace~\eqref{eq:eqMomentIsPSD} and~\eqref{eq:locMatrices}, and we shorthand $\M{X}_j = \M{X}_{g_j}$ for notational convenience), the objective is the one given in~\eqref{eq:objective}, and the linear constraints ${\cal A}(\M{X})=\boldsymbol{b}$ collect all the constraints in~\eqref{eq:momentConstraints},~\eqref{eq:redundantEqualityConstraints}, and~\eqref{eq:localizingConstraints}. Problem \eqref{eq:lasserre} can be readily formulated as a multi-block SDP in the primal form~\eqref{eq:primalSDP}, which matches the data format used by common SDP solvers. Problem \eqref{eq:lasserre} is commonly known as the \emph{dense} Lasserre's relaxation because a fully dense monomial basis $[\vxx]_\kappa$ is used to build the moment matrix \cite{Lasserre01siopt-lasserrehierarchy}. One can solve the relaxation for different choices of $\kappa$, leading to a \emph{hierarchy} of convex relaxations. While we presented Lasserre's hierarchy in a somewhat procedural way, the importance of the hierarchy lies in its stunning theoretical properties, which we review below. \begin{theorem}[Lasserre's Hierarchy \cite{Lasserre01siopt-lasserrehierarchy,lasserre10book-momentsOpt,Nie14mp-finiteConvergenceLassere}] \label{thm:lasserre} Let $-\infty < p^\star < \infty$ be the optimum of \eqref{eq:pop} and \revise{$f^\star_{\kappa}$ (resp. $\MX^\star_\kappa$) be the optimum (resp.
one optimizer) of \eqref{eq:lasserre},} and assume \eqref{eq:pop} satisfies the Archimedeanness condition (a stronger form of compactness, \emph{cf.}\xspace \cite[Definition 3.137]{Blekherman12Book-sdpandConvexAlgebraicGeometry}); then \begin{enumerate}[label=(\roman*)] \item \revise{(lower bound and convergence)} $f^\star_\kappa$ converges to $p^\star$ from below as $\kappa \rightarrow \infty$, and convergence occurs at a finite $\kappa$ under suitable technical conditions \cite{Nie14mp-finiteConvergenceLassere}; \item \revise{(rank-one solutions)} if $f^\star_\kappa = p^\star$ at some finite $\kappa$, then for every global minimizer $\vxx^{\star}$ of \eqref{eq:pop}, $\MX^\star_\kappa \triangleq [\vxx^{\star}]_{\kappa} [\vxx^{\star}]_{\kappa}^{\mathsf{T}}$ is optimal for \eqref{eq:lasserre}, and every rank-one optimal solution $\MX^\star_\kappa$ of \eqref{eq:lasserre} can be written as $[\vxx^{\star}]_\kappa [\vxx^{\star}]_{\kappa}^{\mathsf{T}}$ for some $\vxx^{\star}$ that is optimal for \eqref{eq:pop}; \item \revise{(optimality certificate)} if $\rank{\MX^\star_\kappa} = 1$ at some finite $\kappa$, then $f^\star_\kappa = p^\star$. \end{enumerate} \end{theorem} Theorem \ref{thm:lasserre} states that~\eqref{eq:lasserre} provides a hierarchy of lower bounds for~\eqref{eq:pop}. When the relaxation is exact ($p^\star\!=\!f^\star_\kappa$), global minimizers of~\eqref{eq:pop} correspond to rank-one solutions of~\eqref{eq:lasserre}. \revise{Moreover, after solving the convex SDP \eqref{eq:lasserre}, one can check the rank of the optimal solution $\MX^\star_\kappa$ to obtain a \emph{certificate} of global optimality. In practice, rank computation can be subject to numerical inaccuracies, and we introduce a continuous metric for evaluating the exactness of the relaxation in Section \ref{sec:sdprelax} (\emph{cf.}\xspace Theorem \ref{thm:sparserelaxtls}).} \maybeOmit{ {\bf Curse of Dimensionality}. As we will see in Section~\ref{sec:robustandpop}, for outlier-robust geometric perception problems, (i) $d$ ---the size of the variable in the original~\eqref{eq:pop}--- increases w.r.t.\xspace the number of measurements and can be a few hundred (by contrast, outlier-free problems have $d$ fixed and typically less than $20$), (ii) $\kappa=2$ is the minimum relaxation order because $\deg{p} > 2$, leading to $n_1 = \binomial{d}{2}$ and $m \geq m_{\mathrm{mom}} = \mathfrak{t}(\binomial{d}{2}) - \binomial{d}{4} + 1$, which both grow quickly w.r.t.\xspace $d$ (by contrast, outlier-free problems typically have $\deg{p}=2$, and one can use $\kappa=1$ in \eqref{eq:lasserre}, which is much more scalable). Therefore, Lasserre's hierarchy, at least in its dense form \eqref{eq:lasserre}, is impractical for outlier-robust perception. In Section \ref{sec:sdprelax}, we present a \emph{sparse} version of \eqref{eq:lasserre} for outlier-robust perception that significantly improves scalability. } \section{Proof of Proposition~\ref{prop:polynomialExpressibility}} \label{sec:app-proof-polynomial-express} \begin{proof} (i) can be proved by inspection: all the $r^2$ and $\psi$ in Examples \ref{ex:singlerotation}-\ref{ex:category} are squared norms of \emph{affine} (degree-one) polynomials in $\vxx$, and are naturally quadratic.
To show (ii), we note that the $q$-dimensional ball $\calB^q_T$ can be described by a single quadratic inequality $\calB^q_T = \{\boldsymbol{t} \in \Real{q} \mid T^2 - \inprod{\boldsymbol{t}}{\boldsymbol{t}} \geq 0 \}$, the 3D FOV cone ${\cal C}_\alpha$ can be described by two inequalities ${\cal C}_{\alpha} = \{ \boldsymbol{t} \in \Real{3} \mid \tan^2(\frac{\alpha}{2}) t_3^2 - t_1^2 - t_2^2 \geq 0, t_3 \geq 0 \}$, where the first inequality is quadratic and the second is affine. Now it remains to show that 2D and 3D rotations can be described by polynomial equalities. First, any 2D rotation $\M{R} \in \ensuremath{\mathrm{SO}(2)}\xspace$ can be equivalently parametrized by \begin{eqnarray}\label{eq:2Drotationparam} \M{R} = \cbrace{ \left[\begin{array}{cc} r_1 & -r_2 \\ r_2 & r_1 \end{array}\right] \ \middle\vert\ \boldsymbol{r} \in \Real{2}, \inprod{\boldsymbol{r}}{\boldsymbol{r}} = 1 }, \end{eqnarray} and hence described by a single quadratic equality. For a 3D rotation $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$, we shorthand $\boldsymbol{r}_i$ as its $i$-th column, and $\boldsymbol{r} = [\boldsymbol{r}_1 \,;\, \boldsymbol{r}_2 \,;\, \boldsymbol{r}_3] \in \Real{9}$ as its vectorization. Using the results from \cite{Briales17cvpr-registration,Yang20cvpr-perfectshape,Tron15RSSW-rotationdeterminant}, we know that $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$ can be equivalently described by the following set of $15$ quadratic equality constraints \begin{subequations} \begin{eqnarray} \hspace{-8mm} \text{\grayout{Unit norm}} : & h_{i} = 1 - \inprod{\boldsymbol{r}_i}{\boldsymbol{r}_i}, i=1,2,3, \label{eq:3Drotationunit} \\ \hspace{-8mm} \text{\grayout{Orthogonal}} : & h_{i,j} = \inprod{\boldsymbol{r}_i}{\boldsymbol{r}_j}, \parentheses {\substack{i \\j } } \in \{ \parentheses {\substack{1 \\2 } },\parentheses {\substack{2 \\3 } },\parentheses {\substack{3 \\1 } } \}, \label{eq:3Drotationorthogonal} \\ \hspace{-8mm} \text{\grayout{Right-hand}}: & \hspace{-3mm} h_{i,j,k}\! =\! \boldsymbol{r}_i\! \times\! \boldsymbol{r}_j\! -\! \boldsymbol{r}_k,\! \parentheses {\substack{i \\j \\k } }\! \in\! \cbrace{\! \parentheses {\substack{1 \\2 \\3 } }\! ,\! \parentheses {\substack{2 \\3 \\1 } }\! ,\! \parentheses {\substack{3 \\1 \\2 } }\!}\!,\label{eq:3Drotationrighthand} \end{eqnarray} \end{subequations} where ``$\times$'' denotes the vector cross product, and each $h_{i,j,k}$ defines a vector of $3$ equality constraints. Though the set of $15$ equalities is redundant (\emph{e.g.},\xspace \eqref{eq:3Drotationunit} and \eqref{eq:3Drotationrighthand} are sufficient for $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$), we use all of them to enhance robustness and tightness of the relaxation in Section \ref{sec:sdprelax}. \end{proof} \section{Solving the Projection Subproblem} \label{sec:supp-projsubproblem} In this section, we describe how to solve the projection subproblem in {\scenario{STRIDE}} (\emph{cf.}\xspace \eqref{eq:pgd} and \eqref{eq:strideprojection} in Algorithm \ref{alg-iPGMnlp}). In particular, we show that the dual problem of \eqref{eq:pgd} admits an unconstrained formulation, which allows developing a scalable algorithm based on limited-memory BFGS. Recall that the projection step \eqref{eq:strideprojection} of {\scenario{STRIDE}} seeks to compute the projection of a given point onto the spectrahedron $\calF_{\mathrm{P}} = \{ \M{X} \in \mathbb{X} \mid {\cal A}(\M{X})=\boldsymbol{b}, \M{X} \in {\cal K} \}$.
Formally, given a point $\M{Z} \in \mathbb{X}$, the projection problem seeks the closest point in $\calF_{\mathrm{P}}$ w.r.t.\xspace $\M{Z}$ \begin{equation} \label{prob:projection} \min_{\M{X} \in \mathbb{X}}\left\{ \frac{1}{2} \left\lVert \M{X} - \M{Z} \right\rVert^2\ \middle\vert\ \M{X}\in \calF_{\mathrm{P}} \right\}. \end{equation} Since $\calF_{\mathrm{P}}$ is the intersection of two convex sets, namely the hyperplane defined by ${\cal A}(\M{X})=\boldsymbol{b}$ and the (product of) positive semidefinite cone ${\cal K}$, a natural idea is to apply Dykstra's projection algorithm (see \emph{e.g.},\xspace~\cite{Combettes11book-proximalSplitting}) to generate an approximate solution by alternating the projection onto the hyperplane and the projection onto the semidefinite cone, both of which are easy to compute. However, Dykstra's projection is known to have slow convergence and may take many iterations to reach a satisfactory projection. As a result, instead of solving \eqref{prob:projection} directly, we consider its dual problem \begin{equation} \label{prob:projection-dual} \min_{\boldsymbol{y} \in \Real{m}, \M{S} \in \mathbb{X}}\left\{ \frac{1}{2}\left\lVert \M{S} + \calA^{*} (\boldsymbol{y}) + \M{Z} \right\rVert^2 - \inprod{\boldsymbol{b}}{\boldsymbol{y}}\ \middle\vert\ \M{S}\in {\cal K} \right\}, \end{equation} where we have ignored the constant term $ -\frac{1}{2}\left\lVert \M{Z} \right\rVert^2 $ and converted ``$\max$'' to ``$\min$'' by changing the sign of the objective. The KKT conditions for the pair \eqref{prob:projection} and \eqref{prob:projection-dual} are: \begin{equation} \label{eq:KKT-proj} \hspace{-3mm} {\cal A}(\M{X}) = \boldsymbol{b},\ \calA^{*} (\boldsymbol{y}) + \M{S} = \M{X} - \M{Z},\ \M{X},\M{S}\in {\cal K}, \ \inprod{\M{X}}{\M{S}} = 0.\!\! \end{equation} {\bf An unconstrained formulation}. Now we introduce a key observation that allows us to further simplify the dual \eqref{prob:projection-dual}. Fixing the unconstrained $\boldsymbol{y}$, problem \eqref{prob:projection-dual} can be seen as finding the closest $\M{S} \in {\cal K}$ w.r.t.\xspace the matrix $-\calA^{*} (\boldsymbol{y}) - \M{Z}$, and hence admits the closed-form solution \begin{eqnarray}\label{eq:Sofy} \M{S} = \Pi_{{\cal K}} \parentheses{-\calA^{*} (\boldsymbol{y}) - \M{Z}}. \end{eqnarray} As a result, after inserting~\eqref{eq:Sofy}, problem~\eqref{prob:projection-dual} is equivalent to \begin{equation} \label{prob:projetion-dual-y} \min_{\boldsymbol{y} \in \Real{m}} \ \ \phi(\boldsymbol{y}):= \frac{1}{2}\left\lVert \Pi_{{\cal K}}(\calA^{*} (\boldsymbol{y}) + \M{Z}) \right\rVert^2 - \left\langle \boldsymbol{b}, \boldsymbol{y} \right \rangle, \end{equation} with the gradient of $ \phi(\boldsymbol{y}) $ given by \begin{eqnarray} \nabla \phi(\boldsymbol{y}) = {\cal A} \Pi_{{\cal K}}(\calA^{*} (\boldsymbol{y}) + \M{Z}) - \boldsymbol{b}. \end{eqnarray} Thus, if $ \vy^\star $ is an optimal solution of problem~\eqref{prob:projetion-dual-y}, we can recover $\MS^\star$ from~\eqref{eq:Sofy}, and $\MX^\star$ from the KKT conditions \eqref{eq:KKT-proj}: \begin{eqnarray} \MX^\star = \calA^{*} (\vy^\star) + \MS^\star + \M{Z}. \end{eqnarray} Formulating the dual as the unconstrained problem \eqref{prob:projetion-dual-y} has appeared multiple times in the literature~\cite{Zhao10siopt-sdpnal,Malick09siopt-regularizationSDP}.
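For illustration, the following sketch (ours, in NumPy; it assumes a single positive-semidefinite block and a dense constraint matrix, whereas the SDPs in this paper are multi-block and sparse) shows the only two operations an unconstrained solver needs, namely evaluating $\phi$ and $\nabla\phi$:
\begin{verbatim}
import numpy as np

def proj_psd(M):
    # Projection onto the PSD cone via eigenvalue clipping.
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def phi_and_grad(y, A, b, Z):
    # A is a dense (m, n*n) matrix representing cal{A}, i.e.,
    # cal{A}(X) = A @ vec(X), with adjoint A*(y) = mat(A.T @ y).
    n = Z.shape[0]
    Aty = (A.T @ y).reshape(n, n)        # A*(y)
    P = proj_psd(Aty + Z)                # Pi_K(A*(y) + Z)
    phi = 0.5 * np.vdot(P, P) - b @ y
    grad = A @ P.ravel() - b             # cal{A}(Pi_K(A*(y)+Z)) - b
    return phi, grad

# Once an (approximate) minimizer y is found, recover
#   S = proj_psd(-(A.T @ y).reshape(n, n) - Z)  and  X = A*(y) + S + Z.
\end{verbatim}
A fused value-and-gradient callback of this form can be passed, \emph{e.g.},\xspace to \texttt{scipy.optimize.minimize(phi\_and\_grad, y0, args=(A, b, Z), jac=True, method='L-BFGS-B')}.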
Now that~\eqref{prob:projetion-dual-y} is a \emph{smooth unconstrained convex} problem in $ \boldsymbol{y}\in \Real{m} $, plenty of efficient algorithms are available, such as (accelerated) gradient descent~\cite{Nesterov18book-convexOptimization}, nonlinear conjugate gradient~\cite{Dai99siopt-ncg}, quasi-Newton methods~\cite{Nocedal06book-numericaloptimization} and the semismooth Newton method~\cite{Zhao10siopt-sdpnal}. In this paper, we apply the celebrated limited-memory BFGS (L-BFGS) method, see for example \cite[Algorithm 7.5]{Nocedal06book-numericaloptimization}. {L-BFGS} is easy to implement, can handle very large unconstrained optimization problems due to its low memory consumption, and is typically ``the algorithm of choice'' for large-scale problems~\cite[Chapter 7]{Nocedal06book-numericaloptimization}. Empirically, we observed that {L-BFGS} is efficient and robust for various applications. To the best of our knowledge, this is the first work that demonstrates the effectiveness of L-BFGS, or more generally quasi-Newton methods, in solving large-scale and degenerate SDPs. \section{Relative Suboptimality} \label{sec:app-compute-subopt} This section is concerned with the computation of a formally correct suboptimality gap $\eta_s$ for a given estimate (which we use as a performance metric in our experiments), whose validity is not hindered by potential numerical inaccuracies in the solution of the SDP relaxation~\eqref{eq:sparserelax}. In Theorem \ref{thm:sparserelaxtls} and \eqref{eq:subopt}, we stated that, by solving the sparse SDP relaxation \eqref{eq:sparserelax} to global optimality with optimizer $\MX^\star$ and associated optimum $f^\star$, one can round from $\MX^\star$ a feasible solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ to the original \eqref{eq:binaryTLS} problem with associated cost $\widehat{p} = p(\widehat{\vxx}_1,\widehat{\vtheta}_1)$. Then, a measure of suboptimality for the rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ can be computed as follows (also in \eqref{eq:subopt}): \begin{eqnarray} \label{eq:app-subopt} \eta_s \triangleq \frac{\abs{f^\star - \widehat{p}}}{1 + \abs{f^\star} + \abs{\widehat{p}}}. \end{eqnarray} It is apparent that $\eta_s = 0$ implies the relaxation \eqref{eq:sparserelax} is exact and the rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ is indeed globally optimal for \eqref{eq:binaryTLS} (recall $f^\star \leq p^\star \leq \widehat{p}$ by construction, where $p^\star$ is the unknown optimum of the nonconvex \eqref{eq:binaryTLS} problem). However, the caveat in computing the relative suboptimality as in \eqref{eq:app-subopt} is that, although it is almost always possible to compute a rounded solution $(\widehat{\vxx}_1,\widehat{\vtheta}_1)$ with cost $\widehat{p}$ (provided that the feasible set ${\cal X}$ of \eqref{eq:binaryTLS} is simple to project, as in our examples), it can be quite challenging to obtain $f^\star$ (which acts as a valid lower bound for $p^\star$) \emph{to machine precision}, since $f^\star$ is computed by numerically solving the SDP~\eqref{eq:sparserelax}, which may still lead to small inaccuracies. Moreover, as shown in the experimental section of the main text, first-order solvers such as {\scenario{SDPNAL+}} typically cannot solve the SDP to even moderate accuracy (within a reasonable number of iterations), in which case $f^\star$ is not attained.
Here we describe a procedure to compute a valid lower bound for $p^\star$, from any approximate solution $(\widehat{\MX},\widehat{\vy},\widehat{\MS}) \in \mathbb{X} \times \Real{m} \times \mathbb{X}$ of the SDP \eqref{eq:sparserelax}, without requiring it to be an optimal solution satisfying the KKT conditions \eqref{eq:sdpKKT}. In fact, as we will show soon, only $\widehat{\vy} \in \Real{m}$ is needed to compute a valid lower bound. {\bf Bounded trace}. Let us first show that each block of the primal variable $\M{X}$ in \eqref{eq:sparserelax} has a bounded trace when \eqref{eq:sparserelax} is applied to all six Examples \ref{ex:singlerotation}-\ref{ex:category}. Towards this goal, observe that the variable $\vxx \in {\cal X}$ has a bounded norm, \emph{i.e.},\xspace there exists $M_0>0$ such that $\norm{\vxx} \leq M_0, \forall \vxx \in {\cal X}$. For example, in Example \ref{ex:singlerotation}, $\vxx = \M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$ has $\norm{\vxx} = \sqrt{3} = M_0$; in Example \ref{ex:category}, $\vxx = (\M{R},\boldsymbol{t},\boldsymbol{c})$ with $\M{R} \in \ensuremath{\mathrm{SO}(3)}\xspace$, $\boldsymbol{t} \in \calB^3_{T_t}$, and $\boldsymbol{c} \in \calB^K_{T_c}$ has $\norm{\vxx} \leq \sqrt{3+T_t^2 + T_c^2} = M_0$, where $T_t$ and $T_c$ are the upper bounds for the norm of the translation and the norm of the shape parameters, respectively. Now recall that the primal variable $\M{X}$ has $1+l_g$ blocks, with the first block being the moment matrix and the other $l_g$ blocks being localizing matrices. With the observation that $\norm{\vxx} \leq M_0, \forall \vxx \in {\cal X}$, we can bound the trace of the moment matrix $\M{X}_v$ (\emph{cf.}\xspace \eqref{eq:sparsemomentmat}) as \begin{eqnarray} \label{eq:traceboundmomentmat} \trace{\M{X}_v} = & \trace{\boldsymbol{v}(\widetilde{\vxx}) \boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}}} = \boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}} \boldsymbol{v}(\widetilde{\vxx}) \nonumber \\ = & 1 + \norm{\vxx}^2 + \sum_{i=1}^N \theta_i^2 + \sum_{i=1}^N \theta_i^2 \norm{\vxx}^2 \nonumber \\ = & (1+N)(1+\norm{\vxx}^2) \nonumber \\ \leq & (1+N)(1+M_0^2) =: M_1 , \nonumber \\ & \forall \vxx \in {\cal X}, \boldsymbol{\theta} \in \{\pm 1\}^N. \end{eqnarray} Regarding the localizing matrices $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j},j=1,\dots,l_g$ (where $\M{X}_1$ is the order-one moment matrix), we have that (recall $g_j \geq 0$ by definition) \begin{eqnarray} \label{eq:traceboundlocalizemat} \trace{\M{X}_{g_j}} = & g_j \cdot \trace{[\M{X}_1]_{{\cal I}_j}} \nonumber \\ \leq & g_j \cdot \trace{\M{X}_1} \nonumber \\ = & g_j \cdot (1+ \norm{\vxx}^2 + \sum_{i=1}^N \theta_i^2 ) \nonumber \\ \leq & g_j \cdot (1+N+M_0^2), \nonumber \\ & \forall \vxx \in {\cal X}, \boldsymbol{\theta} \in \{ \pm 1 \}^N. \end{eqnarray} Therefore, it suffices to show that $g_j$ is upper bounded for any $\vxx \in {\cal X}$. This is obvious for all the examples in this paper. In particular, there are only two types of inequality constraints among Examples \ref{ex:singlerotation}-\ref{ex:category}. (i) The ball constraint $\boldsymbol{t} \in \calB^K_T$ (bounded translation and bounded shape parameters), which reads $g = T^2 - \norm{\boldsymbol{t}}^2 \geq 0$ and clearly satisfies $g \leq T^2$, hence is upper bounded. (ii) The camera FOV cone constraint $\boldsymbol{t} \in {\cal C}_\alpha$, which induces two inequality constraints $g_1 = t_3 \geq 0$ and $g_2 = \tan^2(\alpha/2) t_3^2 - t_1^2 - t_2^2 \geq 0$.
However, since the translation also lies in the bounded ball $\calB^3_T$, we have $g_1 = t_3 \leq \norm{\boldsymbol{t}} \leq T$ and $g_2 = \tan^2(\alpha/2) t_3^2 - t_1^2 - t_2^2 \leq \tan^2(\alpha/2) t_3^2 \leq \tan^2(\alpha/2) \norm{\boldsymbol{t}}^2 \leq \tan^2(\alpha/2) T^2$, so both are upper bounded. Therefore, we have shown that each localizing matrix also has bounded trace. {\bf A valid lower bound}. Now suppose we are given $\widehat{\vy} \in \Real{m}$; then for any $\vxx \in {\cal X}, \boldsymbol{\theta} \in \{ \pm 1\}^N$, we have \begin{eqnarray} & p(\vxx,\boldsymbol{\theta}) \nonumber\\ = & \inprod{\M{C}}{\M{X}} \nonumber \\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{\calA^{*} (\widehat{\vy})}{\M{X}} \nonumber \\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{{\cal A} (\M{X})}{\widehat{\vy}} \nonumber\\ = & \inprod{\M{C} - \calA^{*} (\widehat{\vy}) }{\M{X}} + \inprod{\boldsymbol{b}}{\widehat{\vy}} \label{eq:app-lower-bound-Xfeasible} \\ \geq &\!\!\!\!\! \underbrace{ \inprod{\boldsymbol{b}}{\widehat{\vy}} + \displaystyle \sum_{i=1}^{1+l_g} M_i \cdot \min\{ \lambda_{\min}\parentheses{ [\M{C} - \calA^{*} (\widehat{\vy})]_i },0\} }_{\underline{p}(\widehat{\vy})}, \label{eq:app-lower-bound-tracemineig} \end{eqnarray} where $M_i,i=1,\dots,1+l_g$ are the upper bounds for the traces of the moment matrix and the localizing matrices (shown in previous paragraphs and \eqref{eq:traceboundmomentmat}-\eqref{eq:traceboundlocalizemat}), $[\M{C} - \calA^{*} (\widehat{\vy})]_i$ denotes the $i$-th block of $[\M{C} - \calA^{*} (\widehat{\vy})]$ (recall that both $\M{C}$ and $\calA^{*} (\widehat{\vy})$ are multi-block symmetric matrices, \emph{cf.}\xspace \eqref{eq:adjointAmultiblk}), and $\lambda_{\min}(\cdot)$ denotes the minimum eigenvalue of a symmetric matrix. In \eqref{eq:app-lower-bound-Xfeasible}, we used that any $\M{X}$ that comes from the moment matrix and localizing matrices must be primal feasible and hence ${\cal A}(\M{X}) = \boldsymbol{b}$. In \eqref{eq:app-lower-bound-tracemineig}, we used that $\inprod{\M{A}}{\M{B}} \geq \lambda_{\min}(\M{A}) \trace{\M{B}}$ for any $\M{A}\in \sym{n}$ and $\M{B} \in \psd{n}$. With this lower bound $\underline{p}(\widehat{\vy})$, we can compute the relative suboptimality from any $\widehat{\vy} \in \Real{m}$: \begin{eqnarray} \eta_s \triangleq \frac{\abs{\underline{p}(\widehat{\vy}) - \widehat{p}}}{1 + \abs{\underline{p}(\widehat{\vy})} + \abs{\widehat{p}}}. \end{eqnarray} \section{Conclusions} \label{sec:conclusion} We presented the first general {and scalable\xspace} framework to design certifiable algorithms for outlier-robust geometric perception. We first showed that estimation with several common robust cost functions can be reformulated as polynomial optimization problems. We then designed a semidefinite relaxation scheme that exploits the sparsity of outlier-robust estimation to generate SDPs of much smaller sizes {while maintaining empirical exactness}. Finally, we proposed a robust and scalable SDP solver, {\scenario{STRIDE}}, that can solve the sparse relaxations to unprecedented scale and accuracy. We tested our framework on six geometric perception applications using both synthetic and real data, demonstrating its robustness, scalability, and capability to safeguard existing fast heuristics for robust estimation.
\section{Proof of Proposition~\ref{prop:robustaspop}} \label{sec:app-proof-robust-pop} \begin{proof} We first prove (i)-(iv) using Black-Rangarajan duality \cite{Black96ijcv-unification}, and then (v)-(vii) by manipulating the cost functions. {\bf Proof of (i)-(iv)}. The TLS proof has been given in \eqref{eq:binaryTLS} of the main text. We start with (ii) MC. With a similar strategy of introducing a binary variable as in \eqref{eq:binaryTLS}, we can write the MC cost function as \begin{eqnarray}\label{eq:mcidentity} \rho_{\mathrm{MC}} \equiv \min_{\theta \in \{+1,-1\}} \cbrace{ \frac{1-\theta}{2} \ \middle\vert\ -\theta(r^2 - \beta_i^2) \geq 0 }, \end{eqnarray} where the constraint $-\theta(r^2 - \beta_i^2) \geq 0$ enforces $\theta = -1$ if $r^2 > \beta_i^2$ (hence $\rho_{\mathrm{MC}} = 1$), and $\theta = +1$ if $r^2 < \beta_i^2$ (hence $\rho_{\mathrm{MC}} = 0$). If $r^2 = \beta_i^2$, then the minimization selects $\theta = +1$, since that choice attains the objective value zero. Using the identity in \eqref{eq:mcidentity}, problem \eqref{eq:robust} with $\rho = \rho_{\mathrm{MC}}$ is equivalent to \begin{equation}\label{eq:dualMC} \hspace{-4mm} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in \{\pm 1 \}^N}} \!\! \cbrace{ \sum_{i=1}^N\! \frac{1\!-\!\theta_i}{2}\! +\! \psi(\vxx) \middle\vert \substack{ \displaystyle -\theta_i (r^2(\vxx,\boldsymbol{z}_i)\! -\! \beta_i^2)\! \geq\! 0,\! \\ \displaystyle \forall i=1,\dots,N} }\!,\!\!\! \tag{MC} \end{equation} which is an instance of \eqref{eq:pop} in $(\vxx,\boldsymbol{\theta})\in \Real{d+N}$. To prove (iii), we leverage Black-Rangarajan duality \cite[Fig. 28]{Black96ijcv-unification} and write $\rho_{\mathrm{GM}}$ as a minimization problem by introducing a confidence variable $w \in [0,1]$ \begin{eqnarray}\label{eq:GMidentity} \rho_{\mathrm{GM}} \equiv \min_{w \in [0,1]} w \frac{r^2}{\beta_i^2} + (\sqrt{w}-1)^2. \end{eqnarray} One can check the correctness of \eqref{eq:GMidentity} by setting the derivative of the objective in \eqref{eq:GMidentity} w.r.t.\xspace $w$ to zero and obtaining $w$ as a function of $r$ in closed form. Eq.~\eqref{eq:GMidentity}, however, does not directly lead to a POP due to the presence of $\sqrt{w}$. Nevertheless, with a change of variable $\theta := \sqrt{w} \in [0,1]$, we can write problem \eqref{eq:robust} with $\rho = \rho_{\mathrm{GM}}$ as the following POP \begin{equation}\label{eq:dualGM} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in [0,1]^N}} \sum_{i=1}^N \frac{\theta_i^2 r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + (\theta_i - 1)^2 + \psi(\vxx). \tag{GM} \end{equation} Similarly, we can use Black-Rangarajan duality \cite[Fig. 25]{Black96ijcv-unification} to prove (iv) by introducing a confidence variable $w$ and writing $\rho_{\mathrm{TB}}$ as the solution of the following minimization \begin{eqnarray} \rho_{\mathrm{TB}} \equiv \min_{w \in [0,1]} w \frac{r^2}{\beta_i^2} + \frac{1}{3} - w + \frac{2}{3} w^{\frac{3}{2}}. \end{eqnarray} Then, with a change of variable $\theta := \sqrt{w}$, we conclude that \eqref{eq:robust} with $\rho = \rho_{\mathrm{TB}}$ can be written as the following POP \begin{equation}\label{eq:dualTB} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in [0,1]^N}} \sum_{i=1}^N \frac{\theta_i^2 r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1}{3} - \theta_i^2 + \frac{2}{3} \theta_i^3 + \psi(\vxx).
\tag{TB} \end{equation} In \eqref{eq:binaryTLS} and \eqref{eq:dualMC}, $\theta_i$ is binary and discrete, with $\theta_i= +1$ (resp. $\theta_i = -1$) indicating the $i$-th measurement $\boldsymbol{z}_i$ is an inlier (resp. outlier). In \eqref{eq:dualGM} and \eqref{eq:dualTB}, instead, $\theta_i$ is continuous, with $\theta_i \uparrow 1$ (resp. $\theta_i \downarrow 0$) indicating the $i$-th measurement $\boldsymbol{z}_i$ is an inlier (resp. outlier). {\bf Proof of (v)-(vii)}. The L1 cost function can be simply rewritten as \begin{eqnarray} \rho_{\mathrm{L1}} \equiv \min_{\gamma \in \Real{}} \cbrace{ \frac{\gamma}{\beta_i}\ \middle\vert\ \gamma \geq 0, \gamma^2 = r^2 }, \end{eqnarray} where $\gamma \geq 0$ and $\gamma^2 = r^2$ implies that $\gamma = \abs{r}$. Therefore, \eqref{eq:robust} with $\rho = \rho_{\mathrm{L1}}$ is equivalent to the following POP: \begin{equation} \hspace{-5mm}\min_{\substack{\vxx \in {\cal X} \subseteq \Real{d},\\ \vgamma \in \Real{N}} } \cbrace{ \sum_{i=1}^N \frac{\gamma_i}{\beta_i} \ \middle\vert\ \gamma_i \geq 0, \gamma_i^2 = r^2(\vxx,\boldsymbol{z}_i),i=1,\dots,N}\!\!.\!\!\!\tag{L1} \end{equation} Now we prove that \eqref{eq:robust} with the Huber loss \cite{Huber81} can also be written as a POP. We first perform a change of variable and let $\gamma = \abs{r}$ (which is equivalent to $\gamma \geq 0$, $\gamma^2 = r^2$): \begin{eqnarray} \label{eq:huberafterabs} \rho_{\mathrm{HB}} = \begin{cases} \frac{\gamma^2}{2\beta_i^2} & \gamma \leq \beta_i \\ \frac{\gamma}{\beta_i} - \frac{1}{2} & \text{otherwise} \end{cases}. \end{eqnarray} Then we introduce a binary variable $\theta \in \{ +1, -1\}$, and equivalently write \eqref{eq:huberafterabs} as \begin{eqnarray} \rho_{\mathrm{HB}} = \min_{\theta \in \{\pm 1\}} \cbrace{ \frac{1+\theta}{2} \frac{\gamma^2}{2\beta_i^2} + \frac{1-\theta}{2} \parentheses{\frac{\gamma}{\beta_i} - \frac{1}{2}} \middle\vert \theta (\gamma - \beta_i) \leq 0 }, \end{eqnarray} where the constraint $\theta (\gamma - \beta_i) \leq 0$ enforces $\theta = -1$ when $\gamma > \beta_i$ (hence $\rho_{\mathrm{HB}} = \frac{\gamma}{\beta_i} - \frac{1}{2}$), and $\theta = +1$ when $\gamma < \beta_i$ (hence $\rho_{\mathrm{HB}} = \frac{\gamma^2}{2\beta_i^2}$). Therefore, we can write \eqref{eq:robust} with $\rho = \rho_{\mathrm{HB}}$ as the following POP: \begin{equation} \hspace{-6mm}\min_{\substack{\vxx \in {\cal X}, \vgamma \in \Real{N}, \\ \boldsymbol{\theta} \in \{ \pm 1\}^N} } \cbrace{ \sum_{i=1}^N \frac{1+\theta_i}{2} \frac{\gamma_i^2}{2\beta_i^2} + \frac{1-\theta_i}{2} \parentheses{\frac{\gamma_i}{\beta_i} - \frac{1}{2}} \middle\vert \substack{ \gamma_i \geq 0, \\ \gamma_i^2 = r^2(\vxx,\boldsymbol{z}_i), \\ \theta_i (\gamma_i - \beta_i) \leq 0,\\ i=1,\dots,N }}\!\!. \tag{HB} \end{equation} Finally, we prove that \eqref{eq:robust} with the adaptive cost function $\rho_{\mathrm{ADT},s}$, proposed by Barron \cite{Barron19cvpr-adaptRobustLoss}, can also be written as a POP, when we restrict the scale parameter $s$ to be rational (we avoid $s = 0$ and $s=2$ because the cost function is not defined at those two values, and \cite{Barron19cvpr-adaptRobustLoss} augments the cost by taking its limits at $s = 0$ and $s=2$). Note that restricting $s$ to rational numbers preserves the expressiveness of the original adaptive cost in \cite{Barron19cvpr-adaptRobustLoss}, because the set of rational numbers is \emph{dense} in the set of real numbers.
Because $s$ is a rational number, we can let $s = \frac{p}{q}$ with $p,q$ integers, and write the adaptive cost as \begin{eqnarray} \rho_{\mathrm{ADT},s} = \frac{\abs{s-2}}{s} \parentheses{ \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{p}{2q}} - 1 }. \end{eqnarray} Now we perform a change of variable and let $\gamma = \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{p}{2q}}$. This change of variable is equivalent to the following polynomial equality constraint: \begin{eqnarray} 0 = h(\gamma,r^2) := \begin{cases} \gamma^{2q} - \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^p & p > 0\\ \gamma^{2q} \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\abs{p}} - 1 & p < 0 \end{cases}. \end{eqnarray} Therefore, we conclude that \eqref{eq:robust} with $\rho = \rho_{\mathrm{ADT},s}$ can be written as the following POP: \begin{equation} \hspace{-3mm} \min_{\substack{\vxx \in {\cal X}, \\ \vgamma \in \Real{N}}} \cbrace{ \sum_{i=1}^N \frac{\abs{s-2}}{s} (\gamma_i - 1)\ \middle\vert\ h(\gamma_i,r^2(\vxx,\boldsymbol{z}_i)) = 0,i=1,\dots,N}. \tag{ADT} \end{equation} This concludes the proof for all seven cost functions. \end{proof} \section{Outlier-Robust Estimation as POP} \label{sec:robustandpop} \input{sections/fig-robustcosts} In this section, we consider a general formulation of estimation with robust cost functions. We show that, for {seven}\xspace popular robust costs, this formulation can be recast as a~\eqref{eq:pop}. We conclude the section by showcasing the resulting formulation on six perception problems. {\bf Outlier-robust estimation}. Given a set of $N$ measurements ${\cal Z} = \{\boldsymbol{z}_i\}_{i=1}^N$ (\emph{e.g.},\xspace 2D image keypoints, 3D point clouds, relative poses), we consider the problem of using ${\cal Z}$ to estimate an unknown geometric model $\vxx \in {\cal X} \subseteq \Real{d}$ (\emph{e.g.},\xspace camera poses, rigid transformations, 3D shapes, robot trajectory), despite the fact that the measurement set ${\cal Z}$ may contain a large number of \emph{outliers}. Building on standard M-estimation \cite{Maronna19book-robustStats,MacTavish15crv-robustEstimation}, we perform outlier-robust estimation by solving the following optimization problem \begin{equation}\label{eq:robust} \min_{\vxx \in {\cal X} \subseteq \Real{d}}\ \ \sum_{i=1}^N \rho(r(\vxx,\boldsymbol{z}_i), \beta_i) + \psi(\vxx), \tag{Robust} \end{equation} where $r(\vxx,\boldsymbol{z}_i)$ is a (scalar) \emph{residual} function that measures the mismatch between $\vxx$ and $\boldsymbol{z}_i$ (\emph{e.g.},\xspace Euclidean distances, pose errors), $\beta_i > 0$ (set by the user) is the \emph{maximum admissible residual} for a measurement to be considered as an \emph{inlier} (or the minimum residual for it to be an outlier), and $\rho(r,\beta_i)$ is a \emph{robust} cost function that penalizes outliers much \emph{less} than inliers to prevent outliers from contaminating the estimate. We include a \emph{regularization} term $\psi(\vxx)$ in~\eqref{eq:robust}, to keep full generality: as we will see in the examples below, a regularizer is often added to high-dimensional estimation problems to ensure the solution is unique and well-behaved. We make the following assumption on problem \eqref{eq:robust}.
\begin{assumption}[Polynomial Residual, Constraint, and Regularization] \label{assumption:polynomialsrobust} In \eqref{eq:robust}, assume (i) $r^2, \psi$ are polynomials; (ii) the constraint $\vxx \in {\cal X}$ can be described by finitely many polynomial equalities and inequalities, \emph{i.e.},\xspace ${\cal X} = \{\vxx \in \Real{d}\mid h_i(\vxx)=0,i=1,\dots,l_h, g_j(\vxx) \geq 0,j=1,\dots,l_g \}$. \end{assumption} Assumption \ref{assumption:polynomialsrobust} is the prerequisite for applying the machinery of semidefinite relaxations for POPs in Section \ref{sec:pre-pop}. These assumptions are often mild in geometric perception problems, a point that will become clearer when we introduce the six examples later in this section \mbox{(\emph{cf.}\xspace Proposition \ref{prop:polynomialExpressibility})}. Now the only component of \eqref{eq:robust} that may prevent it from being a POP is the robust cost $\rho(r,\beta_i)$. In \emph{outlier-free} estimation, $\rho = r^2/\beta_i^2$ is chosen as the \emph{least squares} cost and \eqref{eq:robust} is immediately in the form of \eqref{eq:pop}. However, in outlier-robust estimation, $\rho$ is typically not a polynomial. For instance, let us consider the \emph{truncated least squares} (TLS) cost, which will be extensively used in this paper: \begin{equation} \label{eq:tlsdef} \rho_{\mathrm{TLS}}(r,\beta_i) \triangleq \min \cbrace{ \frac{r^2}{\beta_i^2}, 1 } = \begin{cases} \frac{r^2}{\beta_i^2} & \abs{r} \leq \beta_i \\ 1 & \text{otherwise} \end{cases}. \end{equation} The TLS cost~\eqref{eq:tlsdef} is clearly not a polynomial, and it is not even a \emph{smooth} function (\emph{cf.}\xspace Fig. \ref{fig:robust-costs}(a)). {\bf Reformulation as POP}. To build intuition, we now show that \eqref{eq:robust} with the TLS cost \eqref{eq:tlsdef} can be reformulated as a POP; then we generalize this conclusion to other cost functions in Proposition~\ref{prop:robustaspop}. The key observation is that, for any $a,b \in \Real{}$, $\min \{a, b\} \equiv \min_{\theta \in \{ +1,-1\}} \frac{1+\theta}{2} a + \frac{1-\theta}{2} b$, which allows recasting \eqref{eq:robust} with $\rho = \rho_{\mathrm{TLS}}$ as \begin{equation}\label{eq:binaryTLS} \min_{\substack{\vxx \in {\cal X} \subseteq \Real{d}, \\ \boldsymbol{\theta} \in \{\pm 1\}^N} }\ \sum_{i=1}^N \frac{1+\theta_i}{2} \frac{r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1-\theta_i}{2} + \psi(\vxx), \tag{TLS} \end{equation} where each binary variable $\theta_i \in \{+1,-1\}$ decides whether the $i$-th measurement $\boldsymbol{z}_i$ is an inlier ($\theta_i = +1$) or an outlier ($\theta_i=-1$). By recalling that $\theta_i \in \{+1,-1\} \Leftrightarrow \theta_i^2 - 1 = 0$, we see that problem \eqref{eq:binaryTLS} is an instance of \eqref{eq:pop}, with the decision variables now being $(\vxx, \boldsymbol{\theta}) \in \Real{d+N}$. The next proposition states that the reformulation above can be generalized to a broader set of robust cost functions.
\begin{proposition}[Robust Estimation as POP] \label{prop:robustaspop} Under Assumption \ref{assumption:polynomialsrobust}, if the cost function $\rho$ in \eqref{eq:robust} is one of the following: \begin{enumerate}[label=(\roman*)] \item truncated least squares (TLS): $\rho_{\mathrm{TLS}} \triangleq \min \cbrace{ \displaystyle \frac{r^2}{\beta_i^2}, 1 } $; \item maximum consensus: $\rho_{\mathrm{MC}} \triangleq \begin{cases} 0 & \abs{r} \leq \beta_i \\ 1 & \text{otherwise} \end{cases}$; \item Geman-McClure: $\rho_{\mathrm{GM}} \triangleq \frac{\displaystyle r^2/\beta_i^2}{\displaystyle 1+r^2/\beta_i^2}$; \item Tukey's Biweight: $\rho_{\mathrm{TB}} \triangleq \begin{cases} \frac{r^2}{\beta_i^2} - \frac{r^4}{\beta_i^4} + \frac{r^6}{3\beta_i^6} & \abs{r}\leq \beta_i \\ \frac{1}{3} & \text{otherwise} \end{cases}$, \end{enumerate} then \eqref{eq:robust} can be recast as a \eqref{eq:pop} with $d+N$ variables, where each of the additional $N$ variables indicates the confidence of the corresponding measurement being an inlier. Moreover, \eqref{eq:robust} with the following costs can also be written as a \eqref{eq:pop} \begin{enumerate}[label=(\roman*)] \setcounter{enumi}{4} \item L1: $\rho_{\mathrm{L1}} \triangleq \abs{r}/\beta_i $; \item Huber: $\rho_{\mathrm{HB}} \triangleq \begin{cases} \frac{r^2}{2\beta_i^2} & \abs{r} \leq \beta_i \\ \frac{\abs{r}}{\beta_i} - \frac{1}{2} & \text{otherwise} \end{cases}$; \item Adaptive \cite{Barron19cvpr-adaptRobustLoss}: $\rho_{\mathrm{ADT},s} \triangleq \frac{\abs{s-2}}{s} \parentheses{ \parentheses{ \frac{r^2/\beta_i^2}{\abs{s-2}} + 1 }^{\frac{s}{2}} - 1} $, \\for a given scale parameter $s \in \mathbb{Q} \backslash \{0,2\}$, \end{enumerate} by adding slack variable(s) for each measurement. \end{proposition} Fig. \ref{fig:robust-costs} plots the seven robust costs (Fig. \ref{fig:robust-costs}(g) shows $\rho_{\mathrm{ADT},s}$ for six different values of $s$). While we postpone the proof to Supplementary Material\xspace, the key insight is that for common robust cost functions we can either (a) use Black-Rangarajan duality~\cite{Black96ijcv-unification} to convert them into polynomials by introducing additional slack variables -- one for each measurement (we use this approach for (i)-(iv)), or (b) directly manipulate them into polynomials by a change of variables (for (v)-(vii)). \input{sections/fig-applications} {\bf Perception examples}. We now shed some light on the generality of the formulation~\eqref{eq:robust} and Assumption~\ref{assumption:polynomialsrobust} by considering six outlier-robust geometric perception problems. We first present the examples and then show, in Proposition~\ref{prop:polynomialExpressibility}, that they all satisfy Assumption~\ref{assumption:polynomialsrobust}. We assume $\psi(\vxx) = 0$ unless otherwise mentioned. \setcounter{theorem}{0} \begin{example}[Single Rotation Averaging \cite{Hartley13ijcv}] \label{ex:singlerotation} Given $N$ measurements of an unknown $q$-dimensional rotation $\{ \boldsymbol{z}_i = \widetilde{\MR}_i \in \mathrm{SO}(\dimrot) \}_{i=1}^N$, single rotation averaging seeks to find the best average rotation $\vxx = \M{R} \in \mathrm{SO}(\dimrot)$. The residual function is chosen as the chordal distance between $\M{R}$ and $\widetilde{\MR}_i$: $r(\vxx, \boldsymbol{z}_i) = \Vert \M{R} - \widetilde{\MR}_i \Vert$.
Fig.~\ref{fig:applications}(a) plots an instance of 3D single rotation averaging with $20$ measurements (rotations are plotted as 3D coordinate frames), among which there is a single outlier (shown as transparent). \end{example} \begin{example}[Multiple Rotation Averaging \cite{Eriksson18cvpr-strongDuality,Carlone16TRO-planarPGO,Lajoie19ral-DCGM}] \label{ex:multirotation} Let ${\cal G} = ({\cal V},{\cal E})$ be an undirected graph with vertex set ${\cal V} = [n]$ and edge set ${\cal E}$. Each vertex $i \in {\cal V}$ is associated with an unknown rotation $\M{R}_i \in \mathrm{SO}(\dimrot)$ (typically $q=2$ or $q=3$), while each edge $(i,j) \in {\cal E}$ gives a relative rotation measurement $\widetilde{\MR}_{ij} \in \mathrm{SO}(\dimrot)$ between the unknown rotations at vertex $i$ and $j$. Multiple rotation averaging estimates the set of absolute rotations on the vertices $\vxx = \{\M{R}_i\}_{i \in {\cal V}} \in \mathrm{SO}(\dimrot)^{n}$ from relative measurements over ${\cal E}$. The residual function is chosen as the chordal distance between $\M{R}_i \widetilde{\MR}_{ij}$ and $\M{R}_j$ for $(i,j) \in {\cal E}$: $r(\vxx,\boldsymbol{z}_{ij}) = \Vert \M{R}_i \widetilde{\MR}_{ij} - \M{R}_j \Vert$. \maybeOmit{Optionally, if a set ${\cal R}$ of relative measurements is known to be free of outliers (\emph{e.g.},\xspace odometry measurements in robot navigation), then a regularization $\psi(\vxx) = \sum_{(i,j) \in {\cal R}} \Vert \M{R}_i \widetilde{\MR}_{ij} - \M{R}_j \Vert^2$ is added to \eqref{eq:robust}.} Fig. \ref{fig:applications}(b) plots an instance of 2D multiple rotation averaging with $9$ (unknown) absolute rotations and $11$ relative measurements, two of which are outliers (shown in red). \end{example} \begin{example}[Point Cloud Registration \cite{Yang20tro-teaser}] \label{ex:pointcloud} Given two sets of 3D points with putative correspondences $\{ \boldsymbol{z}_i = (\boldsymbol{p}_i, \boldsymbol{q}_i) \}_{i=1}^{N}$ (\emph{e.g.},\xspace matched by deep-learned features \cite{Yang21cvpr-sgp}), point cloud registration seeks the best rigid transformation $\vxx = (\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ to align them. The residual function is chosen as the Euclidean distance between pairs of points after applying the rigid transformation: $r(\vxx,\boldsymbol{z}_i) = \norm{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}$. For mathematical convenience (\emph{i.e.},\xspace to satisfy the Archimedeanness condition in Theorem \ref{thm:lasserre}), we assume the translation to be bounded: $\boldsymbol{t} \in \calB^3_T$, where $ \calB^q_T \triangleq \{\boldsymbol{t} \in \Real{q}\mid \norm{\boldsymbol{t}} \leq T \}$ defines a $q$-dimensional ball centered at the origin with radius $T$. Fig. \ref{fig:applications}(c) plots an instance of point cloud registration using the {\scenario{Bunny}} dataset \cite{Curless96siggraph} (outlier correspondences are shown in red).
\end{example} \begin{example}[Mesh Registration \cite{Briales17cvpr-registration,Shi21icra-robin}] \label{ex:mesh} Given a set of $N$ putative correspondences from a 3D point cloud to a 3D mesh, where the point cloud $\{(\boldsymbol{p}_i,\boldsymbol{u}_i) \}_{i=1}^N$ is represented as a collection of points ($\boldsymbol{p}_i \in \Real{3}$) with estimated normals ($\boldsymbol{u}_i \in \usphere{2}$), and the mesh $\{(\boldsymbol{q}_i,\boldsymbol{v}_i )\}_{i=1}^N$ is represented as a collection of faces with unit normals $(\boldsymbol{v}_i \in \usphere{2})$ and arbitrary points that belong to them $(\boldsymbol{q}_i \in \Real{3})$, mesh registration seeks the best rigid transformation $\vxx = (\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ to align the point cloud with the mesh. The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i) =\sqrt{ \norm{\inprod{\boldsymbol{v}_i}{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}}^2 + \norm{\boldsymbol{v}_i - \M{R} \boldsymbol{u}_i}^2 }$, where $\norm{\inprod{\boldsymbol{v}_i}{\boldsymbol{q}_i - \M{R} \boldsymbol{p}_i - \boldsymbol{t}}}$ is the point-to-plane distance, and $\norm{\boldsymbol{v}_i - \M{R} \boldsymbol{u}_i}$ is the normal-to-normal distance. Similar to Example \ref{ex:pointcloud}, we enforce $\boldsymbol{t} \in \calB^3_T$. Fig. \ref{fig:applications}(d) visualizes an instance of mesh registration using the {\scenario{TeddyBear}} model from the {\scenario{HomebrewedDB}} dataset \cite{Kaskman19-homebrewedDB} (outlier correspondences shown in red). \end{example} \begin{example}[Absolute Pose Estimation \cite{Kneip2014ECCV-UPnP,Schweighofer2008bmvc-SOSforPnP,Yang21arxiv-damp}] \label{ex:absolutepose} Consider a camera with field of view (FOV) $\alpha \in (0,\pi)$ picturing a 3D object {(conventionally centered at zero)}. Given a set of $N$ putative correspondences between 3D keypoints $\{\boldsymbol{p}_i \in \Real{3} \}_{i=1}^N$ on the object and 2D image keypoint detections $\{\boldsymbol{u}_i \in \usphere{2}\}_{i=1}^N$, where $\boldsymbol{u}_i$ denotes the unit bearing vector corresponding to the $i$-th 2D keypoint, absolute pose estimation (also known as \emph{Perspective-$n$-Points}) seeks to estimate the absolute camera pose $\vxx = (\M{R},\boldsymbol{t})\in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ from the 2D-3D correspondences. 
The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i)\!=\!\sqrt{\inprod{\M{R} \boldsymbol{p}_i + \boldsymbol{t}}{({\mathbf I}_3 - \boldsymbol{u}_i\boldsymbol{u}_i^{\mathsf{T}})(\M{R} \boldsymbol{p}_i + \boldsymbol{t})} }$, \emph{i.e.},\xspace the point-to-line distance from the transformed 3D keypoint $\M{R}\boldsymbol{p}_i + \boldsymbol{t}$ (in camera frame) to the bearing vector $\boldsymbol{u}_i$.\footnote{Instead of using the geometric reprojection error as the residual (a rational function), we follow \cite{Yang21arxiv-damp,Schweighofer2008bmvc-SOSforPnP} and choose the point-to-line distance as the residual so that $r^2$ is a polynomial per Assumption \ref{assumption:polynomialsrobust}.} In this paper, we enforce $\boldsymbol{t} \in \calB^3_T \cap {\cal C}_{\alpha}$, where ${\cal C}_{\alpha}\triangleq \{ \boldsymbol{t} \in \Real{3} \mid \tan(\frac{\alpha}{2}) t_3 \geq \sqrt{t_1^2 + t_2^2} \}$ defines the 3D cone corresponding to the camera FOV; the constraint $\boldsymbol{t} \in {\cal C}_\alpha$ enforces the center of the 3D object (\emph{i.e.},\xspace~$\M{R}\cdot{\mathbf 0} + \boldsymbol{t} = \boldsymbol{t}$ in camera frame) to lie inside the FOV. Fig.~\ref{fig:applications}(e) shows an instance of absolute pose estimation using a satellite \mbox{image from the {\scenario{SPEED}} dataset~\cite{Sharma19arxiv-SPEED} (outliers in red).} \end{example} \begin{example}[Category-Level Object Pose and Shape Estimation \cite{Shi21rss-pace}] \label{ex:category} Given $N$ 3D semantic keypoint observations $\{ \boldsymbol{p}_i \}_{i=1}^N$ of an object of a certain category (\emph{e.g.},\xspace car, chair), category-level perception estimates the object pose and shape. We consider the standard \emph{active shape model}, where the unknown shape of the object is described as a nonnegative combination of $K$ shapes in a library $\{ \{\boldsymbol{q}_{k,i}\}_{i=1}^N \}_{k=1}^K$ (the \emph{bases}, which intuitively correspond to examples of objects in that category). Hence, category-level perception estimates the pose $(\M{R},\boldsymbol{t}) \in \ensuremath{\mathrm{SO}(3)}\xspace \times \Real{3}$ and shape coefficients $\boldsymbol{c} \in \mathbb{R}^{K}_{+}$ describing the object. The residual function is chosen as: $r(\vxx,\boldsymbol{z}_i) = \Vert \M{R} \boldsymbol{p}_i + \boldsymbol{t} - \sum_{k=1}^K c_k \boldsymbol{q}_{k,i} \Vert$, \emph{i.e.},\xspace the Euclidean distance between the transformed 3D keypoint detections and the nonnegative combination of the shape bases. We include the regularizer $\psi(\vxx) = \lambda \norm{\boldsymbol{c}}^2$, with a given weight $\lambda > 0$, for the shape parameters $\boldsymbol{c}$, as in~\cite{Shi21rss-pace}. Again, we enforce boundedness: $\boldsymbol{t} \in \calB^3_{T_t}$ and $\boldsymbol{c} \in \calB^K_{T_c}$. Fig. \ref{fig:applications}(f) pictures an example of category-level perception from the {\scenario{ApolloScape}} dataset~\cite{Wang19pami-apolloscape}, where one estimates the pose and shape of a vehicle given 2D semantic keypoint detections with associated depth values (outliers shown in red). \end{example} \begin{proposition}[Polynomial Expressibility] \label{prop:polynomialExpressibility} Examples \ref{ex:singlerotation}-\ref{ex:category} satisfy Assumption~\ref{assumption:polynomialsrobust}.
Precisely, (i) $r^2$ and $\psi$ (if $\psi \neq 0$) are quadratic polynomials (\emph{i.e.},\xspace $\deg{r^2}=\deg{\psi}=2$); (ii) the constraint set ${\cal X}$ can be described by polynomial equalities $h_i$'s and inequalities $g_j$'s with degree up to 2 (\emph{i.e.},\xspace $\deg{h_i},\deg{g_j} \leq 2$). \end{proposition} While we postpone the proof to Supplementary Material\xspace, we observe that the key insights behind the proof are simple but powerful: (i) rigid body transformations can be expressed as linear functions (\emph{e.g.},\xspace $\M{R} \boldsymbol{p}_i + \boldsymbol{t}$ for a given point $\boldsymbol{p}_i$), (ii) squared residuals $r^2$ (and our regularizer $\psi$) are commonly squared L2 norms, which can be written as quadratic functions, and (iii) the set of poses and rotations can be described by quadratic (in-)equality constraints, a fact already used in, \emph{e.g.},\xspace~\cite{Tron15RSSW-rotationdeterminant,Carlone15icra-verification,Briales18cvpr-global2view,Yang20cvpr-perfectshape}. Propositions \ref{prop:robustaspop} and \ref{prop:polynomialExpressibility} together establish that outlier-robust geometric perception problems of the form \eqref{eq:robust} with TLS, MC, GM, TB, L1, Huber and Adaptive costs (Fig. \ref{fig:robust-costs}), when applied to Examples \ref{ex:singlerotation}-\ref{ex:category} (Fig. \ref{fig:applications}), are instances of~\eqref{eq:pop}. The expert reader will also recognize other geometric perception problems that satisfy Assumption~\ref{assumption:polynomialsrobust}, including 2D-2D relative pose estimation \cite{Briales18cvpr-global2view}, triangulation \cite{Aholt12eccv-qcqptriangulation}, rotation search (Wahba problem) \cite{Yang19iccv-quasar}, pose graph optimization \cite{Rosen19IJRR-sesync}, among others. \revise{Although the bundle adjustment problem \cite{Agarwal10eccv} cannot be written as a POP using the geometric reprojection error, adopting the point-to-line error can put bundle adjustment in the form of a POP \cite{Schweighofer06bmvc}. Nevertheless, bundle adjustment typically involves too many variables (\emph{e.g.},\xspace hundreds of camera poses and hundreds of thousands of 3D points) to be practically solvable using existing semidefinite relaxations.} For the rest of the paper, we will focus on designing certifiable algorithms and semidefinite relaxations for~\eqref{eq:robust} with the~\eqref{eq:binaryTLS} cost function. However, the semidefinite relaxations proposed in Section \ref{sec:sdprelax} can be extended to the other costs in Proposition~\ref{prop:robustaspop}, and we leave that exercise to the interested reader. We end this section with a remark about why we prefer the TLS cost over the others in Proposition~\ref{prop:robustaspop}. \begin{remark}[Preference for TLS] \label{remark:TLSvsothers} (i) Compared to GM, TB, L1 and Huber, which still penalize outliers, TLS completely discards outliers. Consequently, TLS can often achieve better robustness to outliers \cite{Yang20ral-gnc,MacTavish15crv-robustEstimation}. (ii) MC also completely discards outliers, but it does not select a model to minimize the inlier residuals. Therefore, there can be an infinite number of solutions to problem \eqref{eq:robust} with equal cost (the number of outliers). (iii) The adaptive cost typically leads to POPs with high-degree polynomials, which require a large $\kappa$ in the relaxation hierarchy and result in SDPs that are intractable.
(iv) TLS can be shown to be a maximum likelihood estimator when the inliers have a Gaussian distribution and the outliers are uniformly distributed, see \cite[Proposition 5]{Antonante20TRO-outlier}. \end{remark} \section*{Acknowledgments} The authors would like to thank Jie Wang, Victor Magron, and Jean B. Lasserre for the discussion about Lasserre's hierarchy and {\scenario{TSSOS}}; Ling Liang and Kim-Chuan Toh for the discussion about SDP solvers; Bo Chen and Tat-Jun Chin for the {\scenario{SPEED}} data; and Jingnan Shi for the {\scenario{ApolloScape}} data. This work was funded by ARL DCIST CRA W911NF-17-2-0181, ONR RAIDER N00014-18-1-2828, MathWorks, NSF CAREER award ``Certifiable Perception for Autonomous Cyber-Physical Systems'', and Lincoln Laboratory's Resilient Perception in Degraded Environments program. \section{Implementation Details for {\scenario{STRIDE}}} In Section \ref{sec:scalableopt}, we presented the {\scenario{STRIDE}} algorithm and proved its global convergence. We noted that the initial point $(\M{X}^0,\boldsymbol{y}^0,\M{S}^0)$ could have a significant impact on the convergence speed of {\scenario{STRIDE}}. Therefore, in {\scenario{STRIDE}} we use existing fast heuristics ({\scenario{GNC}}, {\scenario{RANSAC}}) to generate a \emph{primal} initial guess (\emph{cf.}\xspace Remark \ref{remark:fastheuristics}). In this section, we describe how to generate a \emph{dual} initial guess (Section \ref{sec:app-dual-warmstart}), and how to use Riemannian optimization for local search (Section \ref{sec:app-local-search}). \subsection{Dual Warmstart} \label{sec:app-dual-warmstart} We propose to use a combination of two techniques to generate a good dual initial point $(\boldsymbol{y}^0,\M{S}^0)$. Section \ref{sec:app-cssr} describes a method to relax the \eqref{eq:binaryTLS} problem by exploiting correlative sparsity. Although such a relaxation is not tight, we show that its solution can be used to warmstart {\scenario{STRIDE}}. In Section \ref{sec:app-admmplus}, we present a fast first-order algorithm to refine both the primal and the dual initializations. \subsubsection{Bootstrapping via Correlative Sparsity} \label{sec:app-cssr} The \eqref{eq:binaryTLS} problem has another special property called \emph{correlative sparsity} \cite{Wang21siopt-chordaltssos,Waki06jopt-SOSSparsity,Lasserre06siopt-correlativesparsity}, which, loosely speaking, refers to the property that there exists a partition of the variables $(\vxx,\boldsymbol{\theta})$ into a union of smaller groups, such that (i) each constraint of \eqref{eq:binaryTLS} involves only one group of the variables, and (ii) the objective of \eqref{eq:binaryTLS} can be decomposed into terms that each involve only one group of the variables (\emph{cf.}\xspace \cite[Assumption 2]{Lasserre06siopt-correlativesparsity}). In particular, we observe that the objective polynomial, denoted by $p(\vxx,\boldsymbol{\theta})$, can be expressed as a sum of $N$ polynomials: \begin{eqnarray} p(\vxx,\boldsymbol{\theta}) = \sum_{i=1}^N \underbrace{ \parentheses{ \frac{1+\theta_i}{2} \frac{r^2(\vxx,\boldsymbol{z}_i)}{\beta_i^2} + \frac{1-\theta_i}{2} + \frac{1}{N} \psi(\vxx)} }_{p_i(\vxx,\theta_i)} , \end{eqnarray} where each $p_i$ is a polynomial that only involves $\widetilde{\vxx}_i \triangleq [\vxx \,;\, \theta_i] \in \Real{d+1}$. The constraint polynomials can also be partitioned into $N$ groups where the $i$-th group of constraints only involves $\widetilde{\vxx}_i$.
To see this, note that there are two types of constraints in \eqref{eq:binaryTLS}: the ones that constrain $\vxx$ (to be proper rotations and translations), denoted by ${\cal H}[\vxx]$, and the ones that constrain each $\theta_i$ to be a binary variable, denoted by ${\cal H}[\theta_i] = \{ \theta_i^2 -1 = 0\},i=1,\dots,N$. Therefore, defining ${\cal H}_i \triangleq \{ {\cal H}[\vxx],{\cal H}[\theta_i] \}$, each ${\cal H}_i$ only contains polynomials in $\widetilde{\vxx}_i$, and the union of ${\cal H}_i$ for $i=1,\dots,N$ is the full constraint set of \eqref{eq:binaryTLS}. This correlative sparsity allows us to design an SDP relaxation for \eqref{eq:binaryTLS} using $N$ moment matrices $\M{X}_{v_i}, i=1,\dots,N$, where each $\M{X}_{v_i}$ is defined as \begin{eqnarray} & \hspace{-3mm} \boldsymbol{v}_i(\widetilde{\vxx}_i) \triangleq [1 \,;\, \vxx \,;\, \theta_i \,;\, \theta_i \vxx] \in \Real{2d + 2}, \label{eq:csliftingmonomial}\\ & \hspace{-10mm}\M{X}_{v_i}\! \triangleq\! \boldsymbol{v}_i(\widetilde{\vxx}_i) \boldsymbol{v}_i (\widetilde{\vxx}_i)^{\mathsf{T}}\! =\! \left[\begin{array}{cccc} 1 & \vxx^{\mathsf{T}} & \theta_i & \theta_i \vxx^{\mathsf{T}} \\ \vxx & \vxx \vxx^{\mathsf{T}} & \theta_i \vxx & \theta_i \vxx\vxx^{\mathsf{T}} \\ \theta_i & \theta_i \vxx^{\mathsf{T}} & \theta_i^2 & \theta_i^2 \vxx^{\mathsf{T}} \\ \theta_i \vxx & \theta_i \vxx\vxx^{\mathsf{T}} & \theta_i^2 \vxx & \theta_i^2 \vxx\vxx^{\mathsf{T}} \end{array}\right] \label{eq:csmomentmat} \end{eqnarray} and has a \emph{constant} size $2d+2$. It is easy to verify that $\M{X}_{v_i}$ contains all the monomials in $p_i(\widetilde{\vxx}_i)$ and ${\cal H}_i$. Therefore, by following similar steps as in the main text, we can derive an SDP relaxation that exploits correlative sparsity. \emph{(i) Rewriting \eqref{eq:binaryTLS} using the moment matrices $\{ \M{X}_{v_i} \}_{i=1}^N$}. Because the sparse moment matrix $\M{X}_{v_i}$ contains all monomials in $p_i$, and the \eqref{eq:binaryTLS} cost is a sum of $p_i$'s, we can write the objective of \eqref{eq:binaryTLS} as a linear function of $\{ \M{X}_{v_i} \}_{i=1}^N$: \beq \begin{array}{ll}\label{eq:objectivesparsecs} \!\!\!\!\!\!\text{\grayout{objective}}: & \sum_{i=1}^N \inprod{\M{C}_i}{\M{X}_{v_i}}. \end{array} \eeq \emph{(ii) Relaxing the rank-$1$ constraint on $\{\M{X}_{v_i}\}_{i=1}^N$}. By construction, $\M{X}_{v_i}$ belongs to the set of rank-one positive semidefinite matrices. Since the rank constraint is non-convex, we drop it and only enforce each $\M{X}_{v_i}$ to be positive semidefinite: \beq \begin{array}{ll} \label{eq:eqMomentIsPSDsparsecs} \!\!\!\text{\grayout{moment matrices}}: & \M{X}_{v_i} \succeq 0, i=1,\dots,N. \\ \end{array} \eeq \emph{(iii) Adding redundant constraints}. Now we add moment constraints to each moment matrix $\M{X}_{v_i}$ and use the set of constraints ${\cal H}_i$ to add redundant equality and localizing constraints for $\M{X}_{v_i}$. Because this procedure is the same for each moment matrix $\M{X}_{v_i}$, we will only describe it once for a fixed $i$. First, some monomials can repeat themselves at multiple entries of $\M{X}_{v_i}$. For example, in \eqref{eq:csmomentmat} the ``$\theta_i \vxx$'' block is the same as the ``$\theta_i \vxx^{\mathsf{T}}$'' block up to rearrangement of entries. In fact, the number of \emph{unique} monomials in $\M{X}_{v_i}$ is $m_{2v_i} = 3\mathfrak{t}(d+1)$, while the dimension of $\M{X}_{v_i}$ (in terms of a symmetric matrix) is $\mathfrak{t}(2d+2)$.
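To make the monomial counting concrete, the following MATLAB sketch (ours, for illustration only; it requires the Symbolic Math Toolbox and is not part of our solver) builds $\boldsymbol{v}_i(\widetilde{\vxx}_i)$ and $\M{X}_{v_i}$ for a toy case with $d=2$ and counts the unique monomials:
\begin{verbatim}
% Sketch: build the sparse moment matrix of (eq:csmomentmat)
% for a toy case with d = 2 (Symbolic Math Toolbox).
syms x1 x2 ti real
x   = [x1; x2];            % geometric variables, d = 2
vi  = [1; x; ti; ti*x];    % lifting monomials, size 2d+2 = 6
Xvi = expand(vi * vi.');   % rank-one moment matrix, 6 x 6

% The upper triangle has t(2d+2) = 21 entries, but many repeat
% (e.g., the "ti*x" block equals the "ti*x'" block entrywise).
upper = Xvi(triu(true(size(Xvi))));
m2vi  = numel(unique(upper));   % = 3*t(d+1) = 18 unique monomials
\end{verbatim}
The count returns $m_{2v_i} = 18 = 3\mathfrak{t}(3)$ unique monomials against $\mathfrak{t}(6) = 21$ upper-triangular entries; this redundancy is exactly what the moment constraints below exploit.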
Therefore, we can add a total number of $m_{\mathrm{mom}_i} = \mathfrak{t}(2d+2) - m_{2v_i} + 1$ \emph{moment constraints}: \beq \begin{array}{ll}\label{eq:momentConstraintssparsecs} \text{\grayout{moment constraints}}: & \inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_{v_i}} = 1, \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_{v_i}} = 0, \\ & j = 1, \ldots, m_{\mathrm{mom}_i}-1, \end{array} \eeq to enforce the repeating monomials in $\M{X}_{v_i}$ to be equal to each other, as well as the leading entry $[\M{X}_{v_i}]_{11} = 1$. Second, we add redundant equality constraints. For each equality constraint $h_k$ in ${\cal H}_i$, we denote by $[\widetilde{\vxx}_i]_{h_k}$ the maximum set of unique monomials such that $h_k \cdot [\widetilde{\vxx}_i]_{h_k}$ only contains monomials in $\M{X}_{v_i}$. Formally, \begin{eqnarray} [\widetilde{\vxx}_i]_{h_k} \triangleq \{\widetilde{\vxx}_i^{\boldsymbol{\alpha}} \mid \mono{ h_k \cdot \widetilde{\vxx}_i^{\boldsymbol{\alpha}} } \subseteq \mono{\M{X}_{v_i}} \}. \label{eq:csliftequalities} \end{eqnarray} Consequently, we can write $h_k \cdot [\widetilde{\vxx}_i]_{h_k} = {\mathbf 0}$ as linear equalities in $\M{X}_{v_i}$: \beq \begin{array}{ll}\label{eq:redundantEqualityConstraintssparsecs} \hspace{-3mm}\!\!\!\text{\grayout{(redundant) equality constraints}}: & \!\!\! \inprod{\M{A}_{\mathrm{req},kj}}{\M{X}_{v_i}} = 0, \\ \!\!\!&\!\!\! k = 1, \ldots, l_{h_i}, \\ \!\!\!&\!\!\! j = 1, \ldots, \abs{[\widetilde{\vxx}_i]_{h_k}},\!\!\!\!\!\!\!\!\! \end{array} \eeq where $l_{h_i}$ is the number of equality constraints in ${\cal H}_i$. Finally, for each inequality constraint $g_j$ in ${\cal H}_i$ ($\deg{g_j} \leq 2$ by Proposition \ref{prop:polynomialExpressibility}), we denote by $[\M{X}_1]_{{\cal I}_j}$ the maximum principal submatrix of $\M{X}_1$ (\emph{i.e.},\xspace the order-one full moment matrix) such that $g_j \cdot [\M{X}_1]_{{\cal I}_j}$ only contains monomials in $\M{X}_{v_i}$. Formally, \begin{eqnarray} & [\M{X}_1]_{{\cal I}_j} \triangleq [\M{X}_1]_{{\cal I}_j,{\cal I}_j}, \text{ with } \nonumber \\ & \hspace{-9mm} {\cal I}_j \! =\! \displaystyle \argmax_{{\cal J}} \{ \abs{{\cal J}} \mid \mono{ g_j\! \cdot\! [\M{X}_1]_{{\cal J},{\cal J}} } \subseteq \mono{\M{X}_{v_i}} \}. \end{eqnarray} As a result, calling $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j}$, which is positive semidefinite by construction, we can write down the following localizing matrices and constraints: \beq \begin{array}{ll}\label{eq:cslocalizemat} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_{g_i} \end{array} \eeq \beq \begin{array}{ll}\label{eq:cslocalizecons} \!\!\!\!\!\!\text{\grayout{{localizing} constraints}}: \!\!\!& \!\!\!\inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_{v_i}} = [\M{X}_{g_j}]_{hk} \\ \!\!\!\!\!\!&\!\!\! j = 1, \ldots, l_{g_i}, \\ \!\!\!\!\!\!&\!\!\! 1 \leq h\leq k \leq \abs{{\cal I}_j}, \end{array} \eeq where the linear constraints simply enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_{v_i}$, and $l_{g_i}$ is the number of inequality constraints in ${\cal H}_i$. \emph{(iv) Adding overlapping constraints}. The extra step that needs to be performed when there are multiple moment matrices is to add constraints that enforce \emph{overlapping entries} to be the same. Clearly, from \eqref{eq:csmomentmat}, one can see that the top-left $2 \times 2$ block of sub-blocks, \emph{i.e.},\xspace $[1 \,;\, \vxx] [1 , \vxx^{\mathsf{T}}]$, is shared among $\M{X}_{v_i}$ for all $i=1,\dots,N$.
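In code, this shared part is simply the leading $(1+d)\times(1+d)$ principal submatrix of every $\M{X}_{v_i}$. A minimal MATLAB sketch (assuming, hypothetically, that the $N$ moment matrices are stored in a cell array \texttt{Xv}):
\begin{verbatim}
% The top-left (1+d)x(1+d) principal submatrix of each X_{v_i}
% holds exactly the monomials [1; x][1, x'], so it must agree
% across i = 1,...,N (Xv is a hypothetical cell array).
ovlp = @(X) X(1:1+d, 1:1+d);
for i = 2:N
    assert(norm(ovlp(Xv{i}) - ovlp(Xv{1}), 'fro') < 1e-9);
end
\end{verbatim}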
Therefore, we add the following overlapping constraints \beq \begin{array}{ll}\label{eq:csoverlapcons} \hspace{-3mm}\!\!\!\text{\grayout{overlapping constraints}}: & \!\!\! [\M{X}_{v_i}]_{\mathrm{ovlp}} = [\M{X}_{v_1}]_{\mathrm{ovlp}}, \\ \!\!\!&\!\!\! i = 2, \dots, N, \end{array} \eeq where $[\M{X}_{v_i}]_{\mathrm{ovlp}}$ refers to the top-left $2\times 2$ blocks of $\M{X}_{v_i}$. Steps (i)-(iv) above lead to the following SDP: \begin{equation}\label{eq:correlativerelax} \begin{split} \hspace{-3mm} \min_{\M{X}} \cbrace{ \sum_{i=1}^N \inprod{\M{C}_i}{\M{X}_{v_i}}\ \middle\vert\ {\cal A}(\M{X})\! =\! \boldsymbol{b}, \M{X} \succeq 0} \\ \text{with }\M{X} = \parentheses{ \begin{array}{c} \M{X}_{v_1},\M{X}_{1,1},\dots,\M{X}_{1, l_{g_1}} \\ \M{X}_{v_2},\M{X}_{2,1},\dots,\M{X}_{2, l_{g_2}} \\ \vdots \\ \M{X}_{v_N},\M{X}_{N,1},\dots,\M{X}_{N, l_{g_N}} \end{array}}, \end{split} \tag{CSSR} \end{equation} where we use $\M{X}_{i,j}$ as shorthand for the $j$-th localizing matrix of the $i$-th moment matrix for notational convenience (\emph{cf.}\xspace \eqref{eq:cslocalizemat}), and ${\cal A}(\M{X})=\boldsymbol{b}$ collects all the linear equality constraints from \eqref{eq:momentConstraintssparsecs}, \eqref{eq:redundantEqualityConstraintssparsecs}, \eqref{eq:cslocalizecons}, and \eqref{eq:csoverlapcons}. Comparing \eqref{eq:correlativerelax} with \eqref{eq:sparserelax}, we see that, although \eqref{eq:correlativerelax} has more positive semidefinite blocks than \eqref{eq:sparserelax}, the size of the blocks becomes much smaller, especially when $N$ is large (\eqref{eq:correlativerelax} has $n_1 = 2d+2$, while \eqref{eq:sparserelax} has $n_1 = (1+d)(1+N)$). Therefore, \eqref{eq:correlativerelax} can be solved much more efficiently using off-the-shelf interior point methods such as \scenario{MOSEK} \cite{mosek}. However, the caveat is that \eqref{eq:correlativerelax} is not tight and cannot provide a certifiably optimal solution to the original \eqref{eq:binaryTLS} problem. {\bf Assembling a dual initialization for {\scenario{STRIDE}}}. Although the \eqref{eq:correlativerelax} relaxation is inexact, it is still useful to solve it because we can use its solution to warmstart {\scenario{STRIDE}}. To do this, let us recall the block structure of \eqref{eq:sparserelax} for the primal variable: \begin{eqnarray} \M{X} = (\M{X}_v, \M{X}_1,\dots,\M{X}_{l_g}). \end{eqnarray} The dual variable $\M{S}$ has the same block structure: \begin{eqnarray} \M{S} = (\M{S}_v, \M{S}_1,\dots,\M{S}_{l_g}), \end{eqnarray} where each block of $\M{S}$ has the same size as the corresponding block of $\M{X}$. With a slight change of notation, let us rewrite the block structure of \eqref{eq:correlativerelax} as: \begin{eqnarray} \M{X}_c = \parentheses{ \begin{array}{c} \M{X}_{v_1},\M{X}_{1,1},\dots,\M{X}_{1, l_g} \\ \M{X}_{v_2},\M{X}_{2,1},\dots,\M{X}_{2, l_g} \\ \vdots \\ \M{X}_{v_N},\M{X}_{N,1},\dots,\M{X}_{N, l_g} \end{array}}, \\ \M{S}_c = \parentheses{ \begin{array}{c} \M{S}_{v_1},\M{S}_{1,1},\dots,\M{S}_{1, l_g} \\ \M{S}_{v_2},\M{S}_{2,1},\dots,\M{S}_{2, l_g} \\ \vdots \\ \M{S}_{v_N},\M{S}_{N,1},\dots,\M{S}_{N, l_g} \end{array}}, \end{eqnarray} where the subscript ``$c$'' indicates correlative, and we have used the fact that $l_{g_i} = l_g$ for all $i=1,\dots,N$ because the only inequality constraints in \eqref{eq:binaryTLS} come from $\vxx \in {\cal X}$ and each ${\cal H}_i$ has an equal number of $l_g$ inequality constraints. Our goal is to generate $\M{S}$, given $\M{S}_c$, for {\scenario{STRIDE}}.
Note that the matrices $\M{S}_v$ ($\M{X}_v$) and $\M{S}_{v_i}$ ($\M{X}_{v_i}$) have different dimensions, so it would be incorrect to simply sum up all $\{\M{S}_{v_i} \}_{i=1}^N$ to get $\M{S}_v$. The correct way to ``assemble'' $\{\M{S}_{v_i} \}_{i=1}^N$ is as follows. For each $\M{S}_{v_i}$, we define $\widebar{\MS}_{v_i}$ so that it satisfies the following polynomial equality \begin{eqnarray} \label{eq:definebarS} \inprod{\widebar{\MS}_{v_i}}{\M{X}_v} \equiv \inprod{\M{S}_{v_i}}{\M{X}_{v_i}} \end{eqnarray} for any $\M{X}_v$ and $\M{X}_{v_i}$ that are \emph{proper} moment matrices (note that both sides of \eqref{eq:definebarS} are polynomials and the equality implies that the coefficients of both polynomials must be equal). In effect, $\widebar{\MS}_{v_i}$ is an all-zero matrix except that its principal submatrix indexed by the monomials $\boldsymbol{v}_i(\widetilde{\vxx}_i)$ is equal to $\M{S}_{v_i}$. Now that $\widebar{\MS}_{v_i}$ has the same size as $\M{X}_v$ and $\M{S}_v$, we can assemble $\M{S}_v$ as \begin{eqnarray} \label{eq:assembleSv} \M{S}_v = \sum_{i=1}^N \widebar{\MS}_{v_i}, \end{eqnarray} where the rationale for the sum can be partially understood from the complementarity condition of \eqref{eq:sdpKKT}. By the same token, for each $\M{S}_{i,j}$, we create $\widebar{\MS}_{i,j}$ such that \begin{eqnarray}\label{eq:definebarS2} \inprod{\widebar{\MS}_{i,j}}{\M{X}_{j}} \equiv \inprod{\M{S}_{i,j}}{\M{X}_{i,j}}, \;\; i = 1,\dots,N, \; j = 1,\dots,l_g, \end{eqnarray} for any $\M{X}_j$ and $\M{X}_{i,j}$ that are proper localizing matrices. Then we assemble $\M{S}_j$ as \begin{eqnarray} \label{eq:assembleSj} \M{S}_j = \sum_{i=1}^N \widebar{\MS}_{i,j}, \quad j=1,\dots,l_g. \end{eqnarray} The rationale for \eqref{eq:definebarS} and \eqref{eq:definebarS2} can be understood from the complementarity condition of the KKT system \eqref{eq:sdpKKT}, and more deeply from the dual perspective of sums-of-squares (SOS) polynomials \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry} (precisely, we are assembling an SOS polynomial in $(\vxx,\boldsymbol{\theta})$ from $N$ SOS polynomials, each involving only the variables $(\vxx,\theta_i)$). Since this is less relevant for the purpose of this paper (and it is only used for warmstart), we only state the assembling procedure as in \eqref{eq:assembleSv} and \eqref{eq:assembleSj} without diving too deep into the theory of sums of squares. The interested reader is encouraged to refer to the dual SOS perspective in \cite{lasserre10book-momentsOpt}. \subsubsection{Semi-proximal ADMM} \label{sec:app-admmplus} After obtaining $\M{X}^0$ from primal heuristics such as {\scenario{GNC}} \cite{Yang20ral-gnc} or {\scenario{RANSAC}} \cite{Fischler81}, and $\M{S}^0$ from solving \eqref{eq:correlativerelax} and performing the assembly procedure in Section \ref{sec:app-cssr}, we use the {semi-proximal alternating direction method of multipliers} ({\scenario{ADMM+}}) proposed in \cite{Sun15siopt-admmplus} to refine both the primal and the dual initializations $(\M{X}^0,\M{S}^0)$. The full {\scenario{ADMM+}} algorithm, for solving a standard SDP \eqref{eq:primalSDP}-\eqref{eq:dualSDP}, is presented in Algorithm~\ref{alg:admmplus}.
As we can see, at each iteration of {\scenario{ADMM+}}, the major computation involves solving a linear system (\emph{cf.}\xspace \eqref{eq:admmpluslinsolve1} and \eqref{eq:admmpluslinsolve2}) and performing a projection onto the product of positive semidefinite cones ${\cal K}$ (\emph{cf.}\xspace \eqref{eq:admmplusprojpsd}). Since ${\cal A}$ is typically sparse in our examples, the Cholesky factorization of ${\cal A}\calA^{*}$ can be done efficiently and needs to be performed only once. {\scenario{ADMM+}} is a globally convergent algorithm for solving the SDP \eqref{eq:primalSDP}-\eqref{eq:dualSDP} and the interested reader can refer to \cite{Sun15siopt-admmplus} for a detailed study. Notably, \cite{Sun15siopt-admmplus} shows that {\scenario{ADMM+}} is typically $2$ to $3$ times faster than a conventional ADMM. In our implementation, we use the function \texttt{admmplus} in {\scenario{SDPNAL+}} \cite{Yang2015mpc-sdpnalplus} to refine $(\M{X}^0,\M{S}^0)$ and warmstart {\scenario{STRIDE}}. Although one can directly pass $(\M{X}^0,\M{S}^0)$ to {\scenario{STRIDE}}, empirically we found it beneficial to refine $(\M{X}^0,\M{S}^0)$ using {\scenario{ADMM+}}, because the refined initial points have higher quality, which promotes the convergence of {\scenario{STRIDE}}. In our experiments, we run {\scenario{ADMM+}} for a maximum of $20,000$ iterations, or until $\max\{ \eta_p,\eta_d \}$ is below a threshold (\emph{e.g.},\xspace $1\ee{-6}$). \input{sections/app-alg-admmplus} \subsection{Local Search and Nonlinear Programming} \label{sec:app-local-search} Recall that the local search step \eqref{eq:nlpinlocalsearch} applies a nonlinear programming (NLP) algorithm to solve the \eqref{eq:binaryTLS} problem given an initial point. Since \eqref{eq:binaryTLS} is a polynomial optimization problem, it is straightforward to implement NLP using {\texttt{fmincon}} in Matlab. However, here we show that it is possible to exploit the smooth manifold structure of \eqref{eq:binaryTLS} and solve it more efficiently with Riemannian optimization \cite{Absil07book} (\emph{e.g.},\xspace using {\scenario{Manopt}} \cite{manopt}). First, we can model the vector of binary variables $\boldsymbol{\theta}$ as an \emph{oblique manifold} of size $N \times 1$ (an oblique manifold contains matrices with unit-norm rows). Second, from Examples \ref{ex:singlerotation}-\ref{ex:category}, we know the geometric model $\vxx$ contains 2D and 3D rotations, which are both smooth manifolds. However, $\vxx$ can also contain the translation $\boldsymbol{t}$ and the shape parameters $\boldsymbol{c}$, which do not live on smooth manifolds. Fortunately, we can drop some constraints so that they both live on smooth manifolds. For example, in Examples \ref{ex:pointcloud}, \ref{ex:mesh}, and \ref{ex:category}, we can relax $\boldsymbol{t} \in \calB^3_T$ to $\boldsymbol{t} \in \Real{3}$, with the rationale that when the SDP iterate $\M{X}^k$ is close to optimal, $\norm{\boldsymbol{t}} \leq T$ should be naturally satisfied (from rounding \eqref{eq:roundingrestate}) even without the explicit constraint. Similarly, we relax $\boldsymbol{t} \in \calB^3_T \cap {\cal C}_\alpha$ in Example \ref{ex:absolutepose} to $\boldsymbol{t} \in \Real{3}$, and relax $\boldsymbol{c} \in \mathbb{R}^{K}_{+} \cap \calB^K_T$ in Example \ref{ex:category} to $\boldsymbol{c} \in \mathbb{R}^{K}_{++}$ (vectors with strictly positive entries form a smooth manifold).
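As an illustration, a minimal {\scenario{Manopt}} sketch of this manifold modeling for point cloud registration (Example \ref{ex:pointcloud}) follows; the cost handle \texttt{tls\_cost} and the data \texttt{meas} are placeholders for the \eqref{eq:binaryTLS} objective and the measurements, and for brevity we rely on {\scenario{Manopt}}'s numerical gradient fallback (a real implementation would supply the Riemannian gradient):
\begin{verbatim}
% Sketch (assumes Manopt is on the path; names are illustrative).
elems.R     = rotationsfactory(3);     % R in SO(3)
elems.t     = euclideanfactory(3, 1);  % t in R^3 (ball dropped)
elems.theta = obliquefactory(1, N);    % 1xN, unit-norm columns,
                                       % i.e., theta_i in {-1,+1}
problem.M    = productmanifold(elems);
problem.cost = @(X) tls_cost(X.R, X.t, X.theta, meas);
% trustregions() warns and falls back to a numerical gradient
% approximation when none is given.
Xopt = trustregions(problem, Xinit);
\end{verbatim}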
Note that these modifications will not affect the global convergence of {\scenario{STRIDE}} because \eqref{eq:accept-reject} will reject the NLP solution if it violates the constraints that have been dropped. \section{Introduction} \label{sec:introduction} \IEEEPARstart{G}EOMETRIC perception, the task of estimating unknown geometric models ({\emph{e.g.},\xspace} object poses, rotations, 3D structure, robot trajectory) from sensor measurements ({\emph{e.g.},\xspace} images, point clouds, relative poses), is a fundamental problem in computer vision and robotics. It finds extensive applications in object detection and localization \cite{Yang19rss-teaser}, motion estimation and 3D reconstruction \cite{Choi15cvpr-robustReconstruction}, simultaneous localization and mapping (SLAM) \cite{Rosen19IJRR-sesync} and structure from motion (SfM)~\cite{Schonberger16cvpr-SfMRevisited}, virtual and augmented reality \cite{Klein07ismar-PTAM}, and medical imaging \cite{Audette00mia-surveyMedical}, to name a few. A modern machine perception pipeline includes a \emph{perception front-end} that extracts, describes, and matches relevant features from raw sensor data, and a \emph{perception back-end} that estimates the geometric models of interest given the putative feature matches. In practice, due to various sources of imperfections and uncertainties ({\emph{e.g.},\xspace} sensor failures, incorrect detections and matchings by hand-crafted or deep-learned features), a large number of \emph{outliers} ---measurements that convey little or no information about the underlying geometric models--- are generated by the front-end. Therefore, designing an \emph{outlier-robust} back-end that can tolerate large amounts of outliers, also known as \emph{robust fitting} \cite{Chin17slcv-maximumConsensusAdvances} in computer vision and \emph{robust state estimation} \cite{Barfoot17book} in robotics, has been a longstanding quest in both communities. Unfortunately, from a theoretical standpoint, performing robust estimation by discerning \emph{inliers} ({\emph{i.e.},\xspace} the correct and useful measurements) from outliers is known to be NP-hard and \emph{inapproximable} due to its combinatorial nature \cite{Chin17slcv-maximumConsensusAdvances,Antonante20TRO-outlier,Enqvist15IJCV-tractableRobustEstimation,Chin18eccv-robustFitting}. Consequently, existing algorithms for outlier-robust estimation are mostly divided into \emph{fast heuristics}, {\emph{e.g.},\xspace} \scenario{RANSAC} \cite{Fischler81,Chum03-LORANSAC,Barath18-gcRANSAC} and graduated non-convexity (\scenario{GNC}) \cite{Yang20ral-gnc,Black96ijcv-unification,Blake1987book-visualReconstruction}, which are efficient but offer no optimality guarantees, and \emph{global solvers}, {\emph{e.g.},\xspace} Branch-and-Bound \cite{Yang16pami-goicp} and mixed-integer programming \cite{Izatt17isrr-MIPregistration,Li09cvpr-robustFitting}, which guarantee optimality but run in worst-case exponential time. Although in some cases it is acceptable to trade off \emph{optimality} (hence robustness) for \emph{efficiency}, real-time safety-critical applications ---such as autonomous driving and space robotics--- pose high demands for \emph{efficient global optimality}.
The conflict between the {fundamental intractability} of robust estimation and the demand for computational efficiency calls for a paradigm shift: since it is impossible to solve all robust estimation problems in polynomial time, we argue that a useful goal is to design algorithms that perform well on typical instances and are able to \emph{certify} the optimality of the resulting estimates, but at the same time can declare ``failure'' on worst-case instances rather than blindly returning an incorrect estimate. Inspired by related works \cite{Bandeira16crm,Yang20tro-teaser}, we formalize the notion of a \emph{certifiable algorithm} below. \begin{definition}[Certifiable Algorithm]\label{def:certifiablealg} Given an optimization problem $\mathbb{P}(\mathbb{D})$ with input data $\mathbb{D}$, an algorithm $\mathbb{A}$ is said to be {certifiable} if (i) $\mathbb{A}$ runs in polynomial time; and after solving $\mathbb{P}(\mathbb{D})$, $\mathbb{A}$ \revise{(ii)} either returns the global optimizer of $\mathbb{P}$ together with a certificate of optimality \revise{for common instances of $\mathbb{D}$ (empirically or provably)}, or \revise{(iii)} fails to do so \revise{for the worst instances of $\mathbb{D}$} but provides a measure of suboptimality ({\emph{e.g.},\xspace} a bound on the objective value, or the distance to the global optimizer). \end{definition} \revise{A certifiable algorithm respects the theoretical intractability of robust estimation \cite{Chin17slcv-maximumConsensusAdvances,Antonante20TRO-outlier} in that it does \emph{not} globally optimize $\mathbb{P}(\mathbb{D})$ for \emph{all} instances of $\mathbb{D}$ and it is allowed to fail in the worst cases (\emph{cf.}\xspace (iii) of Definition \ref{def:certifiablealg}).} \revise{However}, our notion of a certifiable algorithm is stricter than that of \cite{Yang20tro-teaser}, as it requires $\mathbb{A}$ to solve $\mathbb{P}(\mathbb{D})$ to global optimality for common $\mathbb{D}$ \revise{(at least empirically, \emph{cf.}\xspace (ii) of Definition \ref{def:certifiablealg})}. This requirement rules out algorithms that seldom attain global optimality but provide suboptimality guarantees (\emph{e.g.},\xspace approximation algorithms \cite{Vazirani13book-approximation}). \emph{Semidefinite relaxations} are a natural choice for designing certifiable algorithms. If the problem $\mathbb{P}$ is a \emph{polynomial optimization problem} (POP, {\emph{i.e.},\xspace} both its objective and constraints are polynomials), then there exists a standard semidefinite relaxation \emph{hierarchy}, known as Lasserre's hierarchy \cite{Lasserre01siopt-lasserrehierarchy}, that relaxes $\mathbb{P}$ into a hierarchy of {convex} semidefinite programs (SDPs) of increasing size. Each relaxation in this hierarchy can be solved in polynomial time~\cite{todd1998nesterov} and provides a measure of {suboptimality} for the resulting estimate.
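As a toy illustration (ours, for intuition only): the scalar POP $\min_{\theta} \{\theta \mid \theta^2 = 1\}$ has minimum $-1$. Its order-one moment relaxation substitutes a moment matrix for the monomials of $[1\,;\,\theta]$:
\begin{equation*}
\min_{\M{X} \succeq 0} \;\; \M{X}_{12} \quad \text{subject to} \quad \M{X} = \left[\begin{array}{cc} 1 & \M{X}_{12} \\ \M{X}_{12} & \M{X}_{22} \end{array}\right], \;\; \M{X}_{22} = 1,
\end{equation*}
where $\M{X}$ plays the role of $[1\,;\,\theta][1 , \theta]$ and the constraint $\M{X}_{22}=1$ encodes $\theta^2=1$. Positive semidefiniteness forces $\M{X}_{12}^2 \leq \M{X}_{11}\M{X}_{22} = 1$, so the SDP optimum is $-1$, attained at the rank-one matrix $[1\,;\,-1][1 , -1]$, from which the minimizer $\theta = -1$ can be read off: the relaxation is exact already at the first order.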
Moreover, under mild technical conditions, the suboptimality of these relaxations becomes zero when their size is large enough, in which case we say the relaxation is \emph{exact}, or \emph{tight}.\footnote{ \revise{Lasserre's hierarchy respects the worst-case NP-hardness of POPs because one may need an SDP relaxation whose size grows exponentially with the dimension of the POP to attain certifiable optimality.}} We provide an accessible introduction to POPs and their relaxations in Section~\ref{sec:preliminaries}. Semidefinite relaxations have been successfully used to design certifiable algorithms for many geometric perception problems. The pioneering work by Kahl and Henrion \cite{Kahl07IJCV-GlobalOptGeometricReconstruction} applies Lasserre's hierarchy to solve several early perception problems including camera resectioning, homography estimation, and fundamental matrix estimation. Since then, certifiable algorithms have been designed for modern applications such as pose graph optimization \cite{Carlone16TRO-planarPGO,Rosen19IJRR-sesync}, rotation averaging \cite{Eriksson18cvpr-strongDuality,Fredriksson12accv}, triangulation \cite{Cifuentes21SIMAA-rankdeficient,Aholt12eccv-qcqptriangulation}, 3D registration \cite{Briales17cvpr-registration,Maron16tog-PMSDP,Chaudhury15Jopt-multiplePointCloudRegistration,Iglesias20cvpr-PSRGlobalOptimality}, absolute pose estimation \cite{Agostinho2019arXiv-cvxpnpl}, relative pose estimation \cite{Briales18cvpr-global2view,Zhao20pami-relativepose,Garcia21IVC-certifiablerelativepose}, hand-eye calibration \cite{Heller14icra-handeyePOP,Giamou19ral-SDPExtrinsicCalibration,Wise20MFI-certifiablyhandeye}, and category-level object perception \cite{Yang20cvpr-perfectshape,Shi21rss-pace}. Although the original formulations of the problems mentioned above are {nonconvex}, semidefinite relaxations at the \emph{lowest} relaxation order in the hierarchy are shown to be exact in practical applications. Since the SDP resulting from the lowest relaxation order can usually be solved efficiently ({\emph{e.g.},\xspace} below one second) by off-the-shelf SDP solvers ({\emph{e.g.},\xspace} \scenario{SDPT3} \cite{tutuncu03MP-SDPT3}, \scenario{MOSEK}~\cite{mosek}) or the Burer-Monteiro (B-M) low-rank factorization method \cite{Burer03mp,Boumal16nips,Rosen20wafr-scalableLowRankSDP}, both efficiency and (certifiable) optimality can be obtained. However, these successful examples of certifiable algorithms are underpinned by the restrictive assumption that \emph{the measurements are free of outliers}, which seldom holds in practice. Heuristics like \scenario{RANSAC} and \scenario{GNC} are typically used to filter out outliers, but it is precisely the use of such heuristics that breaks the optimality guarantee and makes the system prone to undetected failures. Although several works have attempted to design certifiable algorithms for \emph{outlier-robust} geometric perception~\cite{Wang13ima,Lajoie19ral-DCGM,Carlone18ral-convexHuber,Yang19iccv-quasar,Speciale17cvpr-MaxconLMI}, most approaches (i) are problem-specific, (ii) cannot tolerate high outlier rates ({\emph{e.g.},\xspace} above $70\%$) \cite{Wang13ima,Lajoie19ral-DCGM,Carlone18ral-convexHuber}, or (iii) become too large to be solved by existing SDP solvers \cite{Yang19iccv-quasar}. {\bf Contributions}.
In this paper, we propose a {general} and scalable\xspace framework for designing certifiable outlier-robust estimation algorithms that are empirically \emph{exact} with up to {$90\%$ outliers}, and present a fast SDP solver that can solve the tight relaxations at an unprecedented scale. We now describe our four contributions in detail. {\bf (I) {Robust estimation as polynomial optimization} (Section~\ref{sec:robustandpop})}. We investigate outlier-robust estimation with common {robust cost functions}, including truncated least squares (TLS), maximum consensus, Geman-McClure, Tukey's Biweight, L1, Huber, and Barron's adaptive kernel \cite{Barron19cvpr-adaptRobustLoss}. Our first contribution is to show that robust estimation using these costs can be equivalently reformulated as POPs, even though the robust costs themselves are not polynomials. This result is established by introducing additional variables and recasting the original costs as polynomials. {\bf (II) {A sparse, but exact, semidefinite relaxation} (Section~\ref{sec:sdprelax})}. With the POP reformulation, it is tempting to apply the standard Lasserre's hierarchy to develop certifiable algorithms for robust estimation. Nevertheless, due to the additional variables (one or two variables per measurement), even for small estimation problems with fewer than 20 measurements, the \emph{lowest-order relaxation} can already lead to SDPs that are too large for existing SDP solvers. Therefore, our second contribution is to focus on the TLS cost and show that it allows us to exploit \emph{term sparsity} of the polynomials in the POP and design a much smaller semidefinite relaxation using \emph{basis reduction}. \revise{Although exploiting sparsity of POPs is a known idea in applied mathematics \cite{Wang21SIOPT-tssos,Lasserre06siopt-correlativesparsity}, our method is more effective than existing general-purpose techniques since it leverages the special structure of our perception problems.} Compared to the standard Lasserre's hierarchy, our sparse semidefinite relaxation leads to a $100$-fold reduction in the size of the SDP. Unfortunately, even with our sparse relaxation, solving the SDP using off-the-shelf SDP solvers (\emph{e.g.},\xspace~\scenario{MOSEK}) is still too slow, and we can only demonstrate empirical exactness of our relaxation on small estimation problems ({\emph{e.g.},\xspace} $30$ measurements). {\bf (III) {A scalable and robust SDP solver} (Section~\ref{sec:scalableopt})}. The limitations of existing SDP solvers lead to our third contribution, a scalable SDP solver that can \emph{certifiably optimally} solve robust estimation problems of moderate but realistic sizes ({\emph{e.g.},\xspace} $100$ measurements). Our solver, called \emph{SpecTrahedral pRojected gradIent Descent along vErtices} (\scenario{STRIDE}), blends fast \emph{local search} on the nonconvex POP with \emph{global descent} on the convex SDP. Specifically, {\scenario{STRIDE}} follows a globally convergent trajectory driven by a \emph{projected gradient descent method} for solving the SDP, while simultaneously probing long, but \emph{safeguarded}, \emph{rank-one} ``strides'', generated by fast nonlinear programming algorithms on the POP, to seek rapid descent. Notably, fast heuristics such as {\scenario{RANSAC}} and {\scenario{GNC}} can be readily used to bootstrap {\scenario{STRIDE}}.
Particularly, when {\scenario{RANSAC}} and {\scenario{GNC}} succeed in finding the globally optimal solution (which happens frequently in the low-outlier regime), {\scenario{STRIDE}} serves to certify global optimality. Otherwise, when fast heuristics converge to local minima, {\scenario{STRIDE}} detects suboptimality and escapes such minima. {\bf (IV) {Evaluation on six geometric perception problems} (Section~\ref{sec:experiments})}. Our last contribution is to apply our framework and solver to six perception problems: single and multiple rotation averaging, point cloud and mesh registration, absolute pose estimation, and category-level object perception. With extensive experiments on synthetic and real datasets, we demonstrate that (i) our sparse SDP relaxation is exact in the presence of up to $60\%$--$90\%$ outliers, (ii) while still being far from real-time, {\scenario{STRIDE}} is up to 100 times faster than existing SDP solvers on medium-scale problems, and is the only solver that can solve large-scale SDPs with hundreds of thousands of constraints to high accuracy, and (iii) {\scenario{STRIDE}} safeguards existing fast heuristics, \emph{i.e.},\xspace it certifies global optimality if the heuristic estimates are already optimal, or detects and escapes local minima otherwise. We showcase real examples of {\scenario{STRIDE}} \emph{certifiably} performing scan matching on \scenario{3DMatch} \cite{Zeng17cvpr-3dmatch}, mesh registration on \scenario{HomebrewedDB} \cite{Kaskman19-homebrewedDB}, satellite pose estimation on \scenario{SPEED} \cite{Sharma19arxiv-SPEED}, and vehicle pose and shape estimation on \scenario{ApolloScape} \cite{Wang19pami-apolloscape}. {\bf Novelty with respect to~\cite{Yang20neurips-onering,Yang21arxiv-stride}}. This paper extends and unifies the contributions presented in our previous conference papers~\cite{Yang20neurips-onering,Yang21arxiv-stride}. More in detail, we expand on~\cite{Yang20neurips-onering} by (i)~showing that other robust costs (beyond TLS) can be rephrased as POPs, (ii)~providing a more extensive comparison between (and discussion about) Lasserre's hierarchy and the proposed sparse relaxations, (iii)~going beyond certification (in this paper we propose a \emph{solver}, rather than a certification approach), and (iv)~considering a broader set of applications. We also extend~\cite{Yang21arxiv-stride}, which introduced \scenario{STRIDE}, by (i)~generalizing \scenario{STRIDE} to work on multi-block SDPs arising from the proposed relaxations, (ii)~tailoring \scenario{STRIDE} to use fast heuristics (\emph{e.g.},\xspace~\scenario{RANSAC} or \scenario{GNC}) as a warmstart, and (iii)~testing \scenario{STRIDE} on a broader range of problems. We remark that the main goal of this paper is \emph{not} to produce a method that outperforms problem-specific state-of-the-art algorithms in terms of robustness or efficiency. Our key contribution is instead to show that a broad class of robust estimation problems in geometric perception can be solved to certifiable optimality in polynomial time (despite their hardness), and to lay out a scalable framework to build SDP relaxations, which we believe ---with further advancement of SDP solvers--- will eventually run in real time. \section{{\sf STRIDE}: Scalable SDP Solver} \label{sec:scalableopt} The sparse relaxation \eqref{eq:sparserelax} leads to an SDP that can still have $m$ as large as hundreds of thousands when $N$ is large (\emph{cf.}\xspace Fig.~\ref{fig:LASvsSSR}).
Therefore, with IPMs such as \scenario{MOSEK}, the scale at which \eqref{eq:sparserelax} can be solved is still quite limited (recall that IPMs can typically handle $m$ up to $50,000$). This section presents \scenario{STRIDE} (\emph{SpecTrahedral pRojected gradIent Descent along vErtices}), an SDP solver that goes far beyond IPMs and enables solving~\eqref{eq:sparserelax} on problems of moderate but realistic size. {\bf Intuition}. The key insight behind {\scenario{STRIDE}} comes from Theorem~\ref{thm:sparserelaxtls}(ii): assuming the relaxation \eqref{eq:sparserelax} is exact, the SDP \eqref{eq:primalSDP} admits \emph{rank-one} optimal solutions $\MX^\star_v = \boldsymbol{v}(\tldvxx^\star)\boldsymbol{v}(\tldvxx^\star)^{\mathsf{T}}$, where $\tldvxx^\star = (\vxx^{\star},{\vtheta}^{\star})$ corresponds to the global minimizer of \eqref{eq:binaryTLS}. Therefore, {\scenario{STRIDE}} tries to move between rank-one matrices in the feasible set of the SDP (these are the \emph{vertices} of the spectrahedron \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry}), searching for a globally optimal solution. More in detail, {\scenario{STRIDE}} employs a globally convergent \revise{\emph{projected gradient descent} (PGD)} method as the backbone for solving the convex SDP \eqref{eq:sparserelax}, but blends \emph{short} \revise{PGD} steps with \emph{long} rank-one steps generated by fast NLP algorithms on the POP~\eqref{eq:binaryTLS}. Intuitively, the long rank-one steps circumvent the slow convergence of \revise{PGD}, while the \revise{PGD} backbone allows escaping the local minima in which the NLP algorithm can get stuck. With this insight, we now develop the details of \scenario{STRIDE}. {\bf Short {PGD} step}. The backbone of {\scenario{STRIDE}} implements a {PGD} for solving the primal SDP \eqref{eq:primalSDP}. Given an initial point $\M{X}^0 \in \mathbb{X}$, the $k$-th ($k \geq 0$) iteration of {PGD} performs \begin{equation}\label{eq:pgd} \M{X}^{k+1} = \Pi_{\calF_{\mathrm{P}}} \parentheses{\M{X}^k - \sigma_k \M{C}}, \tag{PGD} \end{equation} for a given constant $\sigma_k > 0$, where $\Pi_{\calF_{\mathrm{P}}}$ denotes the metric projection onto the spectrahedron $\calF_{\mathrm{P}} \triangleq \{\M{X} \in \mathbb{X} \mid {\cal A}(\M{X})=\boldsymbol{b}, \M{X} \succeq 0 \}$ (\emph{i.e.},\xspace the feasible set of \eqref{eq:primalSDP}). In words, the \eqref{eq:pgd} step first moves along the direction of the negative gradient for some step size $\sigma_k$ (recall that the objective of \eqref{eq:primalSDP} is $\inprod{\M{C}}{\M{X}}$ with a constant gradient $\M{C}$), and then projects the new point $\M{X}^k - \sigma_k \M{C}$ onto the feasible set $\calF_{\mathrm{P}}$. It is well known that \eqref{eq:pgd} is guaranteed to converge to an optimal solution of \eqref{eq:primalSDP}, provided that $\sigma_{k+1} \geq \sigma_{k}, \forall k \geq 0$ (see \cite{Jiang12siopt-PGMSDP,Beck09SIIS-FISTA,Bertsekas99book-nlp}). In Supplementary Material\xspace, we show that the Lagrangian dual of the projection subproblem in \eqref{eq:pgd} can be reformulated as a \emph{smooth unconstrained optimization}, which allows solving~\eqref{eq:pgd} for large-scale problems using a limited-memory BFGS (L-BFGS) algorithm. \revise{For this reason, in \eqref{eq:strideprojection} we also output the dual optimal solution.} {\bf Long rank-one step}. The issue with \eqref{eq:pgd} is that the convergence can be slow, particularly when the optimal $\MX^\star$ is rank-one and degenerate (as in \eqref{eq:sparserelax}).
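For concreteness, one \eqref{eq:pgd} iteration amounts to the following MATLAB sketch, where \texttt{proj\_spectrahedron} is a placeholder for the metric projection $\Pi_{\calF_{\mathrm{P}}}$ (computed in practice via L-BFGS on its dual, as discussed above); the elementary PSD-cone projection that such a routine builds on is shown explicitly:
\begin{verbatim}
% Sketch of one (PGD) iteration; proj_spectrahedron is a
% placeholder for the projection onto F_P = {A(X)=b, X >= 0}.
function X = pgd_step(X, C, sigma)
    X = proj_spectrahedron(X - sigma * C);
end

% Euclidean projection onto the PSD cone: symmetrize, then
% clip negative eigenvalues to zero.
function Xp = proj_psd(X)
    [V, D] = eig((X + X') / 2);
    Xp = V * max(D, 0) * V';
end
\end{verbatim}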
Here we propose to exploit the low-rankness of $\MX^\star$ and accelerate the convergence by generating long rank-one steps. Towards this goal, calling $\overline{\MX}^{k+1} := \Pi_{\calF_{\mathrm{P}}}(\M{X}^k - \sigma_k \M{C})$, and $\overline{\MX}_v^{k+1} \in \psd{n_1}$ the first block in $\overline{\MX}^{k+1}$ (\emph{i.e.},\xspace the moment matrix), we compute a potentially better rank-one iterate via three steps: \begin{enumerate}[label=(\roman*)] \item\label{item:rounding} {\bf (Rounding)}. Let $\overline{\MX}^{k+1}_v = \sum_{i=1}^{n_1} \lambda_i \boldsymbol{v}_i \boldsymbol{v}_i^{\mathsf{T}}$ be the spectral decomposition of $\overline{\MX}^{k+1}_v$ with $\lambda_1 \geq \dots \geq \lambda_{n_1}$ in nonincreasing order. Compute $r$ hypotheses from the leading $r$ eigenvectors \begin{eqnarray} \label{eq:roundingrestate} (\widebar{\vxx}^{k+1}_{i},\widebar{\vtheta}^{k+1}_i) = \texttt{rounding}(\boldsymbol{v}_i), \quad i = 1,\dots,r, \end{eqnarray} where the function~\texttt{rounding}~is defined as in \eqref{eq:rounding}. \item {\bf (Local search)}. Apply a local search method for \eqref{eq:binaryTLS} using NLP with initial point chosen as $ (\widebar{\vxx}^{k+1}_{i},\widebar{\vtheta}^{k+1}_i) $ for each $i=1,\dots,r$. Denoting the solution of each local search as $ (\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}_i^{k+1}) $, with associated objective value $p(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}_i^{k+1})$, choose the best local solution with \emph{minimum} objective value. Formally, \begin{subequations} \begin{eqnarray} \hspace{-3mm} (\widehat{\vxx}_i^{k+1},\widehat{\vtheta}_i^{k+1}) =&\!\!\!\! \texttt{nlp}(\widebar{\vxx}^{k+1}_i,\widebar{\vtheta}^{k+1}_i),\ \ i=1,\dots,r, \label{eq:nlpinlocalsearch}\\ \hspace{-3mm} (\widehat{\vxx}^{k+1},\widehat{\vtheta}^{k+1}) =&\!\!\!\! \displaystyle \argmin_{(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}^{k+1}_i), i=1,\dots,r} p(\widehat{\vxx}^{k+1}_i,\widehat{\vtheta}^{k+1}_i). \end{eqnarray} \end{subequations} \item\label{item:lifting} {\bf (Lifting)}. Perform a rank-one lifting of the best local solution $\widetilde{\vxx}^{k+1} \triangleq (\widehat{\vxx}^{k+1},\widehat{\vtheta}^{k+1}) $ \begin{subequations}\label{eq:lifting} \begin{eqnarray} \hspace{-3mm} \widehat{\MX}^{k+1}_v =&\!\!\! \boldsymbol{v}(\widetilde{\vxx}^{k+1}) \boldsymbol{v}(\widetilde{\vxx}^{k+1})^{\mathsf{T}}, \ \ (\emph{cf.}\xspace\ \eqref{eq:sparsebasis}) \\ \hspace{-3mm} \widehat{\MX}^{k+1}_{g_j} =&\!\!\! \M{X}^{k+1}_{g_j} (\widetilde{\vxx}^{k+1}), \ \ j = 1,\dots,l_g, \ \ (\emph{cf.}\xspace\ \eqref{eq:liftPSDsubblks}) \\ \hspace{-3mm} \widehat{\MX}^{k+1} = &\!\!\! (\widehat{\MX}^{k+1}_v,\dots,\widehat{\MX}^{k+1}_{g_j},\dots)_{j=1}^{l_g}, \end{eqnarray} \end{subequations} where $\widehat{\MX}^{k+1},\widehat{\MX}^{k+1}_{g_j},j=1,\dots,l_g$ are computed by \emph{evaluating} the moment and localizing matrices at $\widetilde{\vxx}^{k+1}$. \end{enumerate} {\bf Taking the right step}. Now we are given two candidates for the next iteration, namely the short {PGD} step $\overline{\MX}^{k+1}$ (generated by computing the projection of $\M{X}^k - \sigma_k \M{C}$ onto $\calF_{\mathrm{P}}$) and the long rank-one step $ \widehat{\MX}^{k+1} $ (obtained by rounding, local search, and lifting). Which one should we choose to be the next iterate $\M{X}^{k+1}$ such that the entire sequence $\{\M{X}^k\}$ is globally convergent? The answer to this question is quite natural ---we accept $\widehat{\MX}^{k+1}$ if and only if it attains a strictly lower cost than $\overline{\MX}^{k+1}$ (\emph{cf.}\xspace eq.
\eqref{eq:accept-reject}). \input{sections/alg-stride} The full {\scenario{STRIDE}} algorithm is presented in Algorithm~\ref{alg-iPGMnlp}. \begin{theorem}[Global Convergence]\label{thm:strideconverge} Suppose the Slater condition for \eqref{eq:primalSDP} holds and $\{ (\M{X}^k,\boldsymbol{y}^k,\M{S}^k) \}$ is generated by {\scenario{STRIDE}}; then $\{f(\M{X}^k) \}$ converges to $f^\star$, where $f^\star$ is the optimum of \eqref{eq:primalSDP}. \end{theorem} While we provide the proof in Supplementary Material\xspace, the intuition is that eq. \eqref{eq:accept-reject} ensures the rank-one ``strides'' are accepted only if they strictly decrease the objective value. Therefore, either the last rank-one point is already optimal, or ---if it is suboptimal--- it still provides an improved reinitialization for \eqref{eq:pgd} to globally converge to the optimal $\MX^\star$. Note that the \revise{PGD} backbone allows {\scenario{STRIDE}} to converge even when the optimal solution has rank higher than one. \revise{In \cite{Yang21arxiv-stride}, we show it is also possible to \emph{accelerate} and \emph{generalize} the \eqref{eq:pgd} backbone using \emph{proximal gradient methods}.} Although {\scenario{STRIDE}} is a globally convergent algorithm for solving the primal SDP \eqref{eq:primalSDP}, the initial guess $(\M{X}^0,\M{S}^0,\boldsymbol{y}^0)$ can have a significant impact on its convergence speed. The next remark states that existing fast heuristics for robust perception can be readily incorporated into {\scenario{STRIDE}}. \begin{remark}[Fast Heuristics and Certification] \label{remark:fastheuristics} Existing fast heuristics for robust estimation, such as graduated non-convexity (\scenario{GNC}) \cite{Yang20ral-gnc,Black96ijcv-unification} and {\scenario{RANSAC}} \cite{Fischler81}, can typically return the \emph{globally optimal} solution to \eqref{eq:binaryTLS} when the measurement set ${\cal Z}$ contains a low or medium fraction of outliers (\emph{e.g.},\xspace below $70\%$). Therefore, we use {\scenario{GNC}} or {\scenario{RANSAC}} to generate an initial guess for the SDP relaxation \eqref{eq:sparserelax}. Formally, calling $(\widehat{\vxx},\widehat{\vtheta})$ the candidate solution obtained by solving \eqref{eq:binaryTLS} using {\scenario{GNC}} or {\scenario{RANSAC}}, we generate $\M{X}^0$ (for {\scenario{STRIDE}}) by applying the lifting procedure in \eqref{eq:lifting} to $(\widehat{\vxx},\widehat{\vtheta})$. Notably, when $(\widehat{\vxx},\widehat{\vtheta})$ is already globally optimal to \eqref{eq:binaryTLS} (hence $\M{X}^0$ is an optimizer of \eqref{eq:sparserelax} as long as the relaxation is exact), {\scenario{STRIDE}} merely finds a \emph{certificate of optimality} for $(\widehat{\vxx},\widehat{\vtheta})$ by performing one step of \eqref{eq:pgd} (\emph{cf.}\xspace \eqref{eq:strideprojection} in Algorithm \ref{alg-iPGMnlp}). \end{remark} Fast heuristics provide a good \emph{primal} initialization for {\scenario{STRIDE}}. However, little is known about how to obtain a good \emph{dual} initialization. In {Supplementary Material\xspace}, we describe a dual initialization procedure that exploits \emph{correlative sparsity} \cite{Wang21siopt-chordaltssos} and leverages a fast first-order algorithm called \emph{semi-proximal ADMM} (also known as {\scenario{ADMM+}}) \cite{Sun15siopt-admmplus}.
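Since both the warmstart in Remark \ref{remark:fastheuristics} and the long rank-one steps rely on the rounding and lifting maps, we sketch them below in MATLAB for the sparse basis $\boldsymbol{v}(\widetilde{\vxx}) = [1 \,;\, \vxx \,;\, \boldsymbol{\theta} \,;\, \theta_1 \vxx \,;\, \dots \,;\, \theta_N \vxx]$; \texttt{proj\_X} is a hypothetical problem-specific projection onto ${\cal X}$ (\emph{e.g.},\xspace an SVD-based projection onto $\mathrm{SO}(3)$ for rotations):
\begin{verbatim}
% Sketch of rounding (cf. eq:rounding): extract (x, theta) from
% the leading eigenvector of the moment matrix Xv.
function [x, theta] = round_hypothesis(Xv, d, N)
    [v, ~] = eigs(Xv, 1);          % leading eigenvector
    v = v / v(1);                  % normalize leading entry to 1
    x     = proj_X(v(2 : d+1));    % geometric variables
    theta = sign(v(d+2 : d+1+N));  % binary variables in {-1,+1}
end

% Sketch of lifting (cf. eq:lifting): evaluate the rank-one
% moment matrix at (x, theta).
function Xv = lift(x, theta)
    vlift = [1; x; theta(:); kron(theta(:), x)];
    Xv    = vlift * vlift';
end
\end{verbatim}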
We also give more implementation details about how to use Riemannian optimization to perform local search. \section{Proof of Theorem~\ref{thm:strideconverge}} \begin{proof} Let ${\cal V} = \{ \widehat{\MX}^{k(i)} \}$ be the sequence of all the $\widehat{\MX}$ that have been accepted due to \eqref{eq:accept-reject}, where $k(i)$ returns the iteration index of the $i$-th element in ${\cal V}$. If ${\cal V} = \emptyset$, then {\scenario{STRIDE}} reduces to \eqref{eq:pgd} and is globally convergent. If ${\cal V} \neq \emptyset$, then we claim that ${\cal V}$ must be finite. Note that, for any two consecutive elements $\widehat{\MX}^{k(i)}$ and $\widehat{\MX}^{k(i+1)}$ in ${\cal V}$, we have \begin{eqnarray} \label{eq:strictdescent} f(\widehat{\MX}^{k(i+1)}) \leq& f(\overline{\MX}^{k(i+1)}) - \epsilon \nonumber \\ <& f(\M{X}^{k(i+1) - 1}) - \epsilon \leq f(\widehat{\MX}^{k(i)}) - \epsilon, \end{eqnarray} where the first inequality is due to \eqref{eq:accept-reject}, the second inequality is due to \eqref{eq:strideprojection} and the fact that projected gradient descent must strictly decrease the objective value when optimality has not been achieved \cite[Proposition 3.4.1]{Bertsekas99book-nlp}, and the last inequality holds because $k(i+1) - 1 \geq k(i)$. Eq.~\eqref{eq:strictdescent} states that the objective value must decrease by at least $\epsilon$ along each element of ${\cal V}$. Therefore, we have $f_\min({\cal V}) \leq f_{\max}({\cal V}) - (|{\cal V}|-1) \epsilon$, where $f_{\min}$ and $f_{\max}$ are the minimum and maximum objective values along ${\cal V}$. Hence $|{\cal V}|$ must be finite, since otherwise $f^\star$ would be unbounded below, contradicting Slater's condition and strong duality. Let $\widehat{\MX}^{k(|{\cal V}|)}$ be the last element of ${\cal V}$; then {\scenario{STRIDE}} reduces to \eqref{eq:pgd} with a new initial point at $\widehat{\MX}^{k(|{\cal V}|)}$ and is globally convergent. \end{proof} \section{Further Reduction on Multiple Rotation Averaging (Example \ref{ex:multirotation})} \label{sec:app-reduce-mra} Recall that in multiple rotation averaging we are given a graph ${\cal G} = ({\cal V},{\cal E})$ with vertex set ${\cal V} = [n]$ and edge set ${\cal E}$. Each vertex $i \in {\cal V}$ is associated with an unknown rotation $\M{R}_i \in \mathrm{SO}(\dimrot)$, and each edge $(i,j) \in {\cal E}$ provides a relative measurement $\widetilde{\MR}_{ij}$ between the unknown rotations $\M{R}_i$ and $\M{R}_j$ at vertices $i$ and $j$. Let ${\cal R}$ be the set of edges whose relative measurements are known to be free of outliers (\emph{e.g.},\xspace odometry measurements in SLAM), and let ${\cal Z} = {\cal E} \setminus {\cal R}$ be the set of edges whose measurements are corrupted by outliers (\emph{e.g.},\xspace loop closures in SLAM). If no edge set is known to be free of outliers, then we set ${\cal R} = \emptyset$. We now present a further reduction for multiple rotation averaging. Let us denote by ${\cal V}_{{\cal Z}} \triangleq \{ i \in {\cal V} \mid \exists j \in {\cal V}, (i,j) \in {\cal Z} \} \subseteq {\cal V}$ the subset of nodes that are attached to at least one edge in ${\cal Z}$. Note that typically $\abs{{\cal V}_{\cal Z}} \ll n$ for SLAM applications (\emph{i.e.},\xspace these are the nodes at which loop closures occur).
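For instance, given the edges as index arrays (a hypothetical storage format; \texttt{E} is $\abs{{\cal E}} \times 2$ and \texttt{Zedges} is $\abs{{\cal Z}} \times 2$), ${\cal V}_{\cal Z}$ and the adjacency structure used by the depth-$\zeta$ neighbor sets introduced next can be built as:
\begin{verbatim}
% Sketch: nodes attached to at least one outlier-prone edge,
% plus a sparse adjacency matrix of the graph G with n nodes.
VZ  = unique(Zedges(:));                 % V_Z (loop-closure nodes)
Adj = sparse(E(:,1), E(:,2), 1, n, n);
Adj = Adj | Adj';                        % symmetrize
\end{verbatim}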
For each edge $(i,j) \in {\cal Z}$, we define its \emph{depth-$\zeta$ neighbor set} for $\zeta \in \mathbb{Z}_{+}$ in the following recursive manner: \begin{eqnarray} {\cal V}_{(i,j)}^0 \triangleq \{ i,j \}, \quad {\cal V}_{(i,j)}^{\zeta} \triangleq \{k \in {\cal V} \mid \exists l \in {\cal V}_{(i,j)}^{\zeta-1}, (k,l) \in {\cal E} \}, \end{eqnarray} where one can see that ${\cal V}_{(i,j)}^{\zeta}$ (for $\zeta \geq 1$) is essentially the union of the $\zeta$-hop neighbor set of node $i$ with the $\zeta$-hop neighbor set of node $j$. It is easy to see that ${\cal V}_{(i,j)}^\zeta = {\cal V}, \forall (i,j) \in {\cal Z}$ when $\zeta$ is sufficiently large, as long as the graph ${\cal G}$ is connected. With ${\cal V}_{\cal Z}$ and ${\cal V}_{(i,j)}^\zeta$, for each edge $(i,j) \in {\cal Z}$, we define \begin{eqnarray} \vxx_{(i,j)}^\zeta \triangleq \{\M{R}_k \mid k \in {\cal V}_{(i,j)}^\zeta \cap {\cal V}_{\cal Z} \} \supseteq \{\M{R}_i, \M{R}_j \}, \end{eqnarray} as the set of node-wise rotations in ${\cal V}_{\cal Z}$ that are attached to $(i,j)$ within depth $\zeta$. By definition, ${\cal V}_{(i,j)}^\zeta \cap {\cal V}_{\cal Z}$ must contain node $i$ and node $j$, and hence $\vxx_{(i,j)}^\zeta$ contains at least two rotations (attached to the edge $(i,j)$). We now replace the sparse basis in \eqref{eq:sparsebasis} with \begin{eqnarray} & \boldsymbol{v}(\widetilde{\vxx}) = [1 \,;\, \vxx \,;\, \boldsymbol{\theta} \,;\, \dots \,;\, \theta_{ij} \vxx_{(i,j)}^\zeta \,;\, \dots ]_{(i,j) \in {\cal Z}} \in \Real{n_1}, \nonumber \\ & 1+2n+5N \leq n_1 \leq 1+2n+ N(1+ 2\abs{{\cal V}_{\cal Z}}), \end{eqnarray} and use it to generate the semidefinite relaxation \eqref{eq:sparserelax}. It is worth noting that our relaxation recovers the hand-crafted SDP relaxation in \cite{Lajoie19ral-DCGM} with the choice of $\zeta = 0$, which is shown to be \emph{inexact} when the outlier rate is around $50\%$. In Section \ref{sec:experiments}, we show that, with a larger $\zeta$, we can achieve exact relaxation in the presence of over $70\%$ outliers. \section{Experiments} \label{sec:experiments} In this section, we test the sparse relaxation \eqref{eq:sparserelax} and the SDP solver {\scenario{STRIDE}} on Examples \ref{ex:singlerotation}-\ref{ex:category} using both synthetic and real data \revise{(we defer the results for Example \ref{ex:mesh}, mesh registration, to {Supplementary Material\xspace} due to space constraints)}. The goal of our experiments is not to claim state-of-the-art efficiency or robustness (\emph{e.g.},\xspace against problem-specific implementations), but rather to show that \eqref{eq:sparserelax} and {\scenario{STRIDE}}, for the first time, provide a general framework to solve large-scale nonconvex outlier-robust perception problems to certifiable global optimality within reasonable computation time. We believe that with the advancement of SDP solvers, our framework will eventually run in real time. {\bf Baselines}. We use two state-of-the-art SDP solvers, {\scenario{MOSEK}} \cite{mosek} and {\scenario{SDPNAL+}} \cite{Yang2015mpc-sdpnalplus}, as baseline solvers to compare against {\scenario{STRIDE}}. We omit {\scenario{MOSEK}} whenever the SDP becomes too large to be solved by it (\emph{i.e.},\xspace when $m > 50,000$). We use default settings for both {\scenario{MOSEK}} and {\scenario{SDPNAL+}}. {\bf {\scenario{STRIDE}}'s settings}.
In Algorithm \ref{alg-iPGMnlp}, we choose $\texttt{tol}\!=\!1\ee{-6}$, $r\!=\!3$, $\epsilon\!=\!1\ee{-12}$, $\sigma_k\!=\!10,\forall k$, and run it for a maximum of $5$ iterations. As described in Remark \ref{remark:fastheuristics}, we use {\scenario{GNC}} or {\scenario{RANSAC}} to initialize the primal variable, and {\scenario{ADMM+}} to initialize the dual variable. The local search is performed using {\scenario{Manopt}} with a trust region solver. Details about local search and {\scenario{ADMM+}} can be found in Supplementary Material\xspace. {\bf Evaluation metrics}. Let $(\widehat{\MX},\widehat{\vy},\widehat{\MS})$ be the solution for \eqref{eq:sparserelax} returned by an SDP solver and $(\widehat{\vxx},\widehat{\vtheta})$ be the corresponding rounded solution for \eqref{eq:binaryTLS}. We evaluate the performance of the solver using four metrics: (i)~the estimation errors of $\widehat{\vxx}$ compared to the groundtruth, whenever the groundtruth is available; (ii)~SDP solution quality, using the maximum KKT residual $\eta_{\max}$ from \eqref{eq:KKTresiduals}; (iii)~certified suboptimality, using the {\texttt{rounding}} procedure in \eqref{eq:rounding} and the relative suboptimality measure $\eta_s$ in \eqref{eq:subopt} (we deem a rounded solution globally optimal if $\eta_s < 1\ee{-3}$); and (iv)~solver CPU time in seconds. {For simulation experiments, statistics are computed over $20$ Monte Carlo runs per setup.} {\bf Hardware}. Experiments are performed on a Linux PC with a 12-core Intel i9-7920X CPU@2.90GHz and 128GB RAM. \subsection{Single Rotation Averaging} \input{sections/fig-sra} {\bf Setup}. At each Monte Carlo run, we first randomly generate a groundtruth 3D rotation $\MR^{\circ}$; then inliers are generated by $\MR_{\mathrm{in}} = \MR^{\circ} \MR_{\varepsilon}$, where the inlier noise $\MR_{\varepsilon}$ is generated by randomly sampling a rotation axis and a rotation angle $\varepsilon \sim {\cal N}(0,\sigma^2)$ with $\sigma = 5^{\circ}$; outliers are arbitrary random rotations. We test two setups with $N=30$ and $N=100$. At $N=30$, we sweep the outlier ratio from $0\%$ to $90\%$, while at $N=100$, we sweep the outlier ratio up to $95\%$. {\bf Results}. Fig. \ref{fig:exp-sra-results}(a)-(b) plot the evaluation metrics for $N=30$ and $N=100$, respectively. We make the following observations. (i) Our sparse relaxation \eqref{eq:sparserelax} is exact with up to $90\%$ outliers when $N=30$ and up to $95\%$ outliers when $N=100$ (the suboptimality $\eta_s$ is below $1\ee{-3}$ for all test runs). (ii) For $N=30$, {\scenario{STRIDE}} solves the SDP to an accuracy that is comparable to {\scenario{MOSEK}} (\emph{cf.}\xspace the $\eta_{\max}$ plot), but is about $100$ (and up to $270$) times faster (\emph{cf.}\xspace the time plot). (iii) For $N=100$, {\scenario{MOSEK}} cannot run. While {\scenario{SDPNAL+}} still runs, its accuracy is at least five orders of magnitude worse than {\scenario{STRIDE}} (\emph{cf.}\xspace the $\eta_{\max}$ plot, where {\scenario{STRIDE}} attains $1\ee{-8}$ accuracy, but {\scenario{SDPNAL+}} only attains $1\ee{-3}$ accuracy), and it is about $40$ times slower than {\scenario{STRIDE}}. (iv) {\scenario{STRIDE}} safeguards {\scenario{GNC}}.
While {\scenario{GNC}} is used to initialize {\scenario{STRIDE}}, {\scenario{STRIDE}} can \emph{certify} the global optimality of {\scenario{GNC}} and escape the local minima of {\scenario{GNC}} (\emph{e.g.},\xspace at $80\%$ outlier rate in the rotation error plot, while {\scenario{GNC}} fails many times, the solution of {\scenario{STRIDE}} is always correct and optimal). (v) When the outlier rate is too high, global optimality does not necessarily imply a correct estimate (in the sense of being close to the groundtruth). For example, at $90\%$ outlier rate when $N=30$, {\scenario{STRIDE}} and {\scenario{MOSEK}} both obtain certifiable optimality ($\eta_s = 0$), but the rotation error can be quite large (about $100^{\circ}$). Similarly, at $95\%$ outlier rate when $N=100$, the optimal estimate obtained by {\scenario{STRIDE}} also has large rotation errors. For further discussion about this point, we refer the reader to the notion of \emph{estimation contract} in~\cite{Yang20tro-teaser}, which ties the number of inliers to the accuracy of the optimal solution of \eqref{eq:binaryTLS} w.r.t.\xspace the groundtruth, and reports estimation contracts for a 3D registration problem. \subsection{Multiple Rotation Averaging} \input{sections/fig-mra} {\bf Setup}. We test 2D multiple rotation averaging in a SLAM setting, where a robot traverses a trajectory following a 2D grid pattern (\emph{e.g.},\xspace Fig. \ref{fig:applications}(b) shows a $3\times 3$ grid) with both odometry measurements (between consecutive nodes) and loop closures. We assume the odometry measurements are outlier-free (\emph{i.e.},\xspace we include them in the function $\psi(\vxx)$ in Example \ref{ex:multirotation}) and only the loop closures could be corrupted by outliers, as in~\cite{Yang20ral-gnc}. Inlier relative rotations are generated by $\MR_{\mathrm{in}} = \MR^{\circ} \MR_{\varepsilon}$, where $\MR^{\circ} = \M{R}_i^{\mathsf{T}} \M{R}_j$ is the groundtruth relative rotation between nodes $(i,j)$ and $\MR_{\varepsilon}$ is a random 2D rotation with angle $\varepsilon \sim {\cal N}(0,\sigma^2)$ ($\sigma=0.6^{\circ}$). Outlier relative rotations are arbitrary 2D rotations. We test two cases with increasing outlier rates: a $10\times 10$ grid with $N=10$ loop closures, and a $20 \times 20$ grid with $N=20$ loop closures. {\bf Results}. Fig. \ref{fig:exp-mra-results}(a)-(b) plot the evaluation metrics for both cases. We make the following observations. (i) For the $10 \times 10$ grid with $N=10$, our relaxation is always exact, with up to $80\%$ outliers. In this case, {\scenario{STRIDE}} can solve the SDP to an accuracy that is comparable to {\scenario{MOSEK}}, while being about $20$ times faster (up to $40$ times faster). {\scenario{SDPNAL+}}, unfortunately, completely fails in this problem. Therefore, we did not run {\scenario{SDPNAL+}} for the more challenging $20 \times 20$ grid. (ii) For the $20\times 20$ grid with $N=20$, our relaxation is also almost always exact, with up to $80\%$ outliers. However, there exist 1-2 runs per outlier rate where {\scenario{STRIDE}} fails to obtain an $\eta_s < 1\ee{-3}$. In such cases, we suspect the relaxation is inexact. \subsection{Point Cloud Registration} \input{sections/fig-pcr} {\bf Setup}. We first sample a random set of 3D points $\{ \boldsymbol{p}_i\}_{i=1}^N$, where each $\boldsymbol{p}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$. Then we generate a random rotation and translation $(\MR^{\circ},\vt^{\circ})$ such that $\norm{\vt^{\circ}} \leq T = 10$.
Using $(\MR^{\circ},\vt^{\circ})$, we generate $\{\boldsymbol{q}_i\}_{i=1}^N$ by $\boldsymbol{q}_i = \MR^{\circ} \boldsymbol{p}_i + \vt^{\circ} + \boldsymbol{\varepsilon}_i$ ($\boldsymbol{\varepsilon}_i \sim {\cal N}({\mathbf 0},0.01^2 {\mathbf I}_3)$) if $\boldsymbol{q}_i$ is an inlier, or by $\boldsymbol{q}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ if $\boldsymbol{q}_i$ is an outlier. We test $N = 20$ and $N=100$. {\bf Results}. Fig. \ref{fig:exp-pcr-results}(a)-(b) plot the evaluation metrics for $N=20$ and $N=100$, respectively. We make the following observations. (i) When $N=20$, our relaxation is tight with up to $80\%$ outlier correspondences. Both {\scenario{MOSEK}} and {\scenario{STRIDE}} can obtain a certifiably optimal solution, except that {\scenario{STRIDE}} failed once to attain sufficient accuracy (within $5$ iterations) at $80\%$ outlier rate.\footnote{Consistent with \cite{Yang20neurips-onering}, we empirically noticed that the relaxation breaks earlier when fewer measurements are available. We remark that the formulation considered in this paper is more challenging than the rotation-only version in~\cite{Yang19iccv-quasar}, which remains tight at $90\%$ outliers.} However, {\scenario{STRIDE}} is about $5$ times faster than {\scenario{MOSEK}}. {\scenario{SDPNAL+}} completely fails on this problem. (ii) When $N=100$, our relaxation is exact with up to $90\%$ outliers and {\scenario{STRIDE}} is the only solver that can certify exactness. At $90\%$ outlier rate, {\scenario{STRIDE}} certified global optimality for $17$ runs, while failing to do so for $3$ runs. (iii) {\scenario{STRIDE}} can certify the success of {\scenario{GNC}} and escape local minima when {\scenario{GNC}} fails (\emph{e.g.},\xspace at $60\%$--$80\%$ outlier rates when $N=20$ and at $90\%$ when $N=100$). {\bf Scan matching on {\scenario{3DMatch}}}. To showcase the practical value of {\scenario{STRIDE}}, we perform scan matching using the {\scenario{3DMatch}} test data \cite{Zeng17cvpr-3dmatch}. We use {\scenario{FPFH}} \cite{Rusu09icra-fast3Dkeypoints} to generate putative feature matches, followed by {\scenario{ROBIN}} \cite{Shi21icra-robin} to filter out gross outliers. The result of {\scenario{FPFH}} and {\scenario{ROBIN}} is typically a set of sparse keypoint matches with only a few outliers. We then use {\scenario{STRIDE}} to \emph{certifiably} estimate the rigid transformation. Fig. \ref{fig:exp-pcr-results}(c)-(d) visualize two examples where {\scenario{STRIDE}} returns certified globally optimal estimates ($\eta_s < 1\ee{-6}$). More examples are provided in {Supplementary Material\xspace}. \subsection{Absolute Pose Estimation} \input{sections/fig-ape} {\bf Setup}. We first generate a set of random 3D points $\{ \boldsymbol{p}_i \}_{i=1}^N$ that are centered at zero. We then generate a random pose $(\MR^{\circ},\vt^{\circ})$ such that $\norm{\vt^{\circ}} \leq T=10$ and $\vt^{\circ}$ lies inside the camera FOV cone ${\cal C}_\alpha$ with $\alpha = \frac{\pi}{2}$. Using $(\MR^{\circ},\vt^{\circ})$ and $\{ \boldsymbol{p}_i \}_{i=1}^N$, we generate 2D keypoints by projecting the transformed 3D points onto the imaging plane, \emph{i.e.},\xspace $\boldsymbol{v}_i = {\cal P}(\MR^{\circ} \boldsymbol{p}_i + \vt^{\circ})$, where ${\cal P}: \Real{3} \rightarrow \Real{2}$ is defined as ${\cal P}(\va) = [a_1/a_3 \,;\, a_2/a_3]$.
We then generate the inlier bearing vectors from the 2D keypoints by $\boldsymbol{u}_i = \texttt{normalize}([\boldsymbol{v}_i + \boldsymbol{\varepsilon}_i \,;\, 1])$, where $\boldsymbol{\varepsilon}_i \sim {\cal N}({\mathbf 0},0.001^2 {\mathbf I}_2)$ is random 2D Gaussian noise. For outliers, we generate $\boldsymbol{u}_i$ as random unit vectors inside the FOV cone. We test $N=20$ and $N=100$ with increasing outlier rates. We use the {\scenario{RANSAC}} implemented in the Matlab function $\texttt{estimateWorldCameraPose}$ to initialize {\scenario{STRIDE}}. {\bf Results}. Fig. \ref{fig:exp-ape-results}(a)-(b) plot the evaluation metrics. We make the following observations. (i) When $N=20$, our relaxation is exact with up to $60\%$ outliers. At $70\%$ outlier rate, even if {\scenario{MOSEK}} solves the SDP to high accuracy, since the solution is not rank one, the rounding procedure obtains a pose estimate that is far from the groundtruth. (ii) When $N=100$, our relaxation remains mostly tight at $70\%$ outlier rate, which suggests that increasing the total number of matches could lead to a tighter relaxation. {\bf Satellite pose estimation on {\scenario{SPEED}}}. We showcase {\scenario{STRIDE}} on satellite pose estimation using the {\scenario{SPEED}} dataset \cite{Sharma19arxiv-SPEED}. We use the 3D satellite model provided in \cite{Chen19ICCVW-satellitePoseEstimation} (with $N=11$ keypoints) and spoil the groundtruth 2D keypoints with outliers. Fig. \ref{fig:exp-ape-results}(c) shows four examples with $2$-$5$ outliers, where {\scenario{STRIDE}} obtains accurate pose estimates with certified global optimality in less than one minute. More examples are provided in {Supplementary Material\xspace}. \subsection{Category-Level Object Perception} \input{sections/fig-catreg} {\bf Setup}. We use the ``\emph{car}'' category from the {\scenario{PASCAL3D+}} dataset \cite{Xiang2014WACV-PASCAL+} for simulation experiments, which contains $N=12$ keypoints with $K=9$ basis shapes. We generate an unknown instance of the category by sampling a random vector of shape coefficients $\vc^{\circ} \in \mathbb{R}^K_{+}$ such that $\sum_{k=1}^K c_k^{\circ} = 1$ and using $\vc^{\circ}$ to linearly combine the $K$ basis shapes. We then add random Gaussian noise (with standard deviation $0.01$) to the new instance and transform it with a random rigid transformation $(\MR^{\circ},\vt^{\circ})$ with $\norm{\vt^{\circ}} \leq T = 10$. We test increasing outlier rates up to $60\%$ with 20 runs per outlier rate. We use a regularization parameter $\lambda = 1$. {\bf Results}. Fig. \ref{fig:exp-catreg-results}(a) plots the evaluation metrics: (i) our relaxation is exact with up to $60\%$ outliers; (ii) {\scenario{STRIDE}} can certify the global optimality of {\scenario{GNC}} and escape its local minima; (iii) {\scenario{STRIDE}} is about $10$ times faster than {\scenario{MOSEK}}. {\bf Vehicle pose and shape estimation on {\scenario{ApolloScape}}}. We use {\scenario{STRIDE}} to jointly estimate the pose and shape of an unknown vehicle from the {\scenario{ApolloScape}} self-driving dataset \cite{Wang19pami-apolloscape}. We use a set of $K=5$ basis shapes, each with $N=66$ annotated 3D keypoints. Given a 2D image depicting an unknown vehicle, we use the pretrained {\scenario{GSNet}} \cite{Ke20-gsnet} to detect 2D keypoints of the unknown vehicle with groundtruth depth (same setup as one of the tests in \cite{Shi21rss-pace}). Fig.
\ref{fig:exp-catreg-results}(b-1) shows four examples where {\scenario{STRIDE}} certified the global optimality of solutions returned by {\scenario{GNC}} ($\eta_s = 1.5\ee{-7},1.3\ee{-9},1.4\ee{-10},1.6\ee{-9}$), and Fig. \ref{fig:exp-catreg-results}(b-2) shows two examples where {\scenario{STRIDE}} escapes the suboptimal solutions returned by {\scenario{GNC}} and finds the certified globally optimal solutions ($\eta_s = 3.2\ee{-4},4.6\ee{-4}$). More examples are provided in {Supplementary Material\xspace}. \subsection{Summary and Discussion} Table \ref{table:overalltiming} summarizes the timing results of {\scenario{STRIDE}}, compared with {\scenario{MOSEK}}, for all six problems. We make a few comments. (i) {\scenario{STRIDE}} is able to solve problems far beyond the reach of {\scenario{MOSEK}} (in fact, the SDPs solved in this paper are among the largest in the semidefinite programming literature). (ii) When fast heuristics converge to the globally optimal solution, {\scenario{STRIDE}} just needs to perform optimality certification and can be $2$-$5$ times faster (\emph{cf.}\xspace~{\scenario{STRIDE}} (Certify) vs. {\scenario{STRIDE}} (Escape)). (iii) For problems of similar sizes (in terms of $n_1$ and $m$), the speed of {\scenario{STRIDE}} can be \emph{application-dependent} (\emph{e.g.},\xspace~{\scenario{STRIDE}} is much faster in single rotation averaging than in other applications). This suggests that relaxations of different applications lead to SDP problems of \emph{drastically different geometry}. Understanding the geometry and leveraging new tools to further speed up the computation is an exciting research avenue. For example, it could be promising to use data-driven methods to ``learn'' the geometry of different problems to generate high-quality initializations. \input{sections/table-timing} \section{Sparse Semidefinite Relaxation} \label{sec:sdprelax} In the previous section, we showed how to rephrase the TLS cost as a nonconvex polynomial optimization in $\widetilde{\vxx} \triangleq [\vxx \,;\, \boldsymbol{\theta}] \in \Real{d+N}$. The goal of this section is to design algorithms that can solve~\eqref{eq:binaryTLS} to certifiable global optimality. {\bf Can we just use Lasserre's hierarchy?} Before introducing our sparse semidefinite relaxation, let us attempt to apply the dense Lasserre's hierarchy \eqref{eq:lasserre} to~\eqref{eq:binaryTLS}. We know that the objective in \eqref{eq:binaryTLS} has degree $3$,\footnote{The residuals $r^2(\vxx,\boldsymbol{z}_i)$ are quadratic from Proposition \ref{prop:polynomialExpressibility}, hence the terms $\theta_i r^2(\vxx,\boldsymbol{z}_i)$ in the objective of \eqref{eq:binaryTLS} become cubic.} thus $\kappa\geq2$ is needed for \eqref{eq:lasserre}. In fact, as we have shown in \cite{Yang20neurips-onering}, \eqref{eq:lasserre} at $\kappa=2$ is empirically exact (on small problem instances). However, as we can see from Examples \ref{ex:singlerotation}-\ref{ex:category}, the problems we care about have minimum $d=9$ (a 3D rotation in Example \ref{ex:singlerotation}) and maximum $d=9n$ ($n$ 3D rotations in Example \ref{ex:multirotation}) with $n$ being as large as a few hundred, and meanwhile, it is desirable to be able to handle $N=100$ measurements. Choosing $d=10$, $N=100$, $\kappa=2$, the SDP resulting from the dense relaxation \eqref{eq:lasserre} has $n_1 = 6216$ and $m_{\mathrm{mom}} = 12,649,561$; when $d=100$, $N=100$, $\kappa=2$, the SDP would have $n_1 \approx 2 \times 10^4$, $m_{\mathrm{mom}} \approx 1.4\times 10^8$.
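These counts can be sanity-checked in a few lines. The Python sketch below (ours) assumes the moment-constraint count follows the same pattern as \eqref{eq:momentConstraintssparse} below, \emph{i.e.},\xspace $m_{\mathrm{mom}} = \mathfrak{t}(n_1) - (\text{number of unique monomials}) + 1$ with $\mathfrak{t}(n) \triangleq n(n+1)/2$; it also covers the sparse relaxation introduced next:

\begin{verbatim}
from math import comb

def t(n):                      # entries of an n x n symmetric matrix
    return n * (n + 1) // 2

def dense_sizes(d, N, kappa=2):
    n1 = comb(d + N + kappa, kappa)              # size of the dense moment matrix
    unique = comb(d + N + 2 * kappa, 2 * kappa)  # unique monomials up to degree 2*kappa
    return n1, t(n1) - unique + 1                # (n1, m_mom)

def sparse_sizes(d, N):
    n1 = (1 + d) * (1 + N)                       # size of X_v, cf. eq. (sparsebasis)
    unique = t(d + 1) * t(N + 1)                 # m_2v unique monomials in X_v
    return n1, t(n1) - unique + 1

print(dense_sizes(10, 100))    # (6216, 12649561), as stated above
print(dense_sizes(100, 100))   # (20301, 136016701), i.e., ~2e4 and ~1.4e8
print(sparse_sizes(10, 100))   # (1111, 277751): the reduction at a glance
\end{verbatim}

Running this reproduces the $12,649,561$ figure above exactly.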
In both cases, it is hopeless to solve the resulting SDPs using existing solvers. {\bf Sparse semidefinite relaxation (SSR)}. Now we present a semidefinite relaxation that is much more scalable than \eqref{eq:lasserre}. Note that the fundamental reason why \eqref{eq:lasserre} leads to an intractable SDP is the use of the \emph{dense} monomial basis $[\widetilde{\vxx}]_\kappa$ for building the moment matrix $\M{X}_\kappa$. Although the full set of monomials $[\widetilde{\vxx}]_\kappa$ is necessary when the polynomials $p,h_i,g_j$ contain all monomials up to degree $2\kappa$, in practice $p,h_i,g_j$ are almost always \emph{sparse} (\emph{i.e.},\xspace they include a small set of monomials). Therefore, the crux of our semidefinite relaxation is to construct a {sparse} set of monomials that results in a much smaller moment matrix. Towards this, we analyze the sparsity of the objective and constraint polynomials in~\eqref{eq:binaryTLS} and observe that they contain only three types of monomials: \begin{enumerate}[label=(\roman*)] \item \label{item:tlsmono1} $[\vxx]_2$, coming from $r^2$ and $\psi$ in the objective, and polynomials defining the feasible set ${\cal X}$ (\emph{cf.}\xspace Proposition \ref{prop:polynomialExpressibility}); \item \label{item:tlsmono2} $\theta_i \cdot [\vxx]_2, i=1,\dots,N$, coming from $\theta_i r^2$ and $\theta_i$ in the objective for $i=1,\dots,N$; and \item \label{item:tlsmono3} $\theta_i^2, i=1,\dots,N$, coming from the equality constraints $\theta_i^2-1=0$ for $i=1,\dots,N$. \end{enumerate} Therefore, it is easy to see that, with the Kronecker product denoted by ``$\otimes$'', choosing the sparse basis \begin{eqnarray}\label{eq:sparsebasis} \boldsymbol{v}(\widetilde{\vxx}) \triangleq [1 \,;\, \vxx \,;\, \boldsymbol{\theta} \,;\, \boldsymbol{\theta} \otimes \vxx] \in \Real{n_1},\ n_1 \triangleq (1+d)(1+N) \end{eqnarray} leads to the following moment matrix \begin{eqnarray} \label{eq:sparsemomentmat} \M{X}_v \triangleq \boldsymbol{v} \boldsymbol{v}^{\mathsf{T}}\!\! =\!\! \left[\begin{array}{cccc} 1 & \vxx^{\mathsf{T}} & \boldsymbol{\theta}^{\mathsf{T}} & \boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}} \\ \vxx & \vxx \vxx^{\mathsf{T}} & \vxx\boldsymbol{\theta}^{\mathsf{T}} &\!\!\!\! \vxx (\boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}})\!\!\!\! \\ \boldsymbol{\theta} & \boldsymbol{\theta} \vxx^{\mathsf{T}} & \boldsymbol{\theta} \boldsymbol{\theta}^{\mathsf{T}} &\!\!\!\! \boldsymbol{\theta} (\boldsymbol{\theta}^{\mathsf{T}} \otimes \vxx^{\mathsf{T}})\!\!\!\! \\ \!\!\!\boldsymbol{\theta} \otimes \vxx\! &\!\!\! (\boldsymbol{\theta} \otimes \vxx)\vxx^{\mathsf{T}}\!\!\! &\!\!\! (\boldsymbol{\theta} \otimes \vxx) \boldsymbol{\theta}^{\mathsf{T}}\!\! &\!\!\!\! \boldsymbol{\theta}\vtheta^{\mathsf{T}} \otimes \vxx\vxx^{\mathsf{T}}\!\!\!\! \end{array}\right] \end{eqnarray} that contains all three types of monomials ($[\vxx]_2$, $\theta_i \cdot [\vxx]_2$, and $\theta_i^2$) in \ref{item:tlsmono1}-\ref{item:tlsmono3}. Therefore, \emph{we can write the objective and constraint polynomials in \eqref{eq:binaryTLS} as linear functions of the smaller moment matrix~\eqref{eq:sparsemomentmat}.} Clearly, the advantage is that the size of the moment matrix is now $(1+d)(1+N)$, which is much smaller than the $\nchoosek{d+N+\kappa}{\kappa}$ (for $\kappa=2$) of Lasserre's hierarchy. Now we can formulate our sparse relaxation using $\M{X}_v$ in~\eqref{eq:sparsemomentmat}, by following the same procedure as in Section \ref{sec:pre-pop}.
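Before walking through that procedure, a toy numpy sketch (ours, purely illustrative) makes the lifting concrete: it builds $\boldsymbol{v}(\widetilde{\vxx})$ from \eqref{eq:sparsebasis} and $\M{X}_v$ from \eqref{eq:sparsemomentmat} for small $d,N$, and checks that the three monomial types \ref{item:tlsmono1}-\ref{item:tlsmono3} indeed appear as entries of $\M{X}_v$:

\begin{verbatim}
import numpy as np

d, N = 3, 2                                   # toy sizes
rng = np.random.default_rng(0)
x = rng.standard_normal(d)                    # stand-in for the geometric model x
theta = rng.choice([-1.0, 1.0], size=N)       # binary outlier indicators

# sparse monomial basis v = [1; x; theta; kron(theta, x)], cf. eq. (sparsebasis)
v = np.concatenate(([1.0], x, theta, np.kron(theta, x)))
assert v.size == (1 + d) * (1 + N)

X_v = np.outer(v, v)                          # sparse moment matrix, rank one and PSD

# type (i): [x]_2, e.g., x_1*x_2 sits in the (x, x^T) block
assert np.isclose(X_v[1, 2], x[0] * x[1])
# type (ii): theta_i*[x]_2, e.g., theta_1*x_1*x_2 sits in the (x, theta (x) x) block
assert np.isclose(X_v[1, 1 + d + N + 1], theta[0] * x[0] * x[1])
# type (iii): theta_i^2 sits on the diagonal of the (theta, theta^T) block
assert np.isclose(X_v[1 + d, 1 + d], theta[0] ** 2)
print(np.linalg.matrix_rank(X_v))             # 1
\end{verbatim}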
\emph{(i) Rewriting \eqref{eq:binaryTLS} using the sparse moment matrix $\M{X}_v$}. Because the sparse moment matrix $\M{X}_v$ contains all monomials in the objective and constraint polynomials of \eqref{eq:binaryTLS}, we can write them as linear functions of $\M{X}_v$. For example, the objective can be written as $\inprod{\M{C}_1}{\M{X}_v}$. \emph{(ii) Relaxing the rank-$1$ constraint on $\M{X}_v$}. By construction, $\M{X}_v$ belongs to the set of rank-one positive semidefinite matrices. Since the rank constraint is non-convex, we drop it and only enforce $\M{X}_v$ to be positive semidefinite: $\M{X}_v \succeq 0$. \emph{(iii) Adding redundant constraints}. First, similar to the dense relaxation case, some monomials can repeat themselves at multiple entries of $\M{X}_v$. For example, in \eqref{eq:sparsemomentmat}, the ``$\boldsymbol{\theta} \otimes \vxx$'' block is the same as the ``$\boldsymbol{\theta} \vxx^{\mathsf{T}}$'' block up to rearrangement of entries. In fact, the number of \emph{unique} monomials in $\M{X}_v$ is $m_{2v} = \mathfrak{t}(d+1)\mathfrak{t}(N+1)$, while $\M{X}_v$, as a symmetric matrix, has $\mathfrak{t}((1+d)(1+N))$ distinct entries. Therefore, we can add a total of $m_{\mathrm{mom}} = \mathfrak{t}((1+d)(1+N)) - m_{2v} + 1$ \emph{moment constraints}: \beq \begin{array}{ll}\label{eq:momentConstraintssparse} \text{\grayout{moment constraints}}:& \revise{\inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_v} = 1, } \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_v} = 0, \\ & j = 1, \ldots, m_{\mathrm{mom}}-1, \end{array} \eeq to enforce the repeating monomials in $\M{X}_v$ to be equal to each other, as well as the leading entry $[\M{X}_v]_{11} = 1$ \revise{(similar to \eqref{eq:momentConstraints}, $\M{A}_{\mathrm{mom},0}$ is all zero except $[\M{A}_{\mathrm{mom},0}]_{11} =1$)}. Second, we add redundant equality constraints. For each equality constraint $h_i$ in \eqref{eq:binaryTLS}, we denote by $[\widetilde{\vxx}]_{h_i}$ the largest set of unique monomials such that $h_i \cdot [\widetilde{\vxx}]_{h_i}$ only contains monomials in $\M{X}_v$. Formally, \begin{eqnarray} [\widetilde{\vxx}]_{h_i} \triangleq \{\widetilde{\vxx}^{\boldsymbol{\alpha}} \mid \mono{ h_i \cdot \widetilde{\vxx}^{\boldsymbol{\alpha}} } \subseteq \mono{\M{X}_v} \}, \label{eq:liftequalities} \end{eqnarray} where $\mono{\cdot}$ returns the set of unique monomials of a polynomial (or of a matrix of polynomials). Consequently, we can write $h_i \cdot [\widetilde{\vxx}]_{h_i} = {\mathbf 0}$ as linear equalities in $\M{X}_v$: \beq \begin{array}{ll}\label{eq:redundantEqualityConstraintssparse} \hspace{-3mm} \text{\grayout{(redundant) equality constraints}}: \inprod{\M{A}_{\mathrm{req},ij}}{\M{X}_v} = 0, \\ \quad\quad \quad i = 1, \ldots, l_h,\ \ j = 1, \ldots, \abs{[\widetilde{\vxx}]_{h_i}}. \end{array} \eeq Note that since each $[\widetilde{\vxx}]_{h_i}$ must include the monomial ``1'', eq.~\eqref{eq:redundantEqualityConstraintssparse} includes the original equality constraints $h_i$ of \eqref{eq:binaryTLS}. Finally, for each inequality constraint $g_j$ (recall $\deg{g_j} \leq 2$ by Proposition \ref{prop:polynomialExpressibility}), we denote by $[\M{X}_1]_{{\cal I}_j,{\cal I}_j}$ the maximum principal submatrix of $\M{X}_1$ (\emph{i.e.},\xspace the order-one full moment matrix) such that $g_j \cdot [\M{X}_1]_{{\cal I}_j,{\cal I}_j}$ only contains monomials in $\M{X}_v$.
Formally, the indices ${\cal I}_j$ are selected as: \begin{eqnarray} & \hspace{-6mm} {\cal I}_j = \displaystyle \argmax_{{\cal J}} \{ \abs{{\cal J}} \mid \mono{ g_j\! \cdot\! [\M{X}_1]_{{\cal J},{\cal J}} } \subseteq \mono{\M{X}_v} \}. \label{eq:liftPSDsubblks} \end{eqnarray} As a result, calling $\M{X}_{g_j} = g_j \cdot [\M{X}_1]_{{\cal I}_j,{\cal I}_j}$, which is positive semidefinite by construction, we can write down the following localizing matrices and constraints: \beq \begin{array}{ll}\label{eq:locMatricessparse} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_g \end{array} \eeq \beq \begin{array}{ll} \label{eq:localizingConstraintssparse} \text{\grayout{{localizing} constraints}}: \inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_v} = [\M{X}_{g_j}]_{hk} \\ \quad\quad\quad j = 1, \ldots, l_g, \ \ 1 \leq h\leq k \leq \abs{{\cal I}_j}, \end{array} \eeq where the linear constraints in \eqref{eq:localizingConstraintssparse} simply enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_v$. Steps (i)-(iii) above lead to the following SDP: \begin{equation}\label{eq:sparserelax} \hspace{-3mm} f^\star =\!\! \min_{\M{X} = (\M{X}_v, \M{X}_1,\dots,\M{X}_{l_g})} \cbrace{\inprod{\M{C}_1}{\M{X}_v} \mid {\cal A}(\M{X})\! =\! \boldsymbol{b}, \M{X} \succeq 0}\!,\!\!\! \tag{SSR} \end{equation} where we have shorthanded $\M{X}_j = \M{X}_{g_j}$ for notational convenience, and ${\cal A}(\M{X})=\boldsymbol{b}$ collects all the linear equality constraints in \eqref{eq:momentConstraintssparse}, \eqref{eq:redundantEqualityConstraintssparse} and \eqref{eq:localizingConstraintssparse}. Similar to Theorem \ref{thm:lasserre} for \eqref{eq:lasserre}, we have the following result for \eqref{eq:sparserelax} about certifiable global optimality. \begin{theorem}[Sparse Semidefinite Relaxation for \eqref{eq:binaryTLS}] \label{thm:sparserelaxtls} Denote by $p(\vxx,\boldsymbol{\theta})$ the objective function of \eqref{eq:binaryTLS}, by $p^\star$ the optimum of \eqref{eq:binaryTLS}, and by $f^\star$ \revise{(resp. $\MX^\star_v$)} the optimum \revise{(resp. one optimizer)} of \eqref{eq:sparserelax}. Then we have: \begin{enumerate}[label=(\roman*)] \item (lower bound) $f^\star \leq p^\star$; \item (rank-one solutions) if $f^\star = p^\star$, then for each global minimizer $\tldvxx^\star = (\vxx^{\star},{\vtheta}^{\star})$ of \eqref{eq:binaryTLS}, its rank-one lifting $\M{X}_v = \boldsymbol{v} (\tldvxx^\star) \boldsymbol{v} (\tldvxx^\star)^{\mathsf{T}}$ is optimal for \eqref{eq:sparserelax}, \revise{and every rank-one optimal solution $\MX^\star_v$ of \eqref{eq:sparserelax} can be written as $\boldsymbol{v} (\tldvxx^\star) \boldsymbol{v} (\tldvxx^\star)^{\mathsf{T}}$ for some $\tldvxx^\star$ that is optimal for \eqref{eq:binaryTLS}}; \item \revise{(optimality certificate) if $\rank{\MX^\star_v} = 1$, then $f^\star = p^\star$.} \end{enumerate} \end{theorem} Theorem \ref{thm:sparserelaxtls} states that \eqref{eq:sparserelax} is a relaxation for \eqref{eq:binaryTLS} \revise{and that solving the convex SDP \eqref{eq:sparserelax} provides a certificate for the exactness of the relaxation if the rank of the optimal solution $\MX^\star_v$ equals one. In practice, rank computation can be subject to numerical inaccuracies (\emph{e.g.},\xspace it can be difficult to decide if the relaxation is exact when the second-largest eigenvalue is, say, $10^{-3}$).
Therefore, we now introduce a continuous metric for evaluating the exactness of the relaxation (one that also applies to the dense relaxation \eqref{eq:lasserre}). } {\bf Relative suboptimality}. \revise{Assume $\MX^\star_v$ is an optimal solution of \eqref{eq:sparserelax} and let $\boldsymbol{v}$ be the eigenvector corresponding to the maximum eigenvalue of $\MX^\star_v$. If the maximum eigenvalue has multiplicity larger than one, then choose $\boldsymbol{v}$ as any of the eigenvectors corresponding to the maximum eigenvalue. Define the rounding function $(\widehat{\vxx},\widehat{\vtheta}) = {\texttt{rounding}}(\boldsymbol{v})$ that returns from $\boldsymbol{v}$ a \emph{feasible} solution to \eqref{eq:binaryTLS} as} \begin{equation}\label{eq:rounding} \boldsymbol{v} \leftarrow \boldsymbol{v} / \boldsymbol{v}_1,\ \widehat{\vxx} = \Pi_{{\cal X}} (\boldsymbol{v}_{x}),\ \widehat{\vtheta} = \mathrm{sgn}\parentheses{\boldsymbol{v}_{\theta}}, \end{equation} where $\boldsymbol{v}_{x}$ (resp. $\boldsymbol{v}_{1},\boldsymbol{v}_{\theta}$) takes the entries of $\boldsymbol{v}$ corresponding to monomials $\vxx$ (resp. $1,\boldsymbol{\theta}$) in \eqref{eq:sparsebasis}, $\mathrm{sgn}(a)$ returns the sign of a scalar ``$a$'', and $\Pi_{{\cal X}}$ denotes the projection onto the set ${\cal X}$.\footnote{For our Examples \ref{ex:singlerotation}-\ref{ex:category}, the feasible set ${\cal X}$ includes $\mathrm{SO}(\dimrot)$, whose projections can be performed in closed form, and $\calB^3_T$, ${\cal C}_\alpha$, $\calB^3_T \cap {\cal C}_\alpha$, $\calB^K_T \cap \mathbb{R}^K_{+}$, all of which are \emph{low-dimensional convex} sets whose projections can be computed to arbitrary accuracy using standard convex solvers. Therefore, the {\texttt{rounding}} procedure~\eqref{eq:rounding} can be done efficiently.} Denoting by $\widehat{p} \triangleq p(\widehat{\vxx},\widehat{\vtheta})$ the cost attained by the rounded solution, we have $f^\star \leq p^\star \leq \widehat{p}$. Moreover, we can compute the \emph{relative suboptimality} of the rounded solution $(\widehat{\vxx},\widehat{\vtheta})$ \begin{eqnarray} \label{eq:subopt} \eta_s \triangleq \abs{f^\star - \widehat{p}}/ \parentheses{ 1 + \abs{f^\star} + \abs{\widehat{p}} } \end{eqnarray} as a measure of suboptimality. \revise{Intuitively, a relative suboptimality of $\eta_s$ certifies that the objective value of $(\widehat{\vxx},\widehat{\vtheta})$ is \emph{at most} $\eta_s$ (\emph{e.g.},\xspace $0.1\%$), in relative terms, away from that of the unknown global optimizer.} Evidently, $\eta_s = 0$ implies that $(\widehat{\vxx},\widehat{\vtheta})$ is optimal and \eqref{eq:sparserelax} is exact. \revise{In fact, for \emph{any feasible solution} $(\widehat{\vxx},\widehat{\vtheta})$, not necessarily obtained from the SDP solution $\MX^\star_v$, we can evaluate $\widehat{p} = p(\widehat{\vxx},\widehat{\vtheta})$ and compute the relative suboptimality at the given feasible solution using \eqref{eq:subopt}. Similarly, if $\eta_s = 0$ is attained at any feasible solution, we can certify the exactness of the relaxation and the global optimality of that feasible solution.} As an advanced reading, in Supplementary Material\xspace, we discuss how to compute a relative suboptimality measure that is not sensitive to potential numerical inaccuracies in the computation of $f^\star$ \revise{(as mentioned in Section \ref{sec:pre-sdp}, it can be challenging to compute $f^\star$ to high accuracy for large-scale SDPs)}.
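As a minimal illustration of \eqref{eq:rounding} and \eqref{eq:subopt}, the sketch below (ours) specializes to the single-rotation case of Example \ref{ex:singlerotation}, where $\vxx = \mathrm{vec}(\M{R}) \in \Real{9}$ and $\Pi_{{\cal X}}$ is the closed-form SVD projection onto $\mathrm{SO}(3)$; the column-major vectorization is our assumption:

\begin{verbatim}
import numpy as np

def project_SO3(M):
    # nearest rotation in Frobenius norm (orthogonal Procrustes)
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

def rounding(X_v, N, d=9):
    # eq. (rounding): leading eigenvector, normalized so its leading entry is 1
    # (assumes a well-separated leading eigenvalue with nonzero first entry)
    w, V = np.linalg.eigh(X_v)                  # ascending eigenvalues
    v = V[:, -1]
    v = v / v[0]
    R_hat = project_SO3(v[1:1 + d].reshape(3, 3, order="F"))
    x_hat = R_hat.reshape(-1, order="F")        # Pi_X applied to v_x
    theta_hat = np.sign(v[1 + d:1 + d + N])     # sgn applied to v_theta
    return x_hat, theta_hat

def relative_suboptimality(f_star, p_hat):
    # eq. (subopt)
    return abs(f_star - p_hat) / (1.0 + abs(f_star) + abs(p_hat))
\end{verbatim}

The rounded $(\widehat{\vxx},\widehat{\vtheta})$ is then evaluated on the original objective to obtain $\widehat{p}$ and, from it, $\eta_s$.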
{\bf Scalability improvement}. Table \ref{table:LASvsSSR} compares the size of the SDP from our sparse relaxation \eqref{eq:sparserelax} with that from the standard Lasserre's hierarchy \eqref{eq:lasserre}, in terms of the size of the maximum positive semidefinite block $n_1$ and the number of moment constraints $m_{\mathrm{mom}}$ (in our problems, over $60\%$ of the equality constraints are moment constraints, hence $m_{\mathrm{mom}}$ is representative of the size of the SDP). For illustration purposes, Fig. \ref{fig:LASvsSSR} plots $n_1$ and $m_{\mathrm{mom}}$ as $N$ increases from $20$ to $200$, when applying \eqref{eq:lasserre} and \eqref{eq:sparserelax} to Example \ref{ex:singlerotation} ($d=9$). We can observe a drastic reduction in both $n_1$ and $m_{\mathrm{mom}}$ when using \eqref{eq:sparserelax}. Notably, when $N=200$, $n_1 > 20,000$ and $m_{\mathrm{mom}} > 100,000,000$ if using \eqref{eq:lasserre}, while $n_1 \approx 2,000$ and $m_{\mathrm{mom}} \approx 1,000,000$ if using \eqref{eq:sparserelax}. This is roughly a $10$-times reduction in $n_1$ and a $100$-times reduction in $m_{\mathrm{mom}}$. Certainly, such scalability improvement would be meaningless if \eqref{eq:sparserelax} were \emph{inexact} and failed to solve the original \eqref{eq:binaryTLS} problem to global optimality. However, as we will show in Section \ref{sec:experiments}, \eqref{eq:sparserelax} is {empirically exact across all Examples \ref{ex:singlerotation}-\ref{ex:category}, even in the presence of many outliers}. \input{sections/table-LASvsSSR} {\bf Further reduction on Example \ref{ex:multirotation}}. For multiple rotation averaging, the dimension of the geometric model is $d=2n$ (2D) or $d=9n$ (3D), where $n$ is the number of nodes of a graph. Practical rotation averaging problems in structure from motion and SLAM can have $n$ and $N$ ranging from a few hundred to a few thousand \cite{Rosen19IJRR-sesync,Eriksson18cvpr-strongDuality}. Taking $d=400, N=20$ leads to $m_{\mathrm{mom}}=16,842,001$, which is still too large. In Supplementary Material\xspace, we present a method to further reduce the size of the sparse monomial basis in \eqref{eq:sparsebasis}. We end this section with a remark about how to exploit sparsity while preserving exactness of the relaxation. \begin{remark}[Exploiting Sparsity] \label{remark:sparsity} (i) A sparse relaxation can be exact only when the dense relaxation \eqref{eq:lasserre} is \revise{exact}. Therefore, we believe it is good practice to first obtain an \revise{empirically exact} relaxation using \revise{the dense hierarchy} \eqref{eq:lasserre} at certain $\kappa$ (as we have done in \cite{Yang20neurips-onering} \revise{with extensive experimental validation}), and then try to find a sparse monomial basis at that $\kappa$. (ii) When the dense relaxation is exact, it is nontrivial to decide if a sparse relaxation will be \revise{exact} without empirical evaluation. For example, replacing \eqref{eq:sparsebasis} with $\boldsymbol{v}(\widetilde{\vxx}) = [[\vxx]_2 \,;\, \boldsymbol{\theta}]$ also gives a sparse relaxation ---the corresponding moment matrix includes all monomials in \ref{item:tlsmono1}-\ref{item:tlsmono3}--- but it is far from being exact. (iii) Parallel to our work \cite{Yang20neurips-onering}, \cite{Wang21SIOPT-tssos} has presented a methodology, {\scenario{TSSOS}}, to systematically exploit term sparsity for general POPs. However, {\scenario{TSSOS}} tends to find a larger monomial basis when compared to problem-specific techniques such as \eqref{eq:sparserelax} in this paper.
For Example \ref{ex:absolutepose} with $N=10$ measurements, the dense monomial basis has dimension $276$, our sparse basis \eqref{eq:sparsebasis} has dimension $143$, but {\scenario{TSSOS}} with ``maximal chordal extension'' finds a sparse basis that has dimension $246$ and is a strict superset of \eqref{eq:sparsebasis}. \end{remark} \section{Proof of Theorem~\ref{thm:sparserelaxtls}} \label{sec:app-proof-theorem-sparserelax} \begin{proof} (i): Every feasible point $\widetilde{\vxx}=(\vxx,\boldsymbol{\theta}) \in {\cal X} \times \{\pm 1\}^N$ of \eqref{eq:binaryTLS} leads to a rank-one lifting $\boldsymbol{v}(\widetilde{\vxx})\boldsymbol{v}(\widetilde{\vxx})^{\mathsf{T}}$ that is feasible for \eqref{eq:sparserelax} and attains the same objective value. Therefore, \eqref{eq:sparserelax} minimizes over a larger feasible set than \eqref{eq:binaryTLS}, and hence $f^\star \leq p^\star$. \revise{ (ii) \& (iii) For (ii), assume $f^\star = p^\star$ and let $\tldvxx^\star$ be any global minimizer of \eqref{eq:binaryTLS}, so that $p(\tldvxx^\star) = p^\star$. Now because $\M{X}_v = \boldsymbol{v}(\tldvxx^\star)\boldsymbol{v}(\tldvxx^\star)^{\mathsf{T}}$ is a rank-one lifting of $\tldvxx^\star$, we have that $\M{X}_v$ is feasible for the SDP \eqref{eq:sparserelax} and it attains $f(\M{X}_v) = p^\star = f^\star$. Therefore $\M{X}_v$ (and its corresponding localizing matrices $\M{X}_1,\dots,\M{X}_{l_g}$) are optimal for \eqref{eq:sparserelax}. Now we prove that if an optimal SDP solution $\MX^\star_v$ has rank one, then the relaxation is exact and a global optimizer can be extracted from $\MX^\star_v$. Towards this goal, first observe that since $\rank{\MX^\star_v} = 1$, $\MX^\star_v \succeq 0$, and $[\MX^\star_v]_{11} = 1$ (because $\MX^\star_v$ is a feasible point of \eqref{eq:sparserelax}, which requires the leading entry to be one; \emph{cf.}\xspace \eqref{eq:momentConstraintssparse}), we can perform a rank-one factorization of $\MX^\star_v$ as \begin{eqnarray} \MX^\star_v = \left[\begin{array}{c} 1 \\ \widehat{\vxx} \\ \widehat{\vtheta} \\ \widehat{\vxx\vtheta} \end{array}\right] \left[\begin{array}{cccc} 1 & \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta}^{\mathsf{T}}\end{array}\right] \\ = \left[\begin{array}{cccc} 1 & \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vxx} & \widehat{\vxx}\widehat{\vxx}^{\mathsf{T}} & \widehat{\vxx} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx} \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vtheta} & \widehat{\vtheta} \widehat{\vxx}^{\mathsf{T}} & \widehat{\vtheta} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vtheta} \widehat{\vxx\vtheta}^{\mathsf{T}} \\ \widehat{\vxx\vtheta} & \widehat{\vxx\vtheta} \widehat{\vxx}^{\mathsf{T}} & \widehat{\vxx\vtheta} \widehat{\vtheta}^{\mathsf{T}} & \widehat{\vxx\vtheta} \widehat{\vxx\vtheta}^{\mathsf{T}} \end{array}\right], \label{eq:expandrankone} \end{eqnarray} where $\widehat{\vxx} \in \Real{d}$, $\widehat{\vtheta} \in \Real{N}$, $\widehat{\vxx\vtheta} \in \Real{dN}$ and they correspond to the partition in the ``$\boldsymbol{v}$'' monomial basis in \eqref{eq:sparsebasis} (note that here we overload the ``$\widehat{\cdot}$'' symbol only in the context of this proof). Now we first show that $\widehat{\vxx\vtheta} = \widehat{\vtheta} \otimes \widehat{\vxx}$ (matching the ordering of \eqref{eq:sparsebasis}), \emph{i.e.},\xspace the entries of $\widehat{\vxx\vtheta}$ are second-order monomials in $\widehat{\vxx}$ and $\widehat{\vtheta}$ of the form $[\widehat{\vxx}]_i [\widehat{\vtheta}]_j$ for $1\leq i \leq d$ and $1\leq j \leq N$.
This is evident when we observe that asking $\MX^\star_v$ to be a moment matrix (\emph{i.e.},\xspace enforcing the moment constraints in \eqref{eq:momentConstraintssparse}) requires the block $\widehat{\vxx\vtheta}$ in \eqref{eq:expandrankone} to be just a rearrangement of the entries of the block $\widehat{\vxx}\widehat{\vtheta}^{\mathsf{T}}$ in \eqref{eq:expandrankone}, where the latter block contains all the second-order monomials of the form $[\widehat{\vxx}]_i [\widehat{\vtheta}]_j$. Then we show that $\widehat{\vxx} \in {\cal X}$ and $\widehat{\vtheta} \in \{+1,-1\}^N$, \emph{i.e.},\xspace $\widehat{\vxx}$ and $\widehat{\vtheta}$ are indeed \emph{feasible} points of the original \eqref{eq:binaryTLS} problem. This is equivalent to showing that all equality constraints hold: $h_i(\widehat{\vxx},\widehat{\vtheta}) = 0, \forall i = 1,\dots,l_h$, and all inequality constraints hold: $g_j(\widehat{\vxx},\widehat{\vtheta}) \geq 0, \forall j=1,\dots,l_g$. This follows from the fact that (a) each $h_i(\widehat{\vxx},\widehat{\vtheta}) = 0$ is enforced by one of the redundant constraints in \eqref{eq:redundantEqualityConstraintssparse}, and (b) each $g_j(\widehat{\vxx},\widehat{\vtheta}) \geq 0$ is enforced by one of the localizing constraints in \eqref{eq:localizingConstraintssparse}. At this point, we have shown that $(\widehat{\vxx},\widehat{\vtheta})$ is a feasible point of \eqref{eq:binaryTLS} that attains $p(\widehat{\vxx},\widehat{\vtheta}) = f(\MX^\star_v) = f^\star$. However, we know that $p(\widehat{\vxx},\widehat{\vtheta}) \geq p^\star$ because $p^\star$ is the minimum of \eqref{eq:binaryTLS}. Therefore, we have \begin{eqnarray} p^\star \leq p(\widehat{\vxx},\widehat{\vtheta}) = f(\MX^\star_v) = f^\star, \nonumber \end{eqnarray} but $f^\star \leq p^\star$ by construction of the semidefinite relaxation. Hence $p^\star = f^\star$ and $(\widehat{\vxx},\widehat{\vtheta})$ is globally optimal for \eqref{eq:binaryTLS}. } \end{proof} \section{Related Work} \label{sec:relatedwork} We review related work on outlier-free and outlier-robust geometric perception, while we refer the interested reader to \cite{Yang2015mpc-sdpnalplus,Wang21SIOPT-tssos} for recent progress in semidefinite programming and semidefinite relaxations. {\bf Outlier-free geometric perception} algorithms can be divided into \emph{minimal solvers} and \emph{non-minimal solvers}. Minimal solvers assume \emph{noiseless} measurements (\emph{i.e.},\xspace~$r(\vxx,\boldsymbol{z}_i)=0,\forall \; i$ in~\eqref{eq:robust}) and use the minimum number of measurements necessary to estimate $\vxx$, which typically leads to solving a system of polynomial equations~\cite{Kukelova2008ECCV-automaticGeneratorofMinimalProblemSolvers}. Non-minimal solvers account for measurement noise and estimate $\vxx$ via nonlinear least squares (NLS), \emph{i.e.},\xspace~$\rho(r) = r^2/\beta_i^2$ in~\eqref{eq:robust}. While in rare cases NLS can be solved in closed form~\cite{Horn87josa} or by solving the polynomial equations arising from the first-order optimality conditions~\cite{Kneip2014ECCV-UPnP}, in general NLS leads to nonconvex problems that are attacked using local solvers~\cite{Schonberger16cvpr-SfMRevisited} or exponential-time methods (\emph{e.g.},\xspace \emph{Branch and Bound}~\cite{Olsson09pami-bnbRegistration}). \emph{Certifiable algorithms} for outlier-free perception have recently emerged as an approach to compute globally optimal NLS solutions in polynomial time.
These algorithms relax the NLS minimization into a convex optimization, using Lasserre's hierarchy of semidefinite relaxations for \emph{polynomial optimizations}~\cite{lasserre10book-momentsOpt,Kahl07IJCV-GlobalOptGeometricReconstruction}. By solving the SDP resulting from the convex relaxations, certifiable algorithms compute global solutions to NLS problems and provide a certificate of optimality, which usually depends on the rank of the SDP solution or the duality gap. Empirically tight convex relaxations have been discovered in pose graph optimization~\cite{Carlone16TRO-planarPGO,Rosen19IJRR-sesync}, rotation averaging~\cite{Eriksson18cvpr-strongDuality,Fredriksson12accv}, triangulation~\cite{Aholt12eccv-qcqptriangulation}, 3D registration~\cite{Briales17cvpr-registration,Maron16tog-PMSDP,Chaudhury15Jopt-multiplePointCloudRegistration}, absolute pose estimation~\cite{Agostinho2019arXiv-cvxpnpl}, relative pose estimation~\cite{Briales18cvpr-global2view,Zhao20pami-relativepose}, hand-eye calibration~\cite{Heller14icra-handeyePOP} and shape and pose estimation from 2D or 3D landmarks~\cite{Yang20cvpr-perfectshape,Shi21rss-pace}. More recently, theoretical analyses of when and why the relaxations are tight are also emerging~\cite{Carlone15icra-verification,Aholt12eccv-qcqptriangulation,Eriksson18cvpr-strongDuality,Rosen19IJRR-sesync,Cifuentes17arxiv,Zhao20pami-relativepose,Chaudhury15Jopt-multiplePointCloudRegistration,Dym17Jopt-exactPMSDP,Iglesias20cvpr-PSRGlobalOptimality,Eriksson19pami-rotavgstrongduality}. Tight relaxations also enable optimality certification (\emph{i.e.},\xspace checking if a given solution is optimal), which ---in outlier-free perception--- can sometimes be performed in closed form~\cite{Carlone16TRO-planarPGO,Eriksson18cvpr-strongDuality,Garcia21IVC-certifiablerelativepose,Boumal16nips,Burer03mp,Rosen20wafr-scalableLowRankSDP,Iglesias20cvpr-PSRGlobalOptimality}. Despite being certifiably optimal, these solvers assume all measurements are inliers (\emph{i.e.},\xspace~have small noise), which rarely occurs in practice, and hence they give poor estimates even in the presence of a single outlier. {\bf Outlier-robust geometric perception} algorithms can be divided into \emph{fast heuristics} and \emph{globally optimal solvers}. Two general frameworks for designing fast heuristics are \scenario{RANSAC}~\cite{Fischler81} and \emph{graduated non-convexity} (\scenario{GNC})~\cite{Black96ijcv-unification,Yang20ral-gnc,Antonante20TRO-outlier}. {\scenario{RANSAC}} robustifies minimal solvers and acts as a fast heuristic to solve \emph{consensus maximization}~\cite{Chin17slcv-maximumConsensusAdvances}, while {\scenario{GNC}} robustifies non-minimal solvers and acts as a fast heuristic to solve \emph{M-estimation} (\emph{i.e.},\xspace~using a robust cost function $\rho$ in~\eqref{eq:robust}). Local optimization is also a popular fast heuristic~\cite{Schonberger16cvpr-SfMRevisited,Agarwal13icra} for the case where an initial guess is available. Approximate but deterministic algorithms have also been designed to solve consensus maximization \cite{Le19pami-deterministicApproximateMC}. On the other hand, globally optimal solvers are typically designed using Branch and Bound~\cite{Bazin12accv-globalRotSearch,Bustos18pami-GORE,Izatt17isrr-MIPregistration,Yang2014ECCV-optimalEssentialEstimationBnBConsensusMax,Paudel15iccv-robustSOS,Li09cvpr-robustFitting,Chin15cvpr-CMTreeAstar,Li07iccv-3DRegistration}.
\emph{Certifiable outlier-robust algorithms} relax problem~\eqref{eq:robust} with a robust cost into a tight convex optimization. While certain robust costs, such as L1~\cite{Wang13ima} and Huber~\cite{Carlone18ral-robustPGO2D}, are already convex, they have low breakdown points (\emph{i.e.},\xspace they can be compromised by a single outlier~\cite{Maronna19book-robustStats}). Problem-specific certifiably robust algorithms have been proposed to deal with high-breakdown-point formulations, such as the TLS cost~\cite{Yang19rss-teaser,Yang19iccv-quasar,Lajoie19ral-DCGM}. \maybeOmit{Even optimality certification becomes harder and problem-specific in the presence of outliers, due to the lack of a closed-form characterization of the dual variables~\cite{Yang20tro-teaser}.} \section{Extra Experimental Results} \label{sec:supp-experiments} In this section, we report extra experimental results. \subsection{Mesh Registration} \input{sections/fig-mr} \revise{ {\bf Setup}. We first simulate a random mesh by sampling a set of $N$ 3D planes $\{\boldsymbol{q}_i, \boldsymbol{v}_i\}_{i=1}^N$, where $\boldsymbol{v}_i$ is the unit normal of the plane (obtained by sampling a random 3D direction) and $\boldsymbol{q}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ is an arbitrary point on the plane. We then generate a random point on each plane via $\boldsymbol{q}_i' = \boldsymbol{q}_i + \boldsymbol{w}_i \times \boldsymbol{v}_i$, where $\boldsymbol{w}_i \sim {\cal N}({\mathbf 0},{\mathbf I}_3)$ is a random vector and ``$\times$'' denotes the vector cross product (since $\boldsymbol{w}_i \times \boldsymbol{v}_i$ is orthogonal to the normal $\boldsymbol{v}_i$, the point $\boldsymbol{q}_i'$ stays on the $i$-th plane). After this, we generate a random groundtruth transformation $(\MR^{\circ},\vt^{\circ})$, and, if $(\boldsymbol{p}_i,\boldsymbol{u}_i)$ is an inlier, transform $(\boldsymbol{q}_i',\boldsymbol{v}_i)$ to be $\boldsymbol{p}_i = \MR^{\circ} \boldsymbol{q}_i' + \vt^{\circ} + \boldsymbol{\varepsilon}_{pi}$ and $\boldsymbol{u}_i = \texttt{normalize}(\MR^{\circ} \boldsymbol{v}_i + \boldsymbol{\varepsilon}_{ni})$, where $\boldsymbol{\varepsilon}_{pi},\boldsymbol{\varepsilon}_{ni} \sim {\cal N}({\mathbf 0},0.01^2 {\mathbf I}_3)$ are random Gaussian noise, and $\texttt{normalize}(\boldsymbol{v}) \triangleq \boldsymbol{v} / \norm{\boldsymbol{v}}$ normalizes a vector to have unit norm. If $(\boldsymbol{p}_i,\boldsymbol{u}_i)$ is an outlier, then $\boldsymbol{p}_i$ is a random 3D point and $\boldsymbol{u}_i$ is a random 3D direction. Given the mesh $\{\boldsymbol{q}_i, \boldsymbol{v}_i\}_{i=1}^N$ and the noisy point cloud with normals $\{\boldsymbol{p}_i,\boldsymbol{u}_i\}_{i=1}^N$, we seek the best transformation $(\MR^{\star},\vt^{\star})$ to \emph{align the point cloud to the mesh} using the residual defined in Example \ref{ex:mesh}. After $(\MR^{\star},\vt^{\star})$ is found, its \emph{inverse} transformation is used to compute the estimation errors w.r.t.\xspace $(\MR^{\circ},\vt^{\circ})$ (recall that $(\MR^{\circ},\vt^{\circ})$ is generated to {align the mesh to the point cloud}). We test $N=20$ and $N=100$ with increasing outlier rates.
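The sampling procedure just described can be sketched as follows (a minimal Python version, ours; the actual experiments use Matlab):

\begin{verbatim}
import numpy as np

def random_rotation(rng):
    # orthonormalize a Gaussian matrix and fix the determinant to +1
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q @ np.diag(np.sign(np.diag(R)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def sample_mesh_registration(N, outlier_rate, noise=0.01, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((N, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit plane normals v_i
    q = rng.standard_normal((N, 3))                 # points q_i on the planes
    w = rng.standard_normal((N, 3))
    q2 = q + np.cross(w, v)        # q_i' = q_i + w_i x v_i stays on plane i
    R, t = random_rotation(rng), rng.standard_normal(3)
    p = q2 @ R.T + t + noise * rng.standard_normal((N, 3))
    u = v @ R.T + noise * rng.standard_normal((N, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    n_out = int(round(outlier_rate * N))            # first n_out pairs -> outliers
    p[:n_out] = rng.standard_normal((n_out, 3))
    u[:n_out] = rng.standard_normal((n_out, 3))
    u[:n_out] /= np.linalg.norm(u[:n_out], axis=1, keepdims=True)
    return (q, v), (p, u), (R, t)
\end{verbatim}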
{\bf Results}. Fig.~\ref{fig:exp-mr-results}(a)-(b) plots the evaluation metrics for $N=20$ and $N=100$, respectively. The results are mostly the same as for point cloud registration in Fig. \ref{fig:exp-pcr-results}(a)-(b), except that when $N=20$, the relaxation is not always tight at $70\%$ and $80\%$ outlier rates (from the $\eta_s$ plot of {\scenario{MOSEK}} we see one inexact run at $70\%$ and three inexact runs at $80\%$). {\bf Mesh registration with {\scenario{TeddyBear}}}. We perform mesh registration using the {\scenario{TeddyBear}} mesh model from the {\scenario{HomebrewedDB}} dataset \cite{Kaskman19-homebrewedDB}. From the {\scenario{TeddyBear}} mesh, we generate a noisy point cloud by densely sampling points on each face of the mesh with additive Gaussian noise, and transform the point cloud using a random rigid transformation. We use the $\texttt{pcnormals}$ function in Matlab to estimate surface normals for each point in the point cloud. We then randomly sample $N=50$ point-to-face correspondences with outliers, and use {\scenario{STRIDE}} to estimate the rigid transformation. Fig.~\ref{fig:exp-mr-results}(c-1) shows an instance with $50\%$ outliers, where {\scenario{GNC}} successfully returns the globally optimal solution and {\scenario{STRIDE}} computes a certificate of optimality ($\eta_s = 2.5\ee{-8}$). Fig.~\ref{fig:exp-mr-results}(c-2) shows an instance with $70\%$ outliers, where {\scenario{GNC}} converges to a suboptimal solution but {\scenario{STRIDE}} escapes the local minimum and finds the globally optimal solution with a certificate of optimality ($\eta_s = 1.1\ee{-7}$). } \subsection{Robustness of the TLS Estimator} \revise{ Here we show that the accuracy of the TLS estimator increases with the number of inliers and is comparable to that of a least-squares solution computed from the inliers only. We perform an experiment in single rotation averaging with the outlier rate fixed at $80\%$ and the number of measurements increased from $N=30$ to $N=100$. At each $N$, we perform 20 random simulations and compute the rotation estimation error w.r.t.\xspace the groundtruth. Fig. \ref{fig:estimationcontract}(c) shows that all TLS solutions are certified as globally optimal. Fig. \ref{fig:estimationcontract}(a) shows that, as $N$ increases and the number of inliers increases, the estimation error in general decreases (both in terms of the average estimation error and the quantiles shown by the boxplot). This demonstrates the empirical robustness of the TLS estimator against outliers and its ability to exploit information from the inliers. Using the same experimental setup, we compare the rotation error between the TLS estimator and the least squares (LS) estimator (after running TLS, we discard the measurements deemed as outliers by TLS and run LS on the remaining inliers). Fig. \ref{fig:estimationcontract}(b) shows that the TLS estimator is exactly the same as the LS estimator after discarding outliers (up to numerical inaccuracies; the rotation errors are shown in degrees). This demonstrates that the outliers do not affect the TLS solution, and that the TLS estimator is truly robust against outliers. } \begin{figure*}[h] \begin{center} \begin{minipage}{\textwidth} \begin{tabular}{ccc}% \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{errR_tlsvsgt.pdf} \\ {\small (a) TLS vs. Groundtruth} \end{minipage} & \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{errR_tlsvsls.pdf} \\ {\small (b) TLS vs. LS } \end{minipage} & \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{subopt_increase_inlier.pdf} \\ {\small (c) Certified Suboptimality} \end{minipage} \end{tabular} \end{minipage} \caption{Rotation estimation error under an increasing number of measurements for single rotation averaging with the outlier rate fixed at $80\%$ (thus an increasing number of inliers). (a) Rotation error between the TLS estimate and the groundtruth rotation. (b) Rotation error between the TLS estimate and the least squares (LS) estimate.
(c) All TLS solutions are certified as optimal. Rotation errors are shown in degrees. \label{fig:estimationcontract}} \end{center} \end{figure*} \subsection{Scaling the Noise Bound} \revise{ We perform an experiment to investigate how the optimal solutions change when the noise bound $\beta$ is varied, using the single rotation averaging problem with $N=50$ measurements and $50\%$ outlier rate. Fig. \ref{fig:scaleboundsra} plots the rotation estimation error and certified suboptimality w.r.t.\xspace different scalings of the original noise bound $\beta$ (within each random simulation the measurements are fixed; only the noise bound $\beta$ is chosen differently). We can see that (1) there is a wide range of $\beta$ that leads to certifiably optimal and accurate estimation, and (2) when the noise bound $\beta$ is slightly perturbed (\emph{e.g.},\xspace decreased to $90\%$ or increased to $110\%$), the optimal solution remains the same for most problem instances, as shown by the similar boxplots in Fig. \ref{fig:scaleboundsra}(a) at horizontal locations $0.9$, $1$, and $1.1$ (in fact, $15$ out of the $20$ runs have exactly the same solutions). } \begin{figure*}[h] \begin{center} \begin{minipage}{\textwidth} \centering \begin{tabular}{cc}% \begin{minipage}{5.5cm}% \centering% \includegraphics[width=\columnwidth]{sra_errR_scalebound.pdf} \\ {\small (a) Rotation estimation error} \end{minipage} & \begin{minipage}{6cm}% \centering% \includegraphics[width=\columnwidth]{sra_subopt_scalebound.pdf} \\ {\small (b) Certified suboptimality } \end{minipage} \end{tabular} \end{minipage} \caption{(a) Rotation estimation error and (b) certified suboptimality w.r.t.\xspace the scaling of the noise bound $\beta$ in single rotation averaging with $N=50$ and $50\%$ outlier rate. \label{fig:scaleboundsra}} \end{center} \end{figure*} \subsection{Point Cloud Registration on {\scenario{3DMatch}}} We provide $10$ extra scan matching results obtained by {\scenario{STRIDE}} on the {\scenario{3DMatch}} dataset \cite{Zeng17cvpr-3dmatch} in Fig. \ref{fig:supp-3dmatch}. {\scenario{STRIDE}} returned the globally optimal transformation estimates in all cases. \input{sections/supp-fig-3dmatch} \subsection{Absolute Pose Estimation on {\scenario{SPEED}}} We provide extra satellite pose estimation results obtained by {\scenario{STRIDE}} on the {\scenario{SPEED}} dataset \cite{Sharma19arxiv-SPEED} in Fig. \ref{fig:supp-speed-results}. In all six image instances with $2$-$5$ outliers, {\scenario{STRIDE}} returned accurate pose estimates with global optimality certificates. \input{sections/supp-fig-speed} \subsection{Vehicle Pose and Shape Estimation on {\scenario{ApolloScape}}} We provide vehicle pose and shape estimation results obtained by {\scenario{STRIDE}} on the {\scenario{ApolloScape}} dataset \cite{Wang19pami-apolloscape} in Fig. \ref{fig:supp-apollo}, whose first row also includes the four examples presented in Fig. \ref{fig:exp-catreg-results}(c-1). We provide details of each problem instance, such as $N$, $n_1$ and $m$, as well as evaluation metrics, such as the $(\M{R},\boldsymbol{t})$ errors, the relative suboptimality $\eta_s$, and {\scenario{STRIDE}}'s computation time. In all cases, {\scenario{STRIDE}} returned accurate pose and shape estimates with global optimality certificates. \input{sections/supp-fig-apollo} \subsection*{Notation} {\bf Scalars, vectors, matrices}.
We use lowercase characters (\emph{e.g.},\xspace $a$) to denote real scalars, bold lowercase characters (\emph{e.g.},\xspace $\va$) for real (column) vectors, and bold uppercase characters (\emph{e.g.},\xspace $\M{A}$) for real matrices. ${\mathbf I}_d$ denotes the identity matrix of size $d \times d$, and ${\mathbf 0}$ denotes the all-zero vector or matrix. Given $\M{A} \in \Real{m \times n}$, $a_{ij}$ denotes the $(i,j)$-th entry of $\M{A}$, and $[\M{A}]_{{\cal I},{\cal J}}$ denotes the submatrix of $\M{A}$ formed by indexing rows ${\cal I} \subseteq [m]$ and columns ${\cal J} \subseteq [n]$, where $[n] \triangleq \{1,\dots,n \}$ is the set of positive integers up to $n$. For a vector $\boldsymbol{v} \in \Real{n}$, we shorthand $v_i$ for its $i$-th entry and $\boldsymbol{v}_{\cal I}$ for its entries indexed by ${\cal I} \subseteq [n]$. For $\M{A},\M{B} \in \Real{m \times n}$, $\inprod{\M{A}}{\M{B}} \triangleq \sum_{i=1}^m \sum_{j=1}^n a_{ij} b_{ij}$ denotes the usual inner product between real matrices. $\trace{\M{A}} \triangleq \sum_{i=1}^n a_{ii}$ denotes the trace of a square matrix $\M{A} \in \Real{n \times n}$. We use $\norm{\cdot}$ to denote the $\ell_2$ norm of a vector and the Frobenius norm of a matrix, \emph{i.e.},\xspace $\norm{\va} \triangleq \sqrt{\inprod{\va}{\va}}$ for any $\va \in \Real{n}$ and $\norm{\M{A}} \triangleq \sqrt{\inprod{\M{A}}{\M{A}}}$ for any $\M{A} \in \Real{m \times n}$. $\norm{\va}_1 \triangleq \sum_{i=1}^n \abs{a_i}$ denotes the $\ell_1$ norm of a vector. $[\M{A},\M{B}]$ denotes the \emph{horizontal} concatenation, while $[\M{A} \,;\, \M{B}]$ denotes the \emph{vertical} concatenation, for $\M{A},\M{B}$ of proper sizes. For $a \in \Real{}$, the symbol $\ceil{a}$ returns the smallest integer greater than or equal to $a$. {\bf Sets}. We use $\sym{n}$ to denote the space of $n \times n$ real symmetric matrices, and $\psd{n}$ (resp. $\pd{n}$) to denote the set of matrices in $\sym{n}$ that are \emph{positive semidefinite} (resp. definite). We also write $\M{X} \succeq 0$ (resp. $\M{X} \succ 0$) to indicate that $\M{X}$ is positive semidefinite (resp. definite). $\usphere{d-1} \triangleq \{ \boldsymbol{v} \in \Real{d} \mid \norm{\boldsymbol{v}} = 1 \}$ denotes the $d$-dimensional unit sphere. We denote by $\ensuremath{\mathrm{SO}(d)}\xspace \triangleq \{ \M{R} \in \revise{\Real{d\times d}} \mid \M{R}^{\mathsf{T}} \M{R} = {\mathbf I}_d, \det\parentheses{\M{R}} = +1 \}$ the $d$-dimensional \emph{special orthogonal group} (rotation matrices). $\abs{{\cal A}}$ denotes the cardinality of a finite set ${\cal A}$. $\int_{+}$ (resp. $\int_{++}$) denotes the set of nonnegative (resp. positive) integers, and $\mathbb{Q}$ denotes the set of rational numbers. \section{Preliminaries} \label{sec:preliminaries} This section reviews key facts about multi-block semidefinite programming \cite{tutuncu03MP-SDPT3} (Section \ref{sec:pre-sdp}), and provides an introduction to polynomial optimization and Lasserre's semidefinite relaxation hierarchy~\cite{Lasserre01siopt-lasserrehierarchy} (Section \ref{sec:pre-pop}). While somewhat mathematically dense, these preliminaries are designed as a pragmatic introduction for the non-expert reader. \subsection{Semidefinite Programming} \label{sec:pre-sdp} A \emph{multi-block} semidefinite programming (SDP) problem is an optimization problem in the following \emph{primal} form \cite{tutuncu03MP-SDPT3}: \begin{equation}\label{eq:primalSDP} \min_{\M{X} \in \mathbb{X}} \cbrace{\inprod{\M{C}}{\M{X}} \mid {\cal A} (\M{X}) = \boldsymbol{b},\ \M{X} \in {\cal K}} \tag{P}
\end{equation} where the variable $\M{X} = (\M{X}_1,\dots,\M{X}_l)$ is a collection of $l$ square matrices (the ``blocks'') with $\M{X}_i \in \Real{n_i \times n_i}$ for $i=1,\ldots,l$ (conveniently ordered such that $n_1\geq \dots \geq n_l$); the domain $\mathbb{X} \triangleq \sym{n_1} \times \dots \times \sym{n_l}$ restricts the matrices to be symmetric. The objective is a linear combination of the matrices in $\M{X}$, \emph{i.e.},\xspace $\inprod{\M{C}}{\M{X}} \triangleq \sum_{i=1}^l \inprod{\M{C}_i}{\M{X}_i}$ (for given matrices $\M{C}_i\in \sym{n_i}, i=1,\ldots,l$). The problem includes independent linear constraints ${\cal A} (\M{X}) = \boldsymbol{b}$ on $\M{X}$, where: \begin{equation} {\cal A} (\M{X}) \triangleq \sbracket{ \sum_{i=1}^l \inprod{\M{A}_{i1}}{\M{X}_i} \,;\, \dots \,;\, \sum_{i=1}^l \inprod{\M{A}_{im}}{\M{X}_i} } \in \Real{m} \end{equation} for given matrices $\M{A}_{ij}\!\in\!\sym{n_i}, i\!=\!1,\ldots,l$ and $j\!=\!1,\dots,m$, and $\boldsymbol{b}\!\in\!\Real{m}$ is a given vector. Finally, the constraint $\M{X}\!\in\!{\cal K}$ enforces that each matrix in $\M{X}$ is positive semidefinite (\emph{i.e.},\xspace ${\cal K}\!\triangleq\!\psd{n_1}\!\times\!\dots\!\times\!\psd{n_l}$ is a product of $l$ positive semidefinite cones). \revise{We also write $\M{X} \succeq 0$ to indicate that each matrix in $\M{X}$ is positive semidefinite when $\M{X}$ is a collection of matrices (note that we need the notation $\M{X} \in {\cal K}$ for describing details of our SDP solver).} The feasible set of \eqref{eq:primalSDP}, denoted by $\calF_{\mathrm{P}}\!\triangleq\!\{\!\M{X}\!\in\!\mathbb{X}\!\mid\!{\cal A}(\M{X})\!=\!\boldsymbol{b},\M{X}\!\in\!{\cal K}\}$, \mbox{is called a \emph{spectrahedron} \cite{Blekherman12Book-sdpandConvexAlgebraicGeometry}.} The Lagrangian \emph{dual} of \eqref{eq:primalSDP} is another multi-block SDP: \begin{equation}\label{eq:dualSDP} \max_{\boldsymbol{y} \in \Real{m}, \M{S} \in \mathbb{X}} \cbrace{\inprod{\boldsymbol{b}}{\boldsymbol{y}} \mid \calA^{*}(\boldsymbol{y}) + \M{S} = \M{C},\ \M{S} \in {\cal K}} \tag{D} \end{equation} where $\calA^{*}: \Real{m} \rightarrow \mathbb{X}$ is the adjoint of ${\cal A}$ and is defined as: \begin{equation} \label{eq:adjointAmultiblk} \calA^{*}(\boldsymbol{y}) \triangleq \left( \sum_{j=1}^m y_j \M{A}_{1j},\dots,\sum_{j=1}^m y_j \M{A}_{lj} \right) \in \mathbb{X} \end{equation} and the equality $\calA^{*}(\boldsymbol{y}) + \M{S} = \M{C}$ is enforced block-wise. Under mild assumptions (\emph{e.g.},\xspace Slater's condition \cite{Boyd04book}), \emph{strong duality} holds between \eqref{eq:primalSDP} and \eqref{eq:dualSDP} (\emph{i.e.},\xspace the minimum of \eqref{eq:primalSDP} equals the maximum of \eqref{eq:dualSDP}). In this case, $(\MX^\star,\vy^\star,\MS^\star) \in \mathbb{X} \times \Real{m} \times \mathbb{X}$ is simultaneously \emph{optimal} for \eqref{eq:primalSDP}-\eqref{eq:dualSDP} if and only if the following KKT conditions hold \beq \begin{array}{ll}\label{eq:sdpKKT} \text{\grayout{primal feasibility}}: & {\cal A}(\MX^\star) = \boldsymbol{b}, \MX^\star \in {\cal K}, \\ \text{\grayout{dual feasibility}}: & \calA^{*}(\vy^\star) + \MS^\star = \M{C}, \MS^\star \in {\cal K}, \\ \text{\grayout{complementarity}}: & \inprod{\MX^\star}{\MS^\star} = 0.
\end{array} \eeq The KKT conditions \eqref{eq:sdpKKT} imply strong duality because \begin{equation} \begin{split} 0 & = \inprod{\MX^\star}{\MS^\star} = \inprod{\MX^\star}{\M{C} - \calA^{*}(\vy^\star)} \\ & = \inprod{\M{C}}{\MX^\star} - \inprod{{\cal A}(\MX^\star)}{\vy^\star} = \inprod{\M{C}}{\MX^\star} - \inprod{\boldsymbol{b}}{\vy^\star} . \end{split} \end{equation} Given $(\M{X},\boldsymbol{y},\M{S}) \in {\cal K} \times \Real{m} \times {\cal K}$, we measure its feasibility and optimality using the standard relative KKT residuals \beq \begin{array}{ll}\label{eq:KKTresiduals} \eta_p \triangleq & \Vert {\cal A}(\M{X}) - \boldsymbol{b} \Vert / ( 1+\norm{\boldsymbol{b}}), \\ \eta_d \triangleq & \Vert {\calA^{*}(\boldsymbol{y}) + \M{S} - \M{C}} \Vert / ( 1+\norm{\M{C}} ),\\ \eta_g \triangleq & \abs{\inprod{\M{C}}{\M{X}} - \inprod{\boldsymbol{b}}{\boldsymbol{y}} } / ( 1 + \abs{ \inprod{\M{C}}{\M{X}} } + \abs{ \inprod{\boldsymbol{b}}{\boldsymbol{y}} } ), \end{array} \eeq where $\norm{\M{X}} = \sum_{i=1}^l \norm{\M{X}_i}$ for any $\M{X} \in \mathbb{X}$. We define $\eta_{\max} \triangleq \max\{\eta_p,\eta_d,\eta_g \}$ as the \emph{maximum KKT residual}. {\bf SDP solvers}. The most robust approach for solving the SDP \eqref{eq:primalSDP} (and~\eqref{eq:dualSDP}) is based on \emph{primal-dual interior point methods} (IPMs) \cite{Alizadeh98siam-ipmSDP,todd1998nesterov}, \emph{e.g.},\xspace~{\scenario{SDPT3} \cite{tutuncu03MP-SDPT3} and \scenario{MOSEK} \cite{mosek}}. For problems of small to medium size (\emph{e.g.},\xspace $n_1 \leq 5000, m \leq 50,000$), IPMs can solve the SDP to arbitrary accuracy, \emph{i.e.},\xspace $\eta_{\max} < \varepsilon$ for $\varepsilon$ arbitrarily small, with a typical per-iteration complexity of ${\cal O}(n_1^3 + m^2 n_1^2 + m^3)$.\footnote{${\cal O}(n_1^3)$ for the spectral decomposition of dense primal and dual iterates $(\M{X},\M{S})$, ${\cal O}(m^2 n_1^2)$ for forming the Schur complement system, and ${\cal O}(m^3)$ for factorizing and solving the Schur complement system.} If each linear constraint only involves a small number of blocks (\emph{i.e.},\xspace for each $j=1,\dots,m$, $\M{A}_{ij} = {\mathbf 0}$ for many blocks $i=1,\dots,l$), then IPMs can be made much more efficient using \emph{dualization} \cite{Zhang20MP-sparseSDP}.\footnote{\revise{``Dualization'' switches the primal-dual data structure in numerical solvers (\emph{e.g.},\xspace writing the dual \eqref{eq:dualSDP} with the structure of the primal \eqref{eq:primalSDP} such that $\boldsymbol{y}$ is represented as an unconstrained cone, or a difference of two nonnegative cones, with dimension $m$) \cite{lofberg09OMS-dualize}. When sparsity exists, dualization can lead to better numerical performance.}} Nevertheless, such sparsity is not always present, and generally IPMs cannot solve large-scale problems on an ordinary workstation. First-order methods based on ADMM and the Augmented Lagrangian, \emph{e.g.},\xspace~{\scenario{CDCS}} \cite{Zheng20MP-CDCS} and {\scenario{SDPNAL+}} \cite{Yang2015mpc-sdpnalplus}, can handle large-scale problems but exhibit slow convergence, and hence can only obtain solutions of moderate accuracy. For single-block problems ($l=1$) with low-rank solutions (\emph{i.e.},\xspace $\rank{\MX^\star} \ll n_1 $) and $m = {\cal O}(n_1)$, the Burer-Monteiro (B-M) low-rank factorization method \cite{Burer03mp,Boumal16nips} is preferable.
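For reference, the residuals in \eqref{eq:KKTresiduals} take only a few lines to evaluate; the sketch below (ours) covers the single-block case $l=1$, and the multi-block version simply sums Frobenius norms over the blocks:

\begin{verbatim}
import numpy as np

def kkt_residuals(X, y, S, C, A_list, b):
    # Relative KKT residuals, cf. eq. (KKTresiduals), for a single-block SDP.
    # A_list: list of m symmetric matrices A_j defining A(X)_j = <A_j, X>.
    AX = np.array([np.tensordot(Aj, X) for Aj in A_list])   # A(X)
    Aty = sum(yj * Aj for yj, Aj in zip(y, A_list))         # adjoint A*(y)
    cX, by = float(np.tensordot(C, X)), float(b @ y)
    eta_p = np.linalg.norm(AX - b) / (1.0 + np.linalg.norm(b))
    eta_d = np.linalg.norm(Aty + S - C) / (1.0 + np.linalg.norm(C))
    eta_g = abs(cX - by) / (1.0 + abs(cX) + abs(by))
    return max(eta_p, eta_d, eta_g)                         # eta_max
\end{verbatim}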
Section \ref{sec:introduction} mentioned the success of SDP relaxations in solving \emph{outlier-free} perception problems. This success is attributed to the following facts: (a) most of the SDPs arising in outlier-free estimation have $n_1 < 100$ and $m < 1000$, and can be solved by IPMs in less than one second; (b) although some SDPs (\emph{e.g.},\xspace~\cite{Rosen19IJRR-sesync}) can have $n_1 > 10,000$, they can be efficiently solved by B-M because the optimal solution is low-rank and $m \approx n_1$ \cite{Rosen20wafr-scalableLowRankSDP}. {\bf Challenges}. Unfortunately, \emph{none} of the existing solvers can solve the SDPs presented in this paper to a desired accuracy. In particular, our SDPs have $n_1 < 5000$ but $m = {\cal O}(n_1^2)$ as large as a few million, rendering IPMs and B-M factorization inapplicable. Moreover, our SDPs admit rank-one optimal solutions and are necessarily degenerate~\cite{Alizadeh97mp-nondegenerateSDP} (loosely speaking, degeneracy is a property that often leads to slower convergence in SDP solvers and prevents the application of B-M). Our previous work \cite{Yang21arxiv-stride} shows that first-order methods perform poorly on degenerate problems. \maybeOmit{ {\bf New Frontiers}. Large-scale degenerate SDPs are an unsolved puzzle in the mathematical optimization community \cite{Yang2015mpc-sdpnalplus}. {\scenario{STRIDE}}, originally proposed in~\cite{Yang21arxiv-stride}, not only achieves strong performance on solving degenerate SDPs in certifiable outlier-robust perception in this paper, but also enables solving degenerate SDP relaxations from mathematics and machine learning that were previously deemed too difficult to be solved~\cite{Yang21arxiv-stride,Yang21report-STRIDE}.} \subsection{Polynomial Optimization and Lasserre's Hierarchy} \label{sec:pre-pop} {\bf Polynomial optimization}. Given $\vxx = [x_1 \,;\, x_2 \,;\, \ldots \,;\, x_d] \in \Real{d}$, a \emph{monomial} in $\vxx$ is a product of $x_i$'s with \emph{nonnegative} integer exponents, \emph{i.e.},\xspace $\vxx^{\boldsymbol{\alpha}} \triangleq x_1^{\alpha_1}\cdots x_d^{\alpha_d}$ for $\boldsymbol{\alpha} \in \int_{+}^d$ (for instance $x_1^2 x_5 x_6^3$ is a monomial). The sum of the exponents, $\norm{\boldsymbol{\alpha}}_1$, \revise{or $\inprod{ {\mathbf{1}} }{\boldsymbol{\alpha}}$,} is called the \emph{degree} of the monomial (\emph{e.g.},\xspace the monomial $x_1^2 x_5 x_6^3$ has degree $6$). A real \emph{polynomial} $p(\vxx)$ is a finite sum of monomials with real coefficients. \revise{We shorthand $p$ in place of $p(\vxx)$ when the variable $\vxx$ is clear.} The degree of a polynomial $p$, denoted by $\deg{p}$, is the \emph{maximum} degree of its monomials. The ring of real polynomials is denoted by $\polyring{\vxx}$. A standard polynomial optimization problem (POP) reads \begin{equation}\label{eq:pop} p^\star \triangleq \min_{\vxx \in \Real{d}} \cbrace{p(\vxx) \ \middle\vert\ \substack{ \displaystyle h_i(\vxx) = 0, i=1,\dots,l_h \\ \displaystyle g_j(\vxx) \geq 0, j = 1,\dots,l_g } }, \tag{POP} \end{equation} where $p, h_i, g_j \in \polyring{\vxx}$. Problem \eqref{eq:pop} is easily seen to be NP-hard \cite{lasserre10book-momentsOpt}, \emph{e.g.},\xspace it can model combinatorial binary constraints $x_i \in \{+1,-1\}$ via $x_i^2 - 1 = 0,\ i=1,\dots,d$. {\bf Lasserre's hierarchy}. We now give a simplified (and somewhat less conventional) introduction to Lasserre's hierarchy that is sufficient for understanding our paper. For a comprehensive treatment, we refer the reader to~\cite{lasserre10book-momentsOpt}.
We define $[\vxx]_{\kappa} \triangleq \{ \vxx^{\boldsymbol{\alpha}} \mid \norm{\boldsymbol{\alpha}}_1 \!\leq \!\kappa, \boldsymbol{\alpha} \!\in\! \int_{+}^d \}$ to be the \revise{vector} of monomials of degree up to $\kappa$. For example, if $\vxx = [x_1 \,;\, x_2]$ and $\kappa=2$, then $[\vxx]_2 = [1 \,;\, x_1 \,;\, x_2 \,;\, x_1^2 \,;\, x_1 x_2 \,;\, x_2^2]$. The dimension of $[\vxx]_\kappa$ is $\binomial{d}{\kappa} \triangleq \nchoosek{d+\kappa}{\kappa}$. With $[\vxx]_\kappa$, we form the so-called \emph{moment matrix} $\M{X}_{\kappa} \triangleq [\vxx]_\kappa [\vxx]_{\kappa}^{\mathsf{T}}$. For instance, for $\vxx = [x_1 \,;\, x_2]$ and $\kappa=2$ (\emph{cf.}\xspace with $[\vxx]_2$ above): \begin{equation}\label{eq:momentMatrix} \M{X}_{\kappa} \triangleq [\vxx]_2 [\vxx]_2^{\mathsf{T}} \!=\! \small{ \left[ \begin{array}{cccccc} 1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 \\ x_1 & x_1^2 & x_1 x_2 & x_1^3 & x_1^2 x_2 & x_1 x_2^2 \\ x_2 & x_1 x_2 & x_2^2 & x_1^2 x_2 & x_1 x_2^2 & x_2^3 \\ x_1^2 & x_1^3 & x_1^2 x_2 & x_1^4 & x_1^3 x_2 & x_1^2 x_2^2 \\ x_1 x_2 & x_1^2 x_2 & x_1 x_2^2 & x_1^3 x_2 & x_1^2 x_2^2 & x_1 x_2^3 \\ x_2^2 & x_1 x_2^2 & x_2^3 & x_1^2 x_2^2 & x_1 x_2^3 & x_2^4 \end{array} \right] }. \end{equation} By construction, $\M{X}_{\kappa} \in \psd{\binomial{d}{\kappa}}$ is positive semidefinite and has $\rank{\M{X}_{\kappa}} = 1$. Moreover, the set of \emph{unique} entries in $\M{X}_{\kappa}$ is simply $[\vxx]_{2\kappa}$, \emph{i.e.},\xspace the set of monomials of degree up to $2\kappa$ (these monomials typically appear multiple times in $\M{X}_{\kappa}$, \emph{e.g.},\xspace see $x_1 x_2$ in eq.~\eqref{eq:momentMatrix}). Therefore, a key fact is that \emph{---for a suitable matrix $\M{A}$--- the linear function $\inprod{\M{A}}{\M{X}_{\kappa}}$ can express any polynomial in $\vxx$ of degree up to $2\kappa$.} The key idea of Lasserre's hierarchy is to (i) rewrite~\eqref{eq:pop} using the moment matrix $\M{X}_{\kappa}$, (ii) relax the (non-convex) rank-1 constraint on $\M{X}_{\kappa}$, and (iii) add redundant constraints that are trivially satisfied in~\eqref{eq:pop}; as we show below, this leads to a \emph{convex} semidefinite program. \emph{(i) Rewriting~\eqref{eq:pop} using $\M{X}_{\kappa}$}. We pick a positive integer $\kappa\!\in\!\int_{++}$ (the \emph{order} of the relaxation) such that $2\kappa \geq \max \{\deg{p}\!,\deg{h_1}\!, \ldots, \deg{h_{l_h}}\!, \deg{g_1}\!, \ldots, \deg{g_{l_g}}\!\}$ (this way we can express both objective function and constraints using $\M{X}_{\kappa}$). For instance, we can rewrite the objective and the equality constraints as: \beq \begin{array}{ll}\label{eq:objective} \!\!\!\!\!\!\text{\grayout{objective}}: & \inprod{\M{C}_1}{\M{X}_\kappa} \\ \end{array} \eeq \beq \begin{array}{ll} \label{eq:eqConstraints1} \!\!\!\text{\grayout{equality constraints}}: & \!\!\! \inprod{\M{A}_{\mathrm{eq},j}}{\M{X}_\kappa} = 0, \; j=1,\ldots,l_h \\ \end{array} \eeq for suitable matrices $\M{C}_1$ and $\M{A}_{\mathrm{eq},j}$. Note that using $\M{X}_{\kappa}$ is already a relaxation since we are no longer enforcing the entries of $\M{X}_{\kappa}$ to be monomials (\emph{e.g.},\xspace we do not enforce the entry $x_1 x_2$ in~\eqref{eq:momentMatrix} to be the product of the entries $x_1$ and $x_2$, which would be a non-convex constraint). \emph{(ii) Relaxing the (non-convex) rank-$1$ constraint on $\M{X}_{\kappa}$}.
In step (i) we saw that the objective and constraints in~\eqref{eq:pop} can be rewritten as linear (hence convex) functions of $\M{X}_\kappa$. However, $\M{X}_\kappa$ still belongs to the set of positive-semidefinite rank-1 matrices, which is a non-convex set due to the rank constraint. Therefore, we simply relax the rank constraint and only enforce: \beq \begin{array}{ll} \label{eq:eqMomentIsPSD} \!\!\!\text{\grayout{moment matrix}}: & \M{X}_\kappa \succeq 0. \\ \end{array} \eeq \emph{(iii) Adding redundant constraints}. Since we have relaxed~\eqref{eq:pop} by re-parametrizing it in $\M{X}_\kappa$ and dropping the rank constraint, the final step to obtain Lasserre's relaxation consists in adding extra constraints to make the relaxation tighter. First of all, we observe that there are multiple repeated entries in the moment matrix (\emph{e.g.},\xspace in~\eqref{eq:momentMatrix}, the entry $x_1 x_2$ appears 4 times in the matrix). Therefore, we can enforce these entries to be the same. In general, this leads to $m_{\mathrm{mom}} = \mathfrak{t}(\binomial{d}{\kappa}) - \binomial{d}{2\kappa} + 1$ linear constraints, where $\mathfrak{t}(n) \triangleq \frac{n(n+1)}{2}$ is the dimension of $\sym{n}$. These constraints are typically called \emph{moment constraints}: \beq \begin{array}{ll}\label{eq:momentConstraints} \text{\grayout{moment constraints}}: & \revise{\inprod{\M{A}_{\mathrm{mom},0}}{\M{X}_\kappa} = 1}, \\ & \inprod{\M{A}_{\mathrm{mom},j}}{\M{X}_\kappa} = 0, \\ & j = 1, \ldots, \mathfrak{t}(\binomial{d}{\kappa}) - \binomial{d}{2\kappa}, \end{array} \eeq \revise{where $\M{A}_{\mathrm{mom},0}$ is all-zero except $[\M{A}_{\mathrm{mom},0}]_{11} =1$, and it defines the constraint $[\M{X}_\kappa]_{11} = 1$, following from the definition of the moment matrix (see eq.~\eqref{eq:momentMatrix}).} Second, we can also add \emph{redundant} equality constraints. Simply put, if $h_i = 0$, then also $h_i \cdot x_1 = 0$, $h_i \cdot x_2 = 0$, and so on, for any monomial we multiply by $h_i$. Since via $\M{X}_\kappa$ we can represent any polynomial of degree up to $2\kappa$, we can write as linear constraints any polynomial equality in the form $h_i \cdot [\vxx]_{2\kappa - \deg{h_i}} = {\mathbf 0}$ (the order of the monomials is chosen such that the product does not exceed order $2\kappa$). These new equalities can again be written linearly as: \beq \begin{array}{ll}\label{eq:redundantEqualityConstraints} \hspace{-3mm}\text{\grayout{(redundant) equality constraints}}: \inprod{\M{A}_{\mathrm{req},ij}}{\M{X}_\kappa} = 0, \\ \quad\quad i = 1, \ldots, l_h, \ \ j = 1, \ldots, \binomial{d}{2\kappa - \deg{h_i}}\!\!\!\!\!\!\!\!\! \end{array} \eeq for suitable $\M{A}_{\mathrm{req},ij}$. Since the first entry of $[\vxx]_{2\kappa - \deg{h_i}}$ is always 1 (\emph{i.e.},\xspace the monomial of order zero),~eq.~\eqref{eq:redundantEqualityConstraints} already includes the original equality constraints in~\eqref{eq:eqConstraints1}. Finally, we observe that if $g_j \geq 0$, then for any positive semidefinite matrix $\M{M}$, it holds that $g_j \cdot \M{M} \succeq 0$. Since we can represent any polynomial of order up to $2\kappa$ as a linear function of $\M{X}_\kappa$, we can add redundant constraints in the form $g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}} \succeq 0$ (by construction $g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}}$ only contains polynomials of degree up to $2\kappa$).
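To make the bookkeeping of steps (i)-(iii) concrete, the following Python sketch (an illustration, not part of our solver) enumerates the monomial basis $[\vxx]_\kappa$, records which entries of $\M{X}_\kappa$ represent the same monomial, and counts the resulting moment constraints; for $d=2$, $\kappa=2$ it recovers the $6 \times 6$ structure of \eqref{eq:momentMatrix} and $m_{\mathrm{mom}} = 21 - 15 + 1 = 7$:
\begin{verbatim}
from itertools import product
from math import comb

def monomial_basis(d, kappa):
    """All exponent tuples a with |a|_1 <= kappa (one basis ordering)."""
    return [a for k in range(kappa + 1)
              for a in product(range(k + 1), repeat=d) if sum(a) == k]

def moment_structure(d, kappa):
    basis = monomial_basis(d, kappa)          # size C(d + kappa, kappa)
    assert len(basis) == comb(d + kappa, kappa)
    # group upper-triangular entries (i, j) of X_kappa by their monomial
    groups = {}
    for i, a in enumerate(basis):
        for j, b in enumerate(basis[i:], start=i):
            mono = tuple(x + y for x, y in zip(a, b))  # entry (i,j) = x^(a+b)
            groups.setdefault(mono, []).append((i, j))
    # one normalization [X]_11 = 1, plus one equality per repeated entry
    m_mom = 1 + sum(len(g) - 1 for g in groups.values())
    return basis, groups, m_mom

basis, groups, m_mom = moment_structure(d=2, kappa=2)
print(len(basis), m_mom)   # prints: 6 7, i.e. t(6) - C(6,4) + 1
\end{verbatim}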
To phrase the resulting relaxation in the standard form~\eqref{eq:primalSDP}, it is common to add extra matrix variables $\M{X}_{g_j} = g_j \cdot \M{X}_{\kappa - \ceil{\deg{g_j}/2}}$ for $j=1,\ldots,l_g$ (the so-called \emph{localizing matrices} \cite[\S 3.2.1]{lasserre10book-momentsOpt}) and then force these matrices to be a linear function of $\M{X}_\kappa$: \beq \begin{array}{ll}\label{eq:locMatrices} \text{\grayout{localizing matrices}}: & \M{X}_{g_j} \succeq 0, \;\; j=1,\ldots,l_g \end{array} \eeq \beq \begin{array}{ll} \label{eq:localizingConstraints} \hspace{-2mm} \text{\grayout{{localizing} constraints}}: \inprod{\M{A}_{\mathrm{loc},jkh}}{\M{X}_\kappa} = [\M{X}_{g_j}]_{hk} \\ \quad \quad j = 1, \ldots, l_g,\ \ 1 \leq h\leq k \leq \binomial{d}{\kappa - \ceil{\deg{g_j}/2}} \end{array} \eeq where the linear constraints (for some $\M{A}_{\mathrm{loc},jkh}$) enforce each entry of $\M{X}_{g_j}$ to be a linear combination of entries in $\M{X}_\kappa$. Following steps (i)-(iii) above, it is straightforward to obtain the following (convex) semidefinite program: \begin{equation}\label{eq:lasserre} \hspace{-4mm} f^\star_{\kappa} =\!\! \displaystyle\min_{\M{X} = (\M{X}_\kappa, \M{X}_1, \ldots, \M{X}_{l_g})} \cbrace{\inprod{\M{C}_1}{\M{X}_\kappa} \mid {\cal A}(\M{X})\!=\!\boldsymbol{b},\M{X}\! \succeq\! 0}\!,\!\!\! \tag{LAS} \end{equation} where the variable $\M{X} = (\M{X}_\kappa, \M{X}_1,\dots,\M{X}_{l_g})$ is a collection of positive-semidefinite matrices (\emph{cf.}\xspace~\eqref{eq:eqMomentIsPSD} and~\eqref{eq:locMatrices}, and we shorthand $\M{X}_j = \M{X}_{g_j}$ for notational convenience), the objective is the one given in~\eqref{eq:objective}, and the linear constraints ${\cal A}(\M{X})=\boldsymbol{b}$ collect all the constraints in~\eqref{eq:momentConstraints},~\eqref{eq:redundantEqualityConstraints}, and~\eqref{eq:localizingConstraints}. Problem \eqref{eq:lasserre} can be readily formulated as a multi-block SDP in the primal form~\eqref{eq:primalSDP}, which matches the data format used by common SDP solvers. Problem \eqref{eq:lasserre} is commonly known as the \emph{dense} Lasserre's relaxation because a fully dense monomial basis $[\vxx]_\kappa$ is used to build the moment matrix \cite{Lasserre01siopt-lasserrehierarchy}. One can solve the relaxation for different choices of $\kappa$, leading to a \emph{hierarchy} of convex relaxations. While we presented Lasserre's hierarchy in a somewhat procedural way, the importance of the hierarchy lies in its stunning theoretical properties, which we review below. \begin{theorem}[Lasserre's Hierarchy \cite{Lasserre01siopt-lasserrehierarchy,lasserre10book-momentsOpt,Nie14mp-finiteConvergenceLassere}] \label{thm:lasserre} Let $-\infty < p^\star < \infty$ be the optimum of \eqref{eq:pop} and \revise{$f^\star_{\kappa}$ (resp. $\MX^\star_\kappa$) be the optimum (resp.
one optimizer) of \eqref{eq:lasserre},} and assume \eqref{eq:pop} satisfies the Archimedeanness condition (a stronger form of compactness, \emph{cf.}\xspace \cite[Definition 3.137]{Blekherman12Book-sdpandConvexAlgebraicGeometry}). Then: \begin{enumerate}[label=(\roman*)] \item \revise{(lower bound and convergence)} $f^\star_\kappa$ converges to $p^\star$ from below as $\kappa \rightarrow \infty$, and convergence occurs at a finite $\kappa$ under suitable technical conditions \cite{Nie14mp-finiteConvergenceLassere}; \item \revise{(rank-one solutions)} if $f^\star_\kappa = p^\star$ at some finite $\kappa$, then for every global minimizer $\vxx^{\star}$ of \eqref{eq:pop}, $\MX^\star_\kappa \triangleq [\vxx^{\star}]_{\kappa} [\vxx^{\star}]_{\kappa}^{\mathsf{T}}$ is optimal for \eqref{eq:lasserre}, and every rank-one optimal solution $\MX^\star_\kappa$ of \eqref{eq:lasserre} can be written as $[\vxx^{\star}]_\kappa [\vxx^{\star}]_{\kappa}^{\mathsf{T}}$ for some $\vxx^{\star}$ that is optimal for \eqref{eq:pop}; \item \revise{(optimality certificate)} if $\rank{\MX^\star_\kappa} = 1$ at some finite $\kappa$, then $f^\star_\kappa = p^\star$. \end{enumerate} \end{theorem} Theorem \ref{thm:lasserre} states that~\eqref{eq:lasserre} provides a hierarchy of lower bounds for~\eqref{eq:pop}. When the relaxation is exact ($p^\star\!=\!f^\star_\kappa$), global minimizers of~\eqref{eq:pop} correspond to rank-one solutions of~\eqref{eq:lasserre}. \revise{Moreover, after solving the convex SDP \eqref{eq:lasserre}, one can check the rank of the optimal solution $\MX^\star_\kappa$ to obtain a \emph{certificate} of global optimality. In practice, rank computation can be subject to numerical inaccuracies, and we introduce a continuous metric for evaluating the exactness of the relaxation in Section \ref{sec:sdprelax} (\emph{cf.}\xspace Theorem \ref{thm:sparserelaxtls}).} \maybeOmit{ {\bf Curse of Dimensionality}. As we will see in Section~\ref{sec:robustandpop}, for outlier-robust geometric perception problems, (i) $d$ ---the size of the variable in the original~\eqref{eq:pop}--- increases w.r.t.\xspace the number of measurements and can be a few hundred (contrarily, outlier-free problems have $d$ fixed and typically less than $20$), (ii) $\kappa=2$ is the minimum relaxation order because $\deg{p} > 2$, leading to $n_1 = \binomial{d}{2}$ and $m \geq m_{\mathrm{mom}} = \mathfrak{t}(\binomial{d}{2}) - \binomial{d}{4} + 1$, which both grow quickly w.r.t.\xspace $d$ (contrarily, outlier-free problems typically have $\deg{p}=2$, and one can use $\kappa=1$ in \eqref{eq:lasserre}, which is much more scalable). Therefore, Lasserre's hierarchy, at least in its dense form \eqref{eq:lasserre}, is impractical for outlier-robust perception. In Section \ref{sec:sdprelax}, we present a \emph{sparse} version of \eqref{eq:lasserre} for outlier-robust perception that significantly improves scalability. }
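As a minimal end-to-end illustration of Theorem~\ref{thm:lasserre}(ii)-(iii), consider the toy POP $\min_x \{x \mid x^2 - 1 = 0\}$, whose optimum is $p^\star = -1$ at $x^\star = -1$. At order $\kappa = 1$ the basis is $[1 \,;\, x]$ and the moment matrix has entries $1$, $x$ and $x^2$. The sketch below (ours, assuming \texttt{cvxpy} with an SDP-capable solver such as SCS is available; it is not the implementation used in this paper) solves the relaxation and checks the rank-one certificate:
\begin{verbatim}
import numpy as np
import cvxpy as cp

# POP: min x  s.t.  x^2 - 1 = 0   (p* = -1 at x* = -1)
# Dense Lasserre at kappa = 1: X = [[1, x], [x, x^2]]
X = cp.Variable((2, 2), symmetric=True)
constraints = [X >> 0,        # relaxed rank-one constraint -> PSD
               X[0, 0] == 1,  # normalization [X]_11 = 1
               X[1, 1] == 1]  # equality constraint x^2 = 1
prob = cp.Problem(cp.Minimize(X[0, 1]), constraints)  # objective <C_1, X> = x
prob.solve()

print(prob.value)                      # ~ -1.0 = p*: relaxation is exact
print(np.linalg.eigvalsh(X.value))     # one nonzero eigenvalue -> rank 1
# rank-one certificate: X* = [1; x*][1; x*]^T with x* = X.value[0, 1] = -1
\end{verbatim}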
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Recognising individuals repeatedly over time is a basic requirement for field-based ecology and related life sciences \citep{marshall12}. In scenarios where photographic capture is feasible and animals are visually unique, biometric computer vision offers a non-invasive identification paradigm for handling this problem class efficiently \citep{Kuhl2013}. To act as an effective aid to biologists, these systems are required to operate reliably on large sets of unconstrained, natural imagery so as to facilitate adoption over widely available, manual or semi-manual identification systems \citep{stanley95,van07,ranguelova04,kelly01,speed07}. Further automation of identification pipelines for 2D biometric entities is currently subject to extensive research activity~\citep{Duyck2014,Loos2013,Ravela2013}. Generally, fully automated approaches require at least an integration of 1)~a robust fine-grained detection framework to locate the animal or structure of interest in a natural image, and 2)~a biometric system to extract individuality-bearing features, normalise and match them \citep{Kuhl2013}. A recent example of such a system for the identification of great apes \citep{Freytag2016,Loos2013} uses facial texture information to determine individuals. In fact, all fully automated systems so far rely on the presence of distinctive 2D colour and texture information for object detection as well as biometric analysis. In this paper we will focus on contour information of textureless objects as biometric entities instead. Specifically, we propose a visual identification approach for great white shark fins as schematically outlined in Figure~\ref{fig:teaser}, one that extends work in~\cite{Hughes2015} and is applicable to unconstrained fin imagery. To the best of our knowledge this line of work establishes the first \textit{fully automated} \textit{contour-based visual ID system} in the field of animal biometrics. It automates the pipeline from natural image to animal identity. We build on the fact that fin shape information has been used manually in the past to track individual great white sharks over prolonged periods of time \citep{anderson2011} or global space \citep{Bonfil2005}. Shark fin re-identification has also been conducted semi-automatically to support research on the species \citep{towner13,chapple11,hillman03}. We pose the vision task of `shark fin identification' as a fine-grained, multi-instance classification problem for flexible, fairly textureless and possibly partly occluded object parts. `Fine-grained' in that each individual fin, described by a characteristic shape and jagged trailing edge, is a subclass of the parent class great white shark fin. `Multi-instance' since the system must be able to assign multiple semantic labels to an image, each label corresponding to an individual shark present. `Flexible' since fins may bend, and `fairly textureless' since fins lack distinctive 2D texture. In line with work by \citet{arandjelovic11}, we will also refer to the latter as `smooth'. We note that some sharks carry fin pigmentation, yet not all do and its permanence is disputed \citep{robbins13}. Finally, fin detection poses a part recognition problem since region-based detection of the whole fin would fail to tackle common scenarios: fins are often visually smoothly connected to the shark body whilst being partly occluded by the water line and white splash.
Figure~\ref{fig:teaser} shows examples of the dataset~(top left) and outlines our solution pipeline based on contour information -- from image to individual shark ID. We will now review the works most closely related to the tasks of the recognition pipeline. \vspace{-10pt} \section{Related Work and Rationale} \begin{figure*} \vspace{-4pt} \includegraphics[width=1.0\textwidth]{Fig2.pdf} \caption{FIN DETECTION AS OPEN\ CONTOUR STROKES: Multi-scale 2D~region-based segmentation algorithms~\cite{arbelaez14} on their own (left images show one level of the ultrametric map) regularly fail to detect the extent of fins due to visual ambiguities produced by shark body, water reflections or white splash. Thus, often no level of the underlying ultrametric contour map captures fin regions. We suggest combining properties of the 1D (open) contour segment shape with local 2D region structure in a contour stroke model to recognise the fin section~(shown in solid white).\vspace{-2pt}} \label{fig:teaser2} \end{figure*} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{Fig3.pdf}\vspace{-4pt} \caption{FIN DETECTION MODEL: Partitioning the (closed) 2D region structures from across \textit{all} levels of the ultrametric contour map via DoG-generated keypoints (rightmost visualisation) yields a pool of (open) contour strokes, whose normal-encoded shape and nearby opponentSIFT descriptors feed into a random forest regressor to detect fin objects.} \label{fig:teaser3} \vspace{-8pt} \end{figure*} \textbf{Smooth Object Detection.} Smooth object detection traditionally builds on utilising boundary and internal contour features, and configurations thereof. Recent approaches \citep{arandjelovic11,arandjelovicO12} extend these base features by mechanisms for regionalising or globalising information, and infer object presence by learning configuration classifiers. A prominent, recent example is Arandjelovic and Zisserman's~`Bag of Boundaries~(BoB)' approach \citep{arandjelovic11}, which employs multi-scale, semi-local shape-based boundary descriptors to regionalise BoB features and predict object presence. A related, more efficient boundary representation is proposed by \citet{arandjelovicO12}, which focusses on a 1D semi-local description of boundary neighbourhoods around salient scale-space curvature maxima. This description is based on a vector of boundary normals (Bag of Normals; BoN). However, experiments by \citet{arandjelovicO12} are run on images taken under controlled conditions \citep{geusebroek05}, whilst in our work, in common with \citet{arandjelovic11}, we have the goal of separating objects in natural images against cluttered backgrounds (see again Figure~\ref{fig:teaser}). \\ \ \\ \textbf{Fin Segmentation Considerations.} The biometric problem at hand requires an explicit, pixel-accurate encoding of the fin boundary and sections thereof to readily derive individually characteristic descriptors. To achieve such segmentation one could utilise various approaches, including 1) a bottom-up grouping process from which to generate object hypotheses for subsequent detection \citep{carreira10,li10,uijlings13,gu09}, or 2)~a~top-down sliding window detector such as \citep{viola01,dalal05,felzenszwalb10} and then segment further detail, or 3) combining the two simultaneously \citep{arbelaez12}.
We select the first option here since boundary encoding is intrinsic to it, and efficient, accurate bottom-up object segmentation has recently become feasible. \cite{arbelaez14} introduce a fast normalised cuts algorithm, which is used to globalise local edge responses produced by the structured edge detector of \cite{dollar13}. However, since fins represent open contour structures (see Figure~\ref{fig:teaser2}) we require some form of (multi-scale) open contour generation, which we obtain, similarly to~\cite{arandjelovicO12}, by stipulating keypoints along the closed contours of the ultrametric map generated by~\cite{arbelaez14}. Our proposed contour stroke model (see Section~\ref{sec:strokeModel}) then combines shape information along these open contour sections and nearby regional information to identify and segment fin structures. Note that these are objects which are not present as segments at \textit{any} level of the ultrametric map. \\ \\ \textbf{Biometrics Context.} Most closely related within the animal biometrics literature are the computer-assisted fin recognition systems: DARWIN \citep{stanley95,stewman06} and Finscan \citep{hillman03}. DARWIN has been applied to great white sharks \citep{towner13,chapple11} and bottlenose dolphins \citep{vanHoey13} while Finscan has been applied to false killer whales \citep{baird08}, bottlenose dolphins \citep{baird09} and great white sharks, among other species \citep{hillman03}. However, both differ significantly from our work in that they rely on user interaction to detect and extract fin instances. Their fin descriptors are also sensitive to partial occlusions since they are represented by single, global reference encodings. Additionally, in the case of DARWIN, fin shape is encoded as 2D Cartesian coordinates, requiring the use of pairwise correspondence matching. By contrast, we introduce an occlusion robust vector representation of semi-local fin shape (see Section~\ref{sec:idEncoding}). As in \cite{crall13}, this allows images of individuals to be held in~tree-based search structures, which facilitate identity discovery in sub-linear time. \\ \ \\ \textbf{Paper Structure.} The paper covers six further sections, which will detail the methodology and algorithms proposed, and report on application results and discuss our approach in its wider context. In (\ref{sec:strokeModel}), in accordance with~\cite{Hughes2015}, a contour stroke model for fin detection is presented combining a partitioning of ultrametric contour maps with normal descriptors and dense local features. Then, expanding on previous work, in (\ref{sec:idEncoding}) and (\ref{sec:LNBNN}) a dual biometric encoding scheme for fins and an associated LNBNN baseline identification approach are discussed. In (\ref{sec:finspace}) we quantify species-specific visual individuality via a `fin space', and in (\ref{sec:ID}) an improved non-linear identification framework that uses this space is shown and evaluated. Finally, in~(\ref{sec:conclusions}) we discuss the scope and conclusions of individually identifying great white sharks visually. \vspace{-12pt} \section{Contour Stroke Object Model} \label{sec:strokeModel} In this section we describe our contour stroke model for bottom-up fin detection. It constructs fin candidates as subsections (or `strokes') of contours in partitioned ultrametric maps and validates them by regression of associated stroke properties.
The approach progresses in three stages: 1)~we detect and group object boundaries at multiple scales into an ultrametric contour map, 2)~we detect salient boundary locations and use them to partition region boundaries into contour sections called strokes, and 3)~we classify strokes into fin and background classes based on shape, encoded by normals, and on local appearance, encoded by opponentSIFT features \citep{van10}. Figure~\ref{fig:teaser3} illustrates this fin detection approach in detail. \\ \ \\ \textbf{Stage 1: Hierarchical Segmentation.} We use work by~\cite{arbelaez14} to generate a region hierarchy in the form of an ultrametric map. This provides sets of closed contours for any chosen level-threshold in the range $[0,1]$. Starting with the whole image, we descend the hierarchy to a pool of $200$ unique regions. Similar to~\cite{carreira10}, we then employ region rejection to remove areas too small to represent a fin, or too similar to another region\footnote{Any region with a boundary length of less than $70$ pixels is discarded, before the remainder are clustered into groups where all regions in a cluster have an overlap of $0.95$ or more. Within each cluster, we rank regions according to the level in the hierarchy at which they first appeared, retaining the top ranked region in each cluster. }. We subsequently rank remaining regions, again by their location in the hierarchy, and retain the top $k$ regions, choosing $k=12$ empirically for the results given in this paper. \\ \ \\ \textbf{Stage 2: Generating Fin Candidates.} In almost all cases, the segmentation produces at least one region within the set that provides a high-recall description of the fin's external boundary. However, in cases where the boundary between the fin and the body is visually smooth, segmentation tends to group both in a single region (see Figure~\ref{fig:teaser2}). The global appearance of such regions can vary dramatically, making 2D structures unsuitable targets for recognition. By contrast, locations along the 1D contour of regions provide discontinuities in curvature suitable for region sub-sectioning and thereby stroke generation. We detect boundary keypoints using the Difference of Gaussian (DoG) corner detector of \citet{zhang09}. Letting $C(u) = (x(u),y(u))$ represent a planar curve, the corner response function is given by the evolution difference of two Gaussian smoothed planar curves, measured using the distance $D(u,\sigma)$:\vspace{-5pt} \begin{multline} D(u,\sigma)=[DoG*x(u)]^2 + [DoG*y(u)]^2\\ = [G(u,m\sigma)*x(u) - G(u,\sigma)*x(u)]^2+\\ [G(u,m\sigma)*y(u) - G(u,\sigma)*y(u)]^2 \label{eq:dog} \end{multline} where $G(u,\sigma)$ is a zero mean Gaussian function with standard deviation $\sigma$, and $m>0$ is a multiplication factor. Viewed as a bandpass filter, the operator can be tuned to different frequency components of contour shape by varying $m$ and~$\sigma$. For keypoint detection (visualised rightmost in Figure~\ref{fig:teaser3}), we resample contours to $128$ pixels and compute $D$ using $\sigma=1$ and $m=4$ before ranking the local maxima of $D$ by their prominence (see Figure \ref{fig:nonmax}). This allows for the selection of the $n$ peaks with largest prominence, suppressing other, locally non-maximal corner responses. Choosing small values of $\sigma$ ensures accurate keypoint localisation whilst a relatively large value of $m$ ensures that the $n$ largest maxima of $D$ correspond to globally salient locations.
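A compact Python sketch of the corner response in Equation~\eqref{eq:dog} and the prominence-based keypoint selection follows (a schematic reconstruction using \texttt{scipy}, not our original implementation; parameter defaults follow the text):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def dog_response(contour, sigma=1.0, m=4.0):
    """Corner response D(u, sigma) for a closed planar curve.

    contour : (N, 2) array of (x(u), y(u)) samples, e.g. N = 128.
    """
    x, y = contour[:, 0], contour[:, 1]
    # DoG * x(u) = G(u, m*sigma) * x(u) - G(u, sigma) * x(u)
    # mode='wrap' treats the contour as closed
    dx = (gaussian_filter1d(x, m * sigma, mode='wrap')
          - gaussian_filter1d(x, sigma, mode='wrap'))
    dy = (gaussian_filter1d(y, m * sigma, mode='wrap')
          - gaussian_filter1d(y, sigma, mode='wrap'))
    return dx ** 2 + dy ** 2

def keypoints(contour, n=7, sigma=1.0, m=4.0):
    """Indices of the n most prominent local maxima of D
    (n = 7 is the setting selected empirically below)."""
    D = dog_response(contour, sigma, m)
    peaks, props = find_peaks(D, prominence=0.0)     # prominence of each peak
    order = np.argsort(props['prominences'])[::-1]   # rank by prominence
    return peaks[order[:n]]
\end{verbatim}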
\begin{figure}[hbt] \includegraphics[width=84mm]{Fig4.pdf} \caption{NON-MAXIMUM SUPPRESSION: We utilise the Matlab function `findpeaks' as a reference implementation for non-maximum suppression. That is, from a local maximum on $D(u,\sigma)$, the horizontal distance to $D(u,\sigma)$ is measured to define left and right intervals: min\textsubscript{L}=$\text{min}\textsubscript{interval\textsubscript{L}}(D(u,\sigma))$, min\textsubscript{R} is defined likewise. Subsequently, max(min\textsubscript{L},min\textsubscript{R}) is taken as a reference level. The prominence of each local maximum is then computed as the difference between the value of $D(u,\sigma)$ at the local maximum and the reference level. Low prominence peaks are suppressed. If either interval reaches the end of the signal, we set its minimum to be zero. \vspace{-7pt} } \label{fig:nonmax} \end{figure} We then generate fin candidates as contour strokes by sampling the region contour between every permutation of keypoint pairs. This results in a pool of $ N_c=(n^2-n)k$ strokes per image without taking the two encoding directions (clockwise and anticlockwise) into account. We set $n$ by assessing the \textit{achievable quality} (the quality of the best candidate as selected by an oracle) of the candidate pool with respect to the number of candidates. We denote this fin-like quality of stroke candidates by $F^g\textsubscript{inst}$. Evaluated with respect to a human-labelled ground truth contour, we use the standard $F$-measure for evaluating contour detections based on bipartite matching of boundary pixels \citep{martin04}. We observe that average achievable quality does not increase beyond $n=7$ given the described DoG parameterisation and therefore use this value to define~$N_c$. The result is that, on average, we obtain $504$ candidates per image, with an average achievable quality of $F^g\textsubscript{inst}=0.97$ measured against human-labelled ground truth contours for $240$ randomly selected images. By means of comparison, the average quality of the pool of $k=12$ closed region contours is $F^g\textsubscript{inst}=0.75$. \ \\ \\ \textbf{Stage 3: Fin Candidate Scoring.} For training and testing the candidate classifier, $240$ high visibility~(H) images, where the whole fin could clearly be seen \textit{above} the waterline, are selected at random and then randomly assigned to either a training or validation set, each containing $120$ images. In addition, we perform validation using a second set of $165$ `lower' visibility (L) images where fins are partially occluded, again, selected at random. This will enable us to establish whether the trained model is representative given partial occlusion. Examples of each image type are shown in Figure~\ref{fig:hiLowVis}. Ground truth fin boundary locations are labelled by hand using a single, continuous contour,~$1$~pixel~in~width. Each contour section is described by a $180$-dimensional feature vector consisting of two components, contributing 2D and 1D distinctive information, respectively. The first is a bag of opponentSIFT \citep{van10} visual words (dictionary size $20$) computed at multiple scales (patch sizes $16,24,32,40$) centred at every pixel within a distance of $4$ pixels of the contour section. This descriptor is utilised to capture the local appearance of fin contours. The second describes contour shape using a histogram of boundary normals consisting of $20$ spatial bins and $8$~orientation bins. 
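As an illustration of the shape component, the following numpy sketch (our own schematic reconstruction, not the exact implementation) computes a $20 \times 8$ histogram of boundary normals; its $160$ entries, concatenated with the $20$-word opponentSIFT bag, give the $180$-dimensional descriptor:
\begin{verbatim}
import numpy as np

def normal_histogram(stroke, n_spatial=20, n_orient=8):
    """Histogram-of-boundary-normals descriptor (spatial x orientation bins).

    stroke : (N, 2) array of contour points, assumed uniformly sampled.
    """
    t = np.gradient(stroke, axis=0)                   # tangents along stroke
    theta = np.arctan2(t[:, 1], t[:, 0]) + np.pi / 2  # normal = tangent + 90 deg
    pos = np.linspace(0.0, 1.0, len(stroke))          # arc position in [0, 1]
    s_bin = np.minimum((pos * n_spatial).astype(int), n_spatial - 1)
    o_bin = ((theta % (2 * np.pi)) / (2 * np.pi) * n_orient).astype(int) % n_orient
    H = np.zeros((n_spatial, n_orient))
    np.add.at(H, (s_bin, o_bin), 1.0)                 # accumulate votes
    H = H.ravel()
    return H / (np.linalg.norm(H) + 1e-12)            # L2 normalisation
\end{verbatim}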
Note that the opponentSIFT histogram is independent of encoding direction whilst the histogram of boundary normals is dependent on it\footnote{When training the histogram of boundary normals model, we flip images so the fin is facing the same way in each. For testing, we compute two feature vectors, one for each fin direction. We then obtain a predicted quality score for each direction and take the maximum over directions as the predicted quality for that stroke.}. \begin{figure}[t] \vspace{-5pt} \includegraphics[width=84mm]{Fig5.pdf} \caption{HIGH AND LOWER VISIBILITY FIN IMAGES: The top row shows examples of lower visibility fin images where parts of the fin are occluded by water line and white splash. The bottom row shows high visibility fin images -- the entire extent of the fin is visible.\vspace{-8pt}} \label{fig:hiLowVis} \end{figure} In either case, the two components are $L_2$ normalised and concatenated to produce the final descriptor. A random forest regressor \citep{breiman01} is trained to predict the quality of fin hypotheses, where the quality of individual candidates is assessed using the $F$-measure as computed using the~BSDS contour detection evaluation framework \citep{martin04}. Following non-maximum suppression with a contour overlap threshold of $0.2$, a final classification is made by thresholding the predicted quality score. Given an image, the resulting detector then produces a set of candidate detections, each with a predicted quality score $F^p\textsubscript{inst}$. Figure~\ref{fig:candidates} illustrates example candidates together with their scores. \begin{figure}[] \vspace{-5pt} \includegraphics[width=80mm]{Fig6.pdf} \caption{ EXAMPLE FIN CANDIDATES AND PREDICTED QUALITY ($F^p\textsubscript{inst}$). (Top) Candidates and their scores after non-maximum suppression. (Other) Candidates and scores from the region around the fin before non-maximum suppression. The predictive ability of the model is reflected in the stroke quality predictions for strokes describing at least part of the fin. It is unsurprising that the model makes high quality predictions for the caudal fin stroke. We also see that while higher scores are sometimes predicted for purely background objects, the scores predicted for these are typically not as high as those predicted for good quality strokes describing fins themselves.\vspace{-10pt} } \label{fig:candidates} \end{figure} \\ \textbf{Measuring Detection Performance.} We use 1) average precision~(AP$^{t}_{det}$), the area under the precision-recall~(PR) curve for a given threshold $t$, and 2) volume under PR surface~(AP\textsuperscript{vol}) as evaluation metrics. \begin{figure*}[hbt] \vspace{-4pt} \includegraphics[width=1.0\textwidth]{Fig7.pdf}\\ \textcolor[rgb]{1,1,1}{.........................}(A)\textcolor[rgb]{1,1,1}{.............................................}(B) \textcolor[rgb]{1,1,1}{............................................}(C) \textcolor[rgb]{1,1,1}{...........................................}(D) \caption{FIN DETECTION RESULTS: (A,B) Scatter plots show that the full fin detection model is able strongly to predict, as captured by~$F^p\textsubscript{inst}$, the true quality of fin candidates $F^g\textsubscript{inst}$ for both high and low visibility images. (C) The plot summarises performance at different stages of fin detection.
Note that for the `segmentation' line, AP$^{t}_{det}$ is equivalent to the proportion of fins for which it is possible to obtain a stroke of quality $F^g\textsubscript{inst} \geq t$, given a machine-generated segmentation. (D) The plot shows PR plots for both high and low visibility images at different thresholds. } \label{fig:finDetectResults} \end{figure*} In order to generalise AP$^{t}_{det}$, the AP\textsuperscript{vol} measure was proposed by \cite{hariharan14} for simultaneous object detection and segmentation. It measures the volume under a PR surface traced out by PR curves generated for variable quality thresholds $t$, and thus avoids arbitrary threshold choices. It reflects both fin detection performance and the quality of candidates detected and, as noted by \cite{hariharan14}, has the attractive property that a value of $1$ indicates perfect candidate generation as well as fin detection. We base our fin detection evaluation on AP instead of a receiver operating characteristic (ROC) curve based measure such as AUC-ROC, since the choice of precision over FPR increases evaluation sensitivity to changing numbers of false positives in the presence of large numbers of negative examples~\citep{davis06}. In addition, the choice of AP-based evaluation is in line, not only with \cite{hariharan14}, but also with the methodology adopted in the object detection components of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) \citep{russakovsky15} and in the PASCAL Visual Object Classes Challenge (PASCAL VOC) \citep{everingham10}, two standard benchmarks for visual object recognition. \\ \ \\ \textbf{Fin Detection Results.} Results for fin candidate generation and detection are shown in Figure~\ref{fig:finDetectResults}, Table~\ref{tab:intermediateResults} and Table~\ref{tab:detectResults}. Scatter plots in Figure~\ref{fig:finDetectResults} for high and lower visibility images confirm that the model is able strongly to identify fins, and many high quality candidates are generated, as shown by the large number of instances with high $F^g\textsubscript{inst}$ scores. The Pearson correlation coefficients between true and predicted quality scores are 0.95 and 0.93, respectively. The plot of Figure~\ref{fig:finDetectResults}(C) summarises performance at different stages of fin detection. We note that for segmentation, a stroke of quality $F^g\textsubscript{inst}\geq 0.95$ is possible for almost all fin instances ($98.3\%$), with an average achievable quality, AP\textsuperscript{vol}, of 0.99. Candidate generation also performs well. It can be seen that for almost all high visibility fins ($98.3\%$), a candidate of $F^g\textsubscript{inst}> 0.9$ is generated and $F^g\textsubscript{inst}> 0.85$ for $98.8\%$ of lower visibility fins. Across all thresholds and fin instances, average achievable qualities of 0.97 and 0.96 are seen, respectively. Table~\ref{tab:intermediateResults} summarises these intermediate results. Finally, we show results for the whole pipeline in Figure~\ref{fig:finDetectResults}(C) and Table~\ref{tab:detectResults}, that of combined segmentation, candidate generation and candidate classification.
Here we see that a candidate of quality $F^g\textsubscript{inst} \geq 0.83$ is generated and recognised (with AP$^{t}_{det} =0.98$) for almost all high visibility fins ($F^g\textsubscript{inst} \geq 0.78$ for lower visibility with AP$^{t}_{det} =0.99$), as indicated by AP$^{t}_{det}$ close to $1$ for these quality thresholds, with AP$^{t}_{det} =1$ only possible if both P$^{t}_{det}=1$ and R$^{t}_{det} =1$. \begin{table}[b] \caption{INTERMEDIATE RESULTS (AP$^{t}_{det}$)} \label{tab:intermediateResults} \begin{tabular}{lllll} \hline\noalign{\smallskip} & t=0.7 & t=0.85 & t=0.9 & AP\textsuperscript{vol} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Segmentation & 1.0 & 0.99 & 0.99 & 0.99\\ Candidate gen. (H) & 0.99 & 0.98 & 0.98 & 0.97 \\ Candidate gen. (L) & 1.0 & 0.99 & 0.92 & 0.96 \\ \noalign{\smallskip}\hline \end{tabular}\vspace{10pt} \end{table} \begin{table}[b] \caption{FIN DETECTION RESULTS (AP$^{t}_{det}$)} \label{tab:detectResults} \begin{tabular}{lllll} \hline\noalign{\smallskip} Feature type & t=0.7 & t=0.85 & t=0.9 & AP\textsuperscript{vol} \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{High Visibility (H)} & & & & \\ OpponentSIFT & 0.99 & 0.85 & 0.73 & -\\ Normal & 0.98 & 0.85 & 0.7 & - \\ Combined & 0.98 & 0.95 & 0.86 & 0.92 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \textbf{Lower Visibility (L)} & & & & \\ Combined & 1.0 & 0.93 & 0.62 & 0.89 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} To fully understand values of AP$^{t}_{det}<1$, we must consider detection precision and recall separately, as shown in Figure~\ref{fig:finDetectResults}(D). Here we show PR curves for selected quality thresholds of the complete detection pipeline. We see for example that for $t=0.85$, perfect precision (P$^{t}_{det} =1.0$) is achieved for about 63\% of both high and lower visibility fins (R$^{t}_{det} =0.63$), after which false positives are introduced as shown by reduced precision. We also see that detection recall does not equal~$1$ for any value of precision, repeating the observation that a candidate of this quality is not generated for every fin. Meanwhile, we see near perfect detection if we accept candidates with $F^g\textsubscript{inst} \geq 0.7$. Finally, observing the summary of results in Table~\ref{tab:detectResults}, we see the effectiveness of the different feature types for fin candidate classification. It can be seen that while both opponentSIFT and normal features enable good detection performance (say for $t=0.7$), a combination of the two is required to obtain good recognition of the highest quality candidates at $t=0.9$. In summary, for almost all fin instances, a high quality candidate is generated and recognised with high precision, demonstrating the effectiveness of our contour stroke model for the task at hand. \ \\ \\ \begin{figure*}[ht] \includegraphics{Fig8.pdf} \caption{COMBINATORIAL CONTOUR SAMPLING: (A) The DoG corner response function of a fin contour. (B) The $n=50$ most prominent maxima of $D$ are selected as keypoints. The detected keypoints are shown on the fin contour. (C) The contour is combinatorially sampled between every keypoint pair to produce a set of local, semi-local and global subsections. } \label{fig:combinatorialSampling} \end{figure*} \vspace{-35pt} \section{Biometric Contour Encoding} \label{sec:idEncoding} In this section we develop a method of encoding smooth object shape suited to individual white shark fin representation.
It enables efficient and accurate individual recognition whilst being robust to noisy, partially occluded input generated by automatic shape extraction. Global shape descriptions, as used in~\cite{stewman06}, maximise inter-class variance but are sensitive to partial occlusions and object-contour detection errors, while the removal of nuisance variables such as in- and out-of-plane rotation relies upon computing point correspondences and inefficient pairwise matching. By contrast, the semi-local descriptions of~\cite{arandjelovic11,arandjelovicO12} are robust and allow efficient matching, but their encoding of inter-class variance will always be sub-optimal. To maximise the descriptiveness of features, we utilise \textit{both} semi-local and global shape descriptions with a framework extending that used to generate fin candidates. \\ \ \\ \textbf{Edge Refinement.} Our segmentation and contour partitioning framework so far produces descriptions of fin contours, but it does not resolve, to sufficient resolution, the fin shape along the trailing edge and tip that is vital for distinguishing individuals within shark populations \citep{anderson2011, Bonfil2005}. To recover this detailing we apply border matting in a narrow strip either side of region boundaries using the local learning method and code of \citet{zheng09}. This produces an opacity mask $\alpha$ which defines a soft segmentation of the image $(\alpha_i \in [0,1])$. We obtain a binary assignment of pixels (by threshold $0.5$) to separate fin and background, and extract the resulting high resolution contour of best Chamfer distance fit as a precursor to biometric encoding. Full details of this edge refinement procedure can be found in~\cite{Hughes2015a}. \\ \ \\ \textbf{Generating Boundary Subsections.} As a first step towards a biometric encoding, we detect salient boundary keypoints on the extracted contour strokes to produce stably recognisable contour subsections that serve as descriptor regions. For keypoint detection we use the same approach as for fin candidate generation, described in Section~\ref{sec:strokeModel}. To generate boundary subsections, we resample fin candidates to a fixed resolution of 1024 pixels and compute $D(u,\sigma)$ in Equation~\ref{eq:dog}, re-parameterised with $\sigma=2$ and $m=8$. Subdivision by these keypoints yields ${50 \choose 2}=1225$ contour subsections\footnote{Taking as keypoints the $n=48+2$ locations given by the $48$ largest local maxima of $D$ plus the start and end points of the contour, the putative fin boundary is sampled between every keypoint pair. }. Note that for reference images, we encode subsections in both directions. For test images, we encode in one direction only. As a result, later subsection matching does not need to consider direction. The approach is illustrated in Figure~\ref{fig:combinatorialSampling}. \ \\ \\ \textbf{Boundary Descriptors.} Following the generation of boundary subsections, the task is to encode their shape information. We investigate two regimes for subsection description: the standard DoG norm (DoG\textsubscript{N}) as defined in Equation~\ref{eq:dog}, and the boundary descriptor of \cite{arandjelovicO12}. DoG\textsubscript{N} provides a number of properties relevant to biometric contour encoding: first, the associated metric is suitable for establishing similarity between descriptors, meaning contour sections can be matched efficiently.
Secondly, by varying the parameters $\sigma$ and $m$, the description can be tuned to encode different components of shape scale-space. Third, the descriptor is rotation invariant and robust to changes in viewpoint (see Figure~\ref{fig:matches}). We also consider the boundary descriptor of \cite{arandjelovicO12} composed of a vector of boundary normals, denoted $\mathcal{N}(u,\sigma)$. At each vertex the normal vector of the contour is computed and the two orthogonal components are concatenated to yield the descriptor:\vspace{-6pt} \begin{equation} \label{eq:normalDescriptor} \mathcal{N}(u,\sigma)=(G(u,\sigma)*x(u),G(u,\sigma)*y(u)) \end{equation} This normal descriptor lacks rotational invariance. This is overcome by aligning the ends of each subsection with a fixed axis as a precursor to descriptor computation. As illustrated in Figure~\ref{fig:coeffs} over the entire fin segment, both DoG\textsubscript{N} and Arandjelovic's normal descriptor provide spatial and scale selectivity. \section{ Identification Baseline via LNBNN} \label{sec:LNBNN} As noted by \citet{boiman08}, information is lost in processes such as vector quantisation. For this reason, we utilise a scoring mechanism inspired by the local naive Bayes nearest neighbour (LNBNN) classification algorithm \citep{mccann12}, and similar to that employed by \citet{crall13} in the context of patterned species individual identification, to provide a recognition baseline. \begin{figure}[hb] \centering \includegraphics[width=0.45\textwidth]{Fig9.pdf} \caption{DESCRIPTORS FOR ENCODING INDIVIDUAL FIN SHAPE: We utilise the DoG\textsubscript{N} and Arandjelovic's normal descriptor as a feature pool for characterising individuality. It can be seen that both location on the segment~(x-axis) and scale-space band ($\sigma$) are encoded by the descriptors.} \label{fig:coeffs} \end{figure} Specifically, denoting the set of descriptors for a query object $D_Q$, for each query descriptor $d_i \in D_Q$ and for each class $c\in C$, we find two nearest neighbours ($NN_c(d_i),NN_{\bar{C}}(d_i)$) where $\bar{C}$ is the set of all classes other than $c$. Using the shorthand $\delta(NN_{\cdot}) = ||d_i - NN_{\cdot}(d_i)||^2$, queries are classified according to: \vspace{-1pt} \begin{equation} \label{eq:nbnn1} \hat{C}=\operatorname*{arg\,max}_C \sum^{|D_Q|}_{i=1} f(d_i,c) \end{equation}\vspace{-10pt} \begin{equation} \label{eq:localScore} f(d,c)= \begin{cases} \delta(NN_{\bar{C}}) - \delta(NN_c) & \delta(NN_{\bar{C}})>\delta(NN_c) \\ 0 & \text{otherwise} \\ \end{cases} \end{equation} This decision rule can be extended to a multi-scale case. Letting $S=\{\sigma_1,...,\sigma_j,...,\sigma_v\}$ denote the set of scales for which we compute descriptors, the multi-scale decision rule linearly combines the contribution of the descriptors at each scale (see also top of Figure~\ref{fig:overview3}):\vspace{-5pt} \begin{equation} \label{eq:nbnn2} \hat{C}=\operatorname*{arg\,max}_C\displaystyle\sum_{j=1}^{v}w_j\cdot \displaystyle\sum_{i=1}^{|D_Q^j|}f(d_i^j,c) \end{equation} \\ \textbf{Implementation Details.} To achieve scale normalisation, each contour subsection is re-sampled to a fixed length of $256$ pixels. DoG\textsubscript{N} and normal descriptors are computed at filter scales $S=\{1,2,4,8\}$, with a constant value of $m=2$ in the DoG\textsubscript{N} case. Each descriptor is $L2$ normalised to allow similarities between descriptors to be computed using Euclidean distance. 
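For clarity, a brute-force numpy sketch of the scoring rules in Equations~\eqref{eq:nbnn1}-\eqref{eq:nbnn2} follows (an illustration with exact nearest-neighbour search; in practice the searches are approximate, as described next):
\begin{verbatim}
import numpy as np

def lnbnn_scores(queries, ref_desc, ref_labels, classes):
    """LNBNN class scores for one scale's descriptor set.

    queries    : (q, dim) query descriptors D_Q
    ref_desc   : (r, dim) reference descriptors
    ref_labels : (r,) class id of each reference descriptor
    """
    scores = dict.fromkeys(classes, 0.0)
    for d in queries:
        dist = np.sum((ref_desc - d) ** 2, axis=1)   # squared Euclidean
        for c in classes:
            in_c = ref_labels == c
            d_c = dist[in_c].min()                   # delta(NN_c)
            d_not = dist[~in_c].min()                # delta(NN of other classes)
            scores[c] += max(d_not - d_c, 0.0)       # local score f(d, c)
    return scores

def multiscale_decision(per_scale_scores, w):
    """Weighted sum of per-scale scores, then argmax over classes."""
    classes = per_scale_scores[0].keys()
    total = {c: sum(wj * sj[c] for wj, sj in zip(w, per_scale_scores))
             for c in classes}
    return max(total, key=total.get), total
\end{verbatim}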
FLANN \citep{muja09} is employed to store descriptors and to perform efficient approximate nearest neighbour searches. Classification is performed at each scale separately for both descriptor types and then combined, with each scale weighted equally~($w_j=1$). \vspace{-12pt} \ \\ \\ \textbf{Dataset.} In order to benchmark individual fin classification, we use a dataset representing $85$ individuals and consisting of $2456$ images (see Acknowledgements for data source). For each individual there are on average $29$ images (standard deviation of $28$). The minimum number for an individual was two. As such, when the dataset was split into labelled and test images, just one labelled training example was selected to represent each shark. The remaining $2371$ images were used as queries, all of which show at least $25\%$ of the fin's trailing edge. They exhibited significant variability in waterline and white splash occlusion, viewpoint, orientation and scale (see Figure~\ref{fig:teaser} and Figure~\ref{fig:matches} for example images). \\ \ \\ \textbf{Performance Evaluation Measures.} Two measures are reported for performance evaluation. Both are based on average precision as the classifier returns a ranked list of candidate identities, each associated with a score as computed according to Equations~\ref{eq:nbnn1} or~\ref{eq:nbnn2}. The first is AP, computed for all test images. For the second, we compute AP for each individual and then take the mean of the individual AP scores~(mAP). This second measure avoids bias towards individuals with large numbers of test images. In each case, AP is computed as area under precision-recall curves computed directly using the individuals' scores, in contrast, say, to the ranking employed in~\cite{everingham14}. \\ \ \\ \textbf{Results.} The mAP and AP scores for DoG\textsubscript{N} and normal-based individual identification are shown in Table~\ref{tab:idResults}. Overall, our contour stroke model for fin detection combined with a combinatorial biometric contour encoding proves suitable for the task of individual fin identification. For DoG\textsubscript{N}, as reported in~\cite{Hughes2015} for one-shot learning, individual sharks are correctly identified across the $2371$ query instances presented to the system with a mAP of~$0.79$. \begin{table}[t] \caption{INDIVIDUAL LNBNN ID RESULTS } \label{tab:idResults} \begin{tabular}{llllll} \hline\noalign{\smallskip} \multicolumn{6}{c}{1 training image per class (1-shot-learning): 2371 queries} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Encoding & $\sigma=8$ & $\sigma=4$ & $\sigma=2$ & $\sigma=1$ & combined\\ \noalign{\smallskip}\hline\noalign{\smallskip} AP:DoG\textsubscript{N} & 0.63 & 0.72 & 0.69 & 0.49 & 0.76 \\ AP:Norm & 0.33 & 0.70 & 0.72 & 0.65 & 0.72 \\ mAP:DoG\textsubscript{N} & 0.67 & 0.74 & 0.73 & 0.56& 0.79 \\ mAP:Norm & 0.49 & 0.75 & 0.76 & 0.73 & 0.76 \\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{6}{c}{2 training images per class: 2286 queries } \\ \noalign{\smallskip}\hline\noalign{\smallskip} \multicolumn{3}{l}{AP:SIFT} & & & 0.20 \\ \multicolumn{3}{l}{mAP:SIFT} & & & 0.35 \\ AP:DoG\textsubscript{N} & & & & & 0.81 \\ mAP:DoG\textsubscript{N} & & & & & 0.83 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Figure~\ref{fig:matches} illustrates examples of such fin matches. An examination of recognition performance for high quality fin detections ($F^g\textsubscript{inst}>0.9$) provides insight into the effect of fin detection on individual identification.
Of $217$ such detections where, additionally, the entire fin contour was clearly visible, $82\%$ were correctly identified with a mAP of $0.84$. In $91\%$ of cases, the correct identity was returned in the top ten ranks. Thus, approximately $9\%$ of fin instances could not be classified correctly, independent of the quality of the detected contour. The results demonstrate the benefit of combining DoG\textsubscript{N} descriptors computed for independent scale-space components of fin shape: AP improves from $0.72$ (the best individual scale) to $0.76$, and mAP from $0.74$ to $0.79$, a $6.7\%$ relative gain over that obtained using any individual scale alone. The normal encoding also proves suitable for individual recognition, with AP of 0.72 and mAP of 0.76, although the best performance obtained with this descriptor type falls below the multi-scale DoG\textsubscript{N} approach. Figure~\ref{fig:PRNormVsDoG} shows precision-recall curves for DoG\textsubscript{N} and normal encoding types. It can be seen that the recognition performance difference between the two feature types occurs in the high precision region, with a normal encoding providing recognition precision of less than one for almost all values of recall. When descriptors corresponding to the trailing edge of fins alone are considered, the normal encoding provides superior recognition to that obtained using DoG\textsubscript{N}, but nevertheless remains inferior to that obtained using a multi-scale DoG\textsubscript{N} representation of the whole fin. \begin{figure}[hb] \centering \includegraphics[width=0.4\textwidth]{Fig10.pdf} \caption[Comparing performance for different encoding types]{PRECISION-RECALL CURVES FOR LNBNN. Precision-recall curves for DoG\textsubscript{N} and normal fin encodings, comparing identification via whole fins and just trailing edges.} \label{fig:PRNormVsDoG} \end{figure} \vspace{-8pt} Finally, we observe that the DoG\textsubscript{N} and normal approaches produce different predictions on a significant set of samples, pointing towards an opportunity in combining these classifiers, depending on fin structure. This complementarity is exploited in Section~\ref{sec:finspace}. \newpage \noindent \textbf{Comparison with Off-the-shelf Features.} To put the performance of our biometric contour representation in context, we report individual fin identification results using a methodology previously applied to patterned species individual recognition \citep{crall13}. In our case, a sparse, affine covariant SIFT encoding \citep{mikolajczyk04} of fin shape and surface texture is generated by detecting features centred within closed regions, created by drawing straight lines between the two ends of detected fin strokes (illustrated using dashed lines in Figure~\ref{fig:teaser2}). As before, LNBNN (Equations~\ref{eq:nbnn1} and~\ref{eq:localScore}) is used for individual classification. In this experiment (and only this experiment) two training images are used per individual, one for each side of the fin, leaving 2286 query images. Results in Table~\ref{tab:idResults} unambiguously demonstrate the superiority of our biometric contour representation over one describing surface texture for individual fin identification. Using SIFT features, fins are identified with mAP of 0.35 (AP=0.2). Using exactly the same training data, this compares with mAP of 0.83 using the combinatorial multi-scale DoG\textsubscript{N} encoding (AP=0.81).
Interestingly, however, 45 fin instances, misclassified using biometric contour encoding, are correctly identified using SIFT, with examples shown in Figure~\ref{fig:siftMatches}. Noting that the permanence of fin surface markings additionally captured by 2D features such as SIFT is disputed \citep{robbins13}, this observation nevertheless suggests that texture-based representations may have potential utility, at least for a sub-set of the population and over short observation windows. \begin{figure}[t] \centering \includegraphics[width=84mm]{Fig11.pdf} \caption[Comparing performance for different encoding types]{EXAMPLE IDENTIFICATIONS USING AFFINE-COVARIANT SIFT DESCRIPTIONS: Rarely, fins misclassified using biometric contour representations are correctly identified using surface texture descriptors. Here, two such examples are shown, with query images on the left of each pair. The coloured lines represent discriminative feature matches (as evaluated by the value of $f(d,c)$ in Equation~\ref{eq:localScore})\vspace{-15pt}} \label{fig:siftMatches} \end{figure} \vspace{-12pt} \section{Construction of a Population-wide Fin Space } \label{sec:finspace}\vspace{-3pt} In this section, we introduce a globally normalised cross-class~(cross-individual) coordinate system over \textit{both} descriptors DoG$_N$ and normals, i.e. a global `fin space', in which we embed fin descriptors along the dimensions of descriptor type, spatial location and spatial extent on the fin contour, as well as along feature scale. The resulting $4D$ fin space is illustrated in Figure~\ref{fig:locLenFreq}. \begin{figure}[b] \centering \includegraphics[width=0.45\textwidth]{Fig12.pdf} \caption{FIN SPACE AND LOCALISATION OF INDIVIDUALITY. Organising visual descriptors indexed over spatial location (x-axes) and extent on the fin (dotted line marks fin tip), and filter scale (rows) allows for the learning of population-wide distinctiveness properties associated with the anatomic fin locations. Colouration depicts the distinctiveness of bins with respect to animal individuality, as quantified by classification AP at the subsection level.} \label{fig:locLenFreq} \end{figure} This space allows for reasoning about and learning of population-wide properties using anatomically interpretable dimensions; be that to 1) quantify the distinctiveness of feature descriptors by their type, location or extent on the fin, or to 2) move from a non-parametric and linear method of cue combination to one that non-linearly learns how to combine indexed evidence from across the fin space. Importantly, this entails learning a single model for the species, one which can be seen as characterising a species-specific pattern of individual distinctiveness, and not one that learns a pattern of uniqueness solely for any given individual. \\ \ \\ \textbf{Embedding Descriptors into Fin Space.} The fabric of the proposed fin space can be described as subdividing the leading and trailing edges of fins into ($n=5$) equally sized partitions\footnote{As the lengths of the two edges of the fin are not necessarily the same, the size of the partitions on the leading edge is not necessarily the same as that of those on the trailing edge.}. We then consider every connected combination of partitions, yielding $55$ spatial bins for each of the two feature types. As illustrated in Figure~\ref{fig:spatialMapping}, fin subsections can be mapped to spatial bins by first assigning them to partitions -- a subsection is said to occupy a partition if it occupies more than half of it.
Finally, each subsection is assigned to the spatial bin that corresponds to the set of partitions it occupies. Scale-space partitioning is achieved by dividing filter scale into five bins. \vspace{-8pt} \begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{Fig13.pdf} \caption{SPATIAL EMBEDDING OF FIN PATTERNS. Example of a subsection mapped to a spatial bin (shown in yellow) covering 3 partitions.\vspace{-20pt} } \label{fig:spatialMapping} \end{figure} \\ \ \\ More formally, this yields an overall set of scale bins given by $B=\{(\sigma_1^g,\sigma_2^g],...,(\sigma_k^g,\sigma_{k+1}^g],...,(\sigma_{m-1}^g,\sigma_m^g]\}$, where the set of filter scales is $S^g = \{\sigma_1^g,...,\sigma_k^g,...,\sigma_m^g\}$. Here $g$ denotes that filter scale is considered as a proportion of the fin contour length~globally. Defined globally, the filter scale of the $i^{th}$ subsection descriptor computed at scale $j$ (as in Equation~\ref{eq:nbnn2}) can be expressed as $\sigma^g_{i,j} =\nicefrac{\sigma_j}{l_n} \cdot p$, where $p$ expresses the length of the subsection as a proportion of the length of the fin contour, and $l_n$ is the number of samples used to encode the subsection. Having computed $\sigma^g_{i,j}$, the descriptor is mapped to the corresponding bin. \vspace{-16pt} \section{Non-Linear Model Exploiting Fin Space} \vspace{-8pt} \label{sec:ID} In this section we show that learning distributions of reliable match locations in fin space can significantly improve identification rates compared to the baseline. This reflects the fact that certain feature \textit{combinations} in fin space are common and not individually discriminative in sharks, whilst others are highly distinctive. To implement a practical approach that captures such species-specific information, we learn a non-linear map from patterns of matching locations in fin space to likelihoods of reliability for identification. \\ \ \\ \textbf{Obtaining Scoring Vectors from Fin Space.} As in the baseline case, for each query descriptor (representing the input image) and for each class (representing the individuals), we find the nearest reference descriptor in that class, i.e. perform max-pooling over the class. As described in Section \ref{sec:LNBNN}, based on the distance to that nearest neighbour and the distance to the nearest neighbour in \textit{another} class, we compute a local match score according to Equation~\ref{eq:localScore}. Now, instead of sum-pooling local match scores over class labels directly, as performed in Equations~\ref{eq:nbnn1} and~\ref{eq:nbnn2}, we first project local match scores into fin space via their associated reference descriptors, and then perform sum-pooling over fin space bins (see Figure~\ref{fig:overview3}). As a result, for each class and for each discrete fin space location, we obtain a score. These scores form a vector of dimensionality equal to the cardinality of fin space. Each query-class comparison thus yields one such vector.\\ \ \\ \textbf{Learning a Non-Linear Identification Model.} The described procedure rearranges matching information so that the scoring pattern as observed spatially and in scale-space along the fin, as well as over descriptor types, is made explicit by the scoring vector. We now analyse the structure of scoring vectors over an entire population of fins to learn and predict their reliability for inferring animal identity.
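To make the bookkeeping explicit, the sketch below illustrates how such a scoring vector may be assembled and fed to a classifier. It is a minimal illustration under the binning described above; the bin enumeration, the synthetic placeholder data and all identifiers are ours, and the random forest configuration is an assumption rather than the exact setup used in our experiments.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_PART = 10                          # 5 leading- + 5 trailing-edge partitions
SPANS = [(i, j) for i in range(N_PART)
         for j in range(i, N_PART)]  # the 55 connected partition spans
N_SCALE, N_TYPE = 5, 2               # scale bins and descriptor types
DIM = N_TYPE * N_SCALE * len(SPANS)  # cardinality of fin space

def scoring_vector(matches):
    """matches: (f, first_part, last_part, scale_bin, desc_type) tuples
    for one query-class comparison, where f is the local match score."""
    v = np.zeros(DIM)
    for f, p0, p1, sb, dt in matches:
        b = (dt * N_SCALE + sb) * len(SPANS) + SPANS.index((p0, p1))
        v[b] += f                    # sum-pool scores within each bin
    return v

# Synthetic placeholder data, for illustration only: one scoring vector
# per query-class pair, labelled 'query-is-same-class' (1) or not (0).
rng = np.random.default_rng(0)
X = rng.random((200, DIM))
y = rng.integers(0, 2, size=200)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
p_same = rf.predict_proba(X[:1])[0, 1]   # learned reliability of the pattern
\end{verbatim}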
This procedure is designed to selectively combine descriptor evidence (see Section \ref{sec:LNBNN}), exploit the observed variance in local distinctiveness (see Figure~\ref{fig:locLenFreq}), and address potential correlations between features in fin space. To allow for complex, non-linear relationships between scoring structures and identification reliability, we select a random forest classifier to implement the mapping. Practically, we train the random forest to map from query-class scoring vectors to probability distributions over binary match category labels `query-is-same-class' and `query-is-not-same-class'. Importantly, when performing two-fold cross-validation, the dataset is split randomly by individual, and not by query, for training and evaluating the classifier. This ensures that what is learned generalises across the species and does not over-fit the individuals in the present dataset. \\ \ \\ \textbf{Final Results.} Evaluation is performed by reporting AP and precision-recall curves over the same 2371 queries as used to obtain the identification baselines in Section~\ref{sec:LNBNN}. We present the results in Figure~\ref{fig:PR_RF}. It can be seen that, overall, the final fin space approach achieves an AP of $0.81$, representing 7\% and 12\% performance gains over the DoG$_N$ and normal baselines, respectively. The results also clearly demonstrate the benefit of selectively combining both descriptor types -- precision measures are improved or maintained across the entire recall spectrum for a combined, dual descriptor approach. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{Fig14.pdf} \caption{COMPARISON OF BASELINE (top) AND FIN-SPACE IDENTIFICATION SCHEME (bottom). The two paradigms are illustrated proceeding from left to right. By associating descriptor matching scores~(left column) to reference locations in a global fin space (colouration), the improved scheme (bottom) accumulates information not into a single, class-specific scalar (top approach), but forms a scoring vector that encodes the pattern of matchings over fin space. Identity is then judged via a random forest based on the learned reliability of the matching patterns. } \label{fig:overview3} \end{figure} \begin{figure}[hb] \centering \vspace{-12pt} \includegraphics[width=0.4\textwidth]{Fig15.pdf} \vspace{-8pt} \caption{RESULTS OF IDENTIFICATION USING THE FIN SPACE APPROACH. Precision-recall curves reported considering each of the descriptor types separately (effectively training the random forest on only half the fin space), as well as considering the full dual descriptor set. } \vspace{-10pt} \label{fig:PR_RF} \end{figure} \section{Conclusions and Future Work} \vspace{-10pt} \label{sec:conclusions} We have described a vision framework for automatically identifying individual great white sharks as they appear in unconstrained imagery as used by white shark researchers. To do so, we have first described a contour stroke model that partitions ultrametric contour maps and detects fin objects based on the resulting open contour descriptions. We have shown that this process simultaneously generates fin object candidates and separates them from background clutter. Secondly, a multi-scale and combinatorial method for encoding smooth object boundaries biometrically has been described.
In combination with an LNBNN classifier, the method is both discriminative and robust, and shows individual shark fin identification performance at a level of AP=$0.76$ when employed using a multi-scale DoG descriptor in a one-shot-learning paradigm. Thirdly, we have introduced a domain-specific `fin space' which indexes fin shapes spatially, by filter scale and along descriptor types. We have measured the distinctiveness for individual shark identification of different regions in this space, providing some insight into the distribution of individuality over the fin. Finally, we have proposed a shark fin identification framework that achieves an AP of $0.81$, outperforming the baseline system published in~\cite{Hughes2015}. In essence, we achieved this improvement by introducing a non-linear recognition model, which integrates different descriptors and operates based on a population-wide, learned model for predicting identification reliability from matching patterns in fin space. For the species at hand, we conclude practical applicability at accuracy levels ready to assist human identification efforts without a need for any manual labelling. The approach may therefore be integrated to enhance large scale citizen science \citep{zoo2011,ibe2015,Duyck2014} for ecological data collection of white sharks. A related project to make this work available to the biological research community is underway~\citep{SoS2016}. Furthermore, we expect our framework to generalise to other classes of smooth biometric entity, in particular marine life exhibiting individually distinctive fin and fluke contours, such as various other species of shark and whale, e.g. humpback whales \citep{ranguelova04}. \vspace{-18pt} \section*{Dataset}\vspace{-8pt} The dataset ``FinsScholl2456'' containing 2456 images of great white sharks and their IDs was used in this paper. Since the authors and host institution hold no copyright, to obtain a copy please directly contact: \\ Michael C. Scholl, Save Our Seas Foundation (CEO), Rue Philippe Plantamour 20, CH-1201, Geneva, Switzerland; \ Email: Michael@SaveOurSeas.com \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{Fig16.pdf} \caption{LNBNN INDIVIDUAL IDENTIFICATION EXAMPLES: left images are queries and right ones are predicted individuals. Coloured lines indicate start and end of the ten sections contributing most evidence for the matched individual. For illustration of false matches, bottom three rows, left pairs, show misidentifications while correct matches are shown right. All example matches are obtained using multiscale DoG\textsubscript{N} descriptors combined using the LNBNN classifier. Out of respect for their privacy, the human subject appearing in row 3, column 2, was masked out of the image prior to publication, but only after fin detection and photo-identification results had been obtained.} \label{fig:matches} \end{figure*} \clearpage \vspace{-8pt} \section*{Acknowledgements}\vspace{-12pt} B.H. was supported by EPSRC grant EP/E501214/1. We gratefully acknowledge Michael C. Scholl and the Save Our Seas Foundation for allowing the use of fin images and ground truth labels.\vspace{-8pt} \bibliographystyle{spbasic}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:introduction} 5G systems will adopt millimeter wave (mmWave) frequency bands to meet the capacity demand for future mobile broadband applications and new use cases\cite{TWC_01, TWC_02,TAP_4}. However, the high path loss and sensitivity to blockages~\cite{ TAP_2, TAP_15, TAP_3}, channel state information acquisition challenges~\cite{TAP_13}, hardware limitations and other difficulties~\cite{TAP_20} make it challenging to provide high user rate at high frequencies without shrinking the traditional cell coverage range. The critical part of high frequency links is the antenna and associated beamforming method. High beamforming gain is essential to combat the severe path loss such that Gbps throughput over long distance and coverage in non-line of sight (NLOS) areas can be realized. Full digital beamforming, capable of altering both amplitude and phase for each antenna element, is costly as it requires a dedicated RF chain for every antenna element and powerful baseband processing. Analog or hybrid beamforming with a limited number of RF chains will be used in most of the products intended for mm-Wave frequency bands. However, owing to the channel angular spread and the limited number of RF chains, the effective beamwidth of the antenna will be ``widened'' by the channel, as illustrated in Fig.~\ref{fig:effective_pattern}, resulting in reduced effective beamforming gain. This can be intuitively understood by an analogy of lighthouse beacons being scattered in fog, leading to shortened reach. A sample measured beam pattern, presented in Fig.~\ref{fig:effective_pattern}, shows 4.5 dB gain reduction as compared to its nominal gain of 14.5 dBi (as measured in an anechoic chamber). Previous measurement campaigns have reported significant loss of directional gain in various deployment scenarios, including suburban fixed wireless access (FWA)~\cite{TWC_2, TWC_4}, indoor offices~\cite{TWC_3}, and industrial factories~\cite{TWC_6}, where up to 7 dB gain reduction (90th percentile) out of 14.5 dBi nominal gain was reported. \begin{figure} \centering \includegraphics[width=0.9\figwidth]{effective_pattern}\\ \includegraphics[width=0.65\figwidth]{Beam_Pattern_NLOS_suburban} \caption{Illustration of angular spread in a NLOS multi-path propagation channel (upper) and the phenomenon of ``widened'' effective beamwidth from a NLOS measurement at 28 GHz (lower), where a 4.5 dB gain reduction has been observed compared to its nominal gain of 14.5 dBi.} \label{fig:effective_pattern} \end{figure} Angular spread has been widely acknowledged and carefully modeled for wireless communications, for example, by the 3rd Generation Partnership Project (3GPP)~\cite{TWC_5}. It is different in azimuth and in elevation for most relevant deployment scenarios, and a chart of the root-mean-square (RMS) angular spread (its mean and associated 10\% to 90\% range) for base station (BS) and for outdoor user equipment (UE) is presented in Fig.~\ref{fig:3GPP_spread}, created based on 3GPP channel models~\cite{TWC_5} for 28 GHz with BS-UE distance of 100 m\footnote{Angular spreads are not sensitive to frequency or distance in~\cite{TWC_5}.}. Such a difference has also been observed in other channel models developed by mmMagic, METIS, and NYU Wireless~\cite{R4-1706876}.
\begin{figure} \centering \includegraphics[width=0.9\figwidth]{angular_spread_3GPP} \caption{A chart of RMS angular spread (mean value and the corresponding range of 10\% to 90\%) for BS and for outdoor UE using 3GPP channel models~\cite{TWC_5} for 28 GHz with BS-UE distance of 100 m.} \label{fig:3GPP_spread} \end{figure} However, the impact of channel angular spread on system design, planning and performance evaluation has not been well understood. The prevailing practice for link budget calculation, inter-site interference and co-existence studies is to use the nominal antenna pattern rather than the effective pattern, leading to inaccurate received power and interference level estimation. Although highly directional antennas have been used for backhaul links, they are usually installed at high heights with an almost clear direct line-of-sight (LOS) path and close to zero angular spread. This is in contrast to mobile or fixed wireless access applications where the antennas might be below the average clutter height and the impact of angular spread could be significant. \subsection{Our Contribution} In this paper, we focus on wireless access deployment scenarios where large antenna arrays are deployed to improve the link budget. We take advantage of the difference in elevation and azimuth angular spread and formulate the analog beamforming as a constrained optimization problem to maximize the effective beamforming gain. We derive a closed form solution of the optimal array geometry, whose nominal beam pattern turns out to match the given channel angular spread. The potential gain of the optimal array over a square array of the same size is demonstrated by system level simulations using 3D channel models. Furthermore, we also propose a method of estimating channel angular spread in azimuth and in elevation using as few as three power measurements, and validate its accuracy via lab measurements using a $16{\times}16$ phased array at 28 GHz. The capability of estimating angular spread and optimizing beam pattern on the fly enables dynamic directional beam configuration, and it helps to achieve high effective gain using low cost analog/hybrid beamforming implementation. We also demonstrate a few examples where substantial gain can be achieved through array geometry optimization. To the best of our knowledge, this work is the first of its kind in matching antenna pattern with channel angular spreads to improve effective directional gain, which is essential and critical to ensure sufficient link budget in real deployment. \subsection{Related Work} Some recent works have provided preliminary investigations on the impact of channel angular spread for channel modeling, link budget analysis, and system performance evaluation. The mismatch between nominal antenna gain and received power level was observed in various channel measurements with directional antennas, and such antenna specific variation was embedded directly into ``directional'' path loss models~\cite{model_1, model_2, model_3}, which leads to a different path loss model for each combination of transmit and receive directive antennas. This is in contrast to the ``omni'' path loss models widely adopted by industrial standards such as~\cite{TWC_5} where the propagation channel is characterized free from any antenna assumptions and the path loss is modeled as it would be observed with ideal omni antennas at both the transmitter and the receiver.
For example, in~\cite{TWC_3, TWC_4, TWC_6} the effective gain reduction caused by angular spread is modeled separately from the ``omni'' path loss channel models. Reduction of directional gain and capacity by azimuth angular spread has been evaluated in \cite{JSAC13} for single/multiple sector beams, and the impact of angular spread in azimuth and in elevation for mmWave square arrays has been studied in \cite{TWC_1} for Gbps coverage with wireless relayed backhaul. System level simulations of mobile networks in \cite{TWC_04} have demonstrated up to 40\% deviation from the realistic value of Long Term Evolution (LTE) downlink throughput when the nominal antenna pattern is assumed instead of the effective antenna pattern. A study of a 5G scenario with analog beamforming in the mm-Wave range was presented in \cite{TWC_05}, where the radio link budget for the serving and interfering links was evaluated for both nominal and effective antenna gains. The impact of 3GPP 3D channel models on effective antenna array patterns has been visualized in \cite{TWC_06}, and it was found that the downlink Signal to Interference and Noise Ratio (SINR) can be overestimated by 10 to 17 dB in NLOS scenarios when using the nominal beam pattern rather than the effective pattern. The impact of angular spread on the efficiency of the tapering method has been evaluated via simulations \cite{TWC_07}, which indicate that the first side-lobe suppression level (SSL) can decrease to 16 dB in line of sight (LOS) conditions, or even to 2 dB in NLOS, in comparison to an SSL of 20 dB for the nominal antenna pattern. \subsection{Paper Organization} A brief description of the system model is in Sec.~\ref{sec:model} and array geometry optimization is presented in Sec.~\ref{sec:optimization}. System level simulation and lab measurements are reported in Sec.~\ref{sec:sim}. Several potential applications are discussed in Sec.~\ref{sec:application} and conclusions are in Sec.~\ref{sec:conclusion}. \section{System Models}\label{sec:model} To simplify presentation, we focus exclusively on beamforming over a uniform planar array where elements are separated by half a wavelength. This configuration facilitates simple and direct representation of the nominal beam pattern by the underlying array size and array geometry. The same concept and methodology apply to other array types and beamforming methods, where the RMS beamwidth of the beam pattern should be used for optimization. We consider the case of high gain antennas whose beam pattern can be approximately characterized by Gaussian functions\footnote{Such approximation has been widely adopted in standard specifications such as 3GPP~\cite{TWC_5}. Empirical observation indicates that the main lobe can be well approximated by Gaussian function for antenna gain as low as 5 dBi.} both in azimuth and in elevation~\cite{TWC_1} \begin{align} g(\phi,\theta)=\frac{2}{B_h B_v} e^{-\frac{\phi^2}{2B^2_h}} e^{- \frac{\theta^2 }{2 B^2_v} }, \label{eqn:pattern} \end{align} where $B_v$ and $B_h$ are the RMS beamwidths (in radians) in elevation and in azimuth, respectively. The directional gain, defined as the peak to average power ratio of the antenna pattern, is determined by the RMS beamwidths as~\cite{TWC_1} \begin{align} G =\frac{2}{B_h B_v }. \label{eqn:3} \end{align} In the absence of scattering, the RMS beamwidths are set, correspondingly, to their nominal values $B_{v0}$ and $B_{h0}$, which can be determined from measurement in an anechoic chamber. In the presence of scattering, signals may come from multiple directions.
The received signal along a certain direction is the circular convolution of the nominal antenna pattern and the channel power angular response~\cite{TWC_3}. Assume, for tractability, that the channel angular spectrum, with RMS azimuthal angular spread (ASD) $\sigma_h$ and RMS elevation angular spread (ZSD) $\sigma_v$, can be modeled as Gaussian functions with variances $\sigma_h^2$ and $\sigma_v^2$, respectively. The effective antenna pattern, which is a circular convolution of two independent Gaussian signals, still has the Gaussian form of \eqref{eqn:pattern} but with effective RMS beamwidths given by \begin{align} B_v=\sqrt{B_{v0}^2+\sigma_v^2 }, \ B_h=\sqrt{B_{h0}^2+\sigma_h^2}. \label{eqn:4} \end{align} Therefore, we can determine the effective beamforming gain based on the nominal antenna pattern and the channel angular spread. As a result, when the number of antenna elements increases, the effective gain in a scattering environment is always smaller than the nominal gain, and will saturate\footnote{When there are as many RF chains as the number of antenna elements, generalized beamforming has the potential to provide effective gain that grows linearly with the number of elements, provided that perfect channel state information is available.} at the limit imposed by the channel angular spread. \section{Array Geometry Optimization and Angular Spread Estimation}\label{sec:optimization} \subsection{Theoretical Derivation of Optimal Array Geometry}\label{sec:Geom_opt} We focus on analog/RF beamforming where there are in total $N$ antenna elements, arranged in rectangular/square shape to form a uniform planar array of size $(K_1, K_2)$, with \begin{align} K_1K_2 \leq N. \label{eqn:5} \end{align} An array of $(K_1, K_2){=}(1,N)$ corresponds to a horizontally deployed uniform linear array whereas $K_2{=}1$ indicates a vertically deployed uniform linear array. Since the effective beamforming gain depends on the panel geometry $(K_1, K_2)$, the nominal beamwidths $B_{ve}$ and $B_{he}$ of the antenna elements, and the channel angular spreads $\sigma_h$ and $\sigma_v$, we can optimize the array geometry $(K_1, K_2)$ to maximize the effective beamforming gain subject to the size constraint \eqref{eqn:5}. \begin{theorem} \label{theorem:gain} Ignoring the integer constraint on the array dimensions $K_1$ and $K_2$, the effective beamforming gain of an antenna array with $N$ elements is upper bounded as \begin{align} G(N, B_{ve},B_{he},\sigma_v,\sigma_h ) \leq \frac{2}{\sigma_h\sigma_v + \frac{B_{ve}B_{he} }{N}}, \end{align} with equality if and only if the array geometry is given by \begin{align} K_1=\sqrt{ \frac{N B_{ve}\sigma_h}{B_{he}\sigma_v }}, \ K_2=\sqrt{\frac{N B_{he}\sigma_v}{B_{ve}\sigma_h }}. \label{eqn:opt_geometry} \end{align} \end{theorem} \begin{IEEEproof} See Appendix \ref{app:Proof}. \end{IEEEproof} The integer pair closest to the $(K_1, K_2)$ specified by \eqref{eqn:opt_geometry} and satisfying the total-element constraint \eqref{eqn:5} gives the best analog beamforming gain. Note that the ratio between the optimal RMS azimuth and elevation beamwidths equals the ratio of the channel RMS spreads in azimuth and in elevation, i.e., \begin{align} \frac{B_{h0}}{B_{v0}} =\frac{B_{he}/K_2}{ B_{ve}/K_1} = \frac{\sigma_h}{\sigma_v}. \label{eqn:match} \end{align} Hence, the optimal beam pattern (generated by the optimal array geometry) matches the channel angular spread in both azimuth and elevation, as illustrated in Fig.~\ref{fig:match}.
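As a quick numerical illustration of Theorem~\ref{theorem:gain}, the optimal geometry and the effective gain can be evaluated as in the short Python sketch below (our own naming; the element beamwidth relation $B_{ve}=B_{he}=\sqrt{2/G_e}$ is taken from Appendix~\ref{app:Proof}). For $N=256$ elements of 5 dBi gain and the UMa NLOS spreads quoted in Sec.~\ref{sec:sim} (ASD $22^\circ$, ZSD $5^\circ$), it returns $(K_1,K_2)\approx(33.6,7.6)$, i.e., the $32{\times}8$ tall array discussed there.
\begin{verbatim}
import numpy as np

def optimal_geometry(N, asd_deg, zsd_deg):
    """Theorem 1: real-valued optimal (rows, columns); round to the
    closest integer pair with K1*K2 <= N in practice. For square
    elements B_ve = B_he cancels out of the ratio."""
    s_h, s_v = np.radians(asd_deg), np.radians(zsd_deg)
    K1 = np.sqrt(N * s_h / s_v)      # elevation (rows)
    return K1, N / K1                # azimuth (columns)

def effective_gain_dBi(K1, K2, G_e_dBi, asd_deg, zsd_deg):
    """Effective analog beamforming gain of a (K1, K2) panel."""
    B = np.sqrt(2.0 / 10 ** (G_e_dBi / 10))   # element RMS beamwidth (rad)
    s_h, s_v = np.radians(asd_deg), np.radians(zsd_deg)
    G = 2.0 / (np.sqrt((B / K1) ** 2 + s_v ** 2)
               * np.sqrt((B / K2) ** 2 + s_h ** 2))
    return 10 * np.log10(G)

print(optimal_geometry(256, asd_deg=22, zsd_deg=5))   # ~(33.6, 7.6)
print(effective_gain_dBi(32, 8, 5, 22, 5))            # gain of the 32 x 8 array
\end{verbatim}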
\begin{figure} \centering \includegraphics[width=0.9\figwidth]{beam_match} \caption{The optimal beam pattern (and the underlying array geometry using a uniform planar array) should match the channel angular spread as prescribed by \eqref{eqn:match} to maximize the effective analog beamforming gain. } \label{fig:match} \end{figure} \begin{remark}\label{remark:implementation} The optimal geometry that provides the maximal effective gain is determined for the given angular spread and number of elements. The actual implementation might not be exactly what the optimal solution suggests, due to implementation difficulties or cost constraints. For example, RF design would prefer symmetric circuits and antenna element placement, and the use of splitters in the feed network may limit the granularity of array geometry options. Nevertheless, the beam pattern should match the angular spread as closely as possible, as prescribed in \eqref{eqn:match}, after balancing all the tradeoffs. \end{remark} \begin{remark}\label{remark:other_antenna_types} The array geometry optimization for uniform planar arrays is also applicable to other types of directional antennas (like horn, reflector antennas, plasma antennas, etc.) or antenna arrays using non-directional elements (dipole, monopole, etc.), where the optimal antenna (array) is designed by optimizing the beam pattern in azimuth and in elevation to achieve the maximal effective antenna gain in a given channel. \end{remark} \subsection{Theoretical Derivation of Angular Spread Estimation}\label{sec:spread_estimation} When the channel angular spread (ASD $\sigma_h$ and/or ZSD $\sigma_v$) is unknown or time varying, the effective gain of a rectangular-shaped sub-array can be determined in real time from the measured signal strength using three or more different sub-array configurations, as detailed below. For a uniform planar array of size $(N_1, N_2)$, i.e., with $N_1$ rows and $N_2$ columns, we can measure the signal strength of three sub-panels of size $(n_1, k_1)$, $(n_1, k_2)$, and $(n_2, k_1)$, where $n_1, n_2\leq N_1$, and $k_1, k_2 \leq N_2$. The effective gains of the corresponding sub-arrays, which depend on $(B_{ve},B_{he},\sigma_v,\sigma_h)$ but are not shown explicitly to simplify notation, can be written as \begin{align} G(n_1,k_1)=\frac{2}{\sqrt{(B_{ve}/n_1)^2+\sigma_v^2 } \sqrt{(B_{he}/k_1)^2+\sigma_h^2 }}, \label{eqn:11} \\ G(n_1,k_2)=\frac{2}{\sqrt{(B_{ve}/n_1)^2+\sigma_v^2 } \sqrt{(B_{he}/k_2)^2+\sigma_h^2 }}, \label{eqn:12}\\ G(n_2,k_1)=\frac{2}{\sqrt{(B_{ve}/n_2)^2+\sigma_v^2 } \sqrt{(B_{he}/k_1)^2+\sigma_h^2 }}. \label{eqn:13} \end{align} By combining \eqref{eqn:11} and \eqref{eqn:12} we have \begin{align} \frac{G(n_1,k_1)}{G(n_1,k_2)} & =\frac{\sqrt{(B_{he}/k_2)^2+\sigma_h^2 }}{\sqrt{(B_{he}/k_1)^2+\sigma_h^2 }}, \label{eqn:14} \end{align} from which we can obtain \begin{align} \left[\frac{G^2(n_1,k_2)}{G^2(n_1,k_1)}-1\right]\left(\frac{\sigma_h}{B_{he}}\right)^2 = \frac{1}{k_1^2} - \frac{G^2(n_1,k_2)}{k_2^2G^2(n_1,k_1)}, \label{eqn:15} \end{align} leading to an estimate of the normalized ASD, in its squared form, \begin{align} \left(\frac{\sigma_h}{B_{he}}\right)^2 = \frac{1/k_1^2 - {G^2(n_1,k_2)}/{(k_2^2G^2(n_1,k_1))}}{ {G^2(n_1,k_2)}/{G^2(n_1,k_1)}-1}.
\label{eqn:16} \end{align} Similarly, by combining \eqref{eqn:11} and \eqref{eqn:13} we obtain an estimate of the normalized ZSD, in its squared form, as \begin{align} \left[\frac{G^2(n_2,k_1)}{G^2(n_1,k_1)}-1\right]\left(\frac{\sigma_v}{B_{ve}}\right)^2 = \frac{1}{n_1^2} - \frac{G^2(n_2,k_1)}{n_2^2G^2(n_1,k_1)}, \label{eqn:17}\\ \left(\frac{\sigma_v}{B_{ve}}\right)^2 = \frac{1/n_1^2 - {G^2(n_2,k_1)}/{(n_2^2G^2(n_1,k_1))}}{ {G^2(n_2,k_1)}/{G^2(n_1,k_1)}-1}. \label{eqn:18} \end{align} If there are more measurements using different sub-arrays, each such pair would provide an estimate of the normalized ASD or ZSD, and such estimates should be combined by treating each of them as one realization of \eqref{eqn:15} or \eqref{eqn:17} for ASD and ZSD, respectively. Then all the equations formulated using \eqref{eqn:15} are treated as an overdetermined linear system for ASD, and all the equations formulated using \eqref{eqn:17} are treated as an overdetermined linear system for ZSD. Given $n$ independent measurements of ASD established by \eqref{eqn:15}, we denote by $a_i$ and $b_i$ the corresponding constants on the left-hand side (LHS) and the right-hand side (RHS), respectively, of the ASD estimation \eqref{eqn:15}, for pair $i=1,\ldots,n$. Similarly, denote by $c_j, d_j$, $j=1,\ldots,l$, the LHS and RHS constants, respectively, of the ZSD estimation \eqref{eqn:17}. We will have \begin{align} \left(\frac{\sigma_h}{B_{he}}\right)^2 \bar{\boldsymbol{a}} = \bar{\boldsymbol{b}}, \ \left(\frac{\sigma_v}{B_{ve}}\right)^2 \bar{\boldsymbol{c}} = \bar{\boldsymbol{d}}, \label{eqn:19} \end{align} where \begin{align*} \bar{\boldsymbol{a}}\triangleq [a_1,\ldots,a_n]^T, \ \bar{\boldsymbol{b}}\triangleq [b_1,\ldots,b_n]^T, \\ \bar{\boldsymbol{c}}\triangleq [c_1,\ldots,c_l]^T, \ \bar{\boldsymbol{d}}\triangleq [d_1,\ldots,d_l]^T. \end{align*} Then we can apply the classical least-squares estimator to obtain the improved estimates of the normalized ASD and ZSD, in their squared form, as \begin{align} \left(\frac{\sigma_h}{B_{he}}\right)^2 = \frac{\bar{\boldsymbol{a}}^T\bar{\boldsymbol{b}}}{\bar{\boldsymbol{a}}^T\bar{\boldsymbol{a}}}, \ \left(\frac{\sigma_v}{B_{ve}}\right)^2 = \frac{\bar{\boldsymbol{c}}^T\bar{\boldsymbol{d}}}{\bar{\boldsymbol{c}}^T\bar{\boldsymbol{c}}}. \label{eqn:20} \end{align} Estimators other than the least-squares estimator used in \eqref{eqn:20} can also be applied to trade off accuracy, complexity, and robustness. Note that a legitimate estimate of the squared ASD and ZSD should always be non-negative, but the estimates obtained using \eqref{eqn:16}, \eqref{eqn:18}, or \eqref{eqn:20} might be negative because of estimation noise. Therefore, any estimate whose value is negative should be replaced by zero. With the estimates from \eqref{eqn:16}, \eqref{eqn:18}, or \eqref{eqn:20}, the effective gain of a sub-array of size $(m_1, m_2)$ can be estimated as \begin{align} {G(m_1,m_2)} = {G(n_1,k_1)}\frac{ \sqrt{\frac{1}{n_1^2} + \frac{\sigma_v^2}{B_{ve}^2}} \sqrt{\frac{1}{k_1^2} + \frac{\sigma_h^2}{B_{he}^2}}} {\sqrt{\frac{1}{m_1^2} + \frac{\sigma_v^2}{B_{ve}^2}} \sqrt{\frac{1}{m_2^2} + \frac{\sigma_h^2}{B_{he}^2}}}. \label{eqn:21} \end{align} \section{Numerical Evaluation, System Level Simulation and Lab Measurements}\label{sec:sim} In this section we demonstrate the benefits of array geometry optimization by numerical results, system level simulations and lab measurements using a 28 GHz phased array with 256 elements.
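The estimation procedure of Sec.~\ref{sec:spread_estimation} can be summarised in a few lines of code. The sketch below is ours and purely illustrative: gains are linear (not in dB), the ZSD estimate follows by the symmetry noted in the comments, and negative estimates are clipped to zero as discussed above.
\begin{verbatim}
import numpy as np

def asd_sq_estimate(G11, G12, k1, k2):
    """Single-pair estimate of (sigma_h / B_he)^2, Eq. (16), from the
    measured gains G(n1,k1) and G(n1,k2) of two sub-arrays sharing n1
    rows; the ZSD estimate, Eq. (18), is obtained analogously from
    G(n1,k1) and G(n2,k1)."""
    r2 = (G12 / G11) ** 2
    return max(0.0, (1.0 / k1 ** 2 - r2 / k2 ** 2) / (r2 - 1.0))

def ls_combine(a, b):
    """Least-squares combination, Eq. (20), of pair estimates a_i*x = b_i."""
    a, b = np.asarray(a), np.asarray(b)
    return max(0.0, float(a @ b / (a @ a)))

def predict_gain(G11, n1, k1, m1, m2, zsd_sq, asd_sq):
    """Predicted effective gain of an (m1, m2) sub-array, Eq. (21)."""
    num = np.sqrt(1 / n1 ** 2 + zsd_sq) * np.sqrt(1 / k1 ** 2 + asd_sq)
    den = np.sqrt(1 / m1 ** 2 + zsd_sq) * np.sqrt(1 / m2 ** 2 + asd_sq)
    return G11 * num / den
\end{verbatim}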
\subsection{Numerical Evaluation} The effective beamforming gain for analog beamforming (i.e., one RF chain) using uniform planar arrays with $256$ antenna elements at 28 GHz is shown in Fig.~\ref{fig:analog_BF} for both the UMa NLOS scenario (blue line) and the UMi Street Canyon LOS scenario (red line), where the angular spreads of the radio channels are from the 3GPP models~\cite{TWC_5} assuming a BS-UE distance of 100 m. The effective gains obtained using \eqref{eqn:effective_gain} for a set of different array geometries are marked by markers and connected by solid curves to illustrate the general trend of effective gain with respect to array geometry. The optimal array geometries for each channel, designed based on Theorem~\ref{theorem:gain}, are highlighted in the plot using black triangles. With a total of 256 elements, 5 dBi each, the ideal gain obtained by digital beamforming with full channel state information would be $29.1$ dBi. In scenarios where angular spread is moderate, such as the 3GPP UMi Street Canyon LOS with mean ASD of $14^\circ$ and ZSD of $0.6^\circ$, a $64\times 4$ tall array (very close to the optimal geometry of $85\times3$) is 4 dB better than the $16\times16$ square array, and 16 dB better than a $1\times 256$ fat array. In a different environment such as the 3GPP UMa NLOS case, which is characterized by larger angular spreads (mean ASD of $22^\circ$ and ZSD of $5^\circ$), a $32\times8$ tall array (optimal) is 0.5 dB better than a $16\times16$ square array, and 9 dB better than a $1\times256$ fat array. This shows how important it is to match the antenna beam pattern to the radio channel and highlights the benefit of adapting the antenna beam pattern to the particular angular spreads of the radio channel. \begin{figure} \centering \includegraphics[width=0.9\figwidth]{effective_gain_analog_BF} \caption{Effective beamforming gain of \eqref{eqn:effective_gain} for analog beamforming using a uniform planar array with $256$ elements at 28 GHz with BS-UE distance of 100 meters for both the UMa NLOS scenario (blue line) and the UMi Street Canyon LOS scenario (red line) using 3GPP models~\cite{TWC_5}. The optimal array geometries from Theorem~\ref{theorem:gain} are highlighted as black triangles.} \label{fig:analog_BF} \end{figure} \subsection{System Level Simulation Using 3D Channel Models} The system level simulation was performed to examine the accuracy of the theoretical analysis presented in Sec.~\ref{sec:optimization} with the full 3D spatial statistical channel model, as specified in 3GPP TR 38.901~\cite{TWC_5}, and the antenna array model with beamforming algorithm adopted from the 3GPP 5G system evaluation described in 3GPP TR 38.803~\cite{TWC_5b}. Key parameters of our system level simulation are summarized in Table~\ref{tab:sim_setup}.
\begin{table}[t] \centering \caption{Summary of key parameters of the system level simulation.} \begin{tabular}{|c|l|} \hline Parameters & Values\\ \hline Network layout & 3-ring hexagon-grid with wrap around, 200 m ISD \\ \hline Macro cell & 19 sites, each has 3 ``cells'' (location anchor, no BS)\\ \hline Micro BS & 3 cluster circles per macro; each has 1 micro BS\\ \hline BS drop & Random drop along the edge of cluster circles\\ \hline BS antenna & Uniform planar array with 128 elements (8 dBi)\\ \hline Antenna pattern & as per 3GPP TR 38.803~\cite{TWC_5b}\\ \hline BS antenna height & 10 m\\ \hline UE height & 1.5 to 22.5 m\\ \hline Number of UE & 1 per micro BS\\ \hline UE location & 20\% outdoor, 80\% indoor\\ \hline Penetration loss & 50\% high loss, 50\% low loss\\ \hline UE distribution & uniform\\ \hline BS-UE distance & minimum 3 m (2D)\\ \hline LOS probability & as per 3GPP TR 38.901~\cite{TWC_5}\\ \hline Channel model & 3GPP TR 38.901 UMi Street Canyon\\ \hline Correlation & 0.5 between sites\\ \hline \end{tabular} \label{tab:sim_setup} \end{table} \begin{table}[t] \centering \caption{Simulation results match the theoretical analysis of the effective beamforming gain of rectangular arrays with a maximum of 128 elements (each of 8 dBi gain).} \label{tab:simulation} \begin{tabular}{|c|c|c|c|} \hline BF gain [dBi] & Nominal & Analysis & Simulation\\ \hline $8 \times 16$ & 29.07 & 19.91 & 19.58\\ \hline $42 \times 3$ & 29.00 & 24.31 & 24.35\\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.9\figwidth]{analog_BF_sim.eps} \caption{System level simulation results of the effective beamforming gain of rectangular arrays with maximum 128 elements using the 3GPP 3D spatial channel model with mean ASD of $16^\circ$ and ZSD of $1^\circ$ for both the default array geometry of $8\times16$ (blue lines) and the optimal geometry of $42\times3$ (red lines). The medians of the gains obtained from the system level simulation match the values predicted by the theoretical analysis within 0.5 dB.} \label{fig:analog_BF_sim} \end{figure} The first set of simulation results aims to verify the correctness of the effective antenna gain analysis described above, for BS transmission in the downlink. For this purpose, we override some of the simulation parameters from Table~\ref{tab:sim_setup} to remove some constraints normally seen in system level simulations. More specifically, we set all UEs at a height of 10 m (same as the BS) and 60 m from their serving BS, with both the BS and UE antennas aiming towards the strongest direction on boresight. The mean ASD is fixed to $16^\circ$ and the mean ZSD to $1^\circ$ to facilitate direct comparison against the theoretical analysis. Results of the simulations are presented in Fig.~\ref{fig:analog_BF_sim} and Table~\ref{tab:simulation}. It can be noticed that the median value of the antenna gain CDF matches the analytical effective antenna gain within 0.5 dB. The second set of simulation results demonstrates the benefits of optimizing the antenna array geometry in the realistic deployment scenario described in Table~\ref{tab:sim_setup}. Two array geometries are used in the simulation, i.e., the default $8{\times}16$ arrangement and the optimal $42{\times}3$ configuration as obtained using Theorem~\ref{theorem:gain}. Simulation results for the received DL serving signal power, DL interference power, and DL SINR are presented in Fig.~\ref{fig:analog_BF_sim_UMi}.
As compared to the default $8{\times}16$ array configuration assumed by 3GPP, the optimized $42\times3$ array has demonstrated a large increase in signal power (Fig.~\ref{fig:analog_BF_sim_UMi} left) thanks to its matching to the channel angular spread, and a modest reduction in interference power (Fig.~\ref{fig:analog_BF_sim_UMi} middle) thanks to its increased vertical resolution, leading to a combined gain of 6.6 dB in median SINR (Fig.~\ref{fig:analog_BF_sim_UMi} right). Should all users/devices be distributed at the same height, the widened azimuthal beam may lead to an increase in interference and therefore a smaller SINR gain for the optimized array geometry. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{analog_BF_sim_UMi_SC.eps} \caption{System level simulation results of the DL received signal power (left), interference power (middle), and the SINR (right) of rectangular arrays for both the default array geometry of $8\times16$ (blue lines) and the optimized geometry of $42\times3$ (red lines), using the 3GPP 3D spatial channel model under the UMi Street Canyon scenario. The combined gain of signal power increase and interference power decrease leads to an increase of median SINR by 6.6 dB.} \label{fig:analog_BF_sim_UMi} \end{figure*} \subsection{Lab Measurements} \begin{figure} \centering \includegraphics[width=0.34\figwidth]{lab_meas_setup_LOS} \includegraphics[width=0.61\figwidth]{lab_meas_setup_NLOS} \caption{Lab measurement setup for both LOS (left) and NLOS (right).} \label{fig:lab_meas_setup} \end{figure} Lab measurements were carried out using a 28 GHz $16{\times}16$ array as the transmitter (Tx) and a 10 dBi horn as the receiver (Rx). Different antenna array geometries were configured by setting zero amplitude for selected antenna elements (AE). The ``muted'' antenna elements behaved like dummy elements, which have a marginal impact on the antenna pattern due to EM coupling from active AEs. However, this small impact does not influence our general conclusion. The Rx horn antenna was connected to a signal analyzer. A Tx signal with 100~MHz bandwidth was radiated from the antenna array and the received signal power was measured at the Rx side. Since different Tx sub-arrays have different Tx power, the difference in beamforming gain is determined by the difference in Rx power subtracting the difference in Tx power. This operation also eliminates the common losses (such as cable loss and connector loss) experienced by all signals. Calibration in an anechoic chamber was done using different antenna array configurations with boresight alignment. The measured total array gain with the same number of antenna elements but different geometry (e.g. $8\times8$, $16\times4$, $4\times16$ for 64 elements) was almost the same, with differences around 0.5 dB, which could be attributed to dummy element coupling effects, beam alignment offset or other measurement noise. Lab measurements, as shown in Fig.~\ref{fig:lab_meas_setup}, were carried out for both LOS and NLOS scenarios. For LOS, two rows of reflective panels are used to create a multipath-rich environment with larger angular spread in azimuth to verify the gain of optimal antenna arrays. For NLOS measurements, a metal rack and additional panels are used to increase the angular spread. The measured relative gain, using the full $16{\times}16$ array as the baseline, as well as the estimated gain based on estimated angular spreads using the methods presented in Sec.~\ref{sec:spread_estimation} (rounded to integer values), are shown in Fig.~\ref{fig:lab_meas_results}.
\begin{figure} \centering \includegraphics[width=0.95\figwidth]{lab_meas_LOS}\\ \includegraphics[width=0.95\figwidth]{lab_meas_NLOS} \caption{Lab measurement results and estimated effective beamforming gains for LOS (upper) and NLOS (lower).} \label{fig:lab_meas_results} \end{figure} The results have verified the effective antenna gain for different antenna array geometries with different numbers of antenna elements in LOS and NLOS scenarios. For example, in LOS, the $16{\times}2$ sub-array has a similar gain to the $8{\times}8$ while using half as many antenna elements. In NLOS, the effective antenna gain of the $16{\times}2$ array is only 2.2 dB worse than that of the $16{\times}16$, whereas the effective gain of the $2{\times}16$ array is $8.7$ dB worse, clearly demonstrating the need for array optimization. Furthermore, these measurement results match our estimated gains (based on estimated angular spreads) with high accuracy. These examples clearly validate our analysis on antenna array optimization and angular spread estimation. \section{Potential Applications}\label{sec:application} We present here a few potential applications where array geometry optimization can be applied to improve system performance. \subsection{Deployment Specific Array Optimization} \begin{figure} \centering \includegraphics[width=0.95\figwidth]{opt_array_UMi_NLOS.eps} \caption{Example of optimal array geometry and the effective gain as a function of array size. The ASD and ZSD are according to the specifications of the 3GPP UMi street canyon NLOS channel~\cite{TWC_5}.} \label{fig:opt_array_UMi} \end{figure} In environments where the azimuth angular spread is much larger than the elevation angular spread, which is the case for deployment scenarios covered by 3GPP channel models, a tall array with the same number of elements (e.g., 16$\times$4) may improve the signal strength by a few dB as compared to the square array (i.e., 8$\times$8), thus leading to better performance. Arrays of different geometries can be pre-designed for each typical deployment scenario, such as urban macro sites, urban micro small cells, suburban FWA, and indoor offices. For each typical deployment scenario, one may design the array geometry based on the mean value of the angular spread in such cases and exploit the fact that the spreads in azimuth and in elevation are not the same. Such a design strategy would provide a similar SNR gain over the square array for the majority of the users, as verified by our system level simulations. In Fig.~\ref{fig:opt_array_UMi} we compare the effective analog beamforming gain of the optimal array to the gain of traditional square arrays in 3GPP UMi street canyon NLOS deployment scenarios. The optimal array geometries, as labeled in the figure, are obtained according to Theorem~\ref{theorem:gain} and the corresponding effective beamforming gain is obtained using \eqref{eqn:effective_gain}. For the same number of antenna elements, 5 dBi each, the optimal array design can improve the effective beamforming gain (thus the signal strength) by 2 to 3 dB over square arrays. Configurations for other radio propagation environments with different angular spreads or other values of element gain can be obtained straightforwardly in a similar way. Since the angular spreads at the UE are much larger than those at the BS, as shown in Fig.~\ref{fig:3GPP_spread}, using large antenna arrays at the UE is inefficient in providing beamforming gain.
\subsection{Optimizing Array Geometry under EIRP Constraint} \begin{figure} \centering \includegraphics[width=0.95\figwidth]{opt_array_EIRP.eps} \caption{Example of optimal analog beamforming gain and array geometry as a function of the EIRP limit for the 3GPP indoor LOS channel~\cite{TWC_5}. } \label{fig:opt_array_EIRP} \end{figure} For devices with a strict equivalent isotropic radiated power (EIRP) limit, such as indoor AP/CPE, the maximum allowable number of antenna elements $N$ can be determined from the EIRP limit as \[N\leq10^{(\text{EIRP}- P_t-G_e)/20},\] where EIRP is in dBm, $P_t$ is the per-element transmit power in dBm and $G_e$ is the per-element gain in dBi. For example, with a per-element directional gain of 5 dBi and a per-element transmit power of 10 dBm, a maximum of 25 elements is allowed for indoor mobile stations subject to the peak 43 dBm EIRP limit imposed in the United States~\cite{FCC2016}. At a higher peak EIRP limit of 55 dBm for indoor modems, up to 100 such antenna elements can be used. In Fig.~\ref{fig:opt_array_EIRP} we plot the nominal gain, the effective gain of square arrays, and the effective gain of optimal arrays with the same number of elements, as a function of the EIRP limit, where the optimal configuration of the antenna array, obtained by applying Theorem~\ref{theorem:gain}, is as indicated in the figure. Compared to square arrays with the same EIRP limit, a 3 to 4 dB improvement of the effective beamforming gain (thus signal strength) can be achieved by array geometry optimization for 3GPP indoor LOS scenarios~\cite{TWC_5}. Configurations for other radio propagation environments with different angular spreads or other values of element gain and element power can be obtained straightforwardly following the same method. On the other hand, the improved effective gain from array geometry optimization can also be leveraged to maintain the same link budget (thus throughput) with fewer antenna elements as compared to conventional square arrays. For example, as shown in Fig.~\ref{fig:opt_array_EIRP}, a $5\times 5$ square array with 43 dBm EIRP (including 24 dBm Tx power) would have an effective gain of 13 dBi, whereas a $16\times 1$ array would have 22 dBm Tx power but an effective gain of 15 dBi. Thus, using the $16\times 1$ array would maintain the same link signal strength as the $5\times 5$ square array but with 2 dB less Tx power and a 36\% reduction in antenna elements, which translates to a combined 4 dB reduction of EIRP. Such a reduction not only leads to lower power consumption and reduced hardware cost, but also to lower EMF radiation, which could help 5G systems meet performance expectations under RF EMF compliance limits~\cite{EMF_2019}. \subsection{Array Optimization for FWA Cell Capacity Enhancement} High path loss and large signal bandwidth (on the order of 1000 MHz) at mmWave bands lead to low to medium SNR for users in NLOS or at long distance. Since the throughput is close to linear in SNR in noise-limited systems, a modest gain in signal strength could lead to a substantial gain in throughput, especially for cell edge users. In Fig.~\ref{fig:opt_array_FWA} we plot the CDFs of the DL cell capacity (bps/Hz) for 5G FWA at 28 GHz in a suburban residential deployment scenario~\cite{5GWF-2019} where antenna arrays of 64 elements are used at lamppost-mounted access points. The detailed simulation setup can be found in~\cite{5GWF-2019}.
With 800 MHz bandwidth and 285 m inter-site distance along the same street, the system is essentially noise limited for most of the Customer Premise Equipment (CPE). The optimized array of $16{\times}4$ achieves about 2 dB gain in median DL SINR as compared to the default $8{\times}8$ square array. We map the DL SINR to DL cell capacity using the 3GPP configuration~\cite{TWC_5b}, and plot the CDFs of cell capacity in Fig.~\ref{fig:opt_array_FWA}. As compared to the default square array, the optimized array provides a 20\% increase of cell capacity at the median and a 60\% increase at the 10th percentile (i.e., cell edge). \begin{figure} \centering \includegraphics[width=0.98\figwidth]{FWA_throughput.eps} \caption{CDF of DL cell capacity (bps/Hz) for 5G FWA at 28 GHz in a suburban deployment scenario~\cite{5GWF-2019}, where the optimized array geometry of $16{\times}4$ is compared to the default $8{\times}8$ square array. } \label{fig:opt_array_FWA} \end{figure} \section{Conclusions and Discussions}\label{sec:conclusion} In this paper we address the link budget challenge of high speed wireless access at high bands by focusing on the effective beamforming gain of antenna arrays under channel angular spread. We have presented a closed form solution to match the antenna beam pattern with the channel angular spread, which can be very useful in designing deployment-specific antenna arrays for typical scenarios based on long-term historical data to improve the link budget. We have also developed a method to estimate the channel angular spread based on as few as three power measurements, which facilitates dynamic directional beam configuration on a per-transmission basis. This opens the door to a new operation regime for analog beamforming at high frequencies. Although we made a few assumptions regarding the angular-power distribution to make the analysis tractable, the feasibility and projected gains of our methods have been confirmed with impressive accuracy by our 3GPP compliant system level simulations using 3D channel models and by our lab measurements using a 16$\times$16 phased array at 28 GHz. Furthermore, our proposed use cases for deployment-specific array geometry optimization only require the average value of the RMS angular spread, which can be estimated based on historical data for each deployment scenario. Since the key ingredient of our solution is to match the beam pattern with the channel angular spread, the proposed geometry optimization and angular spread estimation methods also apply to other array types and beamforming methods, even though our description focused exclusively on beamforming over a uniform planar array. For such applications, it is the RMS beamwidths in azimuth and in elevation that should be used in the analysis rather than the array dimensions. The capability of real-time link-specific optimal beam pattern determination developed here is especially interesting for advanced beamforming techniques of phased arrays~\cite{PhasedArray2018} and novel antenna technologies using metasurfaces~\cite{MetaSurface2016}. Extension to panel-based hybrid beamforming is straightforward. Assume there are in total $N$ antenna elements evenly allocated to $M$ sub-panels, each supported by one dedicated RF chain. Each sub-panel has $N/M$ elements arranged in rectangular/square shape to form a uniform planar array, where the array geometry $(K_1, K_2)$ can be optimized as in Sec.~\ref{sec:optimization} to maximize the effective analog beamforming gain $G(K_1,K_2)$ for each sub-panel.
Assuming perfect CSI is available for digital beamforming when combining the $M$ panels via maximum ratio combining/transmission, the effective beamforming gain of the $N$-element $M$-subpanel hybrid beamforming is therefore $MG(K_1,K_2)$. \section*{Acknowledgment} The authors would like to thank Dmitry Chizhik for helpful discussions on channel angular spread, and Jakub Bartz for help during all the measurements in the laboratory. \appendices \section{Proof of Optimal Array Geometry}\label{app:Proof} Assume each antenna element has nominal beamwidth $B_{ve}$ in elevation and $B_{he}$ in azimuth, which could be measured in an anechoic chamber. They can also be derived from the nominal element gain by assuming identical beamwidths in elevation and in azimuth, i.e., \[B_{ve} = B_{he}=\sqrt{{2}/{G_e}},\] where $G_e$ is the gain of the antenna element and the last step follows from \eqref{eqn:3}. In free space or an anechoic chamber, where there is no angular spread, the analog beams formed by an antenna array of size $(K_1, K_2)$ preserve their ideal RMS beamwidths $B_{v0}$ and $B_{h0}$, \begin{align} B_{v0}= \frac{B_{ve}}{K_1}, \ B_{h0} =\frac{B_{he}}{K_2}. \label{eqn:6} \end{align} Given angular spreads $\sigma_v$ and $\sigma_h$, the effective analog beamforming gain can be determined by substituting \eqref{eqn:6} and \eqref{eqn:4} into \eqref{eqn:3}, as follows \begin{align} G&(K_1,K_2, B_{ve},B_{he},\sigma_v,\sigma_h ) = \frac{2}{B_v B_h} \nonumber\\ & =\frac{2}{\sqrt{(\frac{B_{ve}}{K_1} )^2 {+} \sigma_v^2 } \sqrt{ (\frac{B_{he}}{K_2})^2 {+} \sigma_h^2}}, \label{eqn:effective_gain}\\ & =\frac{2}{\sqrt{ \frac{B_{ve}^2 B_{he}^2}{K_1^2K_2^2} +\sigma_v^2\sigma_h^2 + \sigma_h^2\frac{B_{ve}^2}{K_1^2} +\sigma_v^2 \frac{B_{he}^2}{K_2^2} }}. \nonumber \end{align} Since $K_1K_2 \leq N$, the effective beamforming gain \eqref{eqn:effective_gain} can be rewritten as \begin{align} G & = \frac{2}{\sqrt{ \frac{B_{ve}^2 B_{he}^2}{N^2} +\sigma_v^2\sigma_h^2 + \sigma_h^2\frac{B_{ve}^2}{K_1^2} +\sigma_v^2 \frac{B_{he}^2}{K_2^2} }} \label{eqn:A2}\\ & \leq \frac{2}{\sqrt{ \frac{B_{ve}^2 B_{he}^2}{N^2} +\sigma_v^2\sigma_h^2 + 2\sigma_h\sigma_v \frac{B_{ve}B_{he} }{N} }} \label{eqn:A3}\\ & = \frac{2}{\sigma_h\sigma_v + \frac{B_{ve}B_{he} }{N}}, \label{eqn:A4} \end{align} where \eqref{eqn:A2} is by substitution of $K_1K_2{=}N$, and \eqref{eqn:A3} is from the inequality of arithmetic and geometric means (i.e., the AM-GM inequality), with equality holding, thus achieving the maximal effective gain \eqref{eqn:A4}, if and only if \begin{align} \frac{ K_1 }{K_2 } = \frac{\sigma_h B_{ve} }{\sigma_vB_{he} } = \frac{\sigma_h/B_{he}}{\sigma_v/B_{ve}}. \label{eqn:A5} \end{align} Combining \eqref{eqn:A5} with the constraint $K_1K_2{=}N$ leads to the solution presented in \eqref{eqn:opt_geometry}.
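As a quick numerical sanity check of this derivation (our sketch, not part of the original proof), one can verify that the geometry \eqref{eqn:opt_geometry} attains the bound \eqref{eqn:A4}, while perturbed geometries with $K_1K_2=N$ fall below it:
\begin{verbatim}
import numpy as np

def gain(K1, N, B_ve, B_he, s_v, s_h):
    K2 = N / K1                                  # use the full element budget
    return 2 / (np.sqrt((B_ve / K1) ** 2 + s_v ** 2)
                * np.sqrt((B_he / K2) ** 2 + s_h ** 2))

rng = np.random.default_rng(1)
N = 256
for _ in range(5):
    B_ve, B_he = rng.uniform(0.3, 1.0, size=2)   # element beamwidths (rad)
    s_v, s_h = rng.uniform(0.01, 0.5, size=2)    # angular spreads (rad)
    K1_opt = np.sqrt(N * B_ve * s_h / (B_he * s_v))
    bound = 2 / (s_h * s_v + B_ve * B_he / N)    # upper bound of Theorem 1
    assert np.isclose(gain(K1_opt, N, B_ve, B_he, s_v, s_h), bound)
    assert gain(1.5 * K1_opt, N, B_ve, B_he, s_v, s_h) < bound
\end{verbatim}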
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec-introd} Many properties of heavy nuclei can be described in terms of the statistical level density \cite{Be36,Er60,GC65,BM67,St72,LLv5,Ig83,So90,Sh92,Ra97,Ig98,AB00,AB03,EB09, Ba09,Ch10, AB16,ZS16,HJ16,KS18,ZK18,Ze19,KS20}. A well-known old example is the description of neutron resonances using the level density. Usually, the level density $\rho(E,A)$, where $E$ and $A$ are the energy and nucleon number, respectively, is given by the inverse Laplace transformation of the partition function $\mathcal{Z}(\beta,\alpha)$. Within the grand canonical ensemble the standard saddle-point method (SPM) is used for integration over all variables, including $\beta$, which is related to the total energy $E$ \cite{Er60,BM67}. This method assumes large excitation energies $U$, so that the temperature $T$ is related to a well-determined saddle point in the integration variable $\beta$ for a finite Fermi system of large particle number. However, data from many experiments for energy levels and spins also exist in regions of low excitation energy $U$, where such a saddle point does not exist. For the presentation of experimental data on nuclear spectra, the cumulative level-density distribution -- the cumulative number of quantum levels below the excitation energy $U$ -- is often conveniently used for the statistical analysis \cite{Ze96,Go11,ML18} of experimental data on collective excitations \cite{Le94,ML18,Le19,Le19a}. For calculations of this cumulative level density, one has to integrate the level density over a large interval of the excitation energy $U$. This interval extends from small values of $U$, where there is no thermodynamic equilibrium (and no meaning to the temperature), to large values of $U$, where the standard grand canonical ensemble can be successfully applied in terms of the temperature $T$ in a finite Fermi system. Therefore, to simplify the calculations of the level density, $\rho(E,A)$, we will, in the following, carry out the integration over the Lagrange multiplier $\beta$ in the inverse Laplace transformation of the partition function $\mathcal{Z}(\beta,\alpha)$ more accurately, beyond the SPM \cite{KM79,MK79,PLB}. However, for a nuclear system with large particle number $A$ one can apply the SPM for the variable $\alpha$, related to $A$. The case of neutron-proton asymmetry of the Fermi system will be worked out separately. Thus, for the remaining integration over $\beta$ we shall use, approximately, the micro-canonical ensemble, which assumes neither a temperature nor the existence of thermodynamic equilibrium. Notice that there are other methods to overcome the divergence of the full SPM in the low excitation-energy limit $U \rightarrow 0$; see Refs.~\cite{JB75,BJ76,PG07,ZS16,ZK18}. The well-known method suggested in Ref.~\cite{JB75} has been applied successfully to the partition function of the extended Thomas-Fermi (ETF) theory at finite temperature to obtain the smooth level density and free energy; see also Refs.~\cite{BJ76} and \cite{BB03}, and references therein. For the formulation of the unified microscopic canonical and macroscopic grand-canonical approximation (MMA) to the level density, we will find a simple analytical approximation for the level density $\rho$ which satisfies the two well-known limits. One of them is the Fermi-gas asymptote, $\rho \propto \exp(S)$, for large entropy $S$.
Another limit is the combinatorial expansion in powers of $S$ for a small entropy $S$, or excitation energy $U$, always at large particle numbers $A$; see Refs.\ \cite{St58,Er60,Ig72,Ig83}. The empirical formula, $\rho\propto \exp[(U-E_0)/T]$ with free parameters $E_0$, $T$, and a pre-exponential factor, was suggested for the description of the excited low energy states (LESs) in Ref.\ \cite{GC65}. Later, this formula was named the constant ``temperature'' model (CTM), where the ``temperature'' is considered an ``effective temperature'' related to the excitation energy (with no direct physical meaning of temperature for LESs); see also Refs.~\cite{ZK18,Ze19}. We will show below that the MMA has the same power expansions as the CTM for LESs at small excitation energies $U$. We will also show that, within the MMA, the transition between these two limits is sufficiently rapid, when considered over the dimensionless entropy variable $S$. Therefore, our aim is to derive, approximately, a simple statistically averaged analytical expression for the level density $\rho(S)$ with the correct two limits, mentioned above, for small and large values of $S$. Such an MMA for the level density $\rho$ was suggested in Refs.~\cite{KM79,MK79} in terms of the modified Bessel function of the entropy variable in the case of small excitation energy $U$ as compared to the rotational energy $E_{\rm rot}$. The so-called ``classical rotation'' of a spherical or axially symmetric nucleus was considered as an alignment of the individual nucleon angular momenta along the symmetry axis, on the basis of the periodic-orbit theory with fixed angular momentum and its projection (see Ref.~\cite{MK78}), in contrast to the collective rotation around a perpendicular axis \cite{MG17,GM21}. The yrast line was defined to be at zero excitation energy for a given angular momentum within the cranking model \cite{In54,RS80}. One of the important characteristics of the yrast line is the moment of inertia (MI). The Strutinsky shell-correction method (SCM) \cite{St67,BD72}, extended by Pashkevich and Frauendorf \cite{PF75} to the description of nuclear rotational bands, was applied \cite{KM79,MK79} to studying the shell effects in the MI near the yrast line. For a deeper understanding of the correspondence between the classical and the quantum approach, especially their applications to high-spin physics, it is worthwhile to analyze the shell effects in the level density $\rho$ (see Refs.~\cite{Ig83,So90}), in particular in the entropy $S$ and the MI, within the semiclassical periodic-orbit (PO) theory (POT) \cite{Gu71,BB72,Gu90,SM76,SM77,BB03,MY11,MG17,MK78,GM21}. This theory, based on the semiclassical time-dependent propagator, enables determining the total level density, energy, free energy, and grand-canonical ensemble potential in terms of the smooth ETF term and the PO shell corrections \cite{SM76,SM77,BB03,MY11,MG17,KM79,MK79}. We will extend the MMA approach \cite{KM79}, which considers the shell effects in the yrast line as a minimum of the nuclear level density (minimum excitation energy), to the description of shell and collective effects in terms of the level density itself at larger excitation energies $U$. The level density parameter $a$ is one of the key quantities under intensive experimental and theoretical investigation; see, e.g., Refs.~\cite{Be36,Er60,GC65,BM67,St72,Ig83,Sh92,So90,EB09,KS20}. Mean values of $a$ are largely proportional to the particle number $A$.
The inverse level density parameter $K=A/a$ is conveniently introduced to remove the basic mean $A$ dependence of $a$. Smooth properties of $K$ as a function of the nucleon number $A$ have been studied within the framework of the self-consistent ETF approach \cite{Sh92,KS18}. However, shell effects in the statistical level density, for instance, are still an attractive subject. This is due to the major shell effects in the distribution of single-particle (s.p.) states near the Fermi surface within the mean-field approach. The nuclear shell effects influence the statistical level density of a heavy nucleus, which is especially important near magic numbers; see Refs.\ \cite{Ig83,So90} and references therein. In the present study, for simplicity, we shall first work out the derivations of the level density $\rho(E,A)$ for a one-component nucleon system, taking into account the shell, rotational, and, qualitatively, pairing effects. This work concentrates on LESs of nuclear excitation-energy spectra below the neutron resonances. The paper is organized as follows. The level density $\rho(E,A)$ is derived within the MMA by using the POT in Sec.\ \ref{sec-levden}. We extend the MMA to large excitation energies $U$, up to about the neutron separation energy, taking the shell effects essentially into account. Several analytical approximations, in particular for the spin dependence of the level density, are presented in Sec.\ \ref{sec-MMAas}. Illustrations of the MMA for the level density $\rho(E,A)$ and the inverse level density parameter $K$ versus experimental data, discussed for typical heavy nuclei, are given in Sec.\ \ref{sec-disc}. Our conclusions are presented in Sec.\ \ref{sec-concl}. The semiclassical POT is described in Appendix \ref{appA}. The level density, $\rho(E,A,M)$, derived by accounting for the rotational excitations with fixed projection of the angular momentum $M$ and spin $I$ of nuclei in the case of spherically or axially symmetric mean fields, is given in Appendix \ref{appB}. The full SPM level density derivations, generalized to include shell effects, are described in Appendix \ref{appC}. \section{Microscopic-macroscopic approach } \l{sec-levden} For a statistical description of the level density of a nucleus in terms of the conservation variables, the total energy, $E$, and nucleon number, $A$, one can begin with the micro-canonical expression for the level density, \begin{equation}\label{dendef1} \rho(E,A)= \sum\limits_i\!\delta(E-E_i)~\delta(A-A_i) \equiv \int \frac{\d \beta \d \alpha}{(2\pi i)^2}~e^{S}, \end{equation} where $E_i$ and $A_i$ represent the system spectrum, and $S=\ln \mathcal{Z}(\beta,\alpha) +\beta E -\alpha A~$ is the entropy. Using the mean-field approximation for the partition function $\mathcal{Z}(\beta,\alpha)$, one finds \cite{BM67} \bea\l{parfunF} &\ln \mathcal{Z}= \sum\limits_{i}\ln\left[1 + \exp\left(\alpha - \beta \varepsilon_i\right)\right]\nonumber\\ &\approx \int\limits_0^{\infty}\d \varepsilon~g(\varepsilon) \ln\left[1+ \exp\left(\alpha - \beta\varepsilon\right)\right]~, \eea where $\varepsilon_i$ are the s.p. energies of the quantum states in the mean field. In the transformation from the sum to an integral, we introduced the s.p. level density $g(\varepsilon)$ as a sum of the smooth, $\tilde{g}(\varepsilon)$, and oscillating shell, $\delta g(\varepsilon)$, components, using the SCM (see Refs.~\cite{St67,BD72}): \be\l{spden} g(\varepsilon)\cong \tilde{g}(\varepsilon)+ \delta g(\varepsilon)~.
\ee Within the semiclassical POT \cite{SM76,BB03}, the smooth and oscillating parts of the s.p. level density, $g(\varepsilon)$, can be approximated, with good accuracy, by the sum of the ETF level density, $\tilde{g} \approx g^{}_{\rm ETF}$, and the semiclassical PO contribution, $\delta g(\varepsilon)\approx \delta g_{\rm scl}$, Eq.~(\ref{goscsem}). In integrating over $\alpha$ in Eq.~(\ref{dendef1}) for a given $\beta$ by the standard SPM, we use the expansion of the entropy $S(\beta,\alpha)$ near the saddle point $\alpha=\alpha^\ast$, \be\l{Sexp} S(\beta,\alpha)=S(\beta,\alpha^\ast) +\frac12 \left(\frac{\partial^2 S}{\partial \alpha^2}\right)^\ast \left(\alpha-\alpha^\ast\right)^2+\ldots~. \ee The first-order term of this expansion disappears because the Lagrange multiplier, $\alpha^\ast$, is defined by the saddle-point condition \begin{equation}\label{Seqsd} \left(\frac{\partial S}{\partial \alpha}\right)^\ast\equiv \left(\frac{\partial \ln Z}{\partial \alpha}\right)^\ast-A=0~. \end{equation} Introducing, for convenience, the potential $\Omega=-\ln\mathcal{Z}/\beta$, one can use its SCM decomposition into a smooth part and shell corrections, corresponding to those of the level density $g$, Eq.~(\ref{spden}), through the partition function $\ln \mathcal{Z}$, Eq.~(\ref{parfunF}); see Ref.~\cite{KM79}: \be\l{SCMpotF} \Omega\left(\beta,\lambda\right) \cong ~\tilde{\Omega}\left(\beta,\lambda\right) + \delta \Omega\left(\beta,\lambda\right)~. \ee Here, $\tilde{\Omega}\approx \Omega^{}_{\rm ETF}$ is the smooth ETF component \cite{KM79,KS20}, \be\l{TFpotF} \tilde{\Omega}\left(\beta,\lambda\right) =\tilde{E} -\lambda A -\frac{\pi^2}{6\beta^2}\tilde{g}(\lambda)~, \ee where $\tilde{E}\approx E^{}_{\rm ETF}$ is the nuclear ETF energy (or the liquid-drop energy). For a given $\beta$, the chemical potential, $\lambda=\alpha^\ast/\beta$, is a function of the particle number $A$, according to Eq.~(\ref{Seqsd}), and is approximately equal to the SCM smooth chemical potential, $\lambda \approx \tilde{\lambda}$. With the help of the POT \cite{SM76,SM77,BB03}, one obtains \cite{KM79} for the oscillating (shell) component, $\delta \Omega$, in Eq.~(\ref{SCMpotF}), \bea\l{potoscparF} &\delta \Omega= -\beta^{-1} \int\limits_0^\infty\d\varepsilon~ \delta g(\varepsilon)~ \ln\left\{1+\exp\left[\beta\left(\lambda- \varepsilon\right)\right]\right\}\nonumber\\ &\cong \delta \Omega_{\rm scl}\left(\beta,\lambda\right) =\delta F_{\rm scl}~. \eea For the semiclassical free-energy shell correction, $\delta F_{\rm scl}$ (see Appendix \ref{appA}), we incorporate the POT expression \be\l{FESCF} \delta F_{\rm scl} \cong \sum^{}_{\rm PO} F_{\rm PO}~, \ee where \be\l{dFESCF} F_{\rm PO}= E_{\rm PO}~ \frac{x^{}_{\rm PO}}{ \sinh\left(x^{}_{\rm PO}\right)}~,\quad x^{}_{\rm PO}= \frac{\pi t^{}_{\rm PO}}{\hbar \beta}~, \ee and \be\l{dEPO0F} E_{\rm PO}=\frac{\hbar^2}{t_{\rm PO}^2}\, g^{}_{\rm PO}(\lambda)~. \ee Here, $t^{}_{\rm PO} = k~t^{k=1}_{\rm PO}(\lambda)$ is the period of particle motion along the PO (taking into account its repetition, or period number, $k$), and $t^{k=1}_{\rm PO}$ is the period of the particle motion along the primitive ($k=1$) PO. The period $t^{}_{\rm PO}$ (and $t^{k=1}_{\rm PO}$) and the partial oscillating level density component, $g^{}_{\rm PO}$, given by Eq.~(\ref{goscPO}), are taken at the chemical potential, $\varepsilon=\lambda$; see also Eqs.~(\ref{goscsem}) and (\ref{goscPO}) for the semiclassical s.p. level-density shell correction $\delta g_{\rm scl}(\varepsilon)$ (see Refs.~\cite{SM76,BB03}).
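To get a feel for the temperature damping factor $x^{}_{\rm PO}/\sinh x^{}_{\rm PO}$ in Eq.~(\ref{dFESCF}), the following minimal Python sketch evaluates it for a primitive major-shell orbit, using the rough estimates $t^{}_{\rm PO}\approx 2\pi\hbar/D_{\rm sh}$ with $D_{\rm sh}\approx 8$~MeV (typical for heavy nuclei, but inserted here purely for illustration):
\begin{verbatim}
import numpy as np

D_sh = 8.0                        # inter-shell distance (MeV), illustrative
for T in (0.5, 1.0, 2.0, 3.0):    # temperature (MeV)
    # x_PO = pi t_PO T / hbar with t_PO = 2 pi hbar / D_sh:
    x = 2.0 * np.pi**2 * T / D_sh
    print("T = %.1f MeV:  x/sinh(x) = %.3f" % (T, x / np.sinh(x)))
\end{verbatim}
The rapid decay of this factor with temperature anticipates the exponential disappearance of the shell effects, $\propto \exp(-2\pi^2 T/D_{\rm sh})$, discussed in Sec.~\ref{subsec-LT}.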
Notice that the equivalence of the variations of the grand-canonical and canonical ensemble potentials, Eq.~(\ref{potoscparF}), is approximately valid in the corresponding variables for large particle numbers $A$. This equivalence has to hold in the semiclassical POT. Expanding, then, $x^{}_{\rm PO}/\sinh(x^{}_{\rm PO})$, Eq.~(\ref{dFESCF}), in the shell correction $\delta \Omega$ [Eqs.~(\ref{potoscparF}) and (\ref{dFESCF})] in powers of $1/\beta^2$ up to the quadratic terms, $\propto 1/\beta^2$, one obtains \be\l{OmadF} \Omega \approx E_0-\lambda A-\frac{a}{\beta^2}~. \ee Here $E_0$ is the ground-state energy, $E_0=\tilde{E}+\delta E$, and $\delta E$ is the energy shell correction of a cold nucleus, $\delta E \approx \delta E_{\rm scl}$, Eq.~(\ref{escscl}). In Eq.~(\ref{OmadF}), $a$ is the level density parameter, \be\l{denpar} a=\tilde{a}+\delta a~, \ee where $\tilde{a}\approx a^{}_{\rm ETF}$ and $\delta a$ are the ETF and shell-correction components, \be\l{daF} \tilde{a} \approx \frac{\pi^2}{6} g^{}_{\rm \tt{ETF}}(\lambda), \quad \delta a=\frac{\pi^2}{6}\delta g_{\rm scl}(\lambda)~. \ee Note that for the ETF components one commonly accounts for self-consistency using Skyrme interactions; see Refs.~\cite{BG85,BB03,AS05,KS18,KS20,PLB}. For the semiclassical POT level density, $\delta g_{\rm scl}(\lambda)$, one employs Eqs.~(\ref{goscsem}) and (\ref{goscPO}); see Refs.~\cite{BB72,SM76,SM77,BB03,MY11,MG17}. Note that in the grand canonical ensemble the level density parameter $a$, Eq.~(\ref{denpar}) with Eq.~(\ref{daF}), is a function of the chemical potential $\lambda$. Generally speaking, we may include the collective (rotational) component in $E_0$; see Sec.~\ref{subsec-I} and Appendix \ref{appB}. Substituting Eq.~(\ref{Sexp}) into Eq.~(\ref{dendef1}), and taking the Gaussian error integral over $\alpha$ with the limits extended to infinity around the saddle point $\alpha^\ast$, one obtains \bea\l{rhoE1F} &\rho(E,A) \approx \frac{1}{2\pi i~\sqrt{2\pi}} \int \d \beta~\beta^{1/2} \mathcal{J}^{-1/2}\nonumber\\ &\times \exp\left(\beta U + a/\beta\right)~, \eea where $U=E-E_0$ is the excitation energy, and $a$ is the level density parameter, given by Eqs.~(\ref{denpar}) and (\ref{daF}). In Eq.~(\ref{rhoE1F}), $\mathcal{J}$ is the one-dimensional Jacobian determinant [a $c$ number, $\mathcal{J}(\lambda)$] taken at the saddle point over $\alpha$ at $\alpha=\alpha^\ast=\lambda \beta$, Eq.~(\ref{Seqsd}): \bea\l{Jac1F} &\mathcal{J}\equiv\beta \left(\frac{\partial^2 S}{\partial \alpha^2}\right)^\ast \equiv \beta \left(\frac{\partial^2 \ln Z}{\partial \alpha^2}\right)^\ast\nonumber\\ &=-\left(\frac{\partial^2\Omega}{\partial \lambda^2}\right)^\ast \cong \tilde{\mathcal{J}}+\delta \mathcal{J}~. \eea The asterisks denote the saddle point for the integration over $\alpha$ for any $\beta$ (here and in the following we omit the superscript asterisk in $\mathcal{J}$). Differentiating the potential $\Omega$, Eq.~(\ref{SCMpotF}), with respect to $\lambda$ within the grand-canonical ensemble, we obtain for the smooth part of the Jacobian $\tilde{\mathcal{J}}= -\left(\partial^2\Omega^{}_{\rm \tt{ETF}}/\partial\lambda^2\right)^\ast \approx g^{}_{\rm \tt{ETF}}(\lambda)$.
We note that, for not too large thermal excitations, the main contribution of the oscillating potential component $\delta \Omega$ as a function of $\lambda$ comes from the differentiation of the sine function in the PO energy shell-correction factor $E_{\rm PO}$, Eq.~(\ref{dEPO0F}), through the PO action phase $\mathcal{S}_{\rm PO}(\lambda)/\hbar$ of the PO level density component $g^{}_{\rm PO}(\lambda)$, Eq.~(\ref{goscPO}). The temperatures $T=1/\beta^\ast$, when the saddle point $\beta=\beta^\ast$ exists, are assumed to be much smaller than the chemical potential $\lambda$. The reason is that for large particle numbers $A$ the large semiclassical parameter, $\sim\mathcal{S}_{\rm PO}/\hbar \sim A^{1/3}$, appears. This leads to a dominating contribution, much larger than those coming from the differentiation of the other terms, namely the $\beta$-dependent function $x^{}_{\rm PO}(\beta)$ and the PO period $t^{}_{\rm PO}(\lambda)$. Using Eqs.~(\ref{potoscparF}), (\ref{d2Edl2}), and (\ref{d2g}), one approximately obtains for the oscillating Jacobian part $\delta \mathcal{J}(\lambda)$, Eq.~(\ref{Jac1F}), the expression \be\l{dJ} \delta \mathcal{J} \approx\sum^{}_{\rm PO}g^{}_{\rm PO}\frac{x^{}_{\rm PO}}{ \sinh\left(x^{}_{\rm PO}\right)}~, \ee where $x^{}_{\rm PO}(\beta,\lambda)$ [through $t^{}_{\rm PO}(\lambda)$] is the dimensionless quantity of Eq.~(\ref{dFESCF}), proportional to $1/\beta$. The total Jacobian $\mathcal{J}(\lambda)$ as a function of $\lambda$ can be presented as \be\l{Jac2} \mathcal{J} \cong \tilde{\mathcal{J}}\left(1+\delta \mathcal{J}/\tilde{\mathcal{J}}\right) =g(\lambda)\left(1+\xi\right), \ee where $\xi(\beta,\lambda)$ is defined by [see also Eqs.~(\ref{Jac1F}) and (\ref{OmadF})] \be\l{xipar} \xi=\frac{a^{\prime\prime}(\lambda)}{\beta^2g(\lambda)}\approx \sum^{}_{\rm PO}\frac{g^{}_{\rm PO}(\lambda)}{g(\lambda)} \left(\frac{x^{}_{\rm PO}}{\sinh\left(x^{}_{\rm PO}\right)}-1\right). \ee The last approximation holds when the smooth (E)TF contribution to $a^{\prime\prime}(\lambda)$ can be neglected. Notice that the rotational excitations can be included in the ETF part and shell corrections of the potential $\Omega$; see Sec.~\ref{subsec-I} and Appendix \ref{appB}. In this case, Eq.~(\ref{Jac2}) will be similar but with more complicated expressions for the two-dimensional Jacobian $\tilde{\mathcal{J}}$, especially for its shell component $\delta \mathcal{J}$. Substituting now $\lambda$, found from Eq.~(\ref{Seqsd}) for a given particle number $A$, one can obtain relatively small thermal and shell corrections to the smooth chemical potential $\lambda(A)$ of the SCM \cite{BD72}. For simplicity, neglecting these correction terms for large particle numbers, $A^{1/3}\gg 1$, one can consider $\lambda$ as a constant related to the particle-number density of nuclear matter; see Sec.\ 2.3 of Ref.~\cite{BM67}. Therefore, $\lambda$ is independent of the particle number $A$ for large values of $A$.
\section{MMA analytical expressions} \l{sec-MMAas} In the linear approximation in $1/\beta^2$, one finds from Eq.~(\ref{xipar}) for $\xi$ and Eq.~(\ref{dFESCF}) for $x^{}_{\rm PO}$ \be\l{xiLIN} \xi =\frac{\overline{\xi}}{\beta^2}\approx -\frac{\pi^2}{6\hbar^2\beta^2} \sum^{}_{\rm PO}t^{2}_{\rm PO}\frac{g^{}_{\rm PO}(\lambda)}{g(\lambda)}~, \ee where \be\l{xib} \overline{\xi}=\frac{a^{\prime\prime}(\lambda)}{g(\lambda)}\approx -\frac{\pi^2}{6\hbar^2} \sum^{}_{\rm PO}t^{2}_{\rm PO}\frac{g^{}_{\rm PO}(\lambda)}{g(\lambda)} \approx -\frac{2\pi^4}{3 D_{\rm sh}^2} \frac{\delta g(\lambda)}{g(\lambda)}~; \ee see also Eq.~(\ref{xipar}). In Eq.~(\ref{xib}), $D_{\rm sh}\approx \lambda/A^{1/3}$ is the distance between major shells; see Eq.~(\ref{periode}). For convenience, introducing the dimensionless energy shell correction, $\mathcal{E}_{\rm sh}$, in units of the smooth ETF energy per particle, $E_{\rm \tt{ETF}}/A$, one can present Eq.~(\ref{xib}) as \be\l{xibdE} \overline{\xi} \approx \frac{4\pi^6 A^{1/3}\mathcal{E}_{\rm sh}}{3\lambda^2}~, \quad \mathcal{E}_{\rm sh}=-\frac{\delta E}{E_{\rm \tt{ETF}}}~A~. \ee In the applications below we will use $\overline{\xi}>0$ and $\mathcal{E}_{\rm sh}>0$ for $\delta E<0$. The smooth ETF energy $E_{\rm \tt{ETF}}$ in Eq.~(\ref{xibdE}) [see Eq.~(\ref{TFE0})] can be approximated as $E_{\rm \tt{ETF}}\approx \tilde{g}(\lambda)\lambda^2/2 $. The energy shell correction, $\delta E$, was approximated, for a major shell structure, with the semiclassical POT accuracy [see Eqs.~(\ref{escscl}) and (\ref{dEPO0F}), and Refs.~\cite{SM76,SM77,BB03,MY11}] by \be\l{dedg} \delta E \approx \delta E_{\rm scl}\approx \left(\frac{D_{\rm sh}}{2 \pi}\right)^2~ \delta g^{}_{\rm scl}(\lambda)~. \ee The correction $\propto 1/\beta^4$ of the expansion of the Jacobian (\ref{Jac2}) in $1/\beta$ through the oscillating part $\delta \mathcal{J}$, Eq.~(\ref{dJ}), is relatively small for $\beta$ which, at the saddle-point value $T=1/\beta^\ast$, is related to the chemical potential $\lambda$ by $T \ll \lambda$. The higher-order term, $\propto 1/\beta^4$, of this expansion can be neglected under the following condition: \be\l{condU} \frac{1}{\tilde{g}}\siml U\ll \sqrt{\frac{90}{7}}\frac{A^{1/3}\lambda^2}{2\pi^4 K}~. \ee Using typical values of the parameters, $\lambda=40$ MeV, $A=200$, $K\approx 10$ MeV, and $1/\tilde{g}\approx 0.1-0.2$ MeV (see Ref.~\cite{KS18}), we may evaluate the right-hand side of Eq.~(\ref{condU}) as approximately 20 MeV. For simplicity, small shell and temperature corrections to $\lambda(A)$ from the conservation equation (\ref{Seqsd}) are neglected by using linear shell effects of the leading order \cite{BD72} and a constant particle-number density of nuclear matter, $\rho_0^{}$. Taking $\rho^{}_0=2k_F^{3}/3\pi^2=0.16$ fm$^{-3}$, one finds an approximately constant $\lambda=\hbar^2k^{2}_F/2\mu\approx 40$ MeV, where $\mu$ is the nucleon mass. In the derivation of the condition (\ref{condU}), we used the POT distance between major shells, $D_{\rm sh}$, Eq.~(\ref{periode}). Evaluating the upper limit for the excitation energy at the saddle point $\beta=\beta^\ast=1/T$ is justified because this upper limit is always so large that the saddle point certainly exists there. Therefore, for consistency, one can neglect the quadratic, $1/\beta^2$ (temperature $T^2$), corrections to the Fermi energy $\varepsilon^{}_{F}$ in the chemical potential, $\lambda\approx \varepsilon^{}_{F}$, for large particle numbers $A$.
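Both numerical estimates are easy to reproduce; a minimal Python sketch follows (the values of $\hbar c$ and the nucleon rest energy are inserted by hand, and the bound is evaluated at the quoted $\lambda=40$~MeV):
\begin{verbatim}
import numpy as np

hbarc, mu_c2 = 197.327, 938.9   # MeV fm, MeV (nucleon rest energy)

# lambda from rho_0 = 2 k_F^3 / (3 pi^2) = 0.16 fm^-3:
k_F = (3.0 * np.pi**2 * 0.16 / 2.0)**(1.0 / 3.0)   # ~1.33 fm^-1
lam = (hbarc * k_F)**2 / (2.0 * mu_c2)             # ~37 MeV, i.e. ~40 MeV
print("lambda ~ %.1f MeV" % lam)

# Upper bound of Eq. (condU) for lambda = 40 MeV, A = 200, K = 10 MeV:
A, K = 200, 10.0
U_max = np.sqrt(90.0 / 7.0) * A**(1 / 3) * 40.0**2 / (2.0 * np.pi**4 * K)
print("U_max ~ %.0f MeV" % U_max)                  # ~20 MeV
\end{verbatim}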
Under the condition of Eq.~(\ref{condU}), one can obtain simple analytical expressions for the level density $\rho(E,A)$ from the integral representation (\ref{rhoE1F}), because the Jacobian factor $\mathcal{J}^{-1/2}$ in its integrand can be simplified by expanding in small values of $\xi$ or of $1/\xi$ [see Eq.~(\ref{xiLIN})]. Notice that one has two terms in the Jacobian $\mathcal{J}$, Eq.~(\ref{Jac2}). One of them is independent of the integration variable $\beta$, and the other is proportional to $1/\beta^2$. These two terms are connected to those of the potential $\Omega$, Eq.~(\ref{OmadF}), by the inverse Laplace transformation (\ref{dendef1}) of the partition function (\ref{parfunF}) and the corresponding direct transformation. Expanding the square root $\mathcal{J}^{-1/2}$ in the integrand of the integral representation (\ref{rhoE1F}), for small and large $\xi$ at linear order in $\xi$ and $1/\xi$, respectively, one arrives at two different approximations, marked below as cases (i) and (ii), respectively. At each finite order of these expansions, one can take the inverse Laplace transformation exactly. Convergence of the corresponding corrections to the level density, Eq.~(\ref{rhoE1F}), after applying the inverse transformation, Eq.~(\ref{Laplace}), will be considered in the next subsections. \vspace{0.2cm} \begin{figure*} \includegraphics[width=11.5cm]{Fig1-prc-rev.eps} \vspace{-0.2cm} \caption{{\small MMA level density $\rho$ [Eq.~(\ref{denbes}), in units of MeV$^{-1}$] as a function of the excitation energy $U$ (in units of MeV) at the inverse level density parameter $K=10$ MeV (a,b) and $20$ MeV (c,d), for the relative energy shell corrections $\mathcal{E}_{\rm sh}=1.7$ (a,c) and $0.6$ (b,d). The black solid ($n=0$) and dotted ($n=1$) lines are MMA2, without [Eq.~(\ref{rho52})] and with [Eq.~(\ref{rhoGEN52})] the second term, respectively. The magenta dashed line ($n=2$) [numerical, Eq.~(\ref{rhoE1F})], with the next leading correction term, shows good convergence to the MMA2 results owing to the expansion of the Jacobian factor, $\mathcal{J}^{-1/2}$ [see Eq.~(\ref{Jac2}) for the Jacobian $\mathcal{J}$], in the integrand of Eq.~(\ref{rhoE1F}), over $1/\xi$ (see text). The heavy dashed red line ($n=0$), blue dotted line ($n=1$), and dashed cyan line ($n=2$) [see Eqs.~(\ref{rho32}) (MMA1), (\ref{rhoGEN32}), and (\ref{rhoE1F}), respectively] show the convergence to the MMA1 results due to the expansion of this Jacobian $\mathcal{J}$ over $\xi$. The particle number $A=200$ was used. }} \label{fig1} \end{figure*} \subsection{(i) Small shell effects} \l{subsec-32} Using Eq.~(\ref{Jac2}), one can write for small $\xi$, Eq.~(\ref{xiLIN}), \be\l{Jac3} \frac{1}{\mathcal{J}^{1/2}}=\frac{1}{\sqrt{g(\lambda)\left(1+\xi\right)}}\approx \frac{1}{\sqrt{g(\lambda)}}\left(1-\frac{\overline{\xi}}{2\beta^2}\right)~. \ee Substituting this expression for the Jacobian factor, $\mathcal{J}^{-1/2}$, into Eq.~(\ref{rhoE1F}), one obtains two terms, related to those of the last equality in (\ref{Jac3}). By transforming the integration variable $\beta$ to $\tau=1/\beta$ in the first term and using $\beta$ directly as the integration variable in the second term, both are reduced to the analytical inverse-Laplace form (\ref{Laplace}) for the transformation from the $\tau$ to the $a$ variable \cite{AS64}.
Thus, one can approximately represent the level density $\rho(E,A)$ as a superposition of two Bessel functions of orders 3/2 and 1/2, \bea\l{rhoGEN32} &\rho(E,A)\approx \overline{\rho}_{3/2}\left(S^{-3/2}I_{3/2}(S)- r^{}_1 S^{-1/2}I_{1/2}(S)\right)\nonumber\\ &\mbox{with}\quad\overline{\rho}_{3/2}=a\sqrt{\frac{2\pi}{3}}~. \eea Here \be\l{r1} r^{}_1=\frac{\overline{\xi}U^{1/2}}{4 a^{3/2}}\approx \frac{\pi^6K^{3/2}U^{1/2}}{3\lambda^2A^{7/6}}~\mathcal{E}_{\rm sh}~, \ee where $\overline{\xi}$ is given in Eq.~(\ref{xib}), $K=A/a$, $a$ is the level density parameter, Eq.~(\ref{denpar}), and \be\l{entrFG} S=2\sqrt{a U}~. \ee This expression is associated with an entropy in the mean-field approximation because of its two clear asymptotic limits for large and small excitation energies $U$ [both asymptotic limits in terms of the level density, $\rho(E,A)$, will be discussed below]. The relative contribution of the second term in Eq.~(\ref{rhoGEN32}) decreases with the shell effects, $\mathcal{E}_{\rm sh}$, the inverse level density parameter, $K$, and the excitation energy, $U$. In case (i), referred to below as the MMA1 approach, up to these small corrections (small $r^{}_1$), one arrives approximately at expression (11) of Ref.\ \cite{PLB}: \be\l{rho32} \rho(E,A) \approx \overline{\rho}_{3/2}~S^{-3/2}I_{3/2}(S)~,\qquad \mbox{(i)}~. \ee \subsection{(ii) Dominating shell effects} \l{subsec-52} In this case, expanding the Jacobian factor $\mathcal{J}^{-1/2}$, see Eq.~(\ref{Jac3}), in small $1/\xi$, one finds \be\l{Jac4} \frac{1}{\mathcal{J}^{1/2}}\approx\frac{1}{\sqrt{g(\lambda)\xi}} \left(1-\frac{1}{2\xi}\right)~, \ee where $\xi>0$, Eq.~(\ref{xiLIN}) (for $\delta E<0$). Substituting this approximate expression for the Jacobian factor into Eq.~(\ref{rhoE1F}) and transforming the integration variable $\beta$ to $\tau=1/\beta$ in the integral representation for the level density $\rho(E,A)$, we obtain, by using the inverse Laplace transformation (\ref{Laplace}) from the $\tau$ to the $a$ variable, \bea &\rho(E,A)\approx \overline{\rho}_{5/2} \left(S^{-5/2}I_{5/2}(S) + r^{}_2 S^{-9/2}I_{9/2}(S)\right),\l{rhoGEN52}\\ &\mbox{with} \quad\overline{\rho}_{5/2}= 4a^2\left(\pi/6\overline{\xi}\right)^{1/2}~\l{rhobar52}, \eea where $\overline{\xi}$ is given by Eqs.~(\ref{xib}) and (\ref{xibdE}), and \be\l{r2} r_{2}=\frac{2a^2}{\overline{\xi}} \approx \frac{3 \lambda^2A^{5/3}}{2\pi^6K^2\mathcal{E}_{\rm sh}}~. \ee In contrast to case (i), the relative contribution of the second term on the right-hand side of Eq.~(\ref{rhoGEN52}) [case (ii)] has the opposite behavior in the values of the parameters $\mathcal{E}_{\rm sh}$ and $K$, and is almost independent of $U$. Up to the small contribution of this second term, one arrives approximately at \be\l{rho52} \rho(E,A)\approx \overline{\rho}_{5/2}~S^{-5/2}I_{5/2}(S)~, \qquad \mbox{(ii)}~, \ee where $\overline{\rho}_{5/2}$ is given by Eq.~(\ref{rhobar52}). This approximation is referred to below as the MMA2 approach. Figure \ref{fig1} shows good convergence of the different approximations to their main term ($n=0$) for $\rho(E,A)$. Here we accounted for the first ($n=1$) analytical and second ($n=2$) numerical corrections in the expansion of the Jacobian factor $\mathcal{J}^{-1/2}$ [see Eq.~(\ref{Jac2}) for the Jacobian $\mathcal{J}$] over $1/\xi$ (MMA2) and over $\xi$ (MMA1), as functions of the excitation energy $U$.
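Both closed forms are elementary to evaluate; the following minimal Python sketch (with purely illustrative parameter values) computes the Bessel factor $f_\nu(S)=S^{-\nu}I_\nu(S)$ entering Eqs.~(\ref{rho32}) and (\ref{rho52}) and shows, in particular, that it remains finite at $U\rightarrow 0$, where $f_\nu\rightarrow 2^{-\nu}/\Gamma(\nu+1)$:
\begin{verbatim}
import numpy as np
from scipy.special import iv, gamma

def f_nu(S, nu):
    # f_nu(S) = S^{-nu} I_nu(S); the finite S -> 0 limit
    # 2^{-nu} / Gamma(nu + 1) replaces the numerical 0/0 at S = 0.
    S = np.asarray(S, dtype=float)
    out = np.full(S.shape, 2.0**(-nu) / gamma(nu + 1.0))
    ok = S > 1e-8
    out[ok] = iv(nu, S[ok]) / S[ok]**nu
    return out

a = 20.0                               # a = A/K for A = 200, K = 10 MeV
U = np.array([0.0, 0.5, 2.0, 8.0])     # excitation energy (MeV)
S = 2.0 * np.sqrt(a * U)               # entropy, Eq. (entrFG)
print("MMA1, nu=3/2:", f_nu(S, 1.5))   # finite at U = 0
print("MMA2, nu=5/2:", f_nu(S, 2.5))
\end{verbatim}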
The calculations in Fig.~\ref{fig1} are carried out for typical values of the parameters: the inverse level density parameter $K$, the relative energy shell correction $\mathcal{E}_{\rm sh}$, and a large particle number $A$. The results of the analytical MMA1 approach, Eq.~(\ref{rhoGEN32}), and of MMA2, Eq.~(\ref{rhoGEN52}), with the first correction terms, are compared with those of Eqs.~(\ref{rho32}) and (\ref{rho52}) without the first correction terms, respectively, using different values of these parameters. The contributions of these corrections to the simplest analytical expressions, Eqs.~(\ref{rho32}) and (\ref{rho52}), are smaller the smaller the excitation energy $U$ for MMA1, and the larger $U$ for MMA2, such that a transition between the approaches, Eqs.~(\ref{rhoGEN32}) and (\ref{rhoGEN52}), takes place with increasing $U$; see Fig.~\ref{fig1}. We also demonstrate good convergence to the leading terms ($n=0$) by taking into account numerically the next-order ($n=2$ in this figure) corrections in the direct calculations of the integral representation (\ref{rhoE1F}). For MMA1, this convergence improves for smaller $U$, with increasing inverse level density parameter $K$ and decreasing relative energy shell correction $\mathcal{E}_{\rm sh}$; the opposite behavior takes place for the MMA2 approach. In particular, good convergence with increasing excitation energy $U$ is clearly seen for $n=1$ and $2$ for MMA1 in panels (a) and (c); see, e.g., panel (c) for larger values of both $K$ and $\mathcal{E}_{\rm sh}$. Notice that, for case (ii), when the shell effects are dominating, the derivatives are relatively large, $a^{\prime\prime}(\lambda)\lambda^2/a \gg 1$, but at the same time the shell corrections, $\mathcal{E}_{\rm sh}$, can be small. In this case, referred to below as the MMA2b approach, we have for the coefficient $\overline{\rho}_{5/2}$ \be\l{rhobar52TF} \overline{\rho}_{5/2}\approx 2\sqrt{2/\pi}\lambda a^2. \ee Here, in the calculation of $\overline{\rho}_{5/2}$ given by Eq.~(\ref{rhobar52}), we used the TF evaluation of the level density, $\tilde{g}\propto A/\lambda$, and of its derivatives with respect to $\lambda$ in the first equation of (\ref{xib}) for $\overline{\xi}$. \subsection{Disappearance of shell effects with temperature} \l{subsec-LT} As is well known (see, for instance, Refs.~\cite{BB03,SM76,KM79,MG17}), with increasing temperature $T$, the shell component $\delta \Omega$, Eq.~(\ref{potoscparF}), disappears exponentially as $\exp(-2\pi^2T/D_{\rm sh})$ in the potential $\Omega$ or the free energy $F$; see also Eqs.~(\ref{FESCF}) and (\ref{dFESCF}). This occurs at temperatures $T\approx D_{\rm sh}/\pi=2-3$ MeV ($D_{\rm sh}=\lambda/A^{1/3}=7-10$ MeV in heavy nuclei, $A\approx 100-200$). For such large temperatures, with excitation energies $U$ near or above the neutron resonance energies, one can approximate the Jacobian factor $\mathcal{J}^{-1/2}$ in Eq.~(\ref{rhoE1F}) as \be\l{Jac5} \mathcal{J}^{-1/2}\approx \tilde{\mathcal{J}}^{-1/2}\left(1-\delta \mathcal{J}/(2\tilde{\mathcal{J}}) \right)~, \ee where $\tilde{\mathcal{J}}\approx \tilde{g}$, and \be\l{dJac5} \delta \mathcal{J}\approx 2 \sum^{}_{\rm PO}g^{}_{\rm PO}x^{}_{\rm PO}~\exp\left(-x^{}_{\rm PO}\right), \ee with $x^{}_{\rm PO}=\pi t^{}_{\rm PO}/\hbar \beta$, Eq.~(\ref{dFESCF}). With this approximation, transforming the integration variable $\beta$ to $\tau=1/\beta$ in Eq.~(\ref{rhoE1F}), one can take the inverse Laplace integral [Eq.~(\ref{Laplace})] for the level density analytically.
Finally, one obtains $\rho=\tilde{\rho}+\delta \rho$, where \bea\l{rhoT} &\delta \rho(E,A)=\sqrt{\frac{\pi}{2 \tilde{g}^3}} \sum_{\rm PO}^{} g^{}_{\rm PO}\frac{t^{}_{\rm PO}}{\hbar} \int \frac{{\rm d} \tau}{2\pi i\tau^{3/2}}\exp\left(a_{\rm sh} \tau + \frac{U}{\tau}\right)\nonumber\\ &=\!\!\sqrt{\frac{\pi}{2 \tilde{g}^3}} \sum^{}_{\rm PO}g^{}_{\rm PO}\frac{t^{}_{\rm PO}}{\hbar}\left(4 a~a^{}_{\rm sh}\right)^{1/4} S_{\rm sh}^{-1/2}I_{1/2}\left(S_{\rm sh}\right). \eea Here, $a^{}_{\rm sh}=\tilde{a}-\pi t^{}_{\rm PO}/\hbar$ is the level density parameter shifted by the shell effects, and $S_{\rm sh}=2\sqrt{a^{}_{\rm sh}U}$ is the shifted entropy. For a major shell structure, one arrives at \bea\l{rhoTms} &\delta \rho(E,A)\approx \sqrt{\frac{\pi}{2 \tilde{g}^3}}\frac{2\pi}{D_{\rm sh}}\left(4 a~a^{}_{\rm sh}\right)^{1/4} \delta g(\lambda) ~S_{\rm sh}^{-1/2}I_{1/2}\left(S_{\rm sh}\right)\nonumber\\ &\approx \sqrt{\frac{\pi}{2 \tilde{g}^3}}\left(\frac{2\pi}{D_{\rm sh}}\right)^3 \left(4 a~a^{}_{\rm sh}\right)^{1/4} \delta E~ S_{\rm sh}^{-1/2}I_{1/2}\left(S_{\rm sh}\right) \eea [see Eq.~(\ref{dedg})], and \be\l{ash} a^{}_{\rm sh}\approx \tilde{a}-\frac{2\pi^2}{D_{\rm sh}}~. \ee Hence, the shifted inverse level-density parameter is $K=A/a^{}_{\rm sh}~=~ \tilde{K}\left(1+\Delta K/\tilde{K} \right)$, where the dimensionless shift is given by \be\l{Ksh} \frac{\Delta K}{\tilde{K}} \approx \frac{2\pi^2\tilde{K}}{A D_{\rm sh}}\approx\frac{2\pi^2\tilde{K}}{\lambda A^{2/3}}~. \ee This corresponds approximately to $\Delta K \approx 1-2$ MeV for $\tilde{K}=10$ MeV (see Refs.~\cite{SN90,SN91,KS18,KS20}) at the typical parameters $\lambda=40$ MeV and $A=100-200$ ($\Delta K \approx 6-9$ MeV for $\tilde{K}=20$ MeV). We note that an important shift in the inverse level density parameter $K$ for doubly magic nuclei near the neutron resonances is due to a strong shell effect. \subsection{General MMA} \l{subsec-GMMA} All final results for the level density $\rho(E,A)$ discussed in the previous subsections of this section can be approximately summarized as \begin{equation}\label{denbes} \rho \approx \rho^{}_{\tt{MMA}}(S) =\overline{\rho}_\nu~f_\nu(S)~,~~~f_\nu(S)= S^{-\nu}I_{\nu}(S)~, \end{equation} with the corresponding expressions for the coefficient $\overline{\rho}_\nu$ (see above). For large entropy $S$, one finds \begin{equation} f_\nu(S) =\frac{\exp(S)}{S^{\nu}\sqrt{2\pi S}}\left[1+\frac{1-4\nu^2}{8S} +\mbox{O}\left(\frac{1}{S^2}\right)\right]. \label{rhoasgen} \end{equation} At small entropy, $S \ll 1$, one also obtains from Eq.~(\ref{denbes}) the finite combinatorial power expansion \cite{St58,Er60,Ig72,Ig83} \begin{equation} f_\nu(S)= \frac{2^{-\nu}}{\Gamma(\nu+1)}\left[1+\frac{S^2}{4(\nu+1)}+ \mbox{O}\left(S^4\right)\right], \label{den0gen} \end{equation} where $\Gamma(x)$ is the gamma function. This expansion over powers of $S^2 \propto U$ is the same as that of the ``constant temperature model'' (CTM) \cite{GC65,ZK18,Ze19}, often used for level density calculations at small excitation energies $U$, but here it is obtained without free parameters. In order to clarify Eq.~(\ref{rhoasgen}) for the MMA level density at large entropy, one can directly obtain a more general full SPM asymptote, including the shell effects, by taking the integral over $\beta$ in Eq.~(\ref{rhoE1F}) using the SPM (see Appendix \ref{appC}).
We have \vspace{0.2cm} \begin{figure*} \includegraphics[width=11.5cm]{Fig2-prc-rev.eps} \vspace{-0.2cm} \caption{{\small Level density $\rho$ [Eq.~(\ref{denbes})], in units of $\overline{\rho}_\nu$, with the accurate result ``1'' (solid line), Eq.~(\ref{rho32}), for $\nu=3/2$ [MMA1 (i)] in $(a)$, and Eq.~(\ref{rho52}), for $\nu=5/2$ [MMA2 (ii)] in $(b)$, shown as functions of the entropy $S$ for different approximations: (1) $S \ll 1$ (dashed lines), Eq.~(\ref{den0gen}) at second order, and (2) $S \gg 1$ (dotted and thin solid lines), Eq.~(\ref{rhoasgen}); ``3'' is the main term of the expansion in powers of $1/S$, and ``4'' is the expansion over $1/S$ up to the first- [in $(a)$] and second- [in $(b)$] order terms in the square brackets of Eq.\ (\ref{rhoasgen}), respectively.}} \label{fig2} \end{figure*} \be\l{SPMgen} \rho(E,A)= \frac{\exp(2\sqrt{aU})}{\sqrt{48}~U~\sqrt{1+\xi^\ast}}~, \ee where $\xi^\ast$ is $\xi$ of Eq.~(\ref{xipar}) at the saddle point $\beta=\beta^\ast$, which is, in turn, determined by Eq.~(\ref{spmcondbeta}): \be\l{par} \xi^\ast\approx -\frac{\pi^2T^2}{6\hbar^2} \sum^{}_{\rm PO}t^{2}_{\rm PO}\frac{g^{}_{\rm PO}(\lambda)}{g(\lambda)} \approx \frac{4\pi^6 U K\mathcal{E}_{\rm sh}}{3 \lambda^2 A^{2/3}}~. \ee We took the factor $\mathcal{J}^{-1/2}$, obtained from the Jacobian $\mathcal{J}$ of Eq.~(\ref{Jac2}), out of the integral (\ref{rhoE1F}) at $\beta=\beta^\ast=1/T$. The Jacobian ratio $\xi^\ast=\delta \mathcal{J}/\tilde{\mathcal{J}}$ at the saddle point, $\beta=\beta^\ast$ ($\lambda=\lambda^\ast=\alpha^\ast T$ is the standard chemical potential of the grand-canonical ensemble), Eq.~(\ref{par}), is the critical quantity of these derivations. The quantity $\xi^\ast$ is approximately proportional to the semiclassical POT energy shell correction, $\delta E$, Eq.~(\ref{dedg}), through $\mathcal{E}_{\rm sh}$, Eq.~(\ref{xibdE}), to the excitation energy, $U=a T^2$, and to the square of the small semiclassical parameter $A^{-1/3}$ for heavy nuclei (see Ref.~\cite{PLB} and Appendix \ref{appA}). For typical values of the parameters, $\lambda=40$ MeV, $A\approx 200$, and $\mathcal{E}_{\rm sh}=|\delta E~A/E_{\rm \tt{ETF}}|\approx 2.0$ \cite{BD72,MSIS12}, one finds the approximate values $\xi^\ast\approx 0.1 - 10$ for temperatures $T \approx 0.1-1$ MeV. This corresponds approximately to the rather wide range of excitation energies $U=0.2-20$~MeV for $K=10$~MeV \cite{KS18} (and $U=0.1-10$~MeV for $K=20$~MeV). These values of $U$ span the range from the energies of the low energy states to energies significantly above the neutron resonances. In line with the SCM \cite{BD72} and ETF approaches \cite{BB03}, these values are given by the realistic smooth energy $E_{\rm \tt{ETF}}$, for which the binding energy approximately equals $E_{\rm \tt{ETF}}+ \delta E$ \cite{MSIS12}. Accounting for the shell effects, Eq.~(\ref{SPMgen}) is a more general large-excitation-energy asymptote than the well-known Bethe expression \cite{Be36} \be\l{ldBethe} \rho(E,A) = \frac{\exp\left(S\right)}{\sqrt{48}U}~, \ee where such effects were neglected; see also Refs.~\cite{Er60,GC65,BM67}. This expression can alternatively be obtained as the limit of Eq.~(\ref{SPMgen}) at large excitation energy, $U \rightarrow \infty$, up to shell effects [small $\xi^\ast$ of case (i)]. This asymptotic result is the same as that of expression (\ref{rho32}), proportional to the Bessel function $I_\nu$ of order $\nu=3/2$ [case (i)], at the main, zero-order expansion in $1/S$; see Eq.~(\ref{rhoasgen}).
For the large-entropy asymptote, we also find that the Bessel solution (\ref{rho52}) with $\nu=5/2$ in case (ii) ($\xi^\ast \gg 1$), at zero order of the expansion in $1/S$, coincides with the general asymptote (\ref{SPMgen}). The asymptotic expressions, Eqs.~(\ref{rhoasgen}), (\ref{SPMgen}), and, in particular, (\ref{ldBethe}), for the level density are obviously divergent at $U\rightarrow 0$, in contrast to the finite MMA limit (\ref{den0gen}) for the level density; see Eq.~(\ref{denbes}) and, e.g., Eqs.~(\ref{rho32}) and (\ref{rho52}). Our MMA results will also be compared with the popular Fermi gas (FG) approximation to the level density $\rho(E,N,Z)$ as a function of the neutron $N$ and proton $Z$ numbers near the $\beta$ stability line, $(N-Z)^2/A^2\ll 1$ \cite{Er60,GC65,EB09}: \begin{equation} \rho(E,N,Z)= \frac{\sqrt{\pi}}{12 a^{1/4} U^{5/4}}~ \exp\left(2\sqrt{a U}\right)~. \label{intFGSS} \end{equation} Notice that in all our calculations of the statistical level density, $\rho(E,A)$ [also $\rho(E,N,Z)$, Eq.~(\ref{intFGSS})], we did not use the popular assumption of small spins at large excitation energy, which is valid for the neutron resonances. For typical values of spin $I \simg 10$, moment of inertia $\Theta \approx \Theta_{\rm \tt{TF}} \approx 2\mu R^2 A/5$, Eq.~(\ref{rigMIpar}), radius $R=r^{}_0A^{1/3}$, with $r_0=1.14 $ fm, and particle number $A \siml 200$, one finds that, for large entropy, the applicability condition (\ref{condI2}) is, strictly speaking, not valid. In these estimates, the corresponding excitation energies $U$ of LESs are essentially smaller than the neutron resonance energies. However, near the neutron resonances the excitation energies $U$ are large, the spins are small, and Eq.~(\ref{intFGSS}) is well justified. We should also emphasize that the MMA1 approximation for the level density, $\rho(E,A)$, Eq.~(\ref{rho32}), and the Fermi gas approximation, Eq.~(\ref{ldBethe}), can also be applied at large excitation energies $U$, with respect to the collective rotational excitations, if one can neglect the shell effects, $\xi^{\ast}\ll 1$. Thus, with increasing temperature $T \simg 1$ MeV (if it exists), or excitation energy $U$, where the shell effects are still significant, one first obtains the asymptotic expression (\ref{SPMgen}) at $\xi^\ast\gg 1$, i.e., the asymptote of Eq.~(\ref{rho52}). Then, with a further increase of the temperature to about 2-3 MeV and the disappearance of the shell effects (Sec.~\ref{subsec-LT}), one gets the transition to the Bethe formula, i.e., the large-entropy asymptote (\ref{ldBethe}) of Eq.~(\ref{rho32}). \vspace{0.2cm} \begin{figure*} \includegraphics[width=11.5cm]{Fig3-prc-rev.eps} \vspace{-0.2cm} \caption{{\small MMA level density $\rho$ [Eq.~(\ref{denbes})] in units of MeV$^{-1}$ as a function of the entropy $S$ (a) and of the excitation energy $U$, in units of MeV (b). The black solid and dotted lines are the MMA2 approach, Eq.~(\ref{rho52}), for $\mathcal{E}_{\rm sh}=2.0$ and $0.002$, respectively. The green dashed and blue dotted lines are the general Fermi gas (GFG) approach, Eq.~(\ref{SPMgen}), for the same values of $\mathcal{E}_{\rm sh}$, respectively. The red dashed line is MMA1, Eq.~(\ref{rho32}); in (b), $K=10$ MeV, of the order of the ETF value of Ref.~\cite{KS18}. }} \label{fig3} \end{figure*} In Fig.~\ref{fig2} we show the level density dependence $\rho(S)$, Eq.~(\ref{denbes}), for $\nu=3/2$ in $(a)$ and $\nu=5/2$ in $(b)$, on the entropy variable $S$, together with the corresponding asymptotes.
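The two limits (\ref{den0gen}) and (\ref{rhoasgen}) are also easy to check directly; a minimal Python sketch for the case $\nu=5/2$ (the sampled $S$ values are arbitrary) compares them with the exact Bessel factor:
\begin{verbatim}
import numpy as np
from scipy.special import iv, gamma

nu = 2.5                               # MMA2, case (ii); use 1.5 for MMA1
S = np.array([0.3, 1.0, 3.0, 10.0])
exact = iv(nu, S) / S**nu
# small-S combinatorial expansion, Eq. (den0gen), up to S^2:
small = 2.0**(-nu) / gamma(nu + 1.0) * (1.0 + S**2 / (4.0 * (nu + 1.0)))
# large-S asymptote, Eq. (rhoasgen), with the first 1/S correction:
large = (np.exp(S) / (S**nu * np.sqrt(2.0 * np.pi * S))
         * (1.0 + (1.0 - 4.0 * nu**2) / (8.0 * S)))
for row in zip(S, exact, small, large):
    print("S=%5.1f  exact=%10.4g  small-S=%10.4g  large-S=%10.4g" % row)
\end{verbatim}
The small-$S$ series is accurate well below $S\sim 1$, while the $1/S$-corrected asymptote approaches the exact factor only slowly, in line with the slow convergence of the lines ``4'' in Fig.~\ref{fig2}.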
In Fig.~\ref{fig2}, the small- [$S\ll 1$, Eq.~(\ref{den0gen})] and large- [$S\gg 1$, Eq.~(\ref{rhoasgen})] entropy behaviors are presented. For the small-$S$ expansion we take into account the quadratic approximation ``2'', where $S^2 \propto U$, which is the same as the linear expansion within the CTM \cite{GC65,ZK18}. For large $S\gg 1$ we neglected the corrections of the inverse-power entropy expansion of the pre-exponential factor in the square brackets of Eq.\ (\ref{rhoasgen}), lines ``3'', and took into account the corrections of first [$\nu=3/2$, $(a)$] and up to second [$\nu=5/2$, $(b)$] order in $1/S$ (thin solid lines ``4'') to show their slow convergence to the accurate MMA result ``1'' of Eq.~(\ref{denbes}). It is interesting that the results of the simplest SPM asymptotic approximation at large $S$, $\rho \propto \exp(S)/S^{\nu+1/2}$ (dotted lines ``3''), exhibit an almost constant shift with respect to the accurate MMA results of Eq.~(\ref{denbes}) (solid lines ``1''). This may clarify one of the phenomenological models, e.g., the back-shifted Fermi-gas (BSFG) model for the level density \cite{DS73,So90,EB09}. Figure \ref{fig3} shows the shell effects in the main approximations derived in this section, Eqs.~(\ref{rho32}), (\ref{rho52}), and (\ref{SPMgen}), for two essentially different values of $\mathcal{E}_{\rm sh}$, a finite value of $2.0$ and a much smaller one of $0.002$, between which one basically finds the values given in Ref.~\cite{MSIS12}. For convenience, we show these results as functions of the entropy $S$ in panel (a) and of the excitation energy $U$ in panel (b), taking the value of the averaged inverse level density parameter $K$ found in Ref.~\cite{KS18}; see also Ref.~\cite{KS20}. As expected, the shell effect is very strong for the MMA2 approach, as can be seen from the difference between the solid and dotted black lines\footnote{The dotted black line is very close to the explicit analytical limit (\ref{rhobar52TF}) of $\overline{\rho}_{5/2}$, Eq.~(\ref{rhobar52}), for the MMA2 equation (\ref{rho52}).}, which depend on the second derivatives of strongly oscillating functions of $\lambda$, $a''(\lambda)\approx \delta a''\propto \delta g''(\lambda) $ [see Appendix \ref{appA} around Eq.~(\ref{d2g}) and Sec.~\ref{sec-MMAas} below Eq.~(\ref{dedg})]. This is not the case for the full SPM asymptotic GFG, Eq.~(\ref{SPMgen}), for which this effect is very small. As seen from this figure, MMA1, Eq.~(\ref{rho32}), independently of $\mathcal{E}_{\rm sh}$, converges rapidly to the GFG with increasing excitation energy $U$, as well as to the Bethe formula (\ref{ldBethe}). They all coincide at small values of $U$, about 0.5 MeV, particularly for $\mathcal{E}_{\rm sh}=0.002$. The Bethe approach is everywhere very close to the GFG line at $\mathcal{E}_{\rm sh}=0.002$ and is therefore not shown in this figure. Notice also that MMA2 at this small $\mathcal{E}_{\rm sh}$ is close to MMA1 everywhere. Again, one can see that MMA1 and MMA2 have no divergence in the zero excitation energy limit, $U \rightarrow 0$, while the full SPM asymptotic GFG, Eq.~(\ref{SPMgen}), and, in particular, the Bethe approach, Eq.~(\ref{ldBethe}), both diverge at $U \rightarrow 0$. \subsection{The spin-dependent level density} \l{subsec-I} Assuming that there are no external forces acting on an axially symmetric nuclear system, the total angular momentum $I$ and its projection $M$ on a space-fixed axis are conserved, and states with a given energy $E$ and spin $I$ are $(2I+1)$-fold degenerate.
As shown in Appendix \ref{appB}, for the ``parallel'' rotation around the symmetry axis $Oz$, i.e., an alignment of the individual angular momenta of the particles along $Oz$ (see Ref.~\cite{KM79} for the spherical case), in contrast to the ``perpendicular-to-axis $Oz$'' collective rotation (see, e.g., Ref.~\cite{GM21}), one can derive the level density $\rho(E,A,M)$ within the MMA approach in the same analytical form as for $\rho(E,A)$, Eq.~(\ref{denbes}): \be\l{denbes1} \rho^{}_{\rm \tt{MMA}}(E,A,M)\approx \overline{\rho}_{\nu}f_\nu(S)~,\qquad \mbox{with}\qquad\nu=2,3~, \ee where \be\l{conM2} \overline{\rho}_2= \hbar~ \left(\frac{2a^{3}}{3\Theta}\right)^{1/2},\quad \nu=2~~~\mbox{(i)}~, \ee and \be\l{conM3} \overline{\rho}_3 = \hbar \lambda~ \left(\frac{8a^5}{\pi^2\Theta}\right)^{1/2},\quad \nu=3~~~\mbox{(ii)}~. \ee In Eq.~(\ref{denbes1}), the argument of the Bessel-like function, $f_{\nu}(S)\propto I_\nu(S)$, Eq.~(\ref{denbes}), is the entropy $S(E,A,M)$, Eq.~(\ref{entrFG}), with the $M$-dependent excitation energy $U$. Indeed, in the adiabatic mean-field approximation, the level density parameter $a$ in Eq.~(\ref{entrFG}) is given by Eq.~(\ref{daF}). For the intrinsic excitation energy $U$ in Eq.~(\ref{entrFG}), one finds \be\l{Eex} U=E-E_{0}-\frac12 \Theta~\omega^2~,~~~~\omega=\frac{\hbar M}{\Theta}~, \ee where $E_{0}=\tilde{E} +\delta E$ is the same intrinsic (nonrotating) shell-structure energy as in Eq.~(\ref{OmadF}). With the help of the conservation equation (\ref{conseq}) for the saddle point, $\kappa^\ast=\hbar \omega \beta$, we eliminated the rotation frequency $\omega$, obtaining the second relation in Eq.~(\ref{Eex}); see Appendix \ref{appB}. For the moment of inertia (MI) $\Theta$ one has a similar SCM decomposition: \be\l{MI} \Theta=\tilde{\Theta} + \delta \Theta~, \ee where $\tilde{\Theta}$ is the (E)TF MI component, which can largely be approximated by the TF expression, Eq.~(\ref{rigMIpar}), and $\delta \Theta$ is the MI shell correction, which, for a spherically symmetric mean field, is finally given by Eq.~(\ref{MIpar}). As mentioned above, Eqs.~(\ref{denbes1})-(\ref{MI}) are valid for the ``parallel'' rotation (an alignment of the nucleons' angular momenta along the symmetry axis $Oz$); see Appendix \ref{appB} for the specific derivations assuming a spherical symmetry of the potential. In these derivations we used Eq.~(\ref{Eex}) for the excitation energy $U$, Eq.~(\ref{parfun}) for the partition function, and Eqs.~(\ref{GCEpot}) and (\ref{SCMpot}) for the potential $\Omega(\beta,\lambda,\omega)$. In the evaluation of the Jacobian, $\mathcal{J}$, one can neglect the shell corrections, in contrast to the evaluation of the entropy $S$ in the function $f_\nu(S)$. In the derivations of Eqs.~(\ref{conM2}) for $\overline{\rho}^{}_{2}$ and (\ref{conM3}) for $\overline{\rho}^{}_{3}$, we obtained the Jacobian components, $\tilde{\mathcal{J}}$ for case (i) and $\delta \mathcal{J}$ for case (ii), both under the assumption of an axially symmetric mean field (see Appendix \ref{appB}). For the Jacobian calculations, one can finally use the (E)TF approximation, $\Theta \approx \tilde{\Theta}$, in case (i). The Jacobian $\mathcal{J}$ in case (ii) can be approximated by Eq.~(\ref{Jacsph}). As a result, one may accurately use the (E)TF approximation $\Theta \approx \tilde{\Theta}$ in Eqs.~(\ref{conM2}) and (\ref{conM3}) for the coefficients $\overline{\rho}^{}_{2}$ and $\overline{\rho}^{}_{3}$.
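For orientation, the size of the rotational shift $\hbar^2M^2/2\Theta$ entering the excitation energy (\ref{Eex}) is easy to estimate with the rigid-body TF moment of inertia, $\Theta \approx \Theta_{\rm \tt{TF}} \approx 2\mu R^2 A/5$ with $R=r^{}_0A^{1/3}$ and $r_0=1.14$ fm, quoted after Eq.~(\ref{intFGSS}) above; a minimal Python sketch (the chosen $M$ values are illustrative):
\begin{verbatim}
import numpy as np

hbarc, mu_c2 = 197.327, 938.9     # MeV fm, MeV (nucleon rest energy)
A, r0 = 200, 1.14                 # fm
R = r0 * A**(1 / 3)
Theta_c2 = 0.4 * mu_c2 * R**2 * A # Theta c^2 = (2/5) mu c^2 R^2 A, MeV fm^2
for M in (0, 10, 20):
    E_rot = (hbarc * M)**2 / (2.0 * Theta_c2)   # hbar^2 M^2 / (2 Theta)
    print("M = %2d:  E_rot = %.2f MeV" % (M, E_rot))
\end{verbatim}
For a heavy nucleus with $A=200$ this shift stays below a few MeV even for $M\sim 20$, i.e., small compared to the neutron-resonance excitation energies.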
Note that there is no divergence of the level density $\rho(E,A,M)$ [Eq.~(\ref{denbes1})] in the limit $U\rightarrow 0$, Eq.~(\ref{den0gen}), in contrast to the standard results of the full SPM within the Fermi gas model. The latter are associated with the leading term of expansion (\ref{rhoasgen}) of the Bessel-like function $f_\nu(S)$. Equation (\ref{denbes1}), with $M=\mathcal{K}$, if it exists, can be used for calculations of the level density $\rho(E,A,\mathcal{K})$, where $\mathcal{K}$ is the projection of the total angular momentum ${\bf I}$ onto the symmetry axis of the axially symmetric potential \cite{MK79} ($K$ in the notation of Ref.~\cite{BM75}). We note that in applications \cite{Be36,Er60,BM67} it is common to use the level density as a function of the spin $I$, $\rho(E,A,I)$. We will consider here only the academic case of an axially symmetric potential, which can be realized practically for the spherical or axial symmetry of a mean nuclear field for the ``parallel'' rotation mentioned above. Using Eq.\ (\ref{denbes1}), under the same assumption of a closed rotating system and, therefore, conservation of the integrals of motion, the spin $I$ and its projection $M$ on the space-fixed axis, one can calculate the corresponding spin-dependent level density $\rho(E,A,I)$ for a given energy $E$, particle number $A$, and total angular momentum $I$ by employing the Bethe formula \cite{Be36,BM67,Ig83,So90}, \bea \l{denEAIgen} \rho(E,A,I)&=&\rho(E,A,M=I) - \rho(E,A,M=I+1)\nonumber\\ &\approx& -\left(\frac{\partial \rho(E,A,M)}{\partial M}\right)^{}_{M=I+1/2}~. \eea For this level density, $\rho(E,A,I)$, one obtains from Eqs.~(\ref{denbes1}) and (\ref{Eex}) \be \rho^{}_{\rm \tt{MMA}}(E,A,I) \approx \frac{a\overline{\rho}_{\nu}\hbar^2(2I+1)}{\Theta}f_{\nu+1}(S) \l{denbesI}~, \ee where $S$ is given by Eq.~(\ref{entrFG}) with the excitation energy (\ref{Eex}), and $\nu$ equals 2 and 3, in correspondence with Eq.~(\ref{denbes1}). The multiplier $2I+1$ in Eq.~(\ref{denbesI}) appears because of the substitution $M=I+1/2$ into the derivative in Eq.~(\ref{denEAIgen}). In order to obtain the approximate MMA total level density $\rho(E,A)$ from the spin-dependent level density $\rho(E,A,I)$, we can multiply Eq.~(\ref{denbesI}) by the spin degeneracy factor $2 I+1$ and integrate (sum) over all spins $I$. Using the expansion of the Bessel functions in Eq.~(\ref{denbesI}) over the argument $S$ for $S\ll 1$ [Eq.~(\ref{den0gen})], one finds the finite combinatorial expression. For large $S$ [large excitation energy, $aU\gg 1$, Eq.~(\ref{rhoasgen})], one obtains from Eq.~(\ref{denbesI}) the asymptotic Fermi-gas expansion. Again, the main term of the expansion for large $S$, Eq.~(\ref{rhoasgen}), coincides with the full SPM limit of the inverse Laplace integrations in Eq.\ (\ref{dengen}). For small angular momentum $I$ and large excitation energy $U_0=E-E_{0}$, such that \be\l{Iexp} \frac{E_{\rm rot}}{U_0}\approx\frac{I(I+1)\hbar^2}{2\Theta~U_0} \ll 1~, \ee one finds the standard separation of the level density, $\rho^{}_{\rm \tt{MMA}}(E,A,I)$, into the product of a dimensionless spin-dependent Gaussian multiplier, $\mathcal{R}(I)$, and a spin-independent factor. Finally, for case (i) ($\nu=2$), one finds \begin{equation} \rho^{}_{\rm \tt{MMA}}(E,A,I) \approx \frac{\overline{\rho}^{}_{2}~ \mathcal{R}(I)~\exp\left(2\sqrt{a U_0}\right)}{ 16\sqrt{\pi}~(a U_0)^{5/4}} \quad (i)~.
\label{rhoIexp2} \end{equation} The spin-dependent factor $\mathcal{R}(I)$ is given by \begin{equation} \mathcal{R}(I)=\frac{2I+1}{q^2} \exp\left(-\frac{I(I+1)}{2q^2}\right)~, \label{qIexp} \end{equation} where $q^2=\Theta\sqrt{U_0/a}/\hbar^2$ is the dimensionless spin dispersion. At the saddle point, $\beta^\ast=1/T=\sqrt{a/U_0}$, this dispersion becomes the standard spin dispersion, $q^2=\Theta T/\hbar^2$; see Refs.~\cite{Be36,Er60}. Similarly, for the $\nu=3$ case (ii) one obtains \begin{equation} \rho^{}_{\rm \tt{MMA}}(E,A,I) \approx \frac{\overline{\rho}^{}_{3}~ \mathcal{R}(I)~\exp\left(2\sqrt{aU_0}\right)}{ 32\sqrt{\pi}~(a U_0)^{7/4}} \quad (ii)~. \label{rhoIexp3} \end{equation} Note that the power dependence of the pre-exponential factor of the level density $\rho(E,A,I)$ on the excitation energy, $U_0=E-E_0$, differs from that of $\rho(E,A,M)$; see Eqs.~(\ref{denbes1}) and (\ref{rhoasgen}). The exponential dependence, $\rho \propto \exp(2\sqrt{a(E-E_0)})$, at large excitation energy $E-E_0$ is the same for $\nu=2$ (i) and $3$ (ii), but the pre-exponential factor is different; cf. Eqs.~(\ref{rhoIexp2}) and (\ref{rhoIexp3}). A small angular momentum $I$ means that the condition of Eq.~(\ref{Iexp}) applies. Equations (\ref{rhoIexp2}) and (\ref{rhoIexp3}), with Eq.~(\ref{qIexp}), are valid for excited states approximately under the condition $1/\tilde{g} \ll U \ll \lambda$; see Eq.~(\ref{condU}). For relatively small spins [Eq.\ (\ref{Iexp})] we have the so-called small-spin Fermi-gas model (see, e.g., Refs.\ \cite{Be36,Er60,GC65,BM67,Ig83,So90,KS20}). The general derivations of the equations applicable to axially symmetric systems (a ``parallel'' rotation) in this section are specified in Appendix \ref{appB} by using the spherical potential, in order to present explicitly the expressions for the shell-correction components of several POT quantities. However, the results for the spin-dependent level density $\rho(E,A,I)$ in this section, Eqs.~(\ref{denbesI})-(\ref{rhoIexp3}), cannot be immediately applied for comparison with the available experimental data on rotational bands in the collective rotation of a deformed nucleus. These data are presented within the unified rotation model \cite{BM75} in terms of the spin $I$ and its projection $\mathcal{K}$ onto the internal symmetry axis of the deformed nucleus. We are going to use the ideas of Refs.~\cite{Bj74,BM75,Gr13,Gr19,Ju98} (see also Refs.~\cite{Ig83,So90}) concerning another definition of the spin-dependent level density $\rho(E,A,I)$, in terms of the intrinsic level density and the collective rotation (and vibration) enhancement, in a forthcoming work. The level density $\rho(E,A,\mathcal{K})$, e.g., Eq.~(\ref{denbes1}) at $M=\mathcal{K}$, depending on the spin projection $\mathcal{K}$ onto the symmetry axis of an axially symmetric deformed nucleus, can be helpful in this context. \vspace{-0.4cm} \begin{figure*} \vskip1mm \includegraphics[width=12.0cm]{Fig4-prc-rev.eps} \vskip-3mm\caption{{\small Level density, $\ln\rho(E,A)$, obtained for low energy states in $^{144}$Sm (a), $^{166}$Ho (b), $^{208}$Pb (c), and $^{230}$Th (d) within different approximations: the MMA1 dashed green line ``1'', Eq.~(\ref{rho32}); the MMA2 solid black line ``2a'', Eq.~(\ref{rho52}), at the realistic relative shell correction $\mathcal{E}_{\rm sh}$ \cite{MSIS12}; the MMA2 dash-dotted red line ``2b'', Eq.~(\ref{rho52}) with (\ref{rhobar52TF}), at an extremely small $\mathcal{E}_{\rm sh}$; and the Fermi-gas Bethe approach (``3''), the sparse blue dotted line, Eq.~(\ref{ldBethe}).
\vspace{-0.4cm} \begin{figure*} \vskip1mm \includegraphics[width=12.0cm]{Fig4-prc-rev.eps} \vskip-3mm\caption{{\small Level density, $\mbox{ln}\rho(E,A)$, obtained for low energy states in $^{144}$Sm (a), $^{166}$Ho (b), $^{208}$Pb (c) and $^{230}$Th (d) within different approximations: the MMA dashed green line ``1'', Eq.~(\ref{rho32}); the MMA solid black line ``2a'', Eq.~(\ref{rho52}), at the realistic relative shell correction $\mathcal{E}_{\rm sh}$ \cite{MSIS12}; the MMA dash-dotted red line ``2b'', Eq.~(\ref{rho52}) at an extremely small $\mathcal{E}_{\rm sh}$, i.e., Eq.~(\ref{rho52}) with (\ref{rhobar52TF}); and the Fermi-gas Bethe3 blue dotted line, Eq.~(\ref{ldBethe}). The realistic values of $\mathcal{E}_{\rm sh}$ = 0.37 (a), 0.50 (b), 1.77 (c), and 0.55 (d) for MMA2 are taken from Ref.~\cite{MSIS12} (the chemical potential is $\lambda=40$ MeV, independent of particle number). Heavy dashed red lines show test shifts of the excitation energies $U$ for MMA1 and MMA2a by $+1.1$ and $+2.2$ MeV in $^{144}$Sm and $^{208}$Pb, respectively, attributed presumably to the pairing condensation energy and marked by arrows in panels (a) and (c), as explained in the text and Table \ref{table-1}. Experimental dots (with error bars, $\Delta \rho_i/\rho_i=1/\sqrt{N_i}$) are obtained directly from the spectra of excited states (with spins and their degeneracies) \cite{ENSDFdatabase} of the nuclei shown (Table \ref{table-1}) by using the sample method, where the sample lengths $U_s=0.45$ (a), 0.15 (b), 0.34 (c), and 0.17 (d)~MeV are found from the plateau condition for the inverse level density parameter $K$. }} \label{fig4} \end{figure*} \section{Discussion of the results} \l{sec-disc} In Fig.\ \ref{fig4} and Table~\ref{table-1} we present the results of theoretical calculations of the statistical level density $\rho(E,A)$ (in logarithmic form) within the MMA, Eq.~(\ref{denbes}), and Bethe, Eq.~(\ref{ldBethe}), approaches as functions of the excitation energy $U$, compared with experimental data. The results of the popular FG approach, Eq.~(\ref{intFGSS}), and of our GFG, Eq.~(\ref{SPMgen}), are very close to those of the Bethe approximation and, therefore, are presented only in Table \ref{table-1}. All of the presented results are calculated by using the values of the inverse level density parameter $K$ obtained from least mean-square fits (LMSF) to the experimental data for several nuclei. The data shown by dots with error bars in Fig.~\ref{fig4} are obtained for the statistical level density $\rho(E,A)$ from the experimental excitation energies $U$ and spins $I$ of the state spectra \cite{ENSDFdatabase} by using the sample method: $\rho_i^{\rm exp}=N_i/U_s$, where $N_i$ is the number of states in the $i$th sample, $i=1,2,\ldots,N_{\rm tot}$; see, e.g., Refs.\ \cite{So90,LLv5}. The dots are plotted at the mean positions $U_i$ of the excitation energies within each $i$th sample. Convergence of the sample method with respect to the sample-length parameter $U_s$ of the statistical averaging was studied under plateau conditions for all plots in Fig.~\ref{fig4}. The sample lengths $U_s$ play a role similar to that of the averaging parameters in the Strutinsky smoothing procedure for the SCM calculations of the averaged s.p. level density \cite{St67,BD72}. The plateau means an almost constant value of the physical parameter $K$ within sufficiently large energy intervals $U_s$. A sufficiently good plateau was obtained in a wide range around the quoted values of $U_s$ for the nuclei presented in Fig.~\ref{fig4} and Table~\ref{table-1} \cite{ENSDFdatabase,HJ16}. Some values of $U_s$ are given in the caption of Fig.~\ref{fig4}. Therefore, the results of Table \ref{table-1}, calculated at values within the plateau, do not depend, within the statistical accuracy, on the averaging parameter $U_s$; this is similar to the independence of the energy and density shell corrections of the smoothing parameters in the SCM. The {\it statistical condition, $N_i\gg 1$ at $N_{\rm tot}\gg 1$,} determines the accuracy of our calculations.
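A minimal sketch of the sample-method estimator described above may be useful; contiguous, equally spaced samples are assumed here (one simple reading of the procedure), and the plateau search over $U_s$ is left to the caller.
\begin{verbatim}
import math

def sample_level_density(levels, U_s):
    """Sample-method estimate of the statistical level density.
    levels: excitation energies (MeV) of the discrete states (a
    non-empty list, each state counted with its degeneracy).
    U_s: sample length (MeV), chosen on the plateau of K.
    Returns (U_i, rho_i, err_i): mean sample energies, densities
    N_i/U_s, and relative errors 1/sqrt(N_i)."""
    levels = sorted(levels)
    U, rho, err, edge = [], [], [], 0.0
    while edge < levels[-1]:
        sample = [u for u in levels if edge <= u < edge + U_s]
        if sample:                        # statistical accuracy needs N_i >> 1
            N_i = len(sample)
            U.append(sum(sample) / N_i)   # dot plotted at the mean energy
            rho.append(N_i / U_s)
            err.append(1.0 / math.sqrt(N_i))
        edge += U_s
    return U, rho, err
\end{verbatim}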
Under these statistical conditions, microscopic details are neglected, but one obtains simpler, more general, analytical results, in contrast to a micro-canonical approach. As in the SCM, in our calculations with the sample method at good plateau values of the sample lengths $U_s$ (see the caption of Fig.~\ref{fig4}), one obtains a sufficiently smooth statistical level density as a function of the excitation energy $U$. We require such a smooth function because statistical fluctuations are neglected in our theoretical derivations. \begin{table*}[pt] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Nuclei & $\langle\Delta \rho_i/\rho_i \rangle$ &$\mathcal{E}_{\rm {sh}}$& Version & K~[MeV] & $\sigma$ && Version & K~[MeV] & $\sigma$ \\ \hline Sm-144&~0.18~&~0.37 & MMA2b & 40.3 &~5.1~& & MMA1$^{\ast}$ & 22.7 (16.7$^\ast$) &~3.6 (3.3$^\ast$)~\\ & & & GFG & 21.8 & 3.8& & MMA2a &22.1 &~3.9 \\ & & &Bethe & 23.2 &~3.7~& & FG & 19.7 &~3.6~\\ \hline Sm-148&~0.17~&~0.12~ & MMA2b & 32.5 &~5.2~& & MMA1 & 16.8 &~1.5\\ & & & GFG & 16.9 &~1.7~& & MMA2a & 19.3 &~3.0\\ & & &Bethe & 17.2 &~1.7~& & FG & 14.6 &~1.6\\ \hline Ho-166&~0.09~ &~0.50 & MMA2b & 17.5 &~1.6~& & MMA1 & 5.4 &~12.3~\\ & & & GFG & 5.5 &~11.1~& & MMA2a & 7.1 &~7.0\\ & & &Bethe & 5.6 &~11.2~& & FG & 4.7 &~11.5\\ \hline Pb-208~&~0.20~&~1.77~ & MMA2b & 70.1 &~3.8~& & MMA1 & 43.9 &~3.1\\ & & &GFG & 36.5 & 3.1& & MMA2a$^\ast$ &34.9 (21.9$^\ast$) &~3.0 (2.4$^\ast$)\\ & & &Bethe & 45.1 &~3.2~& & FG & 38.2 &~3.1\\ \hline Th-230&~0.24~&~0.55~ & MMA2b & 36.8 &~2.6~& & MMA1 & 12.3 &~2.1\\ & & &GFG & 12.7 &~1.3& & MMA2a & 14.9 &~0.9\\ & & & Bethe & 12.9 &~1.3~& & FG & 10.8 &~ 1.3\\ \hline \end{tabular} \vspace{-0.2cm} \caption{{\small The maximal mean errors (second column) in the statistical distribution of the states over the samples, $\langle\Delta \rho_i/\rho_i\rangle=\langle 1/\sqrt{N_i}\rangle$, for the nuclei (first column) from Ref.~\cite{ENSDFdatabase}; the relative energy shell corrections $\mathcal{E}_{\rm sh}$, Eq.~(\ref{xibdE}) (third column, from Ref.~\cite{MSIS12}); and the inverse level density parameter $K$ (fifth and eighth columns), found by the LMSF with the precision $\sigma$ given by the standard expression, Eq.~(\ref{chi}) (sixth and ninth columns), using the sample method and experimental data from Ref.~\cite{ENSDFdatabase}, shown for the versions of the approximation listed in the fourth and seventh columns at the relative shell corrections $\mathcal{E}_{\rm sh}$ of the third column. MMA1 and MMA2b (with the same notations for the different MMAs as in Fig.~\ref{fig4}) are the MMA approaches (\ref{rho32}) ($\nu=3/2$) and (\ref{rho52}) ($\nu=5/2$ at extremely small $\mathcal{E}_{\rm sh}$); GFG is the general full Fermi gas asymptote (\ref{SPMgen}). MMA2a is the more general MMA, Eq.~(\ref{rho52}), at the realistic relative shell corrections $\mathcal{E}_{\rm sh}$ \cite{MSIS12}. The asterisks denote the MMA1 and MMA2a approaches shifted along the excitation energy $U$ axis by the assumed pairing condensation energy, $U \rightarrow U-E_{\rm cond}$ with $E_{\rm cond}\approx 1.1$ and $2.2$ MeV for $^{144}$Sm and $^{208}$Pb, respectively, as shown in parentheses (see Sec.~\ref{sec-disc}). The Bethe [Eq.~(\ref{ldBethe})] and FG [Eq.~(\ref{intFGSS})] approaches are the same as in Refs.~\cite{Be36,Er60,GC65}.
} \label{table-1} \end{center} \end{table*} The relative quantity $\sigma$ of the standard LMSF (see Table \ref{table-1}), which quantifies the applicability of the theoretical approximations $\rho(U_i)$ (Sec.~\ref{sec-MMAas}) to the description of the experimental data $\rho_i^{\rm exp}$ \cite{ENSDFdatabase}, is given by \be\l{chi} \sigma^2=\frac{\chi^2}{N_{\rm tot}-1}~,\quad \chi^2=\sum_{i=1}^{N_{\rm tot}} \frac{(y(U_i)-y^{\rm exp}_i)^2}{(\Delta y_i)^2}~, \ee where $y=\ln\rho$ and $\Delta y_i \approx 1/\sqrt{N_i}$. Each theoretical approach is subject to the applicability conditions assumed in its derivation. We consider the commonly accepted Fermi gas asymptote \cite{Be36,Er60,BM67,LLv5,Ig83,So90} for large excitation energies $U$; see the Bethe [Eq.~(\ref{ldBethe})] and FG [Eq.~(\ref{intFGSS})] approaches, cf.\ Eq.~(\ref{rhoasgen}) and our GFG (with shell effects) expression (\ref{SPMgen}). In a forthcoming work we will use the asymptotes of Eqs.~(\ref{rhoasgen}) and (\ref{SPMgen}), together with the sample method, to evaluate the statistical accuracy of the experimental data at relatively large excitation energies (near and above the neutron resonances). This is especially helpful in the case of dense, poorly resolved states at sufficiently large excitation energies. The value of $\sigma$ obtained by the LMSF provides an additional check of these theoretical conditions against the available experimental data. Notice also that the application of the sample method to determine the experimental statistically averaged level density from the nuclear spectra in terms of $\sigma^2$ differs essentially from the methods employed in previous works (see, e.g., Ref.\ \cite{EB09}), in that it statistically averages the nuclear level density and accounts for the spin degeneracies of the excited states. We do not use empirical free parameters in any of our calculations, in particular for the FG results shown in Table~\ref{table-1}. The commonly accepted nonlinear FG asymptote (\ref{rhoasgen}) can serve as a critical (necessary but, of course, not sufficient) theoretical guide which, with a given statistical accuracy, is helpful for assessing the completeness of the experimental spectra at large excitation energies where the spectrum is very dense.
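The LMSF quantity $\sigma$ of Eq.~(\ref{chi}) is straightforward to evaluate; a minimal sketch follows, in which \texttt{rho\_theory} is a hypothetical callback standing in for any of the approximations above (e.g., an MMA form with a trial value of $K$).
\begin{verbatim}
import math

def sigma_lmsf(U, rho_exp, N, rho_theory):
    """sigma of Eq. (chi): y = ln(rho), Delta y_i ~ 1/sqrt(N_i),
    sigma^2 = chi^2/(N_tot - 1). Needs at least two samples."""
    chi2 = 0.0
    for U_i, r_i, N_i in zip(U, rho_exp, N):
        dy = 1.0 / math.sqrt(N_i)    # error of y_i = ln(rho_i)
        chi2 += (math.log(rho_theory(U_i)) - math.log(r_i)) ** 2 / dy ** 2
    return math.sqrt(chi2 / (len(U) - 1))
\end{verbatim}
A least mean-square fit then amounts to minimizing this $\sigma$ with respect to the single parameter $K$.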
Figure \ref{fig4} shows two opposite situations concerning the state distributions as functions of the excitation energy $U$. We show results for the spherical magic $^{144}$Sm (a) and double magic $^{208}$Pb (c) nuclei, with maximal (negative, but largest in absolute value) shell correction energies, expressed in terms of the positive quantity $\mathcal{E}_{\rm sh}$; see Table \ref{table-1} and Ref.~\cite{MSIS12}. In these nuclei there are almost no states at extremely low excitation energies, in the range $U \siml 1-2$ MeV \cite{ENSDFdatabase}. In Table \ref{table-1} we also present results for the deformed nucleus $^{148}$Sm, where only a few levels exist in this range, which yields entropies $S \siml 1$. For the significantly deformed nucleus $^{166}$Ho, with values of $\mathcal{E}_{\rm sh}$ intermediate between the minimum and maximum [Fig.~\ref{fig4}(b)], one finds the opposite situation, in which there are many such low energy states (LESs). An intermediate number of LESs is observed, e.g., in another deformed nucleus, $^{230}$Th [Fig.~\ref{fig4}(d)], which has a complicated strong shell structure including subshell effects \cite{MSIS12}. Thus, we also present the results for two deformed nuclei, $^{166}$Ho and $^{230}$Th, from both ends of the heavy particle-number interval of interest, $A\approx 140-240$. In Fig.~\ref{fig4}, the results of the MMA approaches (1 and 2) are compared with those of the well-known ``Bethe3'' \cite{Be36} asymptote [Eq.~(\ref{ldBethe})]; see also Table \ref{table-1} for these and a few other asymptotic approaches, the FG [Eq.~(\ref{intFGSS})] and, with a focus on shell effects, the GFG [Eq.~(\ref{SPMgen})]. Results for MMA2a, i.e., MMA2 [Eq.~(\ref{rho52})] in the dominating-shell-effects case (ii) [$\xi^\ast \gg 1$, Eq.~(\ref{par}), at the saddle point $\beta=\beta^\ast$ for large excitation energies $U$] with the realistic relative shell corrections $\mathcal{E}_{\rm sh}$ \cite{MSIS12}, are shown against the results of the small-shell-effects approach MMA1 (i), Eq.~(\ref{rho32}) ($\xi^\ast \ll 1$ at $\beta=\beta^\ast$). For a very small value of $\mathcal{E}_{\rm sh}$, but still within the case (ii), i.e., Eq.~(\ref{rho52}) with (\ref{rhobar52TF}) (in particular, large $\xi^\ast$), we have the approach named MMA2b; its results are also shown in Fig.~\ref{fig4}. Results of calculations within the full SPM GFG asymptotic approach, Eq.~(\ref{SPMgen}), and within the popular FG approximation, Eq.~(\ref{intFGSS}), which are in good agreement with the standard Bethe3 approximation, are presented only in Table \ref{table-1}. For finite realistic values of $\mathcal{E}_{\rm sh}$, the results of the MMA2a approach are closer to those of the MMA1 approach. Therefore, since the MMA2b approach, Eq.~(\ref{rho52}) with (\ref{rhobar52TF}), is the limit of MMA2 at very small $\mathcal{E}_{\rm sh}$ within the case (ii), we conclude that the MMA2 approach is the more general shell-structure MMA formulation of the statistical level-density problem. In all panels of Fig.~\ref{fig4} one can see the divergence of the level densities of the Bethe formula [and also of the FG, Eq.~(\ref{intFGSS}), and the GFG, Eqs.~(\ref{SPMgen}) and (\ref{rhoasgen})] near zero excitation energy, $U\rightarrow 0$. This is in contrast to all MMAs, which reduce to the finite combinatorial expression (\ref{den0gen}) in the limit of zero excitation energy; see Eqs.~(\ref{denbes}), (\ref{rho32}), and (\ref{rho52}). The MMA1 results are close to the Bethe, FG, and GFG approaches everywhere, for all presented nuclei. The reason is that their differences are essential only at extremely small excitation energies $U$, where MMA1 is finite while the other (Bethe, FG, and GFG) approaches diverge; however, there are almost no excited states in the range of these differences in the nuclei under consideration. The results of the MMA2b approach [the MMA2 approach, Eq.~(\ref{rho52}), but with Eq.~(\ref{rhobar52TF}) for the coefficient $\overline{\rho}_{5/2}$, at a relatively very small shell correction $\mathcal{E}_{\rm sh}$] within the case (ii) for $^{166}$Ho [see Fig.~\ref{fig4}(b)], with $\sigma$ of the order of one, are in significantly better agreement with the experimental data than the results of all other approaches for the same nucleus. The MMA1 [Eq.~(\ref{rho32})], Bethe [Eq.~(\ref{ldBethe})], FG [Eq.~(\ref{intFGSS})], and full SPM GFG [Eq.~(\ref{SPMgen})] approaches are characterized by values of $\sigma\gg 1$, largely of the order of 10 (see Table \ref{table-1}).
In contrast to the $^{166}$Ho excitation spectrum, with its many very low energy states below about 1 MeV, for $^{144}$Sm (a) and $^{208}$Pb (c) one finds no such states. For the MMA2b approach [MMA2 at very small $\mathcal{E}_{\rm sh}$, but within the case (ii)] we find larger values of $\sigma$, $\sigma \gg 1$ for $^{144,148}$Sm and somewhat larger values for $^{208}$Pb, compared with the other approximations. In particular, for MMA1 (i) and the other asymptotic approaches of Bethe, FG, and GFG, one finds almost the same $\sigma$ of the order of one, i.e., better agreement with the data \cite{ENSDFdatabase,HJ16}. We obtain basically the same for MMA2a (ii) with realistic values of $\mathcal{E}_{\rm sh}$. Notice that for the $^{144,148}$Sm and $^{208}$Pb nuclei, MMA2a [Eq.~(\ref{rho52})] at realistic $\mathcal{E}_{\rm sh}$ is close to the MMA1 (i), Bethe, FG, and GFG approaches. The MMA1 and MMA2a (at realistic values of $\mathcal{E}_{\rm sh}$), as well as the Bethe, FG, and GFG approaches, are obviously in much better agreement with the experimental data \cite{ENSDFdatabase} for $^{144}$Sm (or $^{148}$Sm) and $^{208}$Pb [Figs.~\ref{fig4}(a) and (c)], for which one has the opposite situation: a very small number of states in the LES range. We note that the results of MMA1 and MMA2a with excitation energies shifted by constant condensation energies, $U\rightarrow U_{\rm eff}=U-E_{\rm cond}>0$ with $E_{\rm cond}\approx 1.1$ and $2.2$ MeV for $^{144}$Sm and $^{208}$Pb, respectively (shown by arrows in Fig.~\ref{fig4}), may indicate a pairing phase transition caused by the disappearance of the pairing correlations \cite{Ig83,So90,SC19}. With increasing $U$, one can see a sharp jump in the level density of the double magic $^{208}$Pb nucleus within the shown spectrum range; in $^{144}$Sm, such a phase transition occurs a little above the presented range of excitation energies. This effect could be related to the pairing phase transition\footnote{For the temperature dependence of the pairing gap in the simplest BCS theory one can evaluate $\Delta(T)-\Delta_0=-\sqrt{2\pi\Delta_0T}\exp(-\Delta_0/T)$, where $\Delta_0 \approx 12/A^{1/2}$ MeV at $T=0$; see Refs.~\cite{SY63,Mo72,Ig83,So90,Sv06,SC19}. The pairing gap therefore disappears at the critical temperature $T_{cr}=\gamma \Delta_0/\pi$, where $\gamma$ is given by the Euler constant, $\ln \gamma=0.577\ldots$. Evaluating the condensation energy, $E_{\rm cond}=g \Delta_0^2/4= 3A\Delta_0^2/(2\pi^2K)$, one arrives at the effective excitation energy $U_{\rm eff}=U-E_{\rm cond}$.} near the critical temperature $T_{\rm cr}=0.47$ MeV in $^{208}$Pb ($0.57$ MeV in $^{144}$Sm), i.e., at the critical effective excitation energy $U_{\rm eff}=U-E_{\rm cond}\approx 3.3$ MeV ($4.1$ MeV in $^{144}$Sm), resulting in a level density jump. These simple estimates agree qualitatively, in order of magnitude, with the condensation energy $E_{\rm cond}\approx 1$ MeV. The procedure is a self-consistent calculation: starting from a value of the condensation energy $E_{\rm cond}$, one obtains the inverse level density parameter $K$; one then evaluates a new $E_{\rm cond}$ and iterates until convergence of $K$ and $E_{\rm cond}$ is achieved, at least in order of magnitude. This can be realized with MMA1 for $^{144}$Sm and MMA2a for $^{208}$Pb; see Table \ref{table-1} and Figs.~\ref{fig4}(a) and (c); a numerical sketch of this loop is given below.
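The estimates of the footnote, and the $K \leftrightarrow E_{\rm cond}$ iteration just described, can be sketched as follows; \texttt{fit\_K} is a hypothetical callback returning the LMSF value of $K$ for data with excitation energies shifted by the current $E_{\rm cond}$.
\begin{verbatim}
import math

GAMMA = math.exp(0.5772156649)   # Euler-constant factor, ln(gamma) = 0.577...

def pairing_estimates(A, K):
    """BCS-type estimates used in the text (all energies in MeV):
    gap Delta_0 ~ 12/sqrt(A), critical temperature T_cr = gamma*Delta_0/pi,
    condensation energy E_cond = 3*A*Delta_0^2/(2*pi^2*K)."""
    delta0 = 12.0 / math.sqrt(A)
    T_cr = GAMMA * delta0 / math.pi
    E_cond = 3.0 * A * delta0 ** 2 / (2.0 * math.pi ** 2 * K)
    return delta0, T_cr, E_cond

def self_consistent_K(A, fit_K, E_cond=0.0, tol=0.05, itmax=50):
    """Iterate K <-> E_cond until both converge (in order of magnitude)."""
    for _ in range(itmax):
        K = fit_K(E_cond)                    # refit K with U -> U - E_cond
        E_new = pairing_estimates(A, K)[2]   # recompute condensation energy
        if abs(E_new - E_cond) < tol:
            break
        E_cond = E_new
    return K, E_cond
\end{verbatim}
For example, $A=208$ gives $\Delta_0 \approx 0.83$ MeV and $T_{\rm cr} \approx 0.47$ MeV, in agreement with the value quoted above.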
The phase-transition jump is clearly seen in panel (c) but not in panel (a), where it lies above the shown excitation energy range; in both cases it occurs at the effective excitation energies $U_{\rm eff}$ mentioned above. One possible reason for the exceptional properties of $^{166}$Ho [Fig.~\ref{fig4}(b)], as compared to both $^{144}$Sm (a) and $^{208}$Pb (c), is the nature of the excitation energy in these nuclei. Our MMA (i) and (ii) approaches could clarify this nature [see Sec.~\ref{subsec-I} and Appendix \ref{appB} for the rotational contribution, which can be included in $E_0$ of Eq.~(\ref{OmadF}) as done in Eq.~(\ref{Omad})]. Since the results of the MMA2b (ii) approach are in much better agreement with the experimental data than those of the MMA1 (i) approach for $^{166}$Ho, one may conclude that in $^{166}$Ho the LESs are predominantly thermal excitations, $U \gg E_{\rm rot}$, Eq.~(\ref{condU}). For $^{144}$Sm and $^{208}$Pb one observes more regular excitation contributions (large spins owing to alignment) with a dominating rotational energy $E_{\rm rot}$, Eq.~(\ref{condI2}); see Ref.~\cite{KM79}. The latter effect is much less pronounced in $^{208}$Pb than in $^{144}$Sm, but in all cases the inverse level density parameters $K$ are significant for states below the neutron resonances; see Table \ref{table-1}. However, taking into account the pairing effects, even qualitatively, the thermal contribution (ii) is also important for $^{208}$Pb, while the regular nonthermal motion might dominate in $^{144}$Sm. In any case, the shell effects are important, especially for the case (ii), which does not even exist without taking them into account. For $^{230}$Th [Fig.~\ref{fig4}(d)], the experimental LES data lie between the two limiting cases MMA1 (i) and MMA2b (ii), in line with the intermediate number of very low energy states in this nucleus. As shown in Fig.~\ref{fig4}(d) and Table \ref{table-1}, the MMA2a approach at realistic values of $\mathcal{E}_{\rm sh}$ is in good agreement with the data. The shell structure of $^{230}$Th is, of course, not as strong as that of the double magic nucleus $^{208}$Pb, but it is of the same order as in the other presented nuclei. Notice also that, in contrast to the spherical nuclei of Figs.~\ref{fig4}(a) and (c), the nuclei $^{166}$Ho (b) and $^{230}$Th (d) are significantly deformed, which is also important, in particular because of the large angular momenta of the states in their LES spectra. We do not use the free empirical parameters of the BSFG, spin-cutoff FG, and empirical CTM approaches \cite{EB09}; as an advantage, one has only the parameter $K$, with the physical meaning of the inverse level density parameter. Variations in $K$ are related, e.g., to those of the mean field parameters through Eq.~(\ref{entrFG}). All the densities $\rho(E,A)$ compared in Fig.~\ref{fig4} and Table \ref{table-1} do not depend on the spin cutoff factor and the moment of inertia, because of the summation (integration) over all spins (accounting, however, for the degeneracy factor $2I+1$). In line with the results of Ref.~\cite{ZS16}, the values of $K$ obtained for the MMA2 approach can be essentially different from the MMA1 ones and from those (e.g., FG) found mainly from the neutron resonances (NRs). However, the level densities with excitation energies shifted by the constant condensation energies due to pairing, for $^{208}$Pb (c) and $^{144}$Sm (a) in Fig.~\ref{fig4}, notably improve the comparison with the data \cite{ENSDFdatabase}.
These densities correspond to inverse level-density parameters $K$ even smaller than those obtained in the FG approach, which agreed with the NR data. We note that for the MMA1 approach one finds values of $K$ of the same order as those of the Bethe, FG, and GFG approaches; these values of $K$ are mostly close to the NR values in order of magnitude. For the FG approach, Eq.~(\ref{intFGSS}), in accordance with its indirect derivation through the spin-dependent level density $\rho(E,A,I)$, Eq.~(\ref{rhoIexp2}) (Sec.~\ref{subsec-I}), this is obviously because the neutron resonances occur at large excitation energies $U$ and small spins; see Eqs.~(\ref{condU}) and (\ref{Iexp}). Large deformations, neutron-proton asymmetry, the spin dependence for deformed nuclei, and pairing correlations \cite{Er60,Ig83,So90,AB00,AB03,ZK18,Ze19} in rare earth and actinide nuclei should also be taken into account to improve the comparison with experimental data. \section{Conclusions} \l{sec-concl} We derived the statistical level density $\rho(S)$ as a function of the entropy $S$ within the micro-macroscopic approximation (MMA), using the mixed micro- and grand-canonical ensembles beyond the standard saddle point method of the Fermi gas model. The obtained level density can be applied to both small and relatively large entropies $S$ or excitation energies $U$ of a nucleus. For large entropy (excitation energy), one obtains the exponential asymptote of the standard SPM Fermi gas model, but with significant corrections in powers of $1/S$. For small $S$ one finds the usual finite combinatorial expansion in powers of $S^2$. Functionally, the MMA in the linear approximation of the $S^2 \propto U$ expansion, at small excitation energies $U$, coincides with the empirical constant-``temperature'' model, except that it is obtained without free fitting parameters. Thus, the MMA unifies the commonly accepted Fermi gas approximation with the empirical CTM, for large and small entropies $S$ respectively, in line with the suggestions of Refs.~\cite{GC65,ZK18}. The MMA manifests a clear advantage over the standard full SPM approaches at low excitation energies, because it does not diverge in the limit of small excitation energies, in contrast to all full SPM approaches, e.g., the Bethe and FG asymptotes. Another advantage appears for nuclei with many states in the very low energy range. The values of the inverse level density parameter $K$ were compared with those extracted from experimental data for LESs below the neutron resonances (NRs) in the spectra of several nuclei. The MMA results, with only one physical parameter in the least mean-square fit (the inverse level density parameter $K$), typically improve with an increasing number of extremely low energy states, and in this case are certainly much better than the results of the FG model. The MMA values of the inverse level density parameter $K$ for LESs can be significantly different from the neutron-resonance values obtained within the FG model. We found significant shell effects in the MMA level density in the nuclear LES range within the semiclassical periodic orbit theory. In particular, we generalized the known SPM results for the level density to the full SPM GFG approximation accounting for the shell effects by using the POT. The exponential disappearance of shell effects with increasing temperature was studied analytically within the POT for the level density.
Shifts in the entropy $S$ and in the inverse level density parameter $K$ due to the shell effects were also obtained and given in explicit analytical form. The shifts occur at temperatures much lower than the chemical potential, near the NR excitation energies. Simple estimates of pairing effects in spherical magic nuclei, through a shift of the excitation energies by the pairing condensation energy, significantly improve the comparison with experimental data. Pairing correlations essentially influence the level density parameters at low excitation energies, and we found an attractive description of the well-known jump in the level density within our MMA approach in terms of the pairing phase transition. Other analytical sources of the excitation energy shifts of the BSFG model are found by also using a more accurate expansion of the modified Bessel expression for the MMA level density at large entropies $S$, taking into account higher-order terms in $1/S$; this is important in both the LES and NR regions, especially for LESs. We presented a reasonable description of the LES experimental data for the statistically averaged level density, obtained by the sample method, within the MMA with the help of the semiclassical POT, and we have emphasized the importance of the shell and pairing effects in these calculations. We obtained values of the inverse level density parameter $K$ in the LES range which are essentially different from those of the NRs. These results extend directly to the dependence of the level density on the spin variables for nuclear rotations around the symmetry axis of the mean field, due to the alignment of the individual nucleon angular momenta along that axis. Our approach can be applied to the statistical analysis of experimental data on collective nuclear states. Since the semiclassical POT MMA improves with increasing particle number in a Fermi system, this method can also be applied to study metallic clusters and quantum dots in terms of the statistical level density, and to problems in nuclear astrophysics. The neutron-proton asymmetry, large nuclear angular momenta and deformations for collective rotations, further consequences of pairing correlations, as well as other perspectives, will be taken into account in future work in order to significantly improve the comparison of the theoretical results with experimental data on the level density parameter, in particular below the neutron resonances. \bigskip \centerline{{\bf Acknowledgment}} \medskip The authors gratefully acknowledge Y.\ Alhassid, D.\ Bucurescu, R.K.\ Bhaduri, M.\ Brack, A.N.\ Gorbachenko, and V.A.\ Plujko for creative discussions. This work was supported in part by the budget program ``Support for the development of priority areas of scientific researches,'' a project of the Academy of Sciences of Ukraine (Code 6541230, No 0120U100434). S.\ S.\ is partially supported by the US Department of Energy under Grant No. DE-FG03-93ER-40773. \vspace{0.5cm}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{intro} Jets are remnants of hard-scattered quarks and gluons, and they are studied extensively in high energy collisions of all kinds. Despite naive expectations, jets are not fundamental objects in Quantum Chromodynamics (QCD); instead, they are event properties defined by hand, i.e., well defined and easy to measure from the hadronic final state, easy to calculate in pQCD from the partonic final state, and closely related to the final-state quarks and gluons \cite{seymor}. A well-chosen jet definition thus allows us to study the fundamental objects of pQCD, namely the quarks and the gluons. Recently, however, many areas of high energy physics outside standard QCD studies have been utilizing jet physics to answer major questions. For example, at RHIC, jets are put to a new use to study the hot QCD matter through their interaction and energy loss in the medium (``jet quenching''). Direct jet measurements in $p+p$ collisions at RHIC have been carried out since the third year of RHIC operations \cite{starpp}. Until now, however, to avoid the complex backgrounds of heavy ion events, inclusive hadron distributions and di-hadron correlations at high transverse momentum have been utilized, so only indirect measurements of jet quenching have been made at RHIC. These measurements of jet fragmentation particles are biased towards the population of jets that interacts least with the medium. Since 2006, the STAR barrel electromagnetic calorimeter (BEMC) has been operated with full azimuthal coverage ($\phi$) and large pseudorapidity ($\eta$) acceptance. This detector upgrade, together with the increased beam luminosities of RHIC and the data recording capabilities of STAR, enables the study of full jet reconstruction in heavy ion collisions for the first time at RHIC \cite{salurww}. This article discusses results from a recent new approach to full jet reconstruction in heavy ion collisions, utilizing the high luminosity Au+Au data set collected by the STAR experiment during the 2007 RHIC run. The experimental details can be found in \cite{me} for the direct measurement of jets and in \cite{jor,bruna,elena} for the accompanying jet fragmentation studies in heavy ion collisions utilizing the STAR experiment. \section{Jet Reconstruction Analysis} During the last 20 years, various jet reconstruction algorithms have been developed for both leptonic and hadronic colliders. For a detailed overview of jet algorithms in high energy collisions, see \cite{me,davidE} and the references therein. Here we briefly discuss the algorithms used for the STAR analysis. Two kinds of jet reconstruction algorithms are utilized: seeded cone (leading order high seed cone, LOHSC) and sequential recombination ($\rm k_{T}$ and Cambridge/Aachen). The cone algorithm is based on the simple picture that a jet consists of a large amount of hadronic energy in a small angular region. The main method of the cone algorithm is therefore to combine particles in $\eta - \phi$ space with their neighbors within a cone of radius R ($R=\sqrt{ \Delta \phi ^{2}+ \Delta \eta^{2} }$). The sequential recombination algorithms instead combine objects according to the closeness of their $p_{T}$: particles are merged into a new cluster via successive pair-wise recombination. In a sequential recombination algorithm, arbitrarily shaped jets are allowed, following the energy flow, which results in less bias on the reconstructed jet shape than with the cone algorithm \cite{catchment}.
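To make the pairwise-recombination logic concrete, the following self-contained Python sketch implements a toy generalized-$\rm k_{T}$ clustering ($p=1$ for $\rm k_{T}$, $p=0$ for Cambridge/Aachen, $p=-1$ for anti-$\rm k_{T}$). It is an illustrative $O(N^3)$ implementation with E-scheme recombination, not the optimized $FastJet$ code used in the analysis; input four-momenta are assumed to have $E>|p_z|$ and nonzero $p_T$.
\begin{verbatim}
import math

def cluster(particles, R=0.6, p=1):
    """Generalized-kT sequential recombination of (px, py, pz, E) tuples."""
    kt2 = lambda v: v[0] ** 2 + v[1] ** 2                       # pT^2
    rap = lambda v: 0.5 * math.log((v[3] + v[2]) / (v[3] - v[2]))
    phi = lambda v: math.atan2(v[1], v[0])

    def dij(a, b):                                   # pairwise distance
        dphi = abs(phi(a) - phi(b))
        dphi = min(dphi, 2 * math.pi - dphi)
        dR2 = (rap(a) - rap(b)) ** 2 + dphi ** 2
        return min(kt2(a) ** p, kt2(b) ** p) * dR2 / R ** 2

    pseudo = [list(v) for v in particles]
    jets = []
    while pseudo:
        iB = min(range(len(pseudo)), key=lambda i: kt2(pseudo[i]) ** p)
        dB = kt2(pseudo[iB]) ** p                    # beam distance d_iB
        best = None
        for i in range(len(pseudo)):
            for j in range(i + 1, len(pseudo)):
                d = dij(pseudo[i], pseudo[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is not None and best[0] < dB:        # merge the closest pair
            _, i, j = best
            merged = [pseudo[i][k] + pseudo[j][k] for k in range(4)]
            pseudo = [v for k, v in enumerate(pseudo) if k not in (i, j)]
            pseudo.append(merged)
        else:                                        # promote to a final jet
            jets.append(pseudo.pop(iB))
    return jets
\end{verbatim}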
Algorithmic details of cone and sequential recombination can be found in \cite{jets,kt,ktref,blazey} and the references therein. Most recently, a new approach to jet reconstruction and background subtraction, motivated by the need for precision jet measurements in the search for new physics in high luminosity $p+p$ collisions at the LHC, was developed by M. Cacciari, G. Salam and G. Soyez \cite{catchment,salamtalk}. A key feature of their approach is a new QCD-inspired algorithm for separating jets from the large backgrounds due to pile-up. As it turns out from simulations, these improved techniques can also be used in heavy ion environments, where background subtraction is essential for jet measurements. Sequential recombination algorithms ($\rm k_{T}$, anti-$\rm k_{T}$ and Cambridge/Aachen (CAMB)) encoded in the $FastJet$ suite of programs \cite{catchment,antikt}, along with an alternative seeded cone algorithm (labeled LOHSC), are utilized to search for jets in the Au+Au collisions. A seedless infrared-safe cone algorithm (SISCone) \cite{sis}, which is also available in the $FastJet$ suite of programs as a plug-in, has already been used in $p+p$ collisions at $\sqrt{s}=200$ GeV; the first results can be found in \cite{elena}. \section{Results} Figure~\ref{fig:dijets} shows an example of an identified di-jet event in central Au+Au collisions, using both the neutral energy from the BEMC and charged particles from the Time Projection Chamber of the STAR experiment. In order to assess the bias of the heavy ion jet measurements, the inclusive jet cross section is compared to that from $p+p$ collisions presented in Ref.~\cite{starpp}. Figure~\ref{fig:kt} shows the comparison of the inclusive jet spectrum for central Au+Au collisions (taken with a Minimum Bias online trigger, ``MB-Trig'') to the $\rm N_{Bin}$ scaled $p+p$ spectrum, for the $\rm k_{T}$ and CAMB jet reconstruction algorithms. Jets in $p+p$ collisions are measured in the same way as in Au+Au collisions, utilizing the STAR Time Projection Chamber and BEMC and correcting for missing and double counted energy \cite{starpp}; for the $p+p$ case, however, a mid-point cone jet algorithm with splitting and merging steps is used. The inclusive jet spectrum from $p+p$ collisions agrees well with the next-to-leading order perturbative QCD calculation \cite{nlo}. The same comparison for the jets reconstructed with the LOHSC algorithm is presented in Figure~\ref{fig:LOHSC}. For both figures, to account for nuclear geometric effects, the $p+p$ spectrum is scaled by $\rm N_{Bin}$, the number of binary nucleon+nucleon collisions equivalent to a central Au+Au collision, as calculated by a Glauber model \cite{glauber}. \begin{figure}[] \begin{center} \resizebox{0.60\textwidth}{!}{% \includegraphics{dijets.eps} } \end{center} \caption{21 GeV di-jet reconstructed from a central Au+Au event at $\sqrt{s_{NN}}=200$ GeV in the STAR detector \cite{me,jor}.} \label{fig:dijets} \end{figure} \begin{figure}[t!] \centering $\begin{array}{cc} \resizebox{0.47\textwidth}{!}{ \includegraphics{ktnew.eps}}& \resizebox{0.47\textwidth}{!}{ \includegraphics{cambnew.eps}} \\ \end{array}$ \caption[]{ Jet yield per event vs transverse jet energy ($E_{T}$) for central Au+Au collisions obtained with the sequential recombination ($\rm k_{T}$, CAMB) algorithms \cite{starpp,me}. Triangle symbols are from MB-Trig and are corrected for efficiency, acceptance and energy resolution. Only statistical error bars are shown for the $Au+Au$ data.
Solid black squares are the distribution from $p+p$ collisions, scaled by $N_{Binary}$. The systematic uncertainty of the $p+p$ jet spectrum normalization is $\sim 50 \%$. } \label{fig:kt} \end{figure} \begin{figure}[t!] \centering \resizebox{0.49\textwidth}{!}{ \includegraphics{conenew.eps}} \\ \caption[]{ Jet yield per event vs transverse jet energy ($E_{T}$) for central Au+Au collisions obtained with the Leading Order High Seed Cone (LOHSC) algorithm \cite{starpp,me}. Triangle symbols are from MB-Trig and are corrected for efficiency, acceptance and energy resolution. Only statistical error bars are shown for the $Au+Au$ data. Solid black squares are the distribution from $p+p$ collisions, scaled by $N_{Binary}$. The systematic uncertainty of the $p+p$ jet spectrum normalization is $\sim 50 \%$. } \label{fig:LOHSC} \end{figure} In the case of jet reconstruction, $\rm N_{Bin}$ scaling is expected if the reconstruction is unbiased, i.e., if the jet energy is recovered independently of the fragmentation, even in the presence of strong jet quenching. This scaling is analogous to the cross section scaling of high $p_{T}$ direct photon production in heavy ion collisions observed by the PHENIX experiment \cite{phenix}. At present, the total systematic uncertainty on the normalization of the inclusive $p+p$ jet spectrum is around 50\%. Figure~\ref{fig:kt} and Figure~\ref{fig:LOHSC} show that the heavy ion jet spectrum agrees well with the scaled $p+p$ measurement within the systematic uncertainty. Figure~\ref{fig:panelkt} shows the jet spectra obtained with the $\rm k_{T}$ algorithm using different threshold cuts on the track momenta and calorimeter tower energies ($p_{T}^{cut}$). It is found that the agreement between the binary scaled $p+p$ spectra and the Au+Au measurement is worse for larger $p_{T}^{cut}$. This suggests that the threshold cuts introduce biases which are not fully corrected by the current procedure, which uses fragmentation models developed for $e^{+}+e^{-}$ and $p+p$ collisions. It could also be an indication of modified fragmentation due to jet quenching. \begin{figure}[] \begin{center} \resizebox{0.95\textwidth}{!}{% \includegraphics{panelkt.eps} } \end{center} \caption{Jet yield per event vs $E_{T}$ for 0-10\% central $Au+Au$ collisions obtained with the $k_{T}$ algorithm for two selections of $p_{T}^{cut}$. The distributions from $p+p$ collisions are scaled by $\rm N_{Binary}$ \cite{starpp,me}. Triangle symbols are from MB-Trig and are corrected for efficiency, acceptance and energy resolution. Only statistical error bars are shown for the $Au+Au$ data. Solid black squares are the distribution from $p+p$ collisions, scaled by $N_{Binary}$. The systematic uncertainty of the $p+p$ jet spectrum normalization is $\sim 50 \%$.} \label{fig:panelkt} \end{figure} \section{Conclusions} Unbiased reconstruction of jets in central heavy ion collisions at RHIC energies would be a breakthrough for investigating the properties of the matter produced at RHIC. The study shown here indicates that unbiased reconstruction of jets may be possible in heavy ion events. However, the spectrum corrections are currently based on model calculations using PYTHIA fragmentation \cite{PYTHIA}. This aspect, together with the spectrum variations due to cuts and reconstruction algorithms, must be investigated further in order to assess the systematic uncertainties of this measurement.
Furthermore, we can utilize the reconstructed jets and study the jet shapes to test the underlying QCD theory \cite{vitev,vitev2}. The results from the intra-jet energy distribution, jet-jet, and hadron-jet correlation studies will be available in the coming months and will enable us to study the properties of the medium produced at RHIC. Copious production of very energetic jets, well above the heavy ion background, is also predicted to occur at the LHC \cite{peter,solan}. The large kinematic reach of high luminosity running at RHIC and at the LHC may provide a sufficient lever-arm to map out the QCD evolution of jet quenching. The comparison of full jet measurements in the different physical systems generated at RHIC and the LHC will provide unique and crucial insights into our understanding of jet quenching and the nature of hot QCD matter. \section*{Acknowledgments} We thank the organizers of the 25th WWND for the opportunity, and E. Bruna, W. Holzmann, G. Soyez, and I. Vitev for particularly fruitful discussions about jet reconstruction in heavy ion collisions during the workshop. \section*{Note(s)} \begin{notes} \item[a] E-mail: ssalur@lbl.gov \end{notes} \bibliographystyle{bigsky2009}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A purely algebro-geometric approach to the cohomology of the coarse moduli spaces $\mathcal{M}_{g,n}$ and $\overline{\mathcal{M}}_{g,n}$ of smooth (respectively, stable) curves of genus $g$ with $n$ marked points has been recently developed by Enrico Arbarello and Maurizio Cornalba in the two papers \cite{ArbCor:98}, \cite{ArbCor:08}, where the only essential result borrowed from geometric topology is a vanishing theorem due to John Harer. Namely, the fact that $H_k(\mathcal{M}_{g,n})$ vanishes for $k > 4g-4+n$ if $n>0$ and for $k > 4g-5$ if $n=0$ was deduced in \cite{Harer:86} from the construction of a $(4g-4+n)$-dimensional spine for $\mathcal{M}_{g,n}$ by means of Strebel differentials. On the other hand, it is conceivable that Harer's vanishing is only the tip of an iceberg of deeper geometrical properties (see \cite{HaiLoo:98}, Problem (6.5)). For instance, a conjecture of Eduard Looijenga says that $\mathcal{M}_g$ is a union of $g-1$ affine open subsets (see \cite{FabLoo:99}, Conjecture~11.3), but (as far as we know) until now there have been no advances in this direction. Notice that Looijenga's conjecture trivially holds for $g=2,3$: indeed, it is well known that $\mathcal{M}_2$ is affine and that non-hyperelliptic curves of genus $3$ can be canonically embedded as quartic plane curves, hence $\mathcal{M}_3 = \mathcal{M}_3 \setminus \{\text{the hyperelliptic locus}\} \cup \mathcal{M}_3 \setminus \{\text{the locus of plane quartics with at least one hyperflex}\}$ is the union of two affine open subsets. Here we are going to prove the following \begin{Theorem}\label{main} For every $g$ with $2 \le g \le 5$ the moduli space $\mathcal{M}_g$ is the union of $g-1$ affine open subsets. \end{Theorem} We point out that, under the same assumptions on $g$, it follows from the properties of the linear system of quadrics passing through the canonical image of a smooth projective genus $g$ curve that $\mathcal{M}_g$ admits an affine stratification of depth $g - 2$ (see \cite{FL}). Our approach in the present note relies instead on the theory of modular forms. \section{Notation and preliminaries} We work over the complex field $\mathbb{C}$ and we denote by $\mathbb{H}_g$ the Siegel upper half-space of symmetric complex matrices with positive-definite imaginary part, the so-called period matrices. The action of the symplectic group $\Sp{g}$ on $\mathbb{H}_g$ is given by \begin{equation}\label{ABCD} \begin{pmatrix} A&B\\ C&D\end{pmatrix}\circ\tau:= (A\tau+B)(C\tau+D)^{-1}, \end{equation} where the elements of $\Sp{g}$ are thought of as four $g\times g$ blocks and they preserve the symplectic form given in block form as $\left(\begin{smallmatrix} 0& 1\\ -1& 0 \end{smallmatrix}\right)$. We recall that the quotient $\mathcal{A}_g = \mathbb{H}_g / \Sp{g}$ has the structure of a quasi-projective variety and it can be viewed as the moduli space of principally polarized abelian varieties. Let $\mathcal{M}_g$ and $\mathcal{H}_g$ be the moduli spaces of smooth curves and hyperelliptic curves of genus $g$, respectively. It is a very well known fact that \begin{equation}\label{torelli} \mathcal{H}_g \hookrightarrow \mathcal{M}_g \hookrightarrow \mathcal{A}_g. \end{equation} Moreover, we denote by $\Gamma_g:=\Sp{g}$ the integral symplectic group and we define the principal congruence subgroup $\Gamma_g[2] \subseteq \Gamma_g$: \[ \Gamma_g[2] = \{M \in \Gamma_g \mid M \equiv \Id_{2g} \mod 2\}, \] which acts on $\mathbb{H}_g$ in the same way as $\Gamma_g$ does.
The action of $\Gamma_g[2]$ on $\mathcal{A}_g$ induces a level $2$ structure: namely, we denote by $\mathcal{A}_g[2] = \mathbb{H}_g / \Gamma_g[2]$ the moduli space of principally polarized abelian varieties with a level $2$ structure. Since we have a map $\mathcal{A}_g[2] \to \mathcal{A}_g$, we can define $\mathcal{M}_g[2]$ as the preimage of $\mathcal{M}_g$ and $\mathcal{H}_g[2]$ as the preimage of $\mathcal{H}_g$ in $\mathcal{A}_g[2]$. The analogue of \eqref{torelli} holds for a level $2$ structure: \[ \mathcal{H}_g[2] \hookrightarrow \mathcal{M}_g[2] \hookrightarrow \mathcal{A}_g[2]. \] For a period matrix $\tau\in\mathbb{H}_g$, $z\in \mathbb{C}^g$ and $\varepsilon,\delta\in \mathbb{F}_2^g$ (where $\mathbb{F}_2$ denotes the abelian group $\mathbb{Z}/2 \mathbb{Z} = \{0,1\}$, for which we use the additive notation) the associated theta function with characteristic $m=[\varepsilon, \delta]$ is \[ \theta_m(\tau, z) =\sum_{n\in\mathbb{Z}^g} \exp \left(\pi i \left((n+\varepsilon/2)'\tau (n+\varepsilon/2)+ 2(n+\varepsilon/2)'( z+\delta/2)\right) \right), \] where we denote by $X'$ the transpose of $X$. As a function of $z$, $\theta_m(\tau, z)$ is odd or even depending on whether the scalar product $\varepsilon\cdot\delta\in\mathbb{F}_2$ is equal to 1 or 0, respectively. Theta constants are restrictions of theta functions to $z=0$. We shall write $\theta_m$ for theta constants. It is easy to check that odd theta constants are identically 0, since they are the evaluations at $z=0$ of odd functions. \begin{Remark}\label{split} Let $\tau \in \mathbb{H}_g$ be a period matrix of the form $\tau = \left(\begin{smallmatrix}\tau_1 & 0\\ 0 & \tau_2 \end{smallmatrix}\right)$, with $\tau_1 \in \mathbb{H}_{g_1}$ and $\tau_2 \in \mathbb{H}_{g_2}$, $g_1 + g_2 = g$. We split $m = [\varepsilon, \delta] \in \mathbb{F}_2^{2g}$ as $m_1 \oplus m_2$, where $m_1 = [\varepsilon_1, \delta_1] \in \mathbb{F}_2^{2g_1}$, $m_2 = [\varepsilon_2, \delta_2] \in \mathbb{F}_2^{2g_2}$ and $m = [\varepsilon_1 \varepsilon_2, \delta_1 \delta_2]$; then we have \[ \theta_m(\tau) = \theta_{m_1}(\tau_1)\cdot \theta_{m_2}(\tau_2). \] \end{Remark} For a set of characteristics $M=(m_1, m_2,\dots, m_k)$ we let \[ P(M):=\prod_{i=1}^k\theta_{m_i}. \] A holomorphic function $f\colon \mathbb{H}_g\to\mathbb{C}$ is a modular form of weight $k/2$ with respect to a subgroup $\Gamma\subset\Gamma_g$ of finite index if \[ f(\gamma\circ\tau)=\det(C\tau+D)^{k/2}f(\tau) \qquad \forall\gamma\in\Gamma,\forall\tau\in\mathbb{H}_g, \] where $C$ and $D$ are as in (\ref{ABCD}), and if additionally $f$ is holomorphic at all cusps for $g=1$. We denote by $[\Gamma, k/2]$ the space of such functions, which turns out to be a finite dimensional vector space. Moreover, \[ A(\Gamma):=\bigoplus_{k=0} ^{\infty}[\Gamma, k/2] \] is a finitely generated ring. The associated projective variety $\mathcal{A}_g^* = \proj(A(\Gamma))$ is the so-called \emph{Satake compactification} of $\mathbb{H}_g / \Gamma$. Theta constants are modular forms of weight $1/2$ with respect to the subgroup $\Gamma(4,8) \subseteq \Gamma_g[2]$ of matrices $M = \left(\begin{smallmatrix}A&B\\mathbb{C}&D\end{smallmatrix}\right) \equiv 1 \mod 4$ with $\diag(AB')\equiv \diag(CD')\equiv 0 \mod 8$ (see \cite{Igu:64}). For further use, we denote by $\Gamma_g (1,2)$ the subgroup of $\Gamma_g$ defined by $\diag(AB')\equiv \diag(CD')\equiv 0 \mod 2$; note that theta constants have an automorphy factor with respect to $\Gamma_g(1,2)$, while their eighth powers are modular forms.
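Since the series defining the theta constants converges rapidly (the imaginary part of $\tau$ is positive definite), they are easy to evaluate numerically. The following Python sketch, included here purely for illustration, computes a truncated sum; the cutoff $N$ is a user-chosen assumption, adequate for moderate $g$ and $\mathrm{Im}\,\tau$ not too small.
\begin{verbatim}
import cmath
import itertools

def theta_constant(eps, delta, tau, N=10):
    """Truncated sum for theta_m(tau) with m = [eps, delta],
    eps, delta in {0,1}^g; tau is a g x g complex symmetric matrix
    (list of lists) with positive-definite imaginary part."""
    g = len(eps)
    total = 0.0
    for n in itertools.product(range(-N, N + 1), repeat=g):
        v = [n_k + e_k / 2.0 for n_k, e_k in zip(n, eps)]   # n + eps/2
        quad = sum(v[i] * tau[i][j] * v[j]
                   for i in range(g) for j in range(g))     # v' tau v
        lin = sum(v_k * d_k / 2.0 for v_k, d_k in zip(v, delta))
        total += cmath.exp(cmath.pi * 1j * (quad + 2.0 * lin))
    return total

# Odd characteristics give (numerically) zero, e.g. for g = 1:
print(abs(theta_constant([1], [1], [[1j]])))   # ~ 0
\end{verbatim}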
We need the following rough formulation of a classical result by Mumford about the hyperelliptic locus (for a more precise statement we refer to \cite{Mum:84}). \begin{Theorem}\label{mumford} Let $\tau \in \mathbb{H}_g$. Then $\tau$ is the period matrix of a smooth hyperelliptic curve if and only if exactly \[ v(g) = 2^{g-1}(2^g+1) - \frac{1}{2}\binom{2g+2}{g+1} \] suitable even theta constants vanish at the point $\tau$. Each suitable sequence of theta constants defines an irreducible component of $\mathcal{H}_g[2]$. The full modular group acts transitively on the components. \end{Theorem} Moreover, we recall the structure of $\mathcal{H}_g[2]$ (see \cite{Tsu:90}). \begin{Remark}\label{Tsu} For $g \leq 2$, $\mathcal{H}_g[2]$ is an irreducible variety. For $g \geq 3$, $\mathcal{H}_g[2]$ breaks into disjoint irreducible components isomorphic to each other. The number of components of the hyperelliptic locus is \[ \frac{2^{g(2g+1)}}{(2g+2)!}\prod_{k=1}^g \left(1-2^{-2k}\right). \] Hence, we have 36 components for $g = 3$, each defined by the vanishing of a single even theta constant, 13056 components for $g=4$, each defined by the vanishing of 10 suitable even theta constants, and 51806208 components for $g=5$, each defined by the vanishing of 66 suitable even theta constants. \end{Remark} We introduce four special modular forms which will give us the affine open covering we are looking for. We denote by $\mathcal{E}$ the subset of $\mathbb{F}_2^{2g}$ of even characteristics; it is easy to show that $|\mathcal{E}| = 2^{g-1}(2^g + 1)$. Let \begin{align} F_\text{null} &= P(\mathcal{E}) = \prod_{m \in \mathcal{E}} \theta_m;\label{thetanull}\\ F_1 &= \sum_{m \in \mathcal{E}} \left(P(\mathcal{E})/\theta_m\right)^8 ;\label{una}\\ F_H &= \sum_{A \subseteq \mathbb{F}_2^{2g}} (P(\mathcal{E}\setminus A))^8;\label{hyperelliptic}\\ F_T &= 2^g \sum_{m \in \mathcal{E}} \theta_m^{16} - \left(\sum_{m \in \mathcal{E}} \theta_m^8 \right)^2\label{trigonal}. \end{align} In \eqref{hyperelliptic}, $A$ varies among all suitable sets of $v(g)$ characteristics corresponding to the irreducible components of $\mathcal{H}_g[2]$ as in Theorem \ref{mumford}. We need to take the eighth powers of the theta constants in order to obtain the modularity of the above forms with respect to the modular group. \begin{Remark} The modular form $F_\text{null}$ has weight $2^{g-2}(2^g+1)$. Its vanishing locus is a divisor $\Theta_\text{null}$ on $\mathcal{A}_g$. \end{Remark} \begin{Remark} The modular form $F_1$ has weight $2^{g+1}(2^g+1)-4$. It defines a divisor $D_1$ on $\mathcal{A}_g$. \end{Remark} \begin{Remark} The modular form $F_H$ has weight $2\binom{2g+2}{g+1}$ and coincides with $F_\text{null}^8$ for $g = 2$, since no theta constant vanishes on the hyperelliptic locus. In the case $g = 3$, the modular form $F_H$ coincides with $F_1$, since every component of the hyperelliptic locus is characterized by the vanishing of a single theta constant. Let $D_H$ be the divisor defined by $\{F_H = 0\}$ on $\mathcal{A}_g$. \end{Remark} \begin{Remark}\label{trigonal_locus} The modular form $F_T$ has weight $8$ and is not identically $0$ only for $g \geq 4$. Indeed, for $g = 2,3$ it vanishes identically on $\mathbb{H}_g$, while for $g = 4$ it vanishes on the preimage of $\mathcal{M}_4$ in $\mathbb{H}_4$. When $g \geq 4$, it defines a divisor $D_T$ on $\mathcal{A}_g$, which coincides with the closure of $\mathcal{M}_4$ when $g = 4$ and with the closure of the trigonal locus when $g = 5$ (see \cite{GruSal:09}). \end{Remark}
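For later use in the proof of Theorem \ref{main}, it is convenient to record the explicit values of $v(g)$ in the range considered here; this is straightforward arithmetic from Theorem \ref{mumford}: \[ v(3) = 36 - \tfrac{1}{2}\binom{8}{4} = 1, \qquad v(4) = 136 - \tfrac{1}{2}\binom{10}{5} = 10, \qquad v(5) = 528 - \tfrac{1}{2}\binom{12}{6} = 66, \] in agreement with the description of the components of $\mathcal{H}_g[2]$ in Remark \ref{Tsu}.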
In order to handle the divisors defined by modular forms on $\mathcal{A}_g^*$, we will make use of the following fact. \begin{Lemma}\label{ampleness} For $g \geq 3$ all modular forms define ample divisors on $\mathcal{A}_g^*$. \end{Lemma} \begin{proof} For $g \geq 3$ the group of the Weil divisors of $\mathcal{A}_g^{*}$ modulo principal divisors is isomorphic to $\mathbb{Z}$, hence a suitable multiple of any effective divisor $D$ in $\mathcal{A}_g^*$ is very ample. By \cite{Db}, Lemma 2.1, $D$ is ample. Since every divisor defined by a modular form is effective, our claim follows. \end{proof} Let $D$ be a divisor defined by a modular form. Since $\mathcal{M}_g$ contains complete curves when $g \geq 3$ (indeed, the Satake compactification is projective and has boundary of codimension $2$ for $g \ge 3$), we have that $D \cap \mathcal{M}_g \neq \emptyset$. Hence each of the previously described divisors either contains $\mathcal{M}_g$ or defines a divisor in $\mathcal{M}_g$. In the latter case, we use the same notation for the induced divisor inside $\mathcal{M}_g$. Next, we prove a fundamental result about $F_H$ (for further details, see also \cite{Salvati}, Theorem 2). \begin{Lemma}\label{hyperavoid} The modular form $F_H$ never vanishes on $\mathcal{H}_g$. \end{Lemma} \begin{proof} Let $\tau$ be the period matrix of a hyperelliptic curve. By Theorem~\ref{mumford}, there is a suitable set $A$ of $v(g)$ theta constants which vanish at $\tau$. Hence all terms but $P(\mathcal{E} \setminus A)^8$ contain at least one of the vanishing theta constants, and $F_H(\tau) = P(\mathcal{E} \setminus A)^8(\tau) \neq 0$. It follows that $F_H$ never vanishes on $\mathcal{H}_g$. \end{proof} \section{The main result} Now we can exhibit an explicit open covering of $\mathcal{M}_g$ for every $2 \leq g \leq 5$. We have already recalled the description for $g=2,3$ in the Introduction, hence here we focus on the cases $g = 4$ and $g = 5$. Namely, we are going to prove that \begin{align} \mathcal{M}_4 &= \mathcal{M}_4 \setminus \Theta_\text{null} \cup \mathcal{M}_4 \setminus D_1 \cup \mathcal{M}_4 \setminus D_H, \label{M4}\\ \mathcal{M}_5 &= \mathcal{M}_5 \setminus \Theta_\text{null} \cup \mathcal{M}_5 \setminus D_1 \cup \mathcal{M}_5 \setminus D_H \cup \mathcal{M}_5 \setminus D_T. \label{M5} \end{align} Our proof relies on two key ideas. The first one is a straightforward application of the Cornalba-Harris ampleness criterion (see \cite{CorHar:88}). \begin{Proposition} \label{affine} Let $D$ be an effective divisor on $\mathcal{M}_g$ and let $\overline{D}$ be its closure in the Deligne-Mumford compactification $\overline{\mathcal{M}}_g$. If $[\overline{D}]= a \lambda - \sum b_i \delta_i$ with $a, b_i > 0$, then $\mathcal{M}_g \setminus \mathrm{supp}(D)$ is an affine open subset. \end{Proposition} \begin{proof} Just notice that $E = a \lambda - \sum b_i \delta_i + \sum (b_i - \varepsilon) \delta_i = a \lambda - \varepsilon \delta$ with $\varepsilon > 0$ small enough is an effective divisor on $\overline{\mathcal{M}}_g$ such that $\overline{\mathcal{M}}_g \setminus \mathrm{supp}(E) = \mathcal{M}_g \setminus \mathrm{supp}(D)$ and $E$ is ample by the Cornalba-Harris ampleness criterion \cite{CorHar:88}. \end{proof} Proposition~\ref{affine} yields the following useful result.
\begin{Corollary}\label{affcor} Let $D$ be an effective divisor on $\mathcal{M}_g$, let $\tilde{D}$ be its closure in the Satake compactification $\mathcal{A}_g^{*}$ and let $\overline{D}$ be its closure in the Deligne-Mumford compactification $\overline{\mathcal{M}}_g$. If $\tilde{D}$ contains the products of periods of smooth curves and periods of nodal curves, then $[\overline{D}] = a \lambda - \sum b_i \delta_i$ with $a, b_i > 0$. In particular, $\mathcal{M}_g \setminus \mathrm{supp}(D)$ is affine. \end{Corollary} \begin{proof} There is a standard map from the Deligne-Mumford compactification $\overline{\mathcal{M}}_g$ to the Satake compactification $\mathcal{M}_g^*$ (i.e., the closure of $\mathcal{M}_g$ in the Satake compactification) which takes boundary divisors of $\overline{\mathcal{M}}_g$ to products of periods of smooth curves and periods of nodal curves. Thus, in the notation of Proposition \ref{affine}, we have $b_i > 0$: since $\tilde{D}$ contains the image of the whole boundary of $\overline{\mathcal{M}}_g$, any function which vanishes on $\overline{D}$ vanishes on every $\delta_i$ with positive multiplicity, and now our claim follows from Proposition~\ref{affine}. \end{proof} \begin{Remark} We could avoid the Cornalba-Harris ampleness criterion by observing that a modular form always induces an ample divisor $\tilde{D}$ on $\mathcal{M}_g^*$ and defines a divisor $D$ on $\mathcal{M}_g$. Obviously $\mathcal{M}_g^*\setminus \tilde{D}$ is affine. Now, whenever $\tilde{D} = D \cup (\mathcal{M}_g^*\setminus \mathcal{M}_g)$ we have that $\mathcal{M}_g \setminus D = \mathcal{M}_g^*\setminus \tilde{D}$ is affine. \end{Remark} Next we present a handy criterion to check whether $F_H$ vanishes at $\tau$. \begin{Lemma}\label{crit} Let $\tau \in \mathbb{H}_g$. If more than $v(g)$ even theta constants vanish at $\tau$, then $F_H(\tau) = 0$. \end{Lemma} \begin{proof} This is just an application of the pigeonhole principle. Indeed, each summand of $F_H$ is the product of $\frac{1}{2}\binom{2g+2}{g+1}$ even theta constants out of the $2^{g-1}(2^g+1)$ even theta constants in total, since exactly $v(g)$ theta constants are left out of the product. If more than $v(g)$ theta constants vanish at $\tau$, then every summand contains at least one of them, hence it vanishes at $\tau$. \end{proof} We shall also apply the following auxiliary result, which is essentially Lemma 3 in \cite{Accola}. A self-contained proof is reproduced here for the reader's convenience. \begin{Lemma} \label{accola} If a curve $C$ of genus $5$ with a base point free $g^1_3$ carries two half-canonical $g^1_4$, then $C$ is hyperelliptic. \end{Lemma} \begin{proof} We claim (see \cite{Accola}, Lemma 2) that if $C$ is a curve of genus $5$ with a base point free $g^1_3$, then every half-canonical $g^1_4$ has a fixed point. Indeed, let $x+y+z$ be a divisor in the $g^1_3$ with three distinct points and notice that $h^0(C, K_C-x-y-z)= h^0(C,K_C-x-y) = h^0(C,K_C-x-z)$. If $D_x$ and $D_y$ are two divisors in the half-canonical $g^1_4$ containing $x$ and $y$, respectively, it follows that $z$ is contained in the canonical divisor $D_x+D_y$, say in $D_x$. Now, if $y$ is not a base point of the $g^1_4$, then there is a divisor $D$ in the $g^1_4$ which does not contain $y$. On the other hand, $y$ has to be contained in the canonical divisor $D_x+D$ containing $x$ and $z$, hence $D_x =x+y+z+w$ and $g^1_4 = g^1_3 + w$, as claimed.
By the claim, both half-canonical $g^1_4$ have a fixed point: the first one is $g^1_3 + x$ and the second one is $g^1_3 + y$ with $x \ne y$. We have $2g^1_3 + 2x = \vert K_C \vert = 2 g^1_3 + 2y$, hence $2x \sim 2y$ with $x \ne y$ and $C$ turns out to be hyperelliptic. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] For $g = 2,3$ the statement is trivial, as recalled in the Introduction, hence we need to check it only for $g = 4,5$. Let first $g = 4$. According to \eqref{M4}, our three open sets are the following: \[ \mathcal{M}_4 = (\mathcal{M}_4\setminus \Theta_\text{null}) \cup (\mathcal{M}_4 \setminus D_1) \cup (\mathcal{M}_4\setminus D_H). \] We show that the above divisors satisfy the hypotheses of Corollary \ref{affcor}. For $D_1$ and $\Theta_\text{null}$ this is obvious. Hence we just need to check that the closure $\tilde{D}_H$ of $D_H$ in $\mathcal{A}_4^*$ contains the products of periods of smooth curves and periods of nodal curves, and then apply Corollary \ref{affcor}. If $\tau \in \tilde{D}_H$ is a product of periods, i.e. $\tau = \left(\begin{smallmatrix}\tau_1 & 0 \\ 0 & \tau_2\end{smallmatrix}\right)$, with $\tau_1 \in \mathcal{M}_{g_1}$ and $\tau_2 \in \mathcal{M}_{g_2}$, $g_1 + g_2 = 4$, then by Remark~\ref{split} we have $\theta_m(\tau) = 0$ if $m = m_1\oplus m_2$ with $m_1 \in \mathbb{F}_2^{2g_1}$ and $m_2 \in \mathbb{F}_2^{2g_2}$ odd characteristics. There are two possible cases. If $g_1 = g_2 = 2$, then we have $6 \cdot 6 = 36 > 10 = v(4)$ even characteristics which split as odd$\oplus$odd in the notation of Remark \ref{split}; by Lemma \ref{crit}, $F_H$ vanishes at $\tau$. If $g_1 = 1$ and $g_2 = 3$, then we have $1\cdot 28 = 28 > 10 = v(4)$ even characteristics and again $F_H$ vanishes at $\tau$. The analogous result holds for nodal curves. Hence $\mathcal{M}_4 \setminus D_H$ is an affine open set. Moreover, $\Theta_\text{null} \cap D_1$ is the hyperelliptic locus in $\mathcal{M}_4$. Indeed, set-theoretically \begin{equation} \label{twotheta} \Theta_\text{null} \cap D_1 = \bigcup_{m_1 \neq m_2} \{\theta_{m_1}=\theta_{m_2} = 0\} \end{equation} and the intersection of this locus with $\mathcal{M}_4$ gives the hyperelliptic locus (see \cite{Igu:81}). By Lemma \ref{hyperavoid} we have $\mathcal{M}_4 \supset \Theta_\text{null} \cap D_1 \cap D_H = \emptyset$, hence \eqref{M4} holds. Let now $g = 5$. According to \eqref{M5}, our four open sets are the following: \[ \mathcal{M}_5 = ( \mathcal{M}_5 \setminus \Theta_\text{null}) \cup (\mathcal{M}_5 \setminus D_1) \cup (\mathcal{M}_5 \setminus D_H) \cup (\mathcal{M}_5 \setminus D_T). \] Again, we check that all involved divisors satisfy the hypotheses of Corollary \ref{affcor}. For $D_1$ and $\Theta_\text{null}$ this is obvious. Next, we claim that the closure $\tilde{D}_H$ of $D_H$ in $\mathcal{A}_5^*$ contains the products of periods of smooth curves and periods of nodal curves. Indeed, it is sufficient to prove that if $\tau = \left(\begin{smallmatrix}\tau_1 & 0 \\ 0 & \tau_2\end{smallmatrix}\right) \in \mathcal{M}_{g_1}\times \mathcal{M}_{g_2}$, with $g_1 + g_2 = 5$, then more than $v(5) = 66$ theta constants vanish at $\tau$. If $g_1 = 1$ and $g_2 = 4$, then we have $1 \cdot 120 = 120 > 66$ even theta constants vanishing at $\tau$. If $g_1 = 2$ and $g_2 = 3$, then we have $6\cdot 28 = 168 > 66$ even theta constants vanishing at $\tau$. By Lemma \ref{crit}, $F_H$ vanishes on every product of periods of smooth curves. The analogous result holds for nodal curves. Hence $\mathcal{M}_5 \setminus D_H$ is an affine open set.
Finally, we obtain \eqref{twotheta} as before; together with Lemma \ref{accola}, this shows that $\Theta_\text{null} \cap D_1 \cap D_T$ is exactly the hyperelliptic locus. By Lemma \ref{hyperavoid} we have $\mathcal{M}_5 \supset \Theta_\text{null} \cap D_1 \cap D_T \cap D_H = \emptyset$, hence \eqref{M5} holds. \end{proof} \section{Acknowledgements} We are grateful to Riccardo Salvati Manni for strongly stimulating our joint project, as well as to Enrico Arbarello and Gabriele Mondello for enlightening conversations on this research topic. Our work has been partially supported by GNSAGA of INdAM and MIUR Cofin 2008 - Geometria delle variet\`{a} algebriche e dei loro spazi di moduli (Italy).
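As a purely numerical aside (ours, not part of the original argument), the theta-constant counting used in the proof of Theorem \ref{main} can be checked with a few lines of Python; the function names are ours.

\begin{verbatim}
from math import comb

def v(g):
    # Even theta constants left out of each summand of F_H:
    # total even constants minus the (1/2)*C(2g+2, g+1) in the product.
    return 2**(g - 1) * (2**g + 1) - comb(2 * g + 2, g + 1) // 2

def odd_oplus_odd(g1, g2):
    # Even characteristics m = m1 (+) m2 with m1, m2 both odd.
    n_odd = lambda g: 2**(g - 1) * (2**g - 1)
    return n_odd(g1) * n_odd(g2)

assert v(4) == 10 and v(5) == 66
assert odd_oplus_odd(2, 2) == 36 > v(4)
assert odd_oplus_odd(1, 3) == 28 > v(4)
assert odd_oplus_odd(1, 4) == 120 > v(5)
assert odd_oplus_odd(2, 3) == 168 > v(5)
\end{verbatim}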
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A new and exciting field in condensed matter physics started when graphene - a two-dimensional, one carbon-atom thick material - was isolated for the first time.\cite{novo1,pnas} It was experimentally shown that the charge carriers in graphene could be controlled by a bottom-gate set-up; the charge carriers were shown to be either holes or electrons depending on the sign of the bottom-gate voltage. In the transition from hole-based to electron-based transport the conductivity shows a minimum (non-zero) value, $\sigma_{\rm min}$. Its experimental value is of the order of $\sigma_{\rm min}\simeq 4e^2/h$,\cite{novo1,pnas,novo3,kim3} but the actual value seems to be somewhat sample dependent.\cite{Tan} This value of $\sigma_{\rm min}$ therefore imposes a limitation on the minimum current a field-effect transistor made of graphene can transport. The existence of a conductivity minimum in graphene is a consequence of the fact that the elementary excitations of graphene are Dirac fermions, with a linear dispersion relation, instead of usual electrons with parabolic-like dispersion, characteristic of ordinary semiconductors. Interestingly enough, the calculated value of the conductivity of graphene at the neutrality point is off the experimental value by the factor $1/\pi$.\cite{Ando98,Ando02,Peres06} Although this value is the most common result of the theoretical calculations of $\sigma_{\rm min}^{\rm theo}$, there are, however, several different values available in the literature.\cite{Ziegler07} It is also interesting that a clean graphene sample with metallic leads and smooth edges has a value of $\sigma=\sigma_{\rm min}^{\rm theo}$ as long as its width ($w$) is much larger than its length ($L$), being smaller than $\sigma_{\rm min}^{\rm theo}$ in the opposite limit. \cite{Beenakker06} Considering the case of metallic armchair edges, it was found that $\sigma>\sigma_{\rm min}^{\rm theo}$ for $w/L\ll 1$ and that $\sigma\rightarrow\sigma_{\rm min}^{\rm theo}$ for $w/L\gg 1$.\cite{Beenakker06} This shows that disorder is not needed for having $\sigma\simeq\sigma_{\rm min}^{\rm theo}$. Another characteristic of Dirac electrons in graphene is their ability to tunnel through a potential barrier with probability one. \cite{NaturePhys,Bai07} This so-called Klein tunneling of chiral particles was proposed long ago in the framework of quantum electrodynamics,\cite{Klein29,Calogeracos,Zuber} but was never observed experimentally. Graphene opens up a route to observe this effect in a tabletop experiment, where the potential is created by some electrostatic gate. The manifestation of Klein tunneling is also present when electrons in graphene are forced to traverse an $n-p$ junction, leading to a selective transmission of those electrons approaching the junction perpendicularly.\cite{cheianov06} Other unusual effects, such as the focusing of an electric current by a single $p-n$ junction, are also characteristic of Dirac electrons in graphene. \cite{cheianov07} As appealing as Klein tunneling may sound from the point of view of fundamental research, its presence in graphene is unwanted when it comes to applications of graphene to nanoelectronics. This comes about because the pinch-off of the field-effect transistor may be very ineffective. The same may occur because of the minimum conductivity of graphene at the neutrality point (as discussed above). One way to overcome these difficulties is by generating a gap in the spectrum.
From the point of view of Dirac fermions this is equivalent to the generation of a mass term. There are two known ways of generating gaps in the spectrum of graphene. The first one is by patterning graphene nanoribbons.\cite{Louie06,Duan07} The mechanism producing these gaps depends on the nature of the termination of the nanoribbons. For armchair nanoribbons the gap comes from quantum confinement of Dirac fermions induced by the finite nature of the ribbons in the transverse direction. For zigzag nanoribbons the gap stems from the formation of polarized spin edge-states characteristic of this type of ribbon. The formation of these polarized states is also possible in bilayer graphene.\cite{Eduardo07} It is interesting to notice that Klein tunneling can also be circumvented by using a graphene bilayer.\cite{NaturePhys} The value of the induced gap depends on the width of the ribbons, but for large widths it is of the order of 0.1 {\ttfamily eV}. Another possibility for generating gaps in the graphene spectrum is to deposit graphene on top of hexagonal boron nitride (BN). \cite{Giovannetti07} This material is a band-gap insulator with a boron-nitrogen distance of the order of 1.45 \AA, \cite{Zupan71} (in graphene the carbon-carbon distance is 1.42 \AA) and a gap of the order of $4$ {\ttfamily eV}. It was shown that in the most stable configuration, where a carbon is on top of a boron and the other carbon in the unit cell is centered above a BN ring, the value of the induced gap is of the order of 53 m{\ttfamily eV}. Depositing graphene on a metal surface with a BN buffer layer leads to $n-$doped graphene with an energy gap of 0.5 {\ttfamily eV}.\cite{Lu07} The two mechanisms described above can be used to produce arrangements of graphene where, in some spatial zones of the material, the Dirac electrons will have gaps in the spectrum. The first possibility is to pattern graphene planes such that in several areas of the graphene flake narrow nanoribbons may exist. Another possibility is to combine wafers of silicon oxide and hexagonal boron nitride, such that in the region where the BN is located the local spectrum of graphene will present a finite gap. In this paper we shall explore this latter possibility and the way it can prevent Klein tunneling from occurring. The two mechanisms just mentioned can then be at the heart of future nanoelectronics built of graphene. The second method is also related to junctions of graphene with other kinds of systems, be they superconducting,\cite{Titov06,Moghaddam} normal-conductor/graphene/normal-conductor, \cite{Robinson07} or multiterminal junctions.\cite{Jayasekera} Also the study of electron transport in disordered graphene samples is of interest,\cite{Wakabayashi} especially because the tunneling may be assisted by impurities,\cite{Titov07} which is a manifestation of Klein tunneling. For a review on the experimental aspects of graphene physics see the work of Geim and Novoselov.\cite{Nov07} Some of the theoretical aspects of graphene physics are reviewed qualitatively by Castro Neto {\it et al.},\cite{castro} by Katsnelson, \cite{katsnelson} and by Geim and MacDonald; \cite{Geim07} a more comprehensive review is given by Castro Neto {\it et al.}.\cite{rmp} For a review on Klein tunneling see the work by Beenakker.\cite{rmpBeenakker} \section{Basic definitions.} As described in the previous section, we assume that it is possible to manufacture slabs with SiO$_2$-BN interfaces, on top of which a graphene flake is deposited.
This will induce spatial regions where graphene has a vanishing gap intercalated with regions where the BN will cause a finite gap. In the following we will consider the graphene physics in two different regions: the $k-$region, where the graphene sheet is standing on top of SiO$_2$, and a $q-$region, where a mass-like term is present, caused by BN, inducing an energy gap of value $2t'$ (for all numerical purposes we use $t'=$0.1 {\ttfamily eV}). The wavefunctions in these two regions will be denoted by $\psi_{\vec{k}}$ and $\psi_{\vec q}$, respectively. The geometry of the scattering process is represented in Fig. \ref{geometry}. The Hamiltonian for massless Dirac electrons in graphene, around the $\bm K$-point in the Brillouin zone, is given by \begin{equation} H_g=v_F\bm \sigma\cdot \bm p\,, \end{equation} where $\bm \sigma=(\sigma_x,\sigma_y)$, $\bm p=-i\hbar\bm\nabla$, $\sigma_{i}$, with $i=x,y,z$, is the $i$th Pauli matrix, and $v_F=3ta/(2\hbar)$, with $t$ the nearest-neighbor hopping matrix element in graphene and $a$ the carbon-carbon distance. Therefore, the massless wave function in the $k-$region ($t'=0$) is given by \begin{eqnarray} \psi_{\vec{k},s}={1\over\sqrt{2}} \left( \begin{array}{c} {1}\\ {u(\vec{k},s)} \end{array} \right) \,e^{i\vec{k}\cdot\vec{r}}, \label{psi_k} \end{eqnarray} with \begin{eqnarray} u(\vec{k},s)=s\,e^{i\theta}, \label{def_u} \end{eqnarray} $s=\texttt{sign}(E)$ and $\theta=\arctan{(k_y/k_x)}$. The corresponding energy eigenvalue is \begin{eqnarray} E=\pm v_F\hbar \sqrt{k_x^2+k_y^2}=\pm\hbar v_Fk, \label{free_energy} \end{eqnarray} with $k$ the absolute value of the wavevector. In a region of finite mass the Hamiltonian for Dirac electrons is \begin{equation} H_g=v_F\bm \sigma\cdot\bm p +t'\sigma_z\,, \label{Hmass} \end{equation} with $mv_F^2=t'$ the mass term ($m$ is the effective mass); as a consequence the electronic spectrum will present a finite energy gap of value $2t'$. In the $q$-region (the gapped region, $t'\ne 0$), the wave function is \begin{eqnarray} \psi_{\vec{q},s}={1\over\sqrt{2}} \left( \begin{array}{c} {1}\\ {v(\vec{q},s)} \end{array} \right) \,e^{i\vec{q}\cdot\vec{r}}, \label{psi_q} \end{eqnarray} where \begin{eqnarray} v(\vec{q},s)=\frac{E-t'}{\hbar v_F(q_x-iq_y)}. \label{def_v} \end{eqnarray} Due to momentum conservation, electrons propagating through a $k-q$ interface will conserve their wavevector component parallel to the interface. Thus, taking this interface to be located along the $\hat y$ axis, we will always have $k_y=q_y$. The $q$-region eigenenergy, associated with the eigenstate (\ref{psi_q}) and the Hamiltonian (\ref{Hmass}), is \begin{eqnarray} E=\pm\sqrt{(q_x^2+k_y^2)(\hbar\,v_F)^2+t'^2}. \label{gap_energy} \end{eqnarray} It is amusing to notice that the spectrum (\ref{gap_energy}) has the same form as that for electrons in a graphene bilayer, when the two graphene planes are at different electrostatic potentials. \cite{McCann,Pereira} Using Eqs. (\ref{gap_energy}) and (\ref{free_energy}), we write \begin{eqnarray} v_F\hbar q_x=\sqrt{E^2\cos^2{(\theta)}-t'^2} \end{eqnarray} and, depending on whether $E^2\cos^2{(\theta)}$ is larger or smaller than $t'^2$, $q_x$ may take a real or a pure-imaginary value. We have wave propagation in the former case and evanescent waves in the latter. For a real $q_x$, and since $q_y=k_y$, we have \begin{eqnarray} \sqrt{E^2-t'^2}\sin{(\phi)}=|E|\sin{(\theta)} \label{snell} \end{eqnarray} where $\phi$ is the angle of propagation of the electron in the $q$-region (see Fig.\ref{geometry}).
Equation (\ref{snell}) is just the usual Snell's law for electrons being refracted at the interface separating the $k-$ and $q-$regions. We see that $\phi\geq\theta$ whenever $|E|>t'$. \begin{figure} \begin{center} \includegraphics [scale=.55]{geometry.eps}% \end{center} \caption{Top figure: graphene band structure for the massless and massive cases. In the latter, the quasi-parabolic bands have an energy gap $2t'$. Bottom figure: geometry of the reflection at the $k-q$ interface. An incident wavefunction $\psi_k^+$ with wavevector $\vec{k}_+$ is reflected and refracted into the $\psi_k^-$ and the $\psi_q^+$ wavefunctions with wavevectors $\vec{k}_-$ and $\vec{q}_+$, respectively. Since the momentum is conserved at the interface, one has $q_y=k_y$. The refracted wave propagates at an angle $\phi$, which is slightly larger than the incident and reflected angle $\theta$, with $|q_x|<|k_x|$, a consequence of energy conservation.} \label{geometry} \end{figure} \subsection{Forward and backward propagation.} We consider now the simple reflection at the interface, with the incident and the reflected waves both in the $k-$ or in the $q-$region. In the $k-$region, the $\hat{x}-$component of the wavevector of the reflected wave is symmetric with respect to that of the incident wave. Thus, for this case, we have the following transformations under a reflection (see also Fig.\ref{geometry}) \begin{eqnarray} k_x\rightarrow -k_x~~~\rm{and}~~~e^{i\theta}\rightarrow e^{i(\pi-\theta)}=-e^{-i\theta}. \label{kx_to_minus_kx} \end{eqnarray} This leads to the generalization of Eqs. (\ref{psi_k}) and (\ref{def_u}) \begin{eqnarray} u_\pm=\pm s\,e^{\pm i\theta}, \label{def_u_pm} \end{eqnarray} \begin{eqnarray} \psi^{\pm}_{\vec{k}}=\frac{1}{\sqrt{2}} \left( \begin{array}{c} {1}\\ {u_\pm} \end{array} \right) \,e^{\pm ik_x x+ik_y y}. \label{psi_k_pm} \end{eqnarray} where the plus and minus signs refer to waves propagating, respectively, in the positive and negative directions of the $\hat x$-axis. A similar reasoning leads to the generalization of $v(\vec{q},s)$ in (\ref{def_v}), \begin{eqnarray} v_\pm=\frac{E-t'}{\hbar v_F(\pm q_x-ik_y)}. \label{def_v_pm} \end{eqnarray} and, also, of the $q-$region wavefunction to \begin{eqnarray} \psi^{\pm}_{\vec{q}}={1\over\sqrt{2}} \left( \begin{array}{c} {1}\\ {v_\pm} \end{array} \right) \,e^{\pm iq_x x+ik_y y}. \label{psi_q_pm} \end{eqnarray} The differences we have just highlighted in the wavefunctions and coefficients for forward- and backward-propagating particles can also be seen as differences between positive and negative angles of incidence at the interface. These changes are useful when a wave-guiding kind of device is made. Let us therefore analyze the case when $k_y\rightarrow -k_y$. If in Eq. (\ref{kx_to_minus_kx}) we keep $k_x$ unaltered and instead ``reflect'' $k_y$, we obtain \begin{eqnarray} k_y\rightarrow -k_y~~~\rm{and}~~~e^{i\theta}\rightarrow e^{-i\theta}, \label{ky_to_minus_ky} \end{eqnarray} with similar relations for $\phi$, the angle in the $q$-region. For these cases, we get \begin{eqnarray} v_\pm(-\phi)=-v_\mp(\phi~)~~\rm{and}~~u_\pm(-\theta)=-u_\mp(\theta). \label{def_by_angle} \end{eqnarray} Apart from a minus sign, these relations show that the operation of changing the sign of $k_y$ (i.e., the angle of incidence) is equivalent to that of changing the sign of $k_x$ ($q_x$ in the $q$-region). Of course, the extra minus signs on the right-hand side of both expressions in Eq.
(\ref{def_by_angle}) are of no consequence within the calculation of reflection and transmission coefficients that follows. \subsection{Real and evanescent waves in the $q$-region} Since there is a gap in the energy spectrum of the $q$-region, $q_x$ can take both real and pure-imaginary values. In the first case, we have wave propagation in this region, in the latter just evanescent waves. No expression as simple as the one given by Eq. (\ref{def_u_pm}) can be written in this case. Instead, we need to consider separately the cases where $q_x$ is a real or a pure-imaginary number. \subsubsection{For $q_x$ real.} For real $q_x$, we can write an expression similar to the one in Eq. (\ref{def_u_pm}), \begin{eqnarray} v_\pm=\pm ve^{\pm i\phi}, \label{def_v_free} \end{eqnarray} with $\phi$ given by Eq. (\ref{snell}) and \begin{eqnarray} v=\frac{E-t'}{v_F\hbar|q|}=\frac{E-t'}{\sqrt{E^2-t'^2}}, \label{def_v_real} \end{eqnarray} where Eq. (\ref{gap_energy}) was used. \subsubsection{For $q_x$ pure imaginary.} Since $k_y$ is always a real number, Eq. (\ref{def_v_pm}) implies that, if $q_x$ is pure imaginary, so is $v_\pm$, and then \begin{eqnarray} v_\pm=\mp i\nu_\pm, \label{def_v_gap} \end{eqnarray} where \begin{eqnarray} \nu_\pm=\pm\frac{E-t'}{\hbar v_F(\pm k_y-\alpha)}, \end{eqnarray} with the real {\it absorption coefficient} $\alpha$ defined as $q_x=i\alpha$, and $\alpha$ given by \begin{eqnarray} \alpha=(v_F\hbar)^{-1}\sqrt{t'^2-E^2\cos(\theta)^2}\,. \label{alpha} \end{eqnarray} \subsubsection{Complex conjugate of the $u_\pm$ and $v_\pm$ coefficients.} For the calculation of the intensity reflection and transmission coefficients we will need the complex conjugates of the $u_\pm$ and $v_\pm$ coefficients. The definition (\ref{def_u_pm}) for $u_\pm$ implies that \begin{eqnarray} u_\pm^*=-u_\mp. \label{u_alg} \end{eqnarray} In the case of $v_\pm$, the complex conjugate depends on whether $q_x$ is real or imaginary, \begin{eqnarray} v_\pm^*=\left\{ \begin{array}{cl} -v_\mp & ~~~{\rm if}~q_x~{\rm is~~real} \\ -v_\pm & ~~~{\rm if}~q_x~{\rm is~~imaginary} \\ \end{array}\right. \label{v_alg} \end{eqnarray} \section{Transmission and reflection at the interface: the step case.} \subsection{Reflection and transmission amplitude coefficients.} We compute now the reflection and transmission amplitude coefficients for electrons crossing an interface between a $k-$ and a $q-$region. Unlike what happens in optics, and due to the differences between backward and forward propagation, we will need to consider not two but four different cases: electrons crossing the interface coming from the $k-$region in the forward and backward senses, and those crossing the interface coming from the $q-$region, also propagating in the positive and negative senses of the $x$ axis. These four cases are summarized in Fig. \ref{fig1}. \begin{figure} \begin{center} \includegraphics [scale=.75]{figura1.eps}% \end{center} \caption{The four different possible cases for reflection/transmission at an interface between $k$ and $q$ regions.} \label{fig1} \end{figure} \subsubsection{Propagation from a $k-$ into a $q-$region.} We start by deriving the amplitude reflection and transmission coefficients, which will be denoted as $r^\pm_{kq}$ and $t^\pm_{kq}$ respectively, for the case of propagation from a $k-$ into a $q-$region. This situation is described in Fig. \ref{geometry} and also in Fig. \ref{fig1}.a).
Since there is a partially reflected wave, the total wave function in the $k-$region must be written as a superposition of one wave function associated with the incident electrons and another with those that are reflected, \begin{eqnarray} \Psi_k(\vec r)=A\,\psi^{+}_{\vec{k}}+B\,\psi^{-}_{\vec{k}}. \label{psi_kA} \end{eqnarray} $A$ and $B$ are the normalized amplitudes for the incident and reflected wave functions. In the $q$-region, with $C$ the amplitude of the transmitted wave function, we have \begin{eqnarray} \Psi_q(\vec r)=C\,\psi^+_{\vec{q}}. \label{psi_qC} \end{eqnarray} Using Eqs. (\ref{psi_kA}) and (\ref{psi_qC}), and imposing the continuity condition of the particle's wave function at the interface, i.e. $\Psi_k(x=0,y)=\Psi_q(x=0,y)$, we find \begin{eqnarray} r^+_{kq}={B\over A}=\frac{v_+-u_+}{u_--v_+}~~\rm{and}~~t^+_{kq}={C\over A}=1+r^+_{kq}, \label{rp_kq} \end{eqnarray} where the superscript $+$ recalls that the incident wave function is, in this case, traveling in the positive direction of the $x$-axis. Had we considered the case where the particles travel in the backward direction, represented by Fig.\ref{fig1}.c), we would have obtained \begin{eqnarray} r^-_{kq}=\frac{v_--u_-}{u_+-v_-}~~{\rm and}~~t^-_{kq}=1+r^-_{kq}. \label{rm_kq} \end{eqnarray} This result can be obtained simply by exchanging the plus and minus signs in Eq. (\ref{rp_kq}). \subsubsection{Propagation from the $q-$ into the $k-$region.} For computing the reflection and transmission coefficients for the cases where the electrons come from the $q-$region into the $k$-region, $r^\pm_{qk}$ and $t^\pm_{qk}$, we need only to exchange $u\leftrightarrow v$ in the corresponding backward and forward expressions (\ref{rp_kq}) and (\ref{rm_kq}). The result is \begin{equation} r^\pm_{qk}=\frac{u_\pm-v_\pm}{v_\mp-u_\pm}~~{\rm and}~~ t^\pm_{qk}=1+r^\pm_{qk}. \label{rpm_qk} \end{equation} \subsection{Amplitude coefficients: general algebraic relations.} For simplifying expressions in what follows, it will be very useful to derive simple relations between the reflection and transmission amplitude coefficients. Relations similar to those we present here also exist in the case of photon optics. For instance, we may write $ \pm r_{12}+t_{12}=1 $ when a light beam is reflected and refracted at a diopter between regions $1$ and $2$, with the plus or minus sign corresponding, respectively, to the cases where $n_1$, the index of refraction of medium $1$, is smaller or larger than that of region $2$. Here, however, we have that in general $r^+_{mn}\neq r^-_{mn}$ (and similarly for the transmission coefficients) and these relations are less trivial (we have used the notation $m\neq n=\{k,q\}$). Using the definitions in Eqs. (\ref{rp_kq}), (\ref{rm_kq}) and (\ref{rpm_qk}), we can write \begin{eqnarray} \mathcal{R}+\mathcal{T}=1, \label{RR_plus_TT_1} \end{eqnarray} where \begin{eqnarray} \begin{array}{c} \mathcal{R}=r^+_{kq}r^-_{kq}= r^+_{qk}r^-_{qk} \\ \mathcal{T}=t^+_{qk}t^+_{kq}= t^-_{qk}t^-_{kq}. \\ \end{array} \label{RR_TT_def} \end{eqnarray} These relations are general and do not depend on $q_x$ being real or imaginary. Other general relations, useful to simplify the transmission expressions of multi-layered structures, are \begin{eqnarray} r^\pm_{mn}r^\mp_{nm}=- \mathcal{R}\times{t^\mp_{nm}\over t^\pm_{nm}}\,.
\label{odd_rel} \end{eqnarray} \subsection{Intensity reflection and transmission coefficients.} The general definitions for the intensity reflection and transmission coefficients are \begin{eqnarray} \begin{array}{lll} R^\pm_{mn}=&r^\pm_{mn}\left(r^\pm_{mn}\right)^*&\\ T^\pm_{mn}=&t^\pm_{mn}\left(t^\pm_{mn}\right)^*=&1-R^\pm_{mn},\\ \end{array} \label{RT_def} \end{eqnarray} where we keep the same notation as before. We will now consider, separately, the cases where $q_x$ is a real number or a pure-imaginary one. \subsubsection{For $q_x$ real.} For $q_x$ a real number, we note first that for any $m\neq n=\{k,q\}$, \begin{eqnarray} \left\{\begin{array}{c} \left(r^\pm_{mn}\right)^*=r^\mp_{mn} \\ \\ \mathcal{R}=\mathcal{R}^* \\ \end{array} \right. \label{r_rel_out_gap} \end{eqnarray} These relations are just a consequence of the definitions of $r^\pm_{mn}$ in Eqs. (\ref{rp_kq}), (\ref{rm_kq}), and (\ref{rpm_qk}), and of Eqs. (\ref{u_alg}) and (\ref{v_alg}). Using Eq. (\ref{r_rel_out_gap}) in Eq. (\ref{RT_def}) results in $$ R=\mathcal{R}~~~{\rm and}~~~T=\mathcal{T}~~~{\rm for}~q_x~{\rm real}, $$ which is valid for both interfaces and both directions of propagation. Furthermore, using Eq. (\ref{RR_plus_TT_1}) we get $$ R+T=1, $$ an expected result. Explicitly, the $\mathcal{R}$ coefficient defined in Eq. (\ref{RR_TT_def}) is given by $$ \mathcal{R}=\frac{(v_+v_--1)-(u_-v_++u_+v_-)}{(v_+v_--1)-(u_-v_-+u_+v_+)}. $$ Making use of Eqs. (\ref{def_v_pm}) and (\ref{def_u_pm}), we may write \begin{eqnarray} v_+v_-&=&-v^2,~~ u_\pm v_\mp=-s\,v\,e^{\pm i(\theta-\phi)}\nonumber\\ &&{\rm and}~~u_\pm v_\pm=s\,v\,e^{\pm i(\theta+\phi)},\nonumber \end{eqnarray} where $v$ is given by Eq. (\ref{def_v}) and $\phi$ by Eq. (\ref{snell}). Using these expressions we obtain $$ \mathcal{R}=\frac{1+v^2-2\,s\,v\cos{(\theta-\phi)}} {1+v^2+2\,s\,v\cos{(\theta+\phi)}}, $$ where $$ v\cos{(\theta\pm\phi)}= \frac{v_F\hbar}{E+t'}\left(q_x\cos{(\theta)}\mp k_y\sin{(\theta)}\right). $$ Finally, after algebraic simplification, we obtain \begin{eqnarray} R=\mathcal{R}=\frac{k_x-q_x}{k_x+q_x}. \label{R_real} \end{eqnarray} \subsubsection{For $q_x$ pure imaginary.} For $q_x$ pure imaginary, we see that \begin{eqnarray} \left\{\begin{array}{c} \left(r^\pm_{mn}\right)^*r^\pm_{mn}=1\\ \\ \mathcal{R}\,\mathcal{R}^* =1\\ \end{array} \right. \label{r_rel_in_gap} \end{eqnarray} Using these relations along with Eq. (\ref{RT_def}), we straightforwardly obtain \begin{eqnarray} R^\pm_{kq}=1 ~~{\rm and}~~ T^\pm_{kq}=0~~{\rm for}~q_x~{\rm pure~imaginary}. \end{eqnarray} This is an expected result since the transmission $T=1-R$ must be zero in the case where the wave in the $k$-region enters the gap of the $q$-region. If the incident wave propagates in the gap region, i.e. it is an evanescent wave, the coefficients $R^\pm_{qk}$ and $T^\pm_{qk}$ are physically meaningless. We see from the second expression in (\ref{r_rel_in_gap}) that $\mathcal{R}$ is a complex quantity of unit modulus. It may be written as \begin{eqnarray} \mathcal{R}=e^{i2\varphi}\,, \label{def_varphi} \end{eqnarray} with $2\varphi$ a convenient definition of its argument. To compute this angle, in the spirit of Eq. (\ref{alpha}), we replace $q_x$ by $i\alpha$ in Eq. (\ref{R_real}) to obtain $$ \mathcal{R}=\frac{k_x-i\alpha}{k_x+i\alpha}.
$$ Computing the real part of this quantity, we get $$ \cos{(2\varphi)}=\frac{2E^2\cos(\theta)^2}{t'^2}-1 $$ and, after straightforward manipulation, \begin{eqnarray} \cos{(\varphi)}\,t'=\cos{(\theta)}\,|E|~~{\rm or,~equivalently,}~~ \tan{(\varphi)}={\alpha\over k_x}, \label{ev_snell} \end{eqnarray} a Snell-type expression for the $q_x$ pure-imaginary case. Since $R=\mathcal{R}\mathcal{R}^*=1$ for $q_x$ pure imaginary, a general expression for the intensity reflection and transmission coefficients (valid for $q_x$ both real and pure imaginary) is given by \begin{eqnarray} R&=&1-T\nonumber\\ &=&\left|\frac{k_x-q_x}{k_x+q_x}\right|\nonumber\\ &=&\left|\frac{1+v^2-2s\,v\cos{(\theta-\phi)}}{1+v^2+2s\,v\cos{(\theta+\phi)}}\right|. \label{R_step} \end{eqnarray} Naturally, Eq. (\ref{R_step}) depends on $t'$, since both $v$ and $\phi$ depend on this quantity. When one considers the case $\theta=\phi=t'=0$ one obtains $R=0$. This expression is plotted in a density/contour plot in Fig.\ref{T_step0}. \begin{figure} \begin{center} \includegraphics [scale=.55]{T_step_s.eps} \end{center} \caption{Intensity transmission for particles crossing the interface from a $k$-region into a $q$-region. The black region corresponds to zero transmission, a case that corresponds to total internal reflection in usual photon optics.} \label{T_step0} \end{figure} \section{The barrier.} With the above definitions, the computation of transmission and reflection coefficients for any type of multi-interface device follows expressions similar to those found in normal optics.\footnote{There is no analog, however, for the gap-region with normal incidence.} To illustrate this, we consider in the following a heterostructure made of a $q-$region of width $w$ placed between two semi-infinite slabs of $k$-regions, as shown in Fig. \ref{barrier_scheme}. Our goal will be the derivation of the intensity transmission coefficient for this case, which we will denote by $T_B$. We notice that the case of barriers where the spectrum of the electrons is linear in all spatial regions was considered in Ref. \cite{Pereira2}. \begin{figure} \begin{center} \includegraphics [scale=.85]{figura2.eps}% \end{center} \caption{Barrier: scheme for the computation of the transmission.} \label{barrier_scheme} \end{figure} In Figure \ref{barrier_scheme}, the wave function $\Psi_1$ describes an electron, traveling in the positive direction of the $\hat{x}-$axis, just before crossing the {\it diopter} $q-k$. This wave function can be seen as resulting from the coherent superposition of two wave functions: one being $\Psi_1$ itself after a round trip in the $q-$region, given by $\Psi_1\,r^+_{qk}r^-_{qk}\,e^{i2q_xw}$, and another one which is the {\it incident} wave function $\Psi_0$ after crossing the first interface $k-q$, equal to $\Psi_0\,t^+_{kq}\,e^{iq_xw}$. Adding these two contributions and solving for $\Psi_1$ we obtain $$ \Psi_1=\Psi_0\frac{t^+_{kq}\,e^{iq_xw}}{1-r^+_{qk}r^-_{qk}\,e^{i2q_xw}}. $$ Denoting the amplitude transmission coefficient for this barrier by $t_B=\Psi_2/\Psi_0$ and using the fact that $\Psi_2=t^+_{qk}\Psi_1$, we finally obtain \begin{eqnarray} t_B=\frac{\mathcal{T}e^{iq_xw}} {1-\mathcal{R}e^{i2q_xw}}\, \label{amp_trans_coef_barrier} \end{eqnarray} where the definitions (\ref{RR_TT_def}) were used.
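The step and barrier formulas obtained so far are easy to evaluate numerically. The following minimal Python sketch (ours, not part of the original derivation) implements Eq. (\ref{R_step}) and Eq. (\ref{amp_trans_coef_barrier}), using $\hbar v_F=3ta/2$ with $t=2.7$ {\ttfamily eV} and lengths measured in units of $a$; it reproduces the statement, quoted later in the text, that a barrier of width $\approx 36a$ halves the zero-energy transmission for $t'=0.1$ {\ttfamily eV}.

\begin{verbatim}
import numpy as np

T_HOP = 2.7                # nearest-neighbor hopping t (eV)
HBAR_VF = 3 * T_HOP / 2    # hbar*v_F = 3*t*a/2, lengths in units of a

def step_and_barrier(E, theta, tp, w):
    """Step reflection R (Eq. R_step) and barrier transmission
    T_B = |t_B|^2 (Eq. amp_trans_coef_barrier) for a gapped region
    with mass term tp and width w."""
    kx = abs(E) * np.cos(theta) / HBAR_VF
    # q_x from Eq. (gap_energy); becomes i*alpha inside the gap:
    qx = np.sqrt(complex(E**2 * np.cos(theta)**2 - tp**2)) / HBAR_VF
    RR = (kx - qx) / (kx + qx)  # curly-R, cf. Eqs. (R_real), (def_varphi)
    TT = 1 - RR                 # curly-T, Eq. (RR_plus_TT_1)
    t_B = TT * np.exp(1j * qx * w) / (1 - RR * np.exp(2j * qx * w))
    return abs(RR), abs(t_B) ** 2

# Normal incidence at E = 0, where T_B = cosh^-2(w t'/(hbar v_F)):
R, TB = step_and_barrier(E=0.0, theta=0.0, tp=0.1, w=36.0)
print(R, TB)  # R = 1 (total reflection at the step), T_B close to 0.5
\end{verbatim}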
\subsection{$q_x$ real: free propagation.} If there is wave propagation in the $q$-region, $q_x$ is real, $\mathcal{R}=R$ and $\mathcal{T}=T$, and \begin{eqnarray} T_B=t_B\,t_B^*=1-R_B=\left[1+\left(\frac{2}{\pi}\mathcal{F}\right)^2\sin^2{(q_xw)}\right]^{-1}. \label{int_trans_barrier} \end{eqnarray} where we used the finesse definition \begin{eqnarray} \mathcal{F}=\pi\frac{\sqrt{R}}{T}={\pi\over 2}\frac{t'}{q_x}, \label{finesse} \end{eqnarray} to highlight the similarity with a Fabry-P\'erot solid etalon (made of glass, for example) in usual optics.\cite{BornWolf} However, this similarity is somewhat deceptive. In the solid etalon case, in general, the finesse is almost a constant coefficient since the interfaces' reflectivities (e.g., at a glass-air diopter) have a small dependence on the energy (optical resonances are typically far from the visible part of the spectrum and there is no gap as in the case of graphene with a mass term). In the case treated here, $\mathcal{F}$ has a strong dependence on the energy $E$ of the particles and, furthermore, there is also a gap present. We will revisit a Fabry-P\'erot type of device later in this work. \begin{figure} \begin{center} \includegraphics [scale=.5]{T_barrier_w50_s.eps}% \end{center} \begin{center} \includegraphics [scale=.5]{T_barrier_w300_s.eps}% \end{center} \caption{Transmission of a barrier for a $q$-region width of $w=50 a$ (top) and $w=300 a$ (bottom). For a sufficiently narrow width, the wave tunnels across the $q$-region, resulting in a non-null transmission. In optics, this behavior is known as \textit{frustrated} total internal reflection. The dashed lines mark the region where, for the step, the transmission is zero. } \label{T_step_w50_w200} \end{figure} \begin{figure} \begin{center} \includegraphics [scale=1]{T_step_barrier_s.eps}% \end{center} \caption{Transmission coefficient for a simple step and several barriers at normal incidence. In the barrier case, there is \textit{frustrated} total internal reflection and, in the gap, the transmission is non-zero and increases with decreasing width of the $q$-region.} \label{T_step} \end{figure} \begin{figure} \begin{center} \includegraphics [scale=.8]{T_barrier_w50_5thetas_s.eps}% \end{center} \caption{\label{T_barrier_w50_5thetas} Left: Transmission coefficient for a barrier of width $50a$ and for different angles of incidence $\theta$. At $E=0$, all curves have the same value and, for $\theta=\pi/2$, the transmission is a delta function at $E=0$. Right: Zero-energy transmission coefficient of a barrier as a function of its width $w$.} \end{figure} \subsection{Inside the gap: frustrated total internal reflection.} Inside the gap, $q_x$ is pure imaginary and there is no wave propagation. This is similar to total internal reflection in optics, where only an evanescent wave exists that carries no energy (since here the coefficient $R_{kq}=1$) and decays exponentially in the $x$ direction (although keeping the phase term $e^{ik_yy}$ in the $y$ direction). However, by placing a $k$-region near the evanescent wave, some of the energy of the totally reflected wave tunnels through the gap region, a phenomenon known in optics as \textit{frustrated total internal reflection}. This phenomenon is also described by Eq. (\ref{int_trans_barrier}), whenever $q_x$ becomes pure imaginary. In this case, replacing $q_x=i\alpha$ and using the definitions (\ref{alpha}) and (\ref{def_varphi}) we may simplify Eq.
(\ref{int_trans_barrier}) to \begin{eqnarray} T_B=t_B\,t_B^*=1-R_B ={1\over 1+\zeta}, \label{int_trans_barrier_ingap} \end{eqnarray} where we have introduced the quantity \begin{eqnarray} \zeta=\frac{\sinh^2{(\alpha w)}}{\sin^2{(\varphi)}}, \label{zeta} \end{eqnarray} which will also be used in the following. If $E=0$, Eq. (\ref{ev_snell}) implies that $\varphi=-\pi/2$ and $\alpha=t'/(v_F\hbar)$. $T_B$ is in this case independent of $\theta$ and is equal to \begin{eqnarray} T_B(E=0)=\cosh^{-2}{\left(w\frac{t'}{v_F\hbar}\right)}. \end{eqnarray} This behavior is clearly shown in the left panel of Fig. \ref{T_barrier_w50_5thetas}. The right panel of Fig.~\ref{T_barrier_w50_5thetas} shows how this tunneling transmittance at zero energy varies with the barrier width. A $50\%$ reduction is accomplished for a barrier with a width of approximately $36~a$. \section{Transfer matrices} The method used in the last Section for computing $t_B$, although simple, becomes very difficult to handle for more complex heterostructures with more than two interfaces. These types of cases are usually treated with \textit{transfer} matrices, which will be computed in the following. \begin{figure} \begin{center} \includegraphics[scale=.85]{figura3.eps}% \end{center} \caption{\label{fig2}Schemes for the computation of the transfer matrices.} \end{figure} Figure \ref{fig2} shows the scheme used to compute the transfer matrices at the $k-q$ and $q-k$ interfaces. In both cases, our goal is to derive $\Psi_1$ and $\Psi'_1$ from the knowledge of $\Psi_0$ and $\Psi'_0$. Defining $$ \left( \begin{array}{c} {\Psi_1} \\{\Psi'_1} \end{array} \right) =\mathbf{M}_{mn} \left( \begin{array}{c} {\Psi_0}\\ {\Psi'_0} \end{array} \right) , $$ where $\mathbf{M}_{mn}$ is the transfer matrix for the generic $m-n$ interface, and using Eq. (\ref{RR_TT_def}), we obtain the result \begin{eqnarray} \mathbf{M}_{mn}=\left[ \begin{array}{ccc} \displaystyle{ \frac{1 }{t^+_{nm}}}& &\displaystyle{ \frac{r^-_{nm}}{t^-_{nm}}}\\ \\ \displaystyle{-\frac{r^+_{mn}}{t^-_{nm}}}& &\displaystyle{ \frac{1 }{t^-_{nm}}}\\ \end{array} \right] \end{eqnarray} The determinant of this matrix is given by \begin{eqnarray} Det(\mathbf{M}_{kq})&=&\left[Det(\mathbf{M}_{qk})\right]^{-1}=\frac{t^+_{kq}}{t^-_{qk}}\, \end{eqnarray} As expected, $Det(\mathbf{M}_{kq}\times\mathbf{M}_{qk})=1$. The free propagation of a particle in a $k-$ and in a $q-$region of width $\xi$ is given, respectively, by \begin{eqnarray} \begin{array}{ll} \mathbf{L}_k(\xi)=\left[ \begin{array}{cc} e^{ik_x\xi}&0\\ 0&e^{-ik_x\xi}\\ \end{array} \right]; & \mathbf{L}_q(\xi)=\left[ \begin{array}{cc} e^{iq_x\xi}&0\\ 0&e^{-iq_x\xi}\\ \end{array} \right]\,. \end{array} \label{free_prop_matrices} \end{eqnarray} \section{The diode.} \begin{figure} \begin{center} \includegraphics[scale=.7]{diode.eps}% \end{center} \caption{\label{diode}The diode heterostructure: two thin slabs of $q-$regions of width $w$ separated by a $k-$region of width $d$, all inside semi-infinite slabs of $k-$regions.} \end{figure} We consider now a more complex system composed of a sandwich of two $q$-regions of width $w$ separated by a slab of a $k-$region with width $d$, inside two semi-infinite $k-$regions. To derive the amplitude transmission coefficient of such a structure we need to compute the total transfer matrix $$ \mathbf{M}_D=\mathbf{M}_{qk}\,\mathbf{L}_q(w)\,\mathbf{M}_{kq}\,\mathbf{L}_k(d)\,\mathbf{M}_{qk}\,\mathbf{L}_q(w)\mathbf{M}_{kq}. $$ The amplitude transmission coefficient obtained from this product can be simplified using Eq.
(\ref{odd_rel}), resulting in \begin{eqnarray} t_D=\frac{\mathcal{T}^2\,e^{2iq_xw}} {(\mathcal{R}\,e^{2iq_xw}-1)^2-\mathcal{R}(e^{2iq_xw}-1)^2\,e^{2ik_xd}}\nonumber. \end{eqnarray} \begin{figure} \begin{center} \includegraphics[scale=.5]{T_diode_w50_d100_s.eps}% \end{center} \begin{center} \includegraphics[scale=.5]{T_diode_w50_d200_s.eps}% \end{center} \caption{Transmission of a diode structure with $w=50a$ and $d=100a$ (top) and $d=200a$ (bottom).} \label{T_diode_w50_d100_200} \end{figure} For the most important case, where the $q-$regions are barriers with energy higher than the energy of the particles, we have a resonant diode. In this case, using the definitions (\ref{alpha}) and (\ref{def_varphi}), we get \begin{eqnarray} t_D=t_B^2\times\left[1-\frac{\sinh^2{(\alpha w)}}{\sinh^2{(\alpha w+i\varphi)}}\, e^{i2k_xd}\right]^{-1}, \label{t_D_1} \end{eqnarray} where $t_B$ is the amplitude transmission for a simple barrier, given by Eq. (\ref{amp_trans_coef_barrier}). We may simplify Eq. (\ref{t_D_1}) by expanding the term $\sinh{(\alpha w+i\varphi)}$ and expressing the result in a complex polar representation. Doing this and using the definition of $\zeta$ in Eq. (\ref{zeta}) we obtain $$ \frac{\sinh^2{(\alpha w)}}{\sinh^2{(\alpha w+i\varphi)}}= \frac{\zeta}{1+\zeta}\exp{(i2\widetilde{\varphi})} $$ with the phase term argument given by $$ \widetilde{\varphi}=-\arctan{\left[\coth{(\alpha w)}\tan{(\varphi)}\right]}. $$ The intensity transmission coefficient can now be easily computed, being equal to \begin{eqnarray} T_D= \frac{1}{1+\left({2\over\pi}\mathcal{F}_D\right)^2 \sin^2{(\widetilde{\varphi}+k_xd)}}, \label{int_trans_diode} \end{eqnarray} where now the \textit{diode finesse} $\mathcal{F}_D$ is given by \begin{eqnarray} \mathcal{F}_D=\pi\sqrt{\zeta(1+\zeta)}. \label{finesse_diode} \end{eqnarray} \subsection{Revisiting the Fabry-P\'erot: etalon made with ``mirrors''.} The expression (\ref{int_trans_diode}) reduces to the simple case of a Fabry-P\'erot etalon if: \begin{enumerate} \item $\alpha w\gg1$, which implies $\coth{(\alpha w)}\cong1$ and $\tilde{\varphi}\cong\varphi$; \item $E\ll t'$, which implies that $\varphi\approx\pi/2$. \end{enumerate} With these approximations we get $$ \mathcal{F}_D={\pi\over2}\sinh{(2\alpha w)} $$ and then \begin{eqnarray} T_D=\frac{1}{1+\sinh^2{(2\alpha w)}\cos^2{(k_xd)}}, \label{int_trans_diode_ingap} \end{eqnarray} an expression that can also be derived from a treatment based on delta-function potentials, presented in Section \ref{rtd}. \subsection{The diode tunneling conductance.} Let us now compute the tunneling conductance of the device as a function of the potential bias $V$, the chemical potential of the leads, and the length $w$ of the barrier. We shall assume that the device is operating in the region where the chemical potential of the leads lies inside the gap of the barrier. The total tunneling current density (i.e., the current per unit cross-sectional length) through the device is given by \begin{eqnarray} J(V,w)&=&-\frac {2e}{4\pi^2v_F\hbar^2}\int d\theta EdE\cos\theta\,T(E,\theta) \nonumber\\ &\times& [f(E-\mu_L)-f(E-\mu_R)] \end{eqnarray} where $f(x)=(1+e^{x/(k_BT)})^{-1}$, $\mu_L=\mu+eV/2$, $\mu_R=\mu-eV/2$, and $\mu$ is the chemical potential of the leads in equilibrium. The linear-response conductance per unit cross-sectional length is given, at zero temperature, by \begin{equation} G(\mu,w)=\frac {e^2}{\hbar}\frac {\vert \mu\vert }{3\pi^2a t} \int_{-\pi/2}^{\pi/2} d\theta\cos\theta\, T(\mu,\theta). \label{G} \end{equation} In Fig.
(\ref{Fig_G_linear_response_barrier}) we plot $G(\mu,w)$ as a function of $\mu$ for several widths $w$. It is clear that, close to $\mu=0$, the value of $G(\mu,w)$ may change by several orders of magnitude upon a small change of $\mu$. Naturally, for wider barriers one obtains a smaller conductance. \begin{figure} \begin{center} \includegraphics[scale=1]{cond_b_s.eps}% \end{center} \caption{(color online) Linear-response conductance $G(\mu,w)$, per Dirac cone, as a function of the chemical potential $\mu$, for different values of the width $w$. The hopping matrix element $t$ is taken to be $t=2.7$ {\ttfamily eV} and $t'=$ 0.1 {\ttfamily eV}. } \label{Fig_G_linear_response_barrier} \end{figure} \subsection{A limiting case.} The limiting case of a barrier can be represented by a delta-function potential, $V(x,y)=g\sigma_z\delta(x)$. (See the next section for a complete discussion of delta-function potentials in the Dirac equation.) It is interesting to compute the reflected flux for both Schr\"odinger and Dirac electrons in this potential. In the first case one obtains \begin{equation} R=\frac {(2mg/\hbar^2)^2}{4k^2_F\cos^2\theta +(2mg/\hbar^2)^2 }\,, \end{equation} whereas for Dirac electrons the result is \begin{equation} R = \tanh^2[g/(\hbar v_F)]\,. \end{equation} It is clear that for electrons in graphene $R$ is independent of angle and energy. For the case $g\gg \hbar v_F$ the reflection tends to unity. \section{The diode: a limiting case.} \label{rtd} Finally we want to discuss a limiting case of the resonant tunneling diode made of graphene. The device is represented in Fig. \ref{diode}. The corresponding study for Schr\"odinger electrons was done by Tsu and Esaki.\cite{Tsu} A limiting situation of the device described in Fig. \ref{diode} is one where the barriers are described by a scalar Lorentz potential of the form \begin{eqnarray} V(x,y) &=& \lim_{\epsilon\rightarrow 0}g \frac {1}{2\epsilon}[1-\theta(\vert x\vert-\epsilon)]\sigma_z \nonumber\\ &+& \lim_{\epsilon\rightarrow 0}g \frac {1}{2\epsilon}[1-\theta(\vert x-d\vert-\epsilon)]\sigma_z \nonumber\\ &=&g\sigma_z[\delta(x)+\delta(x-d)] \,. \label{pot} \end{eqnarray} The connection with the true barrier is made by identifying $g$ with $\alpha t'w a$, with $\alpha$ a numerical constant with dimensions of inverse length. This form of the potential is equivalent to a mass term and therefore to a gap in the spectrum. However, given the short-range nature of the potential, its effect enters only through the boundary conditions imposed on the wave function at the potential position. The problem of Dirac electrons in delta-function potentials has been studied in the past\cite{McKellar87a,McKellar87b} and is not without subtleties. \cite{Sutherland81,Adame89,Roy93} The subtleties can be traced back to the problem of evaluating the integral \begin{equation} \int^{\epsilon}_{-\epsilon}f(x)\delta(x)dx\,, \label{delta} \end{equation} where $f(x)$ is a discontinuous function at $x=0$. If we try to solve the problem of Dirac electrons with a delta-function potential using the same trick\cite{Davies} one uses for Schr\"odinger electrons, we face the problem defined by the integral (\ref{delta}). This is so because the wave function of Dirac electrons in a delta-function potential is discontinuous at the point where the delta function is located. There are several strategies to overcome this difficulty.
\cite{McKellar87a,McKellar87b,Sutherland81,Adame89,Roy93} The most straightforward was devised by McKellar and Stephenson \cite{McKellar87a,McKellar87b} and generalized by Dominguez-Adame and Maci\'a.\cite{Adame89} In short, the Dirac equation along the $x$ direction can be written as \begin{equation} \frac {d\phi(x)}{d x}=\hat G(x)\phi(x)\,, \end{equation} where $\phi(x)$ is a spinor wave function. This problem can be formally solved as \begin{equation} \phi(x)=T_x e^{\int_{x_0}^x\hat G(x)dx}\phi(x_0)\,, \end{equation} where the operator $T_x$ is the position-ordering operator such that \begin{equation} T_x[ \hat G(x)\hat G(y)]=\hat G(x)\hat G(y)\theta(x-y)+ \hat G(y)\hat G(x)\theta(y-x)\,. \end{equation} Since we are interested in determining the boundary conditions obeyed by the wave function $\phi(x)$ at the delta-function position, we consider the infinitesimal interval $x\in[-\epsilon,\epsilon]$, obtaining \begin{equation} \phi(\epsilon)=T_x e^{\int_{-\epsilon}^\epsilon\hat G(x)dx}\phi(-\epsilon)\,. \end{equation} The integral is dominated by the delta function and for the problem we are treating in this paper we obtain the following boundary condition \begin{equation} \phi(\epsilon)=e^{-i\frac {g}{v_F\hbar}\sigma_x\sigma_z}\phi(-\epsilon)\,. \end{equation} To evaluate how the exponential acts on $\phi(-\epsilon)$ we use the Lagrange-Sylvester formula\cite{Adame89} for a function $f(\bm M)$ of a matrix $\bm M$ \begin{equation} f(\bm M) = f(\lambda_1)\frac {\bm 1\lambda_2 -{\bm M}}{\lambda_2-\lambda_1} +f(\lambda_2)\frac {\bm 1\lambda_1 -{\bm M}}{\lambda_1-\lambda_2}\,, \label{lagrange} \end{equation} where $\lambda_{1,2}$ are the eigenvalues of $\bm M$. For the problem at hand, Eq. (\ref{lagrange}) leads to the following boundary condition around $x=0$ \begin{equation} \left( \begin{array}{c} \phi_a(0^+)\\ \phi_b(0^+) \end{array} \right) =\cosh\tilde g \left( \begin{array}{c} \phi_a(0^-)\\ \phi_b(0^-) \end{array} \right) +i\sinh\tilde g \left( \begin{array}{c} \phi_b(0^-)\\ -\phi_a(0^-) \end{array} \right)\,, \end{equation} where $\tilde g =\frac {g}{v_F\hbar}$ is a dimensionless interaction constant and $0^{\pm}$ represent positive and negative infinitesimals. A similar boundary condition holds for $x=d$. For the potential (\ref{pot}) we can now define three different regions, I, II, and III, corresponding to $x<0$, $0<x<d$, and $x>d$, respectively. In each of these regions the wave function is a sum of two plane waves of opposite momentum along the $x-$direction, with each plane wave multiplied by the coefficients $A_\Gamma$ and $B_\Gamma$, where $\Gamma=$ I, II, III labels the three regions defined above. \begin{figure} \begin{center} \includegraphics*[scale=.5]{T_diode_E01_s.eps} \includegraphics*[scale=.5]{T_diode_theta0_s.eps} \end{center} \caption{(color online) Top panels: transmitted flux $T_f(E=0.1,\theta)$ for fixed $\tilde g=2$ and two widths $d=100a$, $200a$, and for fixed $d=100a$ and three couplings $\tilde g=0.5$, 1, 2. Bottom panels: transmitted flux $T_f(E,\theta=0)$ for the same cases. } \label{Fig_RTD_transmitted_flux} \end{figure} Once the matrix $T$ has been computed (see Appendix \ref{app}) the reflection coefficient is obtained from \begin{equation} r= -\frac {T^\ast_{12}}{T^\ast_{11}}\,, \end{equation} and the transmitted flux $T_f$ from \begin{equation} T_f=1-rr^\ast\,.
\end{equation} For the case of zero electrostatic potentials, $U_i=0$, we obtain \begin{equation} T_f = \frac {1} {1+\sinh^2{(2\tilde g)}\cos^2{(kd)}}\,, \label{tf} \end{equation} which is the same expression as the one in (\ref{int_trans_diode_ingap}) if $\tilde g=\alpha w$ (at normal incidence, $k_x=k$). It is simple to identify the limit $T_f\rightarrow 1$, which occurs when $2kd=(2n+1)\pi$, with $n=0,1,2,\ldots$. In Figure \ref{Fig_RTD_transmitted_flux} we show the transmitted flux $T_f(E,\theta)$ as a function of the energy and of the angle $\theta$. The barrier between the leads and the center of the device is represented by a delta-function potential; therefore, wider barriers are represented by larger values of $\tilde g$. From Fig.\ref{Fig_RTD_transmitted_flux} we can see that for larger values of $\tilde g$ the transmission in the forward direction is essentially zero except at some resonant energies, where the transmission goes to one. As a function of the angle, we see that there are some angles for which the transmission is also one. When the length of the central part of the device is increased (larger $d$), the resonances become closer to each other and more resonances appear. In Figure \ref{Fig_intensity} we present an intensity plot of $T_f(E,\theta)$ for a device with $\tilde g=0.5$ and $d=200a$. In this figure we can follow the evolution of the resonances in the $E$ versus $\theta$ plane. The six lines of larger transmission are associated with the resonances we see in Fig. \ref{Fig_RTD_transmitted_flux} for $\tilde g=2$ and $d=100a$. \begin{figure} \begin{center} \includegraphics[scale=.5]{diode_g05_d200_s.eps}% \end{center} \caption{(color online) Intensity plot of $T_f(E,\theta)$ for a device with $\tilde g=0.5$ and $d=200a$. There are clearly well-defined regions of large intensity transmission.} \label{Fig_intensity} \end{figure} As before, the linear-response conductance per unit cross-sectional length is given, at zero temperature, by Eq. (\ref{G}) and is represented in Fig.~\ref{Fig_G_linear_response_res_diode}. For small values of $\tilde g$ the conductance shows smooth oscillations, whereas for larger $\tilde g$ values strong resonances are observed. The number of observed resonances depends on the length $d$. \begin{figure} \begin{center} \includegraphics[scale=.5]{cond_diode_1.eps}% \end{center} \begin{center} \includegraphics[scale=.5]{cond_diode_2.eps}% \end{center} \caption{Linear-response conductance per unit cross-sectional length at zero temperature. The top panels show $G(\mu,d)$ for $\tilde g=0.5$, 1 and the lower panels show the same quantity for $\tilde g=2$, 4.} \label{Fig_G_linear_response_res_diode} \end{figure} \section{Final remarks.} In this paper we discussed the tunneling properties of Dirac electrons in two dimensions when they traverse regions of space where the spectrum presents a finite energy gap. In the case considered here the gap is induced by depositing graphene on top of boron nitride, rendering, in this way, the sublattices A and B non-equivalent. The consequence is the opening of a gap in the energy spectrum, which we have parametrized by the parameter $t'$. We have shown that the existence of an energy gap prevents the Klein paradox from taking place, a necessary condition for building nanoelectronic devices made of graphene. We have also shown that basic devices like a resonant tunneling diode can be made of graphene, by intercalating two regions where the spectrum of graphene presents a gap.
We have also shown that simple analytical expressions can be derived for the tunneling through these types of heterostructures. In addition, we have shown that a limiting case of the resonant tunneling diode can be understood by using Dirac delta-function potentials. Clearly, one is led to think that a full description of the ballistic (impurity-free) transport process in the system should also include the effect of temperature and phonons. We note, however, that the electron-phonon interaction has been shown to have a small effect on the optical conductivity of graphene,\cite{Stauber} which means that phonons should not be very important in the description of the transport process. Also, the polaronic effect leads to a renormalization of the velocity $v_F$ in the $k-$region and to a renormalization of the effective mass $mv^2_F=t'$ in the $q-$region. But since these two parameters can be considered effective ones, there is no point in including the polaronic effect explicitly. Concerning the temperature, it will clearly be of no importance when the chemical potential is above the gap. When the energy is in the gap there will certainly be temperature-activated transport adding to the tunneling current. For small temperatures this will be a small effect. We believe our results are relevant for future nanoelectronics applications of graphene. \section*{Acknowledgments} NMRP acknowledges financial support from POCI 2010 via project PTDC/FIS/64404/2006.
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection{Description of the cross-section measurements} \label{xsec_intro} The number of signal events is determined by subtracting the estimated backgrounds from the number of observed events. The signal yield is then corrected for detection efficiencies in the fiducial region, defined in Table~\ref{table:ZgSigReg}. The integrated cross section in the extended fiducial region, defined in Table~\ref{table:ZgRegions_extfid}, is calculated as \begin{linenomath} \begin{equation*} \sigma_{\textrm{ext-fid}} = \frac{N - B}{A_{Z\gamma} \cdot C_{Z\gamma} \cdot \int{L{\textrm{d}}\,t}}, \end{equation*} \end{linenomath} where $N$ is the number of observed candidate events, $B$ is the expected number of background events and $\int{L{\textrm{d}}\,t}$ is the integrated luminosity corresponding to the analyzed data set. The factors $C_{Z\gamma}$ and $A_{Z\gamma}$ correct for detection efficiency and acceptance, respectively: \begin{itemize} \item $C_{Z\gamma}$ is defined as the number of reconstructed signal events satisfying all selection criteria divided by the number of events that, at particle level, meet the acceptance criteria of the fiducial region; \item $A_{Z\gamma}$ is defined as the number of signal events within the fiducial region divided by the number of signal events within the extended fiducial region, with both numerator and denominator defined at particle level. \end{itemize} The corrections $A_{Z\gamma}$ and $C_{Z\gamma}$ are determined using the $Z\gamma$ signal events generated by $\SHERPA$ and are summarized in Table~\ref{table:Acceptances} along with their uncertainties. \subsection{Systematic uncertainties} \label{sec_uncert} Systematic uncertainties in the acceptances $A_{Z\gamma}$ are evaluated by varying the PDF sets, the value of $\alphas$, the renormalization and factorization scales (QCD scale uncertainty), and the Monte Carlo parameter tunes for the parton shower (PS) and multi-parton interactions (MPI). In total, 100 error sets are checked for the NNPDF3.0 NNLO PDF variation, leading to a relative uncertainty of $0.76\%$ for the inclusive case and $0.35\%$ for the exclusive case. These numbers fully cover variations arising from the use of alternative PDF sets such as CT14~\cite{Dulat:2015mca} and MMHT2014~\cite{Harland-Lang:2014zoa}. The uncertainty from $\alphas$ is estimated by varying it within the range of its world-average value as provided in Ref.~\cite{Agashe:2014kda} and is found to be negligible. The effects of the renormalization and factorization scale uncertainties are assessed by varying these two scales independently by a factor of two from their nominal values, removing combinations where the two variations differ by a factor of four, and taking the envelope of the resulting cross-section variations as the size of the associated systematic uncertainty. Uncertainties from the PS and MPI are evaluated using a series of eigentunes for the $\PYTHIA$ generator with its A14 parameter tune~\cite{ATL-PHYS-PUB-2014-021}. The size of the uncertainty from the renormalization and factorization scales does not exceed $3.0\%$, while PS and MPI uncertainties cause variations from $1.9\%$ to $2.7\%$ for the inclusive and exclusive cases, respectively. The total uncertainties in the acceptance factors are summarized in Table~\ref{table:Acceptances}.
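For illustration only, the cross-section master formula above can be evaluated directly; in the following minimal Python sketch (ours), the event counts and luminosity are placeholder numbers rather than the measured inputs of this analysis, while $A_{Z\gamma}$ and $C_{Z\gamma}$ are taken from Table~\ref{table:Acceptances}.

\begin{verbatim}
# Sketch (ours) of sigma_ext-fid = (N - B) / (A * C * L_int).
def extended_fiducial_xsec(n_obs, n_bkg, A, C, lumi_fbinv):
    return (n_obs - n_bkg) / (A * C * lumi_fbinv)

# Placeholder N, B and L_int chosen only to give a value of O(80 fb);
# A and C are the inclusive factors quoted in the acceptances table.
sigma_fb = extended_fiducial_xsec(n_obs=5000, n_bkg=2900,
                                  A=0.816, C=0.904, lumi_fbinv=36.1)
print(f"sigma_ext-fid ~ {sigma_fb:.1f} fb")
\end{verbatim}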
\begin{table} \begin{center} \begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}}lcc} \hline & $N_{\mathrm{jets}} \geq 0$ & $N_{\mathrm{jets}} = 0$\\ \hline $A_{Z\gamma}$ & $ 0.816 \pm 0.029 $ & $0.952\pm0.026$ \\ $C_{Z\gamma}$ & $ 0.904 \pm 0.031 $ & $0.889\pm0.037$ \\ \hline \end{tabular*} \caption{Summary of values of the correction factors ($C_{Z\gamma}$) and acceptances ($A_{Z\gamma}$) for the $Z\gamma$ cross-section measurements. The uncertainty presented here includes only systematic components, since the statistical uncertainty is found to be negligible.} \label{table:Acceptances} \end{center} \end{table} Systematic uncertainties affecting the correction factor $C_{Z\gamma}$ include contributions arising from uncertainties in the efficiencies of the trigger, reconstruction and particle identification, as well as the uncertainties in the energy, momentum scales and resolutions of the final-state objects. Additional systematic uncertainty sources arise from the modelling of particle spectra and pile-up events. Spectrum modelling uncertainties are estimated by varying the PDF set and QCD scales as described above for the case of the acceptance factor $A_{Z\gamma}$. Some of these contributions are found to have a non-linear dependence on photon transverse energy, $\met$ or jet multiplicity. In these cases, uncertainties estimated as a function of these observables are used in the unfolding process of Section~\ref{sec:Diff_cross_sec} when the corresponding kinematic distributions are derived from the signal sample. Table~\ref{table:SystematicAcceptances} displays the size of the individual contributions to the uncertainties in the $C_{Z\gamma}$ factor; the total uncertainty is summarized in Table~\ref{table:Acceptances}. \begin{table} \begin{center} \begin{tabular*}{0.61\textwidth}{l r@{.}l@{\extracolsep{-1.5cm}}r@{\extracolsep{0cm}.}l} \hline Source & \multicolumn{2}{c}{Relative uncertainty [$\%$]} \\ \hline & \multicolumn{2}{l}{\hspace{-0.1cm}$N_{\mathrm{jets}}\geq0$} & \multicolumn{2}{c}{\hspace{-0.3cm}$N_{\mathrm{jets}}=0$}\\ \hline Trigger efficiency & 0&79 & 0&79 \\ Photon identification efficiency & 1&5 & 1&5 \\ Photon isolation efficiency & 0&48 & 0&47 \\ Electron--photon energy scale & 2&5 & 2&5 \\ Electron--photon energy resolution & 0&11 & 0&09 \\ Jet energy scale & 0&92 & 2&2 \\ Jet energy resolution & 0&10 & 0&43 \\ $\met$ scale & $<$0&1 & $<$0&1 \\ $\met$ resolution & 0&13 & $<$0&1 \\ Pile-up simulation & 0&85 & 1&1 \\ Spectrum modelling & 1&3 & 1&3 \\ \hline Sum & 3&5 & 4&2 \\ \hline \end{tabular*} \caption{Relative systematic uncertainties in the signal correction factor $C_{Z\gamma}$ for the inclusive and exclusive $Z\gamma$ measurements.} \label{table:SystematicAcceptances} \end{center} \end{table} \subsection{Integrated extended fiducial cross section} \label{xsec_pheno} The measurements of the cross sections, along with their uncertainties, are based on the maximization of the profile-likelihood ratio \begin{linenomath} \begin{equation*} \Lambda(\sigma) = \frac{\mathcal{L}(\sigma, \hat{\hat{\boldsymbol{\theta}}}(\sigma))}{\mathcal{L}(\hat{\sigma}, \hat{\boldsymbol{\theta}})}, \end{equation*} \end{linenomath} where $\mathcal{L}$ represents the likelihood function, $\sigma$ is the cross section, and $\boldsymbol{\theta}$ are the nuisance parameters corresponding to the sources of systematic uncertainty. 
The $\hat{\sigma}$ and $\hat{\boldsymbol{\theta}}$ terms denote the unconditional maximum-likelihood estimates of the parameters, i.e., the values of $\sigma$ and $\boldsymbol{\theta}$ for which the likelihood attains its global maximum. The term $\hat{\hat{\boldsymbol{\theta}}}(\sigma)$ denotes the value of $\boldsymbol{\theta}$ that maximizes $\mathcal{L}$ for a given value of $\sigma$. The likelihood function is defined as \begin{linenomath} \begin{equation*} \mathcal{L}(\sigma, \boldsymbol{\theta}) = \mathrm{Poisson}(N~|~S(\sigma, \boldsymbol{\theta}) + B(\boldsymbol{\theta})) \cdot {\mathrm{Gaussian}}( \boldsymbol{\theta_0}~|~\boldsymbol{\theta}), \end{equation*} \end{linenomath} representing the product of the Poisson probability of observing $N$ events, given expectations of $S$ for the signal and $B$ for the background, and the Gaussian constraints on the nuisance parameters $\boldsymbol{\theta}$, with central values $\boldsymbol{\theta_0}$ from auxiliary measurements, as described in Section~\ref{xsec_intro}. The measured cross sections for $Z(\nu\bar{\nu})\gamma$ production in the extended fiducial region are summarized in Table~\ref{table:ZgCrossSections}, along with the theoretical predictions of the \textsc{Mcfm}~\cite{MCFM} generator described in Section~\ref{sec:SMpredictions}. The measured cross sections agree with the SM expectations to within one standard deviation. Systematic uncertainties arise from uncertainties in the acceptances and correction factors, as well as from uncertainties in the background estimates. These two sources contribute roughly equally to the uncertainty in the measured cross sections. Compared with the $Z\gamma$ measurements at $\sqrt{s} = 8$~\TeV~\cite{STDM-2014-01}, the systematic uncertainty is significantly reduced. This improvement is due primarily to the reduction of systematic uncertainty allowed by the data-driven estimate of the $\gamma$+jets and $W\gamma$ backgrounds. An overall check of the SM predictions is performed with the \textsc{Matrix} generator~\cite{Grazzini:2293309}. Cross sections obtained by \textsc{Matrix} (inclusive case: $\sigma^{\mathrm{ext.fid.}} = 78.6\pm 0.4\pm 4.4$~fb; exclusive case: $\sigma^{\mathrm{ext.fid.}} = 55.8\pm 0.3\pm 3.6$~fb, where the uncertainties are statistical and systematic, respectively) are found to be consistent with those from {\textsc{Mcfm}} to within their statistical uncertainty. \begin{table} \begin{center} \begin{tabular}{cc} \hline $\sigma^{\mathrm{ext.fid.}}$ [fb] & $\sigma^{\mathrm{ext.fid.}}$ [fb] \\ Measurement & NNLO \textsc{Mcfm} Prediction \\ \hline \multicolumn{2}{c}{$N_{\mathrm{jets}} \geq 0$ } \\ 83.7$^{+3.6} _{-3.5}$ (stat.)$ ^{+6.9} _{-6.2}$ (syst.)$ ^{+1.7} _{-2.0}$ (lumi.) & $78.1 \pm 0.2$(stat.)$\pm 4.7$(syst.) \\ \hline \multicolumn{2}{c}{$N_{\mathrm{jets}} = 0$ }\\ 52.4$^{+2.4}_{-2.3}$ (stat.)$ ^{+4.0} _{-3.6}$ (syst.)$ ^{+1.2} _{-1.1}$ (lumi.) & $55.9 \pm 0.1$(stat.)$ \pm 3.9$(syst.) \\ \hline \end{tabular} \caption{Measured cross sections for $Z(\nu\bar{\nu})\gamma$ production within the extended fiducial region for a centre-of-mass energy of $\sqrt{s} = 13$~\TeV, with corresponding SM expectations obtained from the \textsc{Mcfm}~\cite{MCFM} generator at next-to-next-to-leading order in the strong coupling constant $\alphas$.
} \label{table:ZgCrossSections} \end{center} \end{table} \subsection{Standard Model calculations} \label{sec:SMpredictions} The resulting measurement of the rate and kinematic distributions of $\Zboson\gamma$ production is compared with SM expectations using the parton shower Monte Carlo generator $\SHERPA$ and the NNLO parton-level generators \textsc{Mcfm} and {\textsc{Matrix}}. The NNPDF3.0 PDF set was used for the $\SHERPA$, {\textsc{Mcfm}} and {\textsc{Matrix}} generation. The values of the renormalization and factorization scales were set to $m_{\Zboson\gamma}$ for the {\textsc{Mcfm}} and {\textsc{Matrix}} NNLO generation of the $\Zboson\gamma$ process. The photon isolation criterion at the parton level is applied by considering a cone of variable opening angle $\Delta R$ (with maximum opening angle $\Delta R_{\mathrm{max}}=0.1$) centred around the photon direction, and requiring that the transverse energy flow inside that cone be always less than a given fraction of the photon $\pT$; this fraction is set to 0.1 when $\Delta R = \Delta R_{\mathrm{max}}$, and tends smoothly to zero when $\Delta R \rightarrow 0$, as described in Ref.~\cite{Frixione:1998jh}. Due to this procedure, the contribution from photon fragmentation to the NNLO calculations of the {\textsc{Mcfm}} and {\textsc{Matrix}} SM predictions is zero. Events generated with $\SHERPA$, as described in Section~\ref{sec:samples}, are also compared with the particle-level measurements. For the NNLO parton-level predictions, parton-to-particle correction factors \mbox{$C^{*(\mathrm{parton}\to\mathrm{particle})}$} must be applied in order to obtain the particle-level cross sections. These correction factors are computed as the ratios of the $pp\rightarrow Z\gamma$ cross sections predicted by $\SHERPA$ with hadronization and the underlying event disabled to the cross sections with them enabled. The systematic uncertainty in the correction factors is evaluated by using a signal sample from an alternative generator ({\textsc{MG5\_aMC@NLO}}), symmetrizing the resulting one-sided change in $C^{*(\mathrm{parton}\to\mathrm{particle})}$ to obtain the uncertainty. This accounts for uncertainties in both the parton shower modelling and the description of the underlying event. The value of $C^{*(\mathrm{parton}\to\mathrm{particle})}$ is found to be $0.87\pm0.04$ for the inclusive predictions and $0.97\pm0.04$ for the exclusive predictions. For the exclusive case, the parton-to-particle correction includes an additional contribution from the jet veto, which compensates for the difference in the photon isolation between the parton and particle levels. The particle-level cross sections are then obtained by multiplying the NNLO parton-level cross-section values by the $C^{*(\mathrm{parton}\to\mathrm{particle})}$ correction factors, and are displayed in Table~\ref{table:ZgCrossSections}. The systematic uncertainty in the expected NNLO SM cross sections arising from uncertainties in the QCD scale is estimated by varying the QCD scales by factors of 0.5 and 2.0 (separately for the renormalization and factorization scales, removing combinations where the two variations differ by a factor of four). The effect of the QCD scale uncertainty on the prediction for the first bin of the various differential cross-section measurements also accounts for uncertainties arising from the incomplete cancellation of divergences associated with soft gluon emission in fixed-order perturbative calculations of $Z\gamma$ production.
This effect is appreciable because of the symmetric $E_{\mathrm{T}}^{\gamma}$ and $p_{\mathrm{T}}^{\nu\bar{\nu}}$ thresholds used in defining the SR. The corresponding corrections are estimated conservatively from the cited MC generators by evaluating the degree of compensation of the divergence that arises when the $p_{\mathrm{T}}^{\nu\bar{\nu}}$($E_{\mathrm{T}}^{\gamma}$) requirement is lowered to a value significantly below the value of the $E_{\mathrm{T}}^{\gamma}$($p_{\mathrm{T}}^{\nu\bar{\nu}}$) requirement of 150~\GeV. The systematic uncertainty due to the PDF choice is computed using the eigenvectors of the NNPDF3.0 PDF set~\cite{Ball:2014uwa} and the envelope of the differences between the results obtained with the CT14~\cite{Dulat:2015mca} and MMHT2014~\cite{Harland-Lang:2014zoa} PDF sets, according to the PDF4LHC recommendations~\cite{Butterworth:2015oua}. The \textsc{Matrix} predictions do not include the systematic uncertainty due to the PDF choice. \subsection{Differential extended fiducial cross section} \label{sec:Diff_cross_sec} The measurement of the $Z\gamma$ production differential cross sections allows a comparison of experimental results with SM expectations for both the absolute rates and the shapes of kinematic distributions. The measurements are performed as a function of several observables that are sensitive to higher-order perturbative QCD corrections~\cite{Campbell_Zg_NNLO} and to a possible manifestation of aTGCs~\cite{PhysRevD.47.4889}: photon transverse energy~($E_{\mathrm{T}}^{\gamma}$), the transverse momentum of the neutrino--antineutrino pair~($p_{\mathrm{T}}^{\nu\bar{\nu}}$), and jet multiplicity~($N_{\mathrm{jets}}$). The differential cross sections are defined in the extended fiducial region, and are extracted with an unfolding procedure that corrects for measurement inefficiencies and resolution effects that modify the observed distributions. The procedure described in Ref.~\cite{STDM-2014-01} is followed, using an iterative Bayesian method~\cite{Agostini:1994zf}. For each distribution, events from simulated signal MC samples are used to generate a response matrix that accounts for bin-to-bin migration between the reconstruction-level and particle-level distributions. The statistical uncertainties of the unfolded distributions are estimated using pseudo-experiments, generated by fluctuating each bin of the observed spectrum according to a Poisson distribution with a mean value equal to the observed yield. The shape uncertainties arising from the limited size of the signal MC sample are also obtained by generating pseudo-experiments. The sources of systematic uncertainty are discussed in Section~\ref{xsec_intro}, with their impact on the unfolded distribution assessed by varying the response matrix for each of the systematic uncertainty sources by one standard deviation and combining the resulting differences from the nominal values in quadrature. The differential cross sections as a function of $E_{\mathrm{T}}^{\gamma}$ and $p_{\mathrm{T}}^{\nu\bar{\nu}}$ are shown in Figures~\ref{fig:UnfoldedPhotonEt} and~\ref{fig:UnfoldedMET}, respectively, for both the inclusive and exclusive measurements. Figure~\ref{fig:UnfoldedNJets} shows the cross section measured in bins of jet multiplicity. The values of the SM expectations shown in the figures are obtained as described in Section~\ref{sec:SMpredictions}.
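As an illustration of the unfolding step, the sketch below implements a simplified iterative Bayesian (D'Agostini-type) update in Python for a toy three-bin spectrum; the response matrix, prior and observed yields are invented for illustration and do not correspond to the analysis inputs.
\begin{verbatim}
import numpy as np

def bayes_unfold(R, data, prior, n_iter=4):
    # R[i, j] = P(reco bin i | truth bin j); columns sum to the
    # per-bin selection efficiency. 'data' is the observed spectrum.
    truth = prior.astype(float).copy()
    eff = R.sum(axis=0)
    for _ in range(n_iter):
        folded = R @ truth                 # expected reco spectrum
        M = (R * truth).T / folded         # M[j, i] = P(truth j | reco i)
        truth = (M @ data) / eff           # updated truth estimate
    return truth

R = np.array([[0.8, 0.1, 0.0],
              [0.1, 0.7, 0.1],
              [0.0, 0.1, 0.8]])            # toy migration matrix
data = np.array([100.0, 80.0, 40.0])       # toy observed yields
print(bayes_unfold(R, data, prior=np.ones(3)))
\end{verbatim}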
\begin{figure}[hbtp] \begin{center} \includegraphics[width=0.45\textwidth]{./figures/diff_phET_inc.pdf} \includegraphics[width=0.45\textwidth]{./figures/diff_phET_exc.pdf} \caption{The measured (points with error bars) and predicted differential cross sections as a function of $E_{\mathrm{T}}^{\gamma}$ for the $pp \rightarrow Z(\nu\bar{\nu})\gamma$ process in the inclusive $N_{\mathrm{jets}} \geq 0$ (left) and exclusive $N_{\mathrm{jets}} = 0$ (right) extended fiducial regions. The error bars on the data points show the sum in quadrature of the statistical and systematic uncertainties. The \textsc{Mcfm} NNLO predictions are shown with shaded bands that indicate the theoretical uncertainties described in Section~\ref{sec:SMpredictions}. For the \SHERPA predictions, systematic uncertainty is not considered, and the statistical uncertainties arising from the size of the MC samples are too small to be visible. The lower plots show the ratios of the SM expectation to the measured values (shaded bands), with the error bars on the points showing the relative uncertainties in the experimental measurements. The bin size varies from 50~\GeV\ to 500~\GeV.} \label{fig:UnfoldedPhotonEt} \end{center} \end{figure} \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.45\textwidth]{./figures/diff_metPT_inc.pdf} \includegraphics[width=0.45\textwidth]{./figures/diff_metPT_exc.pdf} \caption{The measured (points with error bars) and predicted differential cross sections as a function of $p_{\mathrm{T}}^{\nu\bar{\nu}}$ for the $pp \rightarrow Z(\nu\bar{\nu})\gamma$ process in the inclusive $N_{\mathrm{jets}} \geq 0$ (left) and exclusive $N_{\mathrm{jets}} = 0$ (right) extended fiducial regions. The error bars on the data points show the sum in quadrature of the statistical and systematic uncertainties. The \textsc{Mcfm} NNLO predictions are shown with shaded bands that indicate the theoretical uncertainties described in Section~\ref{sec:SMpredictions}. For the \SHERPA predictions, systematic uncertainty is not considered, and the statistical uncertainties arising from the size of the MC samples are too small to be visible. The lower plots show the ratios of the SM expectation to the measured values (shaded bands), with the error bars on the points showing the relative uncertainties in the experimental measurements. The bin size varies from 50~\GeV\ to 500~\GeV.} \label{fig:UnfoldedMET} \end{center} \end{figure} \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.45\textwidth]{./figures/diff_Njets_inc.pdf} \caption{ The measured (points with error bars) and predicted cross sections as a function of $N_{\mathrm{jets}}$ for the $pp \rightarrow Z(\nu\bar{\nu})\gamma$ process in the extended fiducial region. The error bars on the data points show the sum in quadrature of the statistical and systematic uncertainties. The \textsc{Mcfm} NNLO predictions are shown with shaded bands that indicate the theoretical uncertainties described in Section~\ref{sec:SMpredictions}. For the \SHERPA predictions, systematic uncertainty is not considered, and the statistical uncertainties arising from the size of the MC samples are too small to be visible. The lower plots show the ratios of the SM expectation to the measured values (shaded bands), with the error bars on the points showing the relative uncertainties in the experimental measurements. } \label{fig:UnfoldedNJets} \end{center} \end{figure} Good agreement with SM expectations is observed in all but the last bin of the $\ET^{\gamma}$ inclusive distribution. 
This disagreement is a consequence of the corresponding disagreement observed in Figure~\ref{fig:SR_inclusive}, which was investigated and found to be consistent with having arisen from a statistical fluctuation of the data. \subsection{ATLAS detector and experimental data set} \label{sec:detector} The ATLAS detector at the LHC is described in detail in Ref.~\cite{PERF-2007-01}. A short overview is presented here, with an emphasis on the subdetectors needed for a precision measurement of the $Z(\nu\bar{\nu})\gamma$ final state. The ATLAS detector covers nearly the entire solid angle surrounding the collision point. Its major components are an inner tracking detector (ID) surrounded by a thin superconducting solenoid providing a 2~T axial magnetic field, electromagnetic (ECAL) and hadron (HCAL) calorimeters, and a muon spectrometer (MS). The ID is composed of three subsystems. Two detectors cover the pseudorapidity range $|\eta|<2.5$: the silicon pixel detector and the silicon microstrip tracker (SCT). The outermost system of the ID, with an acceptance of $|\eta|<2.0$, is composed of a transition radiation tracker (TRT). The TRT provides identification information for electrons by the detection of transition radiation. The MS is composed of three large superconducting air-core toroid magnets, a system of three stations of chambers for tracking measurements, with high precision in the range $|\eta|<2.7$, and a muon trigger system covering the range $|\eta|<2.4$. The ECAL is composed of layers of passive lead absorber interleaved with active liquid-argon gaps. It covers the range $|\eta| < 3.2$ and plays a crucial role in photon identification. For $|\eta| < 2.5$ the calorimeter has three longitudinal layers in shower depth, with the first layer having the highest granularity in the $\eta$ coordinate, and the second layer collecting most of the electromagnetic shower energy for high-$p_{\mathrm{T}}$ objects. A thin presampler layer precedes the ECAL over the range $|\eta|<1.8$, and is used to correct for the energy lost by EM particles upstream of the calorimeter. The HCAL, surrounding the ECAL, is based on two different technologies, with scintillator tiles or liquid argon as the active medium, and with either steel, copper, or tungsten as the absorber material. Photons are identified as narrow, isolated showers in the ECAL with no penetration into the HCAL. The fine segmentation of the ATLAS calorimeter system allows an efficient separation of jets from isolated prompt photons. Collision events are selected using a hardware-based first-level trigger and a software-based high-level trigger. The resulting recorded event rate from LHC $pp$ collisions at $\sqrt{s}$~=~13~\TeV\ during the data-taking period in 2015 and 2016 was approximately 1~kHz \cite{TRIG-2016-01-corr}. After applying criteria to ensure good ATLAS detector operation, the total integrated luminosity useful for data analysis is 36.1~fb$^{-1}$. The uncertainty in the combined 2015+2016 integrated luminosity is 2.1$\%$. It is derived from calibration of the luminosity scale using $x$--$y$ beam-separation scans, following a methodology similar to that detailed in Ref.~\cite{DAPR-2013-01} and using the LUCID-2 detector for the baseline luminosity measurements \cite{LUCID2}.
\subsection{Simulation of signal and backgrounds} \label{sec:samples} Simulated signal and background events were produced with various Monte Carlo event generators, processed through a full ATLAS detector simulation~\cite{SOFT-2010-01} using \textsc{Geant4}~\cite{bib-geant4}, and then reconstructed with the same procedure used for data. Additional $pp$ interactions (pile-up), in the same and neighbouring bunch crossings, were overlaid on the hard-scattering process in the MC simulation. The MC events were then reweighted to reproduce the distribution of the number of interactions per bunch crossing observed in data. For the signal modelling, $\SHERPA$~2.2.2~\cite{SherpaREF} with the NNPDF3.0 NNLO PDF set~\cite{Ball:2014uwa} is used as the baseline event generator. The signal sample was generated with up to three additional final-state partons at leading order (LO) and up to one additional final-state parton at next-to-leading order (NLO). Alternative signal samples, the first generated using $\SHERPA$~2.1.1 with the CT10 PDF set~\cite{Lai:2010vv} and the second generated using {\textsc{MG5\_aMC@NLO}}~2.3.3~\cite{madgraph} with the NNPDF3.0 NLO PDF set and interfaced to the $\PYTHIA$~8.212~\cite{Sjostrand:2007gs} parton shower model, are considered for studies of systematic uncertainties. Signal samples with non-zero anomalous triple gauge-boson couplings were also generated using $\SHERPA$~2.1.1 with the CT10 PDF set. The values of coupling constants used in the generation are chosen to be equal to the expected limits obtained in a previous ATLAS study~\cite{STDM-2014-01}. Background events containing $Z$ bosons with associated jets were simulated using $\SHERPA$~2.1.1 with the CT10 PDF set, while background events containing $W$ bosons with associated jets were simulated using $\SHERPA$~2.2.0 with the NNPDF3.0 NNLO PDF set. For both of these processes the matrix elements were calculated for up to two partons at NLO and four partons at LO. Background events containing a photon with associated jets were simulated using $\SHERPA$~2.1.1 with the CT10 PDF set. Matrix elements were calculated with up to four partons at LO. Background events containing a lepton pair and a photon with associated jets were simulated using $\SHERPA$~2.2.2 with the NNPDF3.0 NNLO PDF set. Matrix elements including all diagrams with three electroweak couplings were calculated for up to one parton at NLO and up to three partons at LO. \subsection{Object selection} \label{sec_objsel} Photon candidates are reconstructed~\cite{PERF-2013-04} from ECAL energy clusters with $|\eta|<2.37$ and $\ET>150$~\GeV. They are classified either as converted (candidates with a matching reconstructed conversion vertex or a matching track consistent with having originated from a photon conversion) or as unconverted (all other candidates). Both kinds of photon candidates are used in the analysis. Electron candidates are reconstructed~\cite{ATLAS-CONF-2016-024} from ECAL energy clusters with $|\eta|<2.47$ that are associated with a reconstructed track in the ID with transverse momentum $\pT>7$~\GeV. The ECAL cluster of the electron/photon candidate must lie outside the transition region between the barrel and endcap (\mbox{$1.37<|\eta|<1.52$}). Muon candidates are reconstructed from tracks in the MS that have been matched to a corresponding track in the inner detector, and are referred to as ``combined muons''. The combined track is required to have $\pT>7$~\GeV\ and $|\eta|<2.7$.
The shower shapes produced in the ECAL are used to identify photons and electrons. Photons are required to pass all the requirements on shower shape variables which correspond to the \textit{tight} photon identification criteria~\cite{ATL-PHYS-PUB-2016-014}. The \textit{tight} photon identification efficiency ranges from 88\% (96\%) to 92\% (98\%) for unconverted (converted) photons with $\pT>100$~\GeV. A sample of ``preselected'' photons, used for the calculation of missing transverse momentum, is required to satisfy the less restrictive \textit{loose} identification criteria of Ref.~\cite{ATL-PHYS-PUB-2016-014}. Electron candidates are required to satisfy \textit{loose}~\cite{ATLAS-CONF-2016-024} electron identification criteria, whose efficiency is greater than 84\%. Muon candidates are required to satisfy \textit{tight} identification criteria as described in Ref.~\cite{PERF-2015-10}, with efficiency greater than 90\% for combined muons used in the selection. Electron and muon candidates are required to originate from the primary vertex\footnote{Each primary vertex candidate is reconstructed from at least two associated tracks with $\pT>0.4$~\GeV. The primary vertex is selected among the primary vertex candidates as the one with the highest sum of the squared transverse momenta of its associated tracks.} by demanding that the significance of the transverse impact parameter, defined as the absolute value of the track's transverse impact parameter, $d_0$, measured relative to the beam trajectory, divided by its uncertainty, $\sigma_{d_{0}}$, satisfy $|d_{0}|/\sigma_{d_{0}} < 3$ for muons and $|d_{0}|/\sigma_{d_{0}} < 5$ for electrons. The difference $z_0$ between the value of the $z$ coordinate of the point on the track at which $d_0$ is defined, and the longitudinal position of the primary vertex, is required to satisfy $|z_0\cdot \sin(\theta)| < 0.5$~mm for both the muons and electrons. Photon, electron and muon candidates are required to be isolated from other particles. The following criteria are used for photons: the total transverse energy in ECAL energy clusters within $\Delta R = 0.4$ of the photon candidate is required to be less than 2.45~\GeV~+~$0.022\cdot\ET^{\gamma}$, and the scalar sum of the transverse momenta of the tracks located within a distance $\Delta R = 0.2$ of the photon candidate is required to be less than $0.05\cdot\pT^{\gamma}$. For preselected photons, isolation criteria are not applied. For muons and electrons, the isolation requirement is based on track information and is tuned to have an efficiency of at least 99\%~\cite{PERF-2015-10}. Jets are reconstructed from topological clusters in the calorimeter~\cite{PERF-2014-07} using the anti-$k_t$ algorithm~\cite{Cacciari:2008gp} with a radius parameter of $R = 0.4$. Events with jets arising from detector noise or other non-collision sources are discarded~\cite{ATLAS-CONF-2015-029}. A multivariate combination of track-based variables is used to suppress jets originating from pile-up in the ID acceptance~\cite{PERF-2014-03}. The energy of each jet is calibrated and corrected for detector effects using a combination of simulated events and in situ methods~\cite{PERF-2016-04} using data collected at $\sqrt{s} = 13$~\TeV. The selected jets are required to have $\pT$ larger than 50~\GeV\ and $|\eta|<4.5$.
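Returning to the isolation requirements above, the photon case amounts to two threshold tests that are straightforward to express in code. The following Python sketch applies them to a hypothetical candidate; the input values are placeholders for illustration.
\begin{verbatim}
def photon_is_isolated(et_gamma, calo_iso_et, track_iso_pt):
    """Reconstruction-level photon isolation as described in the text.

    et_gamma     : photon transverse energy [GeV]
    calo_iso_et  : summed ECAL cluster E_T within Delta R = 0.4 [GeV]
    track_iso_pt : scalar sum of track p_T within Delta R = 0.2 [GeV]
    """
    calo_ok = calo_iso_et < 2.45 + 0.022 * et_gamma
    track_ok = track_iso_pt < 0.05 * et_gamma
    return calo_ok and track_ok

# Hypothetical 200 GeV photon with modest nearby activity.
print(photon_is_isolated(200.0, 4.0, 6.0))   # True
\end{verbatim}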
The missing transverse momentum is defined as the negative vector sum of the transverse momenta of all reconstructed physics objects in the event~\cite{Aaboud:2018tkc} (leptons with $\pT>7$~\GeV, preselected photons with $\pT>10$~\GeV\ and jets with $\pT>20$~\GeV), plus a ``soft term'' incorporating tracks from the primary vertex that are not associated with any such objects~\cite{ATL-PHYS-PUB-2015-027}. The resulting vector is denoted $\vec{E}_{\mathrm{T}}^{\mathrm{miss}}$ since it includes calorimetric energy measurements, and its magnitude $\met$ is used as a measure of the total transverse momentum of neutrinos in the event. To resolve ambiguities in the object reconstruction, jet candidates lying within $\Delta R=0.3$ of the photon candidates are removed. \subsection{Signal region definition} The signal region (SR) is defined to have exactly one \textit{tight} isolated photon, as described above. In order to reduce the contamination from events that do not contain high-energy neutrinos (mainly $\gamma$~+ jet background with fake $\met$ from jet momenta mismeasurements) the selected events are required to have $\met>150$~\GeV. To reduce the number of $W(\ell\nu)\gamma$ and $Z(\ell\ell)\gamma$ events, a lepton veto is applied: events with any selected electrons or muons are discarded. A requirement of at least $10.5$~\GeV$^{1/2}$ for the $\met$ significance, defined as $\met{}/\sqrt{\Sigma \pT^{\mathrm{jet}}+\ET^{\gamma}}$, further suppresses background contributions with fake $\met$. An additional angular separation requirement $\Delta\phi(\vec{E}_{\mathrm{T}}^{\mathrm{miss}},\gamma)>\pi/2$ is made, which suppresses the $pp\rightarrow W(e\nu)+X$ background. These object and event selection requirements define the reconstruction-level fiducial region and are summarized in Table~\ref{table:ZgSigReg}. \begin{table}[hbtp] \begin{center} \begin{tabular}{ccc} \hline Photons\hspace{1cm} & Leptons\hspace{1cm} & Jets\\ \hline $\ET > 150$~\GeV\hspace{1cm} & $\pT>7$~\GeV\hspace{1cm} & $\pT > 50$~\GeV\\ $|\eta| < 2.37$,\hspace{1cm} & $|\eta|<2.47(2.7)$ for $e(\mu)$,\hspace{1cm} & $|\eta| < $ 4.5\\ excluding $1.37<|\eta|<1.52$\hspace{1cm} & excluding $1.37<|\eta^e|<1.52$\hspace{1cm} & $\Delta R(\mathrm{jet},\gamma) > 0.3$\\ \hline \multicolumn{3}{c}{Event selection}\\ \hline \multicolumn{3}{c}{$N^\gamma=1$,\hspace{0.2cm} $N^{e,\mu}=0$,\hspace{0.2cm} $\met>150$~\GeV,\hspace{0.2cm} $\met\ \mathrm{signif.}>10.5$~\GeV$^{1/2}$,\hspace{0.2cm} $\Delta\phi(\vec{E}_{\mathrm{T}}^{\mathrm{miss}},\gamma)>\pi/2$}\\ \multicolumn{3}{c}{Inclusive : $N_{\mathrm{jet}} \geq 0$,\hspace{0.2cm} Exclusive : $N_{\mathrm{jet}} = 0$}\\ \hline \end{tabular} \caption{Definition of the fiducial region. The object selection is presented in the top part of the table, while the event selection is described in the bottom part.} \label{table:ZgSigReg} \end{center} \end{table} To simplify the interpretation of the results and comparison with theory predictions, the cross section is measured in an extended fiducial region, defined at particle level\footnote{``Particle level'' quantities are defined in terms of stable particles in the MC event record with a proper decay length $c\tau > 10$~mm which are produced from the hard scattering, including those that are the products of hadronization. The particle-level jets are reconstructed using the anti-$k_t$ algorithm with a radius parameter of $R=0.4$, using all stable particles except for muons and neutrinos. 
Particle-level jets do not include muons, mirroring the reconstruction-level jets, which are built from calorimeter clusters.} in Table~\ref{table:ZgRegions_extfid}. Compared with the fiducial region, the extended fiducial region removes the requirements on the $\met$ significance and on $\Delta\phi(\vec{E}_{\mathrm{T}}^{\mathrm{miss}},\gamma)$, the lepton veto, and the exclusion of the transition $\eta$ region for photons. In the signal event selection at particle level, the $\met$ significance and $\Delta\phi(\vec{E}_{\mathrm{T}}^{\mathrm{miss}},\gamma)$ are given by $p^{\nu\bar{\nu}}_\mathrm{T}{}/\sqrt{\Sigma \pT^{\mathrm{jet}}+\ET^{\gamma}}$ and $\Delta\phi(\vec{p}^{\;\nu\bar{\nu}}_\mathrm{T},\gamma)$, respectively. Photon isolation at the particle level is performed using the same requirements and cone sizes as described for the reconstruction-level isolation in Section~\ref{sec_objsel}. \begin{table}[hbtp] \begin{center} \begin{tabular}{lc} \hline Category & Requirement \\ \hline Photons & $\ET^{\gamma}$ $>$ 150~\GeV \\ & $|\eta| < $ 2.37 \\ \hline Jets & $|\eta| < $ 4.5 \\ & $\pT$ $>$ 50~\GeV \\ & $\Delta R(\mathrm{jet},\gamma) >$ 0.3 \\ & Inclusive : $N_{\mathrm{jet}} \geq 0$, Exclusive : $N_{\mathrm{jet}} = 0$\\ \hline Neutrino & $\pT^{\nu\bar{\nu}}>150$~\GeV \\ \hline \end{tabular} \caption{Definition of the extended fiducial region. At particle level, $\pT^{\nu\bar{\nu}}$ is the equivalent of $\met$.} \label{table:ZgRegions_extfid} \end{center} \end{table}
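For completeness, the following schematic Python sketch strings together the main event-selection requirements of Table~\ref{table:ZgSigReg}; the event-record format and the numbers in the example are hypothetical, and the sketch is only meant to make the logic of the selection explicit.
\begin{verbatim}
import math

def passes_signal_region(ev):
    """Schematic version of the reconstruction-level event selection."""
    if ev["n_photons"] != 1 or ev["photon_et"] <= 150.0:
        return False                          # exactly one high-E_T photon
    if ev["n_leptons"] > 0:
        return False                          # lepton veto
    if ev["met"] <= 150.0:
        return False                          # E_T^miss threshold
    met_signif = ev["met"] / math.sqrt(ev["sum_jet_pt"] + ev["photon_et"])
    if met_signif <= 10.5:                    # significance in GeV^(1/2)
        return False
    return ev["dphi_met_photon"] > math.pi / 2.0

event = {"n_photons": 1, "photon_et": 180.0, "n_leptons": 0,
         "met": 210.0, "sum_jet_pt": 60.0, "dphi_met_photon": 2.8}
print(passes_signal_region(event))            # True
\end{verbatim}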
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} It is a well-known fact that two mutually interacting dynamical/mechanical systems, when coupled, cannot preserve their individual motions in the collective system. This is manifested in the equation of motion of the collective system by the existence of terms additional to those belonging to the individual subsystems. It is the Hamiltonian realization of this collective motion of two mutually interacting physical systems that we study in the present paper. In order to be able to present the mutual interactions as Lie group / Lie algebra actions, we shall consider systems whose configuration spaces are Lie groups \cite{AzIz98,baurle1997,OLV,SaWe13}. If the configuration space of a physical system admits a Lie group structure, then the reduced Hamiltonian dynamics can be realized on the dual space of the Lie algebra \cite{AbMa78,Av, LiMa12, MaRa13}, since this space carries a natural Poisson structure, called the Lie-Poisson bracket. Many physical systems fit into this geometry, such as the rigid body, fluid and plasma theories \cite{Ho08,HoScSt09}. Couplings of two different characters of a physical system, such as fluid motion under an EM field or rigid body motion under gravity \cite{Rati80}, are particular instances of coupled systems where only one-sided interaction is allowed. Such systems have been studied, in the literature, by the semidirect product theory, which has been successfully established both in Lagrangian dynamics, see for example \cite{Cendra98,marsden1998symplectic}, and in Hamiltonian dynamics, see for example \cite{marsden1984semidirect,marsden1984reduction}, see also \cite{VaPaEs20}. \textbf{Double cross sum (matched pair) Lie groups.} Semidirect product Lie group constructions allow only one-sided actions. As such, mutual actions are beyond the scope of the semidirect product theory. On the other hand, the double cross sum (matched pair) theory of \cite{Ma90, Ma902, Ma20} is built over two Lie groups with mutual actions, subject to certain compatibility conditions. As a result, the Cartesian product of two such Lie groups becomes a Lie group by itself, through a suitable multiplication built over the mutual actions, called the ``double cross sum'' of these Lie groups \cite{Ma90, Ma902, Ma20}. A double cross sum Lie group, then, contains the constitutive Lie groups as trivially intersecting Lie subgroups. Historically, generalizations of the semidirect product theory may be traced back to \cite{Sz50,Sz58,szep1962sulle,Za42} under the name of the Zappa-Sz\'ep product, see also \cite{Br05}. The idea was revived later in \cite{Mack72}, as the ``product of subgroups'' (and as the ``double Lie group'' in \cite{LuWe90}), from the representation theory point of view. However, the name ``matched pair'' was first used in \cite{Sing70,Sing72} for Hopf algebra extensions. A pair of groups was referred to as a ``matched pair'', for the first time, in \cite{Ta81} as yet another incarnation of the intimate relation between Hopf algebras and groups. We add that double cross sum (matched pair) Lie algebras are studied in \cite{KoMa88} under the name of ``twilled extension''. See \cite{OS21} for more details and a list of applications of the theory both in mathematics and physics. \textbf{Dynamics on double cross sums (matched pairs).} Double cross sums (matched pairs) of Lie groups (algebras) provide a promising geometrical framework for studying the collective motion of two mutually interacting physical systems.
However, the Hamiltonian (Lie-Poisson) theory over double cross sum groups (and their double cross sum Lie algebras) was developed only recently in \cite{OS17}. This Hamiltonian matched pair theory has immediately found applications in kinetic theory \cite{EsPaGr17} and fluid theories \cite{EsGRGuPa19}. The Lagrangian counterpart of the double cross sums has been developed for the first order Euler-Poincar\'{e} theories in \cite{OS16}, and for the higher order theories in \cite{EsKuSu20}. For discrete Lagrangian dynamics in the framework of Lie groupoids, the matched pair approach has found applications in \cite{OS21}. Evidently, one may also use the matched pair strategy in the opposite direction, to decouple a physical system into two of its interacting subsystems. One may then study the constitutive subsystems in order to examine the whole system. In \cite{OS20}, it has been shown that non-relativistic collisionless Vlasov's plasma can be written as a non-trivial double cross sum (matched pair) of two of its subdynamics. One of the constitutive subdynamics is the compressible Euler fluid, while the other is the kinetic moments of order $>2$ of the plasma density function. In this exhibition, the double cross sum (matched pair) decomposition is in harmony with the nature of the physics as well. One of the motivations of this study is to find a proper algebraic/geometric framework in which to decompose the dynamics of Vlasov's kinetic moments at any order (see \cite{GiHoTr,GiHoTr08} for the geometry of kinetic moments). Such an attempt fails even within matched pair Hamiltonian dynamics, since the graded character of the underlying Lie algebra admits only two Lie subalgebra decompositions. One is to cut after the zeroth moment, and the other is to cut after the first moment. Any other cut yields a Lie subalgebra with a complementary space which is merely a subspace. Even though the higher order cuts attract considerable attention, the literature seems to contain a gap on this issue. Regarding the cut from the second moment, we refer the reader to \cite{EsGRGuPa19} for the $10$-moment kinetic theory, which paves the way towards the whole Grad hierarchy \cite{grad1965boltzmann} including the entropic moments \cite{grmela2017hamiltonian}. We also refer to an incomplete list \cite{Le96,PeChMoTa15,Pe90} for related works on the kinetic moments. We shall address this issue as the first goal of this paper. Since our approach is purely algebraic/geometric, we shall need to determine a proper Lie algebra extension before we dive into the details of this goal. \textbf{Extended structures.} Lie algebra actions are not the only sources from which to couple two Lie algebras. More generally, Lie algebra extensions may also be formed by weak actions and (twisted) 2-cocycles. An ``extended structure'', introduced in \cite{Agormili14, AgoreMili-book}, provides a method to couple a Lie algebra and a vector space in a universal way. More precisely, a Lie algebra and a vector space are coupled using the action of the Lie algebra on the vector space, and the corresponding weak \textit{action} (encoded by a twisted 2-cocycle) of the linear space on the Lie algebra. From the decomposition point of view, this corresponds to the decomposition of a Lie algebra into a Lie subalgebra and its vector space complement. In particular, if the complementary space happens to be a Lie subalgebra, then the extended structure reduces to a double cross sum.
Postponing the details to Section \ref{Sec-MP}, we shall for now be content with the following diagram. \begin{equation} \xymatrix{ &\text{Extended Structure} \ar[ddl]|{\text{twisted cocycle is zero}} \ar[ddr]|{\text{one of the actions is zero}} \\ \\ \underset{\text{(Matched Pairs)}}{\text{Double Cross Sums}} && \text{2-cocycle Extensions}} \end{equation} That is, the theory of extended structures accommodates both the matched pair theory and the theory of 2-cocycle extensions as particular instances. \textbf{The first goal of the present work.} For the semidirect product Hamiltonian theory, central extensions have already been studied; see, for example, \cite{MaMiPe98}. But such an extension is missing for the matched pair Hamiltonian theory. In accordance with this observation, the first goal of the present work is to fill this gap with a theory of Hamiltonian (Lie-Poisson) dynamics on the dual of \textit{extended structures}. This will be achieved in Section \ref{LP-Ext-Sec}. The rich geometry of extended structures enables us to couple a Lie-Poisson bracket with some external variables interacting in all possible ways. From the decomposition point of view, this corresponds to the decomposition of a Lie-Poisson dynamics into one of its Lie-Poisson subdynamics and a complementary subsystem, which is not necessarily Lie-Poisson. So, the Lie-Poisson bracket and the Lie-Poisson equations provided in Subsection \ref{2-LP-Coec} are evidently applicable to any Lie-Poisson model. In other words, they determine the most general form of the decomposition. We remark that, in the matched pair Hamiltonian dynamics, the constitutive systems must be Lie-Poisson by themselves. In other words, the matched pair Hamiltonian dynamics now turns out to be a particular instance of the extended Hamiltonian dynamics in Subsection \ref{2-LP-Coec}. The extended Hamiltonian framework in Subsection \ref{2-LP-Coec} fits very well with the cuts at order $\geq 2$ of the kinetic moments of the density function of Vlasov's plasma. So it provides an algebraic/geometric answer to the problem motivating the first goal. This decomposition is now more or less direct in the realm of the work \cite{OS20}. We shall, on the other hand, postpone it to a future work where we plan to discuss the geometry with further physical intuition. As an illustration of the model in Subsection \ref{2-LP-Coec}, in the present work, we address another phenomenon in plasma dynamics, namely all possible decompositions of the (3-particle) BBGKY (Bogoliubov-Born-Green-Kirkwood-Yvon) hierarchy \cite{Ha}. Accordingly, in Section \ref{examples}, we shall examine this concrete model in full detail in order to make the novel Lie-Poisson structures introduced in this paper clearer. In \cite{marsden1984hamiltonian}, it was proved that the BBGKY hierarchy can be recast as a Lie-Poisson equation for $n>3$ particles. Focusing on the case $n=3$, which is missing in \cite{marsden1984hamiltonian}, we shall present two decompositions of the BBGKY dynamics: the matched pair (double cross sum) decomposition, and the extended structure decomposition. This way, we shall also be able to compare the two approaches. These decompositions precisely determine the relationship between the moments of the $3$-particle plasma density function. Without the extended Hamiltonian dynamics, on the other hand, the latter decomposition would not be possible.
\textbf{The second goal of the present work.} Being Lie algebras, 2-cocycle extensions of Lie algebras may form matched pairs along with properly coupled 2-cocycle terms. As the second goal of this note, we shall derive, in Section \ref{Sec-Cop-co}, the conditions for a matched pair of 2-cocycle extensions to be a 2-cocycle extension of the matched pair. We record the result in Proposition \ref{Prop-Gokhan}. In addition, the Lie-Poisson dynamics on the coupled system will be presented. The geometric framework, and the dynamical equations, will be illustrated in Section \ref{Sec-3D} via two copies of the Heisenberg algebra. It is well known that the symplectic two-form on the two-dimensional Euclidean space can be used to determine an extended Lie algebra structure on the three-dimensional Euclidean space. This results in the Heisenberg algebra in $3D$ \cite{Ha15}. Being a nilpotent Lie algebra of class two, the Heisenberg algebra may be matched with itself \cite{Ma20}. This provides an application of the extended Hamiltonian theory as well as of Proposition \ref{Prop-Gokhan}, and results in physically interesting equations on the dual spaces. The coadjoint orbits of the Heisenberg Lie algebra carry the canonical Hamiltonian formalism. Thus, the coupling of two Heisenberg algebras provides a coupling of two canonical Hamiltonian systems under mutual interaction, which was also missing in the literature. \textbf{Coupling of dissipative systems.} If a dynamical system is in Hamiltonian form, then, as a result of the skew-symmetry of the Poisson bracket, the Hamiltonian function is a conserved quantity \cite{MaRa13}. This geometric fact corresponds to the conservation of energy when applied to some physical problems. The time-reversal character of Hamiltonian dynamics rests basically on this observation. Systems violating the time-reversal property, therefore, cannot be put into Hamiltonian form. Nevertheless, one may add a (Rayleigh-type) dissipative term to the Lie-Poisson dynamics by means of a linear operator from the dual space to the Lie algebra \cite{bloch96}. This naive strategy works very well for many physical problems. More generally, and perhaps more geometrically, one may add an additional structure to the manifold. There are methods in the literature to achieve this. \textbf{GENERIC - Metriplectic systems.} In the early '80s, some extensions of Poisson geometry were introduced independently in order to add dissipative terms to Hamiltonian formulations (see Subsection \ref{Sec-Generic}). These geometries are known today under the names of metriplectic dynamics \cite{Ka84,Ka85,Mo84,Mo86} or GENERIC (General Equation for Non-Equilibrium Reversible Irreversible Coupling) \cite{Gr84,GrOt97,Ot05}. In metriplectic systems, the geometry is determined by two compatible brackets, namely a Poisson bracket and a (possibly semi-Riemannian) symmetric bracket. In GENERIC, which is more general \cite{Mielke2011}, a dissipation potential is employed in order to arrive at the irreversible part of the dynamics. Accordingly, the Legendre transformation of the dissipation potential determines the time-irreversible part of the dynamics \cite{PaGr18,Gr2018}. If the potential is quadratic, then one arrives at a symmetric bracket as a particular case. One of the problems in this coupling is to determine a proper symmetric bracket, or a dissipation potential, compatible with the Poisson geometry (see Subsection \ref{Sec-Sym-Bra}).
In the present work, we shall refer to geometric ways to obtain symmetric brackets, such as the double bracket \cite{Br88,Br93,Br94}, the Cartan-Killing bracket \cite{Mo09}, and the Casimir dissipation bracket \cite{GB13}. We are interested in these geometries since, in the Lie-Poisson framework, they may be defined directly by the Lie algebra bracket in an algorithmic way. We shall study extensions/couplings of the symmetric brackets as well as of the dissipative systems. The latter is the third goal here. \textbf{The third goal of the present work.} In order to present a complete picture, we shall study, in Section \ref{Sec-C-DS}, the couplings of the dissipative terms which are added to the Lie-Poisson dynamics; this is the third goal of the present work. In other words, our third goal is to couple two metriplectic systems under mutual actions. First, we aim to provide a way to couple two mutually interacting systems involving Rayleigh-type dissipative terms. Later, we present couplings of the double brackets, the Cartan-Killing brackets, and the Casimir dissipation brackets. As for the coupling problem, our emphasis shall be on $3D$ systems. Accordingly, we present two illustrations. One, given in Section \ref{Diss-Gen-Exp}, is on rigid body dynamics. We shall provide couplings of both reversible and irreversible rigid body dynamics under mutual interactions. From the decomposition point of view, this corresponds to the Iwasawa decomposition of $SL(2,\mathbb{C})$. The other, in Section \ref{Sec-3D}, continues with the Heisenberg algebra, endowing the geometry with dissipation. \textbf{Contents.} In the following section, we exhibit a brief summary of the preliminaries for the sake of the completeness of the work. These include Hamiltonian dynamics and metriplectic dynamics. Section \ref{Sec-MP} is reserved for the presentation of the extended structure and its particular instances, such as matched pairs of Lie algebras and 2-cocycle extensions. In Section \ref{LP-Ext-Sec}, Lie-Poisson dynamics is studied on the dual space of the extended structure. In Section \ref{examples}, we decompose the BBGKY dynamics as an illustration of the theoretical results obtained in the previous sections. In Section \ref{Sec-Cop-co}, we determine the conditions for a matched pair of 2-cocycles to be a 2-cocycle of a matched pair. Couplings of the symmetric brackets are given in Section \ref{Sec-C-DS}. In Sections \ref{Sec-3D} and \ref{Diss-Gen-Exp}, $3D$ examples are provided. \section{Fundamentals: Lie-Poisson Dynamics, and Dissipations} \subsection{Hamiltonian Dynamics} Consider a Poisson manifold $\left( P,\{\bullet , \bullet \}\right)$ \cite{LaPi12,Va12}. On this geometry, Hamilton's equation generated by a Hamiltonian function(al) $\mathcal{H}$ is defined to be \begin{equation} \dot{\textbf{z}}=\{\textbf{z},\mathcal{H}\}, \end{equation} where $\mathbf{z}$ is in $P$. The Hamiltonian vector field $X_{\mathcal{H}}$ of a Hamiltonian function(al) $\mathcal{H}$ is defined as follows: \begin{equation} X_{\mathcal{H}}(\mathcal{F})=\{\mathcal{F},\mathcal{H}\}. \end{equation} A function(al) $\mathcal{C}$ is called a Casimir function(al) if it commutes with all other function(al)s, that is, $\{\mathcal{F},\mathcal{C}\}=0$ for all $\mathcal{F}$. If there does not exist any non-constant Casimir function(al) for a Poisson bracket, then we say that the Poisson bracket is non-degenerate.
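As a minimal numerical illustration of these definitions, the following Python sketch realizes the canonical (hence non-degenerate) Poisson structure on $\mathbb{R}^2$, for which the bivector is the standard symplectic matrix, and evaluates the Hamiltonian vector field of the harmonic oscillator; the Hamiltonian and the explicit Euler step are chosen purely for illustration.
\begin{verbatim}
import numpy as np

# Canonical Poisson structure on R^2 with coordinates z = (q, p):
# {F, H}(z) = (grad F)^T J (grad H), with J the symplectic matrix.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def hamiltonian_vector_field(grad_H, z):
    # X_H(z) = J grad H(z), so that z_dot = {z, H}
    return J @ grad_H(z)

# Harmonic oscillator H = (q^2 + p^2)/2, so grad H(z) = z.
grad_H = lambda z: z

z = np.array([1.0, 0.0])
z = z + 0.01 * hamiltonian_vector_field(grad_H, z)  # one Euler step
print(z)   # approximately (1.0, -0.01)
\end{verbatim}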
It should be noted that the Hamiltonian vector field generated by a Casimir function(al) $\mathcal{C}$ is identically zero. The characteristic distribution, that is, the image space of all Hamiltonian vector fields, is integrable. This yields a foliation of $P$ into a collection of symplectic leaves \cite{wei83}; that is, on each leaf, the Poisson bracket turns out to be non-degenerate. If the bracket is already non-degenerate on $P$, then there exists only one leaf, and $P$ turns out to be a symplectic manifold. The skew-symmetry of the Poisson bracket guarantees that the Hamiltonian function(al) is preserved throughout the motion. Since the Hamiltonian function(al) is taken to be the total energy in classical systems, we may call this property the conservation of energy. This manifests the reversible character of Hamiltonian dynamics. Referring to the Poisson bracket, we define a bivector field $\Lambda$ as follows: \begin{equation} \label{bivec-PoissonBra} \Lambda(d\mathcal{F},d\mathcal{H}):=\{\mathcal{F},\mathcal{H}\} \end{equation} for all $\mathcal{F}$ and $\mathcal{H}$ \cite{DuZu06}. Here, $d\mathcal{F}$ and $d\mathcal{H}$ denote the de Rham exterior derivatives. Thus, we may alternatively introduce a Poisson manifold as a tuple $(P,\Lambda)$ consisting of a manifold and a bivector field. There is a Schouten-Nijenhuis algebra on multivector fields \cite{deAzPeBu96}. In this picture, the Jacobi identity turns out to be the commutation of $\Lambda$ with itself under the Schouten-Nijenhuis bracket, that is, \begin{equation} \label{Poisson-cond} [\Lambda,\Lambda]=0. \end{equation} \textbf{Lie-Poisson systems.} Consider a Lie algebra $\mathfrak{K}$ equipped with a Lie bracket $[\bullet ,\bullet ]$ \cite{Ja79}. The dual space $\mathfrak{K}^{\ast }$ admits a Poisson bracket, called the Lie-Poisson bracket \cite{Ho08,HoScSt09,demethods2011,LiMa12,MaRa13}. For two function(al)s $\mathcal{F}$ and $\mathcal{H}$, the (plus/minus) Lie-Poisson bracket is defined to be \begin{equation} \label{LP-Bra} \{\mathcal{F},\mathcal{H}\} ( \textbf{z} ) = \pm \Big\langle \textbf{z} ,\left[ \frac{\delta \mathcal{F}}{ \delta \textbf{z}},\frac{\delta \mathcal{H}}{\delta \textbf{z} }\right]\Big\rangle \end{equation} where $\delta \mathcal{F}/\delta \textbf{z} $ is the partial derivative (for infinite dimensional cases, the Fr\'{e}chet derivative) of the function(al) $\mathcal{F}$. Here, the pairing on the right hand side is the duality between $\mathfrak{K}^*$ and $\mathfrak{K}$, whereas the bracket is the Lie algebra bracket on $\mathfrak{K}$. Note that we assume the reflexivity condition on $\mathfrak{K}$, that is, the double dual $\mathfrak{K}^{**}=\mathfrak{K}$. The dynamics of an observable $\mathcal{F}$, governed by a Hamiltonian function(al) $\mathcal{H}$, is then computed to be \begin{equation}\label{aaa} \dot{\mathcal{F}}=\{\mathcal{F},\mathcal{H}\} ( \textbf{z} ) = \pm \Big\langle \textbf{z} ,\left[ \frac{\delta \mathcal{F}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{ \delta \textbf{z}}\right] \Big\rangle =\pm \Big\langle \textbf{z} ,-\mathop{\rm ad}\nolimits_{\delta \mathcal{H}/ \delta \textbf{z}}\frac{\delta \mathcal{F}}{\delta \textbf{z} }\Big\rangle = \pm \Big\langle \mathop{\rm ad}\nolimits_{\delta \mathcal{H}/ \delta \textbf{z}}^{\ast }\textbf{z} ,\frac{\delta \mathcal{F}}{\delta \textbf{z} } \Big\rangle.
\end{equation} Here, $\mathop{\rm ad}\nolimits_\textbf{x} \textbf{x}':=[\textbf{x},\textbf{x}']$, for all $\textbf{x}$ and $\textbf{x}'$ in $\mathfrak{K}$, is the (left) adjoint action of the Lie algebra $\mathfrak{K}$ on itself, whereas $\mathop{\rm ad}\nolimits^*$ is the (left) coadjoint action of the Lie algebra $\mathfrak{K}$ on the dual space $\mathfrak{K}^*$. Notice that $\mathop{\rm ad}\nolimits^*_\textbf{x}$ is defined to be minus the linear algebraic dual of $\mathop{\rm ad}\nolimits_\textbf{x}$. Then, we obtain the equation of motion governed by a Hamiltonian function(al) $\mathcal{H}$ as \begin{equation} \label{eqnofmotion} \dot{\textbf{z}} \mp \mathop{\rm ad}\nolimits_{{\delta \mathcal{H}}/{\delta \textbf{z} }}^{\ast }\textbf{z}=0. \end{equation} \begin{remark} There are plus/minus signs in \eqref{LP-Bra} and \eqref{aaa}. A plus sign appears if the reduction (Lie-Poisson reduction) is performed referring to a right symmetry, whereas a minus sign appears if the reduction is performed referring to a left symmetry. For the plasma dynamics (see Section \ref{examples}), we shall refer to the plus Lie-Poisson bracket since Vlasov's plasma has a right (called relabelling) symmetry. For finite-dimensional rigid body motion (see Section \ref{Diss-Gen-Exp}), we shall employ the minus Lie-Poisson bracket. \end{remark} \textbf{Coordinate realizations.} Assume a (local) coordinate chart $(z_i)$ (we prefer subscripts since we only focus on the dual spaces) around a point $\textbf{z}$ in $P$. Then the Poisson bivector can be represented by a set of coefficient functions $\Lambda_{ij}$ determining a Poisson bracket as \begin{equation} \label{PB-local} \{\mathcal{F},\mathcal{H}\}=\Lambda_{ij} \frac{\partial \mathcal{F}}{\partial z_i}\frac{\partial \mathcal{H}}{\partial z_j}. \end{equation} Then the equation of motion generated by a Hamiltonian function $\mathcal{H}$ becomes \begin{equation} \dot{z}_i=\Lambda_{ij} \frac{\partial \mathcal{H}}{\partial z_j}. \end{equation} Let us examine the Lie-Poisson structure defined on the dual of a finite dimensional Lie algebra. For this, assume a $K$-dimensional Lie algebra $\mathfrak{K}$ admitting a basis $\{\textbf{k}_{i}\}=\{\textbf{k}_{1},\dots, \textbf{k}_{K}\}$. The Lie algebra bracket on $\mathfrak{K}$ determines a set of scalars $C_{ij}^{l}$, called structure constants, satisfying \begin{equation}\label{cerceve} [\textbf{k}_{i},\textbf{k}_{j}]=C_{ij}^{m}\textbf{k}_{m}, \end{equation} where the summation convention is assumed over repeated indices. Note that, after fixing a basis, the structure constants define a Lie bracket in a unique way. One has the dual basis $\{\textbf{k}^i\}=\{\textbf{k}^{1},\dots, \textbf{k}^{K}\}$ on the dual space $\mathfrak{K}^*$. We denote an element of $\mathfrak{K}^*$ by $\textbf{z}=z_i\textbf{k}^i$, the coordinates $\{z_1, \dots,z_K\}$ being real numbers. The (plus/minus) Lie-Poisson bracket \eqref{LP-Bra} can be computed in this picture as \begin{equation} \label{LP-loc} \{\mathcal{F},\mathcal{H}\}= \pm C^{n}_{ij} z_{n} \frac {\partial \mathcal{F}}{\partial z_{i}} \frac {\partial \mathcal{H}}{\partial z_{j}}. \end{equation} The calculation in (\ref{LP-loc}) shows that the coefficients $\Lambda_{ij}$ of the (plus/minus) Poisson bivector (\ref{PB-local}) are determined through the linear relations \begin{equation} \label{SC-LP} \Lambda_{ij}=\pm C_{ij}^mz_m.
\end{equation} In this case, the Lie-Poisson equations \eqref{eqnofmotion} are computed to be \begin{equation} \label{loc-LP-Eqn} \dot{z}_j\mp C^{n}_{ij} z_{n}\frac{\partial \mathcal{H}}{\partial z_i }=0. \end{equation} \textbf{Rayleigh dissipation.} Let us present a simple way to add dissipation to Lie-Poisson dynamics. Define a linear transformation $\Upsilon $ from $\mathfrak{K}^*$ to $\mathfrak{K}$. We equip the right hand side of the Lie-Poisson system \eqref{eqnofmotion} with a dissipative term by simply adding $\mp \mathop{\rm ad}\nolimits^*_{\Upsilon(\textbf{z})} \textbf{z}$, that is, \begin{equation}\label{RD-Eqn} \dot{\textbf{z}}\mp \mathop{\rm ad}\nolimits^*_{\delta \mathcal{H} / \delta \textbf{z}} \textbf{z} = \mp \mathop{\rm ad}\nolimits^*_{\Upsilon(\textbf{z})} \textbf{z}, \end{equation} see \cite{bloch96}. We ask that $\Upsilon$ be a gradient relative to a certain metric, at least on adjoint orbits. In the upcoming subsection, we introduce a geometric framework for obtaining dissipation in the Lie-Poisson setting. \subsection{GENERIC (Metriplectic) Systems} \label{Sec-Generic} In order to add dissipative terms to Hamiltonian dynamics, two geometric models are addressed in the literature, namely metriplectic systems \cite{Ka84,Ka85,Mo84,Mo86} and GENERIC (an acronym for General Equation for Non-Equilibrium Reversible-Irreversible Coupling) \cite{GrOt97,Gr2018}. In metriplectic systems, Poisson geometry is equipped with a proper symmetric bracket (possibly semi-Riemannian). In GENERIC, which is more general \cite{Mielke2011}, a dissipation potential is employed. For convex potential functions, after the Legendre transformation, one arrives at the irreversible part of the dynamics \cite{PaGr18,Gr2018}. If the potential is quadratic, then one arrives at a symmetric bracket as a particular case. In this work, we are interested in dissipative dynamics defined through symmetric brackets \cite{Otto2001}. Let us depict this geometry in detail. Consider a Poisson manifold $(P,\{\bullet,\bullet\})$ and assume, additionally, a symmetric bracket $(\bullet,\bullet)$ on the space of smooth functions on $P$. The metriplectic bracket $[\vert\bullet,\bullet\vert]$ on the manifold $P$ is defined by the addition of the Poisson bracket and the symmetric bracket, that is, for two function(al)s $\mathcal{H}$ and $\mathcal{F}$, \begin{equation}\label{MB} \left[\vert \mathcal{H},\mathcal{F} \vert\right] = \{\mathcal{H},\mathcal{F}\} + a (\mathcal{H},\mathcal{F}), \end{equation} where $a$ is a scalar. Note that a metriplectic bracket is an example of a Leibniz bracket \cite{OrPl04}. There is no unique way to define a symmetric bracket. One way is to introduce a (possibly semi-)Riemannian metric $\mathcal{G}$ on $P$. After a bracket is established, the next task is to determine the generating function(al)s. In accordance with this, we distinguish two different kinds of GENERIC (metriplectic) systems \cite{Gu07}. In the first kind of GENERIC (metriplectic) systems, one refers to a single function(al) $\mathcal{F}$ to generate the equations of motion. The dynamics is then given as \begin{equation}\label{MD1} \dot{\textbf{z}}=\left[\vert \textbf{z},\mathcal{F}\vert\right]=\{\textbf{z},\mathcal{F} \}+ a (\textbf{z},\mathcal{F}) \end{equation} for $\textbf{z}\in P$. In particular, by choosing the metric $\mathcal{G}$ positive definite and letting $a$ be equal to $-1$, one arrives at the dissipation of the generating function(al) $\mathcal{F}$ in time.
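Before proceeding, let us illustrate the coordinate expressions above with a small Python sketch that evaluates the local Lie-Poisson equations \eqref{loc-LP-Eqn} for the structure constants of $\mathfrak{so}(3)$; with the plus sign this reproduces the familiar rigid-body form $\dot{\textbf{z}}=\textbf{z}\times \nabla\mathcal{H}$. The inertia values are hypothetical, and the sign is left as a parameter to match the plus/minus convention of the text.
\begin{verbatim}
import numpy as np

# Structure constants of so(3): C[i, j, k] = epsilon_{ijk}.
C = np.zeros((3, 3, 3))
for (i, j, k), s in {(0, 1, 2): 1.0, (1, 2, 0): 1.0, (2, 0, 1): 1.0,
                     (1, 0, 2): -1.0, (2, 1, 0): -1.0,
                     (0, 2, 1): -1.0}.items():
    C[i, j, k] = s

def lie_poisson_rhs(z, grad_H, sign=+1.0):
    # z_dot_j = (+/-) C^n_{ij} z_n dH/dz_i, cf. the local equations
    dH = grad_H(z)
    return sign * np.einsum('ijn,n,i->j', C, z, dH)

# Free rigid body: H = sum z_i^2 / (2 I_i), hypothetical inertias I.
I = np.array([1.0, 2.0, 3.0])
grad_H = lambda z: z / I

z = np.array([1.0, 0.5, 0.2])
print(lie_poisson_rhs(z, grad_H))   # equals np.cross(z, grad_H(z))
\end{verbatim}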
In the second kind of GENERIC (metriplectic) systems, there exist two function(al)s, namely a Hamiltonian function(al) $\mathcal{H}$ and an entropy-type function(al) $\mathcal{S}$. This time, the dynamics is written as \begin{equation}\label{MD2} \dot{\textbf{z}}=\{\textbf{z},\mathcal{H} \} + a (\textbf{z},\mathcal{S}). \end{equation} If the identities \begin{equation}\label{C1} \{\mathcal{S},\mathcal{H}\}= 0, \qquad (\mathcal{H},\mathcal{S})= 0 \end{equation} hold, the metriplectic dynamics \eqref{MD2} can be generated by a single function (the free energy function) $\mathcal{F}=\mathcal{H}-\mathcal{S}$, defined as the difference of the Hamiltonian and entropy-type function(al)s. For such systems, the Hamiltonian function(al) $\mathcal{H}$ is a conserved quantity, whereas the dissipative behavior of the system is interpreted as the increase of entropy along trajectories, assuming that $a$ is positive. This case is possible only if the Poisson structure is degenerate and the symmetric tensor $\mathcal{G}$ is at most semi-definite. In the following subsection, we introduce some examples of symmetric brackets that can be attached to the Lie-Poisson bracket. \subsection{Some Symmetric Brackets on the Duals of Lie algebras} \label{Sec-Sym-Bra} In this subsection, we list some symmetric brackets available on the dual $\mathfrak{K}^*$ of a Lie algebra $\mathfrak{K}$. After a symmetric bracket, say $(\bullet,\bullet)$, is determined, the irreversible dynamics governed by a generating function, say $\mathcal{S}$, is computed to be \begin{equation} \label{diss-dyn-eq} \dot{\textbf{z}}=a(\textbf{z},\mathcal{S}), \end{equation} where $a$ is a real number. Assuming a basis $\{\mathbf{k}_i\}$, and the corresponding local coordinates $\{z^i\}$ on $\mathfrak{K}$, the primary goal in this subsection is to define a symmetric tensor field \begin{equation} \mathcal{G}=\mathcal{G}_{ij} dz^i\otimes dz^j \end{equation} on $\mathfrak{K}$. In the dual space $\mathfrak{K}^*$, we employ the dual basis $\{\textbf{k}^{i}\}$ and the coordinates $\{z_i\}$. In the Lie-Poisson picture there are two distinguished classes of function(al)s, namely Hamiltonians and Casimirs, and the symmetric brackets below are built out of them. Here is a list: \textbf{Double bracket.} Recall the structure of a Lie algebra $\mathfrak{K}$ given in \eqref{cerceve}. In the Lie-Poisson setting, the coefficients of the Poisson bivector are determined by the structure constants of the Lie algebra as in \eqref{SC-LP}, that is, $\Lambda_{ij}=C_{ij}^lz_l$ \cite{Mo09}. For two functions $\mathcal{F}$ and $\mathcal{H}$, we define a symmetric bracket, usually called the double bracket, \begin{equation}\label{doubledissi} (\mathcal{F},\mathcal{H})^{(D)}= \sum_{j} \Lambda_{ij} \Lambda_{lj}\frac{\partial \mathcal{F}}{\partial z_i} \frac{\partial \mathcal{H}}{\partial z_l} = \sum_{j}C_{ij}^rC_{lj}^s z_r z_s \frac{\partial \mathcal{F}}{\partial z_i} \frac{\partial \mathcal{H}}{\partial z_l}, \end{equation} see \cite{Br93}. So the coefficients of the symmetric bracket can be written in terms of the structure constants of the Lie algebra as follows \begin{equation} \mathcal{G}_{ij}=\sum_{l}C_{il}^rC_{jl}^s z_r z_s. \end{equation} Now we define a metriplectic bracket on $\mathfrak{K}^*$ by adding the Lie-Poisson bracket \eqref{LP-loc} and the double bracket \eqref{doubledissi}, that is, \begin{equation} \dot{\mathcal{F}}=[\vert\mathcal{F},\mathcal{H}\vert]^{(D)}=\{\mathcal{F},\mathcal{H}\}+a (\mathcal{F},\mathcal{H})^{(D)}.
\end{equation} According to the definition \eqref{MD1}, we compute the equation of motion as \begin{equation} \label{aaaa} \dot{z}_j \mp C^{m}_{ij} z_{m}\frac{\partial \mathcal{H}}{\partial z_i }= a\sum_{i} C_{ji}^r C_{mi}^n z_r z_n \frac{\partial \mathcal{H}}{\partial z_m}, \end{equation} where on the left-hand side we have the reversible Hamiltonian dynamics, whereas the dissipative term is located on the right-hand side. \textbf{Cartan-Killing bracket.} Consider a Lie algebra given in coordinates as in \eqref{cerceve}. Referring to the skew-symmetry of the structure constants, the Cartan-Killing metric is defined as \begin{equation} \label{CK} \mathcal{G}_{i j}=C_{i m}^{n}C_{j n}^{m}. \end{equation} It can easily be shown that the scalars $\mathcal{G}_{i j}$ define a symmetric and bilinear covariant tensor \cite{Mo09}. We define a symmetric bracket for functions $\mathcal{F}$ and $\mathcal{H}$ in terms of the metric as follows \begin{equation} \label{loc-car} (\mathcal{F},\mathcal{H})^{(CK)}= \frac{\partial \mathcal{F}}{\partial z_{i}}\mathcal{G}_{ij} \frac{\partial \mathcal{H}}{\partial z_{j}}=C_{i j}^{n}C_{m n}^{j}\frac{\partial \mathcal{F}}{\partial z_{i}}\frac{\partial \mathcal{H}}{\partial z_{m}}. \end{equation} To arrive at a metriplectic bracket on $\mathfrak{K}^*$, we add the Lie-Poisson bracket \eqref{LP-loc} and the symmetric bracket \eqref{loc-car}, that is, \begin{equation} \dot{\mathcal{F}}=[\vert\mathcal{F},\mathcal{H}\vert]^{(CK)}=\{\mathcal{F},\mathcal{H}\}+a (\mathcal{F},\mathcal{H})^{(CK)}. \end{equation} Accordingly, the metriplectic dynamics is computed to be \begin{equation} \begin{split} \dot{z}_j \mp C^{m}_{ij} z_{m}\frac{\partial \mathcal{H}}{\partial z_i }=aC_{j i}^{n}C_{m n}^{i} \frac{\partial \mathcal{H}}{\partial z_{m}}, \end{split} \end{equation} where, on the left-hand side, we have the reversible Hamiltonian dynamics whereas, on the right-hand side, the dissipative term is exhibited. \textbf{Casimir dissipation bracket.} Define a symmetric bilinear operator $\psi $ on the Lie algebra $\mathfrak{K}$. Referring to a Casimir function $\mathcal{C}$ of the Lie-Poisson bracket, we define a symmetric bracket \cite{GB13} of two functions $\mathcal{F}$ and $\mathcal{H}$ as \begin{equation}\label{ccs} (\mathcal{F},\mathcal{H})^{(CD)}=-\psi \left( \left[ \frac{\delta \mathcal{F}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{\delta \textbf{z} }\right] ,\left[ \frac{\delta \mathcal{C}}{\delta \textbf{z}},\frac{\delta \mathcal{H}}{\delta \textbf{z}}\right] \right). \end{equation} This bracket is suitable for the second kind of metriplectic system. Note that the Hamiltonian function is conserved in time, whereas the change of the Casimir function is given as \begin{equation*} \dot{\mathcal{C}} =-\psi \left( \left[ \frac{ \delta \mathcal{C}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{\delta \textbf{z} }\right] ,\left[ \frac{ \delta \mathcal{C}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{\delta \textbf{z} }\right] \right)=-\left\Vert \left[ \frac{\delta \mathcal{C}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{\delta \textbf{z} }\right] \right\Vert ^{2}\leq 0.
\end{equation*} The dynamics of an arbitrary observable $\mathcal{F}$ governed by a Hamiltonian function $\mathcal{H}$ is deduced from this bracket as \begin{equation} \begin{split} \dot{\mathcal{F}}=-\psi \left( \left[ \frac{\delta \mathcal{F}}{\delta \textbf{z} },\frac{ \delta \mathcal{H}}{\delta \textbf{z} }\right] ,\left[ \frac{\delta \mathcal{C}}{\delta \textbf{z} },\frac{ \delta \mathcal{H}}{\delta \textbf{z} }\right] \right) = \left\langle \left[ \frac{\delta \mathcal{C}}{\delta \textbf{z} },\frac{ \delta \mathcal{H}}{\delta \textbf{z} }\right]^{\flat} ,\mathop{\rm ad}\nolimits_{{\delta \mathcal{H}}/{\delta \textbf{z} }}\frac{ \delta \mathcal{F}}{\delta \textbf{z} } \right\rangle &=\left\langle -\mathop{\rm ad}\nolimits^*_{{\delta \mathcal{H}}/{\delta \textbf{z} }}\left[ \frac{\delta \mathcal{C}}{\delta \textbf{z} },\frac{ \delta \mathcal{H}}{\delta \textbf{z} }\right]^\flat , \frac{ \delta \mathcal{F}}{\delta \textbf{z} } \right\rangle. \end{split} \end{equation} Here, the musical mapping $\flat$, from $\mathfrak{K}$ to $\mathfrak{K}^{\ast }$, is defined through the symmetric operator $\psi $, satisfying the identity $\left\langle \textbf{x}^{\flat},\textbf{x}' \right\rangle =\psi \left( \textbf{x},\textbf{x}' \right) $ for two elements $\textbf{x}$ and $\textbf{x}'$ in $\mathfrak{K}$. In this case, the dissipative equation of motion can be written as follows \begin{equation} \label{DB-Dyn} \dot{\textbf{z}}=-\mathop{\rm ad}\nolimits_{\frac{\delta \mathcal{H}}{\delta \textbf{z}}}^{\ast }\left[ \frac{\delta \mathcal{C}}{\delta \textbf{z}},\frac{\delta \mathcal{H}}{\delta \textbf{z}}\right] ^{\flat}. \end{equation} We collect the Lie-Poisson bracket \eqref{LP-loc} and the Casimir dissipation bracket \eqref{ccs} together to arrive at the following metriplectic bracket \begin{equation} \dot{\mathcal{F}}=[\vert\mathcal{F},\mathcal{H}\vert]^{(CD)}=\{\mathcal{F},\mathcal{H}\}+a (\mathcal{F},\mathcal{H})^{(CD)}. \end{equation} Then we compute the equation of motion as \begin{equation} \label{MB-DBD-Dyn} \dot{\textbf{z}} \mp \mathop{\rm ad}\nolimits_{\frac{\delta \mathcal{H}}{\delta \textbf{z}}}^{\ast }\textbf{z} =-a\,\mathop{\rm ad}\nolimits_{\frac{\delta \mathcal{H}}{\delta \textbf{z} }}^{\ast }\left[ \frac{\delta \mathcal{C}}{\delta \textbf{z} },\frac{\delta \mathcal{H}}{ \delta \textbf{z} }\right] ^{\flat}. \end{equation} \textbf{Hamilton dissipation bracket.} We start by assuming a symmetric (positive semi-definite) bilinear operator $\psi $ defined on a Lie algebra $\mathfrak{K}$. We fix a Casimir function $\mathcal{C}$ of the Lie-Poisson bracket \eqref{LP-Bra}, and then introduce the following symmetric bracket on the dual space $\mathfrak{K}^*$, for two function(al)s $\mathcal{F}$ and $\mathcal{H}$, given by \begin{equation} \label{HD-sym} (\mathcal{F},\mathcal{H})^{(HD)}=-\psi\Big(\Big[\frac{\delta \mathcal{F}}{\delta \textbf{z}},\frac{\delta \mathcal{C}}{\delta \textbf{z}}\Big],\Big[\frac{\delta \mathcal{H}}{\delta \textbf{z}} ,\frac{\delta \mathcal{C}}{\delta \textbf{z}}\Big]\Big), \end{equation} where the brackets on the right-hand side are Lie algebra brackets on $\mathfrak{K}$. An interesting feature of this symmetric bracket is that the Casimir function(al) $\mathcal{C}$ is a conserved quantity for the dynamics determined by the bracket \eqref{HD-sym}, since $\dot{\mathcal{C}}=0$ due to the skew-symmetry of the Lie bracket.
On the other hand, the generating function(al) $\mathcal{H}$ dissipates, that is, \begin{equation*} \dot{\mathcal{H}} =(\mathcal{H},\mathcal{H})^{(HD)}=-\psi \Big( \Big[ \frac{ \delta \mathcal{H}}{\delta \textbf{z} },\frac{\delta \mathcal{C}}{\delta \textbf{z} }\Big] ,\Big[ \frac{ \delta \mathcal{H}}{\delta \textbf{z} },\frac{\delta \mathcal{C}}{\delta \textbf{z} }\Big] \Big)=-\left\Vert \Big[ \frac{\delta \mathcal{H}}{\delta \textbf{z} },\frac{\delta \mathcal{C}}{\delta \textbf{z} }\Big] \right\Vert ^{2} \leq 0. \end{equation*} More generally, the gradient flow of an observable $\mathcal{F}$ generated by a function(al) $\mathcal{H}$ is computed to be \begin{equation} \dot{\mathcal{F}}=-\psi \left( \left[ \frac{\delta \mathcal{F}}{\delta \textbf{z} },\frac{ \delta \mathcal{C}}{\delta \textbf{z} }\right] ,\left[ \frac{\delta \mathcal{H}}{\delta \textbf{z} },\frac{ \delta \mathcal{C}}{\delta \textbf{z} }\right] \right) = \left\langle \left[ \frac{\delta \mathcal{H}}{\delta \textbf{z} },\frac{ \delta \mathcal{C}}{\delta \textbf{z} }\right]^{\flat} ,\mathop{\rm ad}\nolimits_{{\delta \mathcal{C}}/{\delta \textbf{z} }}\frac{ \delta \mathcal{F}}{\delta \textbf{z} } \right\rangle =\left\langle -\mathop{\rm ad}\nolimits^*_{{\delta \mathcal{C}}/{\delta \textbf{z} }}\left[ \frac{\delta \mathcal{H}}{\delta \textbf{z} },\frac{ \delta \mathcal{C}}{\delta \textbf{z} }\right]^\flat , \frac{ \delta \mathcal{F}}{\delta \textbf{z} } \right\rangle. \end{equation} Accordingly, the equation of motion generated by a function(al) $\mathcal{S}$ reads \begin{equation} \label{HDB-Dyn} \dot{\textbf{z}}=-\mathop{\rm ad}\nolimits_{{\delta \mathcal{C}}/{\delta \textbf{z}}}^{\ast }\Big[\frac{\delta \mathcal{S}}{\delta \textbf{z}} ,\frac{\delta \mathcal{C}}{\delta \textbf{z}}\Big] ^{\flat}. \end{equation} Now we are ready to add the Lie-Poisson bracket \eqref{LP-loc} and the symmetric bracket exhibited in \eqref{HD-sym} in order to define a metriplectic (Leibniz) bracket on $\mathfrak{K}^*$. Assuming that both the Lie-Poisson and the gradient dynamics are generated by a single function(al) $\mathcal{H}$, we have \begin{equation} \dot{\mathcal{F}}=[\vert\mathcal{F},\mathcal{H}\vert]^{(HD)}=\{\mathcal{F},\mathcal{H}\}+a (\mathcal{F},\mathcal{H})^{(HD)}. \end{equation} Then we compute the equation of motion as \begin{equation} \label{MB-HD-Dyn} \dot{\textbf{z}} \mp \mathop{\rm ad}\nolimits_{{\delta \mathcal{H}}/{\delta \textbf{z}}}^{\ast }\textbf{z} =-a\,\mathop{\rm ad}\nolimits_{{\delta \mathcal{C}}/{ \delta \textbf{z}}}^{\ast }\left[ \frac{\delta \mathcal{H}}{\delta \textbf{z} },\frac{\delta \mathcal{C}}{ \delta \textbf{z} }\right] ^{\flat}. \end{equation} \section{Extensions of Lie Algebras}\label{Sec-MP} In this section, we introduce a unifying construction, called an \textit{extended structure}, for the factorization of Lie algebras. Then, we examine its particular instances, such as matched pair Lie algebras and 2-cocycle extensions. \subsection{Extended Structures} \label{brzezinski} Let $(\G{g},[\bullet,\bullet])$ be a Lie algebra and assume that it acts on a vector space $\G{h}$ from the right, that is, \begin{equation}\label{Lieact1} \vartriangleleft :\mathfrak{h}\otimes \mathfrak{g}\rightarrow \mathfrak{h} ,\qquad \eta \otimes \xi \mapsto \eta \vartriangleleft \xi. \end{equation} Our goal in this subsection is to construct the most general extension of $\G{g}$ by $\G{h}$.
To this end, we allow for the existence of the following maps \begin{equation} \label{thm-m-h-const-maps} \begin{split} \Phi:\G{h}\otimes\G{h}\longrightarrow \G{g},\qquad (\eta,\eta')\mapsto\Phi(\eta,\eta'), \\ \kappa:\G{h}\otimes\G{h}\longrightarrow \G{h} ,\qquad (\eta,\eta')\mapsto\kappa(\eta,\eta'), \end{split} \end{equation} along with a linear map \begin{equation}\label{Lieact} \vartriangleright :\mathfrak{h}\otimes \mathfrak{g}\rightarrow \mathfrak{g },\qquad \eta \otimes \xi \mapsto \eta \vartriangleright \xi. \end{equation} Note here that \eqref{Lieact} is not an action since $\mathfrak{h}$ is only a vector space. The need for the operations \eqref{thm-m-h-const-maps} and \eqref{Lieact} will become more evident in the sequel, where we examine this construction from the point of view of decomposition. The following theorem determines the conditions to define a Lie algebra structure on the direct sum $\G{K}=\G{g} \oplus \G{h}$, see also \cite{Agormili14, AgoreMili-book}. \begin{theorem}\label{thm-m-h-const} The direct sum $\G{K}=\G{g} \oplus \G{h}$ is a Lie algebra via \begin{equation} \label{mpla-2-cocyc-1} \lbrack (\xi \oplus\eta ),\,(\xi'\oplus\eta')]_{_\Phi\bowtie}=\big( [\xi,\xi' ]+\eta \vartriangleright \xi'-\eta'\vartriangleright \xi + \Phi(\eta,\eta') \big)\oplus\big(\kappa(\eta,\eta')+\eta \vartriangleleft \xi'-\eta' \vartriangleleft \xi \big), \end{equation} where the mappings are the ones in \eqref{Lieact1}, \eqref{thm-m-h-const-maps} and \eqref{Lieact}, if and only if, for any $\eta,\eta',\eta'' \in \G{h}$, and any $\xi,\xi' \in \G{g}$, \begin{equation} \label{cocycle-compatibility} \begin{split} & \kappa(\eta,\eta) =0, \qquad \Phi(\eta,\eta) = 0, \\ & \kappa(\eta,\eta') \vartriangleleft \xi = \kappa(\eta,\eta' \vartriangleleft \xi) - \kappa(\eta',\eta \vartriangleleft \xi) + \eta \vartriangleleft (\eta' \vartriangleright \xi) - \eta' \vartriangleleft (\eta \vartriangleright \xi), \\ & \kappa(\eta,\eta')\vartriangleright \xi = [\xi,\Phi(\eta,\eta')] + \Phi(\eta,\eta'\vartriangleleft\xi) + \Phi(\eta\vartriangleleft\xi,\eta') + \eta \vartriangleright (\eta' \vartriangleright \xi) - \eta' \vartriangleright (\eta \vartriangleright \xi), \\ & \eta \vartriangleright [\xi,\xi'] = [\xi,\eta \vartriangleright \xi'] - [\xi',\eta \vartriangleright \xi] + (\eta \vartriangleleft \xi) \vartriangleright \xi' - (\eta \vartriangleleft \xi' )\vartriangleright \xi, \\ & \eta \vartriangleleft [\xi,\xi'] = (\eta \vartriangleleft \xi) \vartriangleleft \xi' - (\eta \vartriangleleft \xi' )\vartriangleleft \xi, \\ & \circlearrowright \Phi(\eta,\kappa(\eta',\eta'')) + \circlearrowright \eta \vartriangleright \Phi(\eta',\eta'') = 0, \\ & \circlearrowright \kappa(\eta,\kappa(\eta',\eta'')) + \circlearrowright \eta \vartriangleleft \Phi(\eta',\eta'') = 0, \end{split} \end{equation} where $\circlearrowright$ refers to the cyclic sum over the indicated elements. \end{theorem} \begin{proof} We first observe that \begin{equation} [\eta,\eta] = \big(\Phi(\eta,\eta), \kappa(\eta,\eta) \big) = 0 \end{equation} if and only if \begin{equation} \Phi(\eta,\eta) = 0, \qquad \kappa(\eta,\eta) = 0 \end{equation} for any $\eta \in \G{h}$. Next, we shall consider the mixed Jacobi identities.
Let us begin with \begin{equation}\label{J-I} [ \xi,[\eta, \eta']] + [\eta,[\eta', \xi]] + [\eta',[\xi, \eta]] = 0, \end{equation} where \begin{equation} \begin{split} [ \xi,[\eta, \eta']] &= [\xi,(\Phi(\eta,\eta'),\, \kappa(\eta,\eta'))] = \big( [\xi, \Phi(\eta,\eta')]_\G{g} - \kappa(\eta,\eta') \vartriangleright \xi ,\, - \kappa(\eta,\eta') \vartriangleleft \xi \big), \\ [\eta,[\eta',\xi] ] & = [\eta,(\eta' \vartriangleright \xi,\,\eta' \vartriangleleft \xi ) ] =\big(\eta \vartriangleright(\eta' \vartriangleright \xi) +\Phi(\eta, \eta' \vartriangleleft \xi), \,\kappa(\eta, \eta' \vartriangleleft \xi) + \eta \vartriangleleft(\eta' \vartriangleright \xi) \big), \\ [\eta',[\xi, \eta] ] & = [\eta',(-\eta \vartriangleright \xi,\,-\eta \vartriangleleft \xi ) ] =\big(-\eta' \vartriangleright(\eta \vartriangleright \xi) - \Phi(\eta', \eta \vartriangleleft \xi),\,-\kappa(\eta', \eta \vartriangleleft \xi) - \eta' \vartriangleleft(\eta \vartriangleright \xi ) \big). \end{split} \end{equation} Hence, \eqref{J-I} is satisfied if and only if \begin{equation} \begin{split} \kappa(\eta,\eta')\vartriangleleft \xi &= \kappa(\eta, \eta' \vartriangleleft \xi) - \kappa(\eta', \eta \vartriangleleft \xi) + \eta \vartriangleleft(\eta' \vartriangleright \xi) - \eta' \vartriangleleft(\eta \vartriangleright \xi), \\ \kappa(\eta,\eta') \vartriangleright \xi &= [\xi, \Phi(\eta,\eta') ]_\G{g} + \Phi(\eta, \eta' \vartriangleleft \xi) - \Phi(\eta', \eta \vartriangleleft \xi) + \eta \vartriangleright(\eta' \vartriangleright \xi) - \eta' \vartriangleright(\eta \vartriangleright \xi ) \end{split} \end{equation} for any $\eta,\eta' \in \G{h}$, and any $\xi \in \G{g}$; these are the second and the third conditions in \eqref{cocycle-compatibility}. Next, we consider the Jacobi identity of an arbitrary $\eta \in \G{h}$, and any $\xi,\xi' \in \G{g}$, namely \begin{equation}\label{J-II} [[\xi, \xi'], \eta] + [[\xi', \eta],\xi] + [[\eta, \xi], \xi'] = 0. \end{equation} In this case, \[ [[\xi, \xi'], \eta] = \big(-\eta \vartriangleright [\xi, \xi']_{\G{g}} ,\, -\eta \vartriangleleft [\xi, \xi']_{\G{g}}\big), \] together with \begin{equation} \begin{split} [[\xi', \eta],\xi] & =\big([\xi,\eta \vartriangleright \xi']_\G{g} - (\eta \vartriangleleft \xi') \vartriangleright \xi ,\, -(\eta \vartriangleleft \xi')\vartriangleleft \xi\big), \\ [[\eta, \xi], \xi']& = \big([\eta \vartriangleright \xi,\xi']_\G{g} + (\eta \vartriangleleft \xi) \vartriangleright \xi',\, (\eta \vartriangleleft \xi) \vartriangleleft \xi'\big). \end{split} \end{equation} As such, \eqref{J-II} is satisfied if and only if \begin{equation} \begin{split} \eta \vartriangleleft [\xi, \xi']_{\G{g}} & = ( \eta \vartriangleleft \xi)\vartriangleleft \xi' - (\eta \vartriangleleft \xi') \vartriangleleft \xi , \\ \eta \vartriangleright [\xi, \xi']_{\G{g}} & = [\xi,\eta \vartriangleright \xi']_\G{g} - [\xi',\eta \vartriangleright \xi]_\G{g} + (\eta \vartriangleleft \xi) \vartriangleright \xi' - (\eta \vartriangleleft \xi') \vartriangleright \xi \end{split} \end{equation} for any $\eta \in \G{h}$, and any $\xi,\xi' \in \G{g}$; these are the fifth and the fourth conditions in \eqref{cocycle-compatibility}, respectively. Finally, we consider the Jacobi identity \begin{equation}\label{J-III} [[\eta, \eta'], \eta''] + [[\eta', \eta''], \eta] + [[\eta'', \eta], \eta'] = 0 \end{equation} for any $\eta,\eta',\eta'' \in \G{h}$.
We have \begin{align*} [[\eta, \eta'], \eta''] &= [(\Phi(\eta,\eta'),\,\kappa(\eta,\eta') ),\, (0,\,\eta'')] \\ & =-\big(\Phi(\eta'',\kappa(\eta,\eta')) + \eta'' \vartriangleright\Phi(\eta,\eta'),\, \kappa(\eta'',\kappa(\eta,\eta')) + \eta'' \vartriangleleft \Phi(\eta,\eta')\big), \end{align*} as well as \begin{equation} \begin{split} [[\eta', \eta''], \eta] &= [(\Phi(\eta',\eta''),\,\kappa(\eta',\eta'')) ,\, (0,\,\eta)] \\ & = -\big(\Phi(\eta,\kappa(\eta',\eta'')) + \eta \vartriangleright \Phi(\eta',\eta''),\, \kappa(\eta,\kappa(\eta',\eta'')) + \eta \vartriangleleft \Phi(\eta',\eta'')\big), \\ [[\eta'', \eta], \eta'] &= [(\Phi(\eta'',\eta),\,\kappa(\eta'',\eta)) ,\, (0,\,\eta')] \\ & = -\big(\Phi(\eta',\kappa(\eta'',\eta)) + \eta'\vartriangleright\Phi(\eta'',\eta),\, \kappa(\eta',\kappa(\eta'',\eta)) + \eta' \vartriangleleft\Phi(\eta'',\eta)\big). \end{split} \end{equation} Accordingly, \eqref{J-III} is satisfied if and only if \begin{equation} \begin{split} \sum_{(\eta,\eta',\eta'')}\, \kappa(\eta'',\kappa(\eta,\eta')) + \sum_{(\eta,\eta',\eta'')}\, \eta''\vartriangleleft\Phi(\eta,\eta') = 0, \\ \sum_{(\eta,\eta',\eta'')}\, \eta''\vartriangleright\Phi(\eta,\eta') + \sum_{(\eta,\eta',\eta'')}\, \Phi(\eta'',\kappa(\eta,\eta')) = 0 \end{split} \end{equation} for any $\eta,\eta',\eta'' \in \G{h}$, where the sums are cyclic. \end{proof} We denote the direct product space $\mathfrak{K} = \mathfrak{g} \oplus \mathfrak{h}$ equipped with the Lie algebra bracket \eqref{mpla-2-cocyc-1} by $\mathfrak{K} = \mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}$. In \cite{Agormili14, AgoreMili-book}, the realization presented in Theorem \ref{thm-m-h-const} has already been introduced under the name of an \textit{extended structure}. We shall follow this terminology as well. We remark that the last two identities in \eqref{cocycle-compatibility} are called the twisted cocycle identity for $\Phi$ and the twisted Jacobi identity for $\kappa$, respectively. In the following subsections, we shall see that the extended structure realizes both the matched pair (double cross sum) Lie algebra and the 2-cocycle extension Lie algebra as particular instances. Let us also cite some studies related to the constructions presented in this section. See \cite{BrHa99} for a coalgebra discussion related to the extension presented here. We refer to \cite{JanViz16} for the extensions of Hamiltonian vector fields, and see \cite{Kubo} for extensions of Poisson algebras. \textbf{Decomposing a Lie algebra.} Instead of extending a Lie algebra by a vector space, one can decompose a Lie algebra into the internal direct sum of one of its Lie subalgebras and a complement. The latter manifests the converse of the statement in Theorem \ref{thm-m-h-const}, so the (de)composition exhibited here is universal. Let us depict this argument. We start with a Lie algebra $\mathfrak{K}$, and assume a subalgebra, say $\mathfrak{g}$, of it. It is always possible to choose a complementary subspace $\mathfrak{h}\subset \mathfrak{K}$ so that $\mathfrak{K} = \mathfrak{g} \oplus \mathfrak{h}$. In the most general case, for $\eta,\eta'$ in $\mathfrak{h}$, under the Lie algebra bracket of $\mathfrak{K}$, we have that \begin{equation}\label{Phi-kappa} [\eta,\eta'] =\Phi(\eta,\eta') \oplus \kappa(\eta,\eta') \in \mathfrak{g} \oplus \mathfrak{h} .
\end{equation} Here, we define the mappings \begin{eqnarray} \Phi:\mathfrak{h} \times \mathfrak{h} \rightarrow \mathfrak{g}, \qquad \Phi(\eta,\eta') := proj_\mathfrak{g}[\eta,\eta'], \label{phi} \\ \kappa:\mathfrak{h} \times \mathfrak{h} \rightarrow \mathfrak{h}, \qquad \kappa(\eta,\eta') := proj_\mathfrak{h}[\eta,\eta'], \label{kappa} \end{eqnarray} where $proj$ denotes the projection operator. Notice that, if $\Phi$ is identically zero, then $\mathfrak{h}$ becomes a Lie subalgebra of $\mathfrak{K}$. In this case, $\kappa$ becomes the Lie algebra bracket on $\mathfrak{h}$. \begin{lemma}[Universal Lemma] \label{uni-prop-bre} Given a decomposition of a Lie algebra $\G{K} = \G{g} \oplus \G{h}$, where $\G{g}$ is a Lie subalgebra, the mappings $\Phi$ and $\kappa$ in \eqref{thm-m-h-const-maps} recovered from \eqref{Phi-kappa}, together with the mutual \textit{actions} computed through \begin{equation}\label{bracketactions} [0\oplus\eta,\xi\oplus 0] = \eta\vartriangleright \xi \oplus \eta\vartriangleleft \xi, \end{equation} satisfy the conditions in \eqref{cocycle-compatibility}. That is, $\G{K}$ can be identified with the extended structure $\mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}$; hence, \eqref{mpla-2-cocyc-1} gives a decomposition of the Lie bracket on $\G{K}$. \end{lemma} \textbf{Coordinate realizations.} Choose a basis $\{\mathbf{e}_\alpha\}$ on an $N$-dimensional Lie algebra $\mathfrak{g}$ and a basis $\{\mathbf{f}_a\}$ on an $M$-dimensional vector space $\mathfrak{h}$. Notice that we reserve Greek indices for the basis of the Lie algebra $\mathfrak{g}$, whereas we use Latin indices for the basis of the vector space $\mathfrak{h}$. We denote \begin{equation}\label{nonbracket2} [\mathbf{e}_\alpha,\mathbf{e}_\beta]=C^\theta_{\alpha \beta}\mathbf{e}_\theta, \qquad [\mathbf{f}_a,\mathbf{f}_b]=\Phi^\alpha_{ab}\mathbf{e}_\alpha+\kappa^d_{ab}\mathbf{f}_d, \end{equation} where the set $C^\theta_{\alpha \beta}$ determines the structure constants of the Lie subalgebra $\mathfrak{g}$, whereas the sets of constants $\Phi^\alpha_{ab}$ and $\kappa^d_{ab}$ are coordinate realizations of the mappings $\Phi$ and $\kappa$ determined in \eqref{Phi-kappa}. We identify the mappings \eqref{Lieact1} and \eqref{Lieact} in terms of the bases $\mathbf{e}_\alpha$, $\mathbf{f}_a$ as \begin{equation} \label{local-act} \mathbf{f}_{a} \vartriangleleft \mathbf{e}_{\alpha}=R_{a \alpha}^{b}\mathbf{f}_{b}, \qquad \mathbf{f}_{a} \vartriangleright \mathbf{e}_{\alpha}=L_{a \alpha}^{\beta}\mathbf{e}_{\beta}. \end{equation} Needless to say, the scalars $L_{a \alpha}^{\beta}$ and $R_{a \alpha}^{b}$ determine the mappings in a unique way. In the present finite-dimensional case, the extended structure $\mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}$ is an $(N+M)$-dimensional vector space. Referring to the bases of the constitutive subspaces, one can define a basis $\{\bar{\mathbf{e}}_{1}, \dots ,\bar{\mathbf{e}}_{N+M}\}$ on $\mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}$ as \begin{equation} \label{coordmpL} \{\bar{\mathbf{e}}_{\alpha},\bar{\mathbf{e}}_a\}\subset \mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}, \qquad \bar{\mathbf{e}}_{\alpha}=\mathbf{e}_\alpha\oplus 0,\qquad \bar{\mathbf{e}}_a=0\oplus \mathbf{f}_a.
\end{equation} In the light of the local choices \eqref{nonbracket2} and \eqref{local-act}, one can calculate the structure constants of the extended Lie bracket \eqref{mpla-2-cocyc-1} via \begin{equation} \label{non-bracket-constants} \begin{split} &\left[\bar{\mathbf{e}}_{\beta}, \bar{\mathbf{e}}_{\alpha}\right]_{_\Phi\bowtie}=\bar{C}_{\beta \alpha}^{\gamma}\bar{\mathbf{e}}_{\gamma} + \bar{C}_{\beta \alpha}^{a} \bar{\mathbf{e}}_{a} =\left [ \mathbf{e}_{\beta}\oplus 0,\mathbf{e}_{\alpha}\oplus 0 \right ]_{_\Phi\bowtie} =C^{\gamma}_{\beta \alpha}\mathbf{e}_{\gamma}\oplus 0, \\ &\left[\bar{\mathbf{e}}_{\beta}, \bar{\mathbf{e}}_{a}\right]_{_\Phi\bowtie}=\bar{C}_{\beta a}^{\gamma}\bar{\mathbf{e}}_{\gamma} +\bar{C}_{\beta a}^{d}\bar{\mathbf{e}}_{d} =\left [ \mathbf{e}_{\beta}\oplus 0,0\oplus \mathbf{f}_{a} \right ]_{_\Phi\bowtie}=(-L^{\gamma}_{a \beta}\mathbf{e}_{\gamma})\oplus (-R_{a\beta}^d \mathbf{f}_d), \\ &\left[\bar{\mathbf{e}}_{b}, \bar{\mathbf{e}}_{a}\right]_{_\Phi\bowtie}=\bar{C}_{b a}^{\gamma}\bar{\mathbf{e}}_{\gamma}+\bar{C}_{b a}^{d}\bar{\mathbf{e}}_{d} =\left[ 0\oplus \mathbf{f}_{b},0\oplus \mathbf{f}_{a} \right ]_{_\Phi\bowtie} = \Phi^\gamma_{ba}\mathbf{e}_\gamma \oplus \kappa^{d}_{b a}\mathbf{f}_{d}. \end{split} \end{equation} As a result, the structure constants of $\mathfrak{g} _{_\Phi\bowtie} \mathfrak{h}$ can be written as \begin{equation} \label{sc-nonbracket} \begin{split} \bar{C}_{\beta \alpha}^{\gamma}&=C_{\beta \alpha}^{\gamma},\qquad \bar{C}_{\beta \alpha}^{a}=0, \qquad \bar{C}_{\beta a}^{\gamma}=-L_{a \beta}^{\gamma},\\ \bar{C}_{\beta a}^{d}&=-R_{a \beta}^{d}, \qquad \bar{C}_{b a}^{\gamma}= \Phi^\gamma_{ba}, \qquad \bar{C}_{b a}^{d}=\kappa_{b a}^{d}. \end{split} \end{equation} \subsection{Double Cross Sum (Matched Pair) Lie Algebra} \label{doublecross} We shall recall the matched pair construction from \cite{Ma90,Ma902}. Let $(\G{g},[\bullet,\bullet]_{\G{g}})$ and $(\G{h},[\bullet,\bullet]_{\G{h}})$ be two Lie algebras admitting mutual actions \begin{equation}\label{matched-pair-mutual-actions} \vartriangleright:\G{h} \otimes \G{g} \to \G{g}, \qquad \vartriangleleft:\G{h} \otimes \G{g} \to \mathfrak{h}. \end{equation} That is, the following identities hold, for any $\xi,\xi' \in \mathfrak{g}$, and any $\eta,\eta' \in \mathfrak{h}$, \begin{equation} \label{matchpaircond} \begin{split} & [\eta,\eta']\vartriangleright \xi = \eta \vartriangleright (\eta' \vartriangleright \xi) - \eta' \vartriangleright (\eta \vartriangleright \xi), \\ & \eta \vartriangleleft [\xi,\xi'] = (\eta \vartriangleleft \xi) \vartriangleleft \xi' -( \eta \vartriangleleft \xi') \vartriangleleft \xi. \end{split} \end{equation} The direct sum $\mathfrak{K}=\mathfrak{g}\oplus \mathfrak{h}$ can be endowed with a Lie algebra structure if some compatibility conditions are satisfied. \begin{theorem} \label{mp-prop} The direct sum of two Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ under the mutual actions \eqref{matched-pair-mutual-actions} is a Lie algebra when it is equipped with the bilinear mapping \begin{equation} \lbrack (\xi \oplus\eta ),\,(\xi'\oplus\eta')]_{\bowtie}=\big( [\xi,\xi' ]+\eta \vartriangleright \xi'-\eta'\vartriangleright \xi \big)\oplus\big([\eta ,\eta']+\eta \vartriangleleft \xi'-\eta' \vartriangleleft \xi \big)
\label{mpla} \end{equation} satisfying the following compatibility conditions \begin{equation} \label{comp-mpa} \begin{split} & [\eta,\eta'] \vartriangleleft \xi = [\eta,\eta' \vartriangleleft \xi] - [\eta',\eta \vartriangleleft \xi] + \eta \vartriangleleft (\eta' \vartriangleright \xi) - \eta' \vartriangleleft (\eta \vartriangleright \xi), \\ & \eta \vartriangleright [\xi,\xi'] = [\xi,\eta \vartriangleright \xi'] - [\xi',\eta \vartriangleright \xi] + (\eta \vartriangleleft \xi) \vartriangleright \xi' - (\eta \vartriangleleft \xi' )\vartriangleright \xi \\ \end{split} \end{equation} for any $\eta,\eta' \in \mathfrak{h}$, and any $\xi,\xi' \in \mathfrak{g}$. \end{theorem} If a Lie algebra $\mathfrak{K}$ is constructed in the realm of Theorem \ref{mp-prop}, then we call $\mathfrak{K}$ a matched pair (double cross sum) Lie algebra, and denote it by $\mathfrak{K}=\mathfrak{g}\bowtie \mathfrak{h}$, see also \cite{Zh10}. Notice that we denote the matched pair Lie bracket \eqref{mpla} by $\lbrack \bullet, \bullet]_{\bowtie}$. If one of the actions in \eqref{matched-pair-mutual-actions} is trivial, then we arrive at a semidirect product Lie algebra. Thus, matched pair Lie algebras are generalizations of semidirect product Lie algebras \cite{Cendra98,Cendra01,Cendra03,Holm98}. \textbf{From the extended structure to matched pairs.} Recall the extended structure in Subsection \ref{brzezinski}. In that realization, a more relaxed construction is given, where $\G{h}$ is not necessarily assumed to be a Lie algebra. Consider the particular case of that structure where the mapping $\Phi$ in \eqref{thm-m-h-const-maps} is taken to be identically zero. In the realm of Theorem \ref{thm-m-h-const}, for the case of $\Phi\equiv 0$, the last condition in the list \eqref{cocycle-compatibility} gives that the mapping $\kappa$ satisfies the Jacobi identity, that is, \begin{equation} \label{kappa-Jac} \circlearrowright \kappa(\eta,\kappa(\eta',\eta''))=0. \end{equation} This means that the vector space $\mathfrak{h}$ turns out to be a Lie algebra with the bracket \begin{equation}\label{kappabracket} [\eta,\eta']:=\kappa(\eta,\eta'). \end{equation} Further, for $\Phi\equiv 0$, the third and the fifth lines in the compatibility list \eqref{cocycle-compatibility} reduce to the action conditions \eqref{matchpaircond}, and the second and the fourth lines in \eqref{cocycle-compatibility} become the matched pair compatibility conditions in \eqref{comp-mpa}. This observation says that Theorem \ref{mp-prop} is a particular case of Theorem \ref{thm-m-h-const}; that is, every matched pair Lie algebra is an extended structure. For the matched pair theory, the Universal Lemma \ref{uni-prop-bre} takes the following particular form. \begin{lemma}[Universal Lemma II] \label{universal-prop} Let $\mathfrak{K}$ be a Lie algebra with two Lie subalgebras $\mathfrak{g}$ and $\mathfrak{h}$ such that $\mathfrak{K}$ is isomorphic to the direct sum of $\mathfrak{g}$ and $\mathfrak{h}$ as vector spaces through the vector addition in $\mathfrak{K}$. Then $\mathfrak{K}$ is isomorphic to the matched pair $\mathfrak{g}\bowtie\mathfrak{h}$ as Lie algebras, and the mutual actions \eqref{matched-pair-mutual-actions} are derived from \begin{equation} \label{mab-defn} [\eta,\xi] = \eta \vartriangleright \xi \oplus \eta \vartriangleleft \xi. \end{equation} Here, the inclusions of the subalgebras are defined to be \begin{equation} \mathfrak{g} \longrightarrow \mathfrak{K}: \xi \to (\xi\oplus 0),\qquad \mathfrak{h} \longrightarrow \mathfrak{K}: \eta \to (0\oplus \eta).
\end{equation} \end{lemma} \textbf{Coordinate realizations.} Assume that the coordinates are chosen as in Subsection \ref{brzezinski}. To obtain the local characterization of the matched pair Lie algebra, we first analyse the structure constants given in \eqref{non-bracket-constants}. The first and the second lines remain the same, but now the constants $\Phi^\alpha_{ab}$ are all zero and the constants $\kappa^{d}_{b a}$ turn out to be the structure constants of the Lie algebra $\G{h}$. In order to highlight this, we denote these structure constants by $D^{d}_{b a}$. Let us record this: \begin{equation}\label{nonbracket2-mp} [\mathbf{e}_\alpha,\mathbf{e}_\beta]=C^\gamma_{\alpha \beta}\mathbf{e}_\gamma, \qquad [\mathbf{f}_a,\mathbf{f}_b]=D^d_{ab}\mathbf{f}_d. \end{equation} Therefore, we have \begin{equation} \left[\bar{\mathbf{e}}_{b}, \bar{\mathbf{e}}_{a}\right]=\bar{C}_{b a}^{\gamma}\bar{\mathbf{e}}_{\gamma}+\bar{C}_{b a}^{d}\bar{\mathbf{e}}_{d} =\left[ 0\oplus \mathbf{f}_{b},0\oplus \mathbf{f}_{a} \right ]_{\bowtie} =0\oplus D^{d}_{b a}\mathbf{f}_{d}. \end{equation} Thus the structure constants of the matched pair Lie algebra coincide with \eqref{sc-nonbracket} except that, in the present case, $\bar{C}_{b a}^{\gamma}=0$ and $\bar{C}_{b a}^{d}=D_{b a}^{d}$: \begin{equation} \label{sc-nonbracket-mp} \begin{split} \bar{C}_{\beta \alpha}^{\gamma}&=C_{\beta \alpha}^{\gamma},\qquad \bar{C}_{\beta \alpha}^{a}=0, \qquad \bar{C}_{\beta a}^{\gamma}=-L_{a \beta}^{\gamma},\\ \bar{C}_{\beta a}^{d}&=-R_{a \beta}^{d}, \qquad \bar{C}_{b a}^{\gamma}= 0, \qquad \bar{C}_{b a}^{d}=D_{b a}^{d}. \end{split} \end{equation} \subsection{2-cocycle Extension}\label{2coc-Sec} In this subsection, we discuss another particular instance of the extended structures exhibited in Subsection \ref{brzezinski}. In this case, we assume that the right action \eqref{Lieact1} of $\mathfrak{g}$ on $\mathfrak{h}$ and the Lie bracket on $\mathfrak{g}$ are trivial, while all the other geometric ingredients of Theorem \ref{thm-m-h-const} remain the same. In other words, in this subsection, \begin{equation} \eta \vartriangleleft \xi=0, \qquad [\xi,\xi']=0 \end{equation} for all $\xi$ and $\xi'$ in $\mathfrak{g}$, and for all $\eta$ in $\mathfrak{h}$. As a result of this selection, the extended Lie bracket \eqref{mpla-2-cocyc-1} and the list of conditions \eqref{cocycle-compatibility} take particular forms. Let us examine them from the bottom to the top. Since the right action is trivial, the last condition in \eqref{cocycle-compatibility} turns out to be the Jacobi identity \eqref{kappa-Jac} for $\kappa$. This manifests that the two-tuple $(\G{h},\kappa)$ becomes a Lie algebra. Accordingly, in this subsection, we denote $\kappa$ by the bracket notation $[\bullet,\bullet]$ as in \eqref{kappabracket}. On the other hand, the penultimate condition, namely the twisted 2-cocycle condition, in \eqref{cocycle-compatibility} takes the particular form \begin{equation}\label{ext-act} \circlearrowright \Phi(\eta,[\eta',\eta'']) + \circlearrowright \eta \vartriangleright \Phi(\eta',\eta'') = 0. \end{equation} This determines $\Phi$ as a $\mathfrak{g}$-valued 2-cocycle on $\mathfrak{h}$ \cite{Fuks}.
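As a concrete sanity check, the short Python script below (our own illustration; all names in it are hypothetical) realizes the Heisenberg algebra in exactly this fashion: $\G{g}=\mathbb{R}$ is the center, $\G{h}=\mathbb{R}^2$ is abelian, both $\vartriangleright$ and $[\bullet,\bullet]$ on $\G{h}$ are trivial, and $\Phi(\eta,\eta')=\eta_1\eta'_2-\eta_2\eta'_1$. It verifies the Jacobi identity of the bracket \eqref{mpla-2-cocyc-1} on a random triple; here each double bracket even vanishes individually, since the bracket of any two elements already lands in the center.
\begin{verbatim}
import numpy as np

def Phi(eta, etap):                 # R-valued 2-cocycle on h = R^2
    return eta[0]*etap[1] - eta[1]*etap[0]

def bracket(u, v):
    # u = (x, eta) in g (+) h with g = R and h = R^2; with trivial
    # actions and trivial kappa, (mpla-2-cocyc-1) reduces to
    # [(x, eta), (x', eta')] = (Phi(eta, eta'), 0).
    (x, eta), (xp, etap) = u, v
    return (Phi(eta, etap), np.zeros(2))

def jacobiator(u, v, w):            # cyclic sum of double brackets
    t1 = bracket(bracket(u, v), w)
    t2 = bracket(bracket(v, w), u)
    t3 = bracket(bracket(w, u), v)
    return (t1[0] + t2[0] + t3[0], t1[1] + t2[1] + t3[1])

rng = np.random.default_rng(0)
u, v, w = [(rng.standard_normal(), rng.standard_normal(2)) for _ in range(3)]
print(jacobiator(u, v, w))          # (0.0, array([0., 0.]))
\end{verbatim}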
The second, the fourth and the fifth conditions in \eqref{cocycle-compatibility} are identically satisfied, whereas the third line \begin{equation} [\eta,\eta']\vartriangleright \xi = \eta \vartriangleright (\eta' \vartriangleright \xi) - \eta' \vartriangleright (\eta \vartriangleright \xi) \end{equation} expresses that $\vartriangleright$ is a left Lie algebra action of $\mathfrak{h}$ on $\mathfrak{g}$. Eventually, we arrive at the following reduced form of the Lie algebra bracket \eqref{mpla-2-cocyc-1} on the direct sum space: \begin{equation}\label{AAAAA} [\xi \oplus \eta, \xi' \oplus \eta']_{_\Phi\rtimes}= \big(\eta \vartriangleright \xi'-\eta' \vartriangleright \xi+ \Phi(\eta,\eta') \big) \oplus[\eta,\eta']. \end{equation} In this case, we denote the total space, equipped with the Lie bracket \eqref{AAAAA}, by $\mathfrak{g}_{_\Phi\rtimes}\mathfrak{h}$ and call it the $2$-cocycle extension of $\G{h}$ by the vector space $\G{g}$. Once again, as a manifestation of the Universal Lemma \ref{uni-prop-bre}, we can discuss the decomposition point of view as follows. Assume a Lie algebra $\mathfrak{K}$ and a nontrivial abelian ideal of it, say $\G{g}$. Consider the decomposition $\mathfrak{g} \oplus \mathfrak{h}$ inducing nontrivial $\Phi$ and $\kappa$ mappings as in \eqref{phi} and \eqref{kappa} and a left action \eqref{Lieact}; then the Universal Lemma \ref{uni-prop-bre} says that $\mathfrak{K}$ can be decomposed into a $2$-cocycle extension of $\G{h}$ by $\G{g}$, that is, $\mathfrak{K}=\mathfrak{g}_{_\Phi\rtimes}\mathfrak{h}$. \textbf{Coordinate realizations.} Assume once more that the coordinates are chosen as in Subsection \ref{brzezinski}. In this case, the Lie bracket on $\mathfrak{g}$ and the right action of $\mathfrak{g}$ on $\mathfrak{h}$ are trivial. So the structure constants $C_{\beta \alpha}^\gamma$ of $\mathfrak{g}$ given in \eqref{nonbracket2} are all zero, while the constants $\Phi^\alpha_{ab}$ and $D^d_{ab}$ determining the $\Phi$ and $\kappa$ mappings remain the same. Since the right action is zero, the scalars $R_{a \beta}^d$ in \eqref{local-act} vanish, whereas the scalars $L_{a \alpha}^{\beta}$ give the left action. If these changes are applied to the system of equations \eqref{non-bracket-constants}, one obtains the structure constants of the 2-cocycle extension. \section{Lie-Poisson Dynamics on Extensions}\label{LP-Ext-Sec} The dual of a Lie algebra admits a Lie-Poisson bracket according to the definition in \eqref{LP-Bra}. In the present section, following the order of the extensions and couplings in Section \ref{Sec-MP}, we compute the associated Lie-Poisson brackets. \subsection{Lie-Poisson Systems on Duals of Extended Structures} \label{2-LP-Coec} Assume the Lie algebraic framework in Subsection \ref{brzezinski}, and let all the conditions in Theorem \ref{thm-m-h-const} hold. We start with the left mapping $\vartriangleright$ in \eqref{Lieact} and freeze an element $\eta$ in $\mathfrak{h}$ in this operation to obtain a linear mapping $\eta \vartriangleright$ on the subalgebra $\mathfrak{g}$. This linear mapping and its dual $\overset{\ast }{\vartriangleleft} \eta$ are \begin{equation} \label{eta-star} \begin{split} \eta \vartriangleright&:\mathfrak{g}\longrightarrow \mathfrak{g}, \qquad \xi \mapsto \eta \vartriangleright \xi, \\ \overset{\ast }{\vartriangleleft} &:\mathfrak{g}^*\otimes \G{h} \longrightarrow \mathfrak{g}^*, \qquad \langle \mu \overset{\ast }{\vartriangleleft} \eta, \xi \rangle=\langle \mu, \eta \vartriangleright \xi \rangle.
\end{split} \end{equation} This dual mapping is a right representation of $\mathfrak{h}$ on the dual space $\mathfrak{g}^*$. Next, by freezing $\xi\in \mathfrak{g}$ in the left mapping $\vartriangleright$ in \eqref{Lieact}, we define a linear mapping $\mathfrak{b}_\xi: \mathfrak{h} \to \mathfrak{g}$. We record this linear mapping $\mathfrak{b}_\xi$ and its dual $\mathfrak{b}_\xi^*$ as \begin{align} \label{b} \mathfrak{b}_\xi&: \mathfrak{h} \longrightarrow \mathfrak{g},\qquad \mathfrak{b}_\xi(\eta)=\eta\vartriangleright \xi, \\ \label{b*} \mathfrak{b}_\xi^*&:\mathfrak{g}^*\longrightarrow \mathfrak{h}^*, \qquad \langle \mathfrak{b}_\xi^*\mu,\eta \rangle= \langle \mu, \mathfrak{b}_\xi \eta \rangle = \langle \mu, \eta\vartriangleright \xi \rangle. \end{align} Consider the right action $\vartriangleleft$ in \eqref{Lieact1}. In this operation, we freeze $\xi$ in $\G{g}$ to obtain a linear endomorphism of $\mathfrak{h}$, denoted by $\vartriangleleft\xi$. We write $\vartriangleleft\xi$ and its dual $\xi \overset{\ast }{\vartriangleright}$ in the following display \begin{equation} \label{xi-star} \begin{split} \vartriangleleft\xi&:\mathfrak{h} \longrightarrow \mathfrak{h}, \qquad \eta \mapsto \eta \vartriangleleft\xi, \\ \overset{\ast }{\vartriangleright}&: \G{g}\times \mathfrak{h}^* \longrightarrow \mathfrak{h}^*, \qquad \langle \xi \overset{\ast }{\vartriangleright}\nu, \eta \rangle =\langle\nu, \eta \vartriangleleft\xi\rangle. \end{split} \end{equation} See that $\overset{\ast }{\vartriangleright}$ is a left representation of $\mathfrak{g}$ on the dual $\mathfrak{h}^*$. Further, we freeze an element, say $\eta$ in $\mathfrak{h}$, in the right action \eqref{Lieact1}. This enables us to define a linear mapping $\mathfrak{a}_\eta$ from $\mathfrak{g}$ to $\mathfrak{h}$. Here are the mapping $\mathfrak{a}_\eta$ and its dual $\mathfrak{a}_\eta^*$ in the respective order \begin{align} \mathfrak{a}_\eta&:\mathfrak{g}\longrightarrow \mathfrak{h}, \qquad \mathfrak{a}_\eta(\xi)=\eta\vartriangleleft \xi, \label{a} \\ \label{a*} \mathfrak{a}_\eta^*&:\mathfrak{h}^*\longrightarrow \mathfrak{g}^*, \qquad \langle \mathfrak{a}_\eta^* \nu,\xi \rangle = \langle \nu,\mathfrak{a}_\eta \xi \rangle=\langle \nu,\eta\vartriangleleft \xi \rangle. \end{align} Let us also recall the mappings $\Phi$ and $\kappa$ displayed in \eqref{phi} and \eqref{kappa}, respectively. We define two functions $\kappa_\eta$ and $\Phi_\eta$ as \begin{align} \kappa_\eta &: \mathfrak{h} \rightarrow \mathfrak{h}, \qquad \kappa_\eta(\eta') := \kappa(\eta,\eta'),\label{kappan} \\ \Phi_\eta &: \mathfrak{h} \rightarrow \mathfrak{g}, \qquad \Phi_\eta(\eta') := \Phi(\eta,\eta'),\label{phin} \end{align} for $\eta,\eta' \in \mathfrak{h}$. According to these definitions, and with $\nu \in \mathfrak{h}^*$ and $\mu \in \mathfrak{g}^*$, the dual mappings (defined with a minus sign, in accordance with the convention for $\mathop{\rm ad}\nolimits^*$) are calculated as \begin{align} \kappa_{\eta}^* &: \mathfrak{h}^* \rightarrow \mathfrak{h}^*, \qquad \langle \kappa_{\eta}^*\nu, \eta' \rangle = \langle \nu, -\kappa_\eta(\eta') \rangle = -\langle \nu, \kappa(\eta,\eta') \rangle,\label{kappa_coad} \\ \label{phi_coad} \Phi_{\eta}^* &: \mathfrak{g}^* \rightarrow \mathfrak{h}^*, \qquad \langle \Phi_{\eta}^*\mu, \eta' \rangle = \langle \mu, -\Phi_\eta(\eta') \rangle = -\langle \mu, \Phi(\eta,\eta') \rangle, \end{align} respectively.
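In finite dimensions, each of these frozen mappings is simply a matrix, and the dual mappings are realized by matrix transposes (up to the sign conventions fixed above). As a small hedged illustration (our own script; the concrete action is an assumption for demonstration), take the right action $\eta\vartriangleleft\xi=\eta\times\xi$ of $\mathfrak{g}=\mathfrak{so}(3)\simeq\mathbb{R}^3$ on $\mathfrak{h}=\mathbb{R}^3$; then $\mathfrak{a}_\eta$ is the matrix $\xi\mapsto\eta\times\xi$ and the defining identity of $\mathfrak{a}^*_\eta$ in \eqref{a*} can be checked on random vectors:
\begin{verbatim}
import numpy as np

def hat(v):                          # hat map: hat(v) @ x = v x x (cross product)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(1)
eta, xi, nu = rng.standard_normal((3, 3))

a_eta      = hat(eta)                # a_eta(xi) = eta <| xi = eta x xi
a_eta_star = a_eta.T                 # dual map h* -> g*, cf. (a*)

lhs = (a_eta_star @ nu) @ xi         # < a*_eta nu , xi >
rhs = nu @ (a_eta @ xi)              # < nu , eta <| xi >
print(np.isclose(lhs, rhs))          # True
\end{verbatim}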
\begin{proposition} \label{adfortwisted1} The adjoint action on $\mathfrak{g}\,_\Phi\bowtie\, \mathfrak{h}$ being the extended Lie bracket in \eqref{mpla-2-cocyc-1}, the coadjoint action $\mathop{\rm ad}\nolimits^{\ast}$ of an element $\xi\oplus\eta$ in $\mathfrak{g}\,_\Phi\bowtie\, \mathfrak{h}$ on an element $\mu\oplus\nu$ in the dual space $\mathfrak{g}^*\oplus \mathfrak{h}^{\ast}$ is computed to be \begin{equation} \label{ad-*-} \mathop{\rm ad}\nolimits_{(\xi\oplus\eta)}^{\ast}(\mu\oplus\nu)=\underbrace{ \big(ad^{\ast}_{\xi} \mu -\mu \overset{\ast }{\vartriangleleft}\eta - \mathfrak{a}_{\eta}^{\ast}\nu\big)}_{\in ~ \mathfrak{g}^*}\oplus \underbrace{ \big( \kappa^{\ast}_{\eta} \nu+\Phi_{\eta}^*\mu +\xi \overset{\ast }{\vartriangleright}\nu+ \mathfrak{b}_{\xi}^{\ast}\mu\big )}_{\in ~ \mathfrak{h}^*}. \end{equation} Here, $ad^{\ast}$ (in italic font) stands for the infinitesimal coadjoint actions of the subalgebras on their duals. \end{proposition} By using the equations \eqref{kappa_coad} and \eqref{phi_coad}, the (plus/minus) extended Lie-Poisson bracket is computed as \begin{equation} \begin{split} \label{LiePoissononcocycle} \left\{ \mathcal{H},\mathcal{F}\right\}_{_\Phi\bowtie}(\mu\oplus\nu) =& \pm \left \langle \mu\oplus\nu, \left[ \Big(\frac{\delta \mathcal{H}}{\delta \mu}\oplus \frac{\delta \mathcal{H}}{\delta \nu} \Big), \Big(\frac{\delta \mathcal{F}}{\delta \mu}\oplus \frac{\delta \mathcal{F}}{\delta \nu} \Big) \right ]_{_\Phi\bowtie}\right \rangle \\ =& \pm \left\langle \mu ,\left[\frac{\delta \mathcal{H}}{\delta \mu},\frac{\delta \mathcal{F}}{\delta \mu}\right] \right\rangle \pm\left\langle \nu ,\kappa \left(\frac{\delta \mathcal{H}}{\delta \nu},\frac{\delta \mathcal{F}}{\delta \nu}\right)\right\rangle \pm \underbrace{\left\langle \mu , \Phi \left(\frac{\delta \mathcal{H}}{\delta \nu},\frac{\delta \mathcal{F}}{\delta \nu}\right) \right\rangle}_{\text{A: from the twisted cocycle}} \\ &\pm \underbrace{\left\langle \mu ,\frac{\delta \mathcal{H}}{\delta \nu} \vartriangleright \frac{\delta \mathcal{F}}{\delta \mu}\right\rangle \mp\left\langle \mu ,\frac{\delta \mathcal{F}}{\delta \nu}\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu}\right\rangle}_ {\text{B: action of $\mathfrak{h}$ on $\mathfrak{g}$ from the left}} \pm\underbrace{ \left\langle \nu ,\frac{\delta \mathcal{H}}{\delta \nu}\vartriangleleft \frac{\delta \mathcal{F}}{\delta \mu}\right\rangle \mp\left\langle \nu ,\frac{\delta \mathcal{F}}{\delta \nu}\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu}\right\rangle} _{\text{C: action of $\mathfrak{g}$ on $\mathfrak{h}$ from the right}} \end{split} \end{equation} for two function(al)s $\mathcal{H}$ and $\mathcal{F}$. We assume the reflexivity condition, which reads that ${\delta \mathcal{H}}/{\delta \mu}$ and ${\delta \mathcal{F}}/{\delta \mu}$ are elements of $\mathfrak{g}$, whereas ${\delta \mathcal{H}}/{\delta \nu}$ and ${\delta \mathcal{F}}/{\delta \nu}$ are elements of $\mathfrak{h}$. The Lie bracket on the first line of \eqref{LiePoissononcocycle} is the extended Lie bracket $[\bullet,\bullet]_{_\Phi\bowtie}$ in \eqref{mpla-2-cocyc-1}. In the Poisson bracket, the term labelled by A is a manifestation of the existence of the twisted cocycle $\Phi$. The terms labelled by B are due to the \textit{left action} of $\mathfrak{h}$ on $\mathfrak{g}$, whereas the terms labelled by C are due to the right action of $\mathfrak{g}$ on $\mathfrak{h}$. Recall the (plus/minus) Lie-Poisson equation in \eqref{eqnofmotion}, determined as a coadjoint flow.
In the light of the Lie-Poisson bracket \eqref{LiePoissononcocycle}, governed by a Hamiltonian function $\mathcal{H}=\mathcal{H}(\mu,\nu)$, the (plus/minus) Lie-Poisson equations for the present picture are computed as \begin{equation}\label{LPEghcocycle} \begin{split} & \underbrace{\dot{\mu} = \pm ad^{\ast}_{\frac{\delta\mathcal{H}}{\delta\mu}}(\mu)}_{\text{Lie-Poisson Eq. on} \ \mathfrak{g}^*}\mp\underbrace{\mu\overset{\ast }{\vartriangleleft} \frac{\delta\mathcal{H}}{\delta\nu}}_{\text{action of} \ \mathfrak{h}} \mp \underbrace{\mathfrak{a}_{\frac{\delta\mathcal{H}}{\delta\nu}}^{\ast}\nu,} _{\text{action of} \ \mathfrak{g}} \\ &\dot{\nu} =\pm \kappa^{\ast}_{\frac{\delta\mathcal{H}}{\delta\nu}}(\nu) \pm \underbrace{\Phi_{\frac{\delta\mathcal{H}}{\delta\nu}}^*(\mu)} _{\text{twisted cocycle}} \pm \underbrace{ \frac{\delta\mathcal{H}}{\delta\mu} \overset{\ast }{\vartriangleright}\nu}_{\text{action of} \ \mathfrak{g}} \pm \underbrace{\mathfrak{b} _{\frac{\delta\mathcal{H}}{\delta\mu}}^{\ast}\mu. }_{\text{action of} \ \mathfrak{h}} \end{split} \end{equation} Notice that we have labelled the terms in the Lie-Poisson equations in order to identify them. As depicted in \eqref{LPEghcocycle}, one can easily follow how the Lie-Poisson dynamics on $\G{g}^*$ is extended by the addition of terms coming from the mutual actions of $\G{h}$ and $\G{g}$ on each other, as well as from the twisted 2-cocycle term. \textbf{Coordinate realizations.} We follow the notation in Subsection \ref{brzezinski}. Recall the $(N+M)$-dimensional extended structure $\mathfrak{K}=\mathfrak{g}\,_\Phi\bowtie \mathfrak{h}$. Denote the dual bases of $\mathfrak{g}^*$ and $\mathfrak{h}^*$ by $\{\mathbf{e}^\alpha\}$ and $\{\mathbf{f}^a\}$, respectively. Then, define the dual basis \begin{equation} \{\bar{\mathbf{e}}^{\alpha},\bar{\mathbf{e}}^a\}\subset \mathfrak{g}^* \oplus \mathfrak{h}^*, \qquad \bar{\mathbf{e}}^{\alpha}=\mathbf{e}^\alpha\oplus 0,\qquad \bar{\mathbf{e}}^a=0\oplus \mathbf{f}^a \end{equation} on the dual space $\mathfrak{g}^*\oplus \mathfrak{h}^*$. Using this basis, we can write an element of $ \mathfrak{g}^* \oplus \mathfrak{h}^* $ as \begin{equation} (\mu,\nu)=\mu_{\alpha}\bar{\mathbf{e}}^{\alpha}+\nu_{a}\bar{\mathbf{e}}^a. \end{equation} In this picture, the mappings \eqref{eta-star} and \eqref{xi-star} turn out to be \begin{equation}\label{1} (\mu_\alpha \mathbf{e}^\alpha) \overset{\ast}{\vartriangleleft} (\eta^a \mathbf{f}_a)=\mu_\alpha \eta^a L_{a\beta}^\alpha \mathbf{e}^\beta , \qquad (\xi^\alpha \mathbf{e}_\alpha) \overset{\ast}{\vartriangleright} (\nu_a\mathbf{f}^a)=\nu_a \xi^\alpha R_{b\alpha}^a \mathbf{f}^b, \end{equation} where $L_{a\beta}^\alpha$ and $R_{b\alpha}^a$ are the scalars in \eqref{local-act} determining the \textit{actions}. Next, we compute the dual mappings in \eqref{b*} and \eqref{a*}, and also \eqref{kappa_coad} and \eqref{phi_coad}, in terms of local coordinates as follows: \begin{align}\label{6} \mathfrak{b}^*_{(\xi^\alpha \mathbf{e}_\alpha)} (\mu_\alpha \mathbf{e}^\alpha)& =\mu_\alpha \xi^\beta L_{a \beta}^\alpha \mathbf{f}^a, \qquad \mathfrak{a}^*_{(\eta^a \mathbf{f}_a)} (\nu_a \mathbf{f}^a)=\nu_a \eta^b R_{b \alpha}^a \mathbf{e}^\alpha, \\ \kappa^{\ast}_\eta \nu &=-\kappa^a_{bd} \nu_a \eta^b \mathbf{f}^d, \qquad \Phi^{\ast}_\eta \mu= -\Phi_{b k}^\alpha \mu_\alpha \eta^b \mathbf{f}^k.
\end{align} Therefore, the (plus/minus) Lie-Poisson bracket \eqref{LiePoissononcocycle} is written in coordinates as \begin{equation} \begin{split}\label{Lie-pois-double-non-bracket} \left \{ \mathcal{H},\mathcal{F} \right \}_{_\Phi\bowtie}(\mu\oplus\nu)=&\pm \mu_\alpha C_{\beta \gamma}^\alpha \frac{\partial \mathcal{H}}{\partial \mu_\beta}\frac{\partial \mathcal{F}}{\partial \mu_\gamma} \pm \nu_a \kappa_{b d}^a \frac{\partial \mathcal{H}}{\partial \nu_b}\frac{\partial \mathcal{F}}{\partial \nu_d} \pm \mu_\alpha \Phi_{bk}^\alpha \frac{\partial \mathcal{H}}{\partial \nu_b} \frac{\partial \mathcal{F}}{\partial \nu_k} \\ & \pm \mu_\alpha L_{a \beta }^\alpha \big(\frac{\partial \mathcal{H}}{\partial \nu_a}\frac{\partial \mathcal{F}}{\partial \mu_\beta} -\frac{\partial \mathcal{F}}{\partial \nu_a}\frac{\partial \mathcal{H}}{\partial \mu_\beta}\big) \pm \nu_a R_{b \beta}^a \big( \frac{\partial \mathcal{H}}{\partial \nu_b}\frac{\partial \mathcal{F}}{\partial \mu_\beta}- \frac{\partial \mathcal{F}}{\partial \nu_b}\frac{\partial \mathcal{H}}{\partial \mu_\beta}\big), \end{split} \end{equation} whereas the (plus/minus) Lie-Poisson dynamics \eqref{LPEghcocycle} reads \begin{equation}\label{Lie-pois-nonbracket-dyn} \begin{split} \dot{\mu}_\beta&= \pm \mu_\rho C_{\beta \alpha}^\rho \frac{\partial \mathcal{H}}{\partial \mu_\alpha} \mp \mu_\alpha L_{a \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \nu_a} \mp \nu_a R_{b \beta }^a \frac{\partial \mathcal{H}}{\partial \nu_b} , \\ \dot{\nu}_d&= \pm \mu_\alpha \Phi_{db}^\alpha \frac{\partial \mathcal{H}}{\partial \nu_b} \pm \nu_a \kappa_{d b}^a \frac{\partial \mathcal{H}}{\partial \nu_b} \pm \nu_a R_{d \alpha }^a \frac{\partial \mathcal{H}}{\partial \mu_\alpha} \pm \mu_\alpha L_{d \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \mu_\beta} . \end{split} \end{equation} \subsection{Matched Lie-Poisson Systems}\label{Sec-MP-LP} In Subsection \ref{doublecross}, the matched pair Lie algebra $\G{g}\bowtie \G{h}$ is realized as a particular instance of the extended structure by choosing the twisted 2-cocycle $\Phi$ to be trivial. In that case, both of the constitutive spaces $\G{g}$ and $\G{h}$ are Lie subalgebras. Therefore, the duals of these subspaces, namely $\G{g}^*$ and $\G{h}^*$, admit Lie-Poisson flows. This lets us claim that the Lie-Poisson dynamics on the dual $\mathfrak{g}^*\oplus \mathfrak{h}^*$ of a matched pair can be considered as the collective motion of two Lie-Poisson subdynamics \cite{OS16}. Algebraically, this corresponds to the matching of two Lie coalgebras. We refer to \cite{Mi80} for the Lie coalgebra structure of the dual space. On the dual of a matched pair, both of the dual actions $\overset{\ast }{\vartriangleleft} $ and $\overset{\ast }{\vartriangleright}$, exhibited in \eqref{eta-star} and \eqref{xi-star} respectively, are equally valid. Notice that, for the present discussion, $\vartriangleright$ is a true left action of $\G{h}$ on $\G{g}$, so that $\overset{\ast }{\vartriangleleft} $ is a true right dual action of $\G{h}$ on $\G{g}^*$. This is not the case for the extended structure, since there $\G{h}$ is not assumed to be a Lie subalgebra. It is immediate to observe that the dual mappings $\mathfrak{b}_\xi^*$ and $\mathfrak{a}_\eta^*$, in \eqref{b*} and \eqref{a*} respectively, remain the same. The difference of the matched pair construction from the extended structure is that the $\kappa$ mapping in \eqref{kappa} is a Lie bracket and the $\Phi$ mapping in \eqref{phi} is zero.
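Before passing to the matched pair formulas, here is a hedged numerical sketch (Python/NumPy; the array layout of the constants is our own choice) of the coordinate equations \eqref{Lie-pois-nonbracket-dyn} in the minus convention. As data we take precisely this matched pair situation, $\Phi=0$ and $\kappa=0$, with $\mathfrak{g}=\mathfrak{so}(3)$, $\mathfrak{h}=\mathbb{R}^3$ abelian, no left action ($L=0$) and the right action $\eta\vartriangleleft\xi=\eta\times\xi$; the resulting flow is of heavy top type, and the run checks that the energy together with the quantities $|\nu|^2$ and $\mu\cdot\nu$ are conserved along it (up to integrator error).
\begin{verbatim}
import numpy as np

eps = np.zeros((3, 3, 3))                     # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Constant arrays, stored as T[upper, lower1, lower2]:
C   = eps.transpose(2, 0, 1)                  # C^r_{ba} = eps_{bar}  (so(3))
R   = eps.transpose(2, 0, 1)                  # R^a_{bB} = eps_{bBa}  (f_b <| e_B)
L   = np.zeros((3, 3, 3))                     # no left action
Phi = np.zeros((3, 3, 3)); K = np.zeros((3, 3, 3))   # matched pair, abelian h

I, chi = np.array([1.0, 2.0, 3.0]), np.array([0.0, 0.0, 1.0])
dHmu = lambda mu: mu / I                      # H = sum mu_i^2/(2 I_i) + nu . chi
dHnu = lambda nu: chi

def rhs(mu, nu):                              # lower signs of (Lie-pois-nonbracket-dyn)
    mu_dot = (- np.einsum('p,pbq,q->b', mu, C, dHmu(mu))
              + np.einsum('p,pqb,q->b', mu, L, dHnu(nu))
              + np.einsum('p,pqb,q->b', nu, R, dHnu(nu)))
    nu_dot = (- np.einsum('p,pdq,q->d', mu, Phi, dHnu(nu))
              - np.einsum('p,pdq,q->d', nu, K, dHnu(nu))
              - np.einsum('p,pdq,q->d', nu, R, dHmu(mu))
              - np.einsum('p,pdq,q->d', mu, L, dHmu(mu)))
    return mu_dot, nu_dot

mu, nu = np.array([1.0, 0.2, -0.3]), np.array([0.1, 0.4, 0.9])
dt = 1e-3
for _ in range(20000):                        # explicit midpoint (RK2) steps
    k_mu, k_nu = rhs(mu, nu)
    k_mu, k_nu = rhs(mu + 0.5*dt*k_mu, nu + 0.5*dt*k_nu)
    mu, nu = mu + dt*k_mu, nu + dt*k_nu

print(nu @ nu, mu @ nu)                       # Casimirs of the matched pair dual
print(np.sum(mu**2/(2*I)) + nu @ chi)         # energy, conserved as well
\end{verbatim}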
As previously stated, we prefer to denote $\kappa$ by a bracket, so we write the mapping \eqref{kappa_coad} as \begin{equation}\label{kappaequivad} \kappa_{\eta}^*\nu=ad^{\ast}_{\eta} \nu. \end{equation} These observations lead us to the following proposition as a particular case of Proposition \ref{adfortwisted1}, see also \cite{EsPaGr17,OS16}. \begin{proposition} \label{ad-*-prop} The adjoint action on $\mathfrak{g}\bowtie \mathfrak{h}$ being the matched pair Lie bracket in \eqref{mpla}, the coadjoint action $\mathop{\rm ad}\nolimits^{\ast}$ of an element $\xi\oplus\eta$ in $\mathfrak{g}\oplus \mathfrak{h}$ on an element $\mu\oplus\nu$ in the dual space $\mathfrak{g}^*\oplus \mathfrak{h}^{\ast}$ is computed to be \begin{equation} \label{ad-*} \mathop{\rm ad}\nolimits_{(\xi\oplus\eta)}^{\ast}(\mu\oplus\nu)=\underbrace{ \big(ad^{\ast}_{\xi} \mu -\mu \overset{\ast }{\vartriangleleft}\eta - \mathfrak{a}_{\eta}^{\ast}\nu\big)}_{\in ~ \mathfrak{g}^*}\oplus \underbrace{ \big( ad^{\ast}_{\eta} \nu +\xi \overset{\ast }{\vartriangleright}\nu+ \mathfrak{b}_{\xi}^{\ast}\mu\big )}_{\in ~ \mathfrak{h}^*}. \end{equation} \end{proposition} Then, on the dual space $\mathfrak{g}^\ast \oplus \mathfrak{h}^\ast$, the (plus/minus) Lie-Poisson bracket of the double cross sum is computed to be \begin{equation} \begin{split} \label{LiePoissonongh} & \left\{ \mathcal{H},\mathcal{F}\right\}_{\bowtie}(\mu\oplus\nu) = \underbrace{\pm \left\langle \mu ,\left[\frac{\delta \mathcal{H}}{\delta \mu},\frac{\delta \mathcal{F}}{\delta \mu}\right]\right\rangle \pm \left\langle \nu ,\left[\frac{\delta \mathcal{H}}{\delta \nu},\frac{\delta \mathcal{F}}{\delta \nu}\right]\right\rangle}_{\text{A: direct product}} \, \underbrace{\pm\left\langle \mu ,\frac{\delta \mathcal{H}}{\delta \nu} \vartriangleright \frac{\delta \mathcal{F}}{\delta \mu}\right\rangle \mp \left\langle \mu ,\frac{\delta \mathcal{F}}{\delta \nu}\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu}\right\rangle}_ {\text{B: via the left action of $\mathfrak{h}$ on $\mathfrak{g}$}} \\ & \hspace{3cm} \underbrace{\pm \left\langle \nu ,\frac{\delta \mathcal{H}}{\delta \nu}\vartriangleleft \frac{\delta \mathcal{F}}{\delta \mu}\right\rangle \mp \left\langle \nu ,\frac{\delta \mathcal{F}}{\delta \nu}\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu}\right\rangle} _{\text{C: via the right action of $\mathfrak{g}$ on $\mathfrak{h}$}}. \end{split} \end{equation} Notice that the terms labelled by A are just the sum of the individual Lie-Poisson brackets on the dual spaces $\mathfrak{g}^* $ and $\mathfrak{h}^* $ of the constitutive Lie subalgebras $\mathfrak{g}$ and $\mathfrak{h}$, respectively. The terms labelled by B are a result of the left action of $\mathfrak{h}$ on $\mathfrak{g}$, whereas the terms labelled by C are due to the right action of $\mathfrak{g}$ on $\mathfrak{h}$. For one-sided actions, that is, for semidirect product theories, B or C drops; if there is no action, then both B and C drop. In the light of the (plus/minus) matched pair Lie-Poisson bracket \eqref{LiePoissonongh}, the matched pair Lie-Poisson equations generated by a Hamiltonian function $\mathcal{H}=\mathcal{H}(\mu,\nu)$ on $\mathfrak{g}^\ast\oplus\mathfrak{h}^\ast$ are computed to be \begin{equation}\label{LPEgh} \begin{split} & \underbrace{\dot{\mu}= \pm ad^{\ast}_{\frac{\delta\mathcal{H}}{\delta\mu}}(\mu)}_{\text{Lie-Poisson Eq.
on }\mathfrak{g}^*}\, \underbrace{\mp\mu\overset{\ast }{\vartriangleleft} \frac{\delta\mathcal{H}}{\delta\nu}} _{\text{action of } \mathfrak{h}}\, \underbrace{\mp\mathfrak{a}_{\frac{\delta\mathcal{H}}{\delta\nu}}^{\ast}\nu} _{\text{action of }\mathfrak{g}}, \\ &\underbrace{\dot{\nu} =\pm ad^{\ast}_{\frac{\delta\mathcal{H}}{\delta\nu}}(\nu)}_ {\text{Lie-Poisson Eq. on }\mathfrak{h}^*}\, \underbrace{\pm \frac{\delta\mathcal{H}}{\delta\mu} \overset{\ast }{\vartriangleright}\nu}_{\text{action of }\mathfrak{g}}\, \underbrace{\pm\mathfrak{b} _{\frac{\delta\mathcal{H}}{\delta\mu}}^{\ast}\mu }_{\text{action of }\mathfrak{h}} . \end{split} \end{equation} The first terms on the right-hand sides are the individual equations of motion. The other terms are the dual and cross actions appearing as manifestations of the mutual actions. \textbf{Coordinate realizations.} Recall the Lie-Poisson bracket in \eqref{Lie-pois-double-non-bracket} and the Lie-Poisson equations \eqref{Lie-pois-nonbracket-dyn} computed for the case of extended structures. In the matched pair case, we take the constants determining the twisted 2-cocycle to be zero, that is, $\Phi_{b a}^{\gamma}=0$, and the structure constants of the Lie algebra $\G{h}$ as $\kappa_{b a}^{d}=D_{b a}^{d}$. So the (plus/minus) matched Lie-Poisson bracket \eqref{LiePoissonongh} takes the following form in coordinates \begin{equation} \begin{split}\label{Lie-pois-double-non-bracket-mp} \left \{ \mathcal{H},\mathcal{F} \right \}_{\bowtie}(\mu\oplus\nu)=&\pm \mu_\alpha C_{\beta \gamma}^\alpha \frac{\partial \mathcal{H}}{\partial \mu_\beta}\frac{\partial \mathcal{F}}{\partial \mu_\gamma} \pm \nu_a D_{b d}^a \frac{\partial \mathcal{H}}{\partial \nu_b}\frac{\partial \mathcal{F}}{\partial \nu_d} \pm \mu_\alpha L_{a \beta }^\alpha \big(\frac{\partial \mathcal{H}}{\partial \nu_a}\frac{\partial \mathcal{F}}{\partial \mu_\beta} -\frac{\partial \mathcal{F}}{\partial \nu_a}\frac{\partial \mathcal{H}}{\partial \mu_\beta}\big) \\ & \pm \nu_a R_{b \beta}^a \big( \frac{\partial \mathcal{H}}{\partial \nu_b}\frac{\partial \mathcal{F}}{\partial \mu_\beta}- \frac{\partial \mathcal{F}}{\partial \nu_b}\frac{\partial \mathcal{H}}{\partial \mu_\beta}\big), \end{split} \end{equation} whereas the matched (plus/minus) Lie-Poisson dynamics in \eqref{LPEgh} is computed to be \begin{equation}\label{Lie-pois-nonbracket-dyn-mp} \begin{split} \dot{\mu}_\beta&= \pm \mu_\rho C_{\beta \alpha}^\rho \frac{\partial \mathcal{H}}{\partial \mu_\alpha} \mp \mu_\alpha L_{a \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \nu_a} \mp \nu_a R_{b \beta }^a \frac{\partial \mathcal{H}}{\partial \nu_b} , \\ \dot{\nu}_d&= \pm \nu_a D_{d b}^a \frac{\partial \mathcal{H}}{\partial \nu_b} \pm \nu_a R_{d \alpha }^a \frac{\partial \mathcal{H}}{\partial \mu_\alpha} \pm \mu_\alpha L_{d \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \mu_\beta} . \end{split} \end{equation} \subsection{Lie-Poisson Dynamics on Duals of 2-cocycle Extensions} \label{LPD-2c} In Subsection \ref{2coc-Sec}, it is shown that the 2-cocycle extension $\mathfrak{g}_{_\Phi\rtimes}\mathfrak{h}$ of a Lie algebra $\G{h}$ by a vector space $\G{g}$ is a particular case of the extended structure $\mathfrak{g}_{_\Phi\bowtie}\mathfrak{h}$. Thus, the Lie-Poisson dynamics on the dual space of a 2-cocycle extension can be derived from the Lie-Poisson dynamics on the dual space of extended structures, which is given in Subsection \ref{2-LP-Coec}.
Accordingly, we follow Subsection \ref{2coc-Sec} and make the particular choice that the Lie bracket on $\mathfrak{g}$ is trivial. This has several consequences. One is that the right action $\vartriangleleft$ and its dual $\overset{\ast }{\vartriangleright}$, in \eqref{eta-star}, are trivial. Also, observe that the coadjoint action on $\mathfrak{g}^*$ becomes identically zero. In addition, in this case, $\Phi$ turns out to be a true 2-cocycle and $\kappa$ becomes a Lie bracket on the vector space $\mathfrak{h}$. Thus, the dual of $\kappa$ plays the role of the coadjoint action, as in \eqref{kappaequivad}. We apply all these modifications to the Lie-Poisson bracket \eqref{LiePoissononcocycle} on the dual of the extended structure to arrive at the (plus/minus) Lie-Poisson bracket on the dual of the 2-cocycle extension $\mathfrak{g}_{_\Phi\rtimes}\mathfrak{h}$ as \begin{equation} \label{centextLiepois} \left\{ \mathcal{H},\mathcal{F}\right\}_{_\Phi\rtimes}(\mu\oplus\nu) = \pm \left\langle \nu ,[\frac{\delta \mathcal{H}}{\delta \nu},\frac{\delta \mathcal{F}}{\delta \nu}] \right\rangle \pm \underbrace{\left\langle \mu , \Phi \left(\frac{\delta \mathcal{H}}{\delta \nu},\frac{\delta \mathcal{F}}{\delta \nu}\right) \right\rangle}_{\text{A: 2-cocycle}} \, \underbrace{\pm\left\langle \mu ,\frac{\delta \mathcal{H}}{\delta \nu} \vartriangleright \frac{\delta \mathcal{F}}{\delta \mu}\right\rangle \mp \left\langle \mu ,\frac{\delta \mathcal{F}}{\delta \nu}\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu}\right\rangle}_ {\text{B: left action of $\mathfrak{h}$ on $\mathfrak{g}$}}. \end{equation} Here, the first term on the right hand side is the Lie-Poisson bracket on $\G{h}^*$. The term labelled as A is due to the 2-cocycle $\Phi$ whereas the terms labelled as B are due to the left action of $\G{h}$ on $\G{g}$. For the Lie-Poisson bracket \eqref{centextLiepois}, the Lie-Poisson equations governed by a Hamiltonian function $\mathcal{H}=\mathcal{H}(\mu,\nu)$ are computed to be \begin{equation}\label{centextLiepois2} \dot{\mu} = \mp \underbrace{\mu\overset{\ast }{\vartriangleleft} \frac{\delta\mathcal{H}}{\delta\nu}} _{\text{action of} \ \mathfrak{h}},\qquad \underbrace{\dot{\nu} =\pm ad^{\ast}_{\frac{\delta\mathcal{H}}{\delta\nu}}(\nu)}_ {\text{Lie-Poisson Eq. on }\mathfrak{h}^*}\, \underbrace{\pm \Phi_{\frac{\delta\mathcal{H}}{\delta\nu}}^*(\mu)} _{\text{2-cocycle}}\, \underbrace{\pm \mathfrak{b} _{\frac{\delta\mathcal{H}}{\delta\mu}}^{\ast}\mu }_{\text{action of} \ \mathfrak{h}}. \end{equation} A direct observation shows that the Lie-Poisson equations \eqref{centextLiepois2} are a particular case of the Lie-Poisson equations \eqref{LPEghcocycle} where $\overset{\ast }{\vartriangleright}$ and $\mathfrak{a}^*$ are zero. \textbf{Coordinate realizations.} In Subsection \ref{2coc-Sec}, the 2-cocycle extension is written in coordinates for the finite dimensional case.
Referring to Subsection \ref{2-LP-Coec}, we write the Lie-Poisson bracket \eqref{centextLiepois} in coordinates as \begin{equation} \label{Lie-pois-double-non-bracket-tc} \left \{ \mathcal{H},\mathcal{F} \right \}_{_\Phi\rtimes}(\mu\oplus\nu)= \pm \nu_a D_{b d}^a \frac{\partial \mathcal{H}}{\partial \nu_b}\frac{\partial \mathcal{F}}{\partial \nu_d} \pm \mu_\alpha \Phi_{bk}^\alpha \frac{\partial \mathcal{H}}{\partial \nu_b} \frac{\partial \mathcal{F}}{\partial \nu_k} \pm \mu_\alpha L_{a \beta }^\alpha \big(\frac{\partial \mathcal{H}}{\partial \nu_a}\frac{\partial \mathcal{F}}{\partial \mu_\beta} -\frac{\partial \mathcal{F}}{\partial \nu_a}\frac{\partial \mathcal{H}}{\partial \mu_\beta}\big). \end{equation} Notice that we write the structure constants of $\G{h}$ as $D_{b d}^a$. Further, we can write the Lie-Poisson equations in \eqref{centextLiepois2} as \begin{equation}\label{Lie-pois-nonbracket-dyn-tc} \dot{\mu} _\beta= \mp \mu_\alpha L_{a \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \nu_a} , \qquad \dot{\nu}_d= \pm \mu_\alpha \Phi_{db}^\alpha \frac{\partial \mathcal{H}}{\partial \nu_b} \pm \nu_a D_{d b}^a \frac{\partial \mathcal{H}}{\partial \nu_b} \pm \mu_\alpha L_{d \beta }^\alpha \frac{\partial \mathcal{H}}{\partial \mu_\beta} . \end{equation} \section{Illustration: Decomposing the 3-Particle BBGKY Hierarchy} \label{examples} In this section, we provide a concrete example of the Lie algebraic constructions and the Lie-Poisson structures introduced up to now. For this, we focus on the BBGKY hierarchy in plasma dynamics \cite{Ha}. In \cite{marsden1984hamiltonian}, it is proved that the BBGKY hierarchy can be recast as a Lie-Poisson equation. The formulation presented in that study is for $n>3$. In the present work, we focus on $n=3$, which is missing in \cite{marsden1984hamiltonian}. Accordingly, we first determine the dynamics of the BBGKY hierarchy for $n=3$. Then, we investigate its Lie-Poisson form. Two decompositions of the BBGKY dynamics will be presented: (1) as a matched pair and (2) as an extended structure. \subsection{BBGKY Dynamics for 3 Particles} Assume that a plasma is confined to a finite three-dimensional manifold $Q$ in $\mathbb{R}^3$. Being a cotangent bundle, $P=T^*Q$ is a symplectic and a Poisson manifold \cite{MaRa13,LiMa12}. Define the product symplectic space $P^3=P\times P \times P$ endowed with the product symplectic and product Poisson structures. We denote the $3$-particle density function on $P^3$ by $f_3=f_3 (z_1,z_2,z_3)$. The dynamics of the $3$-particle plasma density function is governed by the Vlasov equation \begin{equation}\label{Vlasov} \frac{\partial f_3}{\partial t}=\{H(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\} \end{equation} where $H$ is the total energy of the plasma particles \cite{MaRaScSpWe83,MaWe82,Mo82}. Here, $\{\bullet,\bullet\}$ denotes the canonical Poisson bracket on $P^3$ with respect to the variables $z_1,z_2,z_3$. We refer to \cite{EsGRGuPa19,EsGu12,GiHoTr,Gu10} for some recent studies on Vlasov motion related to the geometry employed here. In the present work, we assume that the particle energy function is of the form \begin{equation}\label{en-func} \begin{split} H&=\sum_i H_1(z_i) + \sum_{i<j} H_2(z_i,z_j)+ H_3(z_1,z_2,z_3)\\ &= H_1(z_1)+H_1(z_2)+H_1(z_3)+H_2(z_1,z_2)+H_2(z_1,z_3)+H_2(z_2,z_3)+H_3(z_1,z_2,z_3). \end{split} \end{equation} Here, the functions $H_2$ and $H_3$ are assumed to be symmetric. Note that the dynamical equations in \cite{marsden1984hamiltonian} are for $H_3=0$; we present here the theory with a nontrivial $H_3$.
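To fix ideas, the following \texttt{sympy} sketch assembles a concrete instance of the energy function \eqref{en-func} for one degree of freedom per particle and forms the right hand side of the Vlasov equation \eqref{Vlasov}. The particular choices of $H_1$, $H_2$ and $H_3$ below are our own illustrative assumptions, made only so that the symmetry requirements can be checked explicitly; they are not taken from \cite{marsden1984hamiltonian}.
\begin{verbatim}
import sympy as sp

q = sp.symbols('q1:4')           # positions q1, q2, q3
p = sp.symbols('p1:4')           # momenta  p1, p2, p3
z = list(zip(q, p))              # z_i = (q_i, p_i)

# assumed one-, two- and three-particle energies (illustrative only)
H1 = lambda qi, pi: pi**2 / 2 + qi**2 / 2
H2 = lambda zi, zj: (zi[0] - zj[0])**2 / 2      # symmetric in z_i, z_j
H3 = lambda za, zb, zc: za[0] * zb[0] * zc[0]   # fully symmetric

H = (sum(H1(*zi) for zi in z)
     + sum(H2(z[i], z[j]) for i in range(3) for j in range(i + 1, 3))
     + H3(*z))

# canonical Poisson bracket on P^3, restricted to the listed particles
def bracket(f, g, particles=(0, 1, 2)):
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in particles)

# H is invariant under relabelling particles 1 and 2, as required
swap = {q[0]: q[1], q[1]: q[0], p[0]: p[1], p[1]: p[0]}
assert sp.simplify(H - H.subs(swap, simultaneous=True)) == 0

# right hand side of the Vlasov equation for a generic density f3
f3 = sp.Function('f3')(*q, *p)
print(bracket(H, f3))
\end{verbatim}
The same \texttt{bracket} with \texttt{particles=(0,)} or \texttt{particles=(0, 1)} realizes the partial brackets $\{\bullet,\bullet\}_{z_1}$ and $\{\bullet,\bullet\}_{z_1,z_2}$ used below.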
\textbf{Dynamics of moments.} Now, we determine the moments of the plasma density function $f_3=f_3 (z_1,z_2,z_3)$ on $P^3$ as \begin{equation} \label{moments} \begin{split} f_1(z_1):=3\int f_3(z_1,z_2,z_3)dz_2 dz_3 \\ f_2(z_1,z_2):=6\int f_3(z_1,z_2,z_3) dz_3. \end{split} \end{equation} To find the dynamics of the moments $f_1$ and $f_2$, we take the time derivatives of \eqref{moments} and then substitute the Vlasov equation \eqref{Vlasov} into the resulting expressions. In order to arrive at the equations governing the moment functions, we record the following identity on the Poisson manifold $P$, \begin{equation} \label{eqn} \int \{h(z),f(z)\}_z ~ dz=0, \end{equation} which is valid for any two functions; it is the result of omitting boundary terms. In \eqref{eqn}, $\{\bullet ,\bullet\}_z $ stands for the Poisson bracket on $P$. We shall be referring to this identity in the sequel. We compute the dynamics of $f_1$ as \begin{equation} \label{Dyn-f-1} \begin{split} \frac{\partial f_1}{\partial t}&=3 \int \{H(z_1,z_2,z_3), f_3(z_1,z_2,z_3)\} dz_2dz_3 \\ & =3 \int \Big \{ \sum_i H_1(z_i) + \sum_{i<j} H_2(z_i,z_j)+ H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3) \Big \} dz_2dz_3 \\ & =3 \int \Big \{ H_1(z_1) ,f_3(z_1,z_2,z_3) \Big \}_{z_1} dz_2dz_3 +3 \int \{ H_2(z_1,z_2)+H_2(z_1,z_3), f_3(z_1,z_2,z_3) \}_{z_1} dz_2dz_3 \\ &\qquad +3 \int \{ H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3) \}_{z_1}dz_2dz_3 \\ & = \Big \{ H_1(z_1) ,3 \int f_3(z_1,z_2,z_3) dz_2dz_3 \Big \}_{z_1} + \int \Big \{ H_2(z_1,z_2), 6\int f_3(z_1,z_2,z_3) dz_3 \Big \}_{z_1} dz_2 \\ & \qquad + 3 \int \{ H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3) \}_{z_1}dz_2dz_3 \\ &=\{H_1(z_1),f_1(z_1)\}_{z_1}+\int \{H_2(z_1,z_2),f_2(z_1,z_2)\}_{z_1}dz_2 +3 \int \{H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1}dz_2dz_3. \end{split} \end{equation} In the second equality, we have employed the energy function $H$ in \eqref{en-func}. In the third equality, we have used the identity \eqref{eqn} several times. We have substituted the definitions of the density functions $f_1$ and $f_2$ in the last equality. Here, the notation $\{\bullet ,\bullet\}_{z_1}$ stands for the Poisson bracket in the $z_1$ variable alone, even if the functions inside the bracket depend on other variables. In a similar fashion, the dynamics of the moment function $f_2$ can be computed as \begin{equation} \label{Dyn-f-2} \begin{split} \frac{\partial f_2}{\partial t}&=6 \int \{H(z_1,z_2,z_3), f_3(z_1,z_2,z_3)\} dz_3 \\ &=6 \int \Big \{ \sum_i H_1(z_i) + \sum_{i<j} H_2(z_i,z_j)+ H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3) \Big \} dz_3 \\ &= 6 \int \{H_1(z_1)+H_1(z_2),f_3(z_1,z_2,z_3)\}_{z_1,z_2}dz_3+6\int \{H_2(z_1,z_2),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 \\ &\qquad + 6\int \{H_2(z_1,z_3)+H_2(z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 + 6\int \{H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 \\ &= \{H_1(z_1)+H_1(z_2),6 \int f_3(z_1,z_2,z_3) dz_3\}_{z_1,z_2}+ \{H_2(z_1,z_2),6\int f_3(z_1,z_2,z_3)dz_3 \}_{z_1,z_2} \\ &\qquad + 6\int \{H_2(z_1,z_3)+H_2(z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 + 6\int \{H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 \\ &= \{H_1(z_1)+H_1(z_2), f_2(z_1,z_2) \}_{z_1,z_2}+ \{H_2(z_1,z_2),f_2(z_1,z_2) \}_{z_1,z_2} \\ &\qquad + 6\int \{H_2(z_1,z_3)+H_2(z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 + 6\int \{H_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3.
\end{split} \end{equation} Here, $\{\bullet ,\bullet\}_{z_1,z_2}$ stands for the Poisson bracket in the $z_1$ and $z_2$ variables alone, even if the functions inside the bracket depend on other variables. \subsection{Lie-Poisson Realization of BBGKY hierarchy}\label{Alg-BBGKY} We denote by $A_1$ the space of smooth functions on $P$. Equipped with the canonical Poisson bracket, $A_1$ is a Lie algebra. Similarly, we define $A_2$ and $A_3$ as the Lie algebras of symmetric functions on $P^2$ and $P^3$, respectively. There are hierarchical embeddings defined by \begin{equation}\label{embedding-BBGKY} \begin{split} & A_1 \longrightarrow A_2, \qquad K_1(z_1)\mapsto K_1^{(2)}(z_1,z_2):= K_1(z_1)+ K_1(z_2)\\ & A_1 \longrightarrow A_3, \qquad K_1(z_1)\mapsto K_1^{(3)}(z_1,z_2,z_3):= K_1(z_1)+ K_1(z_2)+ K_1(z_3) \\ & A_2 \longrightarrow A_3, \qquad K_2(z_1,z_2)\mapsto K_2^{(3)}(z_1,z_2,z_3):=K_2(z_1,z_2)+K_2(z_2,z_1) + K_2(z_1,z_3)\\ & \hspace{8cm}+K_2(z_3,z_1)+K_2(z_2,z_3)+K_2(z_3,z_2). \end{split} \end{equation} We define the direct sum \begin{equation}\label{A-3} \C{A}:=A_3\oplus A_2 \oplus A_1, \end{equation} and then introduce the following mapping from the product space $\C{A}$ to the Lie algebra $A_3$ of symmetric functions on $P^3$: \begin{equation} \label{alpha-1} \begin{split} \alpha: \C{A} \longrightarrow A_3 ,\qquad (K_3,K_2,K_1) \longmapsto K_3(z_1,z_2,z_3)+K_2^{(3)}(z_1,z_2,z_3)+K_1^{(3)}(z_1,z_2,z_3). \end{split} \end{equation} Notice that $\alpha$ in \eqref{alpha-1} turns out to be a Lie algebra homomorphism if the following Lie bracket is assumed on the domain space $\C{A}$: \begin{equation}\label{hiearchybracket} \begin{split} \big [(K_3,K_2,K_1),(L_3,L_2,L_1)\big]_{\C{A}}=\Big(\{K_3,L_3\}+\{K_3,L_2^{(3)}\}+\{K_3,L_1^{(3)}\}+\{K_2^{(3)},L_3\}+\{K_1^{(3)},L_3\}\\ +\{K_2^{(3)},L_2^{(3)}\}, \{K_2,L^{(2)}_1\}_{z_1,z_2}+\{K^{(2)}_1,L_2\}_{z_1,z_2},\{K_1,L_1\}_{z_1}\Big), \end{split} \end{equation} where, on the right hand side, the notation $\{\bullet,\bullet\}$ without a subscript is the Poisson bracket on $P^3$, $\{\bullet,\bullet\}_{z_1,z_2}$ is the Poisson bracket only for the $z_1$ and $z_2$ variables, and $\{\bullet,\bullet\}_{z_1}$ is the Poisson bracket only for the $z_1$ variable. The dual mapping of $\alpha$ in \eqref{alpha-1}, computed to be \begin{equation} \alpha^*: A_3^*\longrightarrow \C{A}^*, \qquad f_3\mapsto \Big(f_3,f_2=\int 6f_3(z_1,z_2,z_3)dz_3, f_1=\int 3 f_3(z_1,z_2,z_3)dz_2dz_3\Big), \end{equation} is both a momentum map and a Poisson map. Notice that the mapping $\alpha^*$ determines the moment functions exhibited in \eqref{moments}. \textbf{Coadjoint action.} Assume that the adjoint action of $\C{A}$ on itself is given by the Lie bracket $[\bullet,\bullet]_{\C{A}}$ in \eqref{hiearchybracket}. The coadjoint action of the space $\C{A}$ on its dual $\C{A}^*$ is \begin{equation}\label{Cala} \begin{split} &\Big\langle ad^*_{(L_3,L_2,L_1)}(f_3,f_2,f_1),(K_3,K_2,K_1)\Big\rangle = - \Big\langle (f_3,f_2,f_1),ad_{(L_3,L_2,L_1)}(K_3,K_2,K_1)\Big\rangle \\ & \hspace{1 cm}= \Big\langle (f_3,f_2,f_1),\big[(K_3,K_2,K_1),(L_3,L_2,L_1)\big]_{\C{A}}\Big\rangle \\ &\hspace{1 cm} =\Big\langle f_3, \{K_3,L_3\}+\{K_3,L_2^{(3)}\}+\{K_3,L_1^{(3)}\}+\{K_2^{(3)},L_3\}+\{K_1^{(3)},L_3\}+\{K_2^{(3)},L_2^{(3)}\}\Big\rangle \\ &\hspace{3 cm}+ \big\langle f_2, \{K_2,L^{(2)}_1\}_{z_1,z_2}+\{K^{(2)}_1,L_2\}_{z_1,z_2} \big\rangle + \big\langle f_1, \{K_1,L_1\}_{z_1}\big\rangle. \end{split} \end{equation} In the second line, we have substituted the Lie algebra bracket \eqref{hiearchybracket}.
In the last equality, the first pairing is the one available between $A_3^*$ and $A_3$ with the symplectic volume $dz_1dz_2dz_3$, whereas the second one is between $A_2^*$ and $A_2$ with the symplectic volume $dz_1dz_2$, and the last one is between $A_1^*$ and $A_1$ with the symplectic volume $dz_1$. It is evident that, in order to arrive at the explicit expression of the coadjoint action from \eqref{Cala}, we need to single out the functions $K_3$, $K_2$ and $K_1$. For this, we first record the following identity for smooth functions \begin{equation} \label{eqn-2} \int h(z) \{k(z),l(z)\}_z ~ dz=\int k(z) \{l(z),h(z)\}_z ~ dz. \end{equation} To see this identity, we simply consider the Leibniz identity \begin{equation}\label{leib} \{h(z)k(z),l(z)\}_z=h(z)\{k(z),l(z)\}_z+ k(z)\{h(z),l(z)\}_z, \end{equation} and take the integral of this expression. The left hand side turns out to be zero due to \eqref{eqn}, and the integrals on the right hand side of \eqref{leib} give exactly \eqref{eqn-2} after a reordering. Let us apply the identity \eqref{eqn-2} to the first pairing in the last equality in \eqref{Cala}. We have that \begin{equation} \label{La-1} \begin{split} & \int f_3(z_1,z_2,z_3) \Big( \{K_3,L_3\}+\{K_3,L_2^{(3)}\}+\{K_3,L_1^{(3)}\}+\{K_2^{(3)},L_3\}+\{K_1^{(3)},L_3\} \\ &\hspace{2 cm} +\{K_2^{(3)},L_2^{(3)}\}\Big )(z_1,z_2,z_3) dz_1dz_2dz_3 \\ &\hspace{1 cm} =\int \Big( K_3 \{L_3,f_3\} (z_1,z_2,z_3) + K_3 \{L_2^{(3)},f_3\} (z_1,z_2,z_3) + K_3 \{L_1^{(3)},f_3\} (z_1,z_2,z_3)\Big) dz_1dz_2dz_3 \\ &\hspace{2 cm} +\int K_2(z_1,z_2)\Big(6\int\{L_3,f_3\}_{z_1,z_2}dz_3 \Big) dz_1dz_2 + \int K_1(z_1)\Big(3\int\{L_3,f_3\}_{z_1}dz_2dz_3 \Big) dz_1 \\ &\hspace{2 cm} +2\int K_2(z_1,z_2) \{L_2,f_2\}_{z_1,z_2} dz_1dz_2 \\ &\hspace{2 cm} + \int K_2(z_1,z_2)\Big(12\int\{L_2(z_1,z_3)+L_2(z_2,z_3),f_3\}_{z_1,z_2}dz_3 \Big) dz_1dz_2, \end{split} \end{equation} where we have employed the identities \eqref{eqn} and \eqref{eqn-2} and the definitions of the moments in \eqref{moments}. In a similar way, we compute the pairings on the last line of \eqref{Cala} as follows \begin{equation}\label{La-2} \begin{split} & \int f_2(z_1,z_2) \Big(\{K_2,L^{(2)}_1\}_{z_1,z_2} +\{K^{(2)}_1,L_2\}_{z_1,z_2} \Big)(z_1,z_2) dz_1dz_2 + \int f_1(z_1) \{K_1,L_1\}_{z_1}(z_1) dz_1 \\ &\hspace{1 cm} = \int K_2 (z_1,z_2) \{L^{(2)}_1 ,f_2\} _{z_1,z_2} (z_1,z_2)dz_1dz_2 + \int K_1 (z_1) \Big(2 \int \{L_2(z_1,z_2),f_2(z_1,z_2)\}_{z_1} dz_2 \Big) dz_1 \\ &\hspace{2 cm} + \int K_1(z_1) \{L_1,f_1 \}_{z_1}(z_1) dz_1. \end{split} \end{equation} In \eqref{La-1} and \eqref{La-2}, we collect the terms involving $K_3$, $K_2$ and $K_1$ and then arrive at the coadjoint flow \begin{equation} ad^*_{(L_3,L_2,L_1)}(f_3,f_2,f_1)= (\tilde{f}_3,\tilde{f}_2,\tilde{f}_1) \end{equation} where \begin{equation} \begin{split} \tilde{f}_3(z_1,z_2,z_3)&= \{L_3+L_2^{(3)}+L_1^{(3)},f_3\} (z_1,z_2,z_3), \\ \tilde{f}_2(z_1,z_2)&= \{L_1^{(2)}+2L_2, f_2\}_{z_1,z_2}(z_1,z_2) + 12\int \{L_2(z_1,z_3)+L_2(z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2} dz_3 \\ &\qquad+ 6\int \{L_3,f_3\}_{z_1,z_2} (z_1,z_2,z_3) dz_3, \\ \tilde{f}_1(z_1)&= \{L_1,f_1\}_{z_1}(z_1)+2\int \{L_2(z_1,z_2),f_2(z_1,z_2)\}_{z_1}dz_2+3 \int \{L_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1}dz_2dz_3. \end{split} \end{equation} \textbf{Lie-Poisson equation.} We now consider the following Hamiltonian functional \begin{equation} \mathcal{H}(f_3,f_2,f_1)=\int H_3(z_1,z_2,z_3)f_3(z_1,z_2,z_3) dz_1dz_2dz_3+ \int \frac{1}{2} H_2(z_1,z_2) f_2(z_1,z_2) dz_1dz_2 + \int H_1(z_1)f_1(z_1)dz_1.
\end{equation} Then we have that \begin{equation} \frac{\delta \mathcal{H}}{\delta (f_3,f_2,f_1)} =\Big(\frac{\delta \mathcal{H}}{\delta f_3}, \frac{\delta \mathcal{H}}{\delta f_2}, \frac{\delta \mathcal{H}}{\delta f_1} \Big) =\big(H_3(z_1,z_2,z_3), \frac{1}{2}H_2(z_1,z_2), H_1(z_1) \big)\in \C{A}, \end{equation} which corresponds to the energy function $H$ in \eqref{en-func} under the mapping $\alpha$ in \eqref{alpha-1}. If the triple $(H_3,(1/2)H_2,H_1)$ is substituted into the coadjoint action, then we arrive at \begin{equation}\label{LP-BBGKY} \frac{\partial}{\partial t} (f_3,f_2,f_1)= ad^*_{{\delta \mathcal{H}}/{\delta (f_3,f_2,f_1)}}(f_3,f_2,f_1) = ad^*_{(H_3,(1/2)H_2,H_1)}(f_3,f_2,f_1). \end{equation} A direct calculation proves that the coadjoint flow \eqref{LP-BBGKY} is precisely the system of equations \eqref{Dyn-f-1}, \eqref{Dyn-f-2} and \eqref{Vlasov} governing the dynamics of the moments. \subsection{BBGKY Hierarchy as a Matched Pair}\label{D-Alg-BBGKY} Recall the direct sum $\C{A}=A_3\oplus A_2 \oplus A_1$ in \eqref{A-3}. One evident decomposition of $\C{A}$ is given by \begin{equation} \label{a3} \C{A}=\mathfrak{g}_{32}\oplus\mathfrak{h}_1, \qquad \qquad \mathfrak{g}_{32}= A_3\oplus A_2 \text{ and } \mathfrak{h}_1=A_1. \end{equation} In the present subsection, we examine this realization from the matched pair decomposition point of view. \textbf{Decomposition of the Lie algebra.} It is a direct calculation to show that the Lie bracket \eqref{hiearchybracket} is closed when restricted to the constitutive subspaces $\mathfrak{g}_{32}$ and $\mathfrak{h}_1$. This shows that both $\mathfrak{g}_{32}$ and $\mathfrak{h}_1$ are Lie subalgebras. So, as a manifestation of the Universal Lemma \ref{universal-prop}, this decomposition can be written as a matched pair decomposition, as introduced in Subsection \ref{doublecross}. We first determine the Lie algebra brackets on $\mathfrak{g}_{32}$ and $\mathfrak{h}_1$ by restricting the bracket \eqref{hiearchybracket} to the subspaces $\mathfrak{g}_{32}$ and $\mathfrak{h}_1$ \begin{equation} \label{Alg-32-1} \begin{split} [\bullet,\bullet]_{32}&: \mathfrak{g}_{32}\otimes \mathfrak{g}_{32}\longrightarrow \mathfrak{g}_{32}, \qquad [(K_3,K_2),(L_3,L_2)]_{32}= \Big(\{K_3+K_2^{(3)},L_3\}+\{K_3+K_2^{(3)},L_2^{(3)}\}, 0\Big), \\ [\bullet,\bullet]_{1}&: \mathfrak{h}_1\otimes \mathfrak{h}_1\longrightarrow \mathfrak{h}_1, \qquad [K_1,L_1]_1=\{K_1,L_1\}_{z_1}, \end{split} \end{equation} respectively. In order to compute the mutual actions, we recall the identity \eqref{bracketactions}. In the present case, we compute \begin{equation} \big(K_1 \vartriangleright (L_3,L_2)\big) \oplus \big( K_1 \vartriangleleft (L_3,L_2) \big):=[(0,0,K_1),(L_3,L_2,0)]_{\C{A}} = \big( \{K_1^{(3)},L_3\},\{K_1^{(2)},L_2\}_{z_1,z_2} \big) \oplus 0 \end{equation} and then read off the left action of $\mathfrak{h}_1$ on $\mathfrak{g}_{32}$ and the right action of $\mathfrak{g}_{32}$ on $\mathfrak{h}_1$ as \begin{equation} \label{Act-32-1} \begin{split} \vartriangleright &:\mathfrak{h}_1 \otimes \mathfrak{g}_{32}\longrightarrow \mathfrak{g}_{32},\qquad K_1 \vartriangleright (L_3,L_2)= \big( \{K_1^{(3)},L_3\},\{K_1^{(2)},L_2\}_{z_1,z_2} \big), \\ \vartriangleleft &: \mathfrak{h}_1 \otimes \mathfrak{g}_{32}\longrightarrow \mathfrak{h}_1, \qquad K_1 \vartriangleleft (L_3,L_2)=0, \end{split} \end{equation} respectively. Notice that the right action $\vartriangleleft$ is trivial, so that the Lie algebra $\C{A}=\mathfrak{g}_{32}\rtimes \mathfrak{h}_1$ is a semidirect product Lie algebra.
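As a quick consistency check on this semidirect product structure, the following \texttt{sympy} sketch verifies, for a one-degree-of-freedom particle and for our own illustrative choice of $L_2$, that the $A_2$-component $\{K_1^{(2)},L_2\}_{z_1,z_2}$ of the left action \eqref{Act-32-1} is again a symmetric function, so that the action of $\mathfrak{h}_1$ indeed lands in $\mathfrak{g}_{32}$.
\begin{verbatim}
import sympy as sp

q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')

def pb(f, g, pairs):             # Poisson bracket in the listed (q, p) pairs
    return sum(sp.diff(f, qi) * sp.diff(g, pi)
               - sp.diff(f, pi) * sp.diff(g, qi) for qi, pi in pairs)

K1 = sp.Function('K')(q1, p1)                        # generic element of A_1
L2 = (q1 - q2)**2 + p1 * p2                          # assumed symmetric L_2
K1_2 = K1 + K1.subs({q1: q2, p1: p2}, simultaneous=True)  # embedding A_1 -> A_2

action = pb(K1_2, L2, [(q1, p1), (q2, p2)])          # A_2-slot of K_1 |> (L_3, L_2)

swap = {q1: q2, q2: q1, p1: p2, p2: p1}
assert sp.simplify(action - action.subs(swap, simultaneous=True)) == 0
\end{verbatim}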
The Lie bracket $[\bullet,\bullet]_{\C{A}}$ in \eqref{hiearchybracket} admits the following decomposition \begin{equation} [(K_3,K_2)\oplus K_1,(L_3,L_2)\oplus L_1]=\big([(K_3,K_2),(L_3,L_2)]_{32}+ K_1 \vartriangleright (L_3,L_2)-L_1 \vartriangleright (K_3,K_2) \big )\oplus[K_1,L_1]_{1}. \end{equation} Here, the brackets $[\bullet,\bullet]_{32}$ and $[\bullet,\bullet]_{1}$ are the ones in \eqref{Alg-32-1}, and the left action is in \eqref{Act-32-1}. This realization is precisely in the matched pair Lie bracket form \eqref{mpla} where the right action $\vartriangleleft$ is trivial. Let us now apply the matched pair Lie-Poisson framework \eqref{LiePoissonongh} to the BBGKY dynamics. \textbf{Decomposition of BBGKY dynamics.} The dual spaces of the constitutive Lie subalgebras $\mathfrak{g}_{32}$ and $\mathfrak{h}_1$ are $\mathfrak{g}_{32}^*=A_3^*\oplus A_2^* $ and $\mathfrak{h}_1^*=A_1^*$, respectively. Accordingly, we can write $\C{A}^*=\mathfrak{g}_{32}^*\oplus \mathfrak{h}_1^*$. The coadjoint action of $\mathfrak{g}_{32}$ on $\mathfrak{g}_{32}^*$, and the coadjoint action of $\mathfrak{h}_{1}$ on $\mathfrak{h}_{1}^*$ are \begin{equation} \begin{split} &ad^*_{(L_3,L_2)}(f_3,f_2)=\Big(\{L_3+L_2^{(3)},f_3\}, 2\{L_2(z_1,z_2) ,f_2(z_1,z_2) \}_{z_1,z_2} \\ & \hspace{2cm} + 6\int \{L_3(z_1,z_2,z_3),f_3(z_1,z_2,z_3)\}_{z_1,z_2}dz_3 + 12\int\{L_2(z_1,z_3)+L_2(z_2,z_3),f_3\}_{z_1,z_2}dz_3\Big), \\ &ad^*_{K_1}f_1=\{K_1(z_1) ,f_1(z_1) \}_{z_1}, \end{split} \end{equation} respectively. Recall the mutual actions of $\mathfrak{g}_{32}$ and $\mathfrak{h}_{1}$ on each other given in \eqref{Act-32-1}. The duals of these actions are computed to be \begin{equation} \begin{split} \overset{\ast }{\vartriangleleft} &:\mathfrak{g}_{32}^*\otimes \mathfrak{h}_{1} \longrightarrow \mathfrak{g}_{32}^*, \qquad (f_3,f_2)\overset{\ast }{\vartriangleleft}K_1 = \big( 3\{f_3(z_1,z_2,z_3),K_1(z_1)\} _{z_1},2\{f_2(z_1,z_2),K_1(z_1)\} _{z_1} \big) \\ \overset{\ast }{\vartriangleright} &:\mathfrak{g}_{32} \otimes \mathfrak{h}_{1}^* \longrightarrow \mathfrak{h}_{1}^* ,\qquad (L_3,L_2)\overset{\ast }{\vartriangleright} f_1 =0. \end{split} \end{equation} The mapping $\mathfrak{b}$ in \eqref{b} and its dual \eqref{b*} are computed to be \begin{equation} \begin{split} \mathfrak{b}_{(L_3,L_2)}&: \mathfrak{h}_{1} \longrightarrow \mathfrak{g}_{32} , \qquad \mathfrak{b}_{(L_3,L_2)}(K_1):=K_1 \vartriangleright (L_3,L_2) \\ \mathfrak{b}_{(L_3,L_2)}^*&: \mathfrak{g}_{32} ^* \longrightarrow \mathfrak{h}_{1}^*, \qquad \mathfrak{b}_{(L_3,L_2)}^*(f_3,f_2)=3\int\{L_3,f_3\}_{z_1}dz_2dz_3 + 2 \int\{L_2,f_2\}_{z_1}dz_2. \end{split} \end{equation} Since the right action is trivial, both the mapping $\mathfrak{a}$ in \eqref{a} and its dual \eqref{a*} are trivial. It is now straightforward to adapt the matched pair Lie-Poisson equations \eqref{LPEgh} to the present setting and determine the coadjoint flow as \begin{equation}\label{LPEgh-BBGKY-MP} \begin{split} \frac{d (f_3,f_2)}{dt} & = ad^{\ast}_{(L_3,L_2)}(f_3,f_2) - (f_3,f_2)\overset{\ast }{\vartriangleleft}K_1 \\ \frac{d f_1}{dt} &= ad^{\ast}_{K_1} f_1 +\mathfrak{b}_{(L_3,L_2)}^*(f_3,f_2). \end{split} \end{equation} These equations are precisely the coadjoint flow \eqref{LP-BBGKY} realization of the BBGKY dynamics; evidently, they take the classical form if $(L_3,L_2,K_1)=(H_3,(1/2)H_2,H_1)$. \subsection{BBGKY Hierarchy as an Extended Structure}\label{LPe-BBGKY} Recall once more the direct sum $\C{A}=A_3\oplus A_2 \oplus A_1$ in \eqref{A-3}.
We have examined a matched pair decomposition \eqref{a3} of this sum. An alternative decomposition of $\C{A}$ can be given by \begin{equation} \label{a32} \C{A}=\mathfrak{g}_{3}\oplus\mathfrak{h}_{21}, \qquad \qquad \mathfrak{g}_{3}:= A_3 \text{ and } \mathfrak{h}_{21}:= A_2\oplus A_1. \end{equation} \textbf{Decomposition of the Lie algebra.} It is straightforward to see that $\mathfrak{g}_{3}$ is a subalgebra of $\C{A}$ with the induced bracket \begin{equation} \label{Alg-3} [\bullet,\bullet]_{3}: \mathfrak{g}_{3}\otimes \mathfrak{g}_{3}\longrightarrow \mathfrak{g}_{3}, \qquad [K_3,L_3]_{3}= \{K_3,L_3\}, \end{equation} where $\{\bullet,\bullet\}$ is the Poisson bracket on $P^3$. On the other hand, the subspace $\mathfrak{h}_{21}$ fails to be so. Indeed, the Lie bracket \eqref{hiearchybracket} of two generic elements $(0,K_2,K_1)$ and $(0,L_2,L_1)$ in $\mathfrak{h}_{21}$ is \begin{equation} [(0,K_2,K_1),(0,L_2,L_1)]_{\C{A}}=\{K_2^{(3)},L_2^{(3)}\} \oplus \big(\{K_2,L^{(2)}_1\}_{z_1,z_2}+\{K^{(2)}_1,L_2\}_{z_1,z_2},\{K_1,L_1\}_{z_1}\big)\in \mathfrak{g}_{3}\oplus\mathfrak{h}_{21}, \end{equation} where the first term on the right hand side falls into $\mathfrak{g}_{3}$ whereas the second and the third terms are in $\mathfrak{h}_{21}$. Accordingly, this decomposition should be analysed in the light of the extended structures presented in Subsection \ref{brzezinski}. Referring to \eqref{Phi-kappa}, we define \begin{equation}\label{k-p-Bre} \begin{split} \Phi &:\mathfrak{h}_{21} \otimes \mathfrak{h}_{21} \longrightarrow \mathfrak{g}_{3}, \qquad ((K_2,K_1),(L_2,L_1))\mapsto \{K_2^{(3)},L_2^{(3)}\} \\ \kappa &: \mathfrak{h}_{21} \otimes \mathfrak{h}_{21} \longrightarrow \mathfrak{h}_{21}, \qquad ((K_2,K_1),(L_2,L_1))\mapsto \big(\{K_2,L^{(2)}_1\}_{z_1,z_2}+\{K^{(2)}_1,L_2\}_{z_1,z_2},\{K_1,L_1\}_{z_1}\big). \end{split} \end{equation} Now, we are ready to compute the mutual \textit{actions} defined in \eqref{Lieact} and \eqref{Lieact1} between the constitutive spaces $\mathfrak{h}_{21}$ and $\mathfrak{g}_3$. To obtain the formulas, we employ the identity \eqref{bracketactions}, that is, \begin{equation} \begin{split} (K_2,K_1) \vartriangleright L_3 \oplus (K_2,K_1) \vartriangleleft L_3 &= [(0,K_2,K_1),(L_3,0,0)]_{\C{A}} \\ &=\big(\{K^{(3)}_2,L_3\}+\{K^{(3)}_1,L_3\}\big) \oplus (0,0) \in \mathfrak{g}_3 \oplus \mathfrak{h}_{21} \end{split} \end{equation} which gives \begin{equation}\label{L-Act-Bre-BBGKY} \begin{split} \vartriangleright&:\mathfrak{h}_{21} \otimes \mathfrak{g}_3 \longrightarrow \mathfrak{g}_3, \qquad (K_2,K_1) \vartriangleright L_3=\{K^{(3)}_2,L_3\}+\{K^{(3)}_1,L_3\} \\ \vartriangleleft&: \mathfrak{h}_{21} \otimes \mathfrak{g}_3 \longrightarrow \mathfrak{h}_{21}, \qquad (K_2,K_1) \vartriangleleft L_3=(0,0).
\end{split} \end{equation} Due to the Universal Lemma \ref{uni-prop-bre}, the decomposition $\mathfrak{g}_{3}\oplus\mathfrak{h}_{21}$ of the Lie algebra $\C{A}$ yields a decomposition of the Lie bracket $[\bullet,\bullet]_{\C{A}}$ given in \eqref{hiearchybracket} into the extended Lie bracket form of Subsection \ref{brzezinski}, where the right action $\vartriangleleft$ is trivial: \begin{equation} \label{mpla-2-cocyc-1-BBGKY} \begin{split} &[K_3 \oplus(K_2,K_1) ,L_3 \oplus(L_2,L_1)]_{_\Phi\bowtie}=\big( \{K_3,L_3\}+(K_2,K_1) \vartriangleright L_3-(L_2,L_1)\vartriangleright K_3 \\ &\hspace{6cm}+ \Phi((K_2,K_1) ,(L_2,L_1) ) \big)\oplus \kappa((K_2,K_1) ,(L_2,L_1) ), \end{split} \end{equation} where the left action is the one given in \eqref{L-Act-Bre-BBGKY} whereas the $\Phi$ and $\kappa$ mappings are those in \eqref{k-p-Bre}. \textbf{An alternative decomposition of the Lie algebra.} The hierarchy of the moment functions suggests an alternative formulation of the $\Phi$ and $\kappa$ mappings in \eqref{k-p-Bre}. This is due to the fact that the term $\{K_2^{(3)},L_2^{(3)}\}$ can be written as a sum of some terms in $\mathfrak{h}_{21}$ and some terms in $\mathfrak{g}_{3}$. Indeed, \begin{equation} \begin{split} \{K_2^{(3)},L_2^{(3)}\}&=2(\{K_2(z_1,z_2),L_2(z_1,z_2)\}_{z_1,z_2})^{(3)}(z_1,z_2,z_3) \\ &\qquad +4\{K_2(z_1,z_2),L_2(z_1,z_3)+L_2(z_2,z_3) \}_{z_1,z_2} + 4\{K_2(z_1,z_3),L_2(z_1,z_2)+L_2(z_2,z_3) \}_{z_1,z_3} \\&\qquad + 4\{K_2(z_2,z_3),L_2(z_1,z_2)+L_2(z_1,z_3)\}_{z_2,z_3}. \end{split} \end{equation} Here, as depicted in the display, the first term on the right hand side can be written as the image of the symmetric function $\{K_2(z_1,z_2),L_2(z_1,z_2)\}$ under the embedding $A_2 \rightarrow A_3$ given in \eqref{embedding-BBGKY}. Accordingly, instead of the $\Phi$ and $\kappa$ mappings in \eqref{k-p-Bre}, we can propose the following alternatives \begin{equation}\label{k-p-Bre-alt} \begin{split} \tilde{\Phi} &:\mathfrak{h}_{21} \otimes \mathfrak{h}_{21} \longrightarrow \mathfrak{g}_{3}, \\ & ((K_2,K_1),(L_2,L_1))\mapsto 4\{K_2(z_1,z_2),L_2(z_1,z_3)+L_2(z_2,z_3) \} + 4\{K_2(z_1,z_3),L_2(z_1,z_2)+L_2(z_2,z_3) \} \\&\hspace{7cm}+ 4\{K_2(z_2,z_3),L_2(z_1,z_2)+L_2(z_1,z_3) \} \\ \tilde{\kappa} &: \mathfrak{h}_{21} \otimes \mathfrak{h}_{21} \longrightarrow \mathfrak{h}_{21}, \\ & ((K_2,K_1),(L_2,L_1))\mapsto \Big(\{K_2(z_1,z_2),L_1(z_1)+L_1(z_2)\}_{z_1,z_2}+\{K_1(z_1)+K_1(z_2),L_2(z_1,z_2)\}_{z_1,z_2}\\&\hspace{7cm}+2\{K_2(z_1,z_2),L_2(z_1,z_2)\}_{z_1,z_2} ,\{K_1(z_1),L_1(z_1)\}_{z_1}\Big). \end{split} \end{equation} Evidently, this observation yields an alternative Lie bracket on $\C{A}$ as well. We denote it with a tilde, $[ \bullet , \bullet ]_{\tilde{\Phi}\bowtie}$, and record it as follows \begin{equation} \label{mpla-2-cocyc-1-BBGKY-2} \begin{split} &[ K_3 \oplus(K_2,K_1) , L_3 \oplus(L_2,L_1) ]_{\tilde{\Phi}\bowtie}=\big( \{K_3,L_3\}+(K_2,K_1) \vartriangleright L_3-(L_2,L_1)\vartriangleright K_3 \\ &\hspace{6cm}+ \tilde{\Phi}((K_2,K_1) ,(L_2,L_1) ) \big)\oplus \tilde{\kappa}((K_2,K_1) ,(L_2,L_1) ), \end{split} \end{equation} where $\{K_3,L_3\}$ is the Poisson bracket on $P^3$, and $\vartriangleright$ is the left action in \eqref{L-Act-Bre-BBGKY}. \textbf{Decomposition of the dynamics: $\C{A}^*=\mathfrak{g}_{3}^*\oplus\mathfrak{h}_{21}^*$.} We start with the dualization of the mutual actions in \eqref{L-Act-Bre-BBGKY}. Notice that the right action is trivial, so that it induces a trivial dual action.
For the left action, we compute the dual action as \begin{equation} \overset{\ast }{\vartriangleleft}: \mathfrak{g}_{3}^*\otimes\mathfrak{h}_{21}\longrightarrow \mathfrak{g}_{3}^*, \qquad f_3 \overset{\ast }{\vartriangleleft} (K_2,K_1)=\{f_3,K_2^{(3)}\}+ \{f_3,K_1^{(3)}\}. \end{equation} Using the left action in \eqref{L-Act-Bre-BBGKY}, and in the light of the definitions in \eqref{b} and \eqref{b*}, we compute the following mapping along with its dual \begin{equation} \begin{split} \mathfrak{b}_{L_3}&:\mathfrak{h}_{21}\longrightarrow \mathfrak{g}_{3},\qquad \mathfrak{b}_{L_3}(K_2,K_1)=(K_2,K_1) \vartriangleright L_3=\{K^{(3)}_2,L_3\}+\{K^{(3)}_1,L_3\} \\ \mathfrak{b}^*_{L_3}&:\mathfrak{g}_{3}^* \longrightarrow \mathfrak{h}_{21}^* ,\qquad \mathfrak{b}^*_{L_3}f_3=\big(6\int \{L_3,f_3 \}_{z_1,z_2} dz_3, 3\int \{L_3,f_3 \}_{z_1} dz_2dz_3 \big). \end{split} \end{equation} Further, according to \eqref{kappan} and \eqref{phin}, by freezing the first entries of $\tilde{\Phi}$ and $\tilde{\kappa}$ in \eqref{k-p-Bre-alt}, we arrive at linear mappings. One of these mappings is $\tilde{\Phi}_{(K_2,K_1)}$, from $\mathfrak{h}_{21}$ to $\mathfrak{g}_{3}$, and the other is $\tilde{\kappa}_{(K_2,K_1)}$, from $\mathfrak{h}_{21}$ to $\mathfrak{h}_{21}$. Dualizations of these mappings result in \begin{equation} \begin{split} &\tilde{\Phi}_{(K_2,K_1)}^*: \mathfrak{g}_{3}^* \longrightarrow \mathfrak{h}_{21}^*,\quad \tilde{\Phi}_{(K_2,K_1)}^*f_3=\Big(24 \int \{K_2(z_1,z_3),f_3(z_1,z_2,z_3)\}_{z_1} dz_3 ,0\Big), \\ & \tilde{\kappa}_{(K_2,K_1)}^*:\mathfrak{h}_{21}^* \longrightarrow \mathfrak{h}_{21}^*, \quad \tilde{\kappa}_{(K_2,K_1)}^*(f_2,f_1)=\Big( 2\{K_1(z_1),f_2(z_1,z_2)\}_{z_1} +2\{K_2(z_1,z_2),f_2(z_1,z_2)\}_{z_1,z_2}, \\&\hspace{7cm} 2\int \{K_2(z_1,z_2),f_2(z_1,z_2)\}_{z_1}dz_2+\{K_1(z_1),f_1(z_1)\}_{z_1} \Big). \end{split} \end{equation} Now, recall the decomposed Lie-Poisson equation \eqref{LPEghcocycle}. Since, in the present case, the right action is trivial, we take the terms involving $\overset{\ast }{\vartriangleright}$ and $\mathfrak{a}^*$ as zero. After substituting all these into the Lie-Poisson equation, we have that \begin{equation}\label{LPEghcocycle-BBGKY} \begin{split} & \frac{df _3}{dt} = ad^{\ast}_{K_3}(f _3) - f _3 \overset{\ast }{\vartriangleleft} (K_2,K_1), \\ &\frac{d (f_2,f_1)}{dt} = \tilde{\kappa}^{\ast}_{(K_2,K_1)}(f_2,f_1) + \tilde{\Phi}_{(K_2,K_1)}^*f_3 + \mathfrak{b}^*_{L_3}f_3. \end{split} \end{equation} Here, $ad^{\ast}_{K_3}(f _3)=\{K_3,f_3\}$ is the coadjoint action of $\mathfrak{g}_3$ on its dual space $\mathfrak{g}_3^*$. If we take $K_3=H_3$, $K_2=(1/2)H_2$ and $K_1=H_1$, then the system is exactly the dynamics of the moments in \eqref{Dyn-f-1}, \eqref{Dyn-f-2} and \eqref{Vlasov}, obtained by decomposing the coadjoint flow \eqref{LP-BBGKY}. \section{Coupling of 2-cocycles} \label{Sec-Cop-co} In Subsection \ref{2coc-Sec}, 2-cocycle extensions are exhibited as particular instances of extended structures. In this section, we couple two 2-cocycle extensions under mutual actions. This will be achieved by the matched pair theory of Subsection \ref{doublecross}. Our goal is to explore conditions for a matched pair of 2-cocycles to be a 2-cocycle of a matched pair. We shall further study dynamics on the coupled system to obtain the Lie-Poisson equations of the collective motion. \subsection{Coupling of 2-cocycle Extensions} \label{centrallyextend} We start with two Lie algebras, say $\mathfrak{l}$ and $\mathfrak{k}$, and two vector spaces $V$ and $W$.
Assume a $V$-valued 2-cocycle $\varphi$ on $\mathfrak{l}$, and a $W$-valued 2-cocycle $\phi$ on $\mathfrak{k}$ given by \begin{equation}\label{phi-psi} \varphi: \mathfrak{l} \times \mathfrak{l} \rightarrow V, \qquad \phi:\mathfrak{k} \times \mathfrak{k} \rightarrow W. \end{equation} Further, we consider a left action of the Lie algebra $\mathfrak{l}$ on the vector space $V$, and a left action of the Lie algebra $\mathfrak{k}$ on the vector space $W$, that is, \begin{equation} \label{actionsoflk} \begin{split} \downharpoonleft &:\mathfrak{l}\otimes V \rightarrow V,\qquad l \otimes v \mapsto l \downharpoonleft v, \\ \downharpoonright &:\mathfrak{k} \otimes W \rightarrow W,\qquad k \otimes w \mapsto k \downharpoonright w. \\ \end{split} \end{equation} If these actions are compatible with the 2-cocycle maps as in \eqref{ext-act}, then one arrives at the following 2-cocycle extensions \begin{equation}\label{2-2-coc} \mathfrak{g}:=V {_\varphi\rtimes} \: \mathfrak{l},\qquad \mathfrak{h} :=W {_\phi\rtimes} \: \mathfrak{k}. \end{equation} In the light of the discussion in Subsection \ref{2coc-Sec}, after suitable modifications of \eqref{AAAAA}, we have the following Lie algebra brackets on the extended Lie algebras $\mathfrak{g}$ and $\mathfrak{h}$ \begin{equation} \label{bracketontwococ} \begin{split} [v \oplus l, v' \oplus l']_{_\varphi\rtimes}&= \big(l \downharpoonleft v'-l' \downharpoonleft v+ \varphi(l,l') \big) \oplus[l,l'], \\ [w \oplus k, w' \oplus k']_{_\phi\rtimes}&= \big(k \downharpoonright w'-k' \downharpoonright w+ \phi(k,k') \big) \oplus[k,k'], \\ \end{split} \end{equation} respectively. Here, the bracket $[l,l']$ is the Lie algebra bracket on $\mathfrak{l}$ whereas the bracket $[k,k']$ is the Lie bracket on $\mathfrak{k}$. \textbf{Matching of 2-cocycles.} We now examine the matched pair of the 2-cocycle extensions $\mathfrak{g}=V {_\varphi\rtimes} \: \mathfrak{l}$ and $\mathfrak{h}=W {_\phi\rtimes} \: \mathfrak{k}$. To this end, we consider the following three sets of mappings: \textbf{(1)} We first consider mutual Lie algebra actions of $\mathfrak{l}$ and $\mathfrak{k}$ on each other \begin{equation}\label{Lieact-k-l} \begin{split} \blacktriangleright &:\mathfrak{k}\otimes \mathfrak{l}\rightarrow \mathfrak{l},\qquad k \otimes l \mapsto k \blacktriangleright l, \\ \blacktriangleleft &:\mathfrak{k}\otimes \mathfrak{l}\rightarrow \mathfrak{k} ,\qquad k \otimes l \mapsto k \blacktriangleleft l. \end{split} \end{equation} We assume that these actions satisfy the compatibility condition \eqref{comp-mpa} and hence determine a matched pair Lie algebra, denoted by $\mathfrak{l} \bowtie \mathfrak{k}$. \textbf{(2)} In order to extend the mutual actions given in \eqref{Lieact-k-l} to the product spaces $\mathfrak{g}$ and $\mathfrak{h}$ in \eqref{2-2-coc}, we introduce a right action of $\mathfrak{l}$ on $W$ and a left action of $\mathfrak{k}$ on $V$ given by \begin{equation}\label{actionsofkl} \begin{split} \curvearrowright &:\mathfrak{k}\otimes V \rightarrow V,\qquad k \otimes v \mapsto k \curvearrowright v, \\ \curvearrowleft&:W \otimes \mathfrak{l} \rightarrow W,\qquad w \otimes l \mapsto w \curvearrowleft l, \\ \end{split} \end{equation} respectively. \textbf{(3)} In addition, it is always possible to have the following cross representations \begin{equation}\label{epsiot} \begin{split} \epsilon&: \mathfrak{k} \otimes \mathfrak{l} \rightarrow V, \qquad \epsilon(k,l) \in V, \\ \iota&: \mathfrak{k} \otimes \mathfrak{l} \rightarrow W, \qquad \iota(k,l) \in W.
\end{split} \end{equation} Referring to the mappings \eqref{Lieact-k-l}, \eqref{actionsofkl} and \eqref{epsiot}, we define mutual actions of the 2-cocycle extensions $\mathfrak{g}=V {_\varphi\rtimes} \: \mathfrak{l}$ and $\mathfrak{h} =W {_\phi\rtimes} \: \mathfrak{k}$ as follows \begin{equation}\label{left-comp} \begin{split} \vartriangleright &: ~ (W {_\phi\rtimes} \: \mathfrak{k}) \times (V {_\varphi\rtimes} \: \mathfrak{l}) \longrightarrow V {_\varphi\rtimes} ~ \mathfrak{l}, \qquad ((w\oplus k),(v\oplus l))\mapsto \big( k \curvearrowright v+\epsilon(k,l)\big) \oplus \big(k \blacktriangleright l \big), \\ \vartriangleleft &: ~ (W {_\phi\rtimes} \: \mathfrak{k}) \times (V {_\varphi\rtimes} \: \mathfrak{l}) \longrightarrow W {_\phi\rtimes} \: \mathfrak{k}, \qquad ((w \oplus k),(v \oplus l))\mapsto \big( w \curvearrowleft l+\iota(k,l)\big)\oplus \big(k \blacktriangleleft l \big). \end{split} \end{equation} It is possible to see that $\vartriangleright$ is a left action whereas $\vartriangleleft$ is a right action. In order to construct a matched pair of $\mathfrak{h} =W {_\phi\rtimes} \: \mathfrak{k}$ and $\mathfrak{g}=V {_\varphi\rtimes} \: \mathfrak{l}$, one needs to verify the compatibility conditions in \eqref{comp-mpa}. A direct observation gives that, for the actions \eqref{left-comp}, the compatibility conditions \eqref{comp-mpa} consist of four equations. Two of them, those for the second terms in the decompositions, involve only the left $\blacktriangleright$ and the right $\blacktriangleleft$ actions in \eqref{Lieact-k-l}. These two equations are precisely the matched pair compatibility conditions for $\mathfrak{l} \bowtie \mathfrak{k}$. Since we assume that $\mathfrak{l} \bowtie \mathfrak{k}$ is a matched pair, these two compatibility conditions are automatically satisfied. So, we are left with the other two compatibility conditions.
For any $k,k'$ in $\G{k}$, $l,l'$ in $\G{l}$, $v,v'$ in $V$, and $w,w'$ in $W$, these equations are computed to be \begin{equation} \begin{split} & k \curvearrowright(l \downharpoonleft v'-l' \downharpoonleft v+\varphi(l,l'))+\epsilon(k,\lbrack l, l' \rbrack)=l \downharpoonleft(k \curvearrowright v'+\epsilon(k,l'))-(k \blacktriangleright l')\downharpoonleft v+ \varphi(l,k \blacktriangleright l') \\ &\hspace{4cm} -l' \downharpoonleft(k \curvearrowright v+\epsilon(k,l))+(k\blacktriangleright l) \downharpoonleft v'-\varphi(l',k \blacktriangleright l) +(k \blacktriangleleft l) \curvearrowright v'-(k \blacktriangleleft l')\curvearrowright v\\ &\hspace{4cm} +\epsilon(k \blacktriangleleft l,l')-\epsilon(k \blacktriangleleft l',l), \\ & (k \downharpoonright w'-k' \downharpoonright w+\phi(k,k'))\curvearrowleft l+\iota([k,k'],l)=k\downharpoonright(w'\curvearrowleft l+\iota(k',l))-(k' \blacktriangleleft l)\downharpoonright w+\phi(k,k' \blacktriangleleft l) \\ &\hspace{4cm}-k'\downharpoonright(w \curvearrowleft l+\iota(k,l))+(k \blacktriangleleft l) \downharpoonright w'-\phi(k',k\blacktriangleleft l) -w \curvearrowleft(k' \blacktriangleright l)\\ &\hspace{4cm} +w'\curvearrowleft(k \blacktriangleright l)+\iota(k',k \blacktriangleright l) -\iota(k,k'\blacktriangleright l), \end{split} \end{equation} where $\downharpoonleft$ and $\downharpoonright$ are the left actions in \eqref{actionsoflk}, $\blacktriangleright$ and $\blacktriangleleft$ are the actions in \eqref{Lieact-k-l}, $\curvearrowright$ and $\curvearrowleft$ are the actions in \eqref{actionsofkl}, and $\epsilon$ and $\iota$ are the mappings in \eqref{epsiot}. Assuming that these conditions are satisfied, we define the following matched pair Lie algebra \begin{equation}\label{mp-2c} \mathfrak{g} \bowtie \mathfrak{h} = (V {_\varphi\rtimes} \: \mathfrak{l})\bowtie (W {_\phi\rtimes} \: \mathfrak{k}). \end{equation} To compute the matched Lie algebra bracket on this total space, we recall the general formula of the matched Lie bracket in \eqref{mpla} and then, employing the actions in \eqref{left-comp}, compute \begin{equation}\label{bracketlk-1} \big[\big ( (v\oplus l)\oplus(w\oplus k)\big),\big((v'\oplus l') \oplus (w'\oplus k')\big)\big]_{\bowtie}= (\bar{v}\oplus \bar{l})\oplus(\bar{w}\oplus \bar{k}) \end{equation} where \begin{equation}\label{bracketlk-2} \begin{split} & \bar{v}= l \downharpoonleft v'-l'\downharpoonleft v+k \curvearrowright v'-k'\curvearrowright v+\epsilon(k,l') -\epsilon(k',l)+\varphi(l,l'), \\ & \bar{l}= [l,l']+k \blacktriangleright l'-k' \blacktriangleright l , \\ & \bar{w}= k \downharpoonright w'-k' \downharpoonright w+w \curvearrowleft l'-w' \curvearrowleft l+\iota(k,l') -\iota(k',l)+\phi(k,k') , \\ & \bar{k}=[k,k']+k \blacktriangleleft l'-k' \blacktriangleleft l. \end{split} \end{equation} \textbf{Matched pair as a 2-cocycle.} Now, we investigate under which conditions the matched pair $\mathfrak{g} \bowtie \mathfrak{h} $ in \eqref{mp-2c} turns out to be a 2-cocycle extension itself. To show this, we need to determine a left action and a 2-cocycle. Let us determine these one by one. \textbf{(Left action).} Recall the mutual actions in \eqref{Lieact-k-l} and the matched pair algebra $\mathfrak{l} \bowtie \mathfrak{k}$. It is evident that $\mathfrak{l} \bowtie \mathfrak{k}$ is a Lie subalgebra of $\mathfrak{g} \bowtie \mathfrak{h} $.
Define a left action of $\mathfrak{l} \bowtie \mathfrak{k}$ on the product space $V \oplus W$ as follows: \begin{equation}\label{fifthact} {\cdot\kern-.33em\triangleright}: (\mathfrak{l} \bowtie \mathfrak{k}) \times (V \oplus W) \longrightarrow (V \oplus W), \quad (l\oplus k) ~ {\cdot\kern-.33em\triangleright} ~ (v\oplus w) =\big(l \downharpoonleft v+k \curvearrowright v \big) \oplus\big( -w \curvearrowleft l+k \downharpoonright w\big), \end{equation} where we have employed the left actions $\downharpoonleft$ and $\downharpoonright$ in \eqref{actionsoflk}, and the actions $\curvearrowright$ and $\curvearrowleft$ in \eqref{actionsofkl}. To be a left action, \eqref{fifthact} needs to satisfy the first condition in \eqref{matchpaircond}. We compute this as \begin{equation} \begin{split} (k \blacktriangleright l')\downharpoonleft v-(k' \blacktriangleright l)\downharpoonleft v+(k \blacktriangleleft l') \curvearrowright v-(k' \blacktriangleleft l) \curvearrowright v &=l \downharpoonleft (k' \curvearrowright v)-l' \downharpoonleft (k \curvearrowright v) \\ &\quad + k \curvearrowright (l' \downharpoonleft v)-k' \curvearrowright(l \downharpoonleft v) \\ \end{split} \end{equation} for the first entry in \eqref{fifthact}. Notice that this is an equation defined on the vector space $V$. For the second entry, we have \begin{equation} \begin{split} -w \curvearrowleft(k \blacktriangleright l')+w \curvearrowleft(k' \blacktriangleright l)+(k \blacktriangleleft l')\downharpoonright w-(k' \blacktriangleleft l)\downharpoonright w&= -(k' \downharpoonright w)\curvearrowleft l+(k \downharpoonright w)\curvearrowleft l' \\ &-k \downharpoonright(w \curvearrowleft l')+k'\downharpoonright(w \curvearrowleft l) \end{split} \end{equation} where $\blacktriangleright$ and $\blacktriangleleft$ are the mutual actions exhibited in \eqref{Lieact-k-l}.
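In coordinates, the dot action \eqref{fifthact} of a fixed element $l \oplus k$ is block-diagonal on $V \oplus W$: it combines the four actions but never mixes the two vector spaces. The following \texttt{numpy} sketch exhibits this structure; the random matrices are pure placeholders standing in for the four actions, with no compatibility conditions imposed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dimV, dimW = 3, 2

act_l_on_V = rng.standard_normal((dimV, dimV))   # v -> l |- v
act_k_on_V = rng.standard_normal((dimV, dimV))   # v -> k ~> v
act_l_on_W = rng.standard_normal((dimW, dimW))   # w -> w <~ l
act_k_on_W = rng.standard_normal((dimW, dimW))   # w -> k -| w

# (l + k) .|> (v + w) = (l |- v + k ~> v) + (-w <~ l + k -| w)
block = np.block([
    [act_l_on_V + act_k_on_V, np.zeros((dimV, dimW))],
    [np.zeros((dimW, dimV)), -act_l_on_W + act_k_on_W],
])

v, w = rng.standard_normal(dimV), rng.standard_normal(dimW)
out = block @ np.concatenate([v, w])
assert np.allclose(out[:dimV], (act_l_on_V + act_k_on_V) @ v)
assert np.allclose(out[dimV:], -act_l_on_W @ w + act_k_on_W @ w)
\end{verbatim}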
\textbf{(2-cocycle).} Next, we introduce a $(V \oplus W)$-valued 2-cocycle on $\mathfrak{l} \bowtie \mathfrak{k}$, in terms of the 2-cocycles $\varphi$ and $\phi$ given in \eqref{phi-psi}, as follows \begin{equation}\label{theta} \Theta: (\mathfrak{l} \bowtie \mathfrak{k}) \times (\mathfrak{l} \bowtie \mathfrak{k}) \longrightarrow V \oplus W, \quad \Theta((l\oplus k),(l' \oplus k'))=\big(\varphi(l , l')+\epsilon(k,l')-\epsilon(k',l)\big) \oplus \big(\phi(k, k')+\iota(k,l')-\iota(k',l)\big). \end{equation} One needs to impose the compatibility conditions in \eqref{ext-act}, \begin{equation} \label{con-2} \begin{split} 0=&\varphi(l,k''\blacktriangleright l'-k'\blacktriangleright l'')+\epsilon(k,[l'',l'])+\epsilon(k,k''\blacktriangleright l'-k'\blacktriangleright l'')-\epsilon([k'',k'],l)-\epsilon(k''\blacktriangleleft l'-k'\blacktriangleleft l'',l)\\ &- l \downharpoonleft(\varphi(l',l'')+\epsilon(k',l'')-\epsilon(k'',l'))-k \curvearrowright(\varphi(l',l'')+\epsilon(k',l'')-\epsilon(k'',l'))\\ &+\varphi(l',k\blacktriangleright l''-k''\blacktriangleright l)+\epsilon(k',[l,l''])+\epsilon(k',k\blacktriangleright l''-k''\blacktriangleright l)-\epsilon([k,k''],l')-\epsilon(k\blacktriangleleft l''-k''\blacktriangleleft l,l')\\ &- l' \downharpoonleft(\varphi(l'',l)+\epsilon(k'',l)-\epsilon(k,l''))-k' \curvearrowright(\varphi(l'',l)+\epsilon(k'',l)-\epsilon(k,l''))\\ &+\varphi(l'',k'\blacktriangleright l-k\blacktriangleright l')+\epsilon(k'',[l',l])+\epsilon(k'',k'\blacktriangleright l-k\blacktriangleright l')-\epsilon([k',k],l'')-\epsilon(k'\blacktriangleleft l-k\blacktriangleleft l',l'')\\ &- l'' \downharpoonleft(\varphi(l,l')+\epsilon(k,l')-\epsilon(k',l))-k'' \curvearrowright(\varphi(l,l')+\epsilon(k,l')-\epsilon(k',l)),\\ 0=&\phi(k,k''\blacktriangleleft l'-k'\blacktriangleleft l'')+\iota(k,[l'',l'])+\iota(k,k''\blacktriangleright l'-k'\blacktriangleright l'')-\iota([k'',k'],l)-\iota(k''\blacktriangleleft l'-k'\blacktriangleleft l'',l)\\ &+\big(\phi(k',k'')+\iota(k',l'')-\iota(k'',l')\big)\curvearrowleft l-k \downharpoonright\big(\phi(k',k'')+\iota(k',l'')-\iota(k'',l')\big)\\ &+\phi(k',k\blacktriangleleft l''-k''\blacktriangleleft l)+\iota(k',[l,l''])+\iota(k',k\blacktriangleright l''-k''\blacktriangleright l)-\iota([k,k''],l')-\iota(k\blacktriangleleft l''-k''\blacktriangleleft l,l')\\ &+\big(\phi(k'',k)+\iota(k'',l)-\iota(k,l'')\big)\curvearrowleft l'-k' \downharpoonright\big(\phi(k'',k)+\iota(k'',l)-\iota(k,l'')\big)\\ &+\phi(k'',k'\blacktriangleleft l-k\blacktriangleleft l')+\iota(k'',[l',l])+\iota(k'',k'\blacktriangleright l-k\blacktriangleright l')-\iota([k',k],l'')-\iota(k'\blacktriangleleft l-k\blacktriangleleft l',l'')\\ &+\big(\phi(k,k')+\iota(k,l')-\iota(k',l)\big)\curvearrowleft l''-k'' \downharpoonright\big(\phi(k,k')+\iota(k,l')-\iota(k',l)\big). \end{split} \end{equation} We are now ready to define a 2-cocycle extension of the Lie algebra $\mathfrak{l} \bowtie \mathfrak{k}$ via $V \oplus W$. We denote this by \begin{equation} \label{2-co-mp} (V \oplus W) {_\Theta\rtimes} (\mathfrak{l} \bowtie \mathfrak{k}). \end{equation} To arrive at the Lie bracket on this space, one only needs to substitute the explicit definitions of the left action ${\cdot\kern-.33em\triangleright}$ and the 2-cocycle $\Theta$ into the generic formula \eqref{AAAAA} of the Lie bracket for 2-cocycle extensions.
This results in \begin{equation}\label{AAAAA--} \begin{split} &[(v\oplus w) \oplus (l\oplus k), (v '\oplus w ') \oplus (l'\oplus k')]_{_\Theta\rtimes}= \big((l\oplus k) \, {\cdot\kern-.33em\triangleright} \, (v '\oplus w ')-(l'\oplus k') \, {\cdot\kern-.33em\triangleright} \, (v\oplus w) \\ &\hspace{4cm} + \Theta ((l\oplus k),(l'\oplus k')) \big) \oplus[(l\oplus k),(l'\oplus k')], \end{split} \end{equation} where $ {\cdot\kern-.33em\triangleright}$ is the left action in \eqref{fifthact}, and the bracket $[(l\oplus k),(l'\oplus k')]$ is the matched pair Lie bracket on $\mathfrak{l} \bowtie \mathfrak{k}$. It is immediate to see that the extended Lie algebra bracket in \eqref{AAAAA--} is precisely equal to the matched Lie bracket given in \eqref{bracketlk-1} and \eqref{bracketlk-2}, up to a reordering. We are now ready to collect the discussion so far in the following proposition. \begin{proposition}\label{Prop-Gokhan} The matched pair Lie algebra $\G{g}\bowtie \G{h}$, given in \eqref{mp-2c}, of the 2-cocycle extensions $\mathfrak{g}=V {_\varphi\rtimes} \mathfrak{l}$ and $ \mathfrak{h}=W {_\phi\rtimes} \mathfrak{k}$ is a 2-cocycle extension, with the left action ${\cdot\kern-.33em\triangleright}$ in \eqref{fifthact} and the 2-cocycle $\Theta$ in \eqref{theta}; that is, \begin{equation}\label{claim} (V {_\varphi\rtimes} \:\mathfrak{l}) \bowtie (W {_\phi\rtimes} \: \mathfrak{k}) \cong (V\oplus W) {_\Theta\rtimes} \: (\mathfrak{l} \bowtie \mathfrak{k}). \end{equation} \end{proposition} Although we have derived all the mappings and conditions explicitly, there is a short but implicit way to arrive at this proposition by employing the Universal Lemma \ref{universal-prop}. For this, we first embed the Lie subalgebras into the space $(V \oplus W) {_\Theta\rtimes} (\mathfrak{l} \bowtie \mathfrak{k})$ as follows \begin{equation} \begin{split} \mathfrak{g}&\longrightarrow (V \oplus W) {_\Theta\rtimes} (\mathfrak{l} \bowtie \mathfrak{k}), \qquad (v \oplus l) \mapsto (v \oplus 0)\oplus(l \oplus 0) \\ \mathfrak{h}&\longrightarrow (V \oplus W) {_\Theta\rtimes} (\mathfrak{l} \bowtie \mathfrak{k}), \qquad (w\oplus k) \mapsto (0 \oplus w) \oplus( 0 \oplus k). \end{split} \end{equation} Then the Universal Lemma \ref{universal-prop} shows that the total space admits a matched pair decomposition. \textbf{A particular case.} For future reference, we now examine a particular case of Proposition \ref{Prop-Gokhan}. First, we choose the left actions $\downharpoonleft $ and $\downharpoonright$ in \eqref{actionsoflk} to be trivial, so that the Lie brackets on the 2-cocycle extensions in \eqref{bracketontwococ} reduce to \begin{equation} \begin{split} [v \oplus l, v' \oplus l']_{_\varphi\rtimes}&= \varphi(l,l') \oplus[l,l'], \\ [w \oplus k, w' \oplus k']_{_\phi\rtimes}&= \phi(k,k') \oplus[k,k']. \\ \end{split} \end{equation} In addition, consider that the mutual actions of $\G{k}$ and $\G{l}$ in \eqref{Lieact-k-l} and the actions in \eqref{actionsofkl} are all zero. Hence, the mutual actions of $\G{h}$ and $\G{g}$ in \eqref{left-comp} become \begin{equation}\label{heismutual} \begin{split} \vartriangleright &: ((w\oplus k),(v\oplus l))\mapsto \left( \epsilon(k,l) \oplus 0 \right), \\ \vartriangleleft &: ((w\oplus k),(v\oplus l))\mapsto \left(\iota(k,l) \oplus 0 \right).
\end{split} \end{equation} A direct calculation shows that, in the present setting, $\vartriangleright$ is a left action if and only if $\epsilon(k,[l,l'])=0$, and $\vartriangleleft$ is a right action if and only if $\iota([k,k'],l)=0$. Under these assumptions, the matched Lie algebra bracket in \eqref{bracketlk-1} and \eqref{bracketlk-2} reduces to \begin{equation}\label{heisbracket} \begin{split} \big[\big ( (v\oplus l)\oplus (w\oplus k)\big),\big((v'\oplus l')\oplus(w'\oplus k')\big)\big]_{\bowtie}=&\Big(\big(\epsilon(k,l')-\epsilon(k',l)+\varphi(l,l')\big)\oplus 0\Big) \\ &\oplus \Big(\big(\iota(k,l') -\iota(k',l)+\phi(k,k') \big) \oplus 0 \Big) \end{split} \end{equation} where $\epsilon$ and $\iota$ are as in \eqref{epsiot}. In order to implement Proposition \ref{Prop-Gokhan}, we need a left action and a 2-cocycle. Here, according to \eqref{fifthact}, the left action is trivial, whereas the 2-cocycle $\Theta$ is precisely the one in \eqref{theta}. \subsection{Lie-Poisson Dynamics on Duals of Matched 2-cocycle Extensions} \label{twococyclematchedLie} We have exhibited matched pairs of 2-cocycle extensions in the previous subsection. To arrive at Hamiltonian dynamics in the dual picture, we make use of the Lie-Poisson formalism on 2-cocycle extensions, that is, the theory of Subsection \ref{LPD-2c}, where both the Lie-Poisson bracket \eqref{centextLiepois} and the Lie-Poisson equations \eqref{centextLiepois2} for the case of 2-cocycles are available. \textbf{Lie-Poisson brackets.} Let us consider the following notation on the dual spaces \begin{equation}\label{elements} \mu=\mu_{V^*} \oplus \mu_{W^*} \in V^* \oplus W^*,\qquad \nu= \nu_{\mathfrak{l}^*} \oplus \nu_{\mathfrak{k}^*} \in \mathfrak{l}^* \oplus \mathfrak{k} ^*. \end{equation} Substitute the 2-cocycle $\Theta$ in \eqref{theta} and the left action ${\cdot\kern-.33em\triangleright}$ in \eqref{fifthact} into the Lie-Poisson bracket \eqref{centextLiepois}.
Therefore, for this situation, the (plus/minus) Lie-Poisson bracket is \begin{equation} \label{Liepoiscenter} \begin{split} \left\{ \mathcal{H},\mathcal{F}\right\}_{_\Theta\rtimes}(\mu\oplus\nu) = & \pm \Big\langle \nu_{\mathfrak{l}^*} \oplus \nu_{\mathfrak{k}^*} , \big[ \big( \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \big) , \big( \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \big) \big] \Big\rangle \\ & \pm \Big\langle \mu_{V^*} \oplus \mu_{W^*}, \Theta \Big( \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}, \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \Big) \Big \rangle \\ & \pm \Big\langle \mu_{V^*} \oplus \mu_{W^*} , \big( \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}\big) {\cdot\kern-.33em\triangleright} \big( \frac{\delta \mathcal{F}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{F}}{\delta \mu_{W^*}} \big) \Big \rangle \\ & \mp \Big\langle \mu_{V^*} \oplus \mu_{W^*} , \big( \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}}\big) {\cdot\kern-.33em\triangleright} \big( \frac{\delta \mathcal{H}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{H}}{\delta \mu_{W^*}} \big) \Big \rangle, \end{split} \end{equation} where the bracket on the first line is the matched pair Lie bracket on $\G{l}\bowtie\G{k}$. Here, the first pairing is the one between $\G{l}^*\oplus \G{k}^*$ and $\G{l}\bowtie\G{k}$, and the others are the pairings between $V^*\oplus W^*$ and $V\oplus W$. It is perhaps needless to say that we assume all the vector spaces studied here to be reflexive.
If the explicit expressions for the Lie bracket on $\G{l}\bowtie\G{k}$, the left action ${\cdot\kern-.33em\triangleright}$ in \eqref{fifthact}, and the 2-cocycle $\Theta$ in \eqref{theta} are substituted into the bracket \eqref{Liepoiscenter}, one arrives at \begin{equation} \label{Liepoiscenter-2} \begin{split} & \left\{ \mathcal{H},\mathcal{F}\right\}_{_\Theta\rtimes}(\mu\oplus\nu) = \pm \big \langle \nu_{\mathfrak{l}^*}, \big [ \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}, \frac{\delta \mathcal{F}} {\delta \nu_{\mathfrak{l}^*}}\big] +\frac{\delta \mathcal{H}}{ \delta \nu_{\mathfrak{k}^*}} \blacktriangleright \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}-\frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \blacktriangleright \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} \big \rangle \\ & \hspace{3,1cm} \pm \big \langle \nu_{\mathfrak{k}^*},\big \lbrack \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}, \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}}\big \rbrack + \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \blacktriangleleft \frac{\delta \mathcal{F}}{\delta\nu_{\mathfrak{l}^*}} - \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \blacktriangleleft \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} \big \rangle \\ & \hspace{3,1cm} \pm \big \langle \mu_{V^*}, \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} \downharpoonleft \frac{\delta \mathcal{F}}{\delta \mu_{V^*}} + \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \curvearrowright \frac{\delta \mathcal{F}}{\delta \mu_{V^*}} - \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}} \downharpoonleft \frac{\delta \mathcal{H}}{\delta \mu_{V^*}} - \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \curvearrowright \frac{\delta \mathcal{H}}{\delta \mu_{V^*}} \big \rangle \\ & \hspace{3,1cm} \pm \big \langle \mu_{V^*},\epsilon(\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{k}^*}},\frac{\delta \mathcal{F}}{\delta \nu_{ \mathfrak{l}^*}})-\epsilon(\frac{\delta \mathcal{F}}{\delta \nu_{ \mathfrak{k}^*}},\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{l}^*}}) + \varphi(\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}},\frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}) \big \rangle \\ & \hspace{3,1cm} \pm \big \langle \mu_{W^*}, -\frac{\delta \mathcal{F}}{\delta \mu_{W^*}} \curvearrowleft \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}+\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{k}^*}} \downharpoonright \frac{\delta \mathcal{F}}{\delta \mu_{W^*}}+\frac{\delta \mathcal{H}}{\delta \mu_{W^*}}\curvearrowleft\frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}}-\frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \downharpoonright \frac{\delta \mathcal{H}}{\delta \mu_{W^*}} \big\rangle \\ & \hspace{3,1cm} \pm \big \langle \mu_{W^*} , \iota(\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{k}^*}},\frac{\delta \mathcal{F}}{\delta \nu_{ \mathfrak{l}^*}})-\iota(\frac{\delta \mathcal{F}}{\delta \nu_{ \mathfrak{k}^*}},\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{l}^*}})+ \phi(\frac{\delta \mathcal{H}}{\delta \nu_{ \mathfrak{k}^*}},\frac{\delta \mathcal{F}}{\delta \nu_{ \mathfrak{k}^*}}) \big \rangle. \end{split} \end{equation} Here, $\blacktriangleright$ and $\blacktriangleleft$ are the actions in \eqref{Lieact-k-l}, $\curvearrowleft $ and $\curvearrowright$ are those in \eqref{actionsofkl}, whereas $\downharpoonleft $ and $\downharpoonright$ are those in \eqref{actionsoflk}.
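As the labels in \eqref{heismutual} and \eqref{heisbracket} suggest, the simplest instances of this construction are of Heisenberg type. The following \texttt{numpy} sketch is our own toy example of the particular case above: $\mathfrak{l}=\mathfrak{k}=\mathbb{R}^2$ abelian, $V=W=\mathbb{R}$ with the Heisenberg 2-cocycles, all actions trivial, and a single nonvanishing cross term $\epsilon(e_5,e_1)=e_3$. It integrates the resulting (plus) Lie-Poisson equations for an assumed quadratic Hamiltonian and exhibits the coordinates dual to $V$ and $W$ as Casimirs.
\begin{verbatim}
import numpy as np

n = 6                            # e1, e2 span l; e3 spans V; e4, e5 span k; e6 spans W
C = np.zeros((n, n, n))
def set_bracket(i, j, k):        # [e_i, e_j] = e_k, antisymmetrized
    C[i, j, k], C[j, i, k] = 1.0, -1.0

set_bracket(0, 1, 2)             # varphi on l:  [e1, e2] = e3
set_bracket(3, 4, 5)             # phi on k:     [e4, e5] = e6
set_bracket(4, 0, 2)             # epsilon term: [e5, e1] = e3

def grad_H(mu):                  # assumed Hamiltonian, quadratic in the l*, k* coordinates
    return mu * np.array([1, 1, 0, 1, 1, 0.0])

mu = np.array([1.0, 0.5, 2.0, -1.0, 0.3, 1.5])
dt = 1e-3
for _ in range(2000):            # explicit Euler; enough for a sketch
    Pi = np.einsum('ijk,k->ij', C, mu)       # (plus) Lie-Poisson matrix
    mu = mu + dt * Pi @ grad_H(mu)

print(mu[2], mu[5])              # stay exactly at 2.0 and 1.5: Casimirs
\end{verbatim}
Since $e_3$ and $e_6$ are central in this two-step nilpotent algebra, the corresponding rows of the Lie-Poisson matrix vanish identically, which is what the printout confirms.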
In the light of Proposition \ref{Prop-Gokhan}, one can write the bracket \eqref{Liepoiscenter} as a matched pair Lie-Poisson bracket. To this end, by reordering the elements in \eqref{elements}, define \begin{equation} \tilde{\mu}=\mu_{V^*}\oplus \nu_{\mathfrak{l}^*}\in \mathfrak{g}=V {_\varphi\rtimes} \mathfrak{l}, \qquad \tilde{\nu}=\mu_{W^*}\oplus \nu_{\mathfrak{k}^*}\in \mathfrak{h}=W {_\phi\rtimes} \mathfrak{k}. \end{equation} Then the matched pair Lie-Poisson bracket in \eqref{LiePoissonongh} takes the form \begin{equation} \begin{split} \label{LiePoissonongh-1} \left\{ \mathcal{H},\mathcal{F}\right\}_{\bowtie}(\tilde{\mu}\oplus\tilde{\nu}) =& \pm \left\{ \mathcal{H},\mathcal{F}\right\}_{\varphi\rtimes }(\tilde{\mu}) \pm \left\{ \mathcal{H},\mathcal{F}\right\}_{\phi\rtimes }(\tilde{\nu}) \\ & \mp \Big \langle \mu_{V^*}\oplus \nu_{\mathfrak{l}^*} , \big( \frac{\delta \mathcal{H}}{\delta \mu_{W^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \big) \vartriangleright \big( \frac{\delta \mathcal{F}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}} \big) - \big( \frac{\delta \mathcal{F}}{\delta \mu_{W^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \big) \vartriangleright \big( \frac{\delta \mathcal{H}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} \big) \Big \rangle \\ & \mp \Big \langle \mu_{W^*}\oplus \nu_{\mathfrak{k}^*} , \big( \frac{\delta \mathcal{H}}{\delta \mu_{W^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \big) \vartriangleleft \big( \frac{\delta \mathcal{F}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{l}^*}} \big) - \big( \frac{\delta \mathcal{F}}{\delta \mu_{W^*}}\oplus \frac{\delta \mathcal{F}}{\delta \nu_{\mathfrak{k}^*}} \big) \vartriangleleft \big( \frac{\delta \mathcal{H}}{\delta \mu_{V^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} \big) \Big \rangle, \end{split} \end{equation} where the actions $\vartriangleleft$ and $\vartriangleright$ are in \eqref{left-comp}. Notice that the Lie-Poisson brackets on the right hand side of the first line are the individual Lie-Poisson brackets on the dual spaces $\G{g}^*$ and $\G{h}^*$, respectively. The terms in the second line of \eqref{LiePoissonongh-1} are manifestations of the left action of $\G{h}$ on $\G{g}$, whereas the third line is due to the right action of $\G{g}$ on $\G{h}$. A direct computation shows that the Lie-Poisson bracket \eqref{LiePoissonongh-1} is equal to the Lie-Poisson bracket in \eqref{Liepoiscenter-2}. \textbf{Dual actions:} First, define the dual actions of $\curvearrowright$ and $\curvearrowleft$ in \eqref{actionsofkl} as \begin{equation} \label{dualofyanok} \begin{split} \overset{\ast}{\curvearrowleft} &: V^* \otimes \mathfrak{k} \rightarrow V^*, \qquad \langle \mu_{V^*} \overset{\ast}{\curvearrowleft} k,v \rangle=\langle \mu_{V^*}, k \curvearrowright v \rangle, \\ \overset{\ast}{\curvearrowright} &: \mathfrak{l} \otimes W^* \rightarrow W^*, \qquad \langle l \overset{\ast}{\curvearrowright} \mu_{W^*},w \rangle=\langle \mu_{W^*}, w \curvearrowleft l \rangle, \end{split} \end{equation} respectively. Note that $\overset{\ast}{\curvearrowleft}$ is a right action whereas $\overset{\ast}{\curvearrowright}$ is a left action.
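In finite dimensions, all of these dualizations are simple matrix transpositions. The following minimal sketch (in Python with numpy; the action matrix is randomly generated and purely illustrative, since the defining pairing identity in \eqref{dualofyanok} is pure linear algebra) verifies that identity numerically:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dimV, dimK = 3, 2

# A hypothetical left action of k on V: each basis element of k acts
# on V through the matrices rho[i]; linearity in both slots is automatic.
rho = rng.normal(size=(dimK, dimV, dimV))

def act(k, v):               # k acting on v from the left
    return np.einsum('i,ijl,l->j', k, rho, v)

def dual_act(mu, k):         # the dual (right) action of k on V*
    return np.einsum('i,ijl,j->l', k, rho, mu)

mu, v = rng.normal(size=dimV), rng.normal(size=dimV)
k = rng.normal(size=dimK)

# Defining identity:  < mu *<- k , v >  =  < mu , k -> v >
assert np.isclose(dual_act(mu, k) @ v, mu @ act(k, v))
\end{verbatim}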
Secondly, introduce the dual (right) actions of $\downharpoonleft$ and $\downharpoonright$ in \eqref{actionsoflk} as \begin{equation} \label{dualofduzok} \begin{split} \overset{\ast}{\downharpoonleft} &: W^* \otimes \mathfrak{k} \rightarrow W^*, \qquad \langle \mu_{W^*} \overset{\ast}{\downharpoonleft} k,w \rangle=\langle \mu_{W^*}, k \downharpoonright w \rangle, \\ \overset{\ast}{\downharpoonright} &: V^* \otimes \mathfrak{l} \rightarrow V^*, \qquad \langle \mu_{V^*} \overset{\ast}{\downharpoonright} l,v \rangle=\langle \mu_{V^*}, l \downharpoonleft v \rangle, \end{split} \end{equation} respectively. Then, determine the dual actions of $\blacktriangleleft$ and $\blacktriangleright$ in \eqref{Lieact-k-l} as \begin{equation}\label{dualofkara} \begin{split} \overset{\ast}{\blacktriangleleft}&: \mathfrak{k} \otimes \mathfrak{l}^* \longrightarrow \mathfrak{l}^*, \qquad \langle k \overset{\ast}{\blacktriangleleft} \nu_{\mathfrak{l}^*}, l \rangle=\langle \nu_{\mathfrak{l}^*}, k \blacktriangleright l \rangle, \\ \overset{\ast}{\blacktriangleright}&: \mathfrak{l} \otimes \mathfrak{k}^* \longrightarrow \mathfrak{k}^*, \qquad \langle l \overset{\ast}{\blacktriangleright} \nu_{\mathfrak{k}^*}, k \rangle=\langle \nu_{\mathfrak{k}^*}, k \blacktriangleleft l \rangle, \end{split} \end{equation} respectively. We define the dual mappings of the 2-cocycles $\varphi$ and $\phi$ as \begin{equation}\label{dualofvp} \begin{split} \varphi^*_{l}&:V^* \longrightarrow \mathfrak{l}^*, \qquad \langle \varphi^*_{l} \mu_{V^*},l'\rangle=-\langle \mu_{V^*}, \varphi_{l} l' \rangle, \\ \phi^*_{k}&:W^* \longrightarrow \mathfrak{k}^*, \qquad \langle \phi^*_{k} \mu_{W^*},k'\rangle=-\langle \mu_{W^*}, \phi_{k} k' \rangle. \end{split} \end{equation} Lastly, exhibit the dual mappings of $\epsilon$ and $\iota$ in \eqref{epsiot} as \begin{equation}\label{dualofei} \begin{split} \epsilon^*_{k}&:V^* \longrightarrow \mathfrak{l}^*, \qquad \langle \epsilon^*_{k} \mu_{V^*},l \rangle=-\langle \mu_{V^*}, \epsilon_{k} l \rangle, \\ \iota^*_{k}&:W^* \longrightarrow \mathfrak{l}^*, \qquad \langle \iota^*_{k} \mu_{W^*},l \rangle=-\langle \mu_{W^*}, \iota_{k} l \rangle. \end{split} \end{equation} \textbf{Lie-Poisson equations.} According to the equations \eqref{centextLiepois2}, it suffices to define the dual mappings of the action ${\cdot\kern-.33em\triangleright}$ in \eqref{fifthact} and of $\Theta$ in \eqref{theta} in order to write the Lie-Poisson dynamics. For ${\cdot\kern-.33em\triangleright}$, by definition, we compute \begin{equation}\label{dualoffifth1} \begin{split} \langle (\mu_{V^*}\oplus \mu_{W^*})\,\overset{\ast}{{\cdot\kern-.44em\triangleleft}}\, (l\oplus k),(v\oplus w)\rangle &=\langle\mu_{V^*},l \downharpoonleft v+ k \curvearrowright v \rangle+\langle \mu_{W^*},-w \curvearrowleft l+k \downharpoonright w \rangle, \\ &=\langle \mu_{V^*}, l \downharpoonleft v \rangle+\langle \mu_{V^*}, k \curvearrowright v \rangle+\langle \mu_{W^*},-w \curvearrowleft l \rangle+ \langle \mu_{W^*}, k\downharpoonright w \rangle. \end{split} \end{equation} Then, we have \begin{equation} \label{dotactiondual} (\mu_{V^*}\oplus \mu_{W^*})~\overset{\ast}{{\cdot\kern-.44em\triangleleft}}~ (l\oplus k)= \big(\mu_{V^*} \overset{\ast}{\curvearrowleft} k+\mu_{V^*} \overset{\ast}{\downharpoonright} l \big) \oplus \big( l \overset{\ast}{\curvearrowright} \mu_{W^*}+\mu_{W^*} \overset{\ast}{\downharpoonleft} k \big).
\end{equation} For $\Theta$, we compute \begin{equation} \label{thetadual} \Theta^*_{(l\oplus k)}(\mu_{V^*}\oplus \mu_{W^*})=(\varphi^*_{l}\mu_{V^*}+\epsilon^*_{k}\mu_{V^*}-\iota^*_{k}\mu_{W^*}) \oplus (\phi^*_{k}\mu_{W^*}+\iota^*_{k}\mu_{W^*} -\epsilon^*_{k}\mu_{V^*}), \end{equation} where $\varphi^*_{l}$ and $\phi^*_{k}$ are defined in \eqref{dualofvp}, and $\epsilon^*_{k}$ and $\iota^*_{k}$ in \eqref{dualofei}. There are two more dual mappings needed for the left hand side of the equation \eqref{centextLiepois2}: $\mathfrak{b}^*_{(l\oplus k)}(\mu_{V^*}\oplus \mu_{W^*})$ and the coadjoint action of $(\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}})$ on the dual element $ (\nu_{\mathfrak{l}^*}\oplus \nu_{\mathfrak{k}^*})$. Since $\mathfrak{l}\bowtie \mathfrak{k}$ is a matched pair, we can employ the equation \eqref{ad-*} for it. Accordingly, we arrive at \begin{equation}\label{addual} ad^*_{(\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\oplus \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}})}(\nu_{\mathfrak{l}^*}\oplus \nu_{\mathfrak{k}^*}) =\big(ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}} \nu_{\mathfrak{l}^*}+\nu_{\mathfrak{l}^*} \overset{\ast}{\blacktriangleleft} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} + \nu_{\mathfrak{k}^*} \overset{\ast}{\blacktriangleright} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} \big) \oplus \big( ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}} \nu_{\mathfrak{k}^*}-\nu_{\mathfrak{l}^*} \overset{\ast}{\blacktriangleleft} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} - \nu_{\mathfrak{k}^*} \overset{\ast}{\blacktriangleright} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\big), \end{equation} where $\overset{\ast}{\blacktriangleleft}$ and $\overset{\ast}{\blacktriangleright} $ denote the dual actions in the equation \eqref{dualofkara}. Finally, using equation \eqref{b*}, we arrive at \begin{equation} \label{bdual} \mathfrak{b}^*_{(l\oplus k)}(\mu_{V^*}\oplus \mu_{W^*})=(\mu_{V^*}\overset{\ast}{\downharpoonright} l+ \mu_{V^*} \overset{\ast}{\curvearrowleft} k) \oplus (\mu_{W^*} \overset{\ast}{\downharpoonleft}k-\mu_{W^*}\overset{\ast}{\curvearrowright} l).
\end{equation} Therefore, according to the equations \eqref{dotactiondual}, \eqref{thetadual}, \eqref{addual} and \eqref{bdual}, the (plus/minus) Lie-Poisson equations governed by a Hamiltonian function $\mathcal{H}=\mathcal{H}((\mu_{V^*},\mu_{W^*}),(\nu_{\mathfrak{l}^*},\nu_{\mathfrak{k}^*}))$ are computed as \begin{equation}\label{Liepoismc1} \begin{split} \dot{\mu}_{V^*}&= \mu_{V^*} \overset{\ast}{\downharpoonright} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}+\mu_{V^*}\overset{\ast}{\curvearrowleft}\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}, \\ \dot{\mu}_{W^*} &= -\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}\overset{\ast}{\curvearrowright}\mu_{W^*} + \mu_{W^*} \overset{\ast}{\downharpoonleft} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}, \\ \dot{\nu}_{\mathfrak{l}^*}&= \varphi^*_{l}\mu_{V^*}+\epsilon^*_{k}\mu_{V^*}-\iota^*_{k}\mu_{W^*}+ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}} \nu_{\mathfrak{l}^*}+\nu_{\mathfrak{l}^*} \overset{\ast}{\blacktriangleleft} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}} + \nu_{\mathfrak{k}^*} \overset{\ast}{\blacktriangleright} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}+\mu_{V^*}\overset{\ast}{\downharpoonright} l+ \mu_{V^*} \overset{\ast}{\curvearrowleft} k, \\ \dot{\nu}_{\mathfrak{k}^*}&= \phi^*_{k}\mu_{W^*}+\iota^*_{k}\mu_{W^*}-\epsilon^*_{k}\mu_{V^*}+ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}} \nu_{\mathfrak{k}^*}-\nu_{\mathfrak{l}^*} \overset{\ast}{\blacktriangleleft} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}} - \nu_{\mathfrak{k}^*} \overset{\ast}{\blacktriangleright} \frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}+\mu_{W^*} \overset{\ast}{\downharpoonleft}k-\mu_{W^*}\overset{\ast}{\curvearrowright} l. \end{split} \end{equation} Here and below, the elements $l$ and $k$ appearing in the dual mappings stand for $\delta \mathcal{H}/\delta \nu_{\mathfrak{l}^*}$ and $\delta \mathcal{H}/\delta \nu_{\mathfrak{k}^*}$, respectively. \textbf{Particular case:} Now, we examine how the Lie-Poisson equations look for the particular case given in \eqref{centrallyextend}. Let us briefly recall: we took the left actions $\downharpoonleft $ and $\downharpoonright$ in \eqref{actionsoflk}, the mutual actions of $\G{k}$ and $\G{l}$ in \eqref{Lieact-k-l}, and the cross representations in \eqref{actionsofkl} to be trivial. Since the calculations for these choices were carried out in the previous section, their effects on the Lie-Poisson equations \eqref{Liepoismc1} can be read off directly. Observe that both $\dot{\mu}_{V^*}$ and $\dot{\mu}_{W^*}$ vanish, and \begin{equation} \begin{split} \dot{\nu}_{\mathfrak{l}^*}&=\varphi^*_{l}\mu_{V^*}+\epsilon^*_{k}\mu_{V^*}-\iota^*_{k}\mu_{W^*}+ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{l}^*}}} \nu_{\mathfrak{l}^*}, \\ \dot{\nu}_{\mathfrak{k}^*}&=\phi^*_{k}\mu_{W^*}+\iota^*_{k}\mu_{W^*}-\epsilon^*_{k}\mu_{V^*}+ad^*_{\frac{\delta \mathcal{H}}{\delta \nu_{\mathfrak{k}^*}}} \nu_{\mathfrak{k}^*}. \end{split} \end{equation} \section{Couplings of Dissipative Systems} \label{Sec-C-DS} In the present section, we consider two Lie algebras $\G{g}$ and $\G{h}$ under the mutual actions \eqref{matched-pair-mutual-actions} satisfying the compatibility conditions in \eqref{matched-pair-mutual-actions}, so that we have a well-defined matched pair Lie algebra $\mathfrak{g}\bowtie\mathfrak{h}$ equipped with the matched pair Lie bracket \eqref{mpla}. As explained in Subsection \ref{Sec-MP-LP}, on the dual space $\mathfrak{g}^{\ast}\oplus\mathfrak{h}^{\ast}$ there exists the matched Lie-Poisson bracket $\{\bullet, \bullet\}_{\bowtie}$ displayed in \eqref{LiePoissonongh}.
This Poisson structure allows us to arrive at the matched pair Lie-Poisson equations \eqref{LPEgh} governing the collective motion of the individual Lie-Poisson dynamics on $\G{g}^*$ and $\G{h}^*$. Throughout this section, we follow this construction. After discussing the coupling of Rayleigh type dissipation in the following subsection, we shall follow the order given in Subsection \ref{Sec-Sym-Bra} and examine the coupling problem for symmetric brackets. \subsection{Rayleigh type Dissipation} In \eqref{RD-Eqn}, Rayleigh type dissipation is introduced to the Lie-Poisson system by means of the coadjoint action and a linear operator from the dual space to the Lie algebra. In this subsection, we provide a way to couple two Lie-Poisson dynamics admitting Rayleigh type dissipative terms. For this, we first determine the dynamics of the constitutive systems. Assume that, on the dual space $\G{g}^*$, Rayleigh type dissipation for the Lie-Poisson dynamics is provided by a linear operator $\Upsilon^\G{g}:\G{g}^*\rightarrow \G{g}$, that is, \begin{equation}\label{RD-Eqn-1} \dot{\mu}\mp ad^*_{\frac{\delta \mathcal{H}}{ \delta \mu}} \mu = \mp ad^*_{\Upsilon^\G{g}(\mu)} \mu. \end{equation} Assume also that, on $\G{h}^*$, Rayleigh type dissipation for the Lie-Poisson dynamics is provided by a linear operator $\Upsilon^\G{h}:\G{h}^*\rightarrow \G{h}$, so that \begin{equation}\label{RD-Eqn-2} \dot{\nu}\mp ad^*_{\frac{\delta \mathcal{H}}{ \delta \nu}} \nu = \mp ad^*_{\Upsilon^\G{h}(\nu)} \nu. \end{equation} To couple the dynamics in \eqref{RD-Eqn-1} and \eqref{RD-Eqn-2}, we introduce a linear operator from the dual space $\mathfrak{g}^{\ast}\oplus\mathfrak{h}^{\ast}$ to the matched pair Lie algebra $\mathfrak{g}\bowtie\mathfrak{h}$ given by \begin{equation} \mathfrak{g}^*\oplus\mathfrak{h}^* \longrightarrow \mathfrak{g} \oplus\mathfrak{h},\qquad (\mu\oplus \nu) \mapsto (\Upsilon^{\G{g}}(\mu)\oplus \Upsilon^{\G{h}}(\nu)), \end{equation} where $\Upsilon^{\G{g}}$ and $\Upsilon^{\G{h}}$ are the linear mappings in \eqref{RD-Eqn-1} and \eqref{RD-Eqn-2} generating the dissipation of the individual systems. The dissipative term generated by this mapping is computed to be \begin{equation} \label{Rayleigh} \mp ad^{*}_{\Upsilon^{\G{g}}(\mu) \oplus \Upsilon^{\G{h}}(\nu)}(\mu \oplus \nu)=\big(\mp ad^{*}_{\Upsilon^{\G{g}}(\mu)}\mu \pm \mu \overset{\ast }{\vartriangleleft }\Upsilon^{\G{h}}(\nu)\mp \mathfrak{a}^{*}_{\Upsilon^{\G{h}}(\nu)}\nu \big) \oplus \big(\mp ad^{*}_{\Upsilon^{\G{h}}(\nu)}\nu \pm \Upsilon^{\G{g}}(\mu) \overset{\ast }{\vartriangleright } \nu\mp \mathfrak{b}^{*}_{\Upsilon^{\G{g}}(\mu)}\mu\big), \end{equation} where the dual actions $\overset{\ast }{\vartriangleleft }$ and $\overset{\ast }{\vartriangleright }$ are those given in \eqref{eta-star} and \eqref{xi-star}, respectively, and the cross actions $\mathfrak{a}^{*}$ and $\mathfrak{b}^{*}$ are the ones in \eqref{a*} and \eqref{b*}, respectively. Observe that, while coupling the dissipative terms in \eqref{Rayleigh}, we respect the mutual actions, so that the collective dissipative term manifests them; it reduces to the direct sum of the dissipative terms of the individual motions if the actions are trivial. Obeying the general construction in \eqref{RD-Eqn}, we merge the dissipative terms in \eqref{Rayleigh} with the matched pair Lie-Poisson equations \eqref{LPEgh}.
This yields the coupled system \begin{equation} \label{eqofmoofRay} \begin{split} &\dot{\mu} \mp ad^{*}_{\delta \mathcal{H}/ \delta \mu}\mu\pm \mu \overset{\ast }{\vartriangleleft } \frac{\delta \mathcal{H}}{\delta \nu} \mp \mathfrak{a}^{*}_{\delta \mathcal{H} / \delta \nu} \nu = \mp ad^{*}_{\Upsilon^{\G{g}}(\mu)}\mu \pm \mu \overset{\ast }{\vartriangleleft }\Upsilon^{\G{h}}(\nu) \mp \mathfrak{a}^{*}_{\Upsilon^{\G{h}}(\nu)}\nu, \\ &\dot{\nu} \mp ad^{*}_{\delta \mathcal{H}/ \delta \nu}\nu \pm \frac{\delta \mathcal{H}}{\delta \mu} \overset{\ast }{\vartriangleright } \nu \mp \mathfrak{b}^{*}_{\delta \mathcal{H} / \delta \mu} \mu= \mp ad^{*}_{\Upsilon^{\G{h}}(\nu)}\nu \pm \Upsilon^{\G{g}}(\mu) \overset{\ast }{\vartriangleright }\nu \mp \mathfrak{b}^{*}_{\Upsilon^{\G{g}}(\mu)}\mu. \end{split} \end{equation} It is evident that this formulation respects the mutual actions. By taking one of these actions to be trivial, one arrives at the semidirect product theory for the Lie-Poisson system with Rayleigh type dissipation. If both of the actions are trivial, it is immediate to see that the system \eqref{eqofmoofRay} turns into a simple collection of the individual motions in \eqref{RD-Eqn-1} and \eqref{RD-Eqn-2}. \subsection{Matched Double Bracket} Recall that, in \eqref{doubledissi}, we presented the double bracket in terms of the structure constants of a Lie algebra. Thus, to have a symmetric bracket on the matched Lie-Poisson geometry, we first recall the structure constants of the matched pair Lie algebra given in \eqref{sc-nonbracket-mp}. Then, referring to the coordinate realization of the matched pair Lie-Poisson bracket in \eqref{Lie-pois-double-non-bracket}, we compute the associated Poisson bivector $\Lambda$ as \begin{equation}\label{cosym2} \Lambda_{\alpha \beta}=\pm C_{\beta \alpha}^\gamma \mu_\gamma, \qquad \Lambda_{\alpha b}=\mp R_{b \alpha}^d \nu_d \mp L_{b \alpha}^\gamma \mu_\gamma, \qquad \Lambda_{a \beta }= \pm R_{ \beta a}^d \nu_d \pm L_{ \beta a}^\gamma \mu_\gamma, \qquad \Lambda_{a b }=\pm D_{ a b}^d \nu_d, \end{equation} where the $C_{\beta \alpha}^\gamma$'s are the structure constants of $\G{g}$ and the $D_{ a b}^d$'s are the structure constants of $\G{h}$. Here, the $R_{b \alpha}^d$'s and $L_{b \alpha}^\gamma $'s are the constants defining the right and the left actions according to the exhibitions in \eqref{local-act}, respectively. In accordance with these coordinate realizations, and in the light of the definition \eqref{doubledissi}, the matched double bracket $(\mathcal{F},\mathcal{S})^{(mD)}$ of two functions $ \mathcal{F} $ and $ \mathcal{S} $ defined on $ \mathfrak{g}^* \oplus \mathfrak{h}^* $ is \begin{equation}\label{matchdouble} \begin{split} (\mathcal{F},\mathcal{S})^{(mD)}(\mu , \nu)=&(\sum_{b}\Lambda_{\alpha b} \Lambda_{ \beta b }+\sum_{\gamma}\Lambda_{\alpha \gamma} \Lambda_{ \beta \gamma })\frac{\partial \mathcal{F}}{\partial \mu_\alpha}\frac{\partial \mathcal{S}}{\partial \mu_\beta} + (\sum_{b}\Lambda_{a b} \Lambda_{ \alpha b }+\sum_{\gamma}\Lambda_{a \gamma} \Lambda_{ \alpha \gamma })\frac{\partial \mathcal{F}}{\partial \nu_a}\frac{\partial \mathcal{S}}{\partial \mu_\alpha } \\ & +(\sum_{a}\Lambda_{b a} \Lambda_{ c a }+\sum_{\beta}\Lambda_{b \beta} \Lambda_{c \beta })\frac{\partial \mathcal{F}}{\partial \nu_b}\frac{\partial \mathcal{S}}{\partial \nu_c} + (\sum_{\beta}\Lambda_{\alpha \beta} \Lambda_{ b \beta } + \sum_{a}\Lambda_{\alpha a} \Lambda_{ b a })\frac{\partial \mathcal{F}}{\partial \mu_\alpha}\frac{\partial \mathcal{S}}{\partial \nu_b}.
\end{split} \end{equation} Referring to this bracket, the dissipative dynamics \eqref{diss-dyn-eq} for $a=1$, generated by a functional $\mathcal{S}$, is computed to be \begin{equation}\label{eqofmoofmatchdouble} \begin{split} &\dot{\mu}_\beta=(\sum_{b}\Lambda_{\beta b}\Lambda_{\alpha b}+\sum_{\gamma}\Lambda_{\beta \gamma}\Lambda_{ \alpha \gamma })\frac{\partial \mathcal{S}}{\partial \mu_\alpha} +(\sum_{\alpha}\Lambda_{\beta \alpha} \Lambda_{ a \alpha }+\sum_{b}\Lambda_{\beta b} \Lambda_{ a b })\frac{\partial \mathcal{S}}{\partial \nu_a}, \\ &\dot{\nu}_d=(\sum_{b}\Lambda_{d b} \Lambda_{ \alpha b }+ \sum_{\gamma}\Lambda_{d \gamma} \Lambda_{ \alpha \gamma })\frac{\partial \mathcal{S}}{\partial \mu_\alpha} + (\sum_{a}\Lambda_{d a} \Lambda_{ n a } +\sum_{\alpha}\Lambda_{d \alpha} \Lambda_{n \alpha })\frac{\partial \mathcal{S}}{\partial \nu_n}. \end{split} \end{equation} In order to arrive at the explicit expressions of the symmetric bracket \eqref{matchdouble} and of the dissipative dynamics \eqref{eqofmoofmatchdouble} in terms of the local characterizations of the left and right actions and the structure constants of the constitutive subalgebras, one needs to substitute the calculations \eqref{cosym2} into \eqref{matchdouble} and \eqref{eqofmoofmatchdouble}. Now, we add the matched Lie-Poisson bracket $\{\bullet,\bullet\}_{\bowtie}$, given in \eqref{Lie-pois-double-non-bracket-mp}, and the matched double bracket $(\bullet,\bullet)^{(mD)}$ in \eqref{matchdouble}. This yields the matched metriplectic bracket. The matched metriplectic dynamics generated by a Hamiltonian function $\C{H}$ and an entropy type function $\C{S}$ is computed to be \begin{equation}\label{mMD-DB} \dot{\mathbf{z}}=[ \left\vert \mathbf{z},\mathcal{H}\right\vert ]_{\bowtie,D}= \{\mathbf{z},\mathcal{H}\}_{\bowtie}+a (\mathbf{z},\mathcal{S})^{(mD)}. \end{equation} In order to arrive at the explicit expression of the equations of motion in \eqref{mMD-DB}, it is enough to add the reversible matched pair dynamics in \eqref{Lie-pois-nonbracket-dyn-mp} and the irreversible matched pair dynamics in \eqref{eqofmoofmatchdouble}. By taking one of the actions trivial, one arrives at the semidirect product metriplectic bracket and the semidirect product dynamical equation. If both of the actions are trivial, then the coupling turns out to be a simple addition. \subsection{Matched Cartan-Killing Bracket} \label{cartan} Once more, we recall the structure constants \eqref{sc-nonbracket-mp} of the matched pair Lie algebra. Referring to \eqref{CK}, we first compute the components $\bar{\mathcal{G}}_{\alpha \beta}$, $\bar{\mathcal{G}}_{\alpha b}$, $\bar{\mathcal{G}}_{a \beta}$ and $\bar{\mathcal{G}}_{ab}$ of the matched pair Cartan metric as \begin{equation}\label{CM-MP} \begin{split} \bar{\mathcal{G}}_{\alpha b}&=-R_{a \alpha }^{d}D_{bd}^{a} +L_{a \alpha}^{\beta}R_{b \beta}^{a} +C_{\alpha \beta}^{\gamma}L_{b \gamma}^{\beta}, \qquad \bar{\mathcal{G}}_{ab}=L_{a \alpha }^{\beta}L_{b \beta }^{\alpha} +D_{ad}^{k}D_{b k}^{d}, \\ \bar{\mathcal{G}}_{a \beta}&=L_{a \epsilon }^{\gamma}C_{ \beta \gamma}^{\epsilon} -D_{ ab}^{d}R_{ d \beta}^{b} +R_{a \gamma }^{d}L_{d \beta}^{\gamma} , \qquad \bar{\mathcal{G}}_{\alpha \beta} =R_{a \alpha }^{b}R_{ b \beta }^{a}+C_{\alpha \gamma}^{\epsilon}C_{\beta \epsilon}^{\gamma}, \end{split} \end{equation} on the matched pair Lie algebra $\mathfrak{g} \oplus \mathfrak{h }$. We write an element of $ \mathfrak{g}^* \oplus \mathfrak{h}^* $ as $ (\mu,\nu)=\mu_{\alpha}\bar{\mathbf{e}}^{\alpha}+\nu_{a}\bar{\mathbf{e}}^a$.
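For concrete finite-dimensional matched pairs, the blocks in \eqref{CM-MP} are plain index contractions and can be assembled mechanically. A minimal numpy sketch of the two diagonal blocks (the off-diagonal blocks are entirely analogous), under the assumption that the structure and action constants are stored as arrays with the index conventions indicated in the comments; the array names and the random data are purely illustrative:
\begin{verbatim}
import numpy as np

# Hypothetical constants, stored as
#   C[i,j,k] = C_ij^k  (bracket on g),   D[a,b,d] = D_ab^d  (bracket on h),
#   L[a,i,j] = L_{a i}^j (left action),  R[a,i,b] = R_{a i}^b (right action).
rng = np.random.default_rng(1)
dim_g, dim_h = 3, 3
C = rng.normal(size=(dim_g, dim_g, dim_g))
D = rng.normal(size=(dim_h, dim_h, dim_h))
L = rng.normal(size=(dim_h, dim_g, dim_g))
R = rng.normal(size=(dim_h, dim_g, dim_h))

# G_ab = L_{a alpha}^beta L_{b beta}^alpha + D_{ad}^k D_{bk}^d
G_hh = np.einsum('axy,byx->ab', L, L) + np.einsum('adk,bkd->ab', D, D)

# G_{alpha beta} = R_{a alpha}^b R_{b beta}^a + C_{alpha gam}^eps C_{beta eps}^gam
G_gg = np.einsum('axb,bya->xy', R, R) + np.einsum('xge,yeg->xy', C, C)
\end{verbatim}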
We compute the matched pair Cartan-Killing bracket, defined in \eqref{loc-car}, as \begin{equation} \label{Gen-Killing} \begin{split} (\C{F},\C{H})^{(mCK)} = & \frac{\partial \C{F}}{\partial \mu_{\alpha}}\bar{\mathcal{G}}_{\alpha \beta} \frac{\partial \C{H}}{\partial \mu_{\beta}} + \frac{\partial \C{F}}{\partial \mu_{\alpha}}\bar{\mathcal{G}}_{\alpha b} \frac{\partial \C{H}}{\partial \nu_{b}} + \frac{\partial \C{F}}{\partial \nu_{a}}\bar{\mathcal{G}}_{a \beta} \frac{\partial \C{H}}{\partial \mu_{\beta}} + \frac{\partial \C{F}}{\partial \nu_{a}}\bar{\mathcal{G}}_{a b} \frac{\partial \C{H}}{\partial \nu_{b}} \\ =& (R_{\alpha a}^{b}R_{ \beta b}^{a}+C_{\alpha \gamma}^{\epsilon}C_{\beta \epsilon}^{\gamma}) \frac{\partial \C{F}}{\partial \mu_{\alpha}} \frac{\partial \C{H}}{\partial \mu_{\beta}} + (-R_{a \alpha }^{d}D_{bd}^{a} +L_{a \alpha}^{\beta}R_{b \beta}^{a} +C_{\alpha \beta}^{\gamma}L_{\gamma b}^{\beta}) \frac{\partial \C{F}}{\partial \mu_{\alpha}} \frac{\partial \C{H}}{\partial \nu_{b}} \\ & + (R_{\gamma a}^{d}L_{d \beta}^{\gamma}+L_{\alpha a }^{\gamma}C_{ \beta \gamma}^{\alpha} -D_{ ab}^{d}R_{ d \beta}^{b}) \frac{\partial \C{F}}{\partial \nu_{a}} \frac{\partial \C{H}}{\partial \mu_{\beta}} + (L_{\alpha a }^{\beta}L_{\beta b}^{\alpha} +D_{ad}^{k}D_{b k}^{d}) \frac{\partial \C{F}}{\partial \nu_{a}} \frac{\partial \C{H}}{\partial \nu_{b}}. \end{split} \end{equation} According to the formulation \eqref{Gen-Killing}, for a functional $\mathcal{S}$ the equations of motion read \begin{equation} \label{eqofmoofcart} \dot{\mu}_\beta = \bar{\mathcal{G}}_{\beta a}\frac{\partial \mathcal{S}}{\partial \nu_a}+\bar{\mathcal{G}}_{\beta \alpha}\frac{\partial \mathcal{S}}{\partial \mu_\alpha}, \qquad \dot{\nu}_d = \bar{\mathcal{G}}_{d \beta}\frac{\partial \mathcal{S}}{\partial \mu_\beta}+\bar{\mathcal{G}}_{d a}\frac{\partial \mathcal{S}}{\partial \nu_a}. \end{equation} We substitute the explicit representations of the metric \eqref{CM-MP} into the system \eqref{eqofmoofcart} and arrive at \begin{equation} \label{eqofmoofcart-gen} \begin{split} \dot{\mu}_\beta &= (-R_{a \beta }^{d}D_{bd}^{a} +L_{a \beta}^{\alpha}R_{b \alpha}^{a} +C_{\beta \alpha}^{\gamma}L_{\gamma b}^{\alpha}) \frac{\partial \mathcal{S}}{\partial \nu_b} + (R_{\alpha a}^{b}R_{ \beta b}^{a}+C_{\alpha \gamma}^{\epsilon}C_{\beta \epsilon}^{\gamma}) \frac{\partial \mathcal{S}}{\partial \mu_\alpha}, \\ \dot{\nu}_d &= (R_{\gamma d}^{a}L_{a \beta}^{\gamma}+L_{\alpha d }^{\gamma}C_{ \beta \gamma}^{\alpha} -D_{ db}^{a}R_{ a \beta}^{b} ) \frac{\partial \mathcal{S}}{\partial \mu_\beta} +( L_{\alpha a }^{\beta}L_{\beta d}^{\alpha} +D_{ab}^{k}D_{d k}^{b}) \frac{\partial \mathcal{S}}{\partial \nu_a}. \end{split} \end{equation} \subsection{Matched Casimir Dissipation Bracket} Recall the Casimir dissipation bracket given in \eqref{ccs}. In order to carry this discussion to coupled systems on $\mathfrak{g}^*\times \mathfrak{h}^*$, as can be deduced from that equation, we first need to determine a real valued bilinear operator on the matched pair Lie algebra $\mathfrak{g}\bowtie \mathfrak{h}$ and then employ the pairing equipped with a Casimir function(al). Let us first determine the dissipations individually on $\mathfrak{g}^*$ and $\mathfrak{h}^*$ and then couple them. Consider symmetric bilinear operators on $\mathfrak{g}$ and $\mathfrak{h}$, denoted by $\psi $ and $\vartheta $, respectively. Assume that $\C{C}$ is a Casimir function on $\mathfrak{g}^{\ast }$ and $\mathcal{D}$ is a Casimir function on $\mathfrak{h}^{\ast }$.
Then, the Casimir dissipation brackets are \begin{equation}\label{casimir} \begin{split} (\mathcal{F},\mathcal{H})^{(CD)}_{\mathfrak{g}^*}\left( \mu \right) & =-\psi \left( \left[ \frac{\delta \mathcal{F}}{\delta \mu },\frac{\delta \mathcal{H}}{\delta \mu }\right] _{\mathfrak{g}},\left[ \frac{\delta \mathcal{C}}{\delta \mu },\frac{\delta \mathcal{H}}{\delta \mu }\right] _{\mathfrak{g}}\right), \\ (\mathcal{F},\mathcal{H})^{(CD)}_{\mathfrak{h}^*}\left( \nu \right) &=-\vartheta \left( \left[ \frac{\delta \mathcal{F}}{\delta \nu },\frac{\delta \mathcal{H}}{\delta \nu }\right] _{ \mathfrak{h}},\left[ \frac{\delta \mathcal{D}}{\delta \nu },\frac{\delta \mathcal{H}}{\delta \nu }\right] _{\mathfrak{h}}\right). \end{split} \end{equation} Define a real valued symmetric bilinear operator on $\mathfrak{g}\oplus \mathfrak{h}$, using $\psi $ and $\vartheta $, as follows: \begin{equation}\label{sym-opp} \left( \psi ,\vartheta \right) :(\mathfrak{g}\bowtie \mathfrak{h})\times (\mathfrak{g}\bowtie \mathfrak{h})\longrightarrow \mathbb{R}, \qquad ( \xi \oplus \eta ,\xi'\oplus \eta' ) \mapsto \psi \left( \xi ,\xi' \right) +\vartheta \left( \eta ,\eta' \right) . \end{equation} In terms of the Casimir functions $\C{C}$ and $\C{D} $ on $\mathfrak{g}^*$ and $\mathfrak{h}^*$, respectively, we define a Casimir function $( \C{C},\C{D} )$ on $\mathfrak{g}^*\times \mathfrak{h}^*$, for example, as follows: \begin{equation} ( \C{C},\C{D} )(\mu,\nu)=\C{C}(\mu)+ \C{D}(\nu). \end{equation} Thus, the matched Casimir dissipation bracket is defined to be \begin{equation}\label{mCD-Bra} \begin{split} (\mathcal{F},\mathcal{H})^{(mCD)}(\mu\oplus\nu)&=-\left( \psi ,\vartheta \right) \left( \left[ \left( \frac{\delta \mathcal{F}}{\delta \mu }\oplus\frac{ \delta \mathcal{F}}{\delta \nu }\right) ,\left( \frac{\delta \mathcal{H}}{\delta \mu }\oplus\frac{ \delta \mathcal{H}}{\delta \nu }\right) \right]_{\bowtie} , \left[ \left( \frac{\delta \mathcal{C}}{\delta \mu }\oplus \frac{\delta \mathcal{D}}{\delta \nu }\right) ,\left( \frac{\delta \mathcal{H}}{\delta \mu }\oplus \frac{\delta \mathcal{H}}{\delta \nu }\right) \right]_{\bowtie} \right), \end{split} \end{equation} where $[\bullet,\bullet]_{\bowtie}$ is the matched Lie algebra bracket in \eqref{mpla}. Referring to this bracket, the dissipative dynamics \eqref{diss-dyn-eq} for $a=1$, generated by a functional $\mathcal{H}$, is a system of equations.
The dynamics on $\G{g}^*$ is \begin{equation}\label{mCD-eq-1} \begin{split} \dot{\mu} &=-ad_{\frac{\delta \mathcal{H}}{\delta \mu }}^{\ast }\big[ \frac{\delta \mathcal{C}}{\delta \mu },\frac{\delta \mathcal{H}}{\delta \mu }\big] ^{\flat}-ad_{\frac{\delta \mathcal{H}}{\delta \mu }}^{\ast }\big( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleright \frac{ \delta \mathcal{H}}{\delta \mu }\big) ^{\flat}+ad_{\frac{\delta \mathcal{H}}{\delta \mu }}^{\ast }\big( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{C}}{\delta \mu }\big) ^{\flat} -\big[\frac{\delta \mathcal{C}}{\delta \mu },\frac{\delta \mathcal{H}}{\delta \mu }\big]^{\flat}\overset{ \ast }{\vartriangleleft }\frac{\delta \mathcal{H}}{\delta \nu }\\ &-\big[\frac{\delta \mathcal{D}}{ \delta \nu }\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu }\big]^{\flat}\overset{\ast }{\vartriangleleft }\frac{\delta \mathcal{H}}{\delta \nu }+\big[\frac{\delta \mathcal{H}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{C}}{\delta \mu }\big]^{\flat}\overset{\ast }{ \vartriangleleft }\frac{\delta \mathcal{H}}{\delta \nu } -\mathfrak{a}_{\frac{\delta \mathcal{H}}{\delta \nu }}^{\ast }\big[\frac{\delta \mathcal{D}}{ \delta \nu },\frac{\delta \mathcal{H}}{\delta \nu }\big]^{\flat}-\mathfrak{a}_{\frac{\delta \mathcal{H}}{ \delta \nu }}^{\ast } \big( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }\big) ^{\flat}-\mathfrak{a}_{\frac{\delta \mathcal{H}}{ \delta \nu }}^{\ast }\big( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }\big) ^{\flat} \end{split} \end{equation} whereas the dynamics on $\G{h}^*$ is \begin{equation}\label{mCD-eq-2} \begin{split} \dot{\nu} &=-ad_{\frac{\delta \mathcal{H}}{\delta \nu }}^{\ast }\big[\frac{\delta \mathcal{D}}{\delta \nu }, \frac{\delta \mathcal{H}}{\delta \nu }\big]^{\flat}-ad_{\frac{\delta \mathcal{H}}{\delta \nu }}^{\ast }\big( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }\big) ^{\flat}+ad_{\frac{\delta \mathcal{H}}{\delta \nu }}^{\ast }\big( \frac{ \delta \mathcal{H}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }\big) ^{\flat} -\frac{\delta \mathcal{H}}{\delta \mu }\overset{\ast }{\vartriangleright }[\frac{ \delta \mathcal{D}}{\delta \nu },\frac{\delta \mathcal{H}}{\delta \nu }]^{\flat} \\ &-\frac{\delta \mathcal{H}}{ \delta \mu }\overset{\ast }{\vartriangleright }\big[\frac{\delta \mathcal{D}}{\delta \nu } \vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }\big]^{\flat}+\frac{\delta \mathcal{H}}{\delta \mu }\overset{\ast }{\vartriangleright }\big[\frac{\delta \mathcal{H}}{\delta \nu } \vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }\big]^{\flat} -\mathfrak{b}_{\frac{\delta \mathcal{H}}{\delta \mu }}^{\ast }\big[\frac{\delta \mathcal{C}}{ \delta \mu },\frac{\delta \mathcal{H}}{\delta \mu }\big]^{\flat}-\mathfrak{b}_{\frac{\delta \mathcal{H}}{ \delta \mu }}^{\ast }\big( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu }\big) ^{\flat}+\mathfrak{b}_{\frac{\delta \mathcal{H}}{ \delta \mu }}^{\ast }\big( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{C}}{\delta \mu }\big) ^{\flat}. 
\end{split} \end{equation} Here, the left action $\vartriangleright$ and the right action $\vartriangleleft$ are those in \eqref{matched-pair-mutual-actions}, whereas $\overset{\ast }{ \vartriangleright}$ and $\overset{\ast }{\vartriangleleft }$ are the dual actions in \eqref{eta-star} and \eqref{xi-star}, respectively. The dual operators $\mathfrak{a}^*$ and $\mathfrak{b}^*$ are those in \eqref{a*} and \eqref{b*}, respectively. The superscript $\flat$ denotes the dualization obtained through the symmetric operators $\psi$ and $\vartheta$, given by \begin{equation} \langle \xi ^{\flat },\xi' \rangle =\psi ( \xi ,\xi'), \qquad \langle \eta ^{\flat },\eta' \rangle =\vartheta ( \eta ,\eta'). \end{equation} To avoid notational inflation, we denote these two mappings by the same symbol, just as we did for the Lie algebra brackets on $\G{g}$ and $\G{h}$. We can couple the matched irreversible motion (that is, the matched Casimir dissipation motion) in \eqref{mCD-eq-1} and \eqref{mCD-eq-2} with the matched reversible motion (that is, the matched Lie-Poisson dynamics) in \eqref{LPEgh}. This results in a matched metriplectic system involving Casimir dissipation terms, achieved simply by adding the right hand sides of the systems in the proper order. This collective motion can be determined by a single matched metriplectic bracket \begin{equation*} \lbrack \left\vert \mathcal{F},\mathcal{H}\right\vert ]_{\bowtie,CD}= \{\mathcal{F},\mathcal{H}\}_{\bowtie}+a(\mathcal{F},\mathcal{H})^{(mCD)}, \end{equation*} where $\{\bullet,\bullet\}_{\bowtie}$ is the matched Lie-Poisson bracket in \eqref{LiePoissonongh} and $(\bullet,\bullet)^{(mCD)}$ is the matched Casimir dissipation bracket in \eqref{mCD-Bra}. In this case, the dynamics governed by a Hamiltonian function(al) $\C{H}$ is implicitly written as \begin{equation*} (\dot{\mu}\oplus \dot{\nu})=\lbrack \left\vert (\mu\oplus\nu),\mathcal{H}\right\vert ]_{\bowtie,CD}. \end{equation*} \subsection{Matched Hamilton Dissipation Bracket} First, recall the Hamilton dissipation bracket given in \eqref{HD-sym} and the pure irreversible motion in \eqref{HDB-Dyn}. In this subsection, we couple (match) two Hamilton dissipation brackets of the form \eqref{HD-sym} and two pure irreversible motions of the form \eqref{HDB-Dyn}. Accordingly, obeying the notation presented in the previous subsection, we introduce the following Hamilton dissipation brackets on the constitutive spaces $\mathfrak{g}^*$ and $\mathfrak{h}^*$, for two bilinear operators $\psi$ and $\vartheta$: \begin{equation} \begin{split} (\mathcal{F},\mathcal{H})^{HD}_{\mathfrak{g}}\left( \mu \right) & =-\psi \left( \left[ \frac{\delta \mathcal{F}}{\delta \mu },\frac{\delta \mathcal{C}}{\delta \mu }\right]_{\mathfrak{g}},\left[ \frac{\delta \mathcal{H}}{\delta \mu },\frac{\delta \mathcal{C}}{\delta \mu }\right] _{\mathfrak{g}}\right), \\ (\mathcal{F},\mathcal{H})^{HD}_{\mathfrak{h}}\left( \nu \right) & =-\vartheta \left( \left[ \frac{\delta \mathcal{F}}{\delta \nu },\frac{\delta \mathcal{D}}{\delta \nu }\right] _{ \mathfrak{h}},\left[ \frac{\delta \mathcal{H}}{\delta \nu },\frac{\delta \mathcal{D}}{\delta \nu }\right] _{\mathfrak{h}}\right), \end{split} \end{equation} where $\mathcal{C}$ and $\mathcal{D}$ are Casimir functions on $\mathfrak{g}^*$ and $\mathfrak{h}^*$, respectively. In order to match these symmetric brackets, recall the real valued bilinear map \eqref{sym-opp} defined on the matched pair Lie algebra $\mathfrak{g} \bowtie \mathfrak{h}$.
Then, introduce the matched Hamilton dissipation bracket \begin{equation} \label{macthedhamilton} (\mathcal{F},\mathcal{H})^{(mHD)}(\mu\oplus \nu)=-\left( \psi ,\vartheta \right) \left( \left[ \left( \frac{\delta \mathcal{F}}{\delta \mu }\oplus\frac{ \delta \mathcal{F}}{\delta \nu }\right) ,\left( \frac{\delta \mathcal{C}}{\delta \mu }\oplus\frac{ \delta \mathcal{D}}{\delta \nu }\right) \right] ,\left[ \left( \frac{\delta \mathcal{H}}{\delta \mu }\oplus\frac{\delta \mathcal{H}}{\delta \nu }\right) ,\left( \frac{\delta \mathcal{C}}{\delta \mu }\oplus\frac{\delta \mathcal{D}}{\delta \nu }\right) \right] \right), \end{equation} where the brackets inside the pairing are the matched Lie bracket in \eqref{mpla}. The irreversible dynamics on $\mathfrak{g}^*\times \mathfrak{h}^*$ can then be obtained as \begin{equation} \begin{split} \dot{\mu} &=-ad_{\frac{\delta \mathcal{C}}{\delta \mu }}^{\ast }\left[ \frac{\delta \mathcal{H}}{\delta \mu },\frac{\delta \mathcal{C}}{\delta \mu }\right] ^{\flat}-ad_{\frac{\delta \mathcal{C}}{\delta \mu }}^{\ast }\left( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleright \frac{ \delta \mathcal{C}}{\delta \mu }\right) ^{\flat}+ad_{\frac{\delta \mathcal{C}}{\delta \mu }}^{\ast }\left( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu }\right) ^{\flat} -[\frac{\delta \mathcal{H}}{\delta \mu },\frac{\delta \mathcal{C}}{\delta \mu }]^{\flat}\overset{ \ast }{\vartriangleleft }\frac{\delta \mathcal{D}}{\delta \nu } \\ &-[\frac{\delta \mathcal{H}}{ \delta \nu }\vartriangleright \frac{\delta \mathcal{C}}{\delta \mu }]^{\flat}\overset{\ast }{\vartriangleleft }\frac{\delta \mathcal{D}}{\delta \nu }+[\frac{\delta \mathcal{D}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu }]^{\flat}\overset{\ast }{ \vartriangleleft }\frac{\delta \mathcal{D}}{\delta \nu } -\mathfrak{a}_{\frac{\delta \mathcal{D}}{\delta \nu }}^{\ast }[\frac{\delta \mathcal{H}}{ \delta \nu },\frac{\delta \mathcal{D}}{\delta \nu }]^{\flat}-\mathfrak{a}_{\frac{\delta \mathcal{D}}{ \delta \nu }}^{\ast }\left( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }\right) ^{\flat}-\mathfrak{a}_{\frac{\delta \mathcal{D}}{ \delta \nu }}^{\ast }\left( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }\right) ^{\flat} \\ \dot{\nu} &=-ad_{\frac{\delta \mathcal{D}}{\delta \nu }}^{\ast }[\frac{\delta \mathcal{H}}{\delta \nu }, \frac{\delta \mathcal{D}}{\delta \nu }]^{\flat}-ad_{\frac{\delta \mathcal{D}}{\delta \nu }}^{\ast }\left( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }\right) ^{\flat}+ad_{\frac{\delta \mathcal{D}}{\delta \nu }}^{\ast }\left( \frac{ \delta \mathcal{D}}{\delta \nu }\vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }\right) ^{\flat} -\frac{\delta \mathcal{C}}{\delta \mu }\overset{\ast }{\vartriangleright }[\frac{ \delta \mathcal{H}}{\delta \nu },\frac{\delta \mathcal{D}}{\delta \nu }]^{\flat}\\ &-\frac{\delta \mathcal{C}}{ \delta \mu }\overset{\ast }{\vartriangleright }[\frac{\delta \mathcal{H}}{\delta \nu } \vartriangleleft \frac{\delta \mathcal{C}}{\delta \mu }]^{\flat}+\frac{\delta \mathcal{C}}{\delta \mu }\overset{\ast }{\vartriangleright }[\frac{\delta \mathcal{D}}{\delta \nu } \vartriangleleft \frac{\delta \mathcal{H}}{\delta \mu }]^{\flat} -\mathfrak{b}_{\frac{\delta \mathcal{C}}{\delta \mu }}^{\ast }[\frac{\delta \mathcal{H}}{ \delta \mu },\frac{\delta \mathcal{C}}{\delta \mu }]^{\flat}-\mathfrak{b}_{\frac{\delta \mathcal{C}}{ \delta \mu }}^{\ast }\left( \frac{\delta \mathcal{H}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{C}}{\delta \mu }\right) ^{\flat}+\mathfrak{b}_{\frac{\delta \mathcal{C}}{ \delta \mu }}^{\ast }\left( \frac{\delta \mathcal{D}}{\delta \nu }\vartriangleright \frac{\delta \mathcal{H}}{\delta \mu }\right) ^{\flat}. \end{split} \end{equation}
Here, the left action $\vartriangleright$ and the right action $\vartriangleleft$ are those in \eqref{matched-pair-mutual-actions}, whereas $\overset{\ast }{ \vartriangleright}$ and $\overset{\ast }{\vartriangleleft }$ are the dual actions in \eqref{eta-star} and \eqref{xi-star}, respectively; moreover, $\mathfrak{a}$, $\mathfrak{a}^*$ are defined as in \eqref{a}, \eqref{a*}, and $\mathfrak{b}$, $\mathfrak{b}^*$ as in \eqref{b}, \eqref{b*}. \section{Illustration: Heisenberg Algebras in Mutual Actions}\label{Sec-3D} \subsection{Heisenberg Algebra and Lie-Poisson Dynamics} We start with a $3$-dimensional Heisenberg algebra, which we denote by $\mathfrak{g}$; see \cite{Ma20}. We assume a basis $\{\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}$ for $\G{g}$, and define the Lie algebra operation as \begin{equation}\label{Hei-brack} [\mathbf{e}_1,\mathbf{e}_3]=0, \qquad [\mathbf{e}_1,\mathbf{e}_2]=\mathbf{e}_3, \qquad [\mathbf{e}_2,\mathbf{e}_3]=0 . \end{equation} These give the structure constants $C_{12}^3=-C_{21}^3=1$, with the rest zero. The Heisenberg algebra can be written as a 2-cocycle extension Lie algebra, so we can examine it within the framework of the discussions in Subsection \ref{2coc-Sec}. To see this, referring to the basis of the algebra $\mathfrak{g}$, we define two linear spaces $V=\langle\mathbf{e}_3\rangle$ and $\mathfrak{l}=\langle\mathbf{e}_1,\mathbf{e}_2\rangle$. Accordingly, we introduce a $V$-valued skew-symmetric bilinear mapping on $\mathfrak{l}$ as \begin{equation} \varphi: \mathfrak{l} \times \mathfrak{l} \longrightarrow V, \qquad \varphi(\mathbf{e}_1,\mathbf{e}_1)=0, \quad \varphi(\mathbf{e}_2,\mathbf{e}_2)=0, \quad \varphi(\mathbf{e}_1,\mathbf{e}_2)=\mathbf{e}_3. \end{equation} It is straightforward to verify that $\varphi$ is a 2-cocycle. Taking the left action of $\G{l}$ on $V$ (the first action in the list \eqref{actionsoflk}) to be zero, we easily see that the bracket \eqref{Hei-brack} is indeed of the 2-cocycle extension form \eqref{AAAAA}. \textbf{Coadjoint flow.} The dual space is denoted by $\mathfrak{g}^*$, with the dual basis $\{ \textbf{e}^1,\textbf{e}^2,\textbf{e}^3\}$. Assume the coordinates $\xi=(\xi^1,\xi^2,\xi^3)$ on $\G{g}$ and $\mu=(\mu_1,\mu_2,\mu_3)$ on $\G{g}^*$. Then, the coadjoint action of a Lie algebra element $\xi\in\mathfrak{g}$ on a dual element $\mu\in\mathfrak{g}^*$ is computed to be \begin{equation} ad^*:\mathfrak{g}\times \mathfrak{g}^*\longrightarrow \mathfrak{g}^*, \qquad ad^*_\xi \mu=(\mu_3\xi^2,-\mu_3\xi^1,0). \end{equation} Referring to this calculation, we write the Lie-Poisson dynamics \eqref{loc-LP-Eqn} generated by a Hamiltonian function $\mathcal{H}$ as follows: \begin{equation} \label{Ray-mot} \dot{\mu}_1=\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}, \qquad \dot{\mu}_2=-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}, \qquad \dot{\mu}_3=0. \end{equation} Here, the last equation shows that $\mu_3$ is a constant of motion.
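This coadjoint flow is easy to experiment with numerically. A minimal sketch, assuming the illustrative choice $\mathcal{H}=\frac{1}{2}(\mu_1^2+\mu_2^2)$ (an assumption made only for the demonstration), which integrates \eqref{Ray-mot} with a classical Runge-Kutta stepper and confirms that $\mu_3$ and $\mathcal{H}$ are preserved along the flow:
\begin{verbatim}
import numpy as np

def H_grad(mu):                    # dH/dmu for H = (mu1^2 + mu2^2)/2
    return np.array([mu[0], mu[1], 0.0])

def rhs(mu):                       # Lie-Poisson equations (Ray-mot)
    g = H_grad(mu)
    return np.array([mu[2] * g[1], -mu[2] * g[0], 0.0])

def rk4_step(mu, dt):              # one classical RK4 step
    k1 = rhs(mu); k2 = rhs(mu + dt/2 * k1)
    k3 = rhs(mu + dt/2 * k2); k4 = rhs(mu + dt * k3)
    return mu + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

mu = np.array([1.0, 0.0, 2.0])
H0 = 0.5 * (mu[0]**2 + mu[1]**2)
for _ in range(10000):
    mu = rk4_step(mu, 1e-3)
# mu3 stays exactly 2.0; the energy drift is of RK4 size (~1e-13)
print(mu[2], 0.5 * (mu[0]**2 + mu[1]**2) - H0)
\end{verbatim}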
In equation \eqref{Ray-mot}, if we choose $\mu_1=q$, $\mu_2=p$, and $\mu_3=1$, then we arrive at Hamilton's equations in their classical form \begin{equation} \dot{q}= \frac{\partial \mathcal{H}}{\partial p}, \qquad \dot{p}=-\frac{\partial\mathcal{H}}{\partial q}. \end{equation} Therefore we claim that, in the present geometry, Hamiltonian dynamics can be realized as a coadjoint flow. \textbf{Double Bracket dissipation.} For the present case, the symmetric double bracket in \eqref{doubledissi} is computed to be \begin{equation} (\C{F},\C{S})^{(D)}(\mu)=\mu_3^2\big( \frac{\partial \mathcal{S}}{\partial \mu_1}\frac{\partial \mathcal{F}}{\partial \mu_1} + \frac{\partial \mathcal{S}}{\partial \mu_2}\frac{\partial \mathcal{F}}{\partial \mu_2}\big). \end{equation} Therefore, for a function $\mathcal{S}$, the irreversible dynamics due to the symmetric bracket is computed to be \begin{equation} \dot{\mu}_1=\mu_3^2 \frac{\partial \mathcal{S}}{\partial \mu_1}, \qquad \dot{\mu}_2=\mu_3^2\frac{\partial \mathcal{S}}{\partial\mu_2}, \qquad \dot{\mu}_3=0. \end{equation} Then the metriplectic equations of motion \eqref{aaaa} are computed to be \begin{equation} \label{MD-Ex-1} \dot{\mu}_1=\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}+\mu_3^2\frac{\partial\mathcal{S}}{\partial \mu_1}, \qquad \dot{\mu}_2=-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}+\mu_3^2\frac{\partial \mathcal{S}}{\partial\mu_2},\qquad \dot{\mu}_3=0. \end{equation} If we choose $\mu_1=q$, $\mu_2=p$, and $\mu_3=1$, then the metriplectic dynamics \eqref{MD-Ex-1} turns out to be \begin{equation} \label{qdot-1} \dot{q}=\frac{\partial \mathcal{H}}{\partial p}+ \frac{\partial \mathcal{S}}{\partial q} , \qquad \dot{p}=-\frac{\partial \mathcal{H}}{\partial q}+ \frac{\partial \mathcal{S}}{\partial p}. \end{equation} We record two interesting particular instances of the present dynamics. \textbf{(1)} Let us take the Hamiltonian function $\mathcal{H}=p^2/2+V(q)$ to be the total energy of the system and $\mathcal{S}=\mathcal{S}(q)$; then the system \eqref{qdot-1} reduces to \begin{equation}\label{2ndODE-1} \ddot{q}-\mathcal{S}_{qq} \dot{q}+V_q=0. \end{equation} We cite \cite{Mielke2011} for a more elegant geometrization of the second order ODE \eqref{2ndODE-1} in terms of the GENERIC framework. \textbf{(2)} As another simple application of the dissipative system \eqref{qdot-1}, we consider a general Hamiltonian function $\mathcal{H}$ but choose $\mathcal{S}=ap^2/2$ for a scalar $a$; then a fairly straightforward calculation turns \eqref{qdot-1} into \begin{equation}\label{dyn-exp} \dot{q}=\frac{\partial \mathcal{H}}{\partial p}, \qquad \dot{p}=-\frac{\partial \mathcal{H}}{\partial q}+ap. \end{equation} This is the conformal Hamiltonian dynamics described in \cite{Perl}. To see the geometry behind this dynamics, first consider the vector field $X=(\partial \mathcal{H}/\partial p) \partial_q+(ap-{\partial \mathcal{H}}/\partial q )\partial_ p$ generating \eqref{dyn-exp} and then define the symplectic two-form $\Omega=dq \wedge dp$. A Hamiltonian vector field preserves the symplectic two-form, but the vector field $X$ satisfies \begin{equation} \mathfrak{L}_X \Omega= d\big(d\mathcal{H}-ap\,dq\big) = a\,dq \wedge dp=a \Omega, \end{equation} where $\mathfrak{L}$ denotes the Lie derivative. This reads that $X$ preserves the symplectic two-form up to the conformal factor $a$.
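The conformal rescaling $\mathfrak{L}_X\Omega=a\Omega$ can also be observed numerically: the flow map of \eqref{dyn-exp} rescales phase-space area at the exponential rate $a$. A minimal sketch for the illustrative quadratic choice $\mathcal{H}=\frac{1}{2}(q^2+p^2)$ (an assumption for the demo), for which \eqref{dyn-exp} is linear and its flow map is a matrix exponential:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

a = 0.3
# Conformal flow (dyn-exp) for H = (q^2 + p^2)/2:
#   qdot = p,  pdot = -q + a p,   i.e.  zdot = A z
A = np.array([[0.0, 1.0], [-1.0, a]])

t = 2.0
Phi = expm(A * t)          # flow map of the linear system over time t
# By Jacobi's formula det(Phi) = exp(trace(A) t) = exp(a t), i.e. the
# symplectic area is rescaled by the conformal factor exp(a t).
print(np.linalg.det(Phi), np.exp(a * t))
assert np.isclose(np.linalg.det(Phi), np.exp(a * t))
\end{verbatim}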
\subsection{Coupling of Two Heisenberg Algebras and Matched Lie-Poisson Dynamics} Consider two $3D$ Heisenberg algebras $\mathfrak{g}$ and $\mathfrak{h}$. We choose bases \begin{equation}\label{Hei-basis} \{ \mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\}\subset \mathfrak{g}, \qquad \{ \mathbf{f}_1,\mathbf{f}_2,\mathbf{f}_3\}\subset \mathfrak{h}. \end{equation} To fix the notation, we record here the coordinate realizations of two arbitrary elements $\xi$ and $\xi'$ in $\mathfrak{g}$, and two arbitrary elements $\eta$ and $\eta'$ in $\mathfrak{h}$, as follows: \begin{equation} \label{elementsV+W} \begin{split} &\xi=\xi^1 \mathbf{e}_1+ \xi^2 \mathbf{e}_2+ \xi^3 \mathbf{e}_3, \qquad \xi'=\xi^{1'} \mathbf{e}_1+ \xi^{2'} \mathbf{e}_2+ \xi^{3'} \mathbf{e}_3, \\ &\eta=\eta^1 \mathbf{f}_1+ \eta^2 \mathbf{f}_2+ \eta^3 \mathbf{f}_3, \qquad \eta'=\eta^{1 '} \mathbf{f}_1+\eta^{2'} \mathbf{f}_2+ \eta^{3'} \mathbf{f}_3. \end{split} \end{equation} In accordance with these choices, the Lie algebra brackets on $\mathfrak{g}$ and $\mathfrak{h}$ can be exhibited in the form \begin{equation}\label{bracketsonVW} [\xi,\xi']=(\xi^1 \xi^{2'} -\xi^{1 '} \xi^2 )\mathbf{e}_3, \qquad {[\eta,\eta']=(\eta^1 \eta^{2'}-\eta^{1'} \eta^2)\mathbf{f}_3}, \end{equation} respectively. Obeying the notation in \eqref{Lieact}, and referring to the coordinate realizations \eqref{elementsV+W}, we introduce a right action of $\mathfrak{g}$ on $\mathfrak{h}$, and a left action of $\mathfrak{h}$ on $\mathfrak{g}$, as \cite{Ma20} \begin{equation} \label{actionofHeisenberg} \begin{split} \vartriangleright &:\mathfrak{h}\otimes \mathfrak{g}\rightarrow \mathfrak{g}, \qquad \eta \vartriangleright \xi=-\eta^1 \xi^2 \mathbf{e}_3 ,\\ \vartriangleleft &:\mathfrak{h}\otimes \mathfrak{g}\rightarrow \mathfrak{h}, \qquad \eta \vartriangleleft \xi=-\xi^{1}\eta^2\mathbf{f}_3 . \end{split} \end{equation} This reads the action constants $R_{12}^3=-1$ and $L_{12}^3=-1$, with the rest zero. It is straightforward to prove that these actions indeed satisfy the matched pair Lie algebra conditions in \eqref{comp-mpa}. Accordingly, for the present coupling, the matched pair Lie algebra bracket \eqref{mpla} on $\G{g}\bowtie \G{h}$ is computed to be \begin{equation} \label{heisliebrac} [(\xi,\eta),(\xi',\eta')]_{\bowtie}=(\xi^1\xi^{2'}-\xi^2 \xi^{1'}-\eta^1\xi^{2'}+\eta^{1'} \xi^2)\mathbf{e}_3 \oplus (\eta^1\eta^{2'}-\eta^2 \eta^{1'}-\eta^2\xi^{1'}+\eta^{2'} \xi^1)\mathbf{f}_3. \end{equation} \textbf{As coupling of two cocycle extensions.} Referring to the basis \eqref{Hei-basis}, the Heisenberg algebras $\mathfrak{g}$ and $\mathfrak{h}$ can be realized as the 2-cocycle extensions \begin{equation} \label{decomp-Hei} V{_\varphi\rtimes}\mathfrak{l} :=\langle \mathbf{e_3} \rangle {_\varphi\rtimes} \langle \mathbf{e_1},\mathbf{e_2} \rangle, \qquad W {_\phi\rtimes} \mathfrak{k}:=\langle \mathbf{f_3} \rangle{_\phi\rtimes} \langle \mathbf{f_1},\mathbf{f_2} \rangle, \end{equation} in the light of the following 2-cocycles \begin{equation} \label{eps-iota} \begin{split} \varphi&:\mathfrak{l} \times \mathfrak{l} \longrightarrow V, \qquad \varphi(\mathbf{e}_1,\mathbf{e}_1)=0, \quad \varphi(\mathbf{e}_2,\mathbf{e}_2)=0, \quad \varphi(\mathbf{e}_1,\mathbf{e}_2)=\textbf{e}_3, \\ \phi&:\mathfrak{k} \times \mathfrak{k} \longrightarrow W, \qquad \phi(\mathbf{f}_1,\mathbf{f}_1)=0, \quad \phi(\mathbf{f}_2,\mathbf{f}_2)=0, \quad \phi(\mathbf{f}_1,\mathbf{f}_2)=\mathbf{f}_3, \end{split} \end{equation} while the left actions in \eqref{actionsoflk} are zero. We now examine Proposition \ref{Prop-Gokhan} for the matched pair of two Heisenberg algebras; that is, we show that $\G{g}\bowtie \G{h}$ is itself a 2-cocycle extension.
For this, following the notation in Subsection \ref{centrallyextend}, we take \textbf{(1)} the mutual actions $\blacktriangleleft$ and $\blacktriangleright$ of $\mathfrak{l}$ and $\mathfrak{k}$ on each other exhibited in \eqref{Lieact-k-l} and \textbf{(2)} the cross actions $\curvearrowleft$ and $\curvearrowright$ in \eqref{actionsofkl} to be zero mappings, whereas \textbf{(3)} the mappings $\epsilon$ and $\iota$ given in \eqref{epsiot} are computed to be \begin{equation} \epsilon(\mathbf{f_1},\mathbf{e_2})=-\mathbf{e_3}, \qquad \iota(\mathbf{f_2},\mathbf{e_1})=-\mathbf{f_3}, \end{equation} while the rest are zero. Referring to these choices, it is now straightforward to observe that both the left action $\vartriangleright$ and the right action $\vartriangleleft$ of the Heisenberg algebras given in the display \eqref{actionofHeisenberg} can be recast in the form of \eqref{left-comp}. This reads that the matched Lie algebra bracket \eqref{heisliebrac} on the Heisenberg algebras indeed fits \eqref{bracketlk-1}. On the other hand, we now show that the matched pair $\G{g}\bowtie \G{h}$ is a 2-cocycle extension of $\G{l}\bowtie \G{k}=\langle \mathbf{e}_1, \mathbf{e}_2\rangle \bowtie \langle \mathbf{f}_1, \mathbf{f}_2\rangle $ over its representation space $V\oplus W:=\langle \mathbf{e}_3 \rangle \oplus \langle \mathbf{f}_3 \rangle $ according to the decomposition in \eqref{decomp-Hei}. We remark here that the mutual actions in $\G{l}\bowtie \G{k}$ are trivial, since we take $\blacktriangleleft$ and $\blacktriangleright$ of $\mathfrak{l}$ and $\mathfrak{k}$ to be zero mappings. Likewise, we take the left action ${\cdot\kern-.33em\triangleright}$ of the Lie algebra $\G{l}\bowtie \G{k}$ on $V\oplus W$ to be trivial. We introduce a cocycle \begin{equation} \begin{split} \Theta&: (\mathfrak{l} \bowtie \mathfrak{k}) \times (\mathfrak{l} \bowtie \mathfrak{k}) \longrightarrow V \oplus W \\&:\big (\langle \mathbf{e}_1, \mathbf{e}_2\rangle \bowtie \langle \mathbf{f}_1, \mathbf{f}_2\rangle \big ) \times \big (\langle \mathbf{e}_1, \mathbf{e}_2\rangle \bowtie \langle \mathbf{f}_1, \mathbf{f}_2\rangle \big ) \longrightarrow \langle \mathbf{e}_3 \rangle \oplus \langle \mathbf{f}_3 \rangle, \end{split} \end{equation} which, referring to the formula in \eqref{theta}, can be written explicitly as \begin{equation} \label{Hei-coco} \begin{split} \Theta(\xi^1\mathbf{e}_1+\xi^2\mathbf{e}_2+\eta^1\mathbf{f}_1+\eta^2\mathbf{f}_2,\xi^{1'}\mathbf{e}_1+\xi^{2'}\mathbf{e}_2+\eta^{1'}\mathbf{f}_1+\eta^{2'}\mathbf{f}_2)&=(\xi^1\xi^{2'}-\xi^{1'}\xi^2-\eta^1\xi^{2'}+\eta^{1'}\xi^2)\mathbf{e}_3 \\&\qquad \oplus (\eta^1\eta^{2'}-\eta^2 \eta^{1'}-\eta^2\xi^{1'}+\eta^{2'} \xi^1)\mathbf{f}_3. \end{split} \end{equation} So, combining the trivial action ${\cdot\kern-.33em\triangleright}$ of $\G{l}\bowtie \G{k}$ on $V\oplus W$ with the cocycle \eqref{Hei-coco}, it is immediate to see that the matched Lie algebra bracket \eqref{heisliebrac} is also a 2-cocycle extension bracket obeying \eqref{AAAAA}. To sum up, the coupling of two Heisenberg algebras provides a nontrivial example of Proposition \ref{Prop-Gokhan}. \textbf{Matched Lie-Poisson equations.} Let us first fix the notation for the dual elements as \begin{equation} \label{daul-elements-exp-1} \mu=\mu_1 \mathbf{e}^1+\mu_2 \mathbf{e}^2+\mu_3 \mathbf{e}^3 \in \mathfrak{g}^*, \qquad \nu=\nu_1 \mathbf{f}^1+\nu_2 \mathbf{f}^2+\nu_3 \mathbf{f}^3 \in \mathfrak{h}^*.
\end{equation} The dual actions $\overset{\ast}{\vartriangleleft}$ in \eqref{xi-star} and $\overset{\ast}{\vartriangleright}$ in \eqref{eta-star}, together with the cross actions $\mathfrak{a}^*$ in \eqref{a*} and $\mathfrak{b}^*$ in \eqref{b*}, are computed to be \begin{equation}\label{dualactionsofheis} \mu \overset{\ast}{\vartriangleleft} \eta=-\mu_3\eta^1 \mathbf{e}^2, \qquad \xi \overset{\ast}{\vartriangleright} \nu=-\nu_3 \xi^1 \mathbf{f}^2 , \qquad \mathfrak{b}^*_\xi \mu=-\mu_3 \xi^2 \mathbf{f}^1, \qquad \mathfrak{a}^*_{\eta} \nu=-\nu_3 \eta^2 \mathbf{e}^1. \end{equation} Then the matched Lie-Poisson equations \eqref{LPEgh} generated by a Hamiltonian function $\mathcal{H}=\mathcal{H}(\mu,\nu)$ on $\mathfrak{g}^* \oplus \mathfrak{h}^*$ are computed to be \begin{equation} \label{Ray-mot-1} \begin{split} \dot{\mu}_1&=-\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}+\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}, \hspace{1.2cm} \dot{\mu}_2=\mu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}, \hspace{1cm} \dot{\mu}_3=0, \\ \dot{\nu}_1&=-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}+\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}, \hspace{1cm} \dot{\nu}_2=-\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}+\nu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}, \hspace{1.1cm} \dot{\nu}_3=0. \end{split} \end{equation} Let us study some particular instances of this system of equations. In equation \eqref{Ray-mot-1}, if we choose $\mu_1=q$, $\mu_2=p$, $\nu_1=u$, $\nu_2=w$, and $\mu_3=\nu_3=1$, then we arrive at Hamilton's equations in a coupled form: \begin{equation}\label{Lie-poissonqp} \begin{split} \dot{q}&= \frac{\partial \mathcal{H}}{\partial p}-\frac{\partial \mathcal{H}}{\partial w}, \hspace{1.2cm} \dot{p}=\frac{\partial \mathcal{H}}{\partial u}-\frac{\partial \mathcal{H}}{\partial q}, \\ \dot{u}&= -\frac{\partial \mathcal{H}}{\partial p}+\frac{\partial \mathcal{H}}{\partial w}, \hspace{1cm} \dot{w}=-\frac{\partial \mathcal{H}}{\partial u}+\frac{\partial \mathcal{H}}{\partial q}. \end{split} \end{equation} \textbf{Rayleigh type dissipation.} For the present case, in order to add a Rayleigh type dissipation term to the Lie-Poisson dynamics on $\mathfrak{g}^* \oplus \mathfrak{h}^*$, we employ a linear operator \begin{equation} \mathfrak{g}^* \oplus \mathfrak{h}^* \longrightarrow \mathfrak{g} \bowtie \mathfrak{h}, \qquad \mu\oplus\nu \mapsto ( \Upsilon^{\G{g}^1}\mathbf{e}_1+\Upsilon^{\G{g}^2}\mathbf{e}_2+\Upsilon^{\G{g}^3}\mathbf{e}_3 , \Upsilon^{\G{h}^1}\mathbf{f}_1+\Upsilon^{\G{h}^2}\mathbf{f}_2+ \Upsilon^{\G{h}^3}\mathbf{f}_3 ), \end{equation} where $\Upsilon^{\G{g}^i}$ and $\Upsilon^{\G{h}^i}$ denote the components of $\Upsilon^{\G{g}}(\mu)$ and $\Upsilon^{\G{h}}(\nu)$, respectively. Referring to the matched Lie-Poisson equations of motion \eqref{eqofmoofRay}, the dynamics on the dual space $\mathfrak{g}^*\oplus \mathfrak{h}^*$ is obtained as follows: \begin{equation} \begin{split} &\dot{\mu}_1-\mu_3\frac{\partial \mathcal{H}}{\partial \mu_2} +\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}=\nu_3\Upsilon^{\G{h}^2}-\mu_3\Upsilon^{\G{g}^2}, \qquad \dot{\mu}_2-\mu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}+\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}=\mu_3\Upsilon^{\G{g}^1}-\mu_3 \Upsilon^{\G{h}^1}, \qquad \dot{\mu}_3=0, \\ &\dot{\nu}_1 + \mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}-\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}=\mu_3\Upsilon^{\G{g}^2}-\nu_3\Upsilon^{\G{h}^2},\qquad \dot{\nu}_2 +\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}-\nu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}=\nu_3\Upsilon^{\G{h}^1}-\nu_3 \Upsilon^{\G{g}^1}, \qquad \dot{\nu}_3=0. \end{split} \end{equation}
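For the conservative part \eqref{Ray-mot-1}, one can check directly, and numerically, that $\mathcal{H}$ as well as the Casimirs $\mu_3$ and $\nu_3$ are preserved along the flow. A minimal sketch, assuming the illustrative Hamiltonian $\mathcal{H}=\frac{1}{2}(\mu_1^2+\mu_2^2+\nu_1^2+\nu_2^2)$ (an assumption made only for the demonstration):
\begin{verbatim}
import numpy as np

def grads(z):                  # z = (mu1, mu2, mu3, nu1, nu2, nu3)
    # dH/dz for H = (mu1^2 + mu2^2 + nu1^2 + nu2^2)/2
    return np.array([z[0], z[1], 0.0, z[3], z[4], 0.0])

def rhs(z):                    # matched Lie-Poisson equations (Ray-mot-1)
    g = grads(z)
    m3, n3 = z[2], z[5]
    return np.array([-n3*g[4] + m3*g[1],
                      m3*g[3] - m3*g[0],
                      0.0,
                     -m3*g[1] + n3*g[4],
                     -n3*g[3] + n3*g[0],
                      0.0])

z = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 3.0])
H0 = 0.5*(z[0]**2 + z[1]**2 + z[3]**2 + z[4]**2)
dt = 1e-3
for _ in range(20000):         # classical RK4 stepper
    k1 = rhs(z); k2 = rhs(z + dt/2*k1)
    k3 = rhs(z + dt/2*k2); k4 = rhs(z + dt*k3)
    z = z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
# mu3, nu3 are exactly constant; the energy drift is of RK4 size
print(0.5*(z[0]**2 + z[1]**2 + z[3]**2 + z[4]**2) - H0)
\end{verbatim}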
\textbf{Double Bracket dissipation.} For the present discussion, the matched double bracket \eqref{matchdouble} takes the particular form \begin{equation}\label{matchdoublebrofheis} \begin{split} (\mathcal{F},\mathcal{H})^{(mD)}(\mu\oplus \nu)=&\mu_3^2 \frac{\partial \mathcal{F}}{\partial \mu_1} \frac{\partial \mathcal{H}}{\partial \mu_1}+(2\mu^2_3+2\nu_3\mu_3+\nu^2_3)\frac{\partial \mathcal{F}}{\partial \mu_2} \frac{\partial \mathcal{H}}{\partial \mu_2}-(\mu_3+\nu_3)^2\frac{\partial \mathcal{F}}{\partial \nu_2} \frac{\partial \mathcal{H}}{\partial \mu_2}\\ &+(2\nu^2_3+2\mu_3\nu_3+\mu^2_3)\frac{\partial \mathcal{F}}{\partial \nu_2} \frac{\partial \mathcal{H}}{\partial \nu_2} -(\mu_3+\nu_3)^2 \frac{\partial \mathcal{F}}{\partial \mu_2} \frac{\partial \mathcal{H}}{\partial \nu_2}+\nu^2_3 \frac{\partial \mathcal{F}}{\partial \nu_1} \frac{\partial \mathcal{H}}{\partial \nu_1} . \end{split} \end{equation} Then the irreversible dynamics \eqref{eqofmoofmatchdouble} is computed to be \begin{equation} \begin{split} \dot{\mu}_1&=\frac{\partial \mathcal{S}}{\partial \mu_1}\mu_3^2, \hspace{1cm} \dot{\mu}_2=\frac{\partial \mathcal{S}}{\partial \mu_2}\mu_3^2, \hspace{1cm} \dot{\mu}_3=0, \\ \dot{\nu}_1&=\frac{\partial \mathcal{S}}{\partial \nu_1} \nu_3^2, \hspace{1cm} \dot{\nu}_2=\frac{\partial \mathcal{S}}{\partial \nu_2} \nu_3^2, \hspace{1.2cm} \dot{\nu}_3=0. \end{split} \end{equation} Let us now collect the reversible Lie-Poisson dynamics in \eqref{Ray-mot-1} and the irreversible dynamics generated by $\mathcal{S}$ through the symmetric (double) bracket in \eqref{matchdoublebrofheis}. The metriplectic equations of motion are then computed to be \begin{equation} \label{MD-Ex} \begin{split} \dot{\mu}_1&=\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}-\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}+\frac{\partial \mathcal{S}}{\partial \mu_1}\mu_3^2, \hspace{1.3cm} \dot{\mu}_2=\mu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}+\frac{\partial \mathcal{S}}{\partial \mu_2}\mu_3^2, \\ \dot{\nu}_1&=-\mu_3 \frac{\partial \mathcal{H}}{\partial \mu_2}-\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_2}+\frac{\partial \mathcal{S}}{\partial \nu_1} \nu_3^2, \hspace{1cm} \dot{\nu}_2=\nu_3 \frac{\partial \mathcal{H}}{\partial \nu_1}+\nu_3 \frac{\partial \mathcal{H}}{\partial \mu_1}+\frac{\partial \mathcal{S}}{\partial \nu_2} \nu_3^2. \end{split} \end{equation} After the choice $\mu_1=q$, $\mu_2=p$, $\nu_1=u$, $\nu_2=w$, and $\mu_3=\nu_3=1$, this reads the following system: \begin{equation} \label{qdot} \begin{split} \dot{q}&=\frac{\partial \mathcal{H}}{\partial p}-\frac{\partial \mathcal{H}}{\partial w}+ \frac{\partial \mathcal{S}}{\partial q} , \hspace{1.3cm} \dot{p}=\frac{\partial \mathcal{H}}{\partial u}-\frac{\partial \mathcal{H}}{\partial q}+\frac{\partial \mathcal{S}}{\partial p}, \\ \dot{u}&=-\frac{\partial \mathcal{H}}{\partial p}-\frac{\partial \mathcal{H}}{\partial w}+ \frac{\partial \mathcal{S}}{\partial u} , \hspace{1cm} \dot{w}=\frac{\partial \mathcal{H}}{\partial u}+\frac{\partial \mathcal{H}}{\partial q}+ \frac{\partial \mathcal{S}}{\partial w}.
Let us, more particularly, take the Hamiltonian function $\mathcal{H}=\frac{1}{2}(p^2+w^2)+V(q,u)$ to be the total energy of the system. Then from (\ref{qdot}) we obtain the second order system
\begin{equation}\label{2ndODE}
\begin{split}
\ddot{q}-\mathcal{S}_{qq}\dot{q}+2V_q+\mathcal{S}_w-\mathcal{S}_p=0, \\
\ddot{u}-\mathcal{S}_{uu}\dot{u}+2V_u+\mathcal{S}_w+\mathcal{S}_p=0,
\end{split}
\end{equation}
where we have assumed that the mixed second order derivatives of $\mathcal{S}$ vanish, so that, for instance, $\frac{d}{dt}\mathcal{S}_{q}=\mathcal{S}_{qq}\dot{q}$.
\section{Illustration: Rigid Bodies} \label{Diss-Gen-Exp}
We consider two identical three-dimensional Euclidean spaces denoted by $\mathfrak{g}=\mathbb{R}^{3}$ and $\mathfrak{h}=\mathbb{R}^{3}_\mathbf{k}$. These become Lie algebras when equipped with the following Lie algebra structures
\begin{equation}\label{bracketsonsu2}
\left [ \xi,\xi' \right ]_{\mathbb{R}^{3}}= \xi \times \xi', \qquad \left [ \eta, \eta' \right ]_{\mathbb{R}^{3}_\mathbf{k}}=\mathbf{k} \times (\eta \times \eta'),
\end{equation}
respectively, \cite{Ma90}. Here, $\times$ denotes the cross product on $\mathbb{R}^{3}$, and $\textbf{k}$ is the unit vector $(0,0,1)$ of the standard basis on $\mathbb{R}^{3}$. Notice that the subscript $\mathbf{k}$ in $\mathbb{R}^{3}_\mathbf{k}$ is there to remind the reader that the bracket is not the classical cross product on $\mathbb{R}^3$ but the second bracket in \eqref{bracketsonsu2}. We follow the notation presented in Subsection \ref{doublecross}. The coadjoint action of $\mathfrak{g}$ on its dual $\mathfrak{g}^*\simeq \mathbb{R}^{3}$, and the coadjoint action of $\mathfrak{h}$ on its dual $\mathfrak{h}^*\simeq \mathbb{R}^{3}$, are computed to be
\begin{equation}
ad^*_\xi \mu =\mu \times \xi, \qquad ad^*_{\eta} \nu =(\nu \cdot \eta)\mathbf{k}-\nu(\mathbf{k}\cdot \eta).
\end{equation}
In order to employ the notation in \eqref{nonbracket2-mp}, we introduce the following basis $(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})$ on $\mathbb{R}^{3}$ and $(\mathbf{f}_{1},\mathbf{f}_{2},\mathbf{f}_{3})$ on $\mathbb{R}^{3}_\mathbf{k}$:
\begin{equation}
\mathbf{e}_{1}=\mathbf{f}_1=\left ( 1,0,0 \right ), \quad \mathbf{e}_{2}=\mathbf{f}_2=\left ( 0,1,0 \right ),\quad \mathbf{e}_{3}=\mathbf{f}_3=\textbf{k}=\left ( 0,0,1 \right ).
\end{equation}
So the structure constants in \eqref{nonbracket2-mp} turn out to be
\begin{equation}\label{C}
C_{12}^3=C_{31}^2=C_{23}^1=1, \qquad D_{13}^1=D_{23}^2=1,
\end{equation}
with the rest being zero.
\textbf{Mutual actions.} The right action of $\mathbb{R}^{3}$ on $\mathbb{R}^{3}_\textbf{k}$ and the left action of $\mathbb{R}^{3}_\textbf{k}$ on $\mathbb{R}^{3}$ are defined through
\begin{equation} \label{actionsofsu2}
\begin{split}
\vartriangleleft&: \mathbb{R}^{3}_\textbf{k} \times \mathbb{R}^{3} \rightarrow \mathbb{R}^{3}_\textbf{k}, \qquad \eta \vartriangleleft \xi :=\eta \times \xi, \\
\vartriangleright&: \mathbb{R}^{3}_\textbf{k} \times \mathbb{R}^{3} \rightarrow \mathbb{R}^{3}, \qquad \eta \vartriangleright \xi := \eta \times (\xi \times \textbf{k}).
\end{split}
\end{equation}
It is straightforward to prove that these actions indeed satisfy the matched pair Lie algebra conditions in \eqref{comp-mpa}. Referring to the notation fixed in \eqref{local-act}, the constants determining the actions (\ref{actionsofsu2}) can be computed to be
\begin{equation}\label{L}
L_{11}^3=L_{22}^3=-1 ,\qquad L_{32}^2=L_{31}^1=1, \qquad R_{12}^3=R_{31}^2=R_{23}^1=-1,\qquad R_{13}^2=R_{21}^3=R_{32}^1=1,
\end{equation}
with the rest being zero.
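The constants in \eqref{C} and \eqref{L} can be recomputed mechanically from the definitions \eqref{bracketsonsu2} and \eqref{actionsofsu2}. The sketch below (ours, in Python) does so, reading off $L_{ai}^{j}$ as the $\mathbf{e}_j$-component of $\mathbf{f}_a \vartriangleright \mathbf{e}_i$ and $R_{ia}^{j}$ as the $\mathbf{f}_j$-component of $\mathbf{f}_a \vartriangleleft \mathbf{e}_i$; these index conventions are our reading of \eqref{local-act}, inferred from \eqref{L} itself.
\begin{verbatim}
import numpy as np

e = np.eye(3)                                     # e_i = f_i; k = e_3
k = e[2]
cross = np.cross

br_g  = lambda x, y: cross(x, y)                  # bracket on R^3
br_h  = lambda x, y: cross(k, cross(x, y))        # bracket on R^3_k
left  = lambda eta, xi: cross(eta, cross(xi, k))  # eta |> xi
right = lambda eta, xi: cross(eta, xi)            # eta <| xi

def report(name, table):
    for (i, j, m), c in table.items():
        if abs(c) > 1e-12:
            print(f"{name}_{i}{j}^{m} = {c:+.0f}")

# C and D over i < j only, since they are antisymmetric in the lower indices
report("C", {(i+1, j+1, m+1): np.dot(br_g(e[i], e[j]), e[m])
             for i in range(3) for j in range(i+1, 3) for m in range(3)})
report("D", {(i+1, j+1, m+1): np.dot(br_h(e[i], e[j]), e[m])
             for i in range(3) for j in range(i+1, 3) for m in range(3)})
report("L", {(a+1, i+1, m+1): np.dot(left(e[a], e[i]), e[m])
             for a in range(3) for i in range(3) for m in range(3)})
report("R", {(i+1, a+1, m+1): np.dot(right(e[a], e[i]), e[m])
             for a in range(3) for i in range(3) for m in range(3)})
\end{verbatim}
Up to the antisymmetry of the lower indices (for instance $C_{13}^{2}=-1$ is equivalent to $C_{31}^{2}=1$), the printed output reproduces exactly the nonzero constants of \eqref{C} and \eqref{L}.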
According to Theorem \ref{mp-prop}, one can define the matched pair Lie algebra $\mathbb{R}^{3}\bowtie \mathbb{R}^{3}_\textbf{k}$ equipped with the matched Lie bracket \eqref{mpla}, calculated as
\begin{equation} \label{cerc}
\left [ \xi\oplus \eta , \xi'\oplus \eta' \right ]_{\bowtie} =\big(\xi \times\xi'+\eta \times(\xi'\times \mathbf{k})-\eta'\times (\xi \times \mathbf{k})\big) \oplus\big( \mathbf{k}\times(\eta\times\eta')+\eta\times \xi' - \eta'\times \xi \big).
\end{equation}
\textbf{Matched Lie-Poisson dynamics.} The Lie-Poisson dynamics on the dual space $\G{g}^* \oplus \G{h}^* \simeq \mathbb{R}^{3}\times \mathbb{R}^{3}$ can be obtained by employing Proposition \ref{ad-*-prop}. To this end, we first compute the duals of the actions \eqref{actionsofsu2}, in the respective order, as
\begin{equation}\label{dualactionsofsu}
\mu \overset{\ast}{\vartriangleleft} \eta=\mu(\eta\cdot \mathbf{k})-(\mu\cdot \mathbf{k})\eta, \qquad \xi \overset{\ast}{\vartriangleright} \nu =\xi \times \nu,
\end{equation}
whereas the mappings \eqref{a*} and \eqref{b*} are
\begin{equation}\label{dualactionsofsu-2}
\mathfrak{a}^*_{\eta} \nu=\nu \times \eta, \qquad \mathfrak{b}^*_\xi \mu=(\mu \cdot \xi)\mathbf{k}-(\mu \cdot \mathbf{k})\xi.
\end{equation}
Notice that the Lie-Poisson equations on the dual space $\G{g}^*$ correspond to rigid body dynamics in $3D$, \cite{Ho08,MaRa13}. So we can consider the matched pair dynamics in this setting as the coupling of two rigid bodies in $3D$. Since the dynamics of rigid bodies is governed by the minus Lie-Poisson equation, we employ the minus Lie-Poisson bracket and the minus Lie-Poisson equations in the sequel. We take coordinates $\mu=(\mu_1,\mu_2,\mu_3)$ and $\nu=(\nu_1,\nu_2,\nu_3)$. Recall that the explicit realization of the matched Lie-Poisson bivector, defined for the matched pair Lie-Poisson bracket \eqref{Lie-pois-double-non-bracket-tc}, has already been given in \eqref{cosym2}. In the following table, we exhibit the coefficients of the matched Lie-Poisson bivector for the present case.
\begin{center}
\begin{tabular}{SSSSS}
\toprule[1.8pt]
{$\Lambda$} & {$\Lambda_{\alpha \beta}$} & {$\Lambda_{ \alpha b }$} & {$\Lambda_{ a \beta }$} & {$\Lambda_{ a b }$} \\
\midrule[1.5pt]
{$\Lambda_{11}$} & 0 & {$\mu_3$} & {$-\mu_3$} & 0 \\
{$\Lambda_{12}$} & {$-\mu_3$} & {$-\nu_3$} & {$\nu_3$} & 0 \\
{$\Lambda_{13}$} & {$\mu_2$} & {$\nu_2-\mu_1$} & {$-\nu_2+\mu_1$} & {$\nu_1$} \\
\midrule
{$\Lambda_{21}$} & {$\mu_3$} & {$\nu_3$} & {$-\nu_3$} & 0 \\
{$\Lambda_{22}$} & 0 & {$\mu_3$} & {$-\mu_3$} & 0 \\
{$\Lambda_{23}$} & {$\mu_1$} & {$-\nu_1-\mu_2$} & {$\nu_1+\mu_2$} & {$\nu_2$} \\
\midrule
{$\Lambda_{31}$} & {$-\mu_2$} & {$-\nu_2$} & {$\nu_2$} & {$-\nu_1$} \\
{$\Lambda_{32}$} & {$-\mu_1$} & {$\nu_1$} & {$-\nu_1$} & {$-\nu_2$} \\
{$\Lambda_{33}$} & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}
\end{center}
We remark that the first column determines the Lie-Poisson bivector on $\G{g}^*$ and the last one that on $\G{h}^*$, whereas the second and third columns manifest the mutual actions \eqref{actionsofsu2}.
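The entries of the table can likewise be cross-checked symbolically: pairing $\mu\oplus\nu$ against the matched bracket \eqref{cerc} of basis vectors yields the bivector entries $\Lambda_{IJ}=\langle \mu\oplus\nu, [E_I,E_J]_{\bowtie}\rangle$, up to the overall sign fixed by the minus Lie-Poisson convention. A short sympy sketch (ours) reads:
\begin{verbatim}
import sympy as sp

mu = sp.Matrix(sp.symbols('mu1 mu2 mu3'))
nu = sp.Matrix(sp.symbols('nu1 nu2 nu3'))
k = sp.Matrix([0, 0, 1])
zero = sp.zeros(3, 1)

def bracket(xi, eta, xip, etap):
    # matched Lie bracket (cerc) on R^3 bowtie R^3_k
    g = xi.cross(xip) + eta.cross(xip.cross(k)) - etap.cross(xi.cross(k))
    h = k.cross(eta.cross(etap)) + eta.cross(xip) - etap.cross(xi)
    return g, h

# basis of the double: e_1, e_2, e_3 followed by f_1, f_2, f_3
E = [(sp.eye(3)[:, i], zero) for i in range(3)] \
  + [(zero, sp.eye(3)[:, i]) for i in range(3)]

Lam = sp.zeros(6, 6)
for I in range(6):
    for J in range(6):
        g, h = bracket(E[I][0], E[I][1], E[J][0], E[J][1])
        Lam[I, J] = sp.expand(mu.dot(g) + nu.dot(h))
sp.pprint(Lam)
\end{verbatim}
The resulting $6\times 6$ matrix can then be compared, block by block and modulo the sign convention, with the table above.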
Collecting all these results, we compute the matched Lie-Poisson equations (\ref{LPEgh}) generated by a Hamiltonian function $\mathcal{H}$ on the dual space as
\begin{equation}
\begin{split}
\dot{\mu}&=\frac{\partial \mathcal{H}}{\partial\mu}\times \mu+\Big(\frac{\partial \mathcal{H}}{\partial \nu}\cdot \mathbf{k}\Big)\mu-(\mu\cdot \mathbf{k})\frac{\partial \mathcal{H}}{\partial \nu}-\nu \times \frac{\partial \mathcal{H}}{\partial \nu}, \\
\dot{\nu}&=\Big(\mathbf{k} \cdot \frac{\partial \mathcal{H}}{\partial \nu}\Big)\nu-\Big(\nu \cdot \frac{\partial \mathcal{H}}{\partial \nu}\Big)\mathbf{k}+\frac{\partial \mathcal{H}}{\partial \mu}\times \nu+(\mu \cdot \mathbf{k})\frac{\partial \mathcal{H}}{\partial \mu}-\Big(\mu \cdot \frac{\partial \mathcal{H}}{\partial \mu}\Big)\mathbf{k}.
\end{split}
\end{equation}
In Section \ref{Sec-C-DS}, various ways of coupling dissipative terms are listed. We now examine these couplings for the present concrete example.
\textbf{Rayleigh dissipation.} In this $3D$ framework, for a linear operator
\begin{equation}
(\mathbb{R}^{3})^*\oplus (\mathbb{R}^{3}_{\mathbf{k}})^* \longrightarrow \mathbb{R}^{3} \bowtie \mathbb{R}^{3}_{\mathbf{k}}, \qquad \mu\oplus\nu \mapsto \Upsilon^{\G{g}}(\mu)\oplus\Upsilon^{\G{h}}(\nu),
\end{equation}
the matched Lie-Poisson system with Rayleigh type dissipation, that is, the system \eqref{eqofmoofRay}, takes the particular form
\begin{equation}
\begin{split}
&\dot{\mu}-\frac{\partial \mathcal{H}}{\partial \mu}\times \mu-\mu\Big(\frac{\partial \mathcal{H}}{\partial \nu}\cdot \mathbf{k}\Big)+(\mu\cdot \mathbf{k})\frac{\partial \mathcal{H}}{\partial \nu}+\nu \times \frac{\partial \mathcal{H}}{\partial \nu}=\mu \times \Upsilon^{\G{g}}(\mu)-\mu(\Upsilon^{\G{h}}(\nu)\cdot \mathbf{k})+(\mu\cdot \mathbf{k})\Upsilon^{\G{h}}(\nu)+\nu \times \Upsilon^{\G{h}}(\nu), \\
&\dot{\nu}-\nu\Big(\mathbf{k}\cdot \frac{\partial \mathcal{H}}{\partial \nu}\Big)+\Big(\nu \cdot \frac{\partial \mathcal{H}}{\partial \nu}\Big)\mathbf{k}-\frac{\partial \mathcal{H}}{\partial \mu}\times \nu -(\mu \cdot \mathbf{k})\frac{\partial \mathcal{H}}{\partial \mu}+\Big(\mu \cdot \frac{\partial \mathcal{H}}{\partial \mu}\Big)\mathbf{k} = (\nu \cdot \Upsilon^{\G{h}}(\nu))\mathbf{k}-\nu(\mathbf{k}\cdot \Upsilon^{\G{h}}(\nu))\\
& \hspace{9cm}- \Upsilon^{\G{h}}(\nu) \times \nu +(\mu \cdot \Upsilon^{\G{g}}(\mu))\mathbf{k}-(\mu \cdot \mathbf{k})\Upsilon^{\G{g}}(\mu).
\end{split}
\end{equation}
\textbf{Cartan-Killing dissipation.} We determine the matched Cartan-Killing metric \eqref{CM-MP} to be
\begin{equation}
[\C{G}_{ij}]=\begin{bmatrix} -4 & 0 & 0 & 0 & -5 & 0\\ 0 & -4 & 0 & 5 & 0 & 0\\ 0 & 0 & -4 & 0 & 0 & 0\\ 0 & 5 & 0 & -2 & 0 & 0\\ -5 & 0 & 0 & 0 & -2 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix}.
\end{equation}
Then, the irreversible dynamics generated by a function $\C{S}$, presented in (\ref{eqofmoofcart}), is computed to be
\begin{equation}
\begin{split}
&\dot{\mu}_1=-4\frac{\partial \mathcal{S}}{\partial \mu_1}+5\frac{\partial \mathcal{S}}{\partial \mu_2}, \hspace{1cm} \dot{\mu}_2=-4\frac{\partial \mathcal{S}}{\partial \mu_2}-5\frac{\partial \mathcal{S}}{\partial \mu_1}, \hspace{1cm} \dot{\mu}_3=-4\frac{\partial \mathcal{S}}{\partial \mu_3},\\
&\dot{\nu}_1=-5\frac{\partial \mathcal{S}}{\partial \nu_2}-2\frac{\partial \mathcal{S}}{\partial \nu_1}, \hspace{1.1cm} \dot{\nu}_2=5\frac{\partial \mathcal{S}}{\partial \nu_1}-2\frac{\partial \mathcal{S}}{\partial \nu_2}, \hspace{1.38cm} \dot{\nu}_3=\frac{\partial \mathcal{S}}{\partial \nu_3}.
\end{split}
\end{equation}
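As in the Heisenberg example, the reversible part of the dynamics can be sanity-checked numerically: along the matched rigid body equations above, any Hamiltonian $\mathcal{H}$ is conserved. A minimal sketch with an illustrative diagonal ``inertia'' Hamiltonian of our own choosing:
\begin{verbatim}
import numpy as np

k = np.array([0.0, 0.0, 1.0])
I1 = np.array([1.0, 2.0, 3.0])    # illustrative inertia-like weights (ours)
I2 = np.array([2.0, 1.0, 4.0])

def H(mu, nu):                    # H = mu.mu/(2 I1) + nu.nu/(2 I2)
    return 0.5 * np.dot(mu, mu / I1) + 0.5 * np.dot(nu, nu / I2)

def rhs(z):
    mu, nu = z[:3], z[3:]
    a, b = mu / I1, nu / I2       # grad_mu H, grad_nu H
    mudot = (np.cross(a, mu) + np.dot(b, k) * mu
             - np.dot(mu, k) * b - np.cross(nu, b))
    nudot = (np.dot(k, b) * nu - np.dot(nu, b) * k + np.cross(a, nu)
             + np.dot(mu, k) * a - np.dot(mu, a) * k)
    return np.concatenate([mudot, nudot])

z, dt = np.array([0.4, -0.1, 0.7, 0.2, 0.5, -0.3]), 1e-3
H0 = H(z[:3], z[3:])
for _ in range(10000):            # plain RK4 on the pair (mu, nu)
    k1 = rhs(z); k2 = rhs(z + dt/2*k1)
    k3 = rhs(z + dt/2*k2); k4 = rhs(z + dt*k3)
    z = z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
print(abs(H(z[:3], z[3:]) - H0))  # ~ 0 up to integrator error
\end{verbatim}
Adding the Rayleigh or Cartan-Killing terms above to \texttt{rhs} then lets one monitor how the same $\mathcal{H}$ ceases to be conserved under dissipation.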
\section{Acknowledgments} This paper is a part of the project ``Matched pairs of Lagrangian and Hamiltonian Systems'' supported by T\"UB\.ITAK (the Scientific and Technological Research Council of Turkey) under the project number 117F426. All of the authors are grateful to T\"UB\.ITAK for the support. The first named author (OE) is grateful to Prof. Miroslav Grmela, Prof. Partha Guha, Prof. Michal Pavelka, and Prof. Petr Vágner for enlightening discussions on GENERIC. We are also grateful to Begüm Ateşli for discussions on extension theories.
\bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }